Q&A Report: Rethink Your Impurity Analysis Strategy

In this webinar, Dr. Ejvind Mørtz discusses the benefits and applications of mass spectrometry (MS)-based host cell protein (HCP) analysis and why HCP ELISAs are insufficient for documenting process-related impurities. Watch to learn where to apply the assay in a typical process and what you gain from adding it to your workflow.

These answers were provided by:

Ejvind Mørtz, PhD
COO and Co-Founder
Alphalyse

Do you have any recommendations for software to do the data analyses? Are there any publications about SWATH DIA?

Different instrument vendors sell their own software, and there are also independent providers that develop analysis software. These packages all work in different ways. For protein identification, the common recommendation across all of them is that a protein should be identified with at least two peptides to count as a positive identification.
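
As a minimal sketch of that two-peptide rule, the short Python snippet below (the data layout and accessions are invented for illustration, not any vendor's format) keeps only proteins identified by at least two distinct peptides:

```python
# Minimal sketch of the two-peptide identification rule.
# The input layout and accessions are invented for illustration.
MIN_PEPTIDES = 2

# protein accession -> set of identified peptide sequences
identifications = {
    "HCP_0001": {"LSSPATLNSR", "VATVSLPR"},
    "HCP_0002": {"GYSFTTTAER"},  # single-peptide hit, excluded below
    "HCP_0003": {"SSGTSYPDVLK", "IITHPNFNGNTLDNDIMLIK"},
}

confident = {
    acc: peps
    for acc, peps in identifications.items()
    if len(peps) >= MIN_PEPTIDES
}

for acc, peps in sorted(confident.items()):
    print(f"{acc}: {len(peps)} peptides -> positive identification")
```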

Regarding quantification, approaches vary more from lab to lab. We use intact protein standards because they account for all of the sample preparation variability. Other organizations use isotope-labeled synthetic peptides, but these do not account for digestion and MS sample preparation. Others even use drug substance peptides as internal calibrants. So, there is no plug-and-play solution yet that you can just buy and use.

What is the ppm range for the linearity curve?

The detection limit of this method goes down to the low ppm range. Typically, we can identify and quantify down to between one and ten parts per million (ppm) for purified products. Some are even sub-ppm thanks to a special sample preparation method for monoclonal antibody products (native digest).

Other types of products, such as vaccines, are cruder. These samples contain a large amount and number of HCPs; in some of the COVID-19 vaccines, for example, there is an almost equal amount of HCP and drug substance protein.

The upper range goes up to hundreds of thousands of ppm of HCP, depending on the product. The assay has a linear range of about four orders of magnitude.
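
For orientation, ppm here is mass-based: one nanogram of HCP per milligram of drug substance equals one ppm. A tiny sketch with invented values:

```python
# ppm is mass-based here: 1 mg = 1,000,000 ng, so ng HCP per mg drug
# substance equals ppm directly. The values are invented for illustration.
def hcp_ppm(ng_hcp: float, mg_drug_substance: float) -> float:
    return ng_hcp / mg_drug_substance

print(hcp_ppm(5.0, 1.0))        # 5 ppm: a typical purified product
print(hcp_ppm(500_000.0, 1.0))  # 500,000 ppm: HCP ~ drug substance amount
```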

How do you assess accuracy using the median standard curve approach when it encompasses two orders of magnitude?

We evaluate the accuracy on each of the seven standards. We take the median response curve and then look at each standard's deviation from it: how close is each one to the median response curve?

If you want higher accuracy than this twofold range, we can look at one specific protein of concern and do a more accurate quantification. This requires that you have that protein as a purified standard. Then we can make an exact calibration and response curve for that protein. It could be, for example, an enzyme that you add during your manufacturing process, such as Benzonase.

So, there are different ways to make it more accurate. But with our generic method, which is immediately applicable to all types of samples, we use the median response curve to quantify the wide range of HCPs.
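
A rough sketch of that accuracy check, with invented numbers: compute each spiked standard's response factor, take the median as the generic response curve, and report each standard's fold-deviation from it.

```python
import statistics

# Invented spike-in data: protein -> (spiked amount in ng, MS signal)
standards = {
    "std_A": (10.0, 2.1e6),
    "std_B": (10.0, 1.6e6),
    "std_C": (10.0, 2.9e6),
    "std_D": (10.0, 1.2e6),
    "std_E": (10.0, 2.4e6),
    "std_F": (10.0, 3.8e6),
    "std_G": (10.0, 1.0e6),
}

# Response factor = signal per ng; the median defines the generic curve.
responses = {name: signal / ng for name, (ng, signal) in standards.items()}
median_response = statistics.median(responses.values())

for name, resp in sorted(responses.items()):
    fold = max(resp, median_response) / min(resp, median_response)
    verdict = "within" if fold <= 2.0 else "outside"
    print(f"{name}: {fold:.2f}-fold from median ({verdict} twofold)")
```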

Did you create in-house DDA/DIA libraries for all the expression systems?

First, we create a library from each sample to make sure we do not miss low-abundance HCPs in the purified drug substance. We then run the SWATH analysis against this library and quantify the individual HCPs. This enables very reproducible quantification of low-abundance HCPs. So, we do not have a generic library that we use for all samples across different products and projects; we create the library for each project.

How do you manage identified HCPs that are below the LOQ value?

If an HCP is identified with two peptides, we will report it. We will also report the quantified amount, but we will state that it is below the limit of quantification. The information is still relevant for comparing batches.
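
A minimal sketch of that reporting rule, with invented numbers and an assumed LOQ: hits with at least two peptides are always reported, and amounts under the LOQ are flagged rather than dropped.

```python
# Invented results; the LOQ value is an assumption for the sketch.
LOQ_PPM = 10.0

hits = [
    {"protein": "lipase_X", "peptides": 3, "ppm": 0.8},
    {"protein": "HCP_0042", "peptides": 2, "ppm": 35.0},
    {"protein": "HCP_0099", "peptides": 1, "ppm": 4.0},
]

for hit in hits:
    if hit["peptides"] < 2:
        continue  # fails the two-peptide identification rule
    flag = " (below LOQ)" if hit["ppm"] < LOQ_PPM else ""
    print(f"{hit['protein']}: {hit['ppm']} ppm{flag}")
```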

It could be a problematic HCP. Many mAb products contain lipases that can degrade Tween-20 even below one ppm. So, you still want to know if such a protein is present, even though it is below the 10 ppm LOQ.

We currently use an HCP ELISA and will need to bridge to new reagents next year. Can LC-MS be applied for this?

The short answer is yes. The long answer is that to bridge to a new ELISA, you must do an HCP coverage analysis: compare the old ELISA to the new ELISA and determine which impurities each one detects. Are there differences? What are the differences? We also recommend using LC-MS analysis to get the identities of the proteins that are and are not covered by your ELISA.

You also need to know which HCPs are present in the HCP standard used for calibration. Does it match the HCPs in your early process sample?

You also want to know the individual impurities in your purified drug substance. Are these impurities detected by the ELISA?

Mass spectrometry can help answer all of those questions. So, we see mass spectrometry not as competing against ELISA, but as a supporting tool for selecting the best ELISA reagents for your product and your impurities, and for documenting that the ELISA is fit for purpose for your clinical trials.
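
To make the coverage question concrete, here is a hypothetical sketch (all accessions invented) that compares the set of MS-identified HCPs against the HCPs each ELISA reagent set covers:

```python
# Hypothetical coverage comparison; all accessions are invented.
ms_identified = {"HCP_001", "HCP_002", "HCP_007", "lipase_X"}
old_elisa = {"HCP_001", "HCP_002", "HCP_003"}
new_elisa = {"HCP_001", "HCP_003", "HCP_007"}

for name, covered in [("old ELISA", old_elisa), ("new ELISA", new_elisa)]:
    detected = ms_identified & covered
    missed = ms_identified - covered
    print(f"{name}: covers {len(detected)}/{len(ms_identified)} "
          f"MS-identified HCPs; missed: {sorted(missed)}")
```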

How do MS HCP levels compare to ELISA levels (based on your experience)?

If you have a very good ELISA, where the standard fits your process and covers most of the impurities, then the number you get from the ELISA matches the number from mass spectrometry quite well. ELISA is based on antibodies raised in animals, but some of the antigens (HCPs) are not immunogenic in the animal and might not be detected. Mass spectrometry, on the other hand, is a more generic method: proteins are cleaved into peptides, and the peptides hit a detector, so it measures more of the proteins. We then use the spiked standards to quantify the amounts, which makes the method accurate within the twofold range.

Are there any difficult sample matrices for the LC-MS setup?

We have optimized this sample preparation over many years to make it as robust as possible, but we still sometimes see matrix effects. For example, if a sample contains glycerol, some proteins may adhere to the glycerol. You can also have samples with denaturing reagents, for example inclusion bodies dissolved in guanidine hydrochloride, which will interfere with the analysis. Detergents can interfere as well. Therefore, we evaluate the performance of the standard proteins in each individual sample to see whether there are matrix effects to consider: should we optimize the sample preparation a little, or should we be cautious about the accuracy we report for the assay?
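
A simple sketch of such a per-sample check, with invented recoveries and an assumed acceptance window: flag the sample if a spiked standard's recovery falls outside the expected range.

```python
# Invented recoveries; the twofold acceptance window is an assumption.
EXPECTED_NG = 10.0
ACCEPT_LOW, ACCEPT_HIGH = 0.5, 2.0

measured_ng = {"std_A": 9.1, "std_B": 3.2, "std_C": 11.8}

for name, ng in measured_ng.items():
    recovery = ng / EXPECTED_NG
    ok = ACCEPT_LOW <= recovery <= ACCEPT_HIGH
    print(f"{name}: recovery {recovery:.2f} "
          f"{'OK' if ok else '-> possible matrix effect'}")
```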

Is it also possible to quantify enzymes added during process development?

Yes, it is. And we can do an absolute quantification if you have the enzyme in recombinant or otherwise purified form. This is one of the typical problems in CMC development: if you add enzymes to your product, you must document the clearance of that specific protein to the authorities.

So, in this case, we would develop a specific assay for the protein that you add to the process.
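
As a sketch of such a protein-specific assay (numbers invented; requires Python 3.10+ for statistics.linear_regression), a calibration curve from a purified standard lets you read the amount in a sample off the fitted line:

```python
from statistics import linear_regression  # Python 3.10+

# Invented calibration data for a purified spiked enzyme standard.
cal_ng = [1.0, 5.0, 10.0, 50.0, 100.0]            # spiked amounts
cal_signal = [2.0e4, 1.1e5, 2.0e5, 9.8e5, 2.1e6]  # measured MS signals

slope, intercept = linear_regression(cal_ng, cal_signal)

sample_signal = 4.3e5  # signal of the enzyme in the test sample
ng_estimate = (sample_signal - intercept) / slope
print(f"estimated enzyme amount: {ng_estimate:.1f} ng")
```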

Is it faster to develop and validate an LC-MS assay under GMP compared to a process-specific ELISA?

If you must develop a process-specific ELISA, it will typically take a year and a half and then require validation, so we are talking about a couple of years. For mass spectrometry, e.g. the Bavarian Nordic validation, it took us about four to six months to validate the method and run the GMP release testing. So, mass spec is definitely faster. That said, if an ELISA with sufficient coverage and suitability for your process and drug substance HCPs is already available, it can be just as fast to validate.

What are the reasons for selecting intact proteins for calibration standards?

That is to take the sample preparation into account. Because the intact proteins are added before any treatment, they account for the variability and efficiency of cleavage activity and peptide purification. And we see that by normalizing to the intact proteins, we remove most of the variability. There is no other way to get this kind of normalization.
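
A minimal sketch of that normalization, with invented numbers: pool the observed recoveries of the spiked intact standards and divide each HCP value by the pooled factor, so that preparation losses cancel out.

```python
# Invented recoveries (observed / expected signal) of spiked standards.
standard_recoveries = [0.82, 0.75, 0.90]
norm_factor = sum(standard_recoveries) / len(standard_recoveries)

raw_hcp_ppm = {"HCP_001": 6.3, "lipase_X": 0.7}  # before normalization
for protein, ppm in raw_hcp_ppm.items():
    print(f"{protein}: {ppm / norm_factor:.2f} ppm (normalized)")
```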