Antibodies in Research: The Good, the Bad, and the Validation Epidemic

Antibody binding specificity is critical across many research disciplines, yet sourcing the best antibody for a given experiment can be a challenge. This is partly because not all suppliers validate their antibodies sufficiently. How much of a problem is this?

To address this question, we first need to define what properties constitute a “good” or “bad” antibody. At minimum, a “good” antibody is one that specifically detects its intended target in a particular assay and is consistent from experiment to experiment or batch to batch. To some degree, all antibodies display off-target (non-specific) binding properties, which can be exacerbated by how they are used (application, protocol, dilution).

“Bad” antibodies, on the other hand, are those that are non-specific (they fail to detect the intended target, or cross-react with unintended targets) or are not “fit for purpose” (they do not work in the intended application). It is critical to understand that antibodies are not inherently “good” or “bad”; rather, how they are screened, validated and used determines their efficacy in research applications.

Typically, antibodies are initially developed by immunizing an animal with an antigen (peptide, protein, whole cell, small molecule, etc.). If done properly, the animal will most likely produce antibodies that will detect all or part of the immunogen, assuming it is sufficiently antigenic to elicit an immune response. Polyclonal antibodies, with rare exception, require purification against the antigen before use to remove serum proteins and pools of non-specific immunoglobulins.

Monoclonal antibodies are typically isolated from B-cell clones selected on their ability to detect the antigen. If screened and selected properly, these antibodies should require minimal purification.

Recombinant antibodies are produced by transfecting genes encoding the heavy and light chains of a monoclonal antibody into a mammalian expression system; like monoclonals, they require minimal purification to remove cellular contaminants.

It is therefore important to understand that almost all antibodies, “good” and “bad”, will recognize the antigen against which they were raised. The more significant question is: are they sufficiently sensitive, specific and functional to detect the intended target within a complex mixture of other biomolecules in the desired assay?

Certainly, there are antibodies on the market that have been subject to little, if any, testing to confirm specificity and functionality. For example, an antibody selected solely on the basis of an ELISA is unlikely to work in immunohistochemistry, owing to significant differences in how the antigen is presented. However, if all you need is an ELISA antibody, then validation by ELISA might be sufficient.

Unfortunately, some vendors screen their antibodies by one assay and claim functionality in others. Similarly, the specificity of an antibody can be impacted by the conditions in which it is used – including the application, buffer conditions, protocols, dilution and incubation time.

An antibody that has been tested and validated for western blot should not be assumed to be equally specific and functional in immunocytochemistry or flow cytometry. Similarly, an antibody that works for immunocytochemistry in formalin-fixed cells may behave entirely differently under alcohol-fixed conditions. Verification of the antibody MUST be performed independently for every application and protocol, regardless of the source, to ensure specificity and sensitivity.

This antibody problem begins with the sheer number of antibodies available for research purposes. We, and others, estimate that there are over 4.3 million different antibodies on the Research Use Only (RUO) market from over 200 suppliers.

Well-studied protein targets may have thousands of antibodies available, whereas the least well-characterized proteins may have only one or two. Even accounting for the need for site-specific antibodies to measure phosphorylation and other post-translational modifications, as well as antibody conjugates and other derivatives, there are still many antibodies available for the most common targets. To make matters worse, many vendors sell the SAME antibody.

It is likely that only a fraction of the antibodies available for any given target are “good” antibodies – that is, target-specific and sensitive enough to detect endogenous signal in at least one application. The problem is that too many antibodies on the market are either non-specific (detecting other targets in addition to, or instead of, the intended target) or not fit for purpose (not sensitive enough to detect endogenous signal, or not functional in the intended application). Even the best antibody, when used incorrectly, can yield incorrect results.
