Systematic error can have a negative impact on the hit selection process during High Throughput Screening (HTS), as it can obscure important biological and chemical properties of screened compounds. To combat this, a number of error correction methods have been developed; however, these methods can introduce a bias into data sets that do not contain any systematic error. Therefore, scientists from two Canadian universities, the Université du Québec and McGill University, have tested three statistical procedures to assess the presence of systematic error in experimental HTS data.
They discovered that the presence of systematic error in experimental HTS assays can be successfully assessed using a t-test.
High-throughput screening (HTS) is a key part of the drug discovery process during which thousands of chemical compounds are screened and their activity levels measured in order to identify potential drug candidates (i.e., hits). Many technical, procedural or environmental factors can cause systematic measurement error or inequalities in the conditions in which the measurements are taken.
Such systematic error has the potential to critically affect the hit selection process. Several error correction methods and software packages have been developed to address this issue in the context of experimental HTS [1-7]. Despite their power to reduce the impact of systematic error when applied to error-perturbed datasets, those methods have one disadvantage: they introduce a bias when applied to data not containing any systematic error. Hence, we first need to assess the presence of systematic error in a given HTS assay and then apply a systematic error correction method if and only if the presence of systematic error has been confirmed by statistical tests.
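The assess-then-correct workflow described above can be sketched in code. The snippet below is an illustrative example only, not the authors' published procedure: it uses a Welch two-sample t-statistic to compare each plate row against the rest of a simulated plate, flagging a row only when the statistic is large. The plate dimensions, the simulated error, and the helper name `row_t_statistic` are all assumptions made for the demonstration.

```python
import random
import statistics

def row_t_statistic(plate, row):
    """Welch t-statistic comparing one row's measurements to the rest of the plate.
    Illustrative helper, not the method from the paper."""
    row_vals = plate[row]
    rest = [v for r, vals in enumerate(plate) if r != row for v in vals]
    m1, m2 = statistics.mean(row_vals), statistics.mean(rest)
    v1, v2 = statistics.variance(row_vals), statistics.variance(rest)
    n1, n2 = len(row_vals), len(rest)
    return (m1 - m2) / ((v1 / n1 + v2 / n2) ** 0.5)

# Simulate an 8x12 plate of noise-only measurements,
# then add a systematic additive error to row 3.
random.seed(0)
plate = [[random.gauss(0, 1) for _ in range(12)] for _ in range(8)]
plate[3] = [v + 2.0 for v in plate[3]]  # hypothetical row-level systematic error

stats = [abs(row_t_statistic(plate, r)) for r in range(8)]
flagged = max(range(8), key=lambda r: stats[r])
print(flagged)  # the row with the strongest evidence of systematic error
```

In practice, one would compare each statistic against a critical value at a chosen significance level (with a multiple-testing correction across rows and columns) before deciding whether to run any error correction at all.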
The paper, entitled 'Systematic error detection in experimental high-throughput screening', is freely available online through BMC Bioinformatics.