In Vitro Integrity: Maximizing the Value of Plate-Based Assay Data
Industry Insight


In vitro screening assays utilize a wide variety of instruments that output data in different formats. To maximize the value of that data, more standardized analysis is needed. Genedata’s Screener software, which was updated to version 15 earlier this year, aims to streamline data workflows and make analysis easier. We caught up with Dr. Oliver Leven, Head of Genedata Screener Business Europe, to discuss the problems with difficult-to-integrate data and how researchers can make data more standardized, accessible and shareable whilst not compromising on security and integrity.


Ruairi Mackenzie (RM): Is non-standard, difficult-to-integrate data damaging research?

Oliver Leven (OL): In answering this question, I’d like to define what we mean by data integration, as two types of integration challenges are often confused: technical integration and semantic integration. Technical integration simply means that I can access and combine data, while semantic integration means that I can understand and work with the integrated data. For example, let’s compare two Word documents, one written by a lawyer, the other by a medical researcher. While the documents can be easily exchanged and combined via technology such as email or a file sharing service, they are written by different subject matter experts who have different perspectives and use different language. Therefore, the depth and meaning of a document may not be fully understood by the other party. In some cases, depending on the context, the same word may even have completely different meanings.

For Genedata, the technical integration is a small matter and we focus our energies on streamlining semantic integrations. We ensure that all entities produced by our software have a well-defined meaning, either implicitly or by annotating them in a controlled fashion. A full tracking record of results and their annotations, together with the underlying data, processing methods, and user input, documents this semantic context. This actually is a big deal – it may be very easy to transfer a result from one system to another, but if you lose the context information and source, you may also lose the result’s meaning and value. So, difficult-to-integrate data can spoil research initiatives, because overarching conclusions cannot be drawn. That is why Genedata focuses on helping researchers to smoothly integrate data semantically.


RM: What can researchers do to standardize their data to make it more accessible?

OL: Standardization helps to make data easily accessible. Standardized data formats and semantics enable scientists to easily identify and extract relevant information. Standardized methodologies allow scientists to better comprehend results generated by others. Therefore, individual scientists should make sure they publish their results together with the methods used to generate them, the full metadata and access to the underlying raw data. This can be achieved by using a single system for all data analyses, which enables standardized result generation and sharing. Our Genedata Screener platform gives scientists a single system that advances standardized data semantics and practices across an R&D organization. Streamlining all screening data analysis, Screener gives researchers standardized workflows for each type of analysis, making results and metadata accessible downstream in the corporate data warehouse. This way, results are fully comparable and easily accessible for all researchers and decision makers in the organization.


RM: How can Screener 15 make heterogeneous, plate-based assay data easier to integrate?

OL: Screener processes plate data from any instrument, including the underlying raw data (kinetic traces, cell population data, etc.), and allows scientists to go back from end results to this raw data as needed – e.g. for reviewing it and, if need be, adapting the process to gain improved results. While Screener 15 extends the number of instruments that are ready to run off the shelf, Screener’s APIs allow the integration of any instrument generating plate-based data.
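To illustrate the general idea of normalizing heterogeneous instrument exports into one common plate representation, here is a minimal Python sketch. All names and fields here are assumptions for illustration only – this is not Genedata Screener's actual API or data model.

```python
from dataclasses import dataclass, field

# Hypothetical common record for one microtiter plate: raw readouts per well,
# plus the metadata (assay, protocol, operator, ...) that preserves semantic
# context when the data moves between systems.
@dataclass
class PlateRecord:
    plate_id: str
    instrument: str
    wells: dict = field(default_factory=dict)     # well label -> raw readout
    metadata: dict = field(default_factory=dict)  # assay context annotations

def from_reader_export(plate_id, instrument, rows, **metadata):
    """Normalize one instrument's (well, value) export rows into the record."""
    wells = {well: float(value) for well, value in rows}
    return PlateRecord(plate_id, instrument, wells, metadata)

# Different instruments produce different export formats, but each gets its
# own small adapter that yields the same in-memory representation:
plate = from_reader_export("P-0001", "ReaderA",
                           [("A1", "0.82"), ("A2", "0.11")],
                           assay="kinase inhibition")
print(plate.wells["A1"])           # 0.82
print(plate.metadata["assay"])     # kinase inhibition
```

Because every adapter targets the same record, downstream analysis code never needs to know which instrument produced the plate.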


RM: You’ve recently released some new updates with Screener 15, particularly for mechanism-of-action (MoA) studies. Why did you focus on MoA and how can Screener 15 benefit researchers in this area?

OL: We recognize that early inclusion of mechanistic information is key for modern drug discovery programs. Kinetic information in particular increases the yield of high-quality lead candidates through better candidate selection and accelerates their progression by instilling higher confidence in their potential. Genedata Screener for Mechanistic Analysis provides this information as it enables scalable processing of biophysical and mechanistic assays in high throughput. Screener imports raw kinetic data from instruments and analyzes binding curves in a single workflow. Data loads and processes in seconds. It’s pretty exciting that this system can analyze 500 compounds in the same amount of time previously required for 5 compounds. We are effectively eliminating spreadsheets and manual analysis, which we believe can save up to 80% of scientists’ current data handling and routine analysis time for these experiments. This enables researchers to shift focus towards improving experimentation and decision-making in the discovery process.


RM: Data integrity and security will be paramount, especially if data becomes more shareable and accessible. How can researchers easily keep on top of their own data management? 

OL: Preventing data corruption or destruction and barring unauthorized people from accessing data are basic tenets of ensuring data integrity and security. The Genedata Screener platform supports those basic tenets with stringent data integrity and security management. Genedata Screener stores data in a highly structured database where research data are stored with and linked to their metadata, source and method information, which eliminates the issue of broken relations. Data sets are fully versioned, so that a full audit trail is available and user errors can be recovered from. Scientists can stay on top of their research as all data are stored in one central location where source data and analysis setups are easily viewed.
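The combination of versioning and an audit trail described above can be sketched generically as an append-only store: corrections create new versions rather than overwriting old ones, and each version records who changed what and when. This is a hypothetical illustration of the concept, not Genedata Screener's implementation.

```python
from datetime import datetime, timezone

# Hypothetical append-only versioned result: old versions are never
# overwritten, so the full history survives user errors and corrections.
class VersionedResult:
    def __init__(self):
        self._versions = []

    def save(self, value, user):
        """Append a new version with its author and timestamp."""
        self._versions.append({
            "version": len(self._versions) + 1,
            "value": value,
            "user": user,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

    def latest(self):
        return self._versions[-1]["value"]

    def audit_trail(self):
        """Who produced each version, in order."""
        return [(v["version"], v["user"]) for v in self._versions]

result = VersionedResult()
result.save(4.2, "alice")
result.save(4.5, "bob")      # a correction; version 1 remains recoverable
print(result.latest())       # 4.5
print(result.audit_trail())  # [(1, 'alice'), (2, 'bob')]
```

Recovery from a mistaken edit is then just a matter of reading an earlier version back out of the history.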

Genedata Screener has multi-tiered data access management, providing access only to authorized individuals and groups in projects. Depending on authorization and access roles, researchers can archive data or report results to corporate warehouses. Screener also gives options for secure exchange of data with external scientists, in which access rights are tightly controlled, so that standardized procedures and streamlined workflows can be extended to external collaborations.

Oliver Leven was speaking to Ruairi J Mackenzie, Science Writer for Technology Networks

Meet The Author
Ruairi J Mackenzie
Senior Science Writer