Insight: How Algorithms Are Revolutionizing Liquid Chromatography With Dr Bob Pirok

Liquid chromatography (LC) is an essential technique in many analytical labs. Whilst a fully tuned chromatography workflow can be an invaluable tool, the technique comes with a lengthy development process that has to be repeated and tweaked for each new sample. At the Van ‘t Hoff Institute for Molecular Sciences at the University of Amsterdam, Dr Bob Pirok and his team use informatics techniques to make LC more automated and pain-free. Technology Networks recently spoke to Pirok to find out more about his innovative algorithms.
Ruairi Mackenzie (RM): What problems exist with current models employed in analyte retention modeling and what are the knock-on effects?
Bob Pirok (BP): The problem does not center that much on the models, but on the way we employ them to predict and optimize separations. When facing a new sample, chromatographers currently undergo a method-development process in which they tweak and tailor an array of parameters to achieve better results. Depending on the complexity of the sample, this process can become rather lengthy. This is particularly true when two-dimensional separations are used, where it may easily take several months to develop the method.
One method to combat this has been the use of retention modelling. These models relate the mobile-phase composition to the retention factor. More simply put, they describe how the retention time, the time it takes for a molecule to migrate through the chromatographic column, changes as a function of method parameters (e.g. gradient length, gradient steepness, organic-modifier concentration in reversed-phase liquid chromatography, etc.). Successful retention modelling culminates in several analyte-specific parameters that fully account for the retention behavior of that combination of mobile phase and stationary phase (i.e. you have to repeat this process for a different column).
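As one concrete, textbook example of such a model (not specific to Pirok's work), the widely used linear solvent strength (LSS) relationship for reversed-phase LC expresses the retention factor k as a function of the organic-modifier volume fraction φ:

ln k = ln k_w − S·φ

Here k_w (the extrapolated retention factor in pure water) and S (the analyte's sensitivity to the modifier) are the kind of analyte-specific parameters that retention modelling aims to recover.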
To this point it sounds very abstract. How does this help speed up method development? Having these parameters gives us a huge advantage. We can now tell a computer to predict the separation for an extremely large number of possible methods. Within minutes the computer will produce simulated separations of that same sample. Using criteria such as peak capacity, resolution and analysis time, the algorithm can then select the optimal method. This procedure works for both one-dimensional (1D) and two-dimensional (2D) separations and reduces the method-development process to mere hours or days.
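To make the idea of scanning many candidate methods in silico more tangible, here is a minimal sketch using the LSS model above. The analyte parameters, dead time and time limit are invented for illustration, and the example scans isocratic modifier fractions rather than gradients; it is not the group's software.

```python
import numpy as np

# Hypothetical LSS parameters (ln k_w, S) for three analytes; values are illustrative only.
analytes = {"A": (6.0, 12.0), "B": (6.4, 13.0), "C": (7.1, 15.5)}
t0 = 1.0  # column dead time in minutes (assumed)

def retention_times(phi):
    """Predict isocratic retention times for a given organic-modifier fraction phi (LSS model)."""
    return np.array([t0 * (1.0 + np.exp(lnkw - S * phi)) for lnkw, S in analytes.values()])

best = None
for phi in np.linspace(0.20, 0.80, 601):      # scan candidate methods
    tr = np.sort(retention_times(phi))
    spacing = np.min(np.diff(tr))             # crude separation criterion: smallest gap between peaks
    if tr[-1] <= 30.0 and (best is None or spacing > best[1]):
        best = (phi, spacing, tr)

phi_opt, spacing, tr = best
print(f"best modifier fraction: {phi_opt:.3f}, predicted retention times (min): {np.round(tr, 2)}")
```

A real optimizer would simulate gradient methods and score them with proper resolution or peak-capacity criteria, but the principle of predicting thousands of separations and picking the best one is the same.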
One key component missing from the procedure is the data required to construct the retention model in the first place. The classic and robust approach is to measure the retention factor of an analyte under a number of different isocratic (i.e. constant) organic-modifier concentrations and fit a curve through these points. This is, however, a cumbersome process that has several disadvantages. First of all, you need to do this for every single analyte that you intend to optimize. For isocratic separations this requires a lot of time. Secondly, this procedure is only effective if you know what you are separating, yet in practice we rarely do. What I mean here is that the likelihood that you will (i) select the appropriate conditions to probe your analyte mixture, (ii) obtain any meaningful separation suitable for modelling in (iii) a reasonable amount of time is low. This, to me, is the reason why retention modelling has largely remained an academic endeavor. Consequently, our projects, which are almost exclusively funded by industry, often focus on breaking this barrier.
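For readers who want to see what that classic fitting step looks like in practice, here is a small sketch using made-up isocratic measurements and the LSS model from above; the numbers and starting guesses are assumptions, not real data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical isocratic measurements: organic-modifier fraction phi vs measured retention factor k.
phi = np.array([0.30, 0.40, 0.50, 0.60])
k = np.array([40.4, 13.5, 4.5, 1.5])

def lss(phi, lnkw, S):
    """Linear solvent strength model: k = exp(ln k_w - S * phi)."""
    return np.exp(lnkw - S * phi)

(lnkw_fit, S_fit), _ = curve_fit(lss, phi, k, p0=(7.0, 10.0))
print(f"fitted ln k_w = {lnkw_fit:.2f}, S = {S_fit:.2f}")
```

The catch Pirok describes is that this fit has to be repeated for every analyte, under conditions you can only choose well if you already know what is in the sample.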
Alternatively, gradients can be used. As is known from chromatographic theory, gradient separations hold a large number of advantages (e.g. fast, high-resolution separations, etc.). Yet the data obtained from these experiments is less reliable and thus not favored in academia.
RM: Why is it important to predict and monitor gradient deformation during LC?
BP: One reason why data obtained using gradients is difficult to employ is the uncertainty surrounding the exact mobile-phase composition experienced by the analyte as it migrates through the column. Chromatographers using gradients are already used to measuring the gradient-delay volume (i.e. dwell volume) in order to correct their data. There is, however, much more happening with the gradient inside the chromatographic system. As a result of an array of effects, including the geometric pump volumes, mixing processes, solvent miscibility, pressure and solvatochromic effects, the effective gradient rarely precisely resembles the programmed gradient.
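A toy simulation can help picture what such deformation looks like. The sketch below delays and smooths a programmed linear gradient using an assumed dwell time and a simple first-order mixing model; the numbers are invented and the model is far simpler than a real instrument.

```python
import numpy as np

dt = 0.01                           # time resolution (min)
t = np.arange(0.0, 30.0, dt)

# Programmed linear gradient: 5% to 95% organic modifier over 20 min, then hold.
programmed = np.clip(5.0 + (95.0 - 5.0) * t / 20.0, 5.0, 95.0)

# Assumed instrument behaviour: a dwell delay plus first-order mixing (values are made up).
dwell_time = 1.5                    # dwell volume divided by flow rate (min)
tau = 0.4                           # mixing time constant (min)

response = np.exp(-t / tau)
response /= response.sum()          # normalized impulse response of the mixing volume

# Effective gradient at the column inlet: a delayed, smoothed version of the program.
delayed = np.interp(t - dwell_time, t, programmed, left=programmed[0])
effective = np.convolve(delayed, response, mode="full")[: len(t)]

print(f"%B at t = 10 min: programmed {programmed[1000]:.1f}, effective {effective[1000]:.1f}")
```

Even this crude model shows the composition reaching the column lagging several percentage points behind the programmed value at any given moment.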
In practice this is rarely noticed, and the deformations are often not significant enough for the user to worry about. Yet in retention modelling, where the experienced gradient is related to the elution time of the analyte, this will induce a significant error in the resulting analyte parameters. This is particularly a problem during method transfer and when using data on other systems.
RM: How does your algorithm improve this process? What parameters does it take into account?
BP: Together with Agilent we have investigated methods to correct for this problem. Young PhD candidate Tijmen Bos (VU University Amsterdam), with the assistance of PhD candidates Stef Molenaar, Leon Niezen and Mimi den Uijl, developed an algorithm which aims to reduce this error. In a nutshell, the algorithm mathematically assesses experimental dwell profiles measured on the instrument and uses them for correction. The result is a system-specific response function which acts as a fingerprint. This fingerprint comprises the pump flow profile, the dwell volume, the flow rate and a number of instrument descriptors. In our publication, we demonstrated the significance of the error and showed how the algorithm helps to reduce it.
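To give a flavour of what a system-specific response function could look like in code, here is a conceptual sketch; it is emphatically not the published algorithm, and the function names and processing steps are illustrative assumptions. It derives a normalized response from a measured step (dwell) profile and uses it to predict how any programmed gradient will be reshaped by that particular instrument.

```python
import numpy as np

def response_from_dwell_profile(t, measured_step):
    """Turn a measured step (dwell) profile into a normalized system response.

    measured_step could be a detector trace recorded after programming a step change
    in a tracer; its derivative approximates the system's impulse response.
    """
    impulse = np.gradient(measured_step, t)
    impulse = np.clip(impulse, 0.0, None)    # suppress negative noise in the derivative
    return impulse / impulse.sum()           # normalize so the response conserves area

def predict_effective_gradient(t, programmed, impulse):
    """Convolve a programmed gradient with the system response ('fingerprint')."""
    return np.convolve(programmed, impulse, mode="full")[: len(t)]
```

The key idea is that once the fingerprint is known for a given system, any programmed gradient can be corrected before it is fed into the retention model.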
RM: What will these innovations mean for data sharing between LC instruments?
BP: The impact for data sharing between LC instruments is two-fold. The first relates to the simple consequence that the algorithm will help to transfer optimized methods to other systems. From an academic perspective there is, however, a second important consequence. While retention modelling has been in development for quite some time already, challenges still lie ahead before it can robustly facilitate automated method development. This is not something we do exclusively in Amsterdam. Scientists from different groups across the globe publish pieces of the full puzzle in our collaborative quest for automating method development. In doing so, we often publish retention parameters for the various analytes which we investigate. One of our main messages is that for gradients, these values are useless without the accompanying dwell measurements. We therefore encourage researchers to share their experimental data concerning the dwell volume of the pumps.
We are currently continuing this work, now focusing on embedding the mobile-phase compositional solvatochromic effects as well as investigating the magnitude of the impact of deformation as a function of molecular weight.
RM: How will algorithms of this kind help to automate LC?
BP: Gradient deformation is merely one of the challenging pieces of the puzzle in our quest to automate method development. Other parts include selectivity screening, finding suitable optimization criteria, and developing data-analysis strategies to automatically interpret the complex chromatograms used as input data for retention modelling. Ultimately, having the ability to accurately simulate and evaluate 1D and 2D separations saves us 95% of the time and resources required to develop a method.
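As one small, hypothetical example of that data-analysis side, automated peak detection on a chromatogram can be prototyped in a few lines with generic signal-processing tools; the data below are synthetic and this is not the group's pipeline.

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic chromatogram: three Gaussian peaks on a noisy baseline (illustrative only).
t = np.linspace(0.0, 10.0, 2000)
peaks_spec = [(1.0, 2.5, 0.05), (0.6, 4.1, 0.06), (0.8, 7.3, 0.05)]   # height, position, width
signal = sum(h * np.exp(-((t - c) / w) ** 2) for h, c, w in peaks_spec)
signal = signal + np.random.default_rng(0).normal(0.0, 0.005, t.size)

idx, props = find_peaks(signal, height=0.1, prominence=0.05)
print("detected retention times (min):", np.round(t[idx], 2))
```

Real chromatograms are of course far messier, with overlapping and asymmetric peaks, which is precisely why robust automated interpretation remains an open piece of the puzzle.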
Our goal is to reach the moment where we simply put our sample in the autosampler of the LC and have the system independently develop and recommend a method to the scientist.
Dr Bob Pirok was speaking to Ruairi J Mackenzie, Science Writer for Technology Networks