What the PCR machine tells us about how to automate the lab
Automation must fit into the scientist's flow
When I think of successful implementations of lab automation, my mind goes immediately to the PCR machine.
Old hands of the molecular biology world will regale you with tales of the ‘good’ old days, when they had to manually dunk their samples in three water baths set to three different temperatures, with an eye on a stopwatch to tell them when to move from one to the next.
The automated PCR machine replaced that, and delivered the holy trinity of lab automation that is so often promised and yet so rarely achieved:
They save the scientist time. Pop the samples in, program the thermocycling conditions, hit go. Two minutes to set up, and a couple of hours of tedious manual work avoided.
They’re more reliable. 20 or 30 cycles of carefully timed incubations are tough to do consistently by hand.
End-to-end, they’re just as fast as working manually, if not faster. Especially if you forgot to pre-heat the water baths!
PCR machines are now completely ubiquitous. This is partly a reflection of the ubiquity of PCR as a technique – but it’s also a success story for lab automation. The first real pipetting robots were released at a similar time, and while humans are still doing most liquid handling in labs, precisely no one is running their PCRs manually.
I think the success of the PCR machine can tell us something about when and how to deploy lab automation today, and what it’ll take to move the field on to its next wave of adoption.
PCR machines fit into a scientist’s workflow: Besides using the right tubes, scientists don’t need to adjust their experiment upstream or downstream of a PCR machine to get value out of it. It’s clear what goes in, and it’s clear what comes out.
They are simple to reprogram: Just punch in the new temperatures and times, and hit go. No training required (the sketch below shows quite how little a ‘program’ amounts to).
They’re trustworthy: PCR machines very, very rarely fail. And a new, untested program doesn’t need trial runs before scientists will trust it with their precious samples. They can rely on the hardware being solid while they’re trying to get their wetware working.
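To make the ‘simple to reprogram’ point concrete, here’s roughly everything a thermocycler program boils down to: a handful of temperatures, hold times and a cycle count. The structure and values below are purely illustrative, not any vendor’s actual interface.

```python
# A hypothetical thermocycler 'program': a few temperatures, hold times and a
# cycle count. Illustrative only -- not any vendor's actual interface.

pcr_program = {
    "initial_denaturation": {"temp_c": 95, "seconds": 120},
    "cycles": 30,
    "per_cycle": [
        {"step": "denature", "temp_c": 95, "seconds": 15},
        {"step": "anneal",   "temp_c": 58, "seconds": 20},
        {"step": "extend",   "temp_c": 72, "seconds": 60},
    ],
    "final_extension": {"temp_c": 72, "seconds": 300},
    "hold": {"temp_c": 4},
}
```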
So how does this match up with liquid handling?
Well, if you take a standalone ‘naive’ liquid handler… it really doesn’t measure up well. Let’s go through each in turn:
Liquid handlers don’t slot into scientists’ workflows: they force their users towards specific labware and tips. This might mean reformatting their samples specifically so they can be used on the robot. Worse, exactly what these constraints are is not obvious to a scientist who isn’t experienced with the system.
Liquid handlers are complex to reprogram: They require learning new, complicated, and often antiquated software. More than that though, they require a completely new way of thinking – describing their science in terms of a series of aspirations and dispenses, liquid classes, and labware definitions, rather than describing it conceptually as they’re used to (see the sketch after this list).
Liquid handlers are not trustworthy: at least not immediately. To get a new protocol working, most require testing on multiple levels – dry runs to check the movements are correct, then more testing with water and then the real reagents to check the liquid handling looks good, with no dribbling or tips clogged by suspended material.
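To illustrate the mental shift in the reprogramming point above, here’s a hypothetical example of the gap between what a scientist means and what a naive liquid handler needs to be told. Every name here is invented for illustration; it isn’t any real vendor’s API.

```python
# The scientist means: "dilute samples A1-H1 1:10 in buffer, into a fresh plate."
# A naive liquid handler has to be told every tip pickup, aspirate and dispense,
# with labware and liquid classes spelled out. All names here are invented.

steps = []
for well in [f"{row}1" for row in "ABCDEFGH"]:   # samples sit in column 1
    steps += [
        ("pick_up_tip", {}),
        ("aspirate", {"labware": "buffer_trough", "well": "A1",
                      "volume_ul": 90, "liquid_class": "aqueous"}),
        ("dispense", {"labware": "dilution_plate", "well": well,
                      "volume_ul": 90, "liquid_class": "aqueous"}),
        ("aspirate", {"labware": "sample_plate", "well": well,
                      "volume_ul": 10, "liquid_class": "serum_like"}),
        ("dispense", {"labware": "dilution_plate", "well": well,
                      "volume_ul": 10, "liquid_class": "serum_like"}),
        ("drop_tip", {}),
    ]
```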
So what makes useful lab automation today?
Liquid handlers, and more integrated automation, have to be turned into boxes that function like PCR machines. Automation engineers must develop specific workflows on specific robots, perhaps with some known variables exposed to the user. When a scientist walks up with their samples, how to use and parameterise the system must be transparent, and they must trust it not to fail.
Just look at the plethora of ‘assay-ready’ ‘workstations’ that lab automation vendors sell. These are essentially pre-validated, pre-programmed versions of their naive robots, ready to do one thing, and one thing well. They’ve been PCR machine-ified.
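Here’s a rough sketch of what that ‘PCR machine-ification’ can look like in software: the protocol is written, validated and locked down by an automation engineer, and the walk-up scientist only ever touches a few exposed, pre-tested parameters. All names and ranges below are hypothetical.

```python
# A sketch of a 'PCR machine-ified' liquid-handling method: one validated
# protocol, a few exposed parameters, nothing else for the user to break.
# All names and ranges here are hypothetical.

VALIDATED = {
    "sample_count":    range(1, 97),       # 1-96 samples per run
    "dilution_factor": (2, 5, 10, 20),     # only dilutions that have been tested
    "final_volume_ul": range(50, 201),     # 50-200 uL in the destination plate
}

def run_sample_dilution(sample_count: int, dilution_factor: int, final_volume_ul: int):
    """Walk-up entry point: check the request sits inside the validated
    envelope, then hand off to the locked, pre-programmed method."""
    if sample_count not in VALIDATED["sample_count"]:
        raise ValueError("sample_count is outside the validated range")
    if dilution_factor not in VALIDATED["dilution_factor"]:
        raise ValueError("this dilution_factor has not been validated")
    if final_volume_ul not in VALIDATED["final_volume_ul"]:
        raise ValueError("final_volume_ul is outside the validated range")
    # ...launch the pre-validated deck layout and liquid-handling steps here...
    print(f"Running validated dilution: {sample_count} samples, "
          f"1:{dilution_factor}, {final_volume_ul} uL final volume")
```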
Startups that are getting their core processes up and running, and are thinking about scaling them up, often ask us for advice on how and when to start buying automation. I like to apply this PCR machine formula. Ask yourself honestly, which elements of your scientific workflow do you understand deeply enough that you can predict exactly how they will work, including all the flexibility you’ll need to build in?
This often requires patience – it means waiting for your scientists to validate and optimise their workflows, so you can be confident they know the different flavours and parameters they’ll need. And that a new project isn’t going to come along and completely derail it.
It also means that, as a general rule, starting with individual liquid handlers is sensible: it’s easier to find and automate smaller slices of your workflow and build up your automation piece by piece than it is to take on longer, more complex slices and try to integrate steps together. There’s a good chance you’ll need to introduce changes, which will mean going back into development and testing.
What about the future?
So that’s it, is it? Lab automation is forever confined to workflows that are very well understood, parameterised, and standardised? Well, honestly, if you’d asked me a few years ago, I’d have said yes. I’d have cautioned most startups to think really hard before investing in any automation, because finding slices of your workflow that you understand this deeply is hard when you’re still building your engine.
You could spend the same money on hiring a junior scientist for a year. A good one is guaranteed to actually pipette. Plus they problem solve, hypothesise, analyse data, and you can go get a pint with them after work. Without a solid idea of the application, liquid handlers aren’t even guaranteed to do a meaningful amount of pipetting for you.
But times are changin’ – I think there’s now a path to making lab automation as ubiquitous as the PCR machine.
This means solving the interface problem – allowing scientists to program robots in their language, not the robot’s. It means exposing the right level of flexibility to allow them to optimise and tweak. It also means rigorous validation and testing, before the scientist shows up, so that they can trust it’ll do what they ask of it.
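One hypothetical shape that ‘programming in the scientist’s language’ could take is a declarative request that captures the intent (what to dilute, with what, by how much) and leaves the aspirates, dispenses, tips and deck layout to a validated translation layer underneath. This is an illustrative sketch, not a description of Briefly’s actual product.

```python
# A hypothetical scientist-facing request: state the intent, let a validated
# translation layer work out the robot-level steps. Illustrative only.

from dataclasses import dataclass

@dataclass
class DilutionRequest:
    samples: str           # e.g. "plate1:A1-H1"
    diluent: str           # e.g. "PBS"
    factor: int            # e.g. 10 for a 1:10 dilution
    replicates: int = 1

request = DilutionRequest(samples="plate1:A1-H1", diluent="PBS", factor=10, replicates=3)
# A translation layer would expand this into the low-level aspirate/dispense
# steps shown earlier, against a deck layout it has already been validated on.
```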
Check out our demo of how Briefly tackles these issues below. If you’re interested in deploying this in your lab, get in touch!