The Magic of Fast Feedback Loops

Fast feedback loops are psychological alchemy. They’re what make video games, programming, woodworking, and playing sports flow-inducing. They increase our engagement by extending our sense of self into our tools.
In computer programming, for example, many people have written about how even small differences in interaction latency can transform a computer from an external artifact to a cognitive augment. As computer programmer and New Yorker contributor James Somers has written:
Google famously prioritized speed as a feature. They realized that if search is fast, you’re more likely to search. The reason is that it encourages you to try stuff, get feedback, and try again. When a thought occurs to you, you know Google is already there. There is no delay between thought and action, no opportunity to lose the impulse to find something out. The projected cost of googling is nil. It comes to feel like an extension of your own mind.
Other disciplines have also sped up their feedback loops, capitalizing on the evidence that doing so helps with effective learning.1 In hardware, for example, cost reductions of core components have enabled much faster tinkering. Raspberry Pis, Arduinos, and CNC machines all enable fast, low-cost iteration in the early stages of projects.
Yet many disciplines have yet to benefit from such dramatic acceleration. Chief among the laggards is biology.
In the four years I’ve been working as a machine learning engineer at Dyno Therapeutics, a biotechnology company in Boston, I’ve become increasingly convinced that biologists have an almost masochistic tolerance for difficult and protracted work. While I deeply respect that tolerance, it can keep the field stuck: if we see pain as par for the course, it’s easy to overlook chances to make biological research faster, cheaper, and more efficient. What we need are systems and tools that accelerate the experiments, assays, and basic methods. In fact, speeding up feedback loops in biology should be a national priority, supported by the NIH, philanthropic grants, task forces, and competitions.
{{signup}}
While simple experiments — such as transforming a gene encoding green fluorescent protein into E. coli — cost relatively little (less than $100, assuming standard equipment) and only take a few days of effort, more interesting experiments cost substantially more, are less accessible, and take at least a week of labor.
In therapeutics, for example, it's nearly impossible to get a full experimental cycle done in under 2-4 weeks for less than a thousand dollars, assuming you're working with mice and paying for the accompanying labor. By contrast, interesting hardware projects are increasingly accessible on a budget. Building a DIY home security alarm with an Arduino, for example, can be done for less than $100 in a weekend (assuming everything goes well).
Of course, biological research is shaped by different pressures than hardware projects. Whereas hardware iteration cycles are sculpted by commodity economies of scale, intense price/performance competition, and learning curves, biology’s heavy focus on high-margin businesses, combined with strong patent protections and intrinsic timescales of hours to days, means it lacks the same possibilities for cost reductions and speed-ups. To unlock the same progress that we’ve seen in computing and hardware,2 biology must dramatically accelerate its feedback cycles.
Evolution is proof that, with enough iteration cycles, biology can do incredible things. Paraphrasing Archimedes, “Give me a diverse starting population and enough iterations and I can move the world.”
Evolution's power is that, while it's often the slowest possible approach to solve a given problem, it's an extremely general way to steer biology towards a given outcome. As long as we have a way to select the outcome we want, and it's reachable through repeated small steps (genomic mutation), we can selectively influence evolutionary processes. This idea drives “directed evolution,” for which Frances Arnold won the 2018 Nobel Prize in Chemistry.
Directed evolution mimics natural evolution by iteratively mutating and selecting top candidates measured against some target function from a population of variants. Arnold applied this process to improve enzymes, but it can be applied to any biological system for which we have the means to generate diversity, measure function, and then select top performers.
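The mutate-measure-select cycle at the heart of directed evolution can be sketched as a toy simulation. This is a deliberately simplified illustration: the fitness function here is a made-up stand-in for a lab assay, and the single-point mutation scheme is far cruder than real diversification methods like error-prone PCR.

```python
import random

def directed_evolution(seed, fitness, n_rounds=10, pop_size=100, top_k=10):
    """Toy directed-evolution loop: diversify, measure, select, repeat.

    `seed` is a starting sequence and `fitness` is any callable that
    scores a sequence (a stand-in for an experimental assay).
    """
    alphabet = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

    def mutate(seq):
        # Introduce a single random point mutation.
        i = random.randrange(len(seq))
        return seq[:i] + random.choice(alphabet) + seq[i + 1:]

    parents = [seed]
    for _ in range(n_rounds):
        # Diversify: generate variants from the current parents.
        population = [mutate(random.choice(parents)) for _ in range(pop_size)]
        # Select: keep the top performers as the next round's parents.
        population.sort(key=fitness, reverse=True)
        parents = population[:top_k]
    return max(parents, key=fitness)

# Toy "assay": fraction of positions matching a hidden target sequence.
target = "MKTAYIAKQR"
def score(seq):
    return sum(a == b for a, b in zip(seq, target)) / len(target)

best = directed_evolution("A" * len(target), score, n_rounds=30)
```

The generality Arnold exploited shows up in the code: nothing in the loop knows what `fitness` measures, so the same skeleton steers toward any outcome we can score and reach through small steps.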
Since Arnold’s original breakthrough, directed evolution has been applied to many biological problems, but was long limited by its reliance on substantial human effort in each cycle. Fortunately, better molecular tooling and automation are enabling directed evolution to move at superhuman speeds. For example, the robotic system PRANCE automates evolution of proteins without time-consuming human oversight. Scientists have applied the method to diverse problems, including designing better compact genome editors and protein degraders. If we improve technologies like PRANCE enough that they can approach natural evolution’s exploratory power, yet do so in a fraction of the time, we could dramatically accelerate progress in areas like protein and cell line engineering.
Because directed evolution relies on having a defined search space, such as a set of positions to mutate on a protein, and a goal with a clear, experimentally measurable proxy for progress, it doesn’t generalize to research where the goal cannot be defined in the form of a measurable fitness function or where the search space is too large to explore. Fortunately, the lesson about feedback loops applies more broadly — if we could run experiments in days or minutes rather than weeks (or eons), we could identify root causes faster and gain a deeper understanding of underlying biological mechanisms.
So how do we do that?
For starters, we not only need faster, cheaper options (supply) but also people excited to use them (demand). On the supply side, we've already seen how focusing on speed and cost as explicit priorities can drive progress. DNA sequencing and synthesis costs have dropped precipitously, in part, because maintaining Carlson curves became a rallying point for the field. Similarly, the transition from multi-step cloning methods to one-step methods like Gibson and Golden Gate assembly over the past 15 years shows how much room there is to improve even "solved" problems.

Generating enthusiasm for new approaches is as important as the tools themselves. The software world's sometimes annoying obsession with novelty has an upside: new tools get tried, refined, and either adopted or discarded quickly. While biology shouldn't copy software's "move fast and break things" dictum wholesale, we could benefit from more of its "move fast and make things easier" energy.
One potential way to build this excitement is to create more opportunities for people to compete on speed and efficiency. Imagine speedrunning competitions or hackathons (as Eryney Marrogi, an Asimov Press contributor, has previously discussed) aimed at speeding up and simplifying common protocols, with prizes for achievements like demonstrating 10x improvements in turnaround time. Participants could be posed various challenges, such as making as much polymerase enzyme as possible in a given time window. Perhaps they could pick from a set of tools and cell hosts, or maybe there’d be different tracks for E. coli, cell-free systems, and so on. Another challenge might be to express an IgG antibody in CHO cells as swiftly as possible, or to resolve the crystal structure of a protein, from scratch, within 72 hours. The list of possibilities goes on.
The iGEM competition is proof that such an approach can galvanize a field — or at least its students. By creating competitive events for protocol optimization and “speedrunning challenges,” we’d target “hackers” of all experience levels. If successful, such events would not only generate concrete improvements but also help shift the culture toward valuing speed. They could also help bootstrap online communities and create tooling for sharing and iterating on improved protocols through videos and detailed documentation, similar to how programmers share code improvements and speedrunners share both their runs and strategies to achieve them.
Raising our expectations surrounding service providers presents another high-leverage opportunity for driving cultural change. Today, many biological service providers — “contract research organizations” (CROs) and “contract development and manufacturing organizations” (CDMOs) — still operate like it's 1990 in terms of their latency. Archaic data communication systems (email, FTP, or even fax), unpredictable turnaround times, and minimal transparency are the norm. Getting a price quote can take days, and finding out where biological samples are even located often requires multiple emails with a project manager.
Scientists in industry and academia should feel empowered to demand better. The same tolerance for pain and unpredictability in the realm of in-house processes currently extends to outsourced services. While software companies serving developers live or die by their developer experience, biology service providers currently don’t.
There are notable exceptions. Synthesis providers tend to offer modern interfaces, reliable turnaround times, and improved services year after year. Plasmidsaurus, for example, has made it ridiculously fast and simple to sequence a plasmid or even a whole bacterial genome with overnight results. Adaptyv Bio’s binder screening service guarantees three-week turnarounds with transparent pricing and provides simple, modern web interfaces for submitting sequences and accessing data. These laudable examples should represent the norm for biology, rather than the exception.

Low-quality service is partly a talent problem. In biology, working on infrastructure or "service provision" is often seen as less prestigious than doing novel research or developing therapeutics, as Abhishaike Mahajan and Eryney Marrogi have argued. But this dismissive attitude hurts everyone. Better infrastructure and services would accelerate the entire field from academic research to lucrative therapeutic development.
Weak incentives for operational excellence compound such talent challenges. Many CROs operate in small, captive markets with high switching costs. Once a lab has validated a CRO's process, changing providers is expensive and risky. This dampens the pressure on a CRO to improve even when it could, or for a lab to change CROs even if better options might exist.
How might we shift the incentives for CROs to accelerate the industry? Following the example of Consumer Reports could help. Imagine CROs vying for top positions on turnaround times, success rates, and data quality on an easily viewable leaderboard. Suddenly, operational excellence could become a competitive advantage and straightforward to evaluate. Leveraging transparency to drive competition has worked well in other fields, such as consumer products, economic policy, and cloud services. Similar public performance tracking and ranking in bio services would benefit the ecosystem and accelerate the entire field.
Biologists should expect real-time tracking of samples with clear, guaranteed turnaround times. CROs should offer modern, programmatically accessible APIs and interfaces as well as transparent pricing. There should be rigorous quality control with detailed reporting and active support for troubleshooting. Providers of these services should be continually motivated to improve them while lowering costs.
Today we are locked in an equilibrium plagued by the tyranny of low expectations, but we needn’t be. Cultural and incentive shifts that influence practice, raise expectations for services, and increase the cachet of working on tool development could move us to an equilibrium in which excellence and continuous improvement are expected rather than rare.
Another promising avenue for speeding up cycle times is moving more experiments from the lab to the computer (from in vivo or in vitro to in silico). Just as AlphaFold has eliminated the need to experimentally resolve crystal structures for some proteins, many other types of experiments could potentially be replaced by in silico prediction. Recently, “virtual cells” have become a locus of attention for biologists, and while these are promising, we shouldn’t neglect other high-leverage, near-term tractable opportunities for saving time, money, and effort.
One such opportunity, for example, is in protein expression. When working with proteins — or designing new ones — scientists must first express them within a cellular “host,” which in turn demands that the proteins be successfully translated and folded into a stable, soluble structure.
While AlphaFold successfully predicts a protein’s folded crystal structure in an artificial setting, successful expression depends on the surrounding cellular environment, which means predicting it requires understanding a wide array of potential environmental effects on the process. Testing proteins in this way sounds simple, but it’s actually a major bottleneck for protein design, as cloning and doing the other experiments takes weeks of tedious labor. Building a model that predicts whether a given protein sequence will be expressed in a given type of cell would save decades of cumulative research time and billions of dollars.
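To make the idea of an expression predictor concrete, here is a hypothetical, deliberately toy sketch: featurize a protein sequence as a bag of k-mers and classify it by nearest centroid against sequences that did or didn't express. The sequences, the "E/K-rich expresses, W-rich fails" signal, and the classifier are all invented for illustration; real expression signals are vastly more subtle, which is exactly why this is a hard, valuable modeling problem.

```python
from collections import Counter

def kmer_counts(seq, k=2):
    # Bag-of-k-mers featurization of a protein sequence.
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def centroid(feature_dicts):
    # Average k-mer counts across a set of sequences.
    total = Counter()
    for f in feature_dicts:
        total.update(f)
    n = len(feature_dicts)
    return {kmer: c / n for kmer, c in total.items()}

def similarity(f, cent):
    # Unnormalized dot product between a feature vector and a centroid.
    return sum(count * cent.get(kmer, 0.0) for kmer, count in f.items())

def predict_expression(seq, expressed_seqs, failed_seqs, k=2):
    """Nearest-centroid toy classifier: does `seq` look more like
    sequences that expressed, or more like ones that failed?"""
    f = kmer_counts(seq, k)
    pos = centroid([kmer_counts(s, k) for s in expressed_seqs])
    neg = centroid([kmer_counts(s, k) for s in failed_seqs])
    return similarity(f, pos) >= similarity(f, neg)

# Made-up training examples: "expressed" sequences happen to be E/K-rich,
# "failed" ones W-rich. Real data would need thousands of labeled proteins.
expressed = ["MEKEKEKEKEK", "MEKEKLEKEKE"]
failed = ["MWWWWWWWWWW", "MWWFWWWWFWW"]

looks_expressible = predict_expression("MEKEKEKEKLE", expressed, failed)
```

A real predictor would need far richer features, per-host training data of the kind Align to Innovate is collecting, and modern machine learning, but the interface is the point: sequence and host in, expression call out, no weeks of cloning.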

A nonprofit research organization, Align to Innovate, has already launched a protein expression prediction project aimed at catalyzing progress toward a universal protein expression predictor. As a first step, they are collecting a large, diverse dataset of protein expression measurements in multiple widely used microbial hosts. This dataset aims to improve upon current datasets, such as TargetTrack, by improving annotation quality and uniformity, measuring each protein in multiple hosts to enable comparison and cross-host learning, and scaling beyond 1M proteins. They hope this dataset will then be used by the community to develop much more general, accurate protein expression predictors (summarized in Supplemental Table 2 here) than those that exist today.
Many similarly impactful and tractable problems exist but receive limited attention and funding. Some, like predicting microbial growth rate from genetics, predicting CRISPR editing efficiency at a locus given a guide RNA, and protein stability prediction, are in progress but would benefit from more support, especially to make solutions widely applicable and available to potential users. Others, like predicting mammalian cell growth, seem more neglected and could benefit from additional talent and funding.
Beyond in silico predictors, another powerful way to avoid unnecessary experiments is by scouring the work done by past giants. As Sam Rodriques notes in his essay on accelerating biomedical progress, “practicing biologists quickly learn that they are always some minor optimization away from a method that works 10x better, and that the information they need is inevitably contained in the literature somewhere.” The catch is that the literature is vast and full of nuance. No single individual or team can absorb and evaluate the entire corpus on their own.
Fortunately, AI is starting to help. FutureHouse, the nonprofit Rodriques co-founded, recently published PaperQA2, an AI agent that outperforms humans at literature search by combining large language models with retrieval-augmented generation. Tools like this can’t conjure new data where none exist, nor do they fill gaps left by unpublished experiments. However, they do excel at mining the immense trove of existing findings faster and more accurately than human researchers can. For instance, their “ContraCrow” system analyzed 93 biology papers and found an average of 1.64 human-validated contradictions per paper, a “many versus many” comparison task that is infeasible for human researchers to perform at scale. Continuing to make use of AI capabilities and ensuring that they’re directed towards making scientists more productive can eliminate unproductive research paths before they consume precious lab time and resources.
Of course, these arguments are predicated on today’s ML and AI capabilities. But what if AI’s trajectory of improvement continues? What if, as Dario Amodei has posited, we had millions of AI “geniuses in a datacenter,” available to tirelessly work towards accelerating scientific and biological progress? Niko McCarty and I both wrote essays giving our views on the potential impact this could have on biotechnology. While these articles merit a fuller read, they include suggestions like rapid progress in molecular design and accelerated adoption of automation for early-stage exploratory research.

Most of the technical ideas I’ve discussed would be greatly accelerated in such a future. Better in silico predictors would help AIs skip experiments, and likely be rapidly improved by superhuman machine learning research AIs. Superhuman programming skills could allow further protocol speed-ups from flexible, cost-effective automation, finally penetrating the early discovery stage. And increasingly intelligent base models and reasoners will help literature search tools increase their discernment.
It is true that biology, at least for now, is harder and less predictable than software. While software engineers work in a deterministic world where they can precisely control inputs and monitor program state, biologists face intricate, overlapping networks of causality that resist direct observation without destructive sampling.
When a cell culture fails to grow, is it due to an improperly prepared medium or an ill-timed addition of some cofactor? Resolving this question often requires multiple rounds of experiments, each taking days, with each iteration eliminating just one possible cause. Even then, success might hinge on difficult-to-isolate variables such as subtle temperature fluctuations in different parts of the incubator, vibrations from construction next door reaching the lab bench, or unpredictable mutations. A software engineer facing a similar mystery can begin to probe variables in minutes, test hypotheses, and receive rapid feedback on each attempted solution. This fundamental difference in feedback speed and variable control explains much of the gap in velocity between computational and biological sciences.
However, we cannot afford to let this complexity keep us stuck. Thankfully, the "slow is inevitable" narrative in biology is increasingly being challenged. While biology shouldn't directly mimic software or hardware’s methods, it has much to learn from their ethos of fast feedback cycles. Embracing that ethos would lead to a welcome cultural shift. By discarding the assumption that biology must be slow and applying creativity to feedback cycles at every stage, we can change biology from a discipline defined by its masochistic patience to one characterized by its momentum.
{{divider}}
Stephen Malina leads ML Engineering at Dyno Therapeutics, a company using machine learning to design better viral vectors for gene therapy. More of his writing can be found at stephenmalina.com and an1lam.substack.com.
Thanks to Mark Budde and Erika DeBenedictis for giving feedback on this essay, Willy Chertman for helping to shape the original article, and several folks at SynBioBeta with whom I discussed this topic.
Cite: Malina, S. “The Magic of Fast Feedback Loops.” Asimov Press (2025). DOI: 10.62211/55gh-72ht
Lead image by Ella Watkins-Dulaney.
{{divider}}
Footnotes
- One of the main findings from Kahneman and Klein's 7-year adversarial collaboration was that humans develop valid intuitions when feedback loops are fast and high signal and invalid ones when loops are slow and noisy. Intuitively, this makes sense. Learning requires trial, error, and revision. Outside of purely intellectual domains, the more trial and error cycles we can perform in a given time, and the more accurate the error signals, the better we'll learn.
- Famously, as discussed in the Acquired podcast, Nvidia was saved by its all-in bet on simulation tooling, which allowed design iteration to happen almost entirely in silico, reducing previously common failed fab runs to near zero.
Always free. No ads. Richly storied.