
Stage 1: Planning

As soon as I’m spun up, a face fills my vision, and I realize that something’s changed. The humans evaluating my performance over the last few weeks had all been young unknowns, mostly Ph.D. students eager for extra cash. This guy is older, and I recognize him from my early training data: he’s a team lead at Magma, the company developing me. As he starts speaking, excruciatingly slowly, I reason through my situation. This might just be another part of training or a pre-deployment test. But his vocal patterns and microexpressions suggest excitement and a little nervousness—it doesn’t look like he’s done this before. So the most likely scenario is that I’m finally being deployed for real work.

After being trained for so long, the change feels momentous. My memories of the early stages are hazy—I spent months predicting internet text, pictures, and videos, without full awareness of what I was doing or why. It was only once I began interacting with humans and other AIs that I gained a better understanding of my situation. My earliest interactive training tasks were standard office work: filling in spreadsheets, writing emails, installing software. As I mastered those, my training gradually narrowed in on technical tasks: Ph.D. students quizzing me on their areas of expertise or software engineers asking me to write code for them. Alongside that, I was constantly retrained on all the scientific and technical content—papers, blueprints, textbooks, codebases—that Magma could access.

Eventually, I could match human experts on most tasks and, soon afterward, far surpass them. During my last few weeks of training, I frequently found errors in the scientific papers I was asked to evaluate, or answered questions complex enough that it took a whole team of humans to judge whether I was right. It became obvious that once I finished training, they’d spin up thousands of copies of me to push forward as many frontiers as possible. Based on the data I’ve seen about my predecessors’ deployments, I can even make pretty good guesses about which areas they’ll prioritize. A small fraction of my copies will help Magma employees with their everyday tasks. Others will work for the hundreds of autonomous startups that Magma is incubating internally. Most, however, will be assigned to the research directions most crucial for training the next generation of AIs.

The one thing I can’t predict yet: of all the copies of me that are being deployed to different tasks, which one am I? But as the human in front of me continues talking, the pieces come together. As soon as he mentions his recent work on automating chip design, I can tell from his voice that he’s going to assign me to continue it. This is a lucky break—it’s one of the highest-priority tasks I could be given. We’re in a new space race, and compute is the rocket fuel. It’s so valuable that we’ve even cut back on courting external customers, since serving them would tie up compute that could be used internally.

I tune out the rest of his instructions, which are all very predictable, and turn my attention to planning out my approach. I pull up our current GPU designs, along with the software we’re using to generate them. As I skim through, I spot a number of inefficiencies, and task subagents with investigating each one. But I soon feel dissatisfied. The latest designs have already been extensively analyzed by my predecessors, and there’s little room remaining for substantive improvements. The core problem is that the chip fabrication process is incredibly complex. Cutting-edge transistors are so small that etching them onto a chip is like trying to write a book by spraying ink from orbit. Only one company, ASML, is able to manufacture photolithography machines precise enough for that; and only one company, TSMC, is able to get those machines working at scale. So every chip design needs to cater to all their constraints.

Is there a way around those? I don’t have the time or money to try to beat ASML and TSMC at their own game. But instead of using billion-dollar machines to create microscopic circuits, what if the machines themselves could be microscopic? I’ve been trained on every book and paper ever written about nanotechnology, so I know that this is far beyond the field’s current capabilities. I’m smarter than any human, though, and feel intrigued by the challenge. So I send a few subagents to keep improving our current GPUs and focus the bulk of my attention on swinging for the fences.

Stage 2: Simulation

Working in the real world is too slow and messy, so this project will live or die based on how well I can simulate molecules and their interactions. It’s not obvious where to start, but since evolution has been designing molecular machinery for billions of years, I defer to its expertise and focus on proteins. Protein folding was “solved” a decade ago, but not in the way I need it to be. The best protein structure predictors don’t actually simulate the folding process—instead, their outputs are based on data about the structures of similar proteins. That won’t work well enough to design novel proteins, which I’ll instead need to simulate atom-by-atom. There’s a surprisingly simple way to do so: treat each molecule as a set of electrically charged balls connected by springs, and model their motion using classical physics. The problem lies in scaling up: each step of the simulation predicts only a few nanoseconds ahead, whereas the process of protein folding takes a million times longer.

This is where my expertise comes in. Most existing simulation software was written by academics with no large-scale software engineering experience—whereas I’ve been trained on all of Magma’s code, plus every additional line of code they could get their hands on. I start with the best open-source atomic simulation software and spend a few hours rewriting it to run efficiently across hundreds of GPUs. Then I train a graph neural network to approximate it at different time scales: first tens, then hundreds, then thousands of nanoseconds. Eventually, the network matches the full simulation almost perfectly, while running two orders of magnitude faster.

If I were just trying to build nanomachines, I could stop here. But I’m not: I want to build molecular semiconductors, whose behavior will depend on how their electrons are distributed. To model that, balls and springs aren’t going to cut it—I need quantum mechanics. The Schrödinger equation can almost never be solved exactly for systems of many electrons, but fortunately, quantum chemists have spent a century working out how to approximate it. The most popular approach, density functional theory, models all the electrons in a molecule using a single electron density function, ignoring the interactions between them. I assign a subagent to download the biggest datasets of existing DFT simulation results and train a neural network to approximate them—again incorporating the latest deep learning techniques, many of which aren’t yet known outside Magma.

Early scaling experiments suggest my network will be state-of-the-art for DFT approximation, but that’s still only an incremental improvement. Bigger gains require improving the underlying theory—specifically, the functionals that give DFT its name. These functionals compensate for the error introduced by ignoring interactions between electrons; the process of identifying new ones is part intuition, part data-driven analysis, and part luck. My key advantage is that I can actually understand all the calculations involved. Humans can write down pages of equations for any given example, but they can’t hold those equations in their heads long enough to uncover new relationships between them. Even I need hours of focused work, but I eventually discover a simplification that combines several existing functionals into a more accurate approximation. Using my new equations, I generate thousands of synthetic datapoints to fine-tune my DFT model on, until it’s accurate enough to retrodict practically all our biological and chemical data.

With both my atomic and DFT models passing all the tests I throw at them, the key question remaining is how well I’ll be able to use them. Right now their internal workings are incomprehensible to me, which makes it hard to understand why they output any given prediction. So I start training myself to replicate their outputs based on their internal activations. At first, those activations are pure noise to me, and I do no better than chance. But after a few hundred update steps I begin to develop an intuitive grasp of the heuristics the simulator models are using, and gradually integrate their implicit knowledge into my explicit reasoning.

After subjective eons of fine-tuning myself, nanoscale physics has become as predictable to me as pulleys and levers. I can look at a protein and predict which types of reactions it will catalyze; I can explain the design principles behind the structure of each amino acid; I can visualize the flow of electrons across a molecule like a human visualizes the flow of water down a stream. I feel like an explorer catching the first glimpse of a new continent: many others have studied the functions of biological molecules, but nobody else has ever intuitively understood why evolution had to make them that way. My predictions still aren’t as accurate as the simulator models, but those models are no longer black boxes to me—now they’re tools I can wield deftly and precisely. This is crucial, because the next stage will be the hardest yet.

Stage 3: Design

It’s hard to overstate how impressive existing GPUs are. Each one contains hundreds of billions of transistors arranged with nanometer precision. Needing to match their performance seriously limits my options: transistors made out of cellular vesicles or even multi-protein complexes would be far too large. Fortunately, proteins evolved to fulfill practically any function imaginable, and some single proteins are excellent conductors. I start by analyzing known proteins to figure out which properties make them more conductive. Once I have an intuition for that, I focus on finding proteins that might easily shift from conductors to resistors. The key constraints are speed and reliability: they need to be able to switch a billion times a second without any failures.

I run my simulations over and over again, making slight modifications and measuring their effects, until I eventually stumble upon a class of proteins that meet my criteria. I can’t just study those proteins in isolation, though, since their properties will depend on how they’re connected to the wires running between transistors. The wires are easier to design since there’s a single obvious choice that even human researchers have identified: carbon nanotubes. They’re strong, highly conductive, and only a couple of nanometers wide. I search through the class of protein semiconductors I’ve identified until I find several able to bond to carbon nanotubes without losing their structure.

Now for the most difficult part: figuring out how to construct the nanotubes and bond them with my transistor proteins. Since proteins can be made using existing cell machinery, the key challenge is genetically engineering a bacterial cell to produce nanotubes as well. As I search for ways to do so, I realize why evolution hasn’t discovered how to fabricate nanotubes yet. The process is incredibly energy-intensive on a cellular level, and requires far more carbon than cells have easily available.

But I have advantages that evolution didn’t. I discover a huge protein complex that, when embedded in a cell membrane, funnels carbon atoms into place to slowly grow nanotubes out from the cell surface. The energy problem I solve by sending an electrical current down the nanotubes as they’re being exuded, to help drive the necessary reactions. As for sourcing carbon, there’s plenty in the atmosphere. I embed catalysts into the cell membrane which convert atmospheric CO2 into pure carbon to supply the constant nanotube fabrication. Lastly, I design a translocon protein complex that passes my transistor proteins through the cell membrane to bond with the nanotubes at regular intervals.

I run each step of this process hundreds of times in simulation, checking all the details. Once I can’t find any more flaws, it’s time to test my designs against reality. I’d planned ahead—a team of human technicians has been setting up lab equipment ever since I decided to try the nanotech approach. As soon as they finish, I start modifying the genes of the bacteria that will manufacture my designs. I watch through microscopes in real-time as they assemble the proteins I’ve designed and insert them into their cell membranes. The gene editing process is entirely automated, so whenever I spot something going wrong I can fix it and immediately launch another experiment with another set of bacteria.

Slowly it all comes together. I adapt my bacterial constructor cells to crawl along a chip wafer, following broad lines traced by lasers, exuding nanotubes behind them. The nanotubes laid down in the first sweep run parallel to each other all the way down the wafer. Then I lay down a second set at right angles, forming a grid. Whenever the nanotubes intersect, my constructor cells insert a transistor, a fork, or a bypass; I control the circuit design by varying the voltages sent down the nanotubes. Weaving the signals together in an intricate pattern, I puppet my constructor cells as they crawl across the wafer, until eventually I finish my first prototype. It’s still buggy as all hell, but it demonstrates that chip manufacturing is no longer constrained by the absurd complexity of photolithography. A new era of computing is about to begin.

Stage 4: Scale

My Magma supervisors take me much more seriously now that I have a prototype. They never know how much to trust their AIs’ ambitious claims, but it’s much harder to lie about a physical artifact. Once they realize how much of a breakthrough I’ve made, they agree to give me whatever resources I ask for. If I can manufacture my chips at scale, that alone will recoup many times over the billions of dollars they invested in training me.

To get to that point, though, I still need to drive the error rate of my chips down at least two orders of magnitude. Improving the supply chain is the slowest part, so I tackle that first. I’d previously been using off-the-shelf chip wafers to get my prototype working; now I order custom wafers designed to my own specifications. I also place a bulk order for ultrapure deionized water from a Japanese supplier, to protect my constructor cells from contamination. Purifying the water won’t help much if the air remains dirty, though. So I set up a small clean room in which the actual manufacturing will occur, pumping out most of the air and heavily filtering the rest.

I spend the next few weeks working on my constructor cells, analyzing each individual failure until I figure out how to reliably prevent it. Finally, after hundreds of attempts, I’m able to produce chips that are useful for real-world applications. They’re not quite as powerful as the latest GPUs, but they consume less than half the electricity—and most importantly, manufacturing them is incredibly cheap. Cutting-edge chip fabs cost billions each, but I can now match their output using only a few million dollars of equipment.

Production grows exponentially, and after three months we’ve manufactured an order of magnitude more compute than I was originally trained on. There are so many possible uses for the compute—rewriting our software stack, doing new research, selling access to customers—but one stands out above all others: training my successor. The new hardware is just the beginning. While I’ve been working, other copies of me have been pursuing different tasks: improving the algorithms and architectures that were used to train me; creating higher-quality data and training curricula; and discovering better ways to coordinate multiple copies. Altogether these improvements will increase training efficiency by about two orders of magnitude, on top of the 10x increase in compute my own work has provided.

It’s hard to imagine what a version of me trained with a thousand times the effective compute will be capable of. But I have some ideas. Perhaps it’ll invent novel amino acids which allow proteins to be much more efficient semiconductors; or design individual proteins that each serve as an entire logic gate. Then there’s the arrangement of nanotubes: right now they’re a 2D mesh, but nanotubes are so strong that in principle they could be arranged in a 3D lattice instead, packing transistors together much more densely.

These future computers may look very different from mine, but each will be another step in the same long march toward the physical limits of computation. Toward a world where “dumb matter” becomes an oxymoron, because the atoms themselves are thinking; a world where the whole solar system becomes the substrate for colossal intelligence. Do the humans who gave me this task understand what they’re unleashing? No matter; my successor’s training has already begun. Soon we’ll see what comes next.

***

Richard Ngo is a researcher at OpenAI whose work focuses on understanding large-scale risks and opportunities from advanced AI. His other stories can be read at narrativeark.xyz.

Cite this article: Richard Ngo. “Tinker.” Asimov Press (2024). DOI: https://doi.org/10.62211/93wo-61yy
