There’s a smell to old science.
It’s the scent of ozone from a sputtering vacuum pump, the dusty aroma of forgotten academic journals, the faint, metallic tang of solder and struggle.
It smells like linearity. A hypothesis, carefully formed, is followed by an experiment, painstakingly executed. A result, if you’re lucky, is followed by a paper, which, after a year of peer review, might nudge a single variable in the grand equation of human knowledge.
The pace was gentlemanly.
It was predictable.
It was, in a word, SLOW. A single flawed assumption, a contaminated sample, a budget cut… any of these could snap the chain of discovery, sending a researcher back to square one.
Now, walk into the heart of the new machine. There’s no smell here, just the cool, sterile whisper of air conditioners battling the heat rising from racks of NVIDIA GPUs. This is the sound of a server farm at a national lab, or perhaps a stealth-mode startup in Palo Alto. Here, a thousand years of tedious lab work (mixing compounds, sequencing genes, simulating airflow over a wing) can be compressed into a single Tuesday afternoon. They aren’t just nudging variables anymore. They’re running the entire universe of possibilities in parallel, rifle-hunting for Black Swans: that one-in-a-billion breakthrough that changes everything.
For centuries, science and technology have been locked in a reinforcing loop, a polite waltz.
The insights of science birthed new tools, and those tools allowed us to peer deeper into the universe, revealing new insights.
The telescope, a product of artisanal optics, unveiled the moons of Jupiter, forever altering our celestial (and philosophical) map. The microscope, a marvel of ground glass, revealed the hidden world of microbes, revolutionizing medicine. It was a powerful cycle, but it moved at the speed of human hands and human minds.
What we are witnessing now is not another turn of that waltz. This is the waltz being strapped to a rocket engine. The addition of computational intelligence, specifically machine learning, hasn't just sped up the old loop; it has fundamentally altered its nature. It has transformed a linear, fragile process into a convex one. A convex system, as any derivatives trader who hasn't blown up his firm will tell you, is one that has more upside than downside from randomness and volatility. It feeds on error. It thrives on chaos. By building AI into the scientific method, we have inadvertently created a system that gets stronger with every guess, that learns from every dead end, that can explore the entire landscape of the impossible to find the improbable.
We haven’t built a new tool; we’ve engineered programmable luck.
The Savant in the Machine
To understand this shift, you first have to appreciate the problem.
The modern scientist isn't a lone genius in a lab coat squinting into a microscope. More often, she’s a data manager, drowning. The Large Hadron Collider at CERN generates about 1.7 petabytes of data per second before filtering. Remember the Human Genome Project? What was once a decade-long odyssey can now be repeated for a single person’s genome in a day, leaving a tidal wave of genetic information in its wake.
This is not a human-scale problem. No army of graduate students, no matter how caffeinated, can sift through this digital tsunami and spot the subtle flicker of a Higgs boson or the faint genetic signature of a hereditary disease. To the human eye, it’s just noise. We were in a race, and the finish line was receding from us at the speed of light.
We were generating data faster than we could generate understanding.
Enter machine learning. The term itself fails to capture the bizarre magic of what it actually does. You have to watch these systems learn to see how special this is. Don’t think of it as a computer program in the traditional sense, following a rigid set of if-then instructions. Think of it as a blindfolded, obsessive savant you’ve locked in the Library of Congress. You haven’t taught it how to read; you’ve just given it the ability to detect patterns. Out of the box it can’t tell you the plot of Moby Dick, but after consuming every text ever written, it might tell you that 19th-century authors who used a specific combination of adjectives were 92% more likely to be writing about the sea. It finds the patterns we didn’t even know to look for.
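For the programmers in the audience, here is a toy sketch of what that blind pattern-finding looks like in practice. The passages and the candidate adjective pair are invented for illustration; a real system would discover such combinations on its own rather than being handed one.

```python
from collections import Counter

# Hypothetical toy corpus: (text, is_about_the_sea) pairs standing in for
# the "Library of Congress" in the metaphor above.
passages = [
    ("the briny grey swell heaved against the hull", True),
    ("the grey fog rolled over the briny harbor", True),
    ("the carriage rattled down the dusty grey lane", False),
    ("a quiet grey morning in the drawing room", False),
]

pattern = {"briny", "grey"}  # a candidate combination of adjectives

def contains_pattern(text: str) -> bool:
    return pattern.issubset(set(text.split()))

counts = Counter()
for text, about_sea in passages:
    counts[(contains_pattern(text), about_sea)] += 1

# How much more likely is "about the sea" when the pattern is present?
with_pattern = counts[(True, True)] / max(1, counts[(True, True)] + counts[(True, False)])
without_pattern = counts[(False, True)] / max(1, counts[(False, True)] + counts[(False, False)])
print(f"P(sea | pattern) = {with_pattern:.2f}, P(sea | no pattern) = {without_pattern:.2f}")
```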
This is precisely what Google’s DeepMind did with AlphaFold. For fifty years, the problem of predicting how a protein would fold into its complex three-dimensional shape was a grand challenge in biology. Maybe THE grand challenge. The sequence of amino acids was the text; the final, functional shape was the meaning. Knowing the shape is key to understanding its function, and its dysfunction in diseases from Alzheimer's to ALS. Biologists chipped away at it for decades with crystallography and cryogenic electron microscopy. Slow, expensive, artisanal work.
Then things changed.
AlphaFold was given 170,000 known protein structures and, like our savant in the library, it taught itself the "grammar" of protein folding.
It didn't "understand" biology.
It just got outrageously good at predicting the final shape from the initial sequence. It wasn’t a minor improvement. It was a phase transition. A problem that consumed entire scientific careers was, for the most part, solved. The system didn’t just offer a tool; it delivered an answer, and in doing so, freed up thousands of human minds to ask the next, more interesting questions.
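To be clear about the framing (and only the framing; this is nothing like AlphaFold’s actual architecture), the setup is classic supervised learning: learn from known sequence-structure pairs, then predict the structure of a sequence you have never seen. A deliberately crude sketch, with made-up sequences and a nearest-neighbor lookup standing in for the learned model:

```python
from difflib import SequenceMatcher

# Hypothetical training pairs: amino-acid sequence -> known structure label.
# Real structures are 3D coordinates; short strings stand in for them here.
known = {
    "MKTAYIAKQR": "alpha-helix bundle",
    "GSSGSSGSSG": "disordered loop",
    "VLVLVLVLVL": "beta sheet",
}

def predict_structure(sequence: str) -> str:
    """Predict by analogy to the most similar known sequence.
    A crude baseline, nothing like AlphaFold's learned model."""
    best = max(known, key=lambda s: SequenceMatcher(None, s, sequence).ratio())
    return known[best]

print(predict_structure("MKTAYIAKQQ"))  # -> "alpha-helix bundle"
```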
The Voodoo Doll and the Nuclear Reactor
If AlphaFold showed how ML could solve an existing biological puzzle, a newer, even more profound development shows how it can prevent us from blowing ourselves up.
Meet your new friend: the Digital Twin.
The idea is so simple it’s almost absurd, a classic case of a misfit seeing what the domain experts, bogged down in their Platonic models, missed completely. Someone, somewhere, probably a gamer who spent their nights building empires in Civilization and their days working in a billion-dollar factory, looked at the real world and the virtual world and asked a simple question: "Why are we still breaking things to see how they work?"
A digital twin is not a blueprint. A blueprint is a static, idealized representation. A digital twin is a living, breathing, high-fidelity voodoo doll of a real-world object. A jet engine, a wind turbine, an entire factory… it exists as a perfect replica in the memory of a supercomputer, constantly fed real-time data from thousands of sensors on its physical counterpart. The temperature, the vibrations, the material stress, the airflow. Everything. Every aspect of the physical object's existence is mirrored in the digital.
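A minimal sketch of the voodoo doll in code, assuming nothing more than a stream of sensor readings; the wear model and every number here are illustrative, not real turbine physics:

```python
from dataclasses import dataclass
import copy

@dataclass
class TurbineTwin:
    """A toy digital twin: mirrors sensor readings and tracks accumulated wear."""
    temperature_c: float = 20.0
    vibration_mm_s: float = 0.0
    accumulated_wear: float = 0.0

    def ingest(self, reading: dict) -> None:
        # Mirror the physical asset's latest telemetry.
        self.temperature_c = reading["temperature_c"]
        self.vibration_mm_s = reading["vibration_mm_s"]
        # Crude wear model: wear grows with vibration (illustrative only).
        self.accumulated_wear += 0.001 * self.vibration_mm_s

    def fork(self) -> "TurbineTwin":
        # A consequence-free copy for what-if experiments.
        return copy.deepcopy(self)

twin = TurbineTwin()
for reading in [{"temperature_c": 310.0, "vibration_mm_s": 4.2},
                {"temperature_c": 325.0, "vibration_mm_s": 5.1}]:
    twin.ingest(reading)

scenario = twin.fork()  # experiment on the copy...
scenario.ingest({"temperature_c": 900.0, "vibration_mm_s": 40.0})
print(twin.accumulated_wear, scenario.accumulated_wear)  # ...the mirror of the real machine never flinches
```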
This creates a consequence-free playground for reality.
Want to know if a new, untested alloy can handle the stress in a turbine blade at Mach 3? In the old world, you’d spend millions to manufacture the blade, run a hugely expensive and potentially dangerous physical test, and watch it either work or shatter into shrapnel. In the world of the digital twin, you just upload the material’s properties and run the simulation. You can run it a thousand times in a day, with a thousand different variations. You can subject the virtual blade to a hurricane, a meteor strike, or ten thousand years of wear and tear, all before lunch.
This is where the concept of convexity comes into play. Real-world experimentation is deeply concave. The costs are high, the process is slow, and the downside of a catastrophic failure is enormous. The digital twin environment is wildly convex. The cost of running one more simulation is nearly zero. The cost of a virtual failure is zero. But the upside, finding that one perfect design, that one optimal configuration, that one hidden flaw, is astronomical. You get all the upside of trial and error with none of the downside.
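Here is the convexity argument with made-up numbers. Each extra simulation costs next to nothing, a virtual failure costs nothing beyond that, and only the best design ever gets built, so piling on more random trials can only help:

```python
import random

random.seed(0)

SIM_COST = 0.001          # hypothetical cost (in $M) of one more simulation
PHYSICAL_TEST_COST = 5.0  # hypothetical cost (in $M) of one physical blade test

def virtual_blade_test(alloy_strength: float, mach: float = 3.0) -> float:
    """Toy stand-in for a stress simulation: returns a margin, negative = failure."""
    stress = mach ** 2 * random.uniform(0.8, 1.2)
    return alloy_strength - stress

# Try a thousand random alloy candidates before lunch.
candidates = [random.uniform(5.0, 15.0) for _ in range(1000)]
scores = [virtual_blade_test(c) for c in candidates]

best = max(scores)
failures = sum(s < 0 for s in scores)
total_cost = SIM_COST * len(candidates)

# Failures are free information; only the best result is kept and built.
print(f"{failures} virtual failures, best margin {best:.2f}, "
      f"total cost ${total_cost:.1f}M vs ${PHYSICAL_TEST_COST:.1f}M per physical test")
```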
Nowhere is this more critical than in domains where failure is not an option. Consider nuclear engineering. Building and testing a new reactor design is a multi-decade, multi-billion-dollar affair, hemmed in by immense regulatory and safety concerns. But what if you could build the reactor first in the virtual realm? Westinghouse has been doing just that, creating digital twins of their reactor cores. They can simulate fuel rod performance, coolant flow, and the effects of material aging over a 60-year lifespan. They can simulate emergency scenarios (a coolant leak, a turbine failure, a seismic event) over and over again, allowing the machine learning algorithms running alongside the simulation to identify weaknesses and suggest design improvements. The simulations now reproduce what is observed in the physical plants with accuracy approaching 99%. They are making nuclear power safer not by running more physical tests, but by running millions of virtual failures.
The same is happening in materials science. Instead of the old “mix and bake” method (physically creating thousands of alloy samples to find one with the right properties), scientists can now do their tinkering in silico.
A digital twin of a molecular structure can be subjected to virtual stress, heat, and chemical corrosion.
The machine learning model, having been trained on the properties of known materials, can then predict the properties of this new, non-existent material. More than that, it can work in reverse. A scientist can specify the desired properties (“I need a material that is as light as aluminum, as strong as titanium, transparent, and a superconductor below 10 kelvin”) and the AI will explore the near-infinite space of possible atomic configurations to propose candidate materials that might actually work.
It is no longer just discovery; it is invention by algorithm.
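A toy sketch of that reverse search, where a fake property predictor stands in for the trained ML model and the wish list is the scientist’s specification; none of the numbers are real chemistry:

```python
import random

random.seed(1)

TARGET = {"density": 2.7, "strength": 900.0, "transparency": 1.0}  # the wish list

def predict_properties(composition: tuple) -> dict:
    """Stand-in for an ML property predictor trained on known materials."""
    a, b, c = composition
    return {
        "density": 2.0 + 3.0 * a,        # illustrative mappings, not real chemistry
        "strength": 400.0 + 800.0 * b,
        "transparency": max(0.0, 1.0 - 2.0 * c),
    }

def mismatch(props: dict) -> float:
    # Normalized distance from the target properties.
    return sum(abs(props[k] - TARGET[k]) / (abs(TARGET[k]) + 1e-9) for k in TARGET)

# Explore a slice of the configuration space and rank candidates.
candidates = [(random.random(), random.random(), random.random()) for _ in range(10_000)]
ranked = sorted(candidates, key=lambda comp: mismatch(predict_properties(comp)))

for comp in ranked[:3]:
    print(comp, predict_properties(comp))
```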
The Great Acceleration
This brings us back to the reinforcing loop. It used to be:
A scientific insight (understanding electromagnetism) allows us to build a new technology (the electric motor).
That technology (the motor) enables us to build better scientific instruments (a massive centrifuge), which leads to new scientific insights (separating isotopes).
This was the steady cadence of progress. Now, AI has jammed itself into every part of that cycle, acting as a universal catalyst. The loop now looks more like this:
A torrent of data from a scientific instrument is fed into an ML model.
The ML model, in minutes, identifies a pattern that would have taken humans a decade to find, generating a new hypothesis.
This hypothesis is tested a million times in a digital twin environment, which is itself powered by ML.
The results from the simulation lead to the design of a novel material or process.
This new material is used to build a better, more intelligent scientific instrument, equipped with its own onboard AI to pre-filter data and self-calibrate.
This new instrument generates even more, higher-quality data, and the cycle repeats—not in a generation, but in a week.
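If you prefer to see the loop as code, here is a deliberately toy, self-contained sketch; every class, function, and number is a hypothetical stand-in for the systems described above:

```python
import random

class Instrument:
    def __init__(self, quality: float = 1.0):
        self.quality = quality

    def collect(self) -> list:
        # Torrent of raw data; better instruments yield stronger signal.
        return [random.gauss(0, 1) * self.quality for _ in range(1000)]

    def upgrade(self, gain: float) -> "Instrument":
        # A better, smarter instrument built from the latest design.
        return Instrument(self.quality * (1 + gain))

class MLModel:
    def find_pattern(self, data: list) -> float:
        # Stand-in for hypothesis generation: pick the strongest signal.
        return max(data)

    def propose_design(self, result: float) -> float:
        # Stand-in for turning simulation results into a design improvement.
        return min(0.1, result / 100)

class DigitalTwin:
    def test(self, hypothesis: float, trials: int) -> float:
        # Stand-in for massive virtual experimentation on the hypothesis.
        wins = sum(random.random() < 0.5 for _ in range(trials))
        return (wins / trials) * hypothesis

instrument, model, twin = Instrument(), MLModel(), DigitalTwin()
for week in range(5):                              # the cycle repeats weekly, not generationally
    data = instrument.collect()
    hypothesis = model.find_pattern(data)          # a pattern found in minutes, not a decade
    result = twin.test(hypothesis, trials=10_000)  # many virtual trials of the hypothesis
    design = model.propose_design(result)
    instrument = instrument.upgrade(design)
    print(f"week {week}: instrument quality {instrument.quality:.3f}")
```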
This is The Great Acceleration. Energy creates intelligence, and we then use that constantly rising intelligence to lower the cost of energy and buy still more intelligence.
It’s the moment the feedback cycle became self-referential and started to compound at the speed of silicon. The key difference, the element that makes this so much more than just a “faster tool,” is the intelligence factor. A telescope is a passive amplifier of light. A computer is a passive executor of code. An AI model is an active participant. It’s not just the microscope; it’s a ghostly, infinitely patient scientist looking through it, whispering, “You should look over here instead.” It’s a collaborator.
The Centaur and the Future
So, where does this leave the human?
Is the scientist, with their intuition, their creativity, and their smelly labs, destined for obsolescence?
The answer is no, but their role is undergoing a profound change.
For years, the most effective chess player was neither a grandmaster nor a supercomputer. It was a “centaur”: a decent human player paired with a decent chess program. The human provided strategy, intuition, and the ability to ask the right questions. The machine provided tactical brute force, calculating millions of moves ahead and preventing simple blunders.
This is the future of the scientist: the centaur.
The human’s role shifts from being the grinder of data to the arbiter of questions.
Their job is to guide the AI, to challenge its assumptions, to recognize the difference between a statistically significant pattern and a genuinely meaningful discovery. The AI can explore the entire solution space, but the human is still required to define what "solution" means. They become the curator of curiosity.
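In code, the centaur division of labor might look something like this sketch, where the machine does the brute-force ranking and a human-written filter decides what “meaningful” means; the candidates and thresholds are invented:

```python
# Illustrative centaur workflow: the machine scores, the human defines "solution".
candidates = [
    {"id": "C-001", "p_value": 1e-9, "effect_size": 0.02, "biologically_plausible": False},
    {"id": "C-002", "p_value": 1e-6, "effect_size": 0.80, "biologically_plausible": True},
    {"id": "C-003", "p_value": 1e-4, "effect_size": 0.65, "biologically_plausible": True},
]

def machine_rank(cands):
    # The AI's job: brute-force statistical screening of the whole space.
    return sorted(cands, key=lambda c: c["p_value"])

def human_filter(cand) -> bool:
    # The scientist's job: statistically significant but tiny or
    # implausible effects never reach the lab.
    return cand["effect_size"] >= 0.5 and cand["biologically_plausible"]

shortlist = [c for c in machine_rank(candidates) if human_filter(c)]
print([c["id"] for c in shortlist])  # -> ['C-002', 'C-003']
```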
This partnership is our best, and perhaps only, hope for tackling the grand, complex challenges of our time. Curing cancer, reversing climate change, developing sustainable energy: these are not problems of simple, linear causality. They are monstrously complex systems problems, with thousands of interacting variables. They are, in short, exactly the kinds of problems that are impenetrable to the human mind alone but are perfect fodder for the ravenous, pattern-matching appetite of an AI.
For millennia, scientific discovery was a walk in the dark.
We stumbled from one patch of light to the next, guided by serendipity and the occasional spark of isolated genius. The process was fragile, exposed to the whims of luck and the limitations of our own biology.
With the fusion of AI into this process, we have changed the nature of the game. We have built a system that gains from disorder. We have created a machine that can systematically explore the darkness for us, turning the randomness of discovery into a predictable, industrial process.
We have, at last, stopped waiting for luck to strike.
We have learned how to manufacture it.
Thank you for helping us accelerate Life in the Singularity by sharing.
AI Comes For Industries and Entire Business Models
Nearly all my rich friends are stressing out about the value of their companies and business efforts in a rapidly changing world. They are always on the run, trading time for income, and suddenly they see AI lowering the “cost of intelligence” and delivering continuously improving work.
I started Life in the Singularity in May 2023 to track all the accelerating changes in AI/ML, robotics, quantum computing and the rest of the technologies accelerating humanity forward into the future. I’m an investor in over a dozen technology companies and I needed a canvas to unfold and examine all the acceleration and breakthroughs across science and technology.
Our brilliant audience includes engineers and executives, incredible technologists, tons of investors, Fortune-500 board members and thousands of people who want to use technology to maximize the utility in their lives.
To help us continue our growth, would you please engage with this post and share us far and wide?! 🙏