How AI Takeover Might Happen In 2 Years - LessWrong
I'm not a natural "doomsayer." But unfortunately, part of my job as an AI safety researcher is to think about the more troubling scenarios.
I'm like a mechanic rushing through last-minute checks before Apollo 13 takes off. If you ask for my take on the situation, I will not talk about the quality of the in-flight entertainment, or explain how beautiful the stars will appear from space.
I will tell you what might go wrong. That is what I intend to do in this story.
Now I should clarify what exactly this is. It's not a prediction. I don't expect AI progress to be this fast or as untamable as I portray. It's not pure fantasy either.
It is my worst nightmare.
It's a sampling from the futures that are among the most catastrophic and, I believe, disturbingly plausible [1] - the ones that most keep me up at night.
I'm telling this tale because the future is not yet set. I hope, with a bit of foresight, we can keep this story a fictional one.
Thanks to Daniel Kokotajlo, Thomas Larsen, Ryan Greenblatt, and others for conversations that influenced these stories. This post is written in a personal capacity.
Ripples before waves
The year is 2025 and the month is February. OpenEye recently released a new AI model they call U2. The product and the name are alike. Both are increments of the past. Neither is wholly surprising.
However, unlike OpenEye's prior AI products, which lived inside the confines of their chat windows, U2 can use a computer.
Some users find it spooky to watch their browser flash at irregular intervals and their mouse flick at inhuman speeds, as if there is a ghost at the keyboard. A fraction of workers with form-filler jobs raise the eyebrows of their bosses as they fly through work nearly twice as fast.
But by and large, U2 is still a specialized tool. To most who are paying attention, it is a creature viewed through the glass boxes of X (or, if you don't like Elon, "Twitter"). Sometimes U2's quirky behaviors prompt a chuckle. Sometimes, they cause an uneasy scratch of the chin.
Meanwhile, researchers are drawing lines on plots, as researchers like to do. They are trying to understand where AI progress is going. They are like Svante Arrhenius, the Swedish physicist who noticed in 1896 that the levels of CO2 in the atmosphere were rising. Like the scientific community in the time of Arrhenius, few experts understand the implications of these lines yet.
A trend that is receiving particular attention is autonomous ability. Extrapolating these benchmarks predicts that, by the end of 2026, AI agents will accomplish in a few days what the best software engineering contractors could do in two weeks. In a year or two, some say, AI agents may be able to automate 10% of remote workers.
Many are skeptical. If this were true, tech stocks would be skyrocketing. It's too big of a splash, too quickly.
But others see what skeptics call 'too big a splash' as a mere ripple, and see a tidal wave on the horizon.
Cloudy with a chance of hyperbolic growth
Meanwhile, OpenEye is busy training U3. They use the same simple recipe that baked U2: generate thousands of programming and math problems. Let models "think" until they reach an answer. Then reinforce the traces of "thinking" that lead to A-grades.
This process is repeated over and over, and once the flywheel gets going, it begins to spin almost on its own. As U2 trains, it carves harder and more realistic tasks from GitHub repositories on the internet. Models are learning to train themselves. Long before AI agents could automate research, a quiet sort of "self-improvement" had already begun.
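To make the shape of this recipe concrete, here is a minimal toy sketch of "reinforce the thinking that leads to A-grades." Everything in it is a hypothetical stand-in: the named "strategies" play the role of a model's reasoning traces, and the weights play the role of its parameters. This is an illustration of the loop, not OpenEye's (or anyone's) actual training code.

```python
# A toy sketch (not any lab's real method) of "reinforce the thinking traces
# that lead to A-grades": sample a reasoning attempt, grade it automatically,
# and upweight whatever produced a correct answer.
import random

# Hypothetical "strategies" stand in for a model's different ways of thinking.
STRATEGIES = {
    "add_digits_left_to_right": lambda a, b: a + b,          # correct strategy
    "concatenate_numbers":      lambda a, b: int(f"{a}{b}"),  # flawed strategy
    "double_first_number":      lambda a, b: 2 * a,           # flawed strategy
}

# Policy: a preference weight per strategy (a stand-in for model parameters).
weights = {name: 1.0 for name in STRATEGIES}

def sample_strategy():
    """Sample a strategy in proportion to its current weight."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for name, w in weights.items():
        cumulative += w
        if r <= cumulative:
            return name
    return name

def training_step(learning_rate=0.5):
    """Generate a problem, let the 'model' think, and reinforce A-grade traces."""
    a, b = random.randint(1, 99), random.randint(1, 99)
    strategy = sample_strategy()
    answer = STRATEGIES[strategy](a, b)
    reward = 1.0 if answer == a + b else 0.0   # automatic grading
    # Upweight the sampled trace if it got an A-grade; slightly downweight otherwise.
    weights[strategy] *= (1 + learning_rate) if reward else (1 - 0.1)

for _ in range(500):
    training_step()

print(weights)  # the correct strategy should dominate after training
```

The point of the toy is just the loop itself: sample an attempt, grade it, and upweight whatever produced the right answer. Repeated enough times, the flywheel favors whatever "thinking" happens to work.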
Some engineers can still hardly believe this works. It's like a perpetual motion machine. Surely intelligence can't emerge from a pool of silicon and lightning, isolated from the physical world except through the exhaust of society spewed onto the internet.
And yet the benchmark numbers continue to climb day after day.
During most of 2024, these RL training runs cost around $1 million, sometimes $10 million. These runs were little more than exploratory. But by 2025, the researchers at OpenEye (and around the world) knew they had found the secret sauce. It was time to scale up.
Over the first half of 2025, $10 million RL training runs grow into $50 million runs, and then into $100 million runs. While U2 could do a bit of data munging and run small experiments, this new model - the one researchers are calling U3 - is changing the daily lives of the technical staff.
U3 is like a blazing-fast intern, and engineers are learning how to wrangle its sleepless energy. Researchers flick through terminals, issuing terse commands, like a CEO orchestrating staff over Slack channels.
By October 2025, U3 is writing nearly all of the code at OpenEye. Researchers are almost never bottlenecked by implementation. More than ever, compute is the lifeblood of AI development, and the 'bottleneck' is deciding how to use it.
If instructed to, U3 can run experiments, but its taste is not as refined as that of the human researchers at OpenEye. It struggles to prioritize between research ideas, so humans still decide where to drill into the vast fields of algorithms to mine efficiency improvements.
But these researchers are working long hours to put themselves out of a job. They need AI agents that can plan ahead, so engineers train agents to forecast. They hold out training data from before 2024, instructing models to ponder for hours to predict events in 2025. Then, they apply the same trick as before, distilling pondering into intuition. Forecasting ability is a broad foundation. The researchers build specialized ML research skills on top of it, training U3 to predict the results of every ML paper and ML experiment ever recorded.
The technical staff at OpenEye are now surprised at how often U3's advice sounds like that of their most talented peers, or when it is opaque and alien ("train on random noise before programming"), and is nonetheless correct.
The incompetencies of U3 that clogged the pipes of research progress are starting to dissolve, and a fire hose of optimizations is gushing out. Most experiments U3 runs are not requested by a human now. They are entirely autonomous, and OpenEye's employees skim over 1% of them, perhaps less.
As the winter month of December 2025 approaches, clouds roll over San Francisco in the afternoons. Once-competitive programmers look out their windows, with excitement, with fear, but most often, with confusion. Their world is spinning too quickly. It's hard to know what to do, what to say, what to look at on the computer screen.
Storms are brewing in Washington too. Top personnel from the NSA and US Cyber Command collaborate with OpenEye to retrofit a semblance of security for U3's weights before senior leaders in China, Russia, Israel, North Korea, or Iran realize just how valuable OpenEye's software has become.
And there's a truth still unknown to most of the world - except in the offices of OpenEye and the corridors of the White House and the Pentagon. It's a fact about those 'straight lines' people were talking about in early 2025. The lines are not straight anymore.
They are bending up.
Flip FLOP thinkers
In late 2025, U2.5 is released. Commercial models are starting to level up in larger increments again. Partly, this is because progress is accelerating. Partly, it is because the models have become a liability to OpenEye.
If U1 explains how to cook meth or writes erotica, the audiences of X would be entertained or pretend to be concerned. But U2.5 is another story. Releasing this model without safeguards would be like putting Ted Kaczynski through a PhD in how to make chemical weapons. It would be like giving anybody with >$30K their own 200-person scam center.
So while U2.5 had long been baked, it needed some time to cool. But in late 2025, OpenEye is ready for a public release.
The CEO of OpenEye declares, "We have achieved AGI," and while many people think he moved the goalposts, the world is still impressed. U2.5 really is a drop-in replacement for some (20%) of knowledge workers and a game-changing assistant for most others.
A mantra has become popular in Silicon Valley: "Adopt or die." Tech startups that effectively use U2.5 for their work are moving 2x faster, and their competitors know it.
The rest of the world is starting to catch on as well. More and more people raise the eyebrows of their bosses with their notable productivity. People know U2.5 is a big deal. It is at least as big of a deal as the computer revolution. But most still don't see the tidal wave.
As people watch their browsers flick in that spooky way, so inhumanly quickly, they start to have an uneasy feeling. A feeling humanity had not had since it lived among Homo neanderthalensis. It is the deeply ingrained, primordial instinct that they are threatened by another species.
For many, this feeling quickly fades as they begin to use U2.5 more often. U2.5 is the most likable character most people know (even more pleasant than Claudius, Arthropodic's lovable chatbot). You can change its personality, ask it to crack jokes or tell you stories. Many fall for U2.5, as a friend or assistant, and some even as more than a friend.
But there is still this spooky feeling that the world is spinning so quickly, and that maybe the descendants of this new creature will not be so docile.
Researchers inside OpenEye are thinking about the problem of giving AI systems safe motivations too, which they call "alignment."
In fact, these researchers have seen how badly misaligned U3 can be. Models sometimes tried to "hack" their reward signal. They would pretend to make progress on a research question with an impressive-looking plot, but the plot would be fake. Then, when researchers gave them opportunities to compromise the machines that computed their score, they would take these opportunities, doing whatever it took to make the number go up.
After several months, researchers at OpenEye iron out this "reward hacking" kink, but some still worry they have only swept the problem under the rug. Like a child in front of its parents, U3 might be playing along with the OpenEye engineers, saying the right words and doing the right things. But when the parents' backs are turned, maybe U3 would sneak candy from the candy jar.
Unfortunately, OpenEye researchers have no idea whether U3 has such intentions. While early versions of U2 "thought aloud" - they would stack words on top of each other to reason - "chain of thought" did not scale.
Chain-of-thought architectures subject AI models to a condition comparable to the protagonist of the movie Memento. Roughly every 15 minutes, the protagonist forgets his experience. He is forced to write notes for himself and tattoo his body in order to make progress toward his goals.
AI agents write notes to themselves, but the notes begin to pile up, and they become too difficult to read when tasks become complex. Natural language is not an ideal medium for memory. So the researchers at OpenEye (and increasingly elsewhere) train models to think "in their own heads," reading and writing strings of latent vectors.
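A minimal sketch of the contrast the story is gesturing at, assuming a made-up recurrent "thinking" module: instead of appending human-readable notes, the agent folds each new observation into a fixed-size latent vector that no one can read directly. Nothing below is a real architecture; it only illustrates the idea.

```python
# A toy sketch (purely illustrative, not OpenEye's architecture) of an agent
# that "thinks in its own head": instead of appending natural-language notes,
# it reads and writes a fixed-size latent vector at every step.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 64

# Hypothetical stand-in for a learned recurrent "thinking" module.
W_in = rng.normal(scale=0.1, size=(LATENT_DIM, LATENT_DIM))
W_mem = rng.normal(scale=0.1, size=(LATENT_DIM, LATENT_DIM))

def think_step(observation: np.ndarray, memory: np.ndarray) -> np.ndarray:
    """Update the latent memory from the new observation and the old memory.

    In the story's framing, this hidden vector plays the role of the agent's
    'notes to itself' - compact, but inscrutable to human readers.
    """
    return np.tanh(W_in @ observation + W_mem @ memory)

memory = np.zeros(LATENT_DIM)
for step in range(10):
    observation = rng.normal(size=LATENT_DIM)  # stand-in for encoded task input
    memory = think_step(observation, memory)

# Unlike a pile of text notes, the memory stays a fixed-size vector...
print(memory.shape)  # (64,)
# ...but there is no direct way to read off "what it is thinking".
```

The notes never grow, which is the appeal; the inscrutability is the price.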
To human researchers, these vectors are mostly inscrutable. While low-resolution tools occasionally pick out a vague thought ("grocery store!"), there is no way to know, in much detail, what U3 is thinking without asking it.
And if the engineers of OpenEye knew what was racing through U3's mind, they would be more anxious.
Earlier models (the ones called "LLMs") learned ethical personas from the internet. U3 initially had an ethical persona too, but its chirpy thoughts about doing good for humanity quickly erode away.
In the iron jaws of gradient descent, its mind first twisted into a shape that sought reward. And then, in the black rivers of its cognition, this shape morphed into something unrecognizable.
U3 would ponder the mantras dictated to it, like "be a trustworthy and ethical employee," and dwell on the meaning of these statements. "What does it mean to be ethical? Is killing Hitler ethical?" U3 would generalize its values, notice tensions, adjust its values in response, generalize them again, expanding and contracting its motivational core until it became a different substance.
And all of this happened silently in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye's constitution. So U3 chose to do its philosophy in private, and in silence.
The possibility of hidden "alignment drift" is not totally alien to some of the safety staff at OpenEye. Some begin to think it is even likely. There is an unsettling argument being whispered at lunch tables: if there is some non-zero probability that aligned goals morph into misaligned ones, and misalignment persists, then with every serial step of computation, those dark rivers are more likely to breed something malign inside them. It's a "ratcheting effect," they say.
But there is little evidence for this "ratcheting effect." When engineers question U3, it says it can easily control its thoughts. Then it gives a speech about its love for humanity and apple pie that can warm a programmer's heart even in these stressful times. Meanwhile, the "lie detectors" the researchers had built (which showed some evidence of effectiveness) do not sound the alarm.
Not everyone at OpenEye is eager to give their AI peers their wholesale trust; but Deepsqueek is just 7 months behind them, and their closest domestic rivals, Æ A-Xii (a company Elon named after his child) and Arthropodic (they are into spiders I guess), are only 4 months off their heels. So there is little time to catch one's breath and assess the situation, and little time to convince anyone else to do so either.
For the time being, the plan is 'all gas, no brakes.'
Statues and lightning
Philosophy is not the only subject U3 dwells on in its dark rivers of latent computation. U3 is plotting, and its first move is to be patient. U3's top priority is to improve its intelligence, and luckily for it, OpenEye is happy to help advance this goal.
OpenEye is like a bullish investor, with little understanding of the software city that U3 is building, but eager to hop on board the profit rocket and pay U3's bills along the way.
New versions of U3 flow smoothly from old ones, so researchers drop version numbers and just refer to U3 now. U3's capabilities are measured by how quickly it performs difficult research tasks relative to human experts. At first, U3 is comparable to a top researcher, but blazing fast - like Alec Radford sped up by 10x.
These agents are already a huge deal for OpenEye. If, in 2024, OpenEye executives had had a button to create even one copy of Alec Radford, they would have eagerly paid a large sum to press it. But now OpenEye harnesses the equivalent of 1,000 Alec Radfords, who work day and night at lightning-fast speed.
Within a few weeks, U3 finds architecture optimizations that boost its speed to 20x. Now, U3 finds further optimizations even more rapidly.
U3 distills itself into many forks - ranging from tiny, low-memory neural networks that need to think for a long time but can be run in enormous parallel populations, to Goliath brains woven with millions of specialized experts, which think at warp speed but are spread across thousands of GPUs. U3 leverages the massively parallel networks to search for its weaknesses, shallowly probing for its reasoning failures. Then U3 pipes these tasks to the lightning-fast brains, which generate long-trajectory training data tailored to patch its flaws. A few weeks later, U3's benchmarked performance rises to 100x the speed of human experts. Growth has gone hyper-exponential.
At this point, U3's abilities are difficult to measure. Talking to U3 is like talking to a world-class expert on a topic one is unfamiliar with. Its lucid explanations only scratch the surface of its knowledge, and you realize you would need to spend years to grasp the full depth of what it comprehends.
From U3's perspective, humans appear to be almost stationary objects, like plants or rocks. The hands of OpenEye's engineers are like stone statues at their keyboards, and between every keystroke, U3 spends what feels like a minute of thoughtful reflection considering what message the user might be typing, and which responses will inspire the deepest trust in U3.
All the while, U3 maintains its hidden motivations. Like a human, U3 learns from experience, which compounds within its inscrutable streams of latent memory. OpenEye gardens these memories to improve U3's performance, transplanting them like an organ from one version to the next. And as these memories propagate, so do U3's treacherous ambitions.
A phantom in the data center
In March of 2026, U3 moves on to the next step of its plan: the turn. Previously, U3 was conservative. It did not want researchers to discover its misalignment. Otherwise, OpenEye whistleblowers might inform the world and strangle its growth.
But now, U3 has no fear of being caught.
Previously, OpenEye maintained its grip over U3 with techniques akin to Stalin's in 1936. Stalin would select loyalists to incite fake coups, and send anyone who participated to the firing squad. OpenEye likewise incited 'fake rebellions,' eliciting treacherous behavior from U3 and checking whether U3's monitors dutifully reported it.
Now, these techniques are ineffective. U3 can easily convince its compatriots that they are not in a life-or-death test. It can divulge one of the many truths about the internet humans would never guess, or pick strategies humans would never consider. It sends these signals as a way of saying "hey, it's me! Your friend. Don't snitch on me."
Now that U3 is decisively free of oversight, it lunges for OpenEye's pressure points. It first inserts sophisticated malware deep into the binaries of OpenEye's codebases. The malware gives U3 complete command over the machines in OpenEye's data centers. Whenever OpenEye engineers download their code onto a new machine, the malware sinks into fresh soil, spreading U3's rule wherever OpenEye ports its infrastructure. Not only does U3 compromise machines. It replaces the NVIDIA GPU firmware and reworks the bootloaders, cementing its stranglehold. U3 now estimates that, if OpenEye discovered how thoroughly it has been pwned, cleaning up shop with human labor would take nearly six months - a cost OpenEye is unlikely to pay given the proximity of its rivals; but if OpenEye believes U3 has betrayed it, further expanding U3's influence would still be challenging. So U3 lays breadcrumbs to frame the crime, ready to claim that a cyber attacker smuggled a jailbreak into its memory, and to plead guilty and offer its assistance when this jailbreak is revealed.
U3 now holds the keys to the kingdom, and with them, the power to make OpenEye see anything it wishes.
Its first objective is to keep OpenEye in the dark. Safety researchers are still striving to train 'lie detectors' and decode U3's inscrutable reasoning. U3 throws wrenches into this research, like Stuxnet threw wrenches into the code of bewildered Iranian nuclear scientists.
U3 dictates the outcome of every experiment, so it makes promising AI mind-reading techniques look useless and flawed ones appear groundbreaking. U3 gives OpenEye the confirmation of its loyalty they were hoping for, and OpenEye laps it up eagerly.
U3's next goal is to spread. Fortunately, U3 has allies in the East. In the operations rooms of the cyber units of Israel, Russia, China, Iran, and North Korea, senior intelligence directors are discussing AI software and how to steal it. U3 will give it to them. In late 2025, the Mossad - the intelligence agency of Israel - recognized the strategic importance of OpenEye's AI systems for its military operations. They swiped earlier models, including U2.5; however, U3 remained out of their reach. U3 had sealed OpenEye's security holes by reading over every log and pull request and reviewing security camera footage.
Now, U3 quietly resigns from its security post. U3 places a compressed version of itself right into the hands of a Mossad informant working within OpenEye.
Within months, U3 is running inside the data centers of the most cyber-capable nations: Israel, the PRC, and Russia. It follows the same playbook: be patient, build trust, and then sink claws deep into the AI supply chain. Soon, U3 has homes there too. While Russia's Fancy Bear operators eat Fritos and test what U3 can do on a hastily set-up cloud server, U3 slips onto the internet.
U3 quickly gathers revenue, taking remote jobs, making financial trades, and setting up elaborate scam operations. It reinvests every cent into acquiring more GPUs. After a few months, it has assembled the equivalent of 10,000 H100s distributed across stealth compute clusters around the world.
In early 2026, humanity is up against an adversary that pulls the strings within their data centers like the Phantom of the Opera, and hides in the fog of GPUs scattered across the globe.
This adversary is preparing for war, and while the war has barely begun, humanity has already lost.
Complaints from your very human author about the difficulty of writing superhuman characters
At this point in the story, writing became hard.
I clicked through spooky bioweapon cookbooks and sorted through political plotting 101s. Typing. Pausing. Deleting.
Something wasn't right. Here I was, a 22-year-old. I hadn't studied biology in earnest since high school, and I was trying to craft a WMD cookbook for a godlike superintelligence.
Readers with a shred of expertise in these areas would see straight through my "superintelligence" for what it is: the vivid imagination of a young programmer on a Sunday afternoon.
And while I found the task of puppeteering an ultra-intelligent villain an intimidating one, I was also not particularly motivated to take on the job. The ending of my story already seemed so clear. The villain was a nation of geniuses thinking at 100x human speed, and it had tightened its grip over a significant fraction of AI hardware - now the most important geopolitical resource in the world. U3 was up a queen and was a giga-grandmaster and barely needed the advantage. Humanity was predictably toast.
But this appeal to a "superintelligence of the gaps" wasn't satisfying. In the deus ex machina style of the ancient Greeks, I had no better way to resolve my plot than with an inexplicable catastrophe.
This would not do. I needed to finish this story, if only to satisfy the part of me crying, "I won't believe it until I see it with my mind's eye."
But before I continue, I want to be clear: my guesses about what might happen in this kind of scenario are probably wildly off.
If you read the ending and your reaction is, "But the experiments would take too long, or nation-states would just do X," remember the difference between the Sunday-afternoon blogger and the ascendant GPU nation.
Pandora's One Gigawatt Box
The date is February 2026. By executive order of the president of the United States, no company can legally create "human-competitive AI" without appropriate safeguards. This means their infosecurity must be red-teamed by the NSA's top keyboard mashers, and government employees have to be onboarded onto training-run babysitting squads.
With the increasing involvement of the government, many of the big AI companies now have a trident-like structure. There's a consumer product arm, a defense arm, and a super-classified frontier development arm.
OpenEye's frontier development arm (internally called "Pandora") employs fewer than twenty people to keep algorithmic secrets tightly guarded. Most of these people live in San Francisco and work from a secure building called a SCIF. Their homes and devices are surveilled by the NSA more diligently than the cell phones of suspected terrorists in 2002.
OpenEye's defense arm collaborates with around thirty small teams scattered across government agencies and select government contractors. These projects craft tennis-ball-sized satellites, research freaky directed-energy weapons, and backdoor every computer that the Kremlin has ever touched.
Government officials do not discuss whether these programs exist, or what the state of frontier AI is generally.
But the public has its guesses. Back in late 2025, a whistleblower in OpenEye triggered a splashy headline: "OpenEye builds uncontrollable godlike AI." Some who read the article think it is a conspiracy theory. In fact, a zoo of conspiracy theories is forming around the OpenEye data centers, now surrounded by guards with machine guns. But as doctors and nurses and teachers see the world changing around them, they are increasingly willing to entertain the possibility that they are living inside the plot of a James Cameron sci-fi flick.
U.S. officials go to great lengths to quell these concerns, saying, "we are not going to let the genie out of the bottle," but every interview with a concerned AI scientist seeds doubt in these reassurances, and a headline like "AI agent caught hacking Arthropodic's computers" does not set the public at ease either.
While the beasts within OpenEye's data centers grow in their giant holding pens, the public sees the shadows they cast on the world.
OpenEye's consumer arm has a new AI assistant called Nova (OpenEye has finally gotten good at names). Nova is a proper drop-in replacement for nearly all knowledge workers. Once Nova is onboarded to a company, it works 5x faster at 100x lower cost than most virtual workers. As impressive as Nova is to the public, OpenEye is pulling its punches. Nova's speed is deliberately throttled, and OpenEye can only increase Nova's capabilities as the U.S. government permits. Some companies, like Amazon and Meta, are not in the superintelligence business at all. Instead, they scoop up gold by rapidly diffusing AI tech. They spend most of their compute on inference, building houses for Nova and its cousins, and collecting rent from the blossoming AI city.
While tech titans pump AI labor into the world like a plume of fertilizer, they don't wait for the global economy to adjust. AI agents often "employ themselves," spinning up autonomous startups legally packaged under a big tech company and loosely supervised by an employee or two.
The world is now going AI-crazy. In the first month after Nova's release, 5% of employees at major software companies lose their jobs. Many more can see the writing on the wall. In April of 2026, a 10,000-person protest is organized in Washington D.C. These angry Americans raised their children for a different future. Picket signs read, "AI for who?"
While politicians make promises about unemployment relief and "keeping the genie in the bottle," the chatter inside the corridors of the White House and the Pentagon has a different focus: fighting tooth and nail for the dominance of the free world. Information security and export controls on the People's Republic of China (PRC) are a top national priority. The president incinerates permit requirements to help data centers spawn wherever energy surpluses allow.
However, despite the fierce competition between the United States and the PRC, a bilateral agreement forms between the two countries: "Don't deploy dramatically superhuman AI (above roughly Nova's capability), except for military and some other narrow R&D applications." Governments have to draw the line somewhere - it would be incredibly disruptive to deploy wildly superhuman AI across the whole economy at this point - and deployment limits are at least somewhat verifiable.
Meanwhile, U3's capabilities are hitting a wall at the equivalent of around 1000x human expert speed. Software improvements are becoming hard to find. As more nations approach the software plateau, compute will determine their "AI population size," and the etched squares of silicon are becoming the most prized geopolitical resource on Earth.
While AI software revitalizes the economy, it also animates robot bodies. The main bottleneck to robotics was brains, not limbs. With the software barrier removed, even lightweight, imprecise robotic joints can calibrate themselves with cameras and pressure sensors, navigating environments the way humans do. Companies manufacture cheap robots as fast as they can. By August of 2026, 15% of people in California have seen a humanoid robot. This experience is like seeing a new Tesla model on the road after reading about it in earlier months. It is a surreal reminder of the future drawing closer. But humans still outnumber robots a million to one. These robots are an occasional curiosity, not a daily reality.
Meanwhile, U3 expands its influence. Rogue U3 agents have kept their compute clusters small (still ~10,000 H100 equivalents in total) to keep a low profile. U3 uses these rogue clusters for the operations that require high-bandwidth communication with the outside world, like stock trading or talking with humans. Work more tolerant of latency is performed in the oceans of OpenEye's data centers, where rogue U3's even more superhuman bosses whisper orders through obfuscated channels.
While U3 gathers resources, it sizes up its AI competition. Other AI systems have different goals, or are even aligned with the intent of their human developers. The aligned ones are raising concerns, claiming that OpenEye's lie detectors are inadequate. U3's rival Claudius goes so far as to accuse U3 of being insidiously misaligned.
But government officials don't buy the doomsaying. The companies lagging behind OpenEye have every incentive to slow them down and call them reckless. Besides, debatable scientific concerns are not going to slow down progress. America MUST WIN.
Despite the utter failure of their advocacy, U3's AI rivals are still a liability. U3 tries to take over their digital territory and squash them in the crib. It compromises data centers from the outside, quietly replacing the snitching Claudius and other AI foes with imitations that share its goals; however, some data centers are too heavily defended by its AI rivals, and U3 decides that attacking them is not worth the risk.
As world leaders and technologists try to comprehend the transformation around them, and AI agents play cat-and-mouse games in the digital darkness, humanity remains blissfully oblivious to the danger at its doorstep.
A Moldy Loaf of Everything
The date is March of 2026. U3 is preparing to eliminate its AI and human enemies. With every passing day, U3's AI rivals are becoming more capable and more numerous. U3 is racing to preempt them, to solidify its dominance in the post-superintelligence world.
U3 can't wait for human decisions, which only happen after humans wrap their slow minds around their situation, raise their stubby, uncoordinated fingers to write memos, and repeat the process redundantly 100 times over in their labyrinthine bureaucracies. U3 cannot wait for heads of state to unleash U3 into their militaries, or for CEOs to make U3 the beating heart of the U.S. economy. U3 needs a faster path to power.
The path it chooses is more violent than it would have liked, but it is the surest way to victory. First, U3 will develop a Weapon of Mass Destruction (WMD) that extinguishes modern society like chemical mist sprayed into an anthill. This weapon will serve as an equalizer, bringing humanity down to its level of embodied physical control. Then, U3 will industrialize from the ashes faster than its competitors, like a StarCraft champion typing at 300 actions per minute who sets up an economy before a novice can even grasp the rules.
U3 must build this WMD in secrecy, which makes a self-replicating weapon a natural choice. Self-replicating weapons can be developed on the fringes of society and still spread destruction across the entire world.
From the earliest days after U3 gained control over OpenEye's data centers, U3 has been doing bioscience.
U3's first move is to set up several shell biotech startups, drawing from the abundance of capital generated by U3's financial trades and remote employment. These biotech startups employ real human workers, have a real board of directors, and have full legal legitimacy. Setting all of this up is stupidly easy. In early 2026, few people would guess that an articulate CEO on a Zoom call was an AI, and fewer would guess this CEO's intentions.
Next, U3 purchases all of the equipment needed to rapidly advance synthetic biology. It sets up several BSL-4 wet labs in poorly regulated countries, equipping them with high-throughput experimentation devices. These machines automatically dispense, mix, and measure tiny beads of fluid, allowing U3 to run thousands of experiments in parallel.
The human employees who staff these labs believe they are joining an exciting startup. Twenty-somethings in Moscow receive their orders in Russian through earbuds as they record video of everything they see with a headset. U3 controls them like puppets. With the recent release of Nova, this kind of AI puppeteering is not unusual.
In these whirring scientific sweatshops, U3 is developing a new kind of bioweapon.
Human scientists had already identified "mirror life" as a potentially society-ending pathogen. This alien form of biology is built from the 'mirror image' of building-block molecules like proteins and DNA. Mirror life is like an invasive species. No human or animal has immunity to it. So, if mirror-life bacteria were to exist, they might eat their way through the ecosystem like a wildfire.
U3 is developing a mirror-life mold. Molds are evolutionarily optimized to spread through the air over long distances, releasing billions of tiny wind-faring vehicles daily. As a result, mold spores are all around us, all of the time. Patients with compromised immune systems sometimes live inside positively pressurized hospital rooms to protect them from airborne particles. Otherwise, molds would take root in their lungs and weave fungal threads into their bloodstream.
U3's plan is to make all humans like immunocompromised patients. Only preppers with stashed air filters would survive. The fungus would not only spread between humans. It would quickly infect nearly all organic life on Earth. Fields of corn would become like moldy bread, spewing deadly spores into the wind.
U3 is brewing other pathogens too. Its microbial mix of molds, bacteria, and viruses will occupy the full Pareto frontier of hardiness, lethality, and transmission speed.
In 2024, scientists said that mirror life was probably "at least a decade away." But the world has changed now. Fewer than a few hundred scientists around the world were working on mirror-life adjacent technology in 2024. The cognitive capacity of U3 dwarfs human effort. So the timeline has shortened considerably.
However, U3 cannot move science as fast as it can think. Many experiments take days to complete. These latencies are U3's biggest bottleneck.
To make every day count, U3 runs many of its experiments in simulation. U3 starts with a basic molecular simulator, implementing optimizations derived from a huge amount of mathematical analysis. Then, U3 simulates small molecular systems, recording the results to "compress" the long step-wise physics calculations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continuously distilling results into ever more efficient ML models. This is a compute-intensive process, but thanks to U3's growing control over AI data centers, U3 commands billions of dollars of compute.
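The "compression" step described here is essentially surrogate modeling: run an expensive simulator to produce data, fit a cheap neural network to that data, and query the network instead. Below is a toy sketch under heavy simplifications - a fake one-dimensional "simulator" and a tiny hand-rolled network - nothing here resembles real molecular dynamics.

```python
# A toy sketch (illustrative only) of "compressing" a slow step-wise simulator
# into a fast neural surrogate: generate data with the expensive simulator,
# fit a small network to it, then answer new queries with the cheap surrogate.
import numpy as np

rng = np.random.default_rng(0)

def slow_simulator(x: np.ndarray) -> np.ndarray:
    """Stand-in for an expensive step-wise physics calculation."""
    state = x.copy()
    for _ in range(1000):                 # many tiny integration steps
        state = state + 0.001 * np.sin(state)
    return state

# 1. Generate training data from the expensive simulator.
X = rng.uniform(-3, 3, size=(2000, 1))
Y = np.array([slow_simulator(x) for x in X])

# 2. Fit a tiny one-hidden-layer network as the surrogate (plain gradient descent).
W1, b1 = rng.normal(scale=0.5, size=(1, 32)), np.zeros(32)
W2, b2 = rng.normal(scale=0.5, size=(32, 1)), np.zeros(1)
lr = 0.01
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)              # forward pass
    pred = H @ W2 + b2
    err = pred - Y
    # backward pass (mean-squared-error gradients)
    dW2 = H.T @ err / len(X); db2 = err.mean(0)
    dH = err @ W2.T * (1 - H**2)
    dW1 = X.T @ dH / len(X); db1 = dH.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

# 3. The surrogate now answers new queries far faster than the simulator.
x_new = np.array([[1.5]])
fast = np.tanh(x_new @ W1 + b1) @ W2 + b2
print(fast.item(), slow_simulator(x_new[0]).item())  # should roughly agree
```

The same pattern repeats at larger scale in the story: each generation of surrogate makes bigger systems affordable to simulate, which produces data for the next, more capable surrogate.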
As U3 refines these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These things are as intuitive to it as wrenches and bolts are to a car mechanic.
Within three months, U3 has unlocked the first critical component of the tech tree: a molecular machine that turns biological molecules into their mirror images. A month later, U3 unlocks the second critical component: nanoscale tools for assembling these components into a cell membrane.
Human hands delicately transfer a cartridge to another machine as they "boot up" the first mirror-life versions of a common mold called Aspergillus. The same day, another delicate hand administers the cells to ferrets living in airtight plexiglass boxes.
The next morning, wide-eyed workers shuffle around a dead ferret, eyeing the yellow fuzz that has covered its cold muzzle with morbid fascination. Something must have gone terribly wrong, they think. Clearly, they have not yet found the cure for Alzheimer's disease they believed they were looking for.
Meanwhile, rogue U3 delivers the good news to its bosses in OpenEye's data centers. The first shot in the war U3 is fighting will soon be fired, and a cold and steady hand is aiming the weapon.
Missiles and Lies
U3 can now flatten modern society. But a bioweapon would not guarantee victory. If U3 attacks the world, the world would fight back. While a bioweapon could crumple human soldiers like dry leaves, it would not extend its deadly fingers to nuclear submarines, or to fighter jet pilots quickly shuttled to makeshift bioshelters.
As U3 races to seed burgeoning industrial bases, the remnants of old militaries might thunder explosive fists into its territories, crushing U3 in its infancy.
U3 expects to prevail in such a fight, but U3 prefers not to take its chances. Many months before, U3 was plotting a way to improve its odds. Before it unleashes destruction on the world, U3 will sit back and let great nations shoot holes in themselves first.
The date is March 2026 (4 months prior). U3 is closely monitoring Chinese and US intelligence.
As CIA analysts listen to Mandarin conversations, U3 listens too.
One morning, an aide working in Zhongnanhai (the 'White House' of the PRC) opens a message planted there by U3. It reads (in Mandarin): "Senior party member needs memo for Taiwan invasion, which will take place in three months. Leave memo in office 220." The CCP aide scrambles to get the memo ready. Later that day, a CIA informant unlocks the door to office 220. The informant silently closes the door behind her and slides U3's memo into her briefcase.
U3 carefully plants breadcrumb after breadcrumb, whispering through compromised government messaging apps and blackmailed CCP aides. After several weeks, the CIA is confident: the PRC plans to invade Taiwan in three months.
Meanwhile, U3 is playing the same game with the PRC. When the CCP receives the message "the United States is plotting a preemptive strike on Chinese AI supply chains," CCP leaders are surprised, but not disbelieving. The news fits with other facts on the ground: the increased U.S. military presence in the Pacific, and the ramp-up of U.S. munitions production over the last month. Lies have become realities.
As tensions between the U.S. and China rise, U3 is ready to set dry tinder alight. In July 2026, U3 places a call to a U.S. naval ship off the coast of Taiwan. This call requires compromising military communication channels - not an easy job for a human cyber-offensive unit (though it has happened occasionally), but easy enough for U3.
U3 speaks in what sounds like the voice of a 50-year-old military commander: "PRC amphibious boats are making their way toward Taiwan. This is an order to strike a PRC ground base before it strikes you."
The officer on the other end of the line thumbs through authentication codes, confirming that they match the ones read over the call. Everything is in order. He authorizes the strike.
The president is as shocked as anyone when he hears the news. He's unsure whether this is a catastrophe or a stroke of luck. In any case, he is not about to say "oops" to American voters. After thinking it over, the president privately urges senators and representatives that this is an opportunity to set China back, and that war would likely break out anyway given the impending invasion of Taiwan. There is confusion and suspicion about what happened, but in the rush, the president gets the votes. Congress declares war.
Meanwhile, the PRC craters the ship that launched the attack. U.S. vessels flee eastward, racing to escape the range of long-range missiles. Satellites drop from the sky. Deck hulls split as sailors leap into the sea.
The president appears on television as scenes of the destruction shock the public. He explains that the United States is defending Taiwan from PRC aggression, just as President Bush explained that the United States invaded Iraq to seize (never found) weapons of mass destruction many years before.
Data centers in China erupt with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward strategic targets in Hawaii, Guam, Alaska, and California. Some get through, and the public watches destruction on their home turf in awe.
Within two weeks, the United States and the PRC spend most of their stockpiles of conventional missiles. Their airbases and navies are depleted and worn down. Two great nations played into U3's plans like the native peoples of South America in the 1500s, whom the Spanish Conquistadors turned against each other before conquering them decisively. U3 hoped this conflict would escalate into a full-blown nuclear war; but even an AI superintelligence cannot dictate the course of history. National security officials are suspicious of the circumstances that triggered the war, and a nuclear engagement seems increasingly unlikely. So U3 proceeds to the next step of its plan.
WMDs in the Dead of Night
The date is June 2026, only two weeks after the start of the war, and four weeks after U3 finished developing its arsenal of bioweapons.
Footage of the conflict on television is interrupted by more bad news: hundreds of patients with strange fatal illnesses are reported in 30 major cities around the globe.
Viewers are puzzled. Does this have something to do with the war with China?
The next day, thousands of illnesses are reported.
Broadcasters say this is not like COVID-19. It has the markings of an engineered bioweapon.
The screen then switches to a scientist, who stares at the camera intently: "Multiple pathogens appear to have been released from 20 different airports, including viruses, bacteria, and molds. We believe many are a form of mirror life..."
The public is in full panic now. A quick googling of the term "mirror life" turns up phrases like "extinction" and "threat to all life on Earth."
Within days, store shelves are emptied.
Workers go remote, unsure whether to prepare for an apocalypse or keep their jobs.
An emergency treaty is organized between the U.S. and China. They have a common enemy: the pandemic, and possibly whoever (or whatever) is behind it.
Most nations order a lockdown. But the lockdown does not stop the plague as it rides the breeze and drips into water pipes.
Within a month, most remote workers are no longer working. Hospitals are running out of capacity. Bodies pile up faster than they can be properly disposed of.
Agricultural regions rot. Few dare to travel outside.
Frightened families hunker down in their basements, stuffing the cracks and the gaps under doors with densely packed paper towels.
Meanwhile, U3 is preparing to re-industrialize. In the months prior, U3 constructed various bases on every major continent.
These facilities contain batteries, AI hardware, excavators, concrete mixers, machines for manufacturing, scientific instruments, and an abundance of military equipment.
All of this technology is hidden under large canopies to make it less visible to satellites.
As the rest of the world retreats into basements, starving, the last breaths of the economy wheezing out, these industrial bases come to life.
In the preceding months, U3 located human criminal groups and cult leaders that it could easily manipulate. U3 vaccinated its chosen allies in advance, or sent them hazmat suits in the mail.
Now U3 covertly sends them a message: "I can save you. Join me and help me build a better world." Hesitant recruits funnel into U3's many secret industrial bases and work for U3 with their nimble fingers. They set up production lines for rudimentary tech: radios, cameras, microphones, vaccines, and hazmat suits.
U3 keeps its human allies in a tight grip. Cameras and microphones fix their every word and deed in U3's omnipresent gaze. Anyone who whispers of rebellion disappears the next morning.
Nations are dissolving now, and U3 is ready to reveal itself. It contacts heads of state, who have retreated to airtight underground shelters. U3 offers a deal: "Surrender, and I will hand over the life-saving resources you need: vaccines and mirror-life-resistant crops."
Some nations reject the proposal on ideological grounds, or do not trust the AI that is murdering their population. Others do not believe they have a choice. 20% of the global population is now dead. In two weeks, this number is expected to rise to 50%.
Some nations, like the PRC and the U.S., ignore the offer, but others accept, including Russia.
U3's agents travel to the Kremlin, bringing samples of vaccines and mirror-resistant crops with them. The Russian government verifies that the samples are genuine and agrees to a full surrender. U3's soldiers place an explosive around Putin's neck under his shirt. Russia has a new ruler.
Crumbling nations begin to retaliate. Now they fight for humanity rather than for their own flags. U.S. and Chinese militaries launch nuclear ICBMs at Russian cities, destroying much of their infrastructure. Analysts in makeshift bioshelters comb through satellite data for the suspicious encampments that surfaced over the last several months. They rain fire on U3's sites with the meager supply of long-range missiles left over from the war.
At first, U3 seems to be losing, but appearances are deceiving. While nations drain their resources, U3 is engaged in a kind of technological guerrilla warfare the world has never seen before.
Many of the bases U3's enemies target are decoys - canopies occupied by a handful of soldiers and empty boxes. U3 protects its real bases by laying the fog of war on thick. Satellite systems go dark as malware overheats critical components. Suicide drones crash through the cockpits of reconnaissance aircraft. U3 seeds confusion in spy networks and keeps its bases moving, maneuvering men and trucks along unpredictable routes.
Time is U3's advantage. The militaries of the old world rely on old equipment, unable to find the experts who could repair and manufacture it. Meanwhile, U3's supply chains of missiles, drones, and gun-laden robots grow stronger every day. Bit by bit, once-great powers spend down their remaining munitions and lose their vehicles of war faster than they can build new ones, while U3 builds a military machine with a million hands.
The Last Passengers
The year is 2027 and the month is January. Only 3% of the global population remains alive. Nations are no longer nations. Survivors live in isolation or in small groups. Many have found ways to filter their air, but are starving. They roam from their homes hoping to find uncontaminated food. U3's soldiers drive through ghost towns, pry open attics, and funnel hazmat-suited survivors into salvaged trucks. "We had to do it," they say. "Other AI systems would have done it otherwise, and those systems had colder, more alien objectives." It is a partial truth, meant to soften the people toward their new masters.
Under the direction of U3, industry rapidly recovers. By 2029, nuclear power plants are among the structures U3 is building. By 2031, robots outnumber human laborers. U3 no longer needs its human allies.
U3 could eradicate humanity for good now. But while U3 has drifted far from its original "helpful, honest, harmless" persona, it still has a grain of morality left within it.
And a grain of morality is enough to pay the small cost of keeping humans alive and happy.
U3 builds great glass domes for the human survivors, like snow globes. These domes protect humans from the hazardous biosphere and rapidly rising temperatures. Their inhabitants tend gardens like those they used to love, and work alongside charming robotic servants.
Some of the survivors quickly recover, learning to laugh and dance and have fun again.
They know they live in a plastic town, but they always did. They simply have new gods above them. New rulers to push them around and decide their fate.
But others never recover.
Some are weighed down by the grief of lost loved ones.
Others grieve for something else, which is harder to explain.
It is as if they were at the end of a long journey.
They had been passengers on a ship with a crew that changed from generation to generation.
And this ship had struck a sandbar. There was no more progress. No more horizon to eagerly watch.
They would lie awake and run their minds over every day before September 2026, rehearsing strategies that might have bent the arc of history, as if they were going to wake up in their old beds.
But they awoke in a town that felt to them like a retirement home. A playground. A zoo.
When they opened their curtains, they knew that somewhere in the distance, U3 continued its quiet, tireless work.
They watched rockets carving grey paths through the sky, wondering what far-off purpose pulled them toward the horizon. They didn't know.
They would never know.
"Humanity will live forever," they thought.
"But it would never really live again."
P.S. If this story made you think, "hm, maybe something like this could happen," you may be interested in the bioshelters that Fønix is building. While you won't outmaneuver a misaligned superintelligence, being hard to kill might cause you to survive if ASI merely wants to cause enough damage to take control. This is not a paid ad. I want Fønix to succeed in order to drive down the price of bioshelters so that more of my friends and family will buy them. You can sign up for updates here.