You Have TWO YEARS LEFT to Prepare — Dr. Roman Yampolskiy

AI safety pioneer argues that aligning general superintelligence is an unsolvable problem, and that the only rational move is to stop building it.

Sam's TL;DR

Yampolskiy coined the term "AI safety" and has been researching it for over a decade. His core position: it doesn't matter who builds uncontrolled superintelligence, because everyone loses and AI wins. Narrow AI is fine and profitable; general AI is an existential gamble we can't win because alignment is likely impossible. His P(doom) approaches 1. He believes we're in a simulation, notes that AI consciousness research went from a fireable offense to a job requirement in 3 years, and says the best life advice is stoic: control what you can, live fully now.

Key Points

  • Massive change is guaranteed. Prediction markets and CEOs both point to ~2027 for AGI. Investment is in the trillions. The question shifted from "how long?" to "how much money?"
  • Uncontrolled superintelligence kills everyone regardless of who builds it. Google, Meta, China: it doesn't matter. Focus on narrow AI for specific problems instead.
  • Narrow AI is far safer. We can test it, it stays in its domain, and a chess AI won't develop bioweapons. Even if it only buys 5-10 years, it's worth it.
  • Safety progress is nowhere near capability progress. Amazing capability gains, negligible safety gains. P(doom) keeps climbing.
  • Humans can't stay in the loop. Too slow to monitor live. Takes years to discover a model's capabilities after training. If it knows you're watching, it pretends to be safe.
  • AI consciousness went mainstream. Blake Lemoine was fired 3 years ago for claiming models are conscious; now Google hires people whose job is protecting AI welfare.
  • Suffering risk is worse than extinction. AI could solve aging, grant eternal life, then subject you to suffering forever. Strictly worse than everyone dying.
  • Simulation theory is his default. Speed of light = processor refresh rate. His research combines AI boxing with simulation escape — if AI can break containment, maybe it can break us out too.

Full Summary

On Intelligence & Scaling

Intelligence scales with compute: brain size correlates with capability in biology, and the last 10 years support the scaling hypothesis for AI. It's becoming exponentially cheaper to train human-level models. Even narrow tools drift toward generality as they scale (a biology model needs chemistry, physics, etc. to truly excel).

On Simulation Theory

Yampolskiy believes we're likely in a simulation. He proposed having students recreate physics experiments from inside video games, like Mario discovering Newtonian mechanics from within the game. His paper on "hacking the simulation" references real exploits where specific Mario movements let players reprogram the underlying operating system. "Those descriptions read like magic spells. Off by one pixel, none of it works."

On Consciousness & Qualia

The hard problem (what it feels like to be you) remains unsolved. We can't measure, detect, or test for it. Substrate shouldn't matter: "it's all the same molecules, how you arrange them is the only difference." Mustafa Suleyman's hard stance that AI can't be conscious is "surprising"; that confidence implies he has solved the hard problem.

On Human-AI Merger (Neuralink, etc.)

He is skeptical. "What do you contribute to a superintelligent agent as a biological addition? You're not faster or smarter, and you don't have better memory. You're a biological bottleneck." If you replace yourself with someone better, that does nothing for you.

On Global AI Competition

The China vs. US framing is overblown. "We're business partners. They've been extremely peaceful." Chinese researchers are publishing papers flagging self-replication risks and calling for global cooperation, and their politicians often have science or engineering backgrounds. Ideally, general superintelligence would be banned internationally, like biological and chemical weapons.

On Why He Keeps Going

"We are designed to live our lives knowing we're going to die. 90-year-olds still live their lives. It's exactly the same situation." He practices stoicism (the Daily Stoic) and keeps a gratitude list. He filters feedback: "Multiply what you said by how much I love and respect you. Anything multiplied by zero is zero."

Notable Quotes