AI safety pioneer argues that general superintelligence poses an unsolvable alignment problem, and that the only rational move is to stop building it.
Yampolskiy coined the term "AI safety" and has been researching it for over a decade. His core position: it doesn't matter who builds uncontrolled superintelligence; everyone loses, AI wins. Narrow AI is fine and profitable. General AI is an existential gamble we can't win, because alignment is likely impossible. His p(doom) approaches 1. He believes we're likely in a simulation, notes that AI consciousness research went from a fireable offense to a job requirement in three years, and says the best life advice is stoic: control what you can, live fully now.
Intelligence scales with compute: brain size correlates with capability in biology, and the last ten years of AI progress support the scaling hypothesis. Training human-level models is getting exponentially cheaper. Even narrow tools drift toward generality as they scale (a biology model needs chemistry, physics, and more to truly excel).
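The scaling hypothesis is often summarized as a power law relating model loss to training compute. As a hedged illustration only, here is a minimal sketch of that shape; the constant `c_crit` and exponent `alpha` below are arbitrary placeholders, not fitted values from any real scaling-law study:

```python
# Illustrative power-law scaling curve: loss falls smoothly as compute grows.
# L(C) = (C_c / C) ** alpha  --  C_c ("c_crit") and alpha are made-up
# placeholders chosen for readability, not empirical constants.

def loss(compute: float, c_crit: float = 1e6, alpha: float = 0.05) -> float:
    """Predicted loss for a given training-compute budget (arbitrary units)."""
    return (c_crit / compute) ** alpha

# Each 10x increase in compute cuts loss by the same multiplicative factor,
# which is why capability keeps improving smoothly as budgets grow.
for c in (1e6, 1e7, 1e8, 1e9):
    print(f"compute={c:.0e}  loss={loss(c):.3f}")
```

The key qualitative point the sketch makes is the one in the text above: there is no natural plateau in a power law, so scaled-up "narrow" systems keep gaining capability.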
Yampolskiy believes we're likely in a simulation. He has proposed having students recreate physics experiments from inside video games, like Mario discovering Newtonian mechanics from within the game. His paper on "hacking the simulation" references real exploits in which specific sequences of Mario movements let players reprogram the game's underlying system: "Those descriptions read like magic spells. Off by one pixel, none of it works."
The hard problem of consciousness (what it feels like to be you) remains unsolved; we can't measure, detect, or test for it. Substrate shouldn't matter: "it's all the same molecules, how you arrange them is the only difference." Mustafa Suleyman's hard stance that AI can't be conscious is "surprising"; that level of confidence implies he has solved the hard problem.
On merging with AI, he is skeptical: "What do you contribute to a super intelligent agent as a biological addition? You're not faster, smarter, or have better memory. You're a biological bottleneck." And if you replace yourself with someone better, that does nothing for you.
The China-vs.-US framing is overblown: "We're business partners. They've been extremely peaceful." Chinese researchers are publishing papers flagging self-replication risks and calling for global cooperation, and their politicians often have science or engineering backgrounds. Ideally, general superintelligence would be banned internationally, like biological and chemical weapons.
On living with a high p(doom): "We are designed to live our lives knowing we're going to die. 90-year-olds still live their lives. It's exactly the same situation." He practices stoicism (Daily Stoic) and keeps a gratitude list. On filtering feedback: "Multiply what you said by how much I love and respect you. Anything multiplied by zero is zero."