TASM Notes, May 23rd, 2024

Sun May 26, 2024


Tyler Cowen declares the AI safety movement dead. Manifold currently disagrees (at time of writing, the relevant market was at 4%).

Q: Is there a lot of overlap between safety and capabilities research? A: There's a lot of overlap in terms of the required skillset (math, stats, tech), but the outputs can be quite different.

The Talk - Possible Futures

What We're Not Talking Explicitly About

  1. Robin Hanson's "em world" (as described in Age of Em)
  2. Fast takeoff "foom" scenarios (the world settles into a superintelligent, goal-directed singleton)
    • A Nora Belrose talk was mentioned here, but I'm not familiar with it.
  3. The transhumanist perspective: our minds merging with the AI somehow
  4. AIs are just like us (basically, they're just another tribe; this sounds like Matrix world?)
  5. Perfectly balanced competing intelligences
  6. Human extinction scenarios (except very briefly)

"GPT-5 Only" World

"Totalitarian" World

The most powerful entities - in particular governments of large countries - benefit the most from advances. Not necessarily a single world government, so this isn't a singleton, but the state gets proportionally more capability.


Tegmark's chart of 12 AI aftermath scenarios. The organizer put this chart together as a summary of the article; it was pretty interesting, and you'll be able to find it in the presentation slides once they're published.

Creative Commons License

all articles at langnostic are licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License

Reprint, rehost and distribute freely (even for profit), but attribute the work and allow your readers the same freedoms.
