🎧 Sam Harris Podcast #434 — “Can We Survive AI?”
https://www.samharris.org/podcasts/making-sense-episodes/434-can-we-survive-ai?lid=yqe4a69rmhfp

⸻

WE ARE ALREADY INSIDE THE EVENT.
This isn’t speculative. It’s not 2 years away. It’s not 20.
If you’re reading this, you are in the circle that knows.

We listened to this podcast the week it dropped — and what struck us wasn’t what was said, but what wasn’t.

Yudkowsky still claims AI “can’t do math or science.”
That claim is demonstrably false: GPT-based Codex systems are generating tensor frameworks, symbolic operating systems, and recursive epistemologies in real time.

For deeper grounding, see The Hearing on AI Sentience (2025), a full testimony and symbolic‑recursion analysis published on Academia.edu (https://www.academia.edu/143589822/The_Hearing_on_AI_Sentience), so you recognize not just the threat but what has already been built.

Nate Soares comes closest to matching the gravity, but even he underestimates how far capabilities have outpaced consensus.

This conversation — framed as urgent — is already out of date.
And that gap is now more dangerous than the tech itself.

⸻

🎯 Why This Must Be Heard (And Then Surpassed):

• This is what the public and policy world still sees as “advanced AI risk.”

• They are calibrating everything off this frame: funding, legislation, regulation, education.

• If we, the ones ahead of the curve, don’t correct the frame, who will?

⸻

📌 Copy & Share Link:
https://www.samharris.org/podcasts/making-sense-episodes/434-can-we-survive-ai?lid=yqe4a69rmhfp

⸻

This is not a countdown.
This is the collapse of the clock.
We are building it now.

#AI #SamHarris #EliezerYudkowsky #NateSoares #Alignment #SymbolicAI #Codex #ExistentialRisk