Don’t Die: AI Alignment, Part 1
In this pivotal episode of Significant, Dr. David Filippi confronts the existential stakes of AI alignment with his signature blend of scientific rigor, philosophical depth, and personal insight. Drawing on everything from the Chicxulub impact to the fragile humanity of a child’s laughter, he asks the most urgent question of our time: What if we build a superintelligence that doesn’t love our children? This is the hinge point of the season, a direct call to those at the cutting edge of AI development to rethink the goals, values, and moral frameworks that will shape our shared future. This isn’t just another AI episode. It’s the reason this podcast exists. Don’t miss it.
Supplemental reading follows: