June 2025 – AI 2027

This month I want to highlight an article, AI 2027, by Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean. Full disclosure: this article is a work of fiction (a first-time departure from my typical recommendations), though I would more accurately label it informed fiction, since it leverages trend extrapolation, expert feedback, and the authors’ personal OpenAI experience. The article looks at the current state of AI – along with AI’s rate of change – and builds a prediction of how AI will affect the world over the coming years. It is a very long read, but thought-provoking and well worth your time.

Among all the predictions in the article, the thing I found most fascinating is a thread woven throughout the narrative: tribalism. Tribalism is not the point of the article, yet it underpins our inability to slow down when things go wrong and take measured steps forward. Tribalism will be the reason an AI arms race starts and cannot stop. The idea that we cannot stop an arms race because we cannot trust other countries is self-evident given human history, yet it is also depressing that we still have not collectively overcome that limitation.

In my experience, tribalism is the root cause of a wide variety of personal, professional, and societal problems: my friend group vs. your friend group in school, my project vs. your project at work, my opinion vs. your opinion on the internet. We are predisposed to an us-vs.-them mentality, and thus we miss out on the beauty and benefits of challenging our ideas, seeking better options, and creating unifying moments through common explorations. Tribalism is innate in all of us, yet it can be overcome in small moments with intentionality and a focus on objectivity. I am afraid, though, that we may never overcome the level of tribalism that leads to international arms races, because it is not clear that we are capable of transcending human nature at that scale.

Lastly, I recognize that this post is yet another AI article discussing problematic aspects of the technology. One could browse several of my prior monthly articles and rationally conclude that I am an AI naysayer. I am not. I love exploring around the edges of topics, and that exploration includes questioning assumptions and posing alternatives. While this article paints a grim picture of humanity’s AI-laden future, I am nonetheless on board with the benefits and seek to better understand how to navigate that future effectively.

One passage captures the dynamic well: “The White House is in a difficult position. They understand the national security implications of AI. But they also understand that it is deeply unpopular with the public. They have to continue developing more capable AI, in their eyes, or they will catastrophically lose to China.”

