Humanity's alignment problem
NEW YORK: It’s lunchtime on top of the world again. Time’s annual “Person of the Year” issue has revived the iconic Depression-era photograph of steelworkers casually lunching on a beam suspended over Manhattan. With the city rising beneath them, the image portrays risk as normalised, even glamourised.
This time, though, the men at lunch aren’t anonymous construction workers. Instead, the digital painter Jason Seiler, commissioned by Time, has overlaid photos of the “architects of AI”: Nvidia’s Jensen Huang, OpenAI’s Sam Altman, xAI’s Elon Musk, Meta’s Mark Zuckerberg, Google DeepMind’s Demis Hassabis, Anthropic’s Dario Amodei, Stanford’s Fei-Fei Li, and Lisa Su of Advanced Micro Devices. The cover is clever and controversial, with the comedian Jimmy Kimmel describing it as the “eight dorks of the apocalypse”. And, by tracing the technological arc from steel skyscrapers to thinking machines, it may reveal more than its editors intended.
The original photograph, taken at the site of Rockefeller Center, captures a particular moment in modernity, encapsulating the belief that engineering prowess can ultimately outrun risk; that technological progress justifies whatever vertigo it creates; that someone, somewhere, has calculated the margins. What the photograph does not show is the scaffolding, the safety nets, or the institutions – from Franklin Delano Roosevelt’s New Deal programs to the Beveridge Report (which gave birth to Britain’s National Health Service) – that would allow such ventures to be sustained. Nor does it show the many workers who fell – many figuratively, some literally – before those protections existed.
At a time when AI regulation is being hotly debated, this omission hits close to home. Once again, we find ourselves on a narrow beam, this time suspended between the potential of AI and the realities of institutional fatigue and planetary warming. Yet, many have chosen to celebrate the builders while neglecting the more pertinent question: Who is governing the construction, and to what end?
Consider the contrast between Time’s celebration and the quiet failure of this year’s United Nations Climate Change Conference (COP30), which marked the tenth anniversary of the Paris Agreement. While new AI systems are being trained, deployed, and scaled in a matter of months, climate governance remains mired in procedural delays that have dragged on for decades. Yet the science has become clearer. The danger is no longer just incremental warming. Rather, we are approaching what Johan Rockström calls a Hothouse Earth scenario: tipping points (melting ice sheets, thawing permafrost, collapsing forests) unleashing feedback loops that push the planet into a far hotter, unstable state that defies meaningful human control.
This would be a failure not of intelligence, but of another element in Time’s cover-image metaphor: alignment. In architecture, alignment refers to whether a structure’s form, function, and values point in the same direction. A well-aligned building channels movement, distributes loads, and signals purpose coherently, whereas a misaligned one generates congestion, stress, and risks of collapse, even if it looks spectacular. Modern societies are increasingly misaligned in precisely this way. Our technologies scale faster than our institutions, and our technical capacities consistently outrun our consent.
This architectural theme is also fitting for a moment when many commentators – from The Atlantic’s Yoni Appelbaum to the New York Times’ Ezra Klein – are turning to building as a metaphor for “abundance”. The infrastructure to support AI is being built in its own hothouse of intense market competition, abundant capital, geopolitical rivalry, and a culture of speed. These forces are tightly aligned with one another, but disastrously misaligned with existing mechanisms to ensure democratic accountability, long-term risk management, and public legitimacy. Likewise, the institutions of climate governance are aligned to a slower, more forgiving world than the one Rockström has described.
These misalignments are compounded by opacity. We are increasingly at the mercy of black-box systems – whether algorithms shaping credit and information flows, or climate modelling that must be translated into diplomatic language – that confer power without explanation. When outcomes cannot be understood or contested, legitimacy erodes. Populism is thriving not because people are rejecting expertise, but because they are being governed by systems that demand blind trust.
The lunch-on-a-beam image captures this perfectly. The implication is that the math has already been done. The expectation is that we should admire the nerve of AI’s leaders without asking pesky questions about their blueprints. History offers a warning here. The culture of the Industrial Revolution also favoured builders over governance. Machines came before labour laws, and finance outran regulation.
But, as severe as the resulting disruptions were, the damage was ultimately containable. With AI- and climate-related risks, the margins for error are thinner, and the consequences would be more enduring. A scorched-earth approach of extracting value at maximum speed has become existentially reckless. The defining question of the 21st century is no longer whether humanity can build extraordinary systems. It is whether we can align them – technically, institutionally, and morally – before they slip out of our control.
The men on the beam survived because, eventually, their society built the invisible architecture (standards, safeguards, a culture of shared responsibility) to enable daring feats. Machismo alone would not have sufficed. Time has given us a powerful, if unsettling, image. It ties today’s giddy AI boom to capitalism’s most memorable bust, thus posing the implicit question: Will we build the institutions that are needed to make such heights survivable, or will we keep mistaking vertigo for visionary foresight?
Copyright: Project Syndicate, 2025.
www.project-syndicate.org