The joke on the internet asks: “What are the seven most terrifying words in the English language?” The answer: “Ronan Farrow’s been asking questions about you.”
The investigative journalist has a piece in The New Yorker this week in which the subject of those inquiries is Sam Altman, the billionaire co-founder and CEO of OpenAI, the company behind ChatGPT.
Farrow’s new piece raises timely, broader questions about who has power, who should have it, who absolutely shouldn’t … and what we do if they have it, anyway.
OpenAI’s products now reach into everything, from your smartphone to defence contracts to law enforcement. Its operations have a growing hunger for electric power; its datacentres are spreading across the planet; and the labour market implications of its potential to replace jobs suggest an industrial upheaval for white-collar workers on a world-changing scale.
The commercial momentum of this company is such that, despite a projected loss of $14bn in 2026 reported in early March – tripling estimates made in 2025 – OpenAI still held an eye-watering market valuation of $852bn by March’s end.
Farrow’s piece claims the OpenAI board had doubts about whether they could trust Altman when they fired him in 2023.
According to Farrow, Altman then convened a “war room” of crisis communicators, along with some influential company investors, to defend his reputation. He was reinstated five days later; reportedly, pressure from investor Microsoft and a threat from 700 staff to join any competing Altman venture proved critically persuasive in discussions.
Three years later, the company, led by a CEO its own board allegedly did not trust, has publicly concluded a deal with the US military to use its technology in classified operations.
The deal was announced in the wake of its AI rival, Anthropic, expressing concern that the US government could, potentially, employ its own proprietary AI tools as instruments of “mass surveillance” and for “fully autonomous weapons”.
The Trump administration emphatically ceased business with Anthropic, and OpenAI leapt in.
Facing a backlash, Altman described the original deal OpenAI concluded with Pete Hegseth’s department as “opportunistic and sloppy”. The company subsequently released a statement reassuring the public its Pentagon agreement had “more guardrails than any previous agreement for classified AI deployments, including Anthropic’s”.
By OpenAI’s own account, the company believes “strongly in democracy” and holds that the “only good path forward requires deep collaboration between AI efforts and the democratic process”.
How perplexing! As Jake Laperruque from Tech Policy observes, OpenAI’s cited “red lines” against mass domestic surveillance, direct autonomous weapons systems and high-stakes automated decisions seem to be largely indistinguishable from those “that caused the planned Anthropic agreement not only to fail, but to explode in shocking fashion”.
I’m also curious about the company’s interpretation of “deep collaboration” with the democratic process. Perhaps some light is shed by the revelation in January that OpenAI top executive Greg Brockman had given $25m to a Trump fundraising vehicle.
Brockman is also a participant in an AI “SuperPAC” fundraising vehicle that in 2025 raised $125m to further its goal of backing candidates who support national AI regulations rather than state-by-state rules.
In December last year, Trump signed an executive order limiting state regulations of AI, preferring a “minimally burdensome national standard” to regulate technology.
I’m sure it’s just a coincidence. So are they all, all honourable men.
And yet, somehow, concerns do nag about the character of decision-making processes regarding a technology that OpenAI’s own staff researchers believe is a “threat to humanity”.
Ethical anxiety has inspired activist/historian Rutger Bregman to start a “QuitGPT” campaign for a worldwide boycott of Altman’s company. Meanwhile, questions remain over the role of AI tools such as Palantir’s Maven in US strikes on Iran, including the bombing of a girls’ school in Minab.
The rubble of that school is the grotesque terrain on which the debate over who gets entrusted with power over tools that could kill us all must be had, because AI is just one of the mechanisms of our own mass annihilation now proliferating.
Those who gain power over these may be good people, bad people, misunderstood people or the overwhelmingly more common mixture of every kind of person on any given day. Whether their talents are for computer programming or demagoguery, every social organisation, from the local tech startup to the collective representatives of nation-states, has to affirm meaningful social, political, legal and economic guardrails that channel their available options away from human fallibility and collectively minimise the harm they can do.
Dear god, haven’t we learned by now that self-regulated enterprises do not regulate in the interest of anyone or anything beyond their commercial or political self-interest? Sanctions, recalls, suspensions and multiple supervisory stakeholders with the authority to enforce these are what keeps us alive.
The moment demands a global and unified willingness to regulate the complex risks posed. It’s a problem we cannot outsource to Farrow or AI. Our shared fates depend on sitting down with one another and all our human fallibilities, and working it out for ourselves.
Van Badham is a Guardian Australia columnist


















































