theguardian.com
OpenAI’s Sam Altman is becoming one of the most powerful people on Earth. We should be very afraid
On 16 May 2023, Sam Altman, OpenAI’s charming, softly spoken, eternally optimistic billionaire CEO, and I stood in front of the US Senate judiciary subcommittee meeting on AI oversight. We were in Washington DC, and it was at the height of AI mania. Altman, then 38, was the poster boy for it all.
Timeline
From Loopt to OpenAI: Sam Altman's career in brief
April 1985
Born in Chicago. His parents are a dermatologist and real-estate broker. He is the oldest of four children. Becomes interested in computers after acquiring an Apple Macintosh at the age of eight. Studies computer science at Stanford University but drops out after two years to found a social networking app called Loopt.
March 2012
Loopt isn’t terribly popular but is acquired by a US fintech company for nearly $45m. Altman promptly sets up a venture capital company, Hydrazine Capital, with his brother Jack. According to the 2024 Bloomberg Billionaires index, the majority of Altman’s estimated net worth of $2bn derives from Hydrazine.
February 2014
Is promoted from partner to president of startup incubator Y Combinator, which holds investments in Airbnb, Dropbox, Stripe and many others. (Currently Y Combinator agrees to invest half a million dollars in a startup for a 7% stake – which can increase rapidly if the company reaches the feted $1bn “unicorn” status.)
December 2015
Founds OpenAI as a nonprofit organisation to develop AI “for the benefit of humanity”.
March 2019
Leaves Y Combinator when he is asked to choose between the incubator and his CEO role at OpenAI – which had raised $1bn in 2015 from Altman, Elon Musk, Peter Thiel, Y Combinator, Microsoft and Amazon, among others. Microsoft invests another $1bn.
November 2022
OpenAI launches ChatGPT, a chatbot based on large language models (LLMs) that users can ask to summarise long texts, write computer code, hold human-like conversations, write song lyrics, generate ideas and perform many other tasks. It takes ChatGPT five days to reach 1 million users (it took Facebook 10 months).
May 2023
Altman embarks on a global tour, meeting leaders such as Rishi Sunak, Emmanuel Macron and Narendra Modi to talk about the pros and cons of AI – the economic opportunities and the societal risks. Appears at the US Senate hearing about AI safety.
November 2023
The OpenAI board removes Altman and fellow founder Greg Brockman from the board because Altman “was not consistently candid in his communications”. Three days later, after threats from OpenAI employees to resign and pressure from Microsoft, he is reinstated.
January 2024
Marries engineer Oliver Mulherin at their estate in Hawaii. They live in San Francisco and spend weekends in the Napa wine region. Altman is a prepper, in 2016 telling the New Yorker: “I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israel Defense Forces, and a big patch of land in Big Sur I can fly to.”
February 2024
OpenAI co-founder Elon Musk sues the company for abandoning its original nonprofit mission and reserving its most advanced technology for paying clients. The company pushes back, publishing emails from Musk where he suggests Tesla should acquire OpenAI and acknowledges the company needs to make vast sums of money to finance its ambitions. In June, Musk drops the lawsuit.
Raised in St Louis, Missouri, Altman was the Stanford dropout who had become the president of the massively successful Y Combinator startup incubator before he was 30. A few months before the hearing, his company’s product ChatGPT had taken the world by storm. All through the summer of 2023, Altman was treated like a Beatle, stopping by DC as part of a world tour, meeting prime ministers and presidents around the globe. US Senator Kyrsten Sinema gushed: “I’ve never met anyone as smart as Sam… He’s an introvert and shy and humble… But… very good at forming relationships with people on the Hill and… can help folks in government understand AI.” Glowing portraits at the time painted the youthful Altman as sincere, talented, rich and interested in nothing more than fostering humanity. His frequent suggestions that AI could transform the global economy had world leaders salivating.
Senator Richard Blumenthal had called the two of us (and IBM’s Christina Montgomery) to Washington to discuss what should be done about AI, a “dual-use” technology that held tremendous promise, but also had the potential to cause tremendous harm – from tsunamis of misinformation to enabling the proliferation of new bioweapons. The agenda was AI policy and regulation. We swore to tell the whole truth, and nothing but the truth.
Altman was representing one of the leading AI companies; I was there as a scientist and author, well known for my scepticism about many things AI-related. I found Altman surprisingly engaging. There were moments when he ducked questions (most notably Blumenthal’s “What are you most worried about?”, which I pushed Altman to answer with more candour), but on the whole he seemed genuine, and I recall saying as much to the senators at the time. We both came out strongly for AI regulation. Little by little, though, I realised that I, the Senate, and ultimately the American people, had probably been played.
In truth, I had always had some misgivings about OpenAI. The company’s press campaigns, for example, were often over the top and even misleading, such as their fancy demo of a robot “solving” a Rubik’s Cube that turned out to have special sensors inside. It received tons of press, but it ultimately went nowhere.
For years, the name OpenAI – which implied a kind of openness about the science behind what the company was doing – had felt like a lie, since in reality it has become less and less transparent over time. The company’s frequent hints that AGI (artificial general intelligence, AI that can at least match the cognitive abilities of any human) was just around the corner always felt to me like unwarranted hype. But in person, Altman dazzled; I wondered whether I had been too hard on him previously. In hindsight, I had been too soft.
I started to reconsider after someone sent me a tip, about something small but telling. At the Senate, Altman painted himself as far more altruistic than he really was. Senator John Kennedy had asked: “OK. You make a lot of money. Do you?” Altman responded: “I make no… I get paid enough for health insurance. I have no equity in OpenAI,” elaborating that: “I’m doing this ’cause I love it.” The senators ate it up.
Altman wasn’t telling the full truth. He didn’t own any stock in OpenAI, but he did own stock in Y Combinator, and Y Combinator owned stock in OpenAI. Which meant that Sam had an indirect stake in OpenAI, a fact acknowledged on OpenAI’s website. If that indirect stake were worth just 0.1% of the company’s value, which seems plausible, it would be worth nearly $100m.
That omission was a warning sign. When the topic came up again, he could have corrected the record, but he didn’t. People loved his selfless myth. (He doubled down, telling Fortune that he didn’t need equity in OpenAI because he had “enough money”.) Not long after that, I discovered OpenAI had made a deal with a chip company that Altman owned a piece of. The selfless act started to ring hollow.
The discussion about money wasn’t, in hindsight, the only thing from our time in the Senate that didn’t feel entirely candid. Far more important was OpenAI’s stance on regulation around AI. Publicly, Altman told the Senate he supported it. The reality is far more complicated.
On the one hand, maybe a tiny part of Altman genuinely does want AI regulation. He is fond of paraphrasing Oppenheimer (and is well aware that he shares a birthday with the leader of the Manhattan Project), and recognises that, like nuclear weaponry, AI poses serious risks to humanity. In his own words, spoken at the Senate (albeit after a bit of prompting from me): “Look, we have tried to be very clear about the magnitude of the risks here… My worst fears are that we cause significant – we, the field, the technology, the industry – cause significant harm to the world.”
Presumably Altman doesn’t want to live in regret and infamy. But behind closed doors, his lobbyists keep pushing for weaker regulation, or none at all. A month after the Senate hearing, it came out that OpenAI was working to water down the EU’s AI act. By the time he was fired by OpenAI in November 2023 for being “not consistently candid” with its board, I wasn’t all that surprised.
At the time, few people supported the board’s decision to fire Altman. A huge number of supporters came to his aid; many treated him like a saint. The well-known journalist Kara Swisher (known to be quite friendly with Altman) blocked me on Twitter for merely suggesting that the board might have a point. Altman played the media well. Five days later he was reinstated, with the help of OpenAI’s major investor, Microsoft, and a petition supporting Altman from employees.
But a lot has changed since. In recent months, concerns about Altman’s candour have gone from heretical to fashionable. Journalist Edward Zitron wrote that Altman was “a false prophet – a seedy grifter that uses his remarkable ability to impress and manipulate Silicon Valley’s elite”. Ellen Huet of Bloomberg News, on the podcast Foundering, reached the conclusion that “when [Altman] says something, you cannot be sure that he actually means it”. Paris Marx has warned of “Sam Altman’s self-serving vision”. AI pioneer Geoffrey Hinton recently questioned Altman’s motives. I myself wrote an essay called the Sam Altman Playbook, dissecting how he had managed to fool so many people for so long, with a mixture of hype and apparent humility.
Many things have led to this collapse in faith. For some, the trigger moment was Altman’s interactions earlier this year with Scarlett Johansson, who explicitly asked him not to make a chatbot with her voice. Altman proceeded to use a different voice actor, but one whose voice was strikingly similar to Johansson’s, and tweeted “Her” (a reference to a film in which Johansson supplied the voice of an AI). Johansson was livid. And the ScarJo fiasco was emblematic of a larger issue: big companies such as OpenAI insist their models won’t work unless they are trained on all the world’s intellectual property, but the companies have given little or no compensation to many of the artists, writers and others who created it. Actor Justine Bateman described it as “the largest theft in the [history of the] United States, period”.
Meanwhile, OpenAI has long paid lip service to the value of developing measures for AI safety, but several key safety-related staff recently departed, claiming that promises had not been kept. Former OpenAI safety researcher Jan Leike said the company prioritised shiny things over safety, as did another recently departed employee, William Saunders. Co-founder Ilya Sutskever departed and called his new venture Safe Superintelligence, while former OpenAI employee Daniel Kokotajlo, too, has warned that promises around safety were being disregarded. As bad as social media has been for society, errant AI, which OpenAI could accidentally develop, could (as Altman himself notes) be far worse.