‘Every step we take closer to very powerful AI, everybody’s character gets plus 10 crazy points.’ Sam Altman on OpenAI turmoil.


“Every step we take closer to very powerful AI, everybody’s character gets plus 10 crazy points”

— OpenAI CEO Sam Altman

This is what Sam Altman had to say about the stresses of working with artificial intelligence as he revealed his own thoughts on the dramatic shakeup of OpenAI’s executive board last November.

The OpenAI chief blamed the stresses of working with AI for heightened tensions within the San Francisco company he helped to found in 2015, as he argued the “high stakes” involved in developing artificial general intelligence (AGI) had driven people “crazy”.

He explained that working with AI is a “very stressful thing” due to the pressures involved, as the tech CEO said he now expects “more strange things” to start happening across the globe as the world gets “closer to very powerful AI.”

“As the world gets closer to AGI the stakes, the stress, the level of tension – that’s all going to go up,” Altman said during a discussion at the World Economic Forum in Davos. “For us, [the board shakeup] was a microcosm of it, but probably not the most stressful experience we ever faced.”

Microsoft (MSFT) is an investor in OpenAI, and at one point offered a job to Altman before blessing his reinstatement.

Altman said the lesson he has taken away from the shakeup, which saw him removed as OpenAI’s CEO on Nov. 17 and reinstated on Nov. 21, is the importance of being prepared, suggesting OpenAI had failed to deal with looming issues inside the company.

“You don’t want important but not urgent problems out there hanging. We had known our board had gotten too small and we knew that we didn’t have the level of experience we needed, but last year was such a wild year for us in so many ways that we sort of just neglected it,” he said.

“Having a higher level of preparation, more resilience, more time spent thinking about all the strange ways things can go wrong, that’s really important,” Altman added.

Speaking on a panel titled “Technology in a Turbulent World,” Altman also addressed OpenAI’s legal dispute with the New York Times (NYT), which saw the publication file a copyright lawsuit against the AI company in December over use of its articles in training ChatGPT.

Altman said he was “surprised” by the New York Times’ decision to sue OpenAI as he claimed the California company had previously been in “productive negotiations” with the publisher. “We wanted to pay them a lot of money,” he said.

The tech chief, however, sought to push back against claims that OpenAI is reliant on information gathered from the New York Times, as he instead claimed future AIs will be trained on smaller datasets obtained via deals with publishers.

“We are open to training on the New York Times but it’s not our priority. We actually don’t need to train on their data. I think that this is something people don’t understand,” Altman said.

“One thing that I expect to start changing is that these models will be able to take smaller amounts of higher-quality training data during their training process and think harder about it,” Altman added. “You don’t need to read 2,000 biology textbooks to understand high-school level biology.”

The OpenAI chief, however, acknowledged there is “a great need for new economic models” that would see those whose work is used to train AI models rewarded for their efforts. He explained that future models could also see AIs link to publishers’ own sites.

“OpenAI is acknowledging that they have trained their models on The Times’ copyrighted works in the past and admitting that they will continue to copy those works when they scrape the Internet to train models in the future,” The New York Times lead counsel Ian Crosby told MarketWatch.

“Free riding on The Times’ investment in quality journalism by copying it to build and operate substitutive products without permission is the opposite of fair use,” Crosby said.

Earlier in the week, Altman also addressed the possibility of Donald Trump winning another term as president in the upcoming U.S. elections scheduled for November this year, as he suggested the AI industry will be “fine” either way.

“I believe that America’s going to be fine no matter what happens in this election,” Altman said in an interview with Bloomberg. “I believe that AI is going to be fine no matter what happens in this election and we will have to work very hard to make that so.”

Altman, however, warned that those in power have failed to understand Trump’s appeal.

“It never occurred to us that what Trump is saying might be resonating with a lot of people,” Altman said. “I think there has been a real failure to learn lessons about what’s working for the citizens of America, and what’s not.”
