Ethereum co-founder Vitalik Buterin said the crypto community had reservations about Sam Bankman-Fried and FTX from the beginning, despite Bankman-Fried's high profile in the mainstream media as an industry leader.
"I think a lot of people have this misconception that everybody deeply respected Sam and that he caught the entire ecosystem by surprise," Buterin said in a podcast interview with Sriram Krishnan and Aarthi Ramamurthy. “I think it is true that nobody expected a literal $8 billion blow up, but if you're looking at Ethereum influencers like Anthony Sassano, a lot of them disrespected him and FTX from the beginning."
Krishnan and Ramamurthy interview founders, CEOs, and filmmakers on their eponymous podcast, which featured Buterin last month.
Buterin went on to explain that many in the crypto ecosystem were suspicious of Bankman-Fried because he seemed unable to articulate a coherent vision for why cryptocurrency technology was valuable.
"He was just not able to articulate a vision of why crypto was good—he just clearly saw it as purely a business opportunity," Buterin recalled. "It's like, 'Oh, hey, crypto is this thing where you can make money.'"
Buterin contrasted Bankman-Fried's outlook with the cypherpunk values and decentralization goals that initially animated Bitcoin, Ethereum, and other blockchain projects, saying the disgraced entrepreneur was just "regurgitating other people's perspectives of 'disintermediation is good, creating more open markets is good'—things that have been said by influencers for years."
"He just, I think, never really struck the community as a person who deeply believed it," he said. "That more than anything else might really be the cause of the mistrust that existed already."
The Ethereum founder also touched on the rapid progress of artificial intelligence systems like ChatGPT, expressing optimism about their potential to augment human creativity rather than fully replace human jobs and talents.
"I think one of the positive aspects of this is that I think it's a good example of how, instead of AI killing 30% of the jobs, which would be catastrophic and terrible, it's like, AI is killing 30% of your job, which is like actually an amazing time saver," Buterin said.
He acknowledged that more jobs will be lost as AI approaches full human capability, but said that, for now, AI is something to be embraced.
"At this stage one, the last section of the sprint's to human level AI, that aspect of things is interesting—it's empowering people with more than replacing people, at least so far," Buterin said.
He also suggested that powerful AI tools like image generators could help individual creators make films and other artistic works without the need for an expensive Hollywood-style production.
"The thing happening that we don't want to see is artists getting replaced; the thing that I wants to see is an author, instead of just being able to write a novel, also being able to personally make a movie," Buterin said. "I would want to see the cost of making a movie go down from $100,000 to one person with basically just his creativity and a couple of months with an AI platform."
In his view, disrupting Hollywood would be a good thing.
"We get away from all of these horrible remixes, like, Marvel fights King Kong with a touch of Star Trek versus Star Wars on the side, " he said. "Get to actual real stories reflecting different people's values."
"That enhancement of existing individual creativity, that excites me," Buterin told his hosts.
Indeed, the Ethereum founder positioned AI as one of the most significant developments in all of history.
"This human to superhuman AI transition, that level of transition, I would argue has only happened basically three or four times in the history of the Earth," he said.
But Buterin also acknowledged the need for more research and potentially some limited regulation around advanced AI to address possible risks down the line. He emphasized that any rules should be narrowly targeted rather than sweeping bans that would hinder innovation.
"I see these really compelling arguments from the AI risk people. But then, like, we have a yet a long, multi 100 year history of people predicting all kinds of really awful consequences to the next wave of technology. What's happened, over and over again, for centuries, is we adapt."
Editor’s note: This article was written with the assistance of AI. Edited and fact-checked by Ryan Ozawa.