Tony Fadell critiques Sam Altman during an interview.

In a spirited interview in San Francisco on Tuesday, iPod creator, Nest Labs founder, and investor Tony Fadell took a shot at OpenAI CEO Sam Altman. He said, "I've been doing AI for 15 years, people, I'm not just spouting sh** — I'm not Sam Altman, okay?" referring to his familiarity with the longer history of AI development before the LLM craze and to the serious issues with LLM hallucinations.

The comment elicited shocked "oohs" from the audience, along with a few scattered handfuls of applause.

Fadell had been on a roll in his interview, having discussed topics ranging from what kind of "a**holes" can produce great products to what's wrong with today's LLMs.

Though admitting that LLMs are "great for certain things," he said that there were serious concerns yet to be addressed.

"LLMs are trying to be this 'general' thing because we're trying to make science fiction happen," he said. "[LLMs are] a know-it-all…I hate know-it-alls."

Instead, Fadell recommended AI agents built for specific tasks, with more transparency about their errors and hallucinations. That way, users would know an agent's limitations up front and could "hire" it with a clear idea of what it would do for them.

"I'm hiring them to…educate me, I'm hiring them to be a co-pilot with me, or I'm hiring them to replace me," he explained. "I want to know what this thing is." He added that governments should get involved to force such transparency.

Otherwise, he noted, companies using AI would be putting their reputations on the line for "some bullshit technology."

"Right now we're all adopting this thing and we don't know what problems it causes," Fadell pointed out. He also cited a recent report that found hallucinations in 90% of the patient reports doctors created using ChatGPT. "Those could kill people," he continued. "We are using this stuff and we don't even know how it works."

(Fadell appeared to be referring to a recent report in which University of Michigan researchers found an alarming number of hallucinations in AI-generated transcriptions, which could be dangerous in medical settings.)

The comment about Altman came as Fadell told the crowd that he has been working with AI technology for years. Nest, for example, used AI back in 2011 to power its thermostat.

"We couldn't talk about AI; we couldn't talk about machine learning," Fadell noted, "because people would get scared as sh**: 'I don't want AI in my house.' Now everybody wants AI everywhere."

Blog | 2024-10-30 17:01:51