Confused about artificial general intelligence, or AGI? That's the thing OpenAI seems so obsessed with eventually building in a way that "benefits all of humanity." You might want to get up to speed, considering the company just raised $6.6 billion to get closer to the task.
But if you're still scratching your head, wondering what in the world AGI is anyway, you're certainly not alone.
Fei-Fei Li, one of the world's most renowned AI researchers and a frequent recipient of the title "godmother of AI," said at Credo AI's summit on responsible AI leadership on Thursday that she doesn't even know what AGI means. Elsewhere in her talk, Li spoke about her role in the birth of modern AI, how society should protect itself against advanced AI models, and why she thinks her new unicorn startup, World Labs, is going to change everything.
But when she was asked what she thought about an "AI singularity," Li was just as perplexed as the rest of us.
"I come from academic AI and have been educated in more rigorous and evidence-based methods, so I don't really know what all these words mean," said Li to a packed room in San Francisco, beside a big window overlooking the Golden Gate Bridge. "I frankly don't even know what AGI means. Like people say you know it when you see it, I guess I haven't seen it. The truth is, I don't spend much time thinking about these words because I think there's so many more important things to do…"
If anyone should know what AGI is, it's probably Fei-Fei Li. In 2006, she created ImageNet, the world's first large-scale AI training and benchmarking dataset, which proved critical in catalyzing the current AI boom. From 2017 to 2018, she served as chief scientist of AI/ML at Google Cloud. Today, Li leads the Stanford Human-Centered AI Institute (HAI), while her startup, World Labs, builds "large world models." (That term is nearly as confusing as AGI, if you ask me.)
OpenAI CEO Sam Altman took a stab at defining AGI in a profile by The New Yorker last year, calling it the "equivalent of a median human that you could hire as a coworker."
On the other hand, OpenAI's charter defines AGI as "highly autonomous systems that outperform humans at most economically valuable work."
Evidently, these definitions weren't quite good enough for a $157 billion company to work toward. So OpenAI created five levels it uses internally to gauge its progress toward AGI: first chatbots (like ChatGPT), then reasoners (apparently, OpenAI o1 was this level), agents (that's supposedly coming next), innovators (AI that can help invent things), and the last level, organizations (AI that can do the work of an entire organization).
Still confused? So am I, and so is Li. And naturally, the whole thing sounds like much more than what a median human coworker could do.
Earlier in the talk, Li said she has long been intrigued by the concept of intelligence, and that curiosity led her to study AI long before it was profitable to do so. In the early 2000s, Li says, she and a few others were quietly laying the foundation for the field.
"2012, my ImageNet combined with AlexNet and GPUs – many people call that the birth of modern AI. It was driven by three key ingredients: big data, neural networks, and modern GPU computing. And once that moment hit, I think life was never the same for the whole field of AI, as well as our world."
Asked about California's contentious AI bill, SB 1047, Li spoke gingerly so as not to reopen an argument the governor just laid to rest with a veto of the measure last week. (We talked recently to the author of SB 1047, and he was much more eager to reopen his argument with Li.)
"Some of you may know that I have been vocal about my concerns regarding this bill [SB 1047], which was vetoed, but now I am reflecting, and with a great deal of excitement, forward-looking, said Li. "I was truly flattered, or honored, that Governor Newsom invited me to participate in the next steps of post-SB 1047."
California's governor recently named Li, along with several other AI experts, to a task force that will help the state develop guardrails for deploying AI. Li said she is taking an evidence-based approach in the role and will fight hard for academic research and funding, but she also wants to make sure California doesn't punish technologists in the process.
"We need to really look at potential impact on humans and our communities rather than putting the burden on technology itself… It wouldn't make sense if we penalize a car engineer – say, Ford or GM – if a car is misused purposefully or through a series of accidents and harms a person. Just penalizing the car engineer will not make cars safer. What we need to do is innovate for safer measures, make the regulatory framework better – whether it's seatbelts or speed limits – and the same applies with AI."
That's one of the better arguments I've heard against SB 1047, which would have punished tech companies for dangerous AI models.
While she advises California on AI regulation, Li runs her startup, World Labs, out of downtown San Francisco. It's her first time founding a startup, and she is one of the few women leading an AI lab on the cutting edge.
"We're far from a very diverse AI ecosystem," says Li. "I do believe that diverse human intelligence will lead to diverse artificial intelligence, and will just give us better technology."
Over the next couple of years, Li is excited to bring "spatial intelligence" closer to reality. Human language, on which today's large language models are built, probably took a million years to develop, Li says, whereas vision and perception likely took 540 million years. That, she argues, makes creating large world models a far more complicated task.
"It's not only making computers see, but really making computer understand the whole 3D world, which I call spatial intelligence," said Li. "We're not just seeing to name things… We're really seeing to do things, to navigate the world, to interact with each other, and closing that gap between seeing and doing requires spatial knowledge. As a technologist, I'm very excited about that."