This Week in AI: OpenAI is Feeling the Strain.

First up: some notes from OpenAI's DevDay.

The keynote yesterday morning in San Francisco was notably subdued, especially compared to last year's rah-rah, hypebeast-y address from CEO Sam Altman. This DevDay, Altman didn't burst onstage to pitch shiny new projects. He didn't make an appearance at all; instead, head of platform product Olivier Godement emceed.

On the agenda for this first of several OpenAI DevDays (the next is in London this month, followed by the last in Singapore in November) were quality-of-life improvements. OpenAI released a real-time voice API, as well as vision fine-tuning, which lets developers use images to fine-tune its GPT-4o model. The company also rolled out model distillation, which takes a large AI model like GPT-4o and uses it to fine-tune a smaller one.
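To make the distillation idea concrete, here's a minimal sketch of the flow in Python: collect a large "teacher" model's answers, then fine-tune a smaller "student" model on them. It assumes the openai SDK; the prompts, file names, and model snapshot names are illustrative placeholders, not details from OpenAI's announcement.

```python
# Sketch of model distillation: fine-tune a small "student" model on a
# large "teacher" model's outputs. Assumes the `openai` Python SDK and an
# OPENAI_API_KEY in the environment; model names here are assumptions.
import json
from openai import OpenAI

client = OpenAI()

prompts = [
    "Summarize the plot of Hamlet in one sentence.",
    "Explain what a REST API is to a beginner.",
]

# 1) Collect the teacher model's answers.
examples = []
for prompt in prompts:
    resp = client.chat.completions.create(
        model="gpt-4o",  # the large "teacher" model
        messages=[{"role": "user", "content": prompt}],
    )
    examples.append({"messages": [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": resp.choices[0].message.content},
    ]})

# 2) Write the examples as JSONL and upload them as training data.
with open("distill.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
upload = client.files.create(file=open("distill.jsonl", "rb"),
                             purpose="fine-tune")

# 3) Fine-tune the smaller "student" model on the teacher's outputs.
job = client.fine_tuning.jobs.create(
    training_file=upload.id,
    model="gpt-4o-mini-2024-07-18",  # assumed fine-tunable student snapshot
)
print(job.id)
```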

It just wasn't much of a show. To be fair, OpenAI had set the expectations bar fairly low earlier this summer, when the company said DevDay would be about educating developers rather than pitching products. Still, what wasn't discussed during the tightly packed, 60-minute Tuesday keynote raised legitimate questions about how far OpenAI's many, many AI projects have progressed.

We didn't get any word on what might replace OpenAI's nearly year-old image generator, DALL-E 3, nor any update on the limited preview of Voice Engine, the company's voice-cloning tool. There's still no launch timeline for Sora, the video generator the company has been teasing, and it's mum's the word on Media Manager, the app OpenAI says it's developing to let creators control how their content is used in model training.

When contacted for comment, an OpenAI spokesperson told TechCrunch that the company is "slowly rolling out the [Voice Engine] preview to more trusted partners" and that Media Manager is "still in development."

But it's pretty clear by now that OpenAI is stretched to its limits, and has been for quite some time.

The Wall Street Journal recently reported that the company's team working on GPT-4o was given just nine days to conduct safety assessments. And according to Fortune, many OpenAI staffers believed that o1, the company's first "reasoning" model, wasn't ready to be shown.

As it speeds toward a funding round that could bring in up to $6.5 billion, OpenAI has its fingers in many underbaked pies. DALL-E 3 trails image generators like Flux on most qualitative tests; Sora is reportedly so slow to generate footage that OpenAI is revamping the model; and OpenAI continues to delay the rollout of the revenue-sharing program for its bot marketplace, the GPT Store, which it originally pegged for the first quarter of this year.

Nor am I surprised to see OpenAI plagued by staff burnout and executive departures. When you try to be a jack of all trades, you end up a master of none, pleasing nobody.

News
AI bill vetoed: California governor Gavin Newsom vetoed SB 1047, one of the state's most high-profile bills regulating the development of AI in the state. In a statement, Newsom referred to the legislation as "well-intentioned" but "[not] the best approach" to protect the public from the dangers of AI.

AI bills passed: Newsom did sign other AI regulations into law – including bills dealing with AI training data disclosures, deepfake nudes, and more.

Y Combinator slammed for funding AI startup, PearAI: The startup accelerator is in hot water for backing PearAI, whose founders admitted to essentially copying an open source project called Continue.

Copilot gets overhauled: On Tuesday, Microsoft's AI-powered Copilot assistant got an overhaul. Among its new tricks, it can read what's on your screen, think deeply, and speak to you aloud.

OpenAI co-founder joins Anthropic: Durk Kingma, one of OpenAI's lesser-known co-founders, announced this week that he's joining Anthropic. It isn't known what he'll be working on there.

Training AI on photos of customers: Meta's Ray-Ban smart glasses have a front-facing camera that enables all kinds of AR features. But the camera could turn out to be a privacy problem: the company won't say whether it plans to train models on images from users.

Raspberry Pi's AI camera: Raspberry Pi, the company that sells tiny, cheap single-board computers, has released the Raspberry Pi AI Camera, an add-on with onboard AI processing.

Research paper of the week
AI coding platforms have snapped up millions of users and raised hundreds of millions of dollars from VCs. But are they delivering on their promises to boost productivity?

Maybe not, at least according to a new analysis from engineering analytics firm Uplevel. Uplevel compared data from about 800 of its developer customers, some of whom said they were using GitHub's AI coding tool, Copilot, and some of whom said they weren't. Devs who leaned on Copilot introduced 41% more bugs and were no less likely to burn out than those who didn't, according to the analysis.

Bugs aren't the only concern with AI-assisted coding, either; there are security issues, plus copyright and privacy ones. Yet none of this has dampened developers' enthusiasm for AI-based assistive coding tools. Most respondents to GitHub's latest poll said they've embraced AI tools in some form. Enterprises are bullish, too: Microsoft said in April that Copilot had over 50,000 enterprise customers.

Model of the week
MIT spinoff Liquid AI announced its first series of generative AI models this week: Liquid Foundation Models, or LFMs for short.

"So what?" you might ask. Models are a commodity-more's the point, new ones are introduced almost every day. Well, LFMs employ a novel model architecture and notch competitive scores on a range of industry benchmarks.

Most models are based on what's known as the transformer, an architecture a team of Google researchers introduced in 2017 that has since become the dominant generative AI architecture by far. Transformers underpin Sora and the latest version of Stable Diffusion, as well as text-generating models such as Anthropic's Claude and Google's Gemini.

But there's a catch with transformers: they're not especially efficient at processing vast amounts of data, in part because the state they keep around while generating grows with the length of the input.
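For a sense of the scale involved, here's a back-of-the-envelope sketch of the key/value cache a transformer accumulates while generating, which grows linearly with input length. The layer, head, and dimension counts are assumed, Llama-style figures for a mid-sized model, not any specific model's configuration.

```python
# Rough estimate of transformer KV-cache memory at inference time.
# All architecture numbers below are illustrative assumptions.

def kv_cache_bytes(seq_len: int,
                   n_layers: int = 32,      # decoder layers
                   n_kv_heads: int = 8,     # grouped-query KV heads
                   head_dim: int = 128,     # dimension per head
                   bytes_per_elem: int = 2  # fp16/bf16
                   ) -> int:
    # A key and a value vector are cached per layer, per head, per token.
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * seq_len

for tokens in (4_096, 32_768, 131_072):
    gib = kv_cache_bytes(tokens) / 2**30
    print(f"{tokens:>7} tokens -> ~{gib:.1f} GiB of KV cache")
```

Under these assumptions, a 128K-token input needs roughly 16 GiB for the cache alone, before counting the model weights.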

Liquid claims its LFMs have a smaller memory footprint than transformer architectures, so they can handle much larger quantities of data on the same hardware. "By efficiently compressing inputs, LFMs can process longer sequences," the company said in a blog post.

Liquid's LFMs are available on a number of cloud platforms, and the team is continuing to refine the architecture with further releases.

Grab bag
Blink and you might have missed it: this week, an AI company filed to go public.

The San Francisco-based startup, called Cerebras, makes hardware aimed at running and training AI models. Its business directly competes with Nvidia.

So how does Cerebras plan to compete with the chip giant, which as of July commanded an estimated 70% to 95% of the AI chip segment? On performance, says Cerebras. The company claims its flagship AI chip, which it both sells directly and offers as a service through its cloud, can outcompete Nvidia's hardware.

But Cerebras has yet to turn that claimed performance advantage into profits. The company posted a net loss of $66.6 million in the first half of 2024, according to SEC filings. And for last year, Cerebras reported a net loss of $127.2 million on revenue of $78.7 million.

Cerebras is reportedly looking to raise up to $1 billion in the IPO, according to Bloomberg. The company has raised $715 million in venture capital so far and was valued at more than $4 billion three years ago.
