This Week in AI: Anthropic’s CEO Discusses Scaling AI, and Google Launches Flood Prediction Initiative

On Monday, Anthropic CEO Dario Amodei filled five hours of a podcast interview with AI influencer Lex Fridman, discussing everything from timelines for superintelligence to progress on Anthropic's next flagship tech. Just for you, we've culled the salient points below to save you from sitting through the whole thing.

Despite such evidence, Amodei still believes that scaling up models is a viable path to more capable AI. In his telling, scaling up means increasing not only the compute used to train models but also model sizes and the sizes of their training sets.

"The scaling is probably going to continue, and there's some magic to it that we haven't really explained on theoretical grounds yet," Amodei said.

Amodei does not think a lack of data will be a problem for AI development, at least not in the way many people have supposed. AI developers will "get around" data limitations, he says, either by generating synthetic data or by extrapolating from existing data. (Whether the issues associated with synthetic data can actually be solved is a question for another day.)

Amodei does concede, however, that compute for AI is likely to become more expensive in the near term, partly because of scale. He expects that companies will spend billions of dollars on clusters to train models next year, and hundreds of billions by 2027. Indeed, OpenAI is reportedly planning a $100 billion data center.

And Amodei was candid about just how untrustworthy even the best models are by their nature.

"It's just very hard to control the behavior of a model — to steer the behavior of a model in all circumstances at once," he said. "There's this 'whack-a-mole' aspect, where you push on one thing and these other things start to move as well, that you may not even notice or measure."

Still, Amodei thinks that Anthropic, or someone else, will have built a "superintelligent" AI by 2026 or 2027, one that surpasses "human-level" performance on many tasks. And he is worried about what happens next.

"We are rapidly running out of truly convincing blockers, truly compelling reasons why this will not happen in the next few years," he said. "I worry about economics and the concentration of power. That's actually what I worry about more — the abuse of power."

Good thing, then, that he's in a position to do something about it.

News
An AI news app: Particle, an AI-powered newsreader launched by former Twitter engineers, aims to help readers better understand the news.

Writer raises: Writer has raised $200 million at a $1.9 billion valuation to expand its enterprise-focused generative AI platform.

Build on Trainium: Amazon Web Services (AWS) today launched Build on Trainium, a new program providing $110 million to institutions, researchers, and students who are developing AI using AWS infrastructure.

Red Hat buys a startup: IBM's Red Hat is acquiring Neural Magic, a startup that optimizes AI models to run faster on commodity processors and GPUs.

Free Grok: X, formerly Twitter, is testing a free version of its AI chatbot, Grok.

AI for the Grammys: Now and Then, the Beatles track finished with the help of AI and released last year, has been nominated for two Grammy Awards.

Anthropic for defense: Anthropic is partnering with the data analytics firm Palantir and AWS to provide U.S. intelligence and defense agencies with access to Anthropic's Claude family of AI models.

A new domain: OpenAI bought Chat.com, adding to its impressive collection of well-known domain names.

Research paper of the week
Google says it has developed an AI model that can better predict floods.

The model, which builds on the company's prior work in this area, accurately predicts flooding conditions up to seven days in advance in dozens of countries. In principle it could offer a flood forecast for any place on Earth, but Google notes that much of the planet lacks the historical data needed to validate predictions against.

Google is also giving disaster management and hydrology experts access to the model via an API, and it is opening up its Flood Hub platform to make the model's forecasts available.

"By making our forecasts available globally on Flood Hub … we hope to contribute to the research community," the company writes in a blog post. "These data can be used by expert users and researchers to inform more studies and analysis into how floods impact communities around the world."

Model of the week
AI developer Rami Seid has published a model that simulates Minecraft, and with it a noteworthy feat: the whole program runs on a single Nvidia RTX 4090.

Like the "open-world" models recently published by AI startup Decart, Seid's model, named Lucid v1, simulates Minecraft's game world in real time, or nearly so. Coming in at 1 billion parameters, Lucid v1 takes keyboard and mouse movements as input and generates frames with fully simulated physics and graphics.

Like other models of its kind, Lucid v1 has clear limitations, by my reckoning. The resolution is quite low, and it very quickly "forgets" the level layout: turn your character around and you'll see a rearranged scene.

But Seid and his collaborator, Ollin Boer Bohan, say they have no plans to leave the model as is. It's available for download and powers the online demo here.
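
To make the input-to-frames description above concrete, here is a minimal sketch of the action-conditioned generation loop a model like this implies. The class and method names are hypothetical, and the stand-in "model" just emits noise; Lucid v1's actual interface is not described here.

```python
import numpy as np


class WorldModel:
    """Stand-in for an action-conditioned world model like Lucid v1.

    The real thing is a ~1B-parameter neural network; this placeholder just
    returns noise frames so the loop below runs end to end.
    """

    def __init__(self, height: int = 128, width: int = 128):
        self.height, self.width = height, width

    def next_frame(self, prev_frame: np.ndarray, action: dict) -> np.ndarray:
        # A real model would condition on previous frames and the
        # keyboard/mouse action to predict the next RGB frame.
        return np.random.randint(0, 256, (self.height, self.width, 3), dtype=np.uint8)


def run_episode(model: WorldModel, actions: list[dict]) -> list[np.ndarray]:
    """Autoregressively roll the simulation forward, one frame per action."""
    frame = np.zeros((model.height, model.width, 3), dtype=np.uint8)
    frames = []
    for action in actions:
        frame = model.next_frame(frame, action)
        frames.append(frame)
    return frames


if __name__ == "__main__":
    # Example inputs: keys held plus mouse deltas for each tick.
    actions = [{"keys": ["w"], "mouse_dx": 0.0, "mouse_dy": 0.0} for _ in range(60)]
    frames = run_episode(WorldModel(), actions)
    print(f"generated {len(frames)} frames of shape {frames[0].shape}")
```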

Grab bag
DeepMind, Google's elite AI lab, today released the code for AlphaFold 3, its AI-powered protein structure prediction model.

First announced six months ago, AlphaFold 3 promised to perform better than any previous system. But while many researchers saw the model as a potential scientific game-changer, DeepMind controversially declined to release its underlying code at launch. Instead, the company provided access to an AlphaFold 3 web server, effectively limiting the number and types of predictions scientists could run.

Critics saw the decision as simply another step in protecting DeepMind's commercial interests at the expense of reproducibility. DeepMind spin-off Isomorphic Labs is putting AlphaFold 3 to work in drug discovery, where it models proteins in concert with other molecules.

Academics can now apply the model to make whatever predictions they want — including how proteins behave in the presence of potential drugs. Scientists with an academic affiliation can request code access here.
