Anthropic CEO embraces a highly optimistic view of AI in a 15,000-word tribute.

CEO Dario Amodei, not an AI "doomer."

At least, that's my reading of the "mic drop" of a ~15,000-word essay Amodei published to his blog late Friday. (I tried asking Anthropic's Claude chatbot whether it concurred, but alas, the post exceeded the free plan's length limit.)

At the broadest level of abstraction, Amodei outlines a world in which all AI risks are neutralized and the technology delivers previously unrealized prosperity, social uplift, and abundance. He claims this isn't to discount AI's downsides; indeed, Amodei opens by denouncing, without naming names, AI companies that overhype and crassly propagandize their technology's capabilities. But one could argue the essay tips too far in the techno-utopianist direction, making claims that simply aren't supported by fact.

According to Amodei, "powerful AI" might arrive as soon as 2026. By powerful AI, he means AI that's "smarter than a Nobel Prize winner" in fields like biology and engineering, and that can prove unsolved mathematical theorems or write "extremely good novels." This AI, Amodei says, will be able to control any software or hardware imaginable, including industrial machinery, and basically do most of the jobs that humans do today — but better.

"[This AI] can do all activities, communications or remote operations … including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on," Amodei writes. "It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use."

Lots would have to happen to reach that point.

Even the best AI today cannot "think" in the way we understand the term. Models do not so much reason as reproduce patterns they have observed in their training data.

Assuming for the sake of Amodei's argument that the AI industry does soon "solve" human-like thought, would robotics catch up to allow future AI to perform lab experiments, manufacture its own tools, and so on? The brittleness of today's robots implies it's a long shot.

But Amodei is optimistic — very optimistic.

He believes that within the next 7-12 years, AI could allow for the treatment of nearly all infectious diseases, the eradication of most cancers, cures for genetic disorders, and the arrest of Alzheimer's at its earliest stages. In the next 5-10 years, Amodei hopes that conditions like PTSD, depression, schizophrenia, and addiction will be cured with AI-concocted drugs, or genetically prevented via embryo screening (a decidedly controversial opinion), and that AI-developed drugs will also exist that "tune cognitive function and emotional state" to "get [our brains] to behave a bit better and have a more fulfilling day-to-day experience."

Once all that comes to pass, Amodei says, humans' average lifespan could reach 150 years.

"My basic prediction is that AI-enabled biology and medicine will let us compress the progress that human biologists would have accomplished in the next 50-100 years into 5-10 years," he writes. "I'll call this the 'compressed 21st century': the notion that after powerful AI, we will in a few years make all the progress in biology and medicine that we would have made in the whole 21st century."

These, too, seem like stretches, given that AI hasn't yet transformed medicine on any large scale, and may never. Even if AI does reduce the labor and cost of getting a drug into pre-clinical testing, that drug may well fail at a later stage, just as human-designed drugs do. Consider that the AI deployed in healthcare today has been shown in numerous ways to be biased and risky, or else enormously challenging to implement in existing clinical and lab settings. The idea that all these issues and more will be solved within roughly a decade does seem, well, aspirational.

But Amodei doesn't stop there.

AI could solve world hunger, he claims. It could turn the tide on climate change. And it could transform the economies of most developing countries; Amodei believes AI can bring the per-capita GDP of sub-Saharan Africa ($1,701 as of 2022) to the per-capita GDP of China ($12,720 in 2022) within 5-10 years.

These are sweeping declarations, though hardly news to anyone who has heard adherents of the "Singularity" make similar claims. To his credit, Amodei recognizes that such strides would require "an enormous undertaking in global health, philanthropy, [and] political advocacy," which he believes will happen because it's in the world's economic self-interest.

That would be a big change in human behavior, given that people have shown repeatedly their chief interest is in what benefits them in the near term. (Deforestation is but one example among thousands.) It's also telling that many of the workers who label the datasets used to train AI are paid below minimum wage, while their employers rake in tens or hundreds of millions of dollars in capital from the results.

Amodei barely touches on the danger AI poses to civil society, suggesting only that a coalition of democracies secure AI's supply chain and block adversaries who intend to use AI for nefarious purposes from the means of powerful AI production (semiconductors, etc.). In the same breath, he suggests that AI, in the right hands, might be used to undermine repressive governments and even reduce bias in the legal system. To date, however, AI has tended to lock bias into the law, not lessen it.

"A really mature and successful deployment of AI would be able to minimize bias and be fairer to everyone," writes Amodei.

So, if AI takes over every conceivable job and does it better and faster, won't that leave humans in the lurch, economically speaking? Amodei admits that, yes, it will, and that society will at that point have to have conversations about "how the economy should be organized."

But he offers no solution.

"People do want a feeling of accomplishment, even a feeling of competition, and in a post-AI world it will be perfectly possible to spend years attempting some very difficult task with a complex strategy, similar to what people do today when they embark on research projects, try to become Hollywood actors, or found companies," he writes. "The facts that (a) an AI somewhere could in principle do this task better, and (b) this task is no longer an economically rewarded element of a global economy, don't seem to me to matter very much."

Amodei rounds out the thought toward the end, asserting that AI is merely a technological accelerant, and that humans naturally trend toward "rule of law, democracy, and Enlightenment values." But in saying so, he glosses over AI's many costs. AI is projected to have (and is already having) an enormous environmental impact. And it's creating inequality: Nobel Prize-winning economist Joseph Stiglitz and others have noted that labor disruptions attributable to AI could serve to concentrate wealth further in companies' hands and leave workers more powerless than ever.

That, of course, is as true of Anthropic as of any company, loath as Amodei may be to acknowledge it. Anthropic is a business, after all, one reportedly valued at close to $40 billion. And the main beneficiaries of its AI tech are, by and large, corporations whose responsibility is to enhance returns to shareholders, not to better humanity.

A cynic might note the essay's timing, given that Anthropic is said to be in the process of raising billions of dollars in venture funding. OpenAI CEO Sam Altman published a similarly techno-optimist manifesto just before OpenAI closed a $6.5 billion funding round. Perhaps that's a coincidence.

In any case, Amodei isn't a philanthropist. Like any CEO, he has a product to sell. It just so happens that his product will "save the world," and those who think otherwise risk being left behind. Or so he'd have you believe.

Blog | 2024-10-12 17:58:20