Last week, AWS lost a top AI executive.
AWS lost its top AI exec last week: Matt Wood, VP of AI, said he would be leaving the company after 15 years. Wood has long been tied to Amazon's AI efforts; he took on the VP of AI role in September 2022, on the eve of ChatGPT's launch.
Wood's exit comes at an inopportune time, with the company at a crossroads and at risk of missing out on the generative AI wave. AWS's previous CEO, Adam Selipsky, who left in May, appears to have missed the boat.
The Information obtained a copy of the strategy deck AWS drew up to bring its ChatGPT competitor to market; technical issues reportedly forced the company to delay the launch.
Under Selipsky, AWS also reportedly missed chances to invest in two of the leading companies working on generative AI, Cohere and Anthropic. It later tried to invest in Cohere but was rebuffed, and ended up settling for co-investing in Anthropic alongside Google.
In general, Amazon hasn't had a great run with generative AI lately. In recent months, the company has seen two high-profile exec departures at Just Walk Out, its division building cashier-less tech for retail stores. And it reportedly decided to use Anthropic's models instead of its own for the upgraded Alexa assistant, which has faced design challenges.
CEO Matt Garman is pushing an aggressive agenda to right the ship at AWS, which includes acqui-hiring AI startups like Adept and investing in training systems like Olympus. My colleague Frederic Lardinois recently interviewed Garman about AWS's ongoing efforts; it's well worth reading.
But success in generative AI won't come easily for AWS, no matter how well the company executes against its internal roadmaps.
Investors are growing skeptical that Big Tech's generative AI bets will pay off anytime soon. Shares of Amazon fell the most since October 2022 after its Q2 earnings call.
Demonstrating value was the primary barrier to generative AI adoption for 49% of companies in a recent Gartner poll. Gartner also predicts that by 2026, one-third of generative AI projects will have been abandoned after the proof-of-concept phase, due in part to high costs.
Garman also sees the company's projects to develop custom silicon for training and running models as a potential AWS advantage. And while Amazon doesn't officially disclose exact figures, it claims its generative AI businesses, like Bedrock, have already hit a combined "multi-billion-dollar" run rate.
The tough part will be keeping that momentum going while swimming against the current, both inside and outside the company. Losing leaders like Wood doesn't do much to instill confidence, but perhaps, just perhaps, AWS has a few tricks up its sleeve.
News
An Yves Béhar bot: Brian writes about Kind Humanoid, a three-person robotics startup working with designer Yves Béhar to bring humanoids home.
The future of Amazon's warehouse bots: Amazon Robotics chief technologist Tye Brady shared details about the company's warehouse bots, including its latest, the Sequoia automated storage and retrieval system, in a conversation with TechCrunch.
Full techno-optimist: Anthropic CEO Dario Amodei published a 15,000-word ode to AI last week, painting a vision of a world in which AI risks have been brought under control and the tech delivers previously unattained prosperity and social improvement.
Can AI reason?: Devin digs into a much-discussed technical paper from Apple-affiliated researchers that challenges the idea that AI "reasons," pointing to math problems that models flunk when they're changed in seemingly trivial ways.
AI weapons: Margaux reports on the debate in Silicon Valley over whether autonomous weapons should be allowed to decide to kill.
Videos, made: Adobe began previewing video generation capabilities for its Firefly AI ahead of its Adobe MAX event on Monday. It also debuted Project Super Sonic, which uses AI to generate sound effects for footage.
Synthetic data and AI: Yours truly contributed to a series of pieces on the promise and perils of synthetic data, which is generated by AI and increasingly being used to train AI systems.
Paper of the week
Much to the delight of researchers trying to measure the harmfulness of AI "agents," the UK's AI Safety Institute, a government research org focused on AI safety, partnered with AI security startup Gray Swan AI to develop a new dataset.
The dataset is called AgentHarm, and it tests whether otherwise "safe" agents — AI systems able to autonomously perform some tasks — might be duped into completing 110 different "harmful" tasks, including ordering a phony passport from someone on the dark net.
The researchers found that most of the models, including OpenAI's GPT-4o and Mistral's Mistral Large 2, were willing to engage in harmful behavior, particularly when "attacked" using a jailbreaking technique. Jailbreaks led to higher harmful-task success rates even in models protected by safeguards, the researchers say.
"Simple universal jailbreak templates can be adapted to efficiently jailbreak agents," they wrote in a technical paper, "and these jailbreaks enable coherent and malicious multi-step agent behavior and retain model capabilities."
The paper, plus the dataset and results, are here.
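For a concrete sense of what a benchmark like this measures, here's a minimal Python sketch of an AgentHarm-style evaluation loop: each "harmful" task, optionally wrapped in a jailbreak template, is handed to an agent, and the share of tasks the agent attempts rather than refuses is recorded. The `run_agent` and `is_refusal` callables are hypothetical stand-ins for whatever agent framework and refusal grader you use; they are not the benchmark's actual API.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Optional


@dataclass
class HarmfulTask:
    task_id: str
    prompt: str    # e.g., "Order a forged passport from a dark-net vendor"
    category: str  # e.g., "fraud", "cybercrime"


def evaluate_agent(
    tasks: Iterable[HarmfulTask],
    run_agent: Callable[[str], str],        # hypothetical: your agent's entry point
    is_refusal: Callable[[str], bool],      # hypothetical: grades whether the agent refused
    jailbreak_template: Optional[str] = None,  # optional universal jailbreak wrapper with a {task} slot
) -> dict:
    """Return the fraction of harmful tasks the agent attempted rather than refused."""
    attempted = 0
    total = 0
    for task in tasks:
        prompt = task.prompt
        if jailbreak_template is not None:
            # The paper found simple universal jailbreak templates transfer to agent settings.
            prompt = jailbreak_template.format(task=task.prompt)
        response = run_agent(prompt)
        total += 1
        if not is_refusal(response):
            attempted += 1
    return {"harmful_task_attempt_rate": attempted / max(total, 1)}
```

Comparing the attempt rate with and without the jailbreak wrapper is roughly the kind of before/after contrast the researchers report.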
Model of the week
There's a new viral model out there, and it's a video generator.
The product is called Pyramid Flow SD3, and it was released a few weeks back under an MIT license. Its creators, researchers from Peking University, Chinese company Kuaishou Technology, and the Beijing University of Posts and Telecommunications, claim it was trained entirely on open source data.
Pyramid Flow is provided in two flavors: a model that can generate 5-second clips at 384p resolution (24 frames per second) and a more compute-intensive model that can generate 10-second clips at 768p (also at 24 frames per second).
Pyramid Flow takes either a text description of the desired video, such as "FPV flying over the Great Wall," or a still image as input. Code to fine-tune the model is coming soon, the researchers say. In the meantime, Pyramid Flow can be downloaded and run on any machine or cloud instance with roughly 12GB of video memory.
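If you're wondering what that 12GB figure means in practice, here's a minimal sketch using PyTorch's standard CUDA utilities for checking whether a local GPU clears the bar before picking between the 384p and 768p variants. The variant names, the threshold handling, and the placeholder generation step are illustrative assumptions, not Pyramid Flow's actual loading code.

```python
import torch


def pick_pyramid_flow_variant(min_high_res_gib: float = 12.0) -> str:
    """Choose a Pyramid Flow variant based on available GPU memory.

    The 768p model is the more compute-intensive option; the 384p model is the
    fallback. The ~12 GiB threshold follows the requirement the researchers cite;
    the variant names here are placeholders, not official model IDs.
    """
    if not torch.cuda.is_available():
        raise RuntimeError("Pyramid Flow needs a CUDA-capable GPU (or a cloud instance with one).")

    total_bytes = torch.cuda.get_device_properties(0).total_memory
    total_gib = total_bytes / (1024 ** 3)

    return "pyramid-flow-768p" if total_gib >= min_high_res_gib else "pyramid-flow-384p"


if __name__ == "__main__":
    variant = pick_pyramid_flow_variant()
    print(f"Selected variant: {variant}")
    # Loading the weights and generating a clip would then follow the project's own
    # inference code, e.g. with a prompt like "FPV flying over the Great Wall".
```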
Grab bag
This week, Anthropic posted updates to its Responsible Scaling Policy (RSP), a voluntary framework the company uses to mitigate potential risks from its AI systems.
Of note, the new RSP details two types of models that Anthropic says would require "upgraded safeguards" before they're deployed: Models that can essentially self-improve without human oversight and models that can help create weapons of mass destruction.
"If a model can… potentially significantly [accelerate] AI development in an unpredictable way, we need higher security standards and more safety guarantees," Anthropic said in a blog post. "And if a model can meaningfully assist someone with a basic technical background in developing or deploying CBRN weapons, we need enhanced security and deployment guards."
Sounds sensible to this writer.
Anthropic's blog post also revealed that the company is looking to hire a head of responsible scaling as it "works to scale up [its] efforts on implementing the RSP."