Distributional closes $19M Series A for its AI testing platform
Distributional, an AI testing platform founded by Intel's former GM of AI software, Scott Clark, has closed a $19 million Series A funding round led by Two Sigma Ventures.
Clark says Distributional was inspired by the AI testing problems he ran into while deploying AI at Intel and, before that, during his time at Yelp as a software lead in the company's ad-targeting division.
"As the value of AI applications continues to grow, so do the operational risks," he told TechCrunch. "AI product teams use our platform to proactively and continuously detect, understand, and address AI risk before it introduces risk in production."
Clark came to Intel via acquisition: in 2020, the company bought SigOpt, a model experimentation and management platform he co-founded. Clark stayed on and was named VP and GM of Intel's AI and supercomputing software group in 2022.
"It was really tough to get more than about 10 people around the table for discussions of all of this and know exactly how things were really going," Clark says about the observability and AI monitoring problems that frequently hindered he and his team at Intel.
AI is non-deterministic, Clark pointed out; it can produce different outputs for the same input. Add the fact that AI models have many dependencies, from software infrastructure to training data, and pinpointing bugs in an AI system can feel like searching for a needle in a haystack.
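To see why that matters for testing, consider a toy sketch (not Distributional's code) of how sampled model outputs behave: an LLM decoding at a temperature above zero draws each token from a probability distribution, so the same input can yield a different output on every call.

```python
import numpy as np

# Toy "model": samples an output token from a softmax over logits, the way
# an LLM does at temperature > 0. The input never changes; the output can.
def toy_model(logits: np.ndarray, temperature: float = 1.0) -> int:
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    rng = np.random.default_rng()  # unseeded: fresh entropy on every call
    return int(rng.choice(len(logits), p=probs))

logits = np.array([2.0, 1.5, 0.5])  # the same "input" every time
print([toy_model(logits) for _ in range(10)])
# e.g. [0, 1, 0, 0, 2, 1, 0, 0, 1, 0]: identical input, varying outputs
```

A single pass/fail assertion on one output is meaningless here; only the distribution of outputs is really testable.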
According to a 2024 survey by the RAND Corporation, more than 80% of AI projects fail. Generative AI has proven to be a particular pitfall for companies: a Gartner study suggests that a third of generative AI deployments will be abandoned by 2026.
"It requires writing statistical tests on distributions of many data properties," Clark said. "AI needs to be continuously and adaptively testing through the lifecycle to catch behavioral change".
Clark built Distributional in part to abstract away this AI auditing work, drawing on techniques he and the SigOpt team developed while working with enterprise customers. The platform can automatically generate statistical tests for AI models and apps to a developer's specifications, then organize the results of those tests on a dashboard.
From there, users of Distributional can collaborate on test "repositories," triage failed tests, and clone tests where necessary. The entire environment can be deployed on-premises (though Distributional also offers a managed plan) and integrates with popular alerting and database tools.
"We provide visibility across the organization of what, when, and how AI applications were tested and how that has changed over time," Clark added, "and provide a repeatable process for AI testing for similar applications by utilizing sharable templates, configurations, filters, and tags."
To say the least, AI is an unwieldy beast, and even the top AI labs have weak risk-management practices. A platform like Distributional's could ease the testing burden, and maybe even help companies actually achieve ROI.
At least, that's Clark's pitch.
"Whether instability, inaccuracy, or the dozens of other potential challenges, it can be hard to identify AI risk," he said. "If teams fail to get AI testing right, they risk AI applications never making it into production. Or, if they do productionalize, they risk these applications behaving in unexpected and potentially harmful ways with no visibility into these issues."
Distributional isn't the first to market with tech to probe and analyze an AI's reliability. Kolena, Prolific, Giskard, and Patronus are just some of many AI experimentation solutions. Tech giants Google Cloud, AWS, and Azure also provide model evaluation tools.
So why would a customer choose Distributional?
Well, Clark claims that Distributional, which is on the brink of commercializing its product suite, delivers a more "white glove" experience than most. The company handles installation, implementation, and integration for clients, and provides AI testing troubleshooting, for a fee.
"Monitoring tools often focus on higher-level metrics and specific instances of outliers, which gives a limited sense of consistency, but without insights on broader application behavior," Clark said. The test objective of Distributional is to allow teams to achieve a definition of desired behavior for any AI application, ensure that it still behaves as specified in production and through development, identify when this changes, and determine what needs to evolve or be corrected to reach a steady state once again.
Flush with new cash from its Series A, Distributional plans to add to its technical team on both the UI and AI research engineering sides. Clark said the company should land at a headcount of 35 by year's end as Distributional initiates its first wave of enterprise deployments.
"We have raised substantial capital in the space of just one year since our founding, and, even with our growing team, are well-positioned to take advantage over the next couple of years of this enormous opportunity," Clark added.
Andreessen Horowitz, Operator Collective, Oregon Venture Fund, Essence VC, and Alumni Ventures also invested in Distributional's Series A round. So far, the San Francisco-based company has raised $30 million.