Meta has developed an AI model capable of self-training.

The AI model is designed to enhance its own results.

Here's one that will freak the AI fearmongers out. According to Reuters, Meta has published a new generative AI model that can train itself to improve its outputs.
That's right; it's alive – but also not really.
According to Reuters:

Meta said on Friday it's releasing a "Self-Taught Evaluator" that may offer a path toward less human involvement in the AI development process. It can break down complicated problems into smaller logical steps, and appears to improve the accuracy of responses on challenging problems in subjects such as science, coding, and maths.

So rather than relying on human oversight, Meta is building AI systems within AI systems, enabling its processes to test and improve aspects of the model itself, which should in turn lead to better outputs.
Meta claims:

"In this work, we present a framework for training evaluators without human annotations relying only on synthetic training data. Beginning from unlabeled instructions, our iterative scheme for self-improvement generates contrasting model outputs and trains an LLM-as-a-Judge to produce reasoning traces and final judgments, repeating the latter at each new iteration using improved predictions."

Spooky, right? Perhaps for Halloween this year you could dress up as "LLM-as-a-Judge", although the amount of explaining you'd have to do probably makes it a non-starter.
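
To make the idea a little more concrete, here's a rough sketch of how an iterative "LLM-as-a-Judge" training loop along these lines could work, assuming a simple generate-judge-filter-retrain cycle. It's only an illustration drawn from the description above, not Meta's actual implementation: the model, judge, and their generate/evaluate/finetune methods are hypothetical stand-ins for calls to an underlying LLM.

```python
# Hypothetical sketch of a "Self-Taught Evaluator"-style loop, based only on the
# description quoted above. model.generate, judge.evaluate and judge.finetune
# are stand-in methods, not any real library's API.

def generate_contrasting_outputs(model, instruction):
    """Produce a preferred and a deliberately degraded response, so the
    preference label is known without any human annotation."""
    preferred = model.generate(instruction)
    degraded = model.generate(f"Give a plausible but subtly flawed answer to: {instruction}")
    return preferred, degraded

def collect_judge_example(judge, instruction, preferred, degraded):
    """Have the current judge write a reasoning trace and a verdict; keep the
    example only when its verdict matches the known synthetic preference."""
    reasoning, verdict = judge.evaluate(instruction, preferred, degraded)
    if verdict == "first_is_better":
        return {"instruction": instruction, "chosen": preferred,
                "rejected": degraded, "reasoning": reasoning}
    return None

def self_taught_evaluator(model, judge, instructions, iterations=3):
    """Repeat the generate -> judge -> filter -> finetune cycle, so the judge
    is retrained each round on its own improved predictions."""
    for _ in range(iterations):
        examples = []
        for instruction in instructions:
            preferred, degraded = generate_contrasting_outputs(model, instruction)
            example = collect_judge_example(judge, instruction, preferred, degraded)
            if example is not None:
                examples.append(example)
        judge = judge.finetune(examples)  # next round trains on the improved judgments
    return judge
```

The point of the sketch is simply that the judge's own verified judgments become its next round of training data, with no human labels anywhere in the loop.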

As Reuters notes, the new project is just one of several new AI developments from Meta, all of which have now been released in model form for third parties to test. Meta has also published code for its updated "Segment Anything" process, a new multimodal language model that combines text and speech, a system to help identify and better ward off AI-based cyberattacks, improved translation tools, and a new method for identifying inorganic raw materials.

The models are being made available as part of Meta's open source approach to generative AI development, through which the company shares its AI findings with a range of external developers in the hope of advancing its tools.

Which also carries a level of risk, in that we don't yet know the full extent of what AI can actually do. And getting AI to train AI sounds like a path to trouble in some respects, but we're also still a long way from artificial general intelligence, which would eventually enable machine-based systems to simulate human thinking and come up with creative solutions without intervention.
But the real fear is that we're on the verge of creating systems that could be smarter than us, and that might then come to see humans as a threat. Again, that's not a near-term prospect; we're still a long way from being able to simulate genuine brain-like activity.

Still, even this doesn't mean that we can't create problematic outcomes with the tools and means available today.

The risk may not be anything like a Terminator-style robot apocalypse, but as more and more systems incorporate generative AI, advances like this may improve outputs while also leading to more unpredictable, and potentially harmful, results.

Though that, I guess, is what these initial tests are for. But maybe open-sourcing everything in this way expands the potential risk.
