It wasn't going to be as huge a leap forward as its predecessors, The Information reported this week, based on what internal employee testers said about the model code-named Orion: although it outperforms OpenAI's existing models, the improvement is smaller than the jump from GPT-3 to GPT-4.
In other words, progress appears to be slowing. In some areas, such as coding, Orion may not be reliably better than its predecessors at all.
In response, OpenAI has established a foundations team to work out how the company can keep improving its models as the supply of new training data dwindles. According to reports, those strategies include training Orion on synthetic data generated by AI models and relying more heavily on improvements made during post-training.
OpenAI has not commented. Last year, the company dismissed reports about its next flagship model, saying: "We don't have plans to release a model code-named Orion this year."