OpenAI's efforts to develop its next major model, GPT-5, are falling behind schedule, with results not yet justifying the enormous costs, according to a new report in the Wall Street Journal.
This echoes a previous report in The Information suggesting that OpenAI is exploring new strategies because GPT-5 might not represent as big a leap forward as previous models did. But the WSJ story includes additional details about the 18-month development of GPT-5, code-named Orion.
OpenAI has reportedly completed at least two large training runs, each aimed at improving the model by training it on huge amounts of data. An initial round of training proceeded more slowly than expected, suggesting that a larger run would be both time-consuming and costly. And while GPT-5 reportedly performs better than its predecessors, it's not yet advanced enough to justify the cost of running the model.
The WSJ also reports that rather than relying solely on publicly available data and licensing agreements, OpenAI has also hired people to create new data by writing code or solving math problems. It also uses synthetic data created by another of its models, o1.
OpenAI did not immediately respond to a request for comment. The company previously stated that it would not release a model called Orion this year.