OpenAI Delays Open-Weight AI Model Launch for Additional Safety Testing

OpenAI CEO Sam Altman announced that the company will postpone the release of its open-weight model, which had already been delayed by a month earlier this summer.

OpenAI initially aimed to unveil the model next week, but Altman now states that the launch will be postponed indefinitely for additional safety evaluations.

The release of OpenAI’s open model is one of the most eagerly awaited AI events of the summer, alongside the anticipated debut of GPT-5 from the ChatGPT creator.

In contrast to GPT-5, OpenAI’s open model will be accessible for developers to download and run locally without restriction.

With both of these launches, OpenAI seeks to reaffirm its position as Silicon Valley’s premier AI lab—a challenge that becomes increasingly difficult as xAI, Google DeepMind, and Anthropic invest substantial resources in their own projects.

Because of this delay, developers will have to wait longer for OpenAI's first open model in years. The model is projected to possess reasoning abilities comparable to the company's o-series of models, and OpenAI aims to make it the best in its class among open models.

The competitive landscape of open AI models became even more intense this week.

Earlier this week, the Chinese AI startup Moonshot AI introduced Kimi K2, a one-trillion-parameter open AI model that outperforms OpenAI's GPT-4.1 on several agentic-coding benchmarks.

In June, when Altman first announced delays concerning OpenAI’s open model, he mentioned that the company had achieved something “unexpected and quite amazing,” though he did not provide additional details on that matter.

However, this openness raises a related concern: once the model's weights are released, they cannot be recalled. Malicious actors, nation-state entities, or harmful developers could modify or repurpose the model for misinformation, fraud, or other harmful applications.

Reports suggest that OpenAI’s safety and policy teams are conducting final assessments of areas deemed high-risk, including misuse risks, capability limits, and mitigation strategies. The delay implies that these concerns necessitate further red-teaming, sandbox evaluations, or possibly last-minute adjustments to the architecture.

Additionally, the announcement reflects a degree of internal transparency that is not always typical in the industry. Instead of discreetly postponing the launch, Altman opted to publicly announce the delay early, even without providing a new timeline—acknowledging user expectations while clarifying the company’s reasoning.
