HomeLEARN AIOpenAI releases "GPT-4o mini," a high-performance, super-low cost-model

OpenAI releases “GPT-4o mini,” a high-performance, super-low cost-model

OpenAI has unveiled GPT-4o mini, a smaller and more cost-effective version of its powerful GPT-4o model. 
GPT-4o mini is being touted as “the most cost-efficient small model in the market,” with pricing that dramatically undercuts competitors. 
Developers will pay just $0.15 per million input tokens and $0.60 per million output tokens, compared to $5.00 and $15.00 for GPT-4o, respectively.
Olivier Godement, OpenAI’s Head of Product, API, discussed the model’s potential with VentureBeat: “The cost per intelligence is so good, I expect it’s going to be used for all sorts of customer support, software engineering, creative writing, all kinds of tasks.”
Despite the “mini,” GPT-4o mini boasts impressive capabilities. It outperforms GPT-3.5 Turbo on various benchmarks and can handle both text and vision inputs. 
OpenAI reports that GPT-4o mini achieves an 82.0% score on the Massive Multitask Language Understanding (MMLU) benchmark, surpassing competitors like Google’s Gemini 1.5 Flash (77.9%) and Anthropic’s Claude 3 Haiku (73.8%).

The model is set to replace GPT-3.5 Turbo for ChatGPT Plus and Teams subscribers, offering users a more powerful model at no additional cost. 
Early adopters, including startups Ramp and Superhuman, have reported promising results for tasks like receipt categorization and personalized email responses.
OpenAI keen to assert GPT-4o mini’s safety
While OpenAI is pushing the boundaries with GPT-4o mini’s capabilities and affordability, it’s not skimping on safety. It uses the same mechanisms it developed for the bigger GPT-4o model.
OpenAI also brought in over 70 experts from fields like social psychology and misinformation to put GPT-4o through its paces. 
These specialists helped identify potential risks, allowing the team to address issues before they became problems. Learnings were rolled into GPT-4o mini.
OpenAI also introduced what they’re calling the “instruction hierarchy” method, which “helps to improve the model’s ability to resist jailbreaks, prompt injections, and system prompt extractions. This makes the model’s responses more reliable and helps make it safer to use in applications at scale.” 
That’s probably a pitch for enterprise users who want to avoid erroneous results and hallucinations at all costs.
Looking ahead, OpenAI plans to expand GPT-4o mini’s capabilities, including its ability to generate imagery, audio, and video outputs. The model is also slated to be available through Apple Intelligence this fall, coinciding with the release of iOS 18.
While GPT-4o mini is quite exciting, OpenAI has faced setbacks in other areas. The company recently delayed the release of voice and emotion-reading features for ChatGPT, citing the need for additional safety testing. 
People were stunned when the company demoed GPT-4o and its speech synthesis, but things have quieted since then. 
Nevertheless, GPT-4o mini proves that people at OpenAI are still working hard despite a handful of recent controversies. 

RELATED ARTICLES

Most Popular