OpenAI has introduced fine-tuning for GPT-4o, letting developers tailor the model to their own applications. The new feature offers better performance and efficiency, enabling fine-tuned models to follow intricate domain-specific instructions or adjust their response style. With up to 1 million training tokens available for free daily through September 23rd, developers can start refining GPT-4o for their specific needs, making it a versatile tool across a range of industries.
OpenAI has released fine-tuning for GPT-4o, one of the features developers have requested most. In addition, through September 23rd, it is giving every organization 1 million training tokens per day for free.
Developers can now fine-tune GPT-4o with custom datasets to get higher performance at lower cost for their specific use cases. Fine-tuning lets the model follow intricate domain-specific instructions or adjust the tone and structure of its responses. Developers can see strong results for their applications with as few as a few dozen examples in the training set, a sketch of which is shown below.
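As a rough illustration, OpenAI's fine-tuning API expects chat-formatted training examples in a JSONL file. The Python sketch below (the filename and example content are hypothetical) shows how a small domain-specific dataset might be assembled:

```python
import json

# A couple of hypothetical chat-format examples for tuning response style.
# Each line in the JSONL file is one training example with a "messages" list.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a concise SQL assistant."},
            {"role": "user", "content": "List all customers from Berlin."},
            {"role": "assistant", "content": "SELECT * FROM customers WHERE city = 'Berlin';"},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "You are a concise SQL assistant."},
            {"role": "user", "content": "Count orders placed in 2024."},
            {"role": "assistant", "content": "SELECT COUNT(*) FROM orders WHERE YEAR(order_date) = 2024;"},
        ]
    },
]

# Write the examples as one JSON object per line (JSONL).
with open("training_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

In practice you would replace these toy examples with real prompts and ideal responses from your own domain.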
Fine-tuning can have a significant impact on model performance across a range of domains, from coding to creative writing. This is just the beginning; OpenAI will keep investing in expanding the ways developers can customize their models.
GPT-4o fine-tuning is available to all developers on all paid usage tiers.
To get started, open the fine-tuning dashboard, create a new fine-tuned model, and choose gpt-4o-2024-08-06 from the base-model drop-down. GPT-4o fine-tuning training costs $25 per million tokens, while inference costs $3.75 per million input tokens and $15 per million output tokens.
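The same steps can also be performed through the API. Here is a minimal sketch using the openai Python SDK, assuming the training_data.jsonl file from the earlier example exists and your API key is set in the environment:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the training file prepared earlier.
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job on the GPT-4o snapshot mentioned above.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",
)

print(job.id, job.status)
```

The job runs asynchronously; its progress can be tracked from the fine-tuning dashboard or by polling the job via the API.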
GPT-4o mini fine-tuning is also available to developers on all paid usage tiers. Go to the fine-tuning dashboard and choose gpt-4o-mini-2024-07-18 from the base-model drop-down. OpenAI is giving away 2 million training tokens per day for free on GPT-4o mini through September 23rd.
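Once a job finishes, the resulting model can be called like any other chat model. The sketch below uses a hypothetical job ID; in practice you would use the fine_tuned_model identifier your completed job reports:

```python
from openai import OpenAI

client = OpenAI()

# Retrieve the finished job to get the fine-tuned model's name.
job = client.fine_tuning.jobs.retrieve("ftjob-abc123")  # hypothetical job ID
model_name = job.fine_tuned_model  # e.g. an "ft:gpt-4o-mini-2024-07-18:..." identifier

# Call the fine-tuned model through the standard chat completions endpoint.
response = client.chat.completions.create(
    model=model_name,
    messages=[{"role": "user", "content": "List all customers from Berlin."}],
)
print(response.choices[0].message.content)
```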
Over the last few months, OpenAI has worked with a small group of trusted partners to test GPT-4o fine-tuning and learn about their use cases. Here are a few success stories:
Cosine’s Genie is an AI software engineering assistant that can collaborate with humans to modify code, build features, and autonomously find and fix bugs. It can reason through complex technical problems and make code changes more accurately while using fewer tokens. Genie is powered by a fine-tuned GPT-4o model trained on examples of real software engineers at work, which teaches the model to respond in a particular way. The model was also trained to produce output in specific formats, such as patches that are easy to commit to codebases.
Using its fine-tuned GPT-4o model, Genie achieves a SOTA score of 43.8% on the new SWE-bench Verified benchmark. Genie also holds the SOTA score of 30.08% on SWE-bench Full, surpassing its previous SOTA of 19.27% and marking the largest improvement in that benchmark’s history.
Distyl, an AI solutions partner to Fortune 500 companies, recently placed first on BIRD-SQL, the leading text-to-SQL benchmark. Distyl’s fine-tuned GPT-4o achieved 71.83% execution accuracy on the leaderboard and excelled across tasks including query reformulation, intent classification, chain-of-thought reasoning, and self-correction.
You retain full control of your models and full ownership of your business data, including all inputs and outputs. This ensures your data is never shared or used to train other models.
OpenAI has also added layered safety mitigations for fine-tuned models to ensure they aren’t misused. For example, it continuously runs automated safety evaluations on fine-tuned models and monitors usage to make sure applications comply with its usage policies.
The OpenAI team is eager to see what developers build by fine-tuning GPT-4o. To learn more about the options for customizing the model, reach out to their team.