OpenAI is set to launch a new feature that will let corporate clients tailor its most advanced AI model, GPT-4o, to their specific needs using their own company data. The move comes amid increasing competition in the AI sector and growing pressure on businesses to show tangible returns on their AI investments.
The new feature, known in the AI industry as “fine-tuning,” will be available starting Tuesday. Fine-tuning allows companies to train AI models with additional, specialized data related to a specific task or field. For example, a skateboard manufacturer could fine-tune the AI to act as a customer service chatbot, answering questions about skateboard maintenance and parts.
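To give a sense of what that specialized training data looks like, here is a minimal sketch of chat-formatted examples in the JSONL layout OpenAI's fine-tuning endpoint accepts. The skateboard-shop questions and answers are hypothetical, written only to illustrate the format.

```python
import json

# Illustrative training examples in OpenAI's chat fine-tuning format.
# The skateboard-shop content below is invented for demonstration purposes.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a support agent for a skateboard shop."},
            {"role": "user", "content": "How often should I replace my bearings?"},
            {"role": "assistant", "content": "Clean them every few months and replace them once they stay noisy or slow after cleaning."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "You are a support agent for a skateboard shop."},
            {"role": "user", "content": "What deck width suits a size 11 shoe?"},
            {"role": "assistant", "content": "Most riders with size 11 shoes are comfortable on a deck between 8.25 and 8.5 inches wide."},
        ]
    },
]

# Fine-tuning data is uploaded as JSONL: one JSON object per line.
with open("skateboard_support_examples.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```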
While fine-tuning is a first for GPT-4o, OpenAI has previously allowed users to fine-tune other models, including GPT-4o mini, a less powerful, cost-effective version of GPT-4o. Fine-tuning is common among tech companies offering customizable AI models. OpenAI aims to simplify the process by working directly with customers, rather than relying on third-party services or less powerful tools.
Olivier Godement, OpenAI’s head of product for its API, stated, “We’ve been extremely focused on lowering the bar, the friction, the amount of work it takes to get started.”
Simple Process, Substantial Results
To fine-tune a model, customers upload their data to OpenAI’s servers. The training process typically takes one to two hours, according to John Allard, a software engineer at OpenAI who works on customization. Initially, the feature will support only text-based data, with plans to expand to images and other content in the future.
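As a rough illustration of that workflow, the sketch below uses OpenAI’s Python SDK to upload a training file and start a fine-tuning job. The file name and the model snapshot identifier are assumptions for the example; consult OpenAI’s documentation for the current values.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a JSONL file of chat-formatted training examples.
training_file = client.files.create(
    file=open("skateboard_support_examples.jsonl", "rb"),  # assumed file name
    purpose="fine-tune",
)

# Start a fine-tuning job on GPT-4o.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",  # assumed snapshot ID; check the docs for current model names
)

# The job runs asynchronously; poll or check the dashboard for completion.
print(job.id, job.status)
```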
Fine-tuning can greatly enhance the performance of GPT-4o by optimizing it for specific business needs. This could improve the model’s ability to understand and respond to complex, domain-specific instructions, making it more effective in tasks ranging from coding and creative writing to customer service and technical support.
Broad Accessibility, Even for Small Businesses
OpenAI claims that developers can see significant performance improvements with just a few dozen examples in their training data sets. This makes advanced AI technology more accessible, even for businesses with limited data resources.
This update follows a multiyear paid agreement between OpenAI and Condé Nast, the media giant behind The New Yorker, Vogue, and Wired. This partnership allows OpenAI to train its models on Condé Nast’s content library while linking users to news stories from these outlets.
A New Trend in AI?
The Condé Nast agreement isn’t the first such deal for OpenAI. In June, the company signed a similar agreement with Time, granting access to more than a century of content for model training. These partnerships highlight a significant shift in how AI companies are monetizing and utilizing media content.
One early example of fine-tuning GPT-4o comes from Cosine, a Y Combinator-backed startup developing Genie, an AI-powered software engineering assistant. By training GPT-4o on real-world examples from software engineers, Cosine taught the model to respond in a style similar to human engineers. This fine-tuning allowed Genie to outperform other AI-assisted coding solutions, according to OpenAI.
OpenAI’s new feature could play a crucial role in helping businesses unlock the full potential of AI in their operations. However, the customization comes at a cost. Fine-tuning GPT-4o will cost $25 per million training tokens, with one million tokens roughly equating to 2,500 pages of text. Businesses will also pay $3.75 per million input tokens and $15 per million output tokens when using the fine-tuned model. By contrast, fine-tuning GPT-4o mini will cost $3 per million training tokens.
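To put those prices in perspective, here is a small back-of-the-envelope calculation. The token volumes are hypothetical; only the per-million rates come from the figures quoted above.

```python
# Per-million-token rates quoted in the article.
TRAINING_PER_M = 25.00   # $ per million training tokens for GPT-4o fine-tuning
INPUT_PER_M = 3.75       # $ per million input tokens at inference
OUTPUT_PER_M = 15.00     # $ per million output tokens at inference

# Hypothetical usage volumes, chosen only for illustration.
training_tokens = 2_000_000    # roughly 5,000 pages of training text
monthly_input = 10_000_000     # monthly prompt volume
monthly_output = 2_000_000     # monthly completion volume

training_cost = training_tokens / 1e6 * TRAINING_PER_M          # $50.00 one-time
monthly_cost = (monthly_input / 1e6 * INPUT_PER_M               # $37.50
                + monthly_output / 1e6 * OUTPUT_PER_M)          # + $30.00 = $67.50

print(f"One-time training cost: ${training_cost:.2f}")
print(f"Monthly inference cost: ${monthly_cost:.2f}")
```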