AI Essentials: What is fine-tuning?

Oct 17, 2024

By Min Jun Jung, Policy Fellow, Engine Advocacy & Foundation

This blog continues our series called “AI Essentials,” which aims to bridge the knowledge gap surrounding AI-related topics. It discusses what fine-tuning is and how startups can leverage it to drive greater innovation in the AI ecosystem.

Developing AI is incredibly expensive, but that hasn’t stopped startups with limited resources from innovating in AI and creating new products that improve our lives and businesses, thanks to a process called fine-tuning.

Fine-tuning is the process of adapting a pretrained model to perform specific tasks. Rather than training a model from scratch, fine-tuning leverages the general knowledge already acquired by the model during its initial training, specializing it for a developer’s specific needs.

For example, OpenAI’s GPT-4 is a generalized large language model capable of performing a variety of tasks, ranging from writing essays to planning parties. Fine-tuning involves taking a model like GPT-4 and adapting it for a particular use, such as creating e-commerce product descriptions.
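To make that concrete, here is a minimal sketch of what fine-tuning through a hosted service could look like, using OpenAI’s fine-tuning API. The training file is hypothetical, and the model name is illustrative; which models can be fine-tuned (GPT-4 itself generally cannot, while smaller variants such as gpt-4o-mini can) depends on the provider’s current offering.

```python
# Minimal sketch: fine-tuning a hosted model via OpenAI's fine-tuning API.
# "product_descriptions.jsonl" is a hypothetical file of chat-formatted examples
# (prompt plus ideal product description); the model name is illustrative.
from openai import OpenAI

client = OpenAI()

# Upload the task-specific training examples
training_file = client.files.create(
    file=open("product_descriptions.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job on top of the pretrained model
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",
)
print(job.id, job.status)  # poll the job until it finishes, then call the resulting model
```

Once the job completes, the startup calls the resulting fine-tuned model exactly as it would the base model, but its outputs now reflect the product-description examples it was trained on.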

The process of fine-tuning is similar to training an AI model from scratch; however, instead of starting with random weights, fine-tuning begins with the weights of a model that has already learned general patterns from a large and diverse dataset, often one released as an open-source or open-weight model. The process starts by gathering a dataset specific to the new task. For instance, fine-tuning a model to create e-commerce product descriptions would involve collecting numerous examples of product descriptions, sales statistics, and order histories. Examples from this dataset are then passed through the model, which adjusts its internal weights to learn patterns unique to the task. As the model learns from the new data, it specializes its knowledge, becoming especially proficient at the task at hand.
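For startups working with open-weight models rather than a hosted API, the same loop can be run locally. The sketch below uses the Hugging Face Transformers library; the base model, dataset file, and training settings are assumptions chosen for illustration.

```python
# Minimal sketch: fine-tuning an open-weight model locally with Hugging Face
# Transformers. The base model, dataset file, and hyperparameters are illustrative.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "gpt2"  # any small open-weight causal language model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)  # starts from pretrained weights, not random ones

# Task-specific data, e.g. one product description per line of a text file
dataset = load_dataset("text", data_files={"train": "product_descriptions.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_data = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="product-description-model",
        num_train_epochs=3,
        per_device_train_batch_size=4,
    ),
    train_dataset=train_data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # each pass over the examples nudges the pretrained weights toward the new task
trainer.save_model("product-description-model")
```

The resulting checkpoint is a specialized copy of the base model that a startup can deploy or keep refining, without ever paying for the original large-scale training run.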

Startups fine-tune existing models to circumvent the high costs and complexities of building an AI model from scratch. This approach allows them to skip the initial training phase, which consumes vast amounts of data and compute, and focus instead on innovating on top of existing models. Fine-tuning not only cuts costs and saves resources but can also reduce risks related to bias and user data privacy, since it leverages models trained on diverse, high-quality data. Overall, fine-tuning democratizes AI development, enabling startups to build innovative AI solutions that compete effectively with larger incumbents and address unique service gaps.

Fine-tuning, which relies on access to pretrained models and model weights, is deeply intertwined with open-source AI. To foster innovation powered by fine-tuning, policymakers should also support open-source initiatives. As an additional benefit, open-source and open-weight models can actually make AI safer by attracting testing and input from researchers and developers worldwide.

By advocating for open-source, policymakers enable startups to innovate without the prohibitive costs associated with developing models from scratch, fostering greater inclusion, innovation, and creativity in the AI ecosystem.

Engine is a non-profit technology policy, research, and advocacy organization that bridges the gap between policymakers and startups. Engine works with government and a community of thousands of high-technology, growth-oriented startups across the nation to support the development of technology entrepreneurship through economic research, policy analysis, and advocacy on local and national issues.
