Go Big With GPT or Fine Tune Your Own?

Jason Bell
4 min read · Feb 27, 2024

The advent of large language models (LLMs) such as OpenAI’s GPT series has marked a significant milestone in the field of artificial intelligence. These models, trained on vast datasets, have shown remarkable abilities in understanding and generating human-like text, making them invaluable for a wide range of applications.

However, the “one-size-fits-all” approach of general LLMs may not be optimal for every use case, leading to a growing interest in smaller, fine-tuned models tailored to specific needs.

I’m going to explore the pros and cons of using a general large language model versus opting for a smaller model that has been fine-tuned for specific tasks.

Pros of General Large Language Models

Versatility and Broad Knowledge Base: General LLMs are trained on diverse datasets that encompass a wide array of topics, languages, and styles. This extensive training enables them to handle a variety of tasks without needing further specialization, from generating articles to answering questions across many domains.

Reduced Need for Fine-tuning: For many applications, the out-of-the-box capabilities of general LLMs are sufficient, eliminating the need for extensive customization. This can significantly reduce development time and costs, making LLMs accessible to a wider range of users and organisations.
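
As a minimal sketch of what "out of the box" looks like in practice, the snippet below calls a hosted general model through the OpenAI Python client. The model name and prompt are placeholders, and it assumes an API key is set in your environment.

```python
# Minimal sketch: using a hosted general LLM with no customisation at all.
# Assumes the openai package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # placeholder: any hosted general model you have access to
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise the trade-offs of fine-tuning a model."},
    ],
)
print(response.choices[0].message.content)
```

That is the whole integration: no training data, no infrastructure, just a prompt and a response.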

Continual Improvement and Support: Large language models often come with ongoing support and updates from their developers. This means they benefit from continual improvements, security updates, and expanded capabilities over time, ensuring that they remain effective tools for their users.

Cons of General Large Language Models

Resource Intensiveness: General LLMs are often resource-intensive, requiring substantial computational power for both training and inference. This can make them less accessible for individuals or organisations with limited computing resources.
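
To put rough numbers on that, here is a back-of-envelope sketch of the memory needed just to hold model weights at different sizes and precisions. The parameter counts are illustrative, and real inference also needs room for the KV cache, activations and framework overhead.

```python
# Rough estimate of memory required to hold model weights for inference.
def weight_memory_gb(parameters_billions: float, bytes_per_param: float) -> float:
    return parameters_billions * 1e9 * bytes_per_param / 1024**3

for name, params in [("2B model", 2), ("7B model", 7), ("70B model", 70)]:
    fp16 = weight_memory_gb(params, 2)   # 16-bit weights
    q4 = weight_memory_gb(params, 0.5)   # ~4-bit quantised weights
    print(f"{name}: ~{fp16:.0f} GB at fp16, ~{q4:.1f} GB at 4-bit")
```

A 2B model quantised to 4-bit fits comfortably on a laptop; a 70B model at fp16 does not fit on any single consumer GPU.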

It’s no secret that OpenAI faces criticism over the environmental impact of training its models and of running them day to day.

Potential for Bias and Errors: Despite their broad knowledge base, general LLMs can still produce biased or inaccurate outputs due to the biases present in their training data. While efforts are made to mitigate these issues, they cannot be completely eliminated, posing a risk in sensitive applications.

Generalisation over Specialisation: In some cases, the broad capabilities of general LLMs may come at the expense of depth in specific domains. For tasks requiring deep, specialised knowledge, a general LLM might not perform as well as a model fine-tuned for that particular area.

Pros of Smaller, Fine-Tuned Models

Efficiency and Lower Resource Requirements: Smaller models require less computational power, making them more efficient to run and more accessible to users with limited resources. This can also result in faster response times, which is crucial for applications requiring real-time processing.

Customisation and Specialisation: Fine-tuning allows for the customisation of models to specific tasks, languages, or domains, potentially leading to superior performance in those areas compared to general LLMs. This can be particularly beneficial for niche applications or when handling specialized knowledge.
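
As an illustration of what that fine-tuning might look like, here is a hedged sketch using Hugging Face transformers with a LoRA adapter from the peft library. The base model, dataset file and hyperparameters are placeholders rather than recommendations.

```python
# Hedged sketch: parameter-efficient fine-tuning (LoRA) of a small base model
# on a domain-specific text corpus. "domain_corpus.txt" and the hyperparameters
# are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

base_model = "gpt2"  # stand-in for whichever small base model you choose
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Wrap the base model so only small adapter matrices are trained.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                      batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-domain-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("finetuned-domain-model")
```

The point of the LoRA approach is that only a small set of adapter weights is trained, so a run like this is feasible on a single modest GPU.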

Reduced Risk of Bias and Errors: By carefully selecting and curating the training data, it’s possible to reduce the risk of bias and errors in the outputs. Fine-tuning offers the opportunity to address specific concerns and ensure the model’s outputs align more closely with the desired standards and values.
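
A simple curation pass before a fine-tuning run might look something like the sketch below. The file name, the "text" field and the filter rules are all illustrative stand-ins for whatever checks your domain actually needs; real curation usually combines automated filters with human review.

```python
# Hedged sketch: filtering a fine-tuning dataset before training.
from datasets import load_dataset

raw = load_dataset("json", data_files="domain_examples.jsonl")["train"]

blocklist = {"confidential", "lorem ipsum"}  # illustrative terms to exclude

def keep(example):
    text = example["text"].strip()
    too_short = len(text.split()) < 5                         # drop fragments
    flagged = any(term in text.lower() for term in blocklist)  # drop flagged text
    return not (too_short or flagged)

curated = raw.filter(keep)
print(f"kept {len(curated)} of {len(raw)} examples")
curated.to_json("domain_examples.curated.jsonl")
```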

Cons of Smaller, Fine-Tuned Models

Requirement for Specific Data and Expertise: Fine-tuning a model requires access to high-quality, task-specific data and expertise in machine learning and domain-specific knowledge. This can pose a barrier to entry for those without the necessary resources or skills.

Maintenance and Scalability: Smaller, specialised models may require regular updates and retraining to maintain their effectiveness, especially as the nature of the task or the associated data changes over time. Scaling these models to handle additional tasks or domains can also be challenging without significant rework.

When to Use Each Approach

The choice between using a general large language model and a smaller, fine-tuned model depends on several factors, including the specific requirements of the task, available resources, and the need for specialisation.

Use a General Large Language Model When:
- You require versatility and the ability to handle a wide range of tasks.
- You have access to sufficient computational resources.
- You wish to minimise development time and costs.

Opt for a Smaller, Fine-Tuned Model When:
- The task requires deep, specialised knowledge that general LLMs cannot provide.
- Efficiency and low resource consumption are priorities.
- You have access to high-quality, task-specific data and the expertise to develop and maintain the model.

Take time to experiment with each option. I bounce between ChatGPT and locally run models such as orca-mini or Gemma, very small models in the 2–3 billion parameter range that I can run on a laptop (my old 2015 MacBook Pro copes just fine with them).
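
If you want to try a small model locally yourself, one route (among several) is llama-cpp-python with a quantised GGUF build of the model. The file path below is a placeholder for whichever model you download.

```python
# Hedged sketch: running a small quantised model locally with llama-cpp-python.
# The GGUF path is a placeholder; download a quantised build of the model first.
from llama_cpp import Llama

llm = Llama(model_path="models/orca-mini-3b.Q4_K_M.gguf", n_ctx=2048)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain fine-tuning in two sentences."}],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"])
```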

The landscape is in constant change, with new methods and models coming out regularly. It’s worth keeping up with new releases, as one might just fit the bill for you.

Both general large language models and smaller, fine-tuned models offer distinct advantages and limitations. The decision to use one over the other should be guided by the specific needs of the task at hand, considering factors such as required expertise, resource availability, and the need for customisation and specialisation.

Written by Jason Bell

The Startup Quant and founder of ATXGV. Author of two machine learning books for Wiley Inc.