LLMs — The Benefits and Risks of Self-Hosting

Jason Bell
3 min read · Feb 26, 2024

In the rapidly evolving landscape of artificial intelligence (AI), large language models (LLMs) like GPT (Generative Pre-trained Transformer) have emerged as powerful tools for a wide array of applications, from content creation to customer service. However, while these models offer significant advantages, companies should be cautious of over-relying on them.

While 2022 and 2023 saw a sharp rise in the adoption of ChatGPT and other AI providers, 2024 may just be the year when the legalities of these models come back to bite.

It’s important to understand the current landscape of these legal actions and to predict what the future holds, both for us and for business in general. And when I say “predict the future”, I mean with our own heads and guts, not with a large language model.

Accuracy and Limitations of Large Language Models

Large language models are trained on vast datasets, enabling them to generate human-like text based on the input they receive. This capability has been leveraged in creating content, answering queries, and even coding. However, their accuracy can be a double-edged sword. While LLMs can process and generate information at an unprecedented scale, they are not infallible. They sometimes produce biased, incorrect, or nonsensical answers due to the limitations of their training data and the inherent challenges in understanding human context and subtleties.

Moreover, LLMs lack true comprehension. Their responses are based on patterns in data rather than an understanding of the world. This can lead to errors in judgment or ethical lapses, such as generating content that is inadvertently offensive or misleading. The reliance on these models without proper oversight can therefore pose significant risks to a company’s reputation and operational integrity.

Fine-Tuning and Customisation

One way to enhance the accuracy and relevance of LLM outputs is through fine-tuning. This process involves training the model on a more specific dataset after its initial pre-training on a broad corpus. Fine-tuning allows companies to tailor the model’s responses to better align with their specific needs and values, potentially reducing the incidence of errors and improving the model’s utility in niche applications.
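
As a rough illustration of what that looks like in practice, here is a minimal fine-tuning sketch using the Hugging Face transformers library. The base model, dataset file and hyperparameters are placeholders rather than recommendations; a real project would add evaluation, checkpointing and very likely a parameter-efficient method such as LoRA to keep the compute bill sane.

```python
# Minimal fine-tuning sketch with Hugging Face transformers.
# Model name, dataset path and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "distilgpt2"                      # small model that fits on modest hardware
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token      # GPT-2 family models have no pad token
model = AutoModelForCausalLM.from_pretrained(base_model)

# A company-specific text corpus, one example per line (hypothetical file).
dataset = load_dataset("text", data_files={"train": "company_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    # mlm=False gives standard causal language-model labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("finetuned-model")
```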

However, fine-tuning also presents challenges. It requires significant computational resources and expertise in machine learning. Moreover, if the dataset used for fine-tuning is biased or flawed, it can exacerbate the model’s tendency to generate biased or inaccurate outputs. Companies must carefully curate their training data and continuously monitor the model’s performance to mitigate these risks.

Experimentation

What I noticed during 2023 was that users were happy to jump to ChatGPT without a care for the downsides of using the system. I love what GPT-3.5 and GPT-4 can do; they are incredible. At the same time, though, I’m very aware of what information I’m typing into the UI.

I’ve been using Ollama a lot recently, firstly to try different models, but also as an experiment to run smaller, compact model builds on everyday hardware. I recently wrote about running model inference (prediction) on CPU hardware rather than on a GPU.
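
If you want to try the same kind of experiment, the sketch below prompts a locally running Ollama instance from Python over its REST API. It assumes Ollama is installed, listening on its default port, and that a small model (llama2 here, purely as an example) has already been pulled. The appeal is that nothing in the request leaves your own machine.

```python
# Minimal sketch: prompting a locally hosted model through Ollama's REST API.
# Assumes Ollama is running on its default port and the model has been pulled
# beforehand (e.g. `ollama pull llama2`). Inference falls back to CPU if no GPU is found.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama2",           # any locally pulled model tag
        "prompt": "Summarise the risks of self-hosting large language models.",
        "stream": False,             # return a single JSON object rather than a stream
    },
    timeout=300,
)
response.raise_for_status()
print(response.json()["response"])   # the generated text
```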

Self-Hosting: Benefits and Risks

Self-hosting LLMs offers companies greater control over their AI infrastructure, potentially leading to better performance, customisation, and data privacy. By hosting the model on their own servers, businesses can ensure that their proprietary data does not leave their control, addressing concerns about data security and privacy.

However, self-hosting comes with its own set of challenges. It requires substantial investment in hardware and expertise to manage and maintain the infrastructure. Moreover, the responsibility for updating the model to reflect new information and for safeguarding against security vulnerabilities lies entirely with the company. This can be a daunting task, given the rapid pace of developments in AI and cybersecurity.

Conclusion

Large language models represent a significant advancement in AI, offering businesses the potential to innovate and improve efficiency. However, their limitations and the complexities involved in fine-tuning and self-hosting necessitate a cautious approach. Companies must balance the benefits of leveraging these powerful tools against the risks of over-reliance, inaccuracies, and ethical concerns. By doing so, they can harness the potential of LLMs while mitigating the risks associated with these cutting-edge technologies.


Jason Bell

The Startup Quant and founder of ATXGV. Author of two machine learning books for Wiley Inc.