Using CPU for AI Inference: A Cost-Effective Alternative to GPU-Based Cloud Solutions

Jason Bell
4 min read · Feb 25, 2024

In the rapidly evolving world of artificial intelligence (AI) and machine learning (ML), the demand for powerful and efficient computing resources has never been higher. So much so that NVIDIA’s share price made the headlines this week.

[Image: NVIDIA share prices over the years. Yeah, I wish I’d bought in 2019 too.]

As with all disruptive technologies, there’s no certainty that everything will continue on an upward curve, especially where share prices are concerned. With the number of legal actions and directives aimed at generative AI, things could turn quickly; when there’s plenty of supply and no demand, prices and fortunes can change fast.

How Did We Get Here?

Traditionally, Graphics Processing Units (GPUs) have been the go-to choice for AI training and inference tasks due to their superior parallel processing capabilities. However, the high cost of GPU-based cloud services has led many researchers and developers to explore Central Processing Units (CPUs) for AI inference as a more cost-effective alternative. This post explores the viability of using CPUs for AI inference: the benefits, the challenges, and when it makes sense to opt for CPUs over GPUs.

Understanding AI Inference on CPUs

AI inference is the process of using a trained AI model to make predictions or decisions based on new data. While AI training is a compute-intensive task that benefits significantly from the parallel processing power of GPUs, inference can often be run efficiently on CPUs, especially when optimised properly. This is because inference generally requires far less computation than training: a single forward pass over each new input rather than many passes over a large dataset.
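To make that concrete, here’s a minimal sketch of CPU-only inference using PyTorch and a pretrained torchvision model (both libraries and the ResNet-18 choice are my assumptions for illustration); the point is simply that a forward pass needs nothing GPU-specific.

```python
# Minimal CPU inference sketch (assumes PyTorch and torchvision are installed).
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()                        # inference mode: no dropout or batch-norm updates

x = torch.randn(1, 3, 224, 224)     # dummy input standing in for a real image

with torch.no_grad():               # no gradients needed at inference time
    logits = model(x)

print(logits.argmax(dim=1).item())  # predicted class index
```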

Benefits of Using CPUs for AI Inference

1. Cost-Effectiveness: CPUs are generally less expensive than GPUs, both in terms of upfront costs and operational expenses. This makes CPU-based inference an attractive option for startups and businesses looking to deploy AI solutions without the heavy investment required for GPU-based cloud services.

2. Flexibility and Availability: CPUs are ubiquitous and available in virtually all computing environments, from personal laptops to cloud servers. This widespread availability makes it easier to deploy and scale AI applications without being limited by the availability of specialised GPU resources.

3. Optimised Software Libraries: Advances in software libraries and frameworks have significantly improved the efficiency of running AI inference on CPUs. Libraries such as Intel’s oneDNN (the oneAPI Deep Neural Network Library) and the OpenVINO toolkit have been optimised for high performance on Intel CPUs, making it feasible to approach GPU-level performance for certain inference tasks.
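As a rough sketch of what that looks like in practice, the snippet below loads a model with OpenVINO’s Python API and compiles it for the CPU. The model files ("model.xml"/"model.bin") are hypothetical placeholders for a network exported to OpenVINO’s IR format, and the input shape is assumed for illustration.

```python
# Rough sketch of CPU inference with the OpenVINO toolkit.
# "model.xml" / "model.bin" are hypothetical placeholders for an exported IR model.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")                     # reads model.xml + model.bin
compiled = core.compile_model(model, device_name="CPU")  # compile for the local CPU

x = np.random.rand(1, 3, 224, 224).astype(np.float32)    # dummy input batch
result = compiled([x])[compiled.output(0)]               # run inference, take the first output
print(result.shape)
```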

Challenges and Considerations

1. Performance Limitations: While CPUs have become more efficient at handling AI inference, GPUs still offer superior performance for complex models and large-scale applications due to their parallel processing capabilities. The choice between CPU and GPU therefore depends on the specific requirements of the application, including model complexity and latency targets.

2. Optimisation Efforts: To achieve optimal performance on CPUs, AI models and inference code may need to be specifically optimised for the CPU architecture. This can include leveraging specific software libraries, adjusting batch sizes, and tuning model parameters, which may require additional development effort (a small example is sketched after this list).

3. Energy Efficiency: While CPUs are becoming more energy-efficient, GPUs still have an edge in terms of performance per watt for high-intensity computing tasks. This aspect is crucial for large-scale deployments where energy consumption directly impacts operational costs.
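To make point 2 a little more concrete, here’s a hedged sketch of two common CPU-side knobs: capping the number of intra-op threads and keeping batches small so per-request latency stays predictable. The four-thread figure and the tiny model are illustrative assumptions, not recommendations.

```python
# Illustrative CPU tuning sketch; thread count, model and batch size are assumptions.
import torch
import torch.nn as nn

torch.set_num_threads(4)            # assumption: four physical cores available to the service

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
model.eval()

batch = torch.randn(8, 128)         # small batch: favours latency over throughput
with torch.no_grad():
    out = model(batch)
print(out.shape)                    # torch.Size([8, 10])
```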

When to Use CPUs for AI Inference

Choosing between CPU and GPU for AI inference depends on several factors, including cost constraints, application requirements, and scalability needs. CPUs are well-suited for:

- Small to medium-sized models where the inference latency meets the application requirements (a quick way to check this is sketched after this list).
- Applications with sporadic or low request rates, where the cost savings of CPUs outweigh the performance benefits of GPUs.
- Deployments where minimising operational expenses is a priority, and the slightly lower performance of CPUs is acceptable.
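In practice the simplest way to decide is to measure. The sketch below times a small model on the CPU and compares the average latency against a per-request budget; the model, input size, and 50 ms budget are all hypothetical values chosen for illustration.

```python
# Quick-and-dirty latency check (model, input size and 50 ms budget are assumptions).
import time
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 10))
model.eval()
x = torch.randn(1, 512)

with torch.no_grad():
    for _ in range(10):             # warm-up runs
        model(x)
    runs = 100
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    avg_ms = (time.perf_counter() - start) / runs * 1000

budget_ms = 50                       # hypothetical per-request latency budget
print(f"average latency: {avg_ms:.2f} ms")
print("CPU meets the budget" if avg_ms <= budget_ms else "consider a GPU or further optimisation")
```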

While GPUs remain the preferred choice for training AI models and handling complex inference tasks, CPUs present a viable and cost-effective alternative for many inference applications. By carefully evaluating the specific needs of their AI applications and optimising their models and infrastructure accordingly, businesses and researchers can leverage CPUs to reduce costs without significantly compromising performance. As software libraries and CPU technologies continue to advance, the gap between CPU and GPU performance for AI inference is expected to narrow, further enhancing the attractiveness of CPU-based inference solutions.


Jason Bell

A polymath of ML/AI, expert in container deployments and engineering. Author of two machine learning books for Wiley Inc.