
Deploying large language models (LLMs) locally to save money and protect data privacy is a great idea!

But once you dive into the world of models, you'll quickly be overwhelmed by the options: 7B, 14B, 32B, 70B... Even the same model often comes in several parameter sizes. Which one should you choose?

And how powerful is your computer? Which models can it actually run?

Don't worry! This article will clear things up and explain, in plain terms, how to choose the right hardware for local LLM deployment. You won't be confused after reading it!

There is a Hardware Configuration and Model Size Reference Table at the bottom of this article.

Understanding LLM Parameters: What Do 7B, 14B, and 32B Mean?

  • Meaning of Parameters: The numbers 7B, 14B, 32B represent the number of parameters in a large language model (LLM), where "B" is an abbreviation for Billion. Parameters can be thought of as the "weights" that the model learns during training, and they store the model's understanding of language, knowledge, and patterns.
  • Parameter Count and Model Capability: Generally, the more parameters a model has, the more complex it is: in theory it can learn and store richer information, capture more complex language patterns, and perform better at understanding and generating text.
  • Resource Consumption and Model Size: Models with more parameters also require more compute (GPU power), more memory (VRAM and system RAM), and, for training, more data. (A rough memory calculation follows this list.)
  • Small vs. Large Models:
    • Large Models (e.g., 32B, 65B, or larger): Can handle more complex tasks, generate more coherent and nuanced text, and often perform better at knowledge Q&A, creative writing, and similar workloads. However, their hardware requirements are high and they run relatively slowly.
    • Small Models (e.g., 7B, 13B): Consume fewer resources, run faster, and are better suited to resource-limited devices or latency-sensitive applications. They can still perform well on simpler tasks.
  • Selection Trade-offs: Choosing a model size requires a trade-off between the model's capabilities and hardware resources. More parameters are not necessarily "better". You need to choose the most suitable model based on the actual application scenario and hardware conditions.
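
To make the resource math concrete, here is a minimal sketch (plain Python, no external libraries) that estimates how much memory a model's weights need at different precisions. The ~20% overhead factor for the KV cache and activations is a rough assumption of mine, not a fixed rule:

```python
def model_memory_gb(params_billion: float, bits_per_param: int, overhead: float = 1.2) -> float:
    """Rough estimate of memory needed to hold a model's weights.

    overhead (~20%) is a loose allowance for the KV cache and activations;
    real usage varies with context length and the runtime you use.
    """
    bytes_per_param = bits_per_param / 8
    return params_billion * 1e9 * bytes_per_param * overhead / 1024**3

for params in (7, 13, 32, 70):
    print(
        f"{params}B model: "
        f"FP16 ~{model_memory_gb(params, 16):.1f} GB, "
        f"8-bit ~{model_memory_gb(params, 8):.1f} GB, "
        f"4-bit ~{model_memory_gb(params, 4):.1f} GB"
    )
```

For example, a 7B model needs roughly 14 GB for its weights at FP16 (about 16 GB with overhead) but only around 4 GB at 4-bit, which is why quantization matters so much on consumer hardware.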

What Kind of Hardware Do I Need to Run Local Models?

  • Core Requirement: Video RAM (VRAM)

    • Importance of VRAM: When you run a model, its parameters and intermediate results must be loaded into VRAM, so VRAM size is the single most critical hardware spec for running local LLMs. Insufficient VRAM may prevent a model from loading at all, restrict you to very small models, or drastically slow down inference.
    • The Bigger, the Better: Ideally, get a GPU with as much VRAM as possible; more VRAM lets you run larger models at better performance. (A quick script to check your hardware appears after this list.)
  • Second Most Important: System Memory (RAM)

    • Role of RAM: System RAM loads the operating system and programs and supplements VRAM. When VRAM runs out, system RAM can serve as "overflow" space, but because RAM is far slower than VRAM, this significantly reduces inference speed.
    • Sufficient RAM is also Important: Aim for at least 16 GB, ideally 32 GB or more, especially when your GPU's VRAM is limited; ample RAM helps relieve VRAM pressure.
  • Processor (CPU)

    • Role of CPU: The CPU mainly handles data preprocessing, model loading, and some of the computation (especially with CPU offloading). A capable CPU speeds up model loading and can assist the GPU with computation to a degree.
    • NPU (Neural Processing Unit): Some laptops include an NPU, dedicated hardware for accelerating AI workloads. An NPU can speed up certain operations, including inference for some models, improving efficiency and reducing power draw. If your laptop has one, it's a bonus, but the GPU remains the core component for running local LLMs; NPU support and its actual benefit depend on the specific model and software stack.
  • Storage (Hard Drive/SSD)

    • Role of Storage: You need enough disk space to store model files, which are large: a quantized 7B model typically takes 4-5 GB, and larger models can take tens or even hundreds of GB.
    • SSD is Better than HDD: Using a solid-state drive (SSD) instead of a mechanical hard drive (HDD) can significantly speed up model loading.
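
If you are unsure what your machine has, a quick check like the sketch below helps. This is a minimal example assuming PyTorch and psutil are installed; tools like `nvidia-smi` or your OS's system monitor report the same information:

```python
import psutil  # pip install psutil
import torch   # pip install torch

# System RAM
ram_gb = psutil.virtual_memory().total / 1024**3
print(f"System RAM: {ram_gb:.1f} GB")

# GPU VRAM (Nvidia, via CUDA), if available
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / 1024**3
    print(f"GPU: {props.name}, VRAM: {vram_gb:.1f} GB")
else:
    print("No CUDA GPU detected; expect to rely on the CPU or integrated graphics.")
```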

Hardware Priority

  1. Video RAM (VRAM) (Most Important)
  2. System Memory (RAM) (Important)
  3. GPU Performance (Compute Power) (Important)
  4. CPU Performance (Supporting Role)
  5. Storage Speed (SSD is Better than HDD)

What if I Don't Have a Dedicated GPU?

  • Run with Integrated Graphics and CPU: Without a dedicated GPU, you can still run models on integrated graphics (such as Intel Iris Xe) or entirely on the CPU, though performance will be severely limited. Stick to 7B or smaller, heavily optimized models and use techniques such as quantization to cut resource requirements (see the CPU-only sketch after this list).
  • Cloud Services: If you need to run large models but your local hardware falls short, consider cloud GPU services such as Google Colab, AWS SageMaker, or RunPod.
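
As a concrete illustration of CPU-only inference, here is a minimal sketch using the llama-cpp-python library (my choice for the example, not something this article prescribes; the model path and thread count are placeholders to adjust for your machine):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load a 4-bit quantized GGUF model; the path is a placeholder.
llm = Llama(
    model_path="./models/llama-2-7b.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=2048,    # context window size
    n_threads=8,   # roughly match your physical CPU core count
)

output = llm("Q: What is quantization? A:", max_tokens=128)
print(output["choices"][0]["text"])
```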

How to Run Local Models?

For beginners, it is recommended to use some user-friendly tools that simplify the process of running local models:

  • Ollama: Command-line based, but very simple to install and use; it focuses on getting models running quickly (a minimal Python usage sketch follows this list).
  • LM Studio: Has a simple and intuitive interface, supports model download, model management, and one-click execution.
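
For example, once Ollama is installed and a model has been pulled (e.g., `ollama pull llama3`; "llama3" here is just a placeholder for whichever model you choose), its official Python client offers a minimal way to chat with it. A sketch, assuming `pip install ollama` and a running Ollama server:

```python
import ollama  # pip install ollama; requires the Ollama server running locally

# "llama3" is a placeholder; use any model you have pulled locally.
response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Explain VRAM in one sentence."}],
)
print(response["message"]["content"])
```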

Hardware Configuration and Model Size Reference Table


X86 Laptop

| Hardware | Memory | Recommended Quantization | Recommended Model Size (Quantized) | Notes |
| --- | --- | --- | --- | --- |
| Laptop with integrated graphics (e.g., Intel Iris Xe) | Shared system memory (8 GB+ RAM) | 8-bit, or even 4-bit | ≤ 7B (extremely quantized) | A very basic local experience, suitable for learning and light use. Performance is limited and inference is slow. Use 4-bit or lower quantization to minimize memory usage. Suitable for small models such as TinyLlama. |
| Entry-level gaming laptop / thin-and-light with discrete graphics (e.g., RTX 3050/4050) | 4-8 GB VRAM + 16 GB+ RAM | 4-bit to 8-bit | 7B - 13B (quantized) | Runs 7B models relatively smoothly; some 13B models are feasible with quantization and optimization. Good for experiencing mainstream small and medium models. VRAM is still limited, so large models will struggle. |
| Mid-to-high-end gaming laptop / mobile workstation (e.g., RTX 3060/3070/4060) | 8-16 GB VRAM + 16 GB+ RAM | 4-bit to 16-bit (flexible) | 7B - 30B (quantized) | Runs 7B and 13B models comfortably and can attempt ~30B models with good quantization and optimization. Choose the quantization precision that balances performance and quality for your needs. Good for exploring a wider range of medium and large models. |

ARM (Raspberry Pi / Apple Silicon)

| Hardware | Memory | Recommended Quantization | Recommended Model Size (Quantized) | Notes |
| --- | --- | --- | --- | --- |
| Raspberry Pi 4/5 | 4-8 GB RAM | 4-bit (or lower) | ≤ 7B (extremely quantized) | Limited by memory and compute; mainly for running very small models or as an experimental platform. Suitable for studying model quantization and optimization techniques. |
| Apple M1/M2/M3 (unified memory) | 8-64 GB unified memory | 4-bit to 16-bit (flexible) | 7B - 30B+ (quantized) | The unified memory architecture uses memory efficiently; even an 8 GB M-series Mac can run moderately sized models. Higher-memory versions (16 GB+) can run larger models, even above 30B. Apple chips also have an energy-efficiency advantage. |

Nvidia GPU (Desktop / Server)

| Hardware | Memory | Recommended Quantization | Recommended Model Size (Quantized) | Notes |
| --- | --- | --- | --- | --- |
| Entry-level discrete GPU (e.g., RTX 4060/4060 Ti) | 8-16 GB VRAM | 4-bit to 16-bit (flexible) | 7B - 30B (quantized) | Performance close to a mid-to-high-end gaming laptop, but desktops dissipate heat better and can run stably for long periods. Good value for money; suitable for entry-level local LLM users. |
| Mid-range discrete GPU (e.g., RTX 4070/4070 Ti/4080) | 12-16 GB VRAM | 4-bit to 16-bit (flexible) | 7B - 30B+ (quantized) | Runs medium and large models smoothly, with headroom to try larger ones. Suitable for users with high expectations for the local LLM experience. |
| High-end discrete GPU (e.g., RTX 3090/4090, RTX 6000 Ada) | 24-48 GB VRAM | 8-bit to 32-bit | 7B - 70B+ (quantized/native) | Runs most open-source LLMs, including large models such as 65B and 70B. Try higher precision (16-bit, 32-bit) for the best quality, or use quantization to fit even larger models. Suitable for professional developers, researchers, and heavy LLM users. |
| Server-class GPU (e.g., A100, H100, A800, H800) | 40-80 GB+ VRAM | 16-bit to 32-bit (native) | 30B - 175B+ (native/quantized) | Designed for AI computing, with very large VRAM and extremely strong compute. Can run very large models and even handle training and fine-tuning. Suitable for enterprise applications, large-scale deployment, and research institutions. |

Table Supplementary Notes

  • Quantization: Reducing the numerical precision of model parameters, e.g., from 16-bit floating point (float16) to 8-bit integer (int8) or 4-bit integer (int4). Quantization significantly shrinks model size and VRAM usage and speeds up inference, at the cost of a small loss in accuracy (a toy example follows this list).
  • Extreme Quantization: Refers to using very low bit precision quantization, such as 3-bit or 2-bit. It can further reduce resource requirements, but model quality degradation may be more obvious.
  • Native: The model runs at its original precision, such as float16 or bfloat16. This gives the best model quality but also the highest resource requirements.
  • Quantized Parameter Range: The "Recommended Model Size (Quantized)" column indicates what the hardware can run reasonably smoothly with sensible quantization. What you can actually run also depends on the model architecture, quantization level, software optimization, and other factors; treat the ranges as a rough guide.
  • Unified Memory: A feature of Apple Silicon: the CPU and GPU share the same physical memory, which makes data exchange more efficient.
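
To make the quantization notes concrete, here is a toy sketch of symmetric int8 quantization of a weight matrix (a simplified illustration, not the exact scheme any particular library uses):

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric int8 quantization: map floats onto the integer range [-127, 127]."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 values and the scale."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print("max abs error:", np.abs(w - w_hat).max())        # small but nonzero
print("storage: 4 bytes/param (fp32) -> 1 byte/param (int8)")
```

Real quantization schemes (e.g., per-channel or group-wise int4 with outlier handling) are more sophisticated, but the trade-off is the same: fewer bits per parameter in exchange for a small reconstruction error.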