
Deploying large language models locally to save money and protect data privacy is a great idea!

But diving into the world of local models can be overwhelming: 7B, 14B, 32B, 70B... even the same model often ships in several parameter sizes. Which one should you choose?

And what is the capability of your computer? Which model can it handle?

Don't panic! This article will help you clarify your thoughts and show you, in the simplest way, how to choose the right hardware for local large language model deployment! You'll no longer be confused after reading this!

A Hardware Configuration and Model Size Reference Table is at the bottom of this article.

Understanding LLM Parameters: What Do 7B, 14B, 32B Represent?

  • Meaning of Parameters: The numbers 7B, 14B, 32B represent the number of parameters in a large language model (LLM), where "B" stands for Billion. Parameters can be thought of as the "weights" learned by the model during training, storing the model's understanding of language, knowledge, and patterns.
  • Parameter Count and Model Capability: Generally, the more parameters a model has, the more complex it is. Theoretically, it can learn and store richer information, thus capturing more complex language patterns and performing more powerfully in understanding and generating text.
  • Resource Consumption and Model Size: Models with more parameters also demand more computing resources (GPU power) and more memory (VRAM and system RAM) to run, and more data to train.
  • Small Models vs. Large Models:
    • Large Models (e.g., 32B, 65B, or larger): Can handle more complex tasks, generate more coherent and nuanced text, and may perform better in knowledge Q&A, creative writing, and other areas. However, they have high hardware requirements and run relatively slowly.
    • Small Models (e.g., 7B, 13B): Consume fewer resources, run faster, and are more suitable for running on devices with limited resources or in application scenarios that are sensitive to latency. Small models can also perform well on some simple tasks.
  • Trade-offs in Selection: Choosing a model size means trading capability against hardware resources. More parameters do not automatically mean "better": pick the model that fits your actual use case and hardware (the sketch below gives a rough sense of memory requirements).
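
As a back-of-the-envelope check, the memory needed just to hold a model's weights is roughly the parameter count times the bytes per parameter; real usage is higher once activations, the KV cache, and the runtime itself are included. A minimal Python sketch, with the precisions and model sizes chosen purely as examples:

```python
# Back-of-the-envelope weight-memory estimate (weights only; real usage is
# higher once activations, KV cache, and the runtime are counted in).

def weight_memory_gb(params_billion: float, bits_per_param: int) -> float:
    """Approximate GB needed just to hold the weights."""
    total_bytes = params_billion * 1e9 * bits_per_param / 8
    return total_bytes / 1024**3

for size in (7, 14, 32, 70):  # model sizes mentioned in this article
    for bits, label in ((16, "float16"), (8, "int8"), (4, "int4")):
        print(f"{size:>3}B @ {label:<7}: ~{weight_memory_gb(size, bits):6.1f} GB")
```

By this rough estimate, a 7B model at 4-bit weighs in at around 3-4 GB, while a 70B model at 16-bit needs well over 100 GB of memory, which is why the parameter count dominates the hardware discussion below.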

What Kind of Hardware Do I Need to Run Local Models?

  • Core Requirement: Video RAM (VRAM)

    • Importance of VRAM: When running a large model, its parameters and intermediate results must be loaded into VRAM, so VRAM size is the single most important hardware spec for running local LLMs. With too little VRAM, the model may fail to load, you may be limited to very small models, or inference may slow down dramatically.
    • The More, the Better: The more VRAM your GPU has, the larger the models you can run and the better the performance.
  • Second Most Important: System Memory (RAM)

    • Role of RAM: System RAM loads the operating system and applications and supplements VRAM. When VRAM runs out, system RAM can act as "overflow" space, but it has far lower bandwidth than VRAM, so model throughput drops sharply.
    • Sufficient RAM is also Important: Aim for at least 16GB, and ideally 32GB or more, especially when your GPU VRAM is limited; extra RAM helps relieve VRAM pressure.
  • Processor (CPU)

    • Role of CPU: The CPU is mainly responsible for data preprocessing, model loading, and some model calculation tasks (especially in the case of CPU offloading). A good CPU can improve model loading speed and assist the GPU in calculations to a certain extent.
    • NPU (Neural Processing Unit): Some laptops include an NPU, hardware designed specifically to accelerate AI workloads. An NPU can speed up certain types of AI operations, including inference for some models, improving efficiency and reducing power consumption. If your laptop has one, it is a plus, but the GPU remains the core of local LLM inference; how much the NPU helps depends on the specific model and software support.
  • Storage (Hard Drive/SSD)

    • Role of Storage: You need enough disk space to store model files, which are large: a quantized 7B model typically takes 4-5GB, and bigger models can take tens or even hundreds of GB.
    • SSD is Better than HDD: Using a solid-state drive (SSD) instead of a mechanical hard drive (HDD) significantly speeds up model loading. (A quick script for checking your own VRAM, RAM, and disk space follows this list.)
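
If you want to see where your machine stands before downloading anything, a small Python sketch like the following reports the three numbers that matter most. It assumes the psutil package is installed; the VRAM check only runs if PyTorch with CUDA support happens to be present, and is skipped otherwise.

```python
# Minimal system check: reports system RAM, free disk space, and (if an
# NVIDIA GPU plus PyTorch with CUDA are available) total VRAM.
import shutil
import psutil  # third-party: pip install psutil

ram_gb = psutil.virtual_memory().total / 1024**3
disk_gb = shutil.disk_usage("/").free / 1024**3   # adjust the path on Windows
print(f"System RAM     : {ram_gb:.1f} GB")
print(f"Free disk space: {disk_gb:.1f} GB")

try:
    import torch  # optional: only used for the VRAM check
    if torch.cuda.is_available():
        props = torch.cuda.get_device_properties(0)
        print(f"GPU            : {props.name}, {props.total_memory / 1024**3:.1f} GB VRAM")
    else:
        print("No CUDA GPU detected; expect CPU-only (slow) inference.")
except ImportError:
    print("PyTorch not installed; skipping the VRAM check.")
```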

Hardware Priority

  1. Video RAM (VRAM) (Most Important)
  2. System Memory (RAM) (Important)
  3. GPU Performance (Compute Power) (Important)
  4. CPU Performance (Auxiliary Role)
  5. Storage Speed (SSD Better than HDD)

What If I Don't Have a Dedicated GPU?

  • Run with Integrated Graphics and CPU: If you don't have a dedicated GPU, you can still run models on integrated graphics (such as Intel Iris Xe) or entirely on the CPU, though performance will be much more limited. Focus on 7B or even smaller, highly optimized models and use techniques such as quantization to reduce resource requirements (a minimal CPU-only example follows this list).
  • Cloud Services: If you need to run large models but lack local hardware, consider using cloud GPU services such as Google Colab, AWS SageMaker, RunPod, etc.
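
One common way to run quantized models on the CPU is the llama.cpp family of runtimes. The sketch below uses the llama-cpp-python package with a 4-bit GGUF file; the file name and path are placeholders for whatever quantized model you have downloaded, and you need to install the package first (pip install llama-cpp-python).

```python
# CPU-only inference with a 4-bit quantized GGUF model via llama-cpp-python.
# The model path below is a placeholder; point it at any GGUF file you have.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/tinyllama-1.1b-q4_k_m.gguf",  # hypothetical file name
    n_ctx=2048,      # context window; smaller values need less RAM
    n_gpu_layers=0,  # 0 = keep every layer on the CPU
)

result = llm("Explain quantization in one sentence.", max_tokens=64)
print(result["choices"][0]["text"])
```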

How to Run Local Models?

For beginners, it is recommended to use some user-friendly tools that simplify the process of running local models:

  • Ollama: Operates through the command line but is very simple to install and use, and focuses on getting models running quickly. It also exposes a local HTTP API that other programs can call (see the sketch after this list).
  • LM Studio: Has a simple and intuitive interface, supports model downloading, model management, and one-click running.
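
For example, once Ollama is installed and a model has been pulled (the model name below is just an example; use whatever you pulled with `ollama pull`), you can query it from Python through its local HTTP API, which listens on port 11434 by default:

```python
# Query a locally running Ollama server through its HTTP API
# (default address http://localhost:11434). Requires: pip install requests
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",          # example model; use whatever you pulled
        "prompt": "In one sentence, what is VRAM?",
        "stream": False,              # return one JSON object, not a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```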

Hardware Configuration and Model Size Reference Table

X86 Laptops
| Hardware | Memory / VRAM | Recommended Quantization | Recommended LLM Parameter Range (Quantized) | Notes |
| --- | --- | --- | --- | --- |
| Integrated graphics laptop (e.g., Intel Iris Xe) | Shared system memory (8GB+ RAM) | 8-bit, or even 4-bit | ≤ 7B (extremely quantized) | Very basic local experience, good for learning and light use. Performance is limited and inference is slow. Use 4-bit or lower quantization to minimize memory usage. Best suited to very small models such as TinyLlama. |
| Entry-level gaming laptop / thin-and-light with dedicated GPU (e.g., RTX 3050/4050) | 4-8GB VRAM + 16GB+ RAM | 4-bit - 8-bit | 7B - 13B (quantized) | Runs 7B models fairly smoothly; some 13B models also work with quantization and optimization. Good for trying mainstream small and mid-sized models. VRAM is still limited, so large models remain difficult. |
| Mid-to-high-end gaming laptop / mobile workstation (e.g., RTX 3060/3070/4060) | 8-16GB VRAM + 16GB+ RAM | 4-bit - 16-bit (flexible) | 7B - 30B (quantized) | Runs 7B and 13B models comfortably and can attempt ~30B models with good quantization and optimization. Choose the quantization precision that balances speed and quality for your needs. Good for exploring a wider range of mid-sized and large models. |

ARM (Raspberry Pi, Apple M Series)
| Hardware | Memory / VRAM | Recommended Quantization | Recommended LLM Parameter Range (Quantized) | Notes |
| --- | --- | --- | --- | --- |
| Raspberry Pi 4/5 | 4-8GB RAM | 4-bit (or lower) | ≤ 7B (extremely quantized) | Limited by memory and compute; mainly for very small models or experimentation. Useful for studying quantization and optimization techniques. |
| Apple M1/M2/M3 (unified memory) | 8GB - 64GB unified memory | 4-bit - 16-bit (flexible) | 7B - 30B+ (quantized) | The unified memory architecture makes memory use more efficient; even an 8GB M-series Mac can run a modestly sized model. 16GB+ configurations can run larger models, even above 30B. Apple silicon is also very power-efficient. |

Nvidia GPU Computers
| Hardware | Memory / VRAM | Recommended Quantization | Recommended LLM Parameter Range (Quantized) | Notes |
| --- | --- | --- | --- | --- |
| Entry-level dedicated GPU (e.g., RTX 4060/4060 Ti) | 8-16GB VRAM | 4-bit - 16-bit (flexible) | 7B - 30B (quantized) | Performance is close to a mid-to-high-end gaming laptop, but desktops cool better and can run stably for long sessions. Good value for entry-level local LLM users. |
| Mid-range dedicated GPU (e.g., RTX 4070/4070 Ti/4080) | 12-16GB VRAM | 4-bit - 16-bit (flexible) | 7B - 30B+ (quantized) | Runs mid-sized and large models smoothly and can attempt larger ones. Suited to users with higher expectations for the local LLM experience. |
| High-end dedicated GPU (e.g., RTX 3090/4090, RTX 6000 Ada) | 24-48GB VRAM | 8-bit - 32-bit | 7B - 70B+ (quantized/native) | Runs most open-source LLMs, including 65B/70B models. Use higher precision (16-bit, 32-bit) for the best quality, or quantization to fit even larger models. Suited to professional developers, researchers, and heavy LLM users. |
| Server-class GPU (e.g., A100, H100, A800, H800) | 40GB - 80GB+ VRAM | 16-bit - 32-bit (native precision) | 30B - 175B+ (native/quantized) | Purpose-built for AI computing, with very large VRAM and compute. Can run the largest models and also handle training and fine-tuning. For enterprise deployments, large-scale serving, and research institutions. |

Table Supplementary Notes

  • Quantization: Refers to reducing the numerical precision of model parameters, such as reducing from 16-bit floating-point numbers (float16) to 8-bit integers (int8) or 4-bit integers (int4). Quantization can significantly reduce model size and VRAM usage, and accelerate inference speed, but may slightly reduce model accuracy.
  • Extreme Quantization: Refers to using very low bit precision, such as 3-bit or 2-bit. This further reduces resource requirements, but the drop in model quality becomes more noticeable.
  • Native: Refers to running the model at its original precision, such as float16 or bfloat16. This gives the best model quality but requires the most resources.
  • Quantized Parameter Range: The "Recommended LLM Parameter Range (Quantized)" in the table refers to the model parameter range that the hardware can roughly run smoothly under the premise of reasonable quantization. The actual size and performance of the model that can be run also depend on the specific model architecture, quantization level, software optimization, and other factors. The parameter range given here is for reference only.
  • Unified Memory: A feature of Apple Silicon chips, where the CPU and GPU share the same physical memory, resulting in higher data exchange efficiency.
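
Putting the table and these notes together, a rough selection heuristic might look like the sketch below. The VRAM thresholds are approximations distilled from the table above (assuming 4-8 bit quantization), not hard limits, and the function name and cut-offs are illustrative only.

```python
# Rough model-size suggestion from available VRAM, distilled from the table
# above. Thresholds assume 4-8 bit quantization and are approximations only.

def suggest_model_size(vram_gb: float) -> str:
    if vram_gb < 4:
        return "<= 7B, extremely quantized (or small CPU-only models)"
    if vram_gb < 8:
        return "7B - 13B (quantized)"
    if vram_gb < 12:
        return "7B - 30B (quantized)"
    if vram_gb < 24:
        return "7B - 30B+ (quantized)"
    if vram_gb < 40:
        return "7B - 70B+ (quantized or native)"
    return "30B - 175B+ (server-class hardware)"

print(suggest_model_size(12))  # e.g. an RTX 4070 (12 GB) -> "7B - 30B+ (quantized)"
```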