Want to locally deploy powerful open-source AI models like Qwen 2.5, Llama 3, and DeepSeek-R1, but struggling with the lack of simple and easy-to-use methods?

Don't worry! The golden combination of Ollama + Open WebUI will clear all obstacles for you.

This article is a step-by-step tutorial on using Ollama + Open WebUI to easily build a local AI environment, giving you your own powerful AI assistant and opening the door to the infinite possibilities of AI!

Friendly reminder: due to hardware limitations, local deployments usually cannot run the full-size version of DeepSeek-R1 (671B parameters). But don't worry: the smaller distilled versions (such as 1.5B or 7B) run smoothly on most personal computers and still provide excellent reasoning capabilities. More importantly, you can choose the version that best suits your needs!
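To get a feel for what "full-size" means in practice, here is a back-of-the-envelope sketch in Python. It is an approximation, not an official Ollama formula: it assumes roughly 4-bit quantization (which is what Ollama ships for most models by default) and ignores runtime overhead.

```python
def approx_model_size_gb(params_billions: float, bits_per_weight: float = 4) -> float:
    """Rough on-disk size of a quantized model, in GB.

    Assumes ~4 bits per weight (Ollama's common default quantization),
    so size ≈ parameter count × 0.5 bytes. This ignores the KV cache
    and runtime overhead, so treat it as a lower bound.
    """
    bytes_per_weight = bits_per_weight / 8
    return params_billions * bytes_per_weight  # billions of params × bytes ≈ GB

print(approx_model_size_gb(7))    # a 7B model ≈ 3.5 GB
print(approx_model_size_gb(671))  # the full 671B DeepSeek-R1 ≈ 335.5 GB
```

As the second line shows, the full model is far beyond consumer hardware, while a 7B distill fits comfortably on an ordinary laptop.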

Why Choose Ollama + Open WebUI?

Among the many local deployment options, the Ollama + Open WebUI combination stands out as the preferred choice of many AI enthusiasts. What makes it so appealing?

  • Ollama: A Simplified Model Engine
    • Ollama is like an "AI model treasure chest": with a single command, you can download, install, and run mainstream large language models such as Llama 3 and DeepSeek-R1!
  • Open WebUI: Elegant and Easy-to-Use Interactive Interface
    • Open WebUI provides a beautiful and intuitive web interface for Ollama.
    • Fully open source and free.

After deployment, just open http://127.0.0.1:8080 in your browser to start chatting with your AI assistant:

image.png
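Behind the scenes, Open WebUI talks to Ollama's REST API, which listens on http://127.0.0.1:11434 by default. If you ever want to script your assistant instead of chatting in the browser, a minimal stdlib-only Python sketch (assuming Ollama is running and the model is already pulled) looks like this:

```python
import json
import urllib.request

OLLAMA_HOST = "http://127.0.0.1:11434"  # Ollama's default API address

def build_generate_request(model: str, prompt: str):
    """Build (url, body) for Ollama's /api/generate endpoint."""
    url = f"{OLLAMA_HOST}/api/generate"
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return url, body

def ask_ollama(model: str, prompt: str) -> str:
    """Send one non-streaming prompt and return the model's reply."""
    url, body = build_generate_request(model, prompt)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# With Ollama running and deepseek-r1 pulled, a live call would be:
#   print(ask_ollama("deepseek-r1", "Why is the sky blue?"))
```

Open WebUI is doing essentially this for you, with streaming, history, and a pleasant interface on top.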

Exclusive to Windows Users: One-Click Startup Integrated Package, Say Goodbye to Cumbersome Configuration!

Considering the difficulties Windows users may encounter when setting up a Docker environment, we have prepared an all-in-one package that works as soon as you download and extract it, truly "out of the box"!

  1. Download and Decompress the Integrated Package:

    Integrated package download address: https://www.123684.com/s/03Sxjv-4cTJ3

    0.webp

    • If you have not installed Ollama, please double-click the ollama-0.1.28-setup.exe file in the integrated package to install it first. The installation process is very simple, just click "Next" all the way.
  2. Start WebUI:

    • Double-click the 启动webui.bat ("start webui") file in the integrated package to start Open WebUI.

    image.png

    • The first time you start it, the system will prompt you to set up an administrator account. Please follow the prompts to complete the registration.

    1.webp

Choose the Model You Want to Use

After entering Open WebUI, you will see the model selection area in the upper left corner. If there are no models in the list, don't worry, it means you have not downloaded any models yet.

3.webp

You can directly enter the model name in the input box to download it online from Ollama.com:

4.webp

Model Selection Tips:

  • Model Treasure House: Go to https://ollama.com/models to browse the rich collection of models officially provided by Ollama.
  • Parameter Scale: Each model comes in several versions (such as 1.5B, 7B, or 70B), indicating the number of parameters. More parameters generally mean a more capable model, but also greater demands on computing resources (RAM and VRAM).
  • Match Your Hardware: Choose a model that suits your hardware configuration. As a rule of thumb, if your combined RAM + VRAM exceeds the model file size, the model should run smoothly.
  • DeepSeek-R1: Search for deepseek-r1 in Ollama's model library to find it.

6.webp
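The "RAM + VRAM versus model file size" rule of thumb from the tips above can be sketched as a tiny helper. Note the 2 GB headroom figure is an illustrative guess of my own for the OS and KV cache, not an official number:

```python
def can_run(model_file_gb: float, ram_gb: float, vram_gb: float) -> bool:
    """Rule of thumb: a model fits if combined RAM + VRAM exceeds the
    model file size, plus some headroom for the OS and the KV cache."""
    headroom_gb = 2.0  # illustrative safety margin, not an official figure
    return ram_gb + vram_gb >= model_file_gb + headroom_gb

print(can_run(4.7, ram_gb=16, vram_gb=8))   # 7B-class model file: True
print(can_run(40.0, ram_gb=16, vram_gb=8))  # 70B-class model file: False
```

In other words, a typical machine with 16 GB of RAM and an 8 GB graphics card handles 7B-class models easily but should not attempt 70B-class ones.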

Taking the deployment of the deepseek-r1 model as an example:

  1. Select Model Specifications: On the https://ollama.com/library page, find the model version you want to deploy (for example, deepseek-r1).

    image.png

  2. Download Model: Paste the model name (for example, deepseek-r1) into the input box in the upper left corner of Open WebUI, and click the "Pull from ollama.com" button to start downloading.

    image.png

  3. Wait for the Download to Complete: The download time depends on your network speed and the model size, please be patient.

    image.png
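A note on those model names: a reference like deepseek-r1:7b is really name:tag, and Ollama resolves a bare name to its latest tag. This small sketch shows the convention, which is handy when you want a specific parameter size rather than the default:

```python
def split_model_ref(ref: str) -> tuple:
    """Split an Ollama model reference like 'deepseek-r1:7b' into
    (name, tag); a bare name defaults to the 'latest' tag, which is
    how Ollama itself resolves it."""
    name, sep, tag = ref.partition(":")
    return (name, tag) if sep else (name, "latest")

print(split_model_ref("deepseek-r1:7b"))  # ('deepseek-r1', '7b')
print(split_model_ref("deepseek-r1"))     # ('deepseek-r1', 'latest')
```

So pasting deepseek-r1:7b into the pull box fetches the 7B version specifically, while deepseek-r1 alone fetches whatever the library marks as latest.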

Start Your AI Journey

After the model is downloaded, you can chat with DeepSeek-R1 in Open WebUI! Feel free to explore its powerful features!

10.webp

If the model supports it, you can also upload images, files, and more for multimodal interaction, so your AI assistant is not only a good conversationalist but can also "read" images and documents!

image.png

Advanced Exploration: Hidden Treasures of Open WebUI

Open WebUI offers far more than this! Click the menu button in the upper left corner and you will find more surprises:

image.png

  • Personalized Customization: In the "Settings" panel, you can adjust the interface theme, font size, language, etc. according to your preferences to create an exclusive AI interaction experience.

    • You can also customize prompts to make your AI assistant understand you better!

    image.png

  • Multi-User Management: In the "Admin" panel, you can configure user registration, permissions, and more, making it easy to share your local AI resources with multiple people.

    image.png

  • Adjust Detailed Parameters: Click the controls button in the upper right corner of the chat window to set advanced generation parameters.

image.png
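Those same parameters can also be set programmatically: when talking to Ollama's API directly, they go in an options object. temperature and num_ctx are real Ollama option names; the default values below are just illustrative:

```python
import json

def build_request_with_options(model: str, prompt: str,
                               temperature: float = 0.7,
                               num_ctx: int = 4096) -> str:
    """JSON body for Ollama's /api/generate with generation options,
    the same knobs Open WebUI's advanced-parameters panel exposes."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {
            "temperature": temperature,  # higher = more varied output
            "num_ctx": num_ctx,          # context window size, in tokens
        },
    })

print(build_request_with_options("deepseek-r1", "Hello!"))
```

Lower the temperature for precise, repeatable answers; raise it for brainstorming and creative writing.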

Multi-Model Comparison: Which One Is Better?

Open WebUI also supports multi-model comparison, allowing you to easily compare the output results of different models and find the one that best suits your needs!

image.png

GPU Acceleration: Squeeze Out Your Graphics Card Performance! (Optional)

If you have an NVIDIA graphics card and have installed the CUDA environment, congratulations: with one simple step, you can have Ollama use the GPU to accelerate model inference and dramatically improve your AI assistant's response speed!

  • Double-click the GPU-cuda支持.bat ("GPU CUDA support") file in the integrated package to install the CUDA dependencies.
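If you are not sure whether your machine qualifies, a quick diagnostic (my own heuristic, not something Ollama provides) is to check whether nvidia-smi is installed and runs cleanly:

```python
import shutil
import subprocess

def nvidia_gpu_available() -> bool:
    """Heuristic check for a usable NVIDIA GPU: nvidia-smi must be on
    PATH and exit cleanly. If this returns True, Ollama should be able
    to offload model layers to the GPU once CUDA support is installed."""
    if shutil.which("nvidia-smi") is None:
        return False
    try:
        subprocess.run(["nvidia-smi"], capture_output=True, check=True)
        return True
    except (OSError, subprocess.CalledProcessError):
        return False

print(nvidia_gpu_available())
```

If this prints False, Ollama will still work; it just falls back to running the model on the CPU, which is slower but perfectly usable for smaller models.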

Ollama + Open WebUI: this golden combination opens the door to the world of local AI. Now you can free yourself from the constraints of the cloud, build your own AI think tank, and explore the infinite possibilities of AI!