Want to deploy powerful open-source AI models like Qwen 2.5, Llama 3, and DeepSeek-R1 locally but struggling with complex setup processes?
Don't worry! The golden combination of Ollama + Open WebUI is here to remove all obstacles for you.
This article provides a comprehensive tutorial on how to easily set up a local AI environment using Ollama + Open WebUI, allowing you to have your own dedicated and powerful AI assistant and explore the endless possibilities of AI!
Kindly Reminder: Due to hardware limitations, local deployments usually can't run the full-size version of DeepSeek-R1 (671B parameters). But don't worry: the smaller distilled versions (e.g., 1.5B or 7B) run smoothly on most personal computers and still provide excellent reasoning capabilities. More importantly, you can choose the version that best suits your needs!
Why Choose Ollama + Open WebUI?
Among many local deployment solutions, the Ollama + Open WebUI combination stands out as the preferred choice for many AI enthusiasts. What makes them so attractive?
- Ollama: A Simplified Model Engine
- Ollama is like an "AI model treasure box". With just one command, you can download, install, and run various mainstream large language models, such as Llama 3 and DeepSeek-R1!
- Open WebUI: An Elegant and Easy-to-Use Interface
- Open WebUI puts a beautiful face on Ollama. It provides a visually appealing and intuitive web interface.
- Completely open-source and free.
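The "one command" workflow the bullets above describe can be sketched from the command line. This is a minimal sketch assuming Ollama is installed and on your PATH; the model tag is just an example:

```shell
# A model reference has the form "name:tag"; the tag selects the
# parameter-scale/quantization variant (":latest" if omitted).
model="llama3:8b"
echo "family:  ${model%%:*}"   # prints the model family, e.g. llama3
echo "variant: ${model##*:}"   # prints the size tag, e.g. 8b

# With Ollama installed, it is one command each to fetch and chat
# (skipped automatically here if Ollama is not on the PATH):
if command -v ollama >/dev/null 2>&1; then
  ollama pull "$model"                          # download the weights
  ollama run "$model" "Say hello in one line."  # one-off prompt
  ollama list                                   # show downloaded models
fi
```

`ollama run` also works without first calling `ollama pull`; it will download the model on first use.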
After deployment, simply open http://127.0.0.1:8080 in your browser to start chatting with your AI assistant:
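For readers not on Windows (the one-click package below is Windows-only), a common way to get Open WebUI serving that URL is Docker, following the image and flags from Open WebUI's own documentation. This is a sketch that assumes Ollama is already running on the host; the host port is set to 8080 to match the URL above, but any free port works:

```shell
# Open WebUI will listen on this host port.
port=8080
echo "Open WebUI will be at http://127.0.0.1:${port}"

# Launch the container (skipped if Docker is not installed).
# --add-host lets the container reach the host's Ollama service;
# the "open-webui" volume persists accounts and chat history.
if command -v docker >/dev/null 2>&1; then
  docker run -d \
    -p "${port}:8080" \
    --add-host=host.docker.internal:host-gateway \
    -v open-webui:/app/backend/data \
    --name open-webui \
    ghcr.io/open-webui/open-webui:main
fi
```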
Windows Users Exclusive: One-Click Startup Package, Say Goodbye to Complicated Configurations!
Considering the difficulties Windows users may encounter when configuring a Docker environment, we have thoughtfully prepared an integrated package that can be used immediately after downloading and extracting, truly achieving "out-of-the-box" functionality!
Download and Extract the Integrated Package:
Integrated Package Download Address: https://www.123684.com/s/03Sxjv-JmvJ3
- If you haven't installed Ollama yet, first double-click the ollama-0.1.28-setup.exe file in the package to install it. The installation process is very simple; just click "Next" all the way through.
Start WebUI:
- Double-click the 启动webui.bat file in the integrated package to start Open WebUI.
- The first time you start it, the system will prompt you to set up an administrator account. Please follow the prompts to complete the registration.
Choose the Model You Want to Use
After entering Open WebUI, you will see the model selection area in the upper left corner. If there are no models in the list, don't worry; it means you haven't downloaded any models yet.
You can directly enter the model name in the input box to download it online from Ollama.com:
Model Selection Tips:
- Model Treasure Trove: Visit https://ollama.com/models to browse the rich model resources provided by Ollama.
- Parameter Scale: Each model has different versions (e.g., 1.5B, 7B, 70B, etc.), representing different parameter scales. The more parameters, the more capable the model usually is, but it also requires more computing resources (RAM and VRAM).
- Act According to Your Abilities: Choose the appropriate model based on your hardware configuration. Generally speaking, if your combined "RAM + VRAM" is larger than the model file size, you can run the model smoothly.
- DeepSeek-R1 Selection: Search for deepseek-r1 in Ollama's model library to find it.
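The "RAM + VRAM vs. file size" rule above can be turned into quick arithmetic. The 0.6 GB-per-billion-parameters figure below is a rough assumption for the 4-bit quantized builds Ollama typically ships by default, not an exact formula; actual file sizes vary by quantization:

```shell
# Rough sizing rule of thumb (assumption: ~0.6 GB per billion
# parameters for a 4-bit quantized model file).
params_b=7     # e.g. a 7B model
ram_gb=16      # your system RAM
vram_gb=8      # your GPU memory (use 0 if you have no GPU)

approx_file_gb=$(( params_b * 6 / 10 ))
budget_gb=$(( ram_gb + vram_gb ))

echo "Approx. model file: ${approx_file_gb} GB, budget: ${budget_gb} GB"
if [ "$budget_gb" -gt "$approx_file_gb" ]; then
  echo "Should run smoothly"
else
  echo "Consider a smaller variant"
fi
```

With these example numbers a 7B model fits comfortably; swap in your own RAM/VRAM figures before deciding.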
Taking the deployment of the deepseek-r1 model as an example:
- Select Model Specifications: On the https://ollama.com/library page, find the model version you want to deploy (e.g., deepseek-r1).
- Download Model: Paste the model name (e.g., deepseek-r1) into the input box in the upper left corner of Open WebUI, and click the "Pull from ollama.com" button to start downloading.
- Wait for the Download to Complete: The download time depends on your network speed and the model size; please be patient.
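If you prefer the terminal, the same download can be triggered with Ollama's CLI instead of the Open WebUI input box; once pulled, the model appears in Open WebUI's model list. The 7b tag below is just an example; pick whichever variant your hardware can handle:

```shell
model="deepseek-r1:7b"   # example tag; omit ":7b" for the default variant
echo "requested model: ${model}"

# Pull and verify (skipped automatically if Ollama is not on the PATH).
if command -v ollama >/dev/null 2>&1; then
  ollama pull "$model"   # the same download Open WebUI triggers
  ollama list            # the new model should now be listed
fi
```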
Start Your AI Journey
Once the model is downloaded, you can chat with DeepSeek-R1 in Open WebUI! Explore its powerful features!
If the model supports it, you can also upload pictures, files, etc., for multimodal interaction. Let your AI assistant not only speak well but also "read pictures"!
Advanced Exploration: Hidden Treasures of Open WebUI
Open WebUI's features go far beyond this! Click the menu button in the upper left corner, and you will find more surprises:
Personalized Customization: In the "Settings" panel, you can adjust the interface theme, font size, language, etc., according to your preferences to create a personalized AI interaction experience.
- You can also customize prompts to make your AI assistant understand you better!
Multi-User Management: In the "Admin" panel, you can set user registration methods, permissions, etc., to facilitate multiple people sharing your local AI resources.
Adjust Detailed Parameters: Click the controls in the upper right corner to set advanced parameters.
Multi-Model Comparison: Which One is Better?
Open WebUI also supports a multi-model comparison function, allowing you to easily compare the output results of different models and find the one that best meets your needs!
GPU Acceleration: Squeeze Your Graphics Card Performance! (Optional)
If you have an NVIDIA graphics card and have installed the CUDA environment, then congratulations, you can use Ollama to accelerate model reasoning through simple operations, greatly improving the response speed of the AI assistant!
- Double-click the GPU-cuda支持.bat file in the integrated package to install the CUDA dependencies.
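Before running the batch file, you can sanity-check that the NVIDIA driver is visible at all. `nvidia-smi` ships with the NVIDIA driver; `ollama ps` exists in recent Ollama versions (newer than the 0.1.28 bundled in the package) and reports whether a loaded model sits on GPU or CPU:

```shell
# Check for an NVIDIA driver; either branch prints a diagnostic line.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,memory.total --format=csv
else
  echo "no NVIDIA driver detected - Ollama will fall back to CPU"
fi

# With a model loaded, recent Ollama versions show GPU/CPU placement:
if command -v ollama >/dev/null 2>&1; then
  ollama ps
fi
```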
Ollama + Open WebUI, this golden combination, opens a door to the local AI world for you. Now, you can get rid of cloud constraints, build your own AI think tank, and explore the endless possibilities of AI!