Offline-Text-Translate (OTT): Local Offline Text Translation
Supports multi-language local offline text translation with an API interface.
This project is a repackaging of the open-source project LibreTranslate, aiming to provide an easy-to-deploy translation API service directly on local machines without Docker. It includes pre-compiled Windows executable packages that can be used by simply double-clicking, making it convenient for beginners and non-technical users.
The first startup requires downloading models, after which it can run offline.
If you prefer to use the original LibreTranslate project or deploy it in Docker, please visit https://github.com/LibreTranslate/LibreTranslate
Using Pre-compiled Windows Package
- If you cannot access https://raw.githubusercontent.com, you must set a proxy address in set.ini using the `PROXY=` entry.
- Alternatively, you can download pre-packaged models from Baidu Netdisk. After extraction, copy the `.local` folder to the root directory of this software. Click to download from Baidu Netdisk
Click to download the pre-compiled Windows package, extract it to a directory whose path contains only English characters and no spaces, and double-click start.exe.
On first startup, the models are downloaded automatically. Once the download finishes, the API service address and port are displayed and the service is ready to use. (Alternatively, download the pre-packaged models from Baidu Netdisk as noted above and copy the `.local` folder to the root directory of this software.)
You can write your own programs to call this API service, replacing services such as Baidu Translate, or enter it in any software that needs a translation feature. For example, to use it in Video Translation and Dubbing Software, go to the software menu → Settings → OTT and enter the server address and port (default: http://127.0.0.1:9911).
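As an illustration, here is a minimal Python sketch for calling the service from your own program. It assumes the service is running at the default http://127.0.0.1:9911 and exposes a LibreTranslate-compatible `/translate` endpoint (the field names `q`, `source`, `target`, `format` and the `translatedText` response key follow LibreTranslate's API); adjust these if your setup differs.

```python
# Minimal sketch: call the local OTT translation API.
# Assumes the default address http://127.0.0.1:9911 and a
# LibreTranslate-compatible /translate endpoint.
import json
import urllib.request

def translate(text, source="en", target="zh"):
    payload = json.dumps({
        "q": text,          # text to translate
        "source": source,   # source language code
        "target": target,   # target language code
        "format": "text",
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://127.0.0.1:9911/translate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["translatedText"]

if __name__ == "__main__":
    print(translate("Hello, world!"))
```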
Source Code Deployment on Windows
First, go to python.org to download and install Python 3.9+ (3.10 recommended). During installation, be sure to select the "Add ... Path" checkbox; this makes the later steps easier.
Install the Git client on Windows. Click to download, choose the 64-bit Git for Windows Setup, download it, and double-click to install, following the steps until completion.
Create an empty directory, e.g. `ott` on the D drive, then navigate to this directory (`D:/ott`). In the folder's address bar, type `cmd` and press Enter. In the cmd window that opens, enter

git clone https://github.com/jianchang512/ott .

and press Enter.

Create a virtual environment. In the same cmd window, continue by entering the command

python -m venv venv

and press Enter.
Note: If you see an error like "python is not recognized as an internal or external command", it means the checkbox in Step 0 was not selected during Python installation. Re-double-click the downloaded Python installer, choose "Modify", and ensure "Add ... Path" is selected. After reinstalling Python, **you must close the current cmd window**, as it might still show the command not found. Then navigate to `D:/ott`, type `cmd` in the address bar, press Enter, and re-run `python -m venv venv`.
After the previous command completes successfully, continue by entering

.\venv\scripts\activate

and press Enter, then execute

pip install -r requirements.txt --no-deps

If you see an error like "not found version xxx", switch the mirror source to the official pip index or the Alibaba Cloud mirror.

If you need to enable CUDA acceleration for translation, continue by executing:
pip uninstall -y torch
pip install torch==2.1.2 --index-url https://download.pytorch.org/whl/cu121
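After reinstalling torch, a quick check (with the virtual environment still activated) confirms that the CUDA build is in place and a GPU is visible; the `+cu121` version suffix below assumes the CUDA 12.1 wheel installed by the command above.

```python
# Verify that the CUDA-enabled torch build is installed and a GPU is detected.
import torch

print(torch.__version__)          # expected to end with "+cu121" for the CUDA 12.1 wheel
print(torch.cuda.is_available())  # True means translation can run on the GPU
```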
Set the proxy in set.ini with `PROXY=proxy address`, e.g., if your proxy address is http://127.0.0.1:10189, then set `PROXY=http://127.0.0.1:10189`.

Execute the command to start the service:
python start.py
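Once the service is running, you can verify it from Python with a quick query. This sketch assumes the default address http://127.0.0.1:9911 and a LibreTranslate-compatible `/languages` endpoint; adjust the address and port if you changed them.

```python
# Sanity check: list the languages the local translation service supports.
import json
import urllib.request

# Default OTT address/port; change if your service listens elsewhere.
with urllib.request.urlopen("http://127.0.0.1:9911/languages") as resp:
    languages = json.loads(resp.read())

for lang in languages:
    print(lang["code"], lang["name"])
```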