
Offline-Text-Translate (OTT): Local Offline Text Translation

Supports local, offline text translation for multiple languages, exposed through an API interface.

This project repackages the LibreTranslate open-source project. The aim is to provide an easy-to-deploy translation API service that runs directly on local machines, without the need for Docker. Pre-compiled Windows EXE packages are also provided, so no deployment is needed: simply double-click to use, which is convenient for beginners.

The first startup requires downloading models; subsequent runs can be done offline.

If you want to use the native LibreTranslate project or deploy it in Docker, please visit https://github.com/LibreTranslate/LibreTranslate

Using the Pre-compiled Windows Package

  1. If you cannot access https://raw.githubusercontent.com, you MUST set the proxy address in set.ini via PROXY=.

You can also download pre-packaged models from Baidu Netdisk. After extracting, copy the ".local" folder inside to overwrite the one in the software's root directory. Click here to download from Baidu Netdisk

  2. Click to download the pre-compiled Windows package, extract it to a directory whose path contains no spaces or non-English characters, and double-click start.exe.

  3. The first time you start it, it will automatically download the models. Once the download completes, it will display the address and port of the current API service, and you can start using it.

  4. You can write your own program that calls this API service as a replacement for services like Baidu Translate, or enter it in software that needs a translation back-end. For example, to use it in video translation and dubbing software, fill in the server address and port under the software's menu - Settings - OTT (default http://127.0.0.1:9911).
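As a sketch of step 4, here is a minimal Python client for the local service. It assumes the service exposes a LibreTranslate-compatible `/translate` endpoint (this project repackages LibreTranslate) that accepts a JSON body with `q`, `source`, `target`, and `format` fields and returns a `translatedText` field; only the standard library is used.

```python
import json
import urllib.request

# Default address and port of the local OTT service (see above).
OTT_URL = "http://127.0.0.1:9911"

def build_payload(text, source="en", target="zh"):
    """Build the JSON body for a LibreTranslate-style /translate request."""
    return {"q": text, "source": source, "target": target, "format": "text"}

def translate(text, source="en", target="zh", base_url=OTT_URL):
    """POST the text to the local service and return the translated string."""
    data = json.dumps(build_payload(text, source, target)).encode("utf-8")
    req = urllib.request.Request(
        base_url + "/translate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read().decode("utf-8"))["translatedText"]

# Example usage (requires the service to be running):
# print(translate("Hello, world!", source="en", target="zh"))
```

Any HTTP client can make the same request, so the service can be dropped into other software wherever a translation API is expected.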

Source Code Deployment on Windows

  1. First, download and install Python 3.9+ (3.10 is recommended) from python.org. During installation, be sure to tick the "Add ... Path" checkbox so the python command is available later.

  2. Install a Git client on Windows. Click to download, choose to download "64-bit Git for Windows Setup", double-click to install, and proceed through the steps until completion.

  3. Create an empty directory, for example a directory named "ott" under the D drive: D:/ott. Enter this directory, type cmd in the folder address bar, and press Enter. In the cmd window that opens, run git clone https://github.com/jianchang512/ott . (note the trailing dot, which clones into the current directory) and press Enter.

  4. Create a virtual environment. In the cmd window from the previous step, continue to enter the command python -m venv venv and press Enter.

Note: If you see a message like "'python' is not recognized as an internal or external command, operable program or batch file", it means the checkbox was not selected during the step 1 installation. Double-click the downloaded Python installer again, select "Modify", and make sure "Add ... Path" is checked.

After reinstalling Python, **you must close the opened cmd window**, otherwise you may still see the "command not found" message. Then, enter `D:/ott`, type `cmd` in the address bar, press Enter, and re-execute `python -m venv venv`.
  5. After the above command succeeds, continue by entering .\venv\scripts\activate and pressing Enter, then execute pip install -r requirements.txt --no-deps. If you see a message like "not found version xxx", switch the mirror source to the official pip index or the Alibaba Cloud mirror.

  6. If you need CUDA-accelerated translation (requires an NVIDIA GPU), run the following two commands in order:

     pip uninstall -y torch

     pip install torch==2.1.2 --index-url https://download.pytorch.org/whl/cu121

  7. Set the proxy in set.ini with PROXY=proxy_address. For example, if your proxy address is http://127.0.0.1:10189, the line becomes: PROXY=http://127.0.0.1:10189

  8. Execute the command to start the service: python start.py
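Once the service is started, you can verify it is up and see which languages the downloaded models support. The sketch below assumes a LibreTranslate-compatible GET `/languages` endpoint returning a JSON list of objects with `code` and `name` fields, and uses only the standard library:

```python
import json
import urllib.request

# Address printed by start.py when the service comes up (default shown above).
OTT_URL = "http://127.0.0.1:9911"

def list_languages(base_url=OTT_URL):
    """Fetch the list of languages the running service supports."""
    with urllib.request.urlopen(base_url + "/languages", timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

def language_codes(languages):
    """Extract just the language codes from a /languages response."""
    return sorted(entry["code"] for entry in languages)

# Example usage (requires the service to be running):
# print(language_codes(list_languages()))
```

If the request fails with a connection error, the service is not running or is listening on a different port; check the address printed by python start.py.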