Xiaohongshu has open-sourced a speech recognition project called FireRedASR, which excels in Chinese speech recognition. Previously, they only open-sourced a smaller AED model. Recently, they released a larger LLM model, further improving recognition accuracy.
This ASR model has been integrated into a package and can be easily used in the video translation software (pyVideoTrans).
Package Download and Model Instructions
Model Sizes:
- AED Model (model.pth.tar): 4.35GB
- LLM Model, which consists of two parts:
  - Xiaohongshu recognition model (model.pth.tar): 3.37GB
  - Qwen2-7B model (4 files): 17GB total
Together the model files come to roughly 25GB (the two LLM parts alone are about 20.4GB). Even compressed into 7z format, the archive would still exceed 10GB. Due to size limitations, it cannot be uploaded to GitHub or cloud storage, so the package only includes the main program and no model files.
After downloading the package, please follow the steps below to download the model files separately and place them in the specified locations.
Note: The model files are hosted on huggingface.co, which is not directly accessible in some regions. You may need a VPN to download them.
Main Package Download
The main package is relatively small at 1.7GB. You can download it directly by opening the following link in your browser:
https://github.com/jianchang512/fireredasr-ui/releases/download/v0.3/fireredASR-2025-0224.7z
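If downloading through the browser is inconvenient, here is a minimal Python sketch that streams the archive to disk using the requests library (an assumption: install it first with pip install requests; resuming interrupted downloads is not handled):

import requests

URL = ("https://github.com/jianchang512/fireredasr-ui/releases/"
       "download/v0.3/fireredASR-2025-0224.7z")

# Stream the 1.7GB archive to disk in 1MB chunks to keep memory usage low.
with requests.get(URL, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    with open("fireredASR-2025-0224.7z", "wb") as f:
        for chunk in resp.iter_content(chunk_size=1 << 20):
            f.write(chunk)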
After downloading, extract the archive. You should see a file structure similar to the image below:

Download AED Model
Downloading the AED model is straightforward; you only need to download one model file.
Download the model.pth.tar file. Download link:
https://huggingface.co/FireRedTeam/FireRedASR-AED-L/resolve/main/model.pth.tar?download=true
Place the downloaded model.pth.tar file into the pretrained_models/FireRedASR-AED-L folder in the package directory.
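If you prefer to script this step, a minimal sketch using the huggingface_hub library (assuming it is installed via pip install huggingface_hub) saves the file straight into the expected folder:

from huggingface_hub import hf_hub_download

# Download model.pth.tar directly into the folder the package expects.
hf_hub_download(
    repo_id="FireRedTeam/FireRedASR-AED-L",
    filename="model.pth.tar",
    local_dir="pretrained_models/FireRedASR-AED-L",
)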
After downloading, the file location should look like this:

Download LLM Model
Downloading the LLM model is slightly more complex, requiring a total of 5 files (1 Xiaohongshu model + 4 Qwen2 model files).
1. Download Xiaohongshu Model (model.pth.tar):
Download link: https://huggingface.co/FireRedTeam/FireRedASR-LLM-L/resolve/main/model.pth.tar?download=true
Place the downloaded model.pth.tar file into the pretrained_models/FireRedASR-LLM-L folder in the package. Make sure the folder name contains LLM; do not place the file in the wrong location.
The file location should look like this:

2. Download Qwen2 Model (4 files):
Download the following 4 files and place them into the pretrained_models/FireRedASR-LLM-L/Qwen2-7B-Instruct folder in the package (a scripted alternative is sketched after the list):
- https://huggingface.co/Qwen/Qwen2-7B-Instruct/resolve/main/model-00001-of-00004.safetensors?download=true
- https://huggingface.co/Qwen/Qwen2-7B-Instruct/resolve/main/model-00002-of-00004.safetensors?download=true
- https://huggingface.co/Qwen/Qwen2-7B-Instruct/resolve/main/model-00003-of-00004.safetensors?download=true
- https://huggingface.co/Qwen/Qwen2-7B-Instruct/resolve/main/model-00004-of-00004.safetensors?download=true
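If you would rather script both downloads in this section (the Xiaohongshu LLM model above and the four Qwen2 shards), here is a minimal sketch, again assuming huggingface_hub is installed:

from huggingface_hub import hf_hub_download

# Xiaohongshu recognition model for the LLM pipeline.
hf_hub_download(
    repo_id="FireRedTeam/FireRedASR-LLM-L",
    filename="model.pth.tar",
    local_dir="pretrained_models/FireRedASR-LLM-L",
)

# The four Qwen2-7B-Instruct weight shards.
for i in range(1, 5):
    hf_hub_download(
        repo_id="Qwen/Qwen2-7B-Instruct",
        filename=f"model-{i:05d}-of-00004.safetensors",
        local_dir="pretrained_models/FireRedASR-LLM-L/Qwen2-7B-Instruct",
    )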
After downloading, the Qwen2-7B-Instruct folder should contain 4 files, as shown below:

Launch the Package
Once all model files are downloaded and correctly placed, double-click the 启动.bat ("Start") file in the package directory to start the program.
After launching, the program will automatically open the address http://127.0.0.1:5078 in your browser. If you see the interface below, it means the program has started successfully and is ready to use.

Using in Video Translation Software
If you want to use the FireRedASR model in the pyVideoTrans video translation software, follow these steps:
1. Ensure you have downloaded and placed the model files as described above and successfully launched the package.
2. Open the pyVideoTrans software.
3. In the software menu, go to Menu -> Speech Recognition Settings -> OpenAI Speech Recognition & Compatible AI.
4. In the settings interface, fill in the relevant information as shown in the image below.

5. After filling in the details, click Save.
6. In the speech recognition channel selection, choose OpenAI Speech Recognition.

API Address: the default is http://127.0.0.1:5078/v1
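Before wiring up a client, you can sanity-check that the local server is reachable. Here is a minimal sketch using only Python's standard library (this check is an illustration of mine, not part of the package's documented API):

import urllib.request

# Confirm the FireRedASR server started via 启动.bat is answering on port 5078.
try:
    with urllib.request.urlopen("http://127.0.0.1:5078", timeout=5) as resp:
        print("Server is up, HTTP status:", resp.status)
except OSError as exc:
    print("Server not reachable:", exc)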
Using the OpenAI SDK
from openai import OpenAI

# Point the client at the local FireRedASR server instead of api.openai.com.
# '123456' is just a placeholder key for the local server.
client = OpenAI(api_key='123456',
                base_url='http://127.0.0.1:5078/v1')

# Open the audio file to be transcribed in binary mode.
audio_file = open("5.wav", "rb")

# Request a transcription. The generous timeout (86400 seconds = 24 hours)
# leaves room for long audio files processed by the local model.
transcript = client.audio.transcriptions.create(
    model="whisper-1",
    file=audio_file,
    response_format="json",
    timeout=86400
)
print(transcript.text)
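Because the package exposes an OpenAI-compatible endpoint, any tool that lets you override the base_url (as pyVideoTrans does above) can use it in the same way. The long timeout is deliberate: transcribing long audio on a local model can take a while, and a short client-side timeout would abort the request prematurely.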