Why Denoise?
In many speech-related applications, the presence of noise can severely impact performance and user experience. For example:
- Speech Recognition: Noise reduces the accuracy of speech recognition, especially in low signal-to-noise ratio environments.
- Voice Cloning: Noise degrades the naturalness and clarity of synthesized speech based on reference audio.
Speech denoising can address these issues to some extent.
Common Denoising Methods
Currently, the main methods for speech denoising are:
- Spectral Subtraction: This is a classic denoising method with a simple principle.
- Wiener Filtering: This method works well for stable noise, but its effectiveness is limited for varying noise.
- Deep Learning: This is currently the most advanced denoising method. It leverages powerful deep learning models, such as Recurrent Neural Networks (RNNs), Convolutional Neural Networks (CNNs), and Generative Adversarial Networks (GANs), to learn the complex relationships between noise and speech, achieving more accurate and natural denoising results.
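To make the first of these methods concrete, here is a minimal spectral-subtraction sketch. It is purely illustrative and not part of this tool: the frame length, the synthetic test signal, and the assumption that a clean noise sample is available for estimating the noise spectrum are all made up for the demo.

```python
import numpy as np

def spectral_subtract(noisy, noise_estimate, frame_len=256):
    """Denoise by subtracting an estimated noise magnitude spectrum per frame."""
    # Estimate the noise magnitude spectrum from one frame of noise-only audio
    noise_mag = np.abs(np.fft.rfft(noise_estimate[:frame_len]))
    out = np.zeros_like(noisy)
    for start in range(0, len(noisy) - frame_len + 1, frame_len):
        frame = noisy[start:start + frame_len]
        spectrum = np.fft.rfft(frame)
        mag = np.abs(spectrum)
        phase = np.angle(spectrum)
        # Subtract the noise magnitude, flooring at zero to avoid negative energy
        clean_mag = np.maximum(mag - noise_mag, 0.0)
        out[start:start + frame_len] = np.fft.irfft(clean_mag * np.exp(1j * phase),
                                                    n=frame_len)
    return out

# Synthetic demo: a 440 Hz tone buried in white noise
rng = np.random.default_rng(0)
t = np.arange(4096) / 8000.0
clean = np.sin(2 * np.pi * 440 * t)
noise = 0.3 * rng.standard_normal(t.size)
denoised = spectral_subtract(clean + noise, noise)
```

The per-frame subtraction is why this classic method struggles with non-stationary noise: the single noise estimate cannot track noise that changes over time, which is exactly the gap deep learning methods address.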
ZipEnhancer Model: Deep Learning Denoising
This tool is based on the ZipEnhancer model open-sourced by Tongyi Laboratory and provides an easy-to-use interface and API, allowing everyone to easily experience the power of deep learning denoising.
The project is open source on GitHub: https://github.com/jianchang512/remove-noise.
The core of the ZipEnhancer model is the Transformer network structure and multi-task learning strategy. It can not only remove noise but also enhance speech quality and eliminate echo simultaneously. Its working principle is as follows:
- Self-Attention Mechanism: Captures important long-term relationships in the speech signal, understanding the context of the sound.
- Multi-Head Attention Mechanism: Analyzes speech features from different perspectives, achieving more refined noise suppression and speech enhancement.
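The two mechanisms above can be illustrated with a bare-bones scaled dot-product self-attention in NumPy. This is a sketch of the general technique, not ZipEnhancer's actual implementation; the shapes and random weights are assumptions made for the demo.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence of feature frames.

    x: (seq_len, d_model) speech feature vectors. Each output frame is a
    weighted mix of ALL input frames, so long-range context (e.g. a noise
    burst several frames away) can inform the denoising of every frame.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])         # pairwise frame similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over frames
    return weights @ v

rng = np.random.default_rng(1)
d = 8
x = rng.standard_normal((5, d))  # 5 "frames" of 8-dimensional features
out = self_attention(x,
                     rng.standard_normal((d, d)),
                     rng.standard_normal((d, d)),
                     rng.standard_normal((d, d)))
```

Multi-head attention simply runs several such attention computations in parallel over split feature subspaces and concatenates the results, which is what lets the model analyze the signal "from different perspectives."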
How to Use This Tool?
Windows Pre-packaged Version:
- Download and extract the pre-packaged version (https://github.com/jianchang512/remove-noise/releases/download/v0.1/win-remove-noise-0.1.7z).
- Double-click the runapi.bat file; the browser will automatically open http://127.0.0.1:5080.
- Select an audio or video file to start denoising.
Source Code Deployment:
- Environment Preparation: Ensure that Python 3.10 - 3.12 is installed.
- Install Dependencies: Run pip install -r requirements.txt --no-deps.
- CUDA Acceleration (Optional): If you have an NVIDIA graphics card, you can install CUDA 12.1 to accelerate processing:

```bash
pip uninstall -y torch torchaudio torchvision
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
```

- Run Program: Run python api.py.
Linux System:
- Install the libsndfile library: sudo apt-get update && sudo apt-get install libsndfile1.
- Note: Please ensure that the datasets library version is 3.0, otherwise errors may occur. You can use the pip list | grep datasets command to check the installed version.
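If you script your setup, a tiny helper (a hypothetical convenience, not part of the project) can encode that major-version check so a deployment fails early with a clear message:

```python
def datasets_version_ok(version_str):
    """Return True if a `datasets` version string has major version 3 (e.g. '3.0.1')."""
    try:
        return int(version_str.split(".")[0]) == 3
    except (ValueError, IndexError):
        return False

# Compare against the version reported by `pip list | grep datasets`
print(datasets_version_ok("3.0.1"))   # True
print(datasets_version_ok("2.19.0"))  # False
```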
Interface Preview
API Usage
Interface Address: http://127.0.0.1:5080/api
Request Method: POST
Request Parameters:
- stream: 0 returns the audio URL; 1 returns the raw audio data.
- audio: the audio or video file to be processed.
Return Results (JSON):
- Success (stream=0):
{"code": 0, "data": {"url": "Audio URL"}}
- Success (stream=1): WAV audio data.
- Failure:
{"code": -1, "msg": "Error message"}
Example Code (Python):
import requests

url = 'http://127.0.0.1:5080/api'
file_path = './300.wav'

# Get the URL of the denoised audio (stream=0)
try:
    with open(file_path, 'rb') as audio_file:
        res = requests.post(url, data={"stream": 0}, files={"audio": audio_file})
    res.raise_for_status()
    print(f"Denoised Audio URL: {res.json()['data']['url']}")
except requests.exceptions.RequestException as e:
    print(f"Request failed: {e}")

# Get the denoised audio data directly (stream=1) and save it to a file
try:
    with open(file_path, 'rb') as audio_file:
        res = requests.post(url, data={"stream": 1}, files={"audio": audio_file})
    res.raise_for_status()
    with open("ceshi.wav", 'wb') as f:
        f.write(res.content)
    print("Denoised audio saved as ceshi.wav")
except requests.exceptions.RequestException as e:
    print(f"Request failed: {e}")