1. Resource downloads
Source code: https://github.com/SYSTRAN/faster-whisper
Model download links (a scripted download option is sketched after this list):
large-v3 model: https://huggingface.co/Systran/faster-whisper-large-v3/tree/main
large-v2 model: https://huggingface.co/guillaumekln/faster-whisper-large-v2/tree/main
large-v1 model: https://huggingface.co/guillaumekln/faster-whisper-large-v1/tree/main
medium model: https://huggingface.co/guillaumekln/faster-whisper-medium/tree/main
small model: https://huggingface.co/guillaumekln/faster-whisper-small/tree/main
base model: https://huggingface.co/guillaumekln/faster-whisper-base/tree/main
tiny model: https://huggingface.co/guillaumekln/faster-whisper-tiny/tree/main
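If you would rather fetch a model from a script than from the browser, the huggingface_hub library can mirror a whole model repository. A minimal sketch, assuming huggingface_hub is installed; the local_dir below is just an example path of your choosing:

from huggingface_hub import snapshot_download

# Download the large-v3 repository listed above into a local folder.
# repo_id comes from the URL above; local_dir is an arbitrary example path.
snapshot_download(
    repo_id="Systran/faster-whisper-large-v3",
    local_dir=r"D:\Works\Python\Faster_Whisper\model\large-v3",
)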
Download cuBLAS and cuDNN:
https://github.com/Purfview/whisper-standalone-win/releases/tag/libs
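On Windows, the archive from this page contains cuBLAS and cuDNN DLLs that CTranslate2 (the engine behind faster-whisper) must be able to find at runtime for GPU inference. One common approach is to add the extracted folder to the system PATH; an in-script alternative is sketched below, where D:\libs\cuda is a placeholder for whatever folder you extracted the archive into:

import os

# Make the extracted cuBLAS/cuDNN DLLs visible to this process (Windows, Python 3.8+).
# D:\libs\cuda is a placeholder path; replace it with your extraction directory.
os.add_dll_directory(r"D:\libs\cuda")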
2. Create the environment
Create a Python runtime environment in conda:
conda create -n faster_whisper python=3.9  # Python 3.8 to 3.11 is required
Activate the virtual environment:
conda activate faster_whisper
Install the faster-whisper dependency:
pip install faster-whisper
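A quick way to confirm that the install worked and that the GPU is visible is to import the package and ask CTranslate2 (installed automatically as a dependency of faster-whisper) how many CUDA devices it can see; a minimal check:

import faster_whisper
import ctranslate2

print(faster_whisper.__version__)           # installed faster-whisper version
print(ctranslate2.get_cuda_device_count())  # should be >= 1 once CUDA and the DLLs are set up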
3. Run
After completing the steps above, we can write the code:
from faster_whisper import WhisperModel
model_size = "large-v3"
path = r"D:\Works\Python\Faster_Whisper\model\small"  # local model directory; the uncommented call below loads from this path, not from model_size
# Run on GPU with FP16
model = WhisperModel(model_size_or_path=path, device="cuda", local_files_only=True)
# or run on GPU with INT8
# model = WhisperModel(model_size, device="cuda", compute_type="int8_float16")
# or run on CPU with INT8
# model = WhisperModel(model_size, device="cpu", compute_type="int8")
segments, info = model.transcribe("C:\\Users\\21316\\Documents\\录音\\test.wav", beam_size=5, language="zh", vad_filter=True, vad_parameters=dict(min_silence_duration_ms=1000))
print("Detected language '%s' with probability %f" % (info.language, info.language_probability))
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
Notes:
local_files_only=True loads the model from local files only (no download from the Hugging Face Hub)
model_size_or_path=path specifies the path of the model to load
device="cuda" runs inference on the GPU via CUDA
compute_type="int8_float16" quantizes the weights to INT8 and computes in FP16
language="zh" specifies the language of the audio (Chinese)
vad_filter=True enables VAD (voice activity detection) filtering
vad_parameters=dict(min_silence_duration_ms=1000) sets the VAD parameters; here a silence of at least 1000 ms is treated as a break
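If you also need per-word timing, transcribe() accepts word_timestamps=True; a small sketch reusing the model object and the example audio file from the code above:

segments, info = model.transcribe(
    "C:\\Users\\21316\\Documents\\录音\\test.wav",
    language="zh",
    word_timestamps=True,  # request per-word start/end times
)
for segment in segments:
    for word in segment.words:
        print("[%.2fs -> %.2fs] %s" % (word.start, word.end, word.word))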