Note: The ModelScope pipeline supports inference and fine-tuning for all the models in the model zoo. Here we take the FSMN-VAD model as an example to demonstrate the usage.
```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

inference_pipeline = pipeline(
    task=Tasks.voice_activity_detection,
    model='damo/speech_fsmn_vad_zh-cn-16k-common-pytorch',
)
segments_result = inference_pipeline(audio_in='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/vad_example.wav')
print(segments_result)
```
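The result contains the detected speech segments. Below is a minimal sketch of cutting a local copy of the input wav by those segments; it assumes the segments are returned under the 'text' key as [start_ms, end_ms] pairs, which may differ between FunASR/ModelScope versions, so adjust the key and parsing to the actual output printed above.

```python
# Sketch: slice the input wav by the detected segments.
# Assumption: segments_result["text"] holds [[start_ms, end_ms], ...];
# adjust to the actual output format of your FunASR version.
import soundfile

speech, sample_rate = soundfile.read("vad_example.wav")  # local copy of the test wav
for i, (start_ms, end_ms) in enumerate(segments_result["text"]):
    start = int(start_ms * sample_rate / 1000)
    end = int(end_ms * sample_rate / 1000)
    soundfile.write(f"segment_{i}.wav", speech[start:end], sample_rate)
```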
The pipeline also supports online (streaming) VAD, where the audio is fed chunk by chunk and the decoding state is carried in param_dict:

```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks
import soundfile

inference_pipeline = pipeline(
    task=Tasks.voice_activity_detection,
    model='damo/speech_fsmn_vad_zh-cn-16k-common-pytorch',
)

speech, sample_rate = soundfile.read("example/asr_example.wav")
param_dict = {"in_cache": dict(), "is_final": False}
chunk_stride = 1600  # 100ms at 16kHz

# first chunk, 100ms
speech_chunk = speech[0:chunk_stride]
rec_result = inference_pipeline(audio_in=speech_chunk, param_dict=param_dict)
print(rec_result)

# next chunk, 100ms
speech_chunk = speech[chunk_stride:chunk_stride + chunk_stride]
rec_result = inference_pipeline(audio_in=speech_chunk, param_dict=param_dict)
print(rec_result)
```
For the full demo code, please refer to the demo.
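The two chunk calls above can be extended to loop over a whole recording. Below is a minimal sketch, assuming the same in_cache/is_final protocol and 16 kHz input; the check for empty intermediate results is also an assumption.

```python
# Sketch: stream an entire file through the online VAD pipeline in 100 ms chunks,
# reusing the in_cache / is_final protocol shown above.
import soundfile
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

inference_pipeline = pipeline(
    task=Tasks.voice_activity_detection,
    model='damo/speech_fsmn_vad_zh-cn-16k-common-pytorch',
)

speech, sample_rate = soundfile.read("example/asr_example.wav")
chunk_stride = 1600  # 100ms at 16kHz
param_dict = {"in_cache": dict(), "is_final": False}

for start in range(0, len(speech), chunk_stride):
    chunk = speech[start:start + chunk_stride]
    param_dict["is_final"] = start + chunk_stride >= len(speech)  # flag the last chunk
    rec_result = inference_pipeline(audio_in=chunk, param_dict=param_dict)
    if rec_result:  # intermediate results may be empty until a segment closes
        print(rec_result)
```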
The pipeline accepts the following parameters:
- task: Tasks.voice_activity_detection
- model: model name in the model zoo, or model path on local disk
- ngpu: 1 (Default), decode on GPU; if ngpu=0, decode on CPU
- ncpu: 1 (Default), the number of threads used for intra-op parallelism on CPU
- output_dir: None (Default), the output path of the results if set
- batch_size: 1 (Default), batch size when decoding
- audio_in: the input to decode, which could be:
  - a wav file path, e.g.: asr_example.wav
  - a pcm file path, e.g.: asr_example.pcm
  - bytes data, e.g. from a microphone
  - a numpy.ndarray or torch.Tensor, e.g.: audio, rate = soundfile.read("asr_example_zh.wav")
  - a kaldi-style wav list (wav_id \t wav_path), wav.scp, e.g.:
    asr_example1 ./audios/asr_example1.wav
    asr_example2 ./audios/asr_example2.wav
    In the case of wav.scp input, output_dir must be set to save the output results.
- audio_fs: audio sampling rate, only set when audio_in is pcm audio
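Putting the parameters above together, here is a minimal sketch of batch decoding a wav.scp. The paths are placeholders, and whether these options are accepted at pipeline construction may depend on your ModelScope/FunASR version.

```python
# Sketch: batch decoding a kaldi-style wav.scp with the documented parameters.
# Paths are placeholders; output_dir is required for wav.scp input.
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

inference_pipeline = pipeline(
    task=Tasks.voice_activity_detection,
    model='damo/speech_fsmn_vad_zh-cn-16k-common-pytorch',
    ngpu=1,                    # set ngpu=0 to decode on CPU
    batch_size=1,
    output_dir="./results",    # must be set when audio_in is a wav.scp
)
inference_pipeline(audio_in="./data/test/wav.scp")
```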
FunASR also offers the recipe egs_modelscope/vad/TEMPLATE/infer.sh to decode with multi-threaded CPUs or multiple GPUs.
Parameters of infer.sh:
- model: model name in the model zoo, or model path on local disk
- data_dir: the dataset directory, which needs to include wav.scp
- output_dir: output directory of the recognition results
- batch_size: 64 (Default), batch size of inference on GPU
- gpu_inference: true (Default), whether to perform GPU decoding; set false for CPU inference
- gpuid_list: 0,1 (Default), which GPU ids are used to infer
- njob: 64 (Default), the number of jobs for CPU decoding, only used for CPU inference (gpu_inference=false)
- checkpoint_dir: the directory of finetuned models, only used to infer finetuned models
- checkpoint_name: valid.cer_ctc.ave.pb (Default), which checkpoint is used to infer, only used to infer finetuned models

Decoding with multiple GPUs:
```bash
bash infer.sh \
    --model "damo/speech_fsmn_vad_zh-cn-16k-common-pytorch" \
    --data_dir "./data/test" \
    --output_dir "./results" \
    --batch_size 1 \
    --gpu_inference true \
    --gpuid_list "0,1"
```
Decoding with multi-threaded CPUs:
```bash
bash infer.sh \
    --model "damo/speech_fsmn_vad_zh-cn-16k-common-pytorch" \
    --data_dir "./data/test" \
    --output_dir "./results" \
    --gpu_inference false \
    --njob 64
```
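To decode with a finetuned model, the checkpoint_dir and checkpoint_name options listed above can be added. A minimal sketch, assuming the finetuned checkpoints were saved under ./checkpoint:

```bash
# Sketch: inference with a finetuned model; ./checkpoint is a placeholder path.
bash infer.sh \
    --model "damo/speech_fsmn_vad_zh-cn-16k-common-pytorch" \
    --data_dir "./data/test" \
    --output_dir "./results" \
    --gpu_inference true \
    --gpuid_list "0" \
    --checkpoint_dir "./checkpoint" \
    --checkpoint_name "valid.cer_ctc.ave.pb"
```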