
FunASR aims to build a bridge between academic research and industrial applications of speech recognition. By supporting the training and finetuning of the industrial-grade speech recognition models released on ModelScope, researchers and developers can conduct research on and production of speech recognition models more conveniently, and promote the growth of the speech recognition ecosystem. ASR for Fun!
Install Conda:
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
sh Miniconda3-latest-Linux-x86_64.sh
source ~/.bashrc
conda create -n funasr python=3.7
conda activate funasr
Install PyTorch (version >= 1.7.0):
pip3 install torch torchvision torchaudio
For other versions, please see https://pytorch.org/get-started/locally
If you are in mainland China, you can set a pip mirror to speed up downloading:
pip config set global.index-url https://mirror.sjtu.edu.cn/pypi/web/simple
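To verify the PyTorch install, a minimal check in Python (the CUDA line simply reports whether a GPU build and driver are visible; it is not required for CPU-only use):
import torch
print(torch.__version__)          # expect a version >= 1.7.0
print(torch.cuda.is_available())  # True only on a GPU-enabled setup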
Install ModelScope:
pip install "modelscope[audio]" -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html
For more details about ModelScope, please see the ModelScope installation guide.
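Once ModelScope is installed, a pretrained model can be run through its pipeline API. Below is a minimal sketch; the Paraformer model id and the file example.wav are assumptions for illustration (check the model hub for the currently released models):
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

# Build an ASR inference pipeline from a pretrained ModelScope model.
inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model='damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch')

# Transcribe a local 16 kHz audio file (path is a placeholder).
rec_result = inference_pipeline(audio_in='example.wav')
print(rec_result)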
Install FunASR and other packages:
git clone https://github.com/alibaba/FunASR.git && cd FunASR
pip install --editable ./
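A quick smoke test for the editable install, run from any directory outside the repo:
import funasr           # should import without error after the editable install
print(funasr.__file__)  # should point into the cloned FunASR directory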
We have trained many academic and industrial models; see the model hub for details.
If you have any questions about FunASR, please contact us by
email: funasr@list.alibaba-inc.com
DingTalk group: [QR code image]
This project is licensed under the MIT License. FunASR also contains various third-party components and some code modified from other repositories under other open source licenses.
If FunASR is useful in your research, please consider citing the following papers:
@article{gao2020universal,
  title={Universal ASR: Unifying Streaming and Non-Streaming ASR Using a Single Encoder-Decoder Model},
  author={Gao, Zhifu and Zhang, Shiliang and Lei, Ming and McLoughlin, Ian},
  journal={arXiv preprint arXiv:2010.14099},
  year={2020}
}
@inproceedings{gao2022paraformer,
  title={Paraformer: Fast and Accurate Parallel Transformer for Non-autoregressive End-to-End Speech Recognition},
  author={Gao, Zhifu and Zhang, Shiliang and McLoughlin, Ian and Yan, Zhijie},
  booktitle={INTERSPEECH},
  year={2022}
}