游雁 2 years ago
Parent
Commit
f28280a84c
4 changed files with 64 additions and 2 deletions
  1. docs/FQA.md (+1 -0)
  2. docs/index.rst (+4 -0)
  3. docs/modescope_pipeline/asr_pipeline.md (+16 -0)
  4. docs/modescope_pipeline/quick_start.md (+43 -2)

+ 1 - 0
docs/FQA.md

@@ -0,0 +1 @@
+# FQA

+ 4 - 0
docs/index.rst

@@ -74,7 +74,11 @@ FunASR hopes to build a bridge between academic research and industrial applicat

    ./papers.md

+.. toctree::
+   :maxdepth: 1
+   :caption: FQA

+   ./FQA.md


 Indices and tables

+ 16 - 0
docs/modescope_pipeline/asr_pipeline.md

@@ -17,6 +17,22 @@ rec_result = inference_pipeline(audio_in='https://isv-data.oss-cn-hangzhou.aliyu
 print(rec_result)
 ```

+#### API-docs
+##### Define pipeline
+- `task`: `Tasks.auto_speech_recognition`
+- `model`: model name in the [model zoo](https://alibaba-damo-academy.github.io/FunASR/en/modelscope_models.html#pretrained-models-on-modelscope), or a model path on local disk
+- `ngpu`: 1 (default), decode on GPU; if `ngpu=0`, decode on CPU
+- `ncpu`: 1 (default), sets the number of threads used for intraop parallelism on CPU
+- `output_dir`: None (default), the output path for results if set
+- `batch_size`: 1 (default), batch size when decoding
+##### Infer pipeline
+- `audio_in`: the input to decode, which could be:
+  - wav_path, e.g. `asr_example.wav`
+  - pcm_path
+  - audio bytes stream
+  - audio sample points
+  - wav.scp
+
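Taken together, the parameters above might be combined as in the following minimal sketch. The Paraformer model name is the one used elsewhere in these docs, and `asr_example.wav` is a placeholder input file:

```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

# define the pipeline with the parameters documented above
inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model='damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch',
    ngpu=1,           # decode on GPU; set ngpu=0 to decode on CPU
    ncpu=1,           # number of intraop CPU threads
    output_dir=None,  # set a path to also write results to disk
    batch_size=1,     # batch size when decoding
)

# audio_in accepts a wav/pcm path, an audio bytes stream,
# audio sample points, or a wav.scp file
rec_result = inference_pipeline(audio_in='asr_example.wav')
print(rec_result)
```
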
 #### Inference with your data

 #### Inference with multi-threads on CPU

+ 43 - 2
docs/modescope_pipeline/quick_start.md

@@ -59,8 +59,7 @@ from modelscope.utils.constant import Tasks

 inference_pipeline = pipeline(
     task=Tasks.speech_timestamp,
-    model='damo/speech_timestamp_prediction-v1-16k-offline',
-    output_dir='./tmp')
+    model='damo/speech_timestamp_prediction-v1-16k-offline',)

 rec_result = inference_pipeline(
     audio_in='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_timestamps.wav',
@@ -88,6 +87,44 @@ rec_result = inference_sv_pipline(audio_in=('https://isv-data.oss-cn-hangzhou.al
 print(rec_result["scores"][0])
 ```

+### FAQ
+#### How to switch the pipeline device from GPU to CPU
+
+The pipeline defaults to decoding on GPU (`ngpu=1`) when a GPU is available. To switch to CPU, set `ngpu=0`:
+```python
+inference_pipeline = pipeline(
+    task=Tasks.auto_speech_recognition,
+    model='damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch',
+    ngpu=0,
+)
+```
+
+#### How to infer from a local model path
+Download the model to a local directory with the modelscope SDK:
+
+```python
+from modelscope.hub.snapshot_download import snapshot_download
+
+local_dir_root = "./models_from_modelscope"
+model_dir = snapshot_download('damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch', cache_dir=local_dir_root)
+```
+
+Or download the model to a local directory with git lfs:
+```shell
+git lfs install
+# git clone https://www.modelscope.cn/<namespace>/<model-name>.git
+git clone https://www.modelscope.cn/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch.git
+```
+
+Infer with the local model path:
+```python
+local_dir_root = "./models_from_modelscope/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch"
+inference_pipeline = pipeline(
+    task=Tasks.auto_speech_recognition,
+    model=local_dir_root,
+)
+```
+
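With the local directory in place, inference works exactly as with a model name. A minimal usage sketch, where `asr_example.wav` is a placeholder local audio file:

```python
# run the locally loaded pipeline on an example file
rec_result = inference_pipeline(audio_in='asr_example.wav')
print(rec_result)
```
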
 ## Finetune with pipeline
 ### Speech Recognition
 #### Paraformer model
@@ -132,6 +169,10 @@ if __name__ == '__main__':
 ```shell
 python finetune.py &> log.txt &
 ```
+
+### FAQ
+#### Multi-GPU and distributed training
+
 If you want to finetune with multiple GPUs, you could:
 ```shell
 CUDA_VISIBLE_DEVICES=1,2 python -m torch.distributed.launch --nproc_per_node 2 finetune.py > log.txt 2>&1
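# Note: newer PyTorch releases deprecate torch.distributed.launch in favor of
# torchrun; an equivalent invocation (assuming torchrun is available) would be:
# CUDA_VISIBLE_DEVICES=1,2 torchrun --nproc_per_node 2 finetune.py > log.txt 2>&1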