游雁 2 years ago
Parent
Commit fefeadbd0b
1 changed file with 2 additions and 24 deletions
      funasr/runtime/python/grpc/Readme.md


@@ -5,7 +5,6 @@ The audio data is in streaming, the asr inference process is in offline.
 ## For the Server

 ### Prepare server environment
-#### Backend is modelscope pipeline (default)
 Install the modelscope and funasr

 ```shell
@@ -22,18 +21,6 @@ cd funasr/runtime/python/grpc
 pip install -r requirements_server.txt
 ```

-#### Backend is funasr_onnx (optional)
-
-Install [`funasr_onnx`](https://github.com/alibaba-damo-academy/FunASR/tree/main/funasr/runtime/python/onnxruntime).
-
-```
-pip install funasr_onnx -i https://pypi.Python.org/simple
-```
-
-Export the model, more details ref to [export docs](https://github.com/alibaba-damo-academy/FunASR/tree/main/funasr/runtime/python/onnxruntime).
-```shell
-python -m funasr.export.export_model --model-name damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch --export-dir ./export --type onnx --quantize True
-```
 
 
 ### Generate protobuf file
 Run on server, the two generated pb files are both used for server and client
@@ -51,11 +38,6 @@ python -m grpc_tools.protoc  --proto_path=./proto -I ./proto    --python_out=. -
 python grpc_main_server.py --port 10095 --backend pipeline
 ```

-If you want run server with onnxruntime, please set `backend` and `onnx_dir`.
-```
-# Start server.
-python grpc_main_server.py --port 10095 --backend onnxruntime --onnx_dir /models/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch
-```
 
 
 ## For the client

@@ -87,9 +69,5 @@ python grpc_main_client_mic.py --host 127.0.0.1 --port 10095
 
 
 <div align="left"><img src="proto/workflow.png" width="400"/>

-## Reference
-We borrow from or refer to some code as:
-
-1)https://github.com/wenet-e2e/wenet/tree/main/runtime/core/grpc
-
-2)https://github.com/Open-Speech-EkStep/inference_service/blob/main/realtime_inference_service.py
+## Acknowledge
+1. This project is maintained by [FunASR community](https://github.com/alibaba-damo-academy/FunASR).
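With the onnxruntime backend removed by this commit, the commands that remain in the Readme's context lines describe a single pipeline-backend workflow. A rough end-to-end sketch assembled only from those surviving lines (paths, port, and flags as they appear in the diff, not verified here):

```shell
# Install server dependencies (from the retained context lines).
cd funasr/runtime/python/grpc
pip install -r requirements_server.txt

# Start the gRPC server with the default modelscope pipeline backend.
python grpc_main_server.py --port 10095 --backend pipeline

# In another shell, run the microphone client against the local server.
python grpc_main_client_mic.py --host 127.0.0.1 --port 10095
```

The protobuf-generation step (`grpc_tools.protoc`, truncated in the hunk header above) still has to run on the server before the first start.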