游雁 2 years ago
Parent
Commit
f8c740d5a8

+ 1 - 170
funasr/runtime/onnxruntime/readme.md

@@ -1,170 +1 @@
-# ONNXRuntime-cpp
-
-## Export the model
-### Install [modelscope and funasr](https://github.com/alibaba-damo-academy/FunASR#installation)
-
-```shell
-# pip3 install torch torchaudio
-pip install -U modelscope funasr
-# For the users in China, you could install with the command:
-# pip install -U modelscope funasr -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html -i https://mirror.sjtu.edu.cn/pypi/web/simple
-```
-
-### Export [onnx model](https://github.com/alibaba-damo-academy/FunASR/tree/main/funasr/export)
-
-```shell
-python -m funasr.export.export_model --model-name damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch --export-dir ./export --type onnx --quantize True
-```
-
-## Building for Linux/Unix
-
-### Download onnxruntime
-```shell
-# download an appropriate onnxruntime from https://github.com/microsoft/onnxruntime/releases/tag/v1.14.0
-# here we get a copy of onnxruntime for linux 64
-wget https://github.com/microsoft/onnxruntime/releases/download/v1.14.0/onnxruntime-linux-x64-1.14.0.tgz
-tar -zxvf onnxruntime-linux-x64-1.14.0.tgz
-```
-
-### Install openblas
-```shell
-sudo apt-get install libopenblas-dev #ubuntu
-# sudo yum -y install openblas-devel #centos
-```
-
-### Build runtime
-```shell
-git clone https://github.com/alibaba-damo-academy/FunASR.git && cd FunASR/funasr/runtime/onnxruntime
-mkdir build && cd build
-cmake  -DCMAKE_BUILD_TYPE=release .. -DONNXRUNTIME_DIR=/path/to/onnxruntime-linux-x64-1.14.0
-make
-```
-## Run the demo
-
-### funasr-onnx-offline
-```shell
-./funasr-onnx-offline     --model-dir <string> [--quantize <string>]
-                          [--vad-dir <string>] [--vad-quant <string>]
-                          [--punc-dir <string>] [--punc-quant <string>]
-                          --wav-path <string> [--] [--version] [-h]
-Where:
-   --model-dir <string>
-     (required)  the asr model path, which contains model.onnx, config.yaml, am.mvn
-   --quantize <string>
-     false (Default), load the model of model.onnx in model_dir. If set true, load the model of model_quant.onnx in model_dir
-
-   --vad-dir <string>
-     the vad model path, which contains model.onnx, vad.yaml, vad.mvn
-   --vad-quant <string>
-     false (Default), load the model of model.onnx in vad_dir. If set true, load the model of model_quant.onnx in vad_dir
-
-   --punc-dir <string>
-     the punc model path, which contains model.onnx, punc.yaml
-   --punc-quant <string>
-     false (Default), load the model of model.onnx in punc_dir. If set true, load the model of model_quant.onnx in punc_dir
-
-   --wav-path <string>
-     (required)  the input could be: 
-      wav_path, e.g.: asr_example.wav;
-      pcm_path, e.g.: asr_example.pcm; 
-      wav.scp, kaldi style wav list (wav_id \t wav_path)
-  
-   Required: --model-dir <string> --wav-path <string>
-   If use vad, please add: --vad-dir <string>
-   If use punc, please add: --punc-dir <string>
-
-For example:
-./funasr-onnx-offline \
-    --model-dir    ./asrmodel/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch \
-    --quantize  true \
-    --vad-dir   ./asrmodel/speech_fsmn_vad_zh-cn-16k-common-pytorch \
-    --punc-dir  ./asrmodel/punc_ct-transformer_zh-cn-common-vocab272727-pytorch \
-    --wav-path    ./vad_example.wav
-```
-
-### funasr-onnx-offline-vad
-```shell
-./funasr-onnx-offline-vad     --model-dir <string> [--quantize <string>]
-                              --wav-path <string> [--] [--version] [-h]
-Where:
-   --model-dir <string>
-     (required)  the vad model path, which contains model.onnx, vad.yaml, vad.mvn
-   --quantize <string>
-     false (Default), load the model of model.onnx in model_dir. If set true, load the model of model_quant.onnx in model_dir
-   --wav-path <string>
-     (required)  the input could be: 
-      wav_path, e.g.: asr_example.wav;
-      pcm_path, e.g.: asr_example.pcm; 
-      wav.scp, kaldi style wav list (wav_id \t wav_path)
-
-   Required: --model-dir <string> --wav-path <string>
-
-For example:
-./funasr-onnx-offline-vad \
-    --model-dir   ./asrmodel/speech_fsmn_vad_zh-cn-16k-common-pytorch \
-    --wav-path    ./vad_example.wav
-```
-
-### funasr-onnx-offline-punc
-```shell
-./funasr-onnx-offline-punc    --model-dir <string> [--quantize <string>]
-                              --txt-path <string> [--] [--version] [-h]
-Where:
-   --model-dir <string>
-     (required)  the punc model path, which contains model.onnx, punc.yaml
-   --quantize <string>
-     false (Default), load the model of model.onnx in model_dir. If set true, load the model of model_quant.onnx in model_dir
-   --txt-path <string>
-     (required)  txt file path, one sentence per line
-
-   Required: --model-dir <string> --txt-path <string>
-
-For example:
-./funasr-onnx-offline-punc \
-    --model-dir  ./asrmodel/punc_ct-transformer_zh-cn-common-vocab272727-pytorch \
-    --txt-path   ./punc_example.txt
-```
-### funasr-onnx-offline-rtf
-```shell
-./funasr-onnx-offline-rtf     --model-dir <string> [--quantize <string>]
-                              [--vad-dir <string>] [--vad-quant <string>]
-                              [--punc-dir <string>] [--punc-quant <string>]
-                              --wav-path <string> --thread-num <int32_t>
-                              [--] [--version] [-h]
-Where:
-   --thread-num <int32_t>
-     (required)  multi-thread num for rtf
-   --model-dir <string>
-     (required)  the model path, which contains model.onnx, config.yaml, am.mvn
-   --quantize <string>
-     false (Default), load the model of model.onnx in model_dir. If set true, load the model of model_quant.onnx in model_dir
-
-   --vad-dir <string>
-     the vad model path, which contains model.onnx, vad.yaml, vad.mvn
-   --vad-quant <string>
-     false (Default), load the model of model.onnx in vad_dir. If set true, load the model of model_quant.onnx in vad_dir
-
-   --punc-dir <string>
-     the punc model path, which contains model.onnx, punc.yaml
-   --punc-quant <string>
-     false (Default), load the model of model.onnx in punc_dir. If set true, load the model of model_quant.onnx in punc_dir
-     
-   --wav-path <string>
-     (required)  the input could be: 
-      wav_path, e.g.: asr_example.wav;
-      pcm_path, e.g.: asr_example.pcm; 
-      wav.scp, kaldi style wav list (wav_id \t wav_path)
-
-For example:
-./funasr-onnx-offline-rtf \
-    --model-dir    ./asrmodel/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch \
-    --quantize  true \
-    --wav-path     ./aishell1_test.scp  \
-    --thread-num 32
-```
-
-## Acknowledge
-1. This project is maintained by [FunASR community](https://github.com/alibaba-damo-academy/FunASR).
-2. We acknowledge mayong for contributing the onnxruntime of Paraformer and CT_Transformer, [repo-asr](https://github.com/RapidAI/RapidASR/tree/main/cpp_onnx), [repo-punc](https://github.com/RapidAI/RapidPunc).
-3. We acknowledge [ChinaTelecom](https://github.com/zhuzizyf/damo-fsmn-vad-infer-httpserver) for contributing the VAD runtime.
-4. We borrowed a lot of code from [FastASR](https://github.com/chenkui164/FastASR) for audio frontend and text-postprocess.
+Please refer to the [websocket service](https://github.com/alibaba-damo-academy/FunASR/tree/main/funasr/runtime/websocket)

+ 128 - 105
funasr/runtime/websocket/readme.md

@@ -2,157 +2,180 @@
 
 # Service with websocket-cpp
 
-## Export the model
-### Install [modelscope and funasr](https://github.com/alibaba-damo-academy/FunASR#installation)
 
-```shell
-# pip3 install torch torchaudio
-pip3 install -U modelscope funasr
-# For the users in China, you could install with the command:
-# pip3 install -U modelscope funasr -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html -i https://mirror.sjtu.edu.cn/pypi/web/simple
-```
+## Quick Start
+### Docker Image start
 
-### Export [onnx model](https://github.com/alibaba-damo-academy/FunASR/tree/main/funasr/export)
+Pull and start the FunASR runtime-SDK Docker image using the following command:
 
 ```shell
-python -m funasr.export.export_model \
---export-dir ./export \
---type onnx \
---quantize True \
---model-name damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch \
---model-name damo/speech_fsmn_vad_zh-cn-16k-common-pytorch \
---model-name damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch
+sudo docker pull registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-cpu-0.1.0
+
+sudo docker run -p 10095:10095 -it --privileged=true -v /root:/workspace/models registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-cpu-0.1.0
 ```
 
-## Building for Linux/Unix
+If you have not installed Docker, please refer to [Docker Installation](https://alibaba-damo-academy.github.io/FunASR/en/installation/docker.html).
+
+### Server Start
+
+After Docker is started, start the funasr-wss-server service program:
 
-### Download onnxruntime
 ```shell
-# download an appropriate onnxruntime from https://github.com/microsoft/onnxruntime/releases/tag/v1.14.0
-# here we get a copy of onnxruntime for linux 64
-wget https://github.com/microsoft/onnxruntime/releases/download/v1.14.0/onnxruntime-linux-x64-1.14.0.tgz
-tar -zxvf onnxruntime-linux-x64-1.14.0.tgz
+cd FunASR/funasr/runtime
+./run_server.sh \
+  --download-model-dir /workspace/models \
+  --vad-dir damo/speech_fsmn_vad_zh-cn-16k-common-onnx \
+  --model-dir damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-onnx  \
+  --punc-dir damo/punc_ct-transformer_zh-cn-common-vocab272727-onnx
 ```
+For detailed server parameters, please refer to the [API reference](#api-reference).
+
+### Client Testing and Usage
+
+Download the client test tool directory samples:
 
-### Download ffmpeg
 ```shell
-wget https://github.com/BtbN/FFmpeg-Builds/releases/download/autobuild-2023-07-09-12-50/ffmpeg-N-111383-g20b8688092-linux64-gpl-shared.tar.xz
-tar -xvf ffmpeg-N-111383-g20b8688092-linux64-gpl-shared.tar.xz
-# 国内可以使用下述方式
-# wget https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/dep_libs/ffmpeg-N-111383-g20b8688092-linux64-gpl-shared.tar.xz
-# tar -xvf ffmpeg-N-111383-g20b8688092-linux64-gpl-shared.tar.xz
+wget https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/sample/funasr_samples.tar.gz
+tar -xzf funasr_samples.tar.gz
 ```
 
-### Install openblas
+We take the Python client as an example. It supports various audio formats (.wav, .pcm, .mp3, etc.), video input (.mp4, etc.), and multi-file wav.scp lists. For clients in other languages, please refer to [Client Usage](#client-usage). For customized service deployment, please refer to [How to Customize Service Deployment](#how-to-customize-service-deployment).
+
 ```shell
-sudo apt-get install libopenblas-dev #ubuntu
-# sudo yum -y install openblas-devel #centos
+python3 wss_client_asr.py --host "127.0.0.1" --port 10095 --mode offline --audio_in "../audio/asr_example.wav"
 ```
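Before running a client, it can help to confirm that something is listening on the service port. A minimal sketch, assuming a bash shell and the quick-start defaults above (host `127.0.0.1`, port `10095`); the probe itself is illustrative, not part of FunASR:

```shell
#!/usr/bin/env bash
# Probe the websocket service port using bash's /dev/tcp redirection.
# Prints "reachable" if a TCP connection succeeds, "not reachable" otherwise.
host=127.0.0.1
port=10095
if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
  echo "${host}:${port} reachable"
else
  echo "${host}:${port} not reachable"
fi
```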
 
-### Build runtime
-required openssl lib
+## Building for Linux/Unix
+
+### Dependencies Download and Install
+
+The third-party libraries have been pre-installed in Docker. If not using Docker, please download and install them manually ([Download and Install Third-Party Libraries](requirements_install.md)).
 
-```shell
-apt-get install libssl-dev #ubuntu 
-# yum install openssl-devel #centos
 
+### Build runtime
 
+```shell
 git clone https://github.com/alibaba-damo-academy/FunASR.git && cd FunASR/funasr/runtime/websocket
 mkdir build && cd build
 cmake  -DCMAKE_BUILD_TYPE=release .. -DONNXRUNTIME_DIR=/path/to/onnxruntime-linux-x64-1.14.0 -DFFMPEG_DIR=/path/to/ffmpeg-N-111383-g20b8688092-linux64-gpl-shared
 make
 ```
-## Run the websocket server
 
+
+### Start Service Deployment
+
+#### API-reference:
+```text
+--download-model-dir Model download directory; the model is downloaded from Modelscope by model ID. Can be omitted when starting from a local model.
+--model-dir ASR model ID in Modelscope or the absolute path of local model
+--quantize True for quantized ASR model, False for non-quantized ASR model. Default is True.
+--vad-dir VAD model ID in Modelscope or the absolute path of local model
+--vad-quant True for quantized VAD model, False for non-quantized VAD model. Default is True.
+--punc-dir PUNC model ID in Modelscope or the absolute path of local model
+--punc-quant True for quantized PUNC model, False for non-quantized PUNC model. Default is True.
+--port Port number for the server to listen on. Default is 10095.
+--decoder-thread-num Number of inference threads started by the server. Default is 8.
+--io-thread-num Number of IO threads started by the server. Default is 1.
+--certfile SSL certificate file. Default is: ../../../ssl_key/server.crt.
+--keyfile SSL key file. Default is: ../../../ssl_key/server.key.
+```
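The `--certfile`/`--keyfile` defaults point at a pre-existing key pair; for local WSS testing you can generate your own self-signed pair with openssl. A hedged sketch (the `ssl_key` directory name and `CN=localhost` subject are illustrative choices, not project requirements):

```shell
# Generate a self-signed certificate/key pair for local WSS testing only.
mkdir -p ssl_key
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ssl_key/server.key -out ssl_key/server.crt \
  -subj "/CN=localhost"
# Inspect the subject of the freshly generated certificate.
openssl x509 -in ssl_key/server.crt -noout -subject
```

Point `--certfile`/`--keyfile` at the generated files when starting the server.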
+
+#### Example of Starting from Modelscope
 ```shell
-cd bin
-./funasr-wss-server [--download-model-dir <string>]
-                    [--model-thread-num <int>] [--decoder-thread-num <int>]
-                    [--io-thread-num <int>] [--port <int>] [--listen_ip
-                    <string>] [--punc-quant <string>] [--punc-dir <string>]
-                    [--vad-quant <string>] [--vad-dir <string>] [--quantize
-                    <string>] --model-dir <string> [--keyfile <string>]
-                    [--certfile <string>] [--] [--version] [-h]
-Where:
-   --download-model-dir <string>
-     Download model from Modelscope to download_model_dir
-
-   --model-dir <string>
-     default: /workspace/models/asr, the asr model path, which contains model_quant.onnx, config.yaml, am.mvn
-   --quantize <string>
-     true (Default), load the model of model_quant.onnx in model_dir. If set false, load the model of model.onnx in model_dir
-
-   --vad-dir <string>
-     default: /workspace/models/vad, the vad model path, which contains model_quant.onnx, vad.yaml, vad.mvn
-   --vad-quant <string>
-     true (Default), load the model of model_quant.onnx in vad_dir. If set false, load the model of model.onnx in vad_dir
-
-   --punc-dir <string>
-     default: /workspace/models/punc, the punc model path, which contains model_quant.onnx, punc.yaml
-   --punc-quant <string>
-     true (Default), load the model of model_quant.onnx in punc_dir. If set false, load the model of model.onnx in punc_dir
-
-   --decoder-thread-num <int>
-     number of threads for decoder, default:8
-   --io-thread-num <int>
-     number of threads for network io, default:8
-   --port <int>
-     listen port, default:10095
-   --certfile <string>
-     default: ../../../ssl_key/server.crt, path of certficate for WSS connection. if it is empty, it will be in WS mode.
-   --keyfile <string>
-     default: ../../../ssl_key/server.key, path of keyfile for WSS connection
-  
-example:
-# you can use models downloaded from modelscope or local models:
-# download models from modelscope
 ./funasr-wss-server  \
   --download-model-dir /workspace/models \
   --model-dir damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-onnx \
   --vad-dir damo/speech_fsmn_vad_zh-cn-16k-common-onnx \
   --punc-dir damo/punc_ct-transformer_zh-cn-common-vocab272727-onnx
+```
+
+Note: In the above example, `model-dir`, `vad-dir`, and `punc-dir` are model IDs in Modelscope; the models are downloaded directly from Modelscope and exported as quantized onnx. To start from a local model, set these parameters to the absolute paths of the local models.
+
 
-# load models from local paths
+#### Example of Starting from Local Model
+
+##### Export the Model
+
+```shell
+python -m funasr.export.export_model \
+--export-dir ./export \
+--type onnx \
+--quantize True \
+--model-name damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch \
+--model-name damo/speech_fsmn_vad_zh-cn-16k-common-pytorch \
+--model-name damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch
+```
+
+For a detailed introduction to model export, see the [docs](https://github.com/alibaba-damo-academy/FunASR/tree/main/funasr/export).
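After export, the local model directories used in the next step are assumed to look roughly as follows (the file names come from the server's model requirements: `model.onnx`/`model_quant.onnx`, `config.yaml`, `am.mvn`, `vad.yaml`, `vad.mvn`, `punc.yaml`; the exact directory names depend on the `--model-name` values):

```text
export/
└── damo/
    ├── speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/
    │   ├── model.onnx
    │   ├── model_quant.onnx
    │   ├── config.yaml
    │   └── am.mvn
    ├── speech_fsmn_vad_zh-cn-16k-common-pytorch/
    │   └── ... (model_quant.onnx, vad.yaml, vad.mvn)
    └── punc_ct-transformer_zh-cn-common-vocab272727-pytorch/
        └── ... (model_quant.onnx, punc.yaml)
```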
+
+##### Start the Service
+```shell
 ./funasr-wss-server  \
-  --model-dir /workspace/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-onnx \
-  --vad-dir /workspace/models/damo/speech_fsmn_vad_zh-cn-16k-common-onnx \
-  --punc-dir /workspace/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-onnx
+  --download-model-dir /workspace/models \
+  --model-dir ./export/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-onnx \
+  --vad-dir ./export/damo/speech_fsmn_vad_zh-cn-16k-common-onnx \
+  --punc-dir ./export/damo/punc_ct-transformer_zh-cn-common-vocab272727-onnx
+```
 
+### Client Usage
+
+
+Download the client test tool directory [samples](https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/sample/funasr_samples.tar.gz)
+
+```shell
+wget https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/sample/funasr_samples.tar.gz
+tar -xzf funasr_samples.tar.gz
+```
+
+After deploying the FunASR service on the server, you can test and use the offline file transcription service through the following steps. Clients in the following programming languages are currently supported:
+
+- [Python](#python-client)
+- [CPP](#cpp-client)
+- [Html](#html-client)
+- [Java](#java-client)
+
+#### python-client
+
+If you want to run the client directly for testing, you can refer to the following simple instructions, taking the Python version as an example:
+```shell
+python3 wss_client_asr.py --host "127.0.0.1" --port 10095 --mode offline --audio_in "../audio/asr_example.wav" --output_dir "./results"
+```
+
+API-reference
+```text
+--host: IP address of the machine where the FunASR runtime-SDK service is deployed. Defaults to 127.0.0.1 (localhost). If the client and the service are not on the same machine, change it to the IP address of the deployment machine.
+--port: The port number of the deployed service. Default is 10095.
+--mode: "offline" means offline file transcription.
+--audio_in: The audio file that needs to be transcribed, which supports file path and file list (wav.scp).
+--output_dir: The path to save the recognition result.
 ```
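`--audio_in` also accepts a kaldi-style wav.scp list, one `wav_id <tab> wav_path` per line. A minimal sketch of building such a list from a directory of wav files (the `demo_audio` directory and file names are only for illustration):

```shell
# Build a kaldi-style wav.scp: one "<wav_id>\t<wav_path>" line per file,
# using each file's basename (without extension) as the id.
mkdir -p demo_audio
touch demo_audio/a.wav demo_audio/b.wav
for f in demo_audio/*.wav; do
  id=$(basename "$f" .wav)
  printf '%s\t%s\n' "$id" "$f"
done > wav.scp
cat wav.scp
```

Pass the resulting file with `--audio_in wav.scp` to transcribe the whole list in one run.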
 
-## Run websocket client test
+#### cpp-client
+
+In the samples/cpp directory, run the CPP client as follows:
 
 ```shell
-./funasr-wss-client  --server-ip <string>
-                    --port <string>
-                    --wav-path <string>
-                    [--thread-num <int>] 
-                    [--is-ssl <int>]  [--]
-                    [--version] [-h]
+./funasr-wss-client --server-ip 127.0.0.1 --port 10095 --wav-path ../audio/asr_example.wav
+```
 
-Where:
-   --server-ip <string>
-     (required)  server-ip
+API-reference:
 
-   --port <string>
-     (required)  port
+```text
+--server-ip: IP address of the machine where the FunASR runtime-SDK service is deployed. Defaults to 127.0.0.1 (localhost). If the client and the service are not on the same machine, change it to the IP address of the deployment machine.
+--port: The port number of the deployed service. Default is 10095.
+--wav-path: The audio file to be transcribed, given as a file path.
+```
 
-   --wav-path <string>
-     (required)  the input could be: wav_path, e.g.: asr_example.wav;
-     pcm_path, e.g.: asr_example.pcm; wav.scp, kaldi style wav list (wav_id \t wav_path)
+#### Html-client
 
-   --thread-num <int>
-     thread-num
+Open `html/static/index.html` in a browser to see the page below; it supports microphone input and file upload for a direct experience.
 
-   --is-ssl <int>
-     is-ssl is 1 means use wss connection, or use ws connection
+<img src="images/html.png"  width="900"/>
 
-example:
-./funasr-wss-client --server-ip 127.0.0.1 --port 10095 --wav-path test.wav --thread-num 1 --is-ssl 1
+#### Java-client
 
-result json, example like:
-{"mode":"offline","text":"欢迎大家来体验达摩院推出的语音识别模型","wav_name":"wav2"}
+```shell
+FunasrWsClient --host localhost --port 10095 --audio_in ./asr_example.wav --mode offline
 ```
+For more details, please refer to the [documentation](../java/readme.md).
 
 
 ## Acknowledge

+ 21 - 33
funasr/runtime/websocket/readme_zh.md

@@ -3,7 +3,7 @@
 # C++ deployment with the websocket protocol
 
 ## Quick Start
-### Image start
+### Start the docker image
 
 Pull and start the FunASR runtime-SDK docker image with the following command:
 
@@ -12,7 +12,7 @@ sudo docker pull registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-run
 
 sudo docker run -p 10095:10095 -it --privileged=true -v /root:/workspace/models registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-cpu-0.1.0
 ```
-If you have not installed docker, please refer to [Docker Installation](#Docker安装)
+If you have not installed docker, please refer to [Docker Installation](https://alibaba-damo-academy.github.io/FunASR/en/installation/docker.html)
 
 ### Server Start
 
@@ -25,7 +25,7 @@ cd FunASR/funasr/runtime
   --model-dir damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-onnx  \
   --punc-dir damo/punc_ct-transformer_zh-cn-common-vocab272727-onnx
 ```
-For detailed server parameters, please refer to [Server Parameters](#服务端参数介绍)
+For detailed server parameters, please refer to [Server Parameters](#命令参数介绍)
 
 ### Client Testing and Usage
 
@@ -44,22 +44,8 @@ python3 wss_client_asr.py --host "127.0.0.1" --port 10095 --mode offline --audio
 
 ### Dependency Library Download
 
-#### Download onnxruntime
-```shell
-bash third_party/download_onnxruntime.sh
-```
-
-#### Download ffmpeg
-```shell
-bash third_party/download_ffmpeg.sh
-```
+Third-party dependencies are pre-installed in Docker. If you are not using Docker, please download and install them manually ([Third-Party Library Download and Installation](requirements_install.md)).
 
-#### Install openblas and openssl
-```shell
-sudo apt-get install libopenblas-dev libssl-dev #ubuntu
-# sudo yum -y install openblas-devel openssl-devel #centos
-
-```
 
 ### Build
 
@@ -72,6 +58,22 @@ make
 
 ### Start Service Deployment
 
+#### Command parameters:
+```text
+--download-model-dir Model download directory; the model is downloaded from Modelscope by model ID. Can be omitted when starting from a local model.
+--model-dir ASR model ID in Modelscope, or the absolute path of a local model
+--quantize True for the quantized ASR model, False for the non-quantized model. Default is True.
+--vad-dir VAD model ID in Modelscope, or the absolute path of a local model
+--vad-quant True for the quantized VAD model, False for the non-quantized model. Default is True.
+--punc-dir PUNC model ID in Modelscope, or the absolute path of a local model
+--punc-quant True for the quantized PUNC model, False for the non-quantized model. Default is True.
+--port Port number for the server to listen on. Default is 10095.
+--decoder-thread-num Number of inference threads started by the server. Default is 8.
+--io-thread-num Number of IO threads started by the server. Default is 1.
+--certfile SSL certificate file. Default is ../../../ssl_key/server.crt
+--keyfile SSL key file. Default is ../../../ssl_key/server.key
+```
+
 #### Example of starting from a Modelscope model
 ```shell
 ./funasr-wss-server  \
@@ -107,21 +109,7 @@ python -m funasr.export.export_model \
   --punc-dir ./export/damo/punc_ct-transformer_zh-cn-common-vocab272727-onnx
 ```
 
-#### Command parameters:
-```text
---download-model-dir Model download directory; the model is downloaded from Modelscope by model ID. Can be omitted when starting from a local model.
---model-dir ASR model ID in Modelscope, or the absolute path of a local model
---quantize True for the quantized ASR model, False for the non-quantized model. Default is True.
---vad-dir VAD model ID in Modelscope, or the absolute path of a local model
---vad-quant True for the quantized VAD model, False for the non-quantized model. Default is True.
---punc-dir PUNC model ID in Modelscope, or the absolute path of a local model
---punc-quant True for the quantized PUNC model, False for the non-quantized model. Default is True.
---port Port number for the server to listen on. Default is 10095.
---decoder-thread-num Number of inference threads started by the server. Default is 8.
---io-thread-num Number of IO threads started by the server. Default is 1.
---certfile SSL certificate file. Default is ../../../ssl_key/server.crt
---keyfile SSL key file. Default is ../../../ssl_key/server.key
-```
+
 
 ### Detailed Client Usage
 

+ 15 - 0
funasr/runtime/websocket/requirements_install.md

@@ -0,0 +1,15 @@
+#### Download onnxruntime
+```shell
+bash third_party/download_onnxruntime.sh
+```
+
+#### Download ffmpeg
+```shell
+bash third_party/download_ffmpeg.sh
+```
+
+#### Install openblas and openssl
+```shell
+sudo apt-get install libopenblas-dev libssl-dev #ubuntu
+# sudo yum -y install openblas-devel openssl-devel #centos
+```
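
After installing, a quick way to confirm the linker can find the libraries is to query the ldconfig cache. A hedged sketch (Linux-only; whether each library is reported as found depends on your system):

```shell
# Report whether the OpenBLAS and OpenSSL shared libraries are registered
# with the dynamic linker; prints one "found"/"missing" line per library.
for lib in openblas ssl; do
  if ldconfig -p 2>/dev/null | grep -q "lib${lib}\.so"; then
    echo "lib${lib}: found"
  else
    echo "lib${lib}: missing"
  fi
done
```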