
Automated deployment: Wed Nov 15 08:25:51 UTC 2023 12d694f94cb4d25ee9a981a16f9197e4bb13b4eb

LauraGPT 2 years ago
Parent
Commit
5a7ec4120d

+ 10 - 4
en/_sources/runtime/python/libtorch/README.md.txt

@@ -1,6 +1,7 @@
 # Libtorch-python
 
 ## Export the model
+
 ### Install [modelscope and funasr](https://github.com/alibaba-damo-academy/FunASR#installation)
 
 ```shell
@@ -18,14 +19,16 @@ pip install onnx onnxruntime # Optional, for onnx quantization
 python -m funasr.export.export_model --model-name damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch --export-dir ./export --type torch --quantize True
 ```
 
-## Install the `funasr_torch`.
-    
+## Install the `funasr_torch`
+
 install from pip
+
 ```shell
 pip install -U funasr_torch
 # For the users in China, you could install with the command:
 # pip install -U funasr_torch -i https://mirror.sjtu.edu.cn/pypi/web/simple
 ```
+
 or install from source code
 
 ```shell
@@ -36,11 +39,13 @@ pip install -e ./
 # pip install -e ./ -i https://mirror.sjtu.edu.cn/pypi/web/simple
 ```
 
-## Run the demo.
+## Run the demo
+
 - Model_dir: the model path, which contains `model.torchscripts`, `config.yaml`, `am.mvn`.
 - Input: wav format file, supported formats: `str, np.ndarray, List[str]`
 - Output: `List[str]`: recognition result.
 - Example:
+
      ```python
      from funasr_torch import Paraformer
 
@@ -55,7 +60,7 @@ pip install -e ./
 
 ## Performance benchmark
 
-Please refer to [benchmark](https://github.com/alibaba-damo-academy/FunASR/blob/main/funasr/runtime/python/benchmark_libtorch.md)
+Please refer to [benchmark](https://github.com/alibaba-damo-academy/FunASR/blob/main/runtime/docs/benchmark_libtorch.md)
 
 ## Speed
 
@@ -70,4 +75,5 @@ Test [wav, 5.53s, 100 times avg.](https://isv-data.oss-cn-hangzhou.aliyuncs.com/
 |   Onnx   |   0.038    |
 
 ## Acknowledge
+
 This project is maintained by [FunASR community](https://github.com/alibaba-damo-academy/FunASR).
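The Run-the-demo section in the diff above states that input may be a `str`, `np.ndarray`, or `List[str]`, and output is `List[str]`. A minimal sketch of how a caller might normalize those input variants before handing them to the recognizer (the helper name is hypothetical and not part of `funasr_torch`):

```python
import numpy as np

def normalize_wav_input(wav_in):
    """Normalize the demo's supported input formats (str, np.ndarray,
    List[str]) into a list of items. Illustrative only; funasr_torch
    performs its own input handling internally."""
    if isinstance(wav_in, str):
        # single wav file path
        return [wav_in]
    if isinstance(wav_in, np.ndarray):
        # raw waveform samples
        return [wav_in]
    if isinstance(wav_in, list) and all(isinstance(p, str) for p in wav_in):
        # batch of wav file paths
        return list(wav_in)
    raise TypeError("expected str, np.ndarray, or List[str]")
```

Wrapping every variant in a list mirrors the documented output shape (`List[str]`), so batch and single-file calls can share one code path.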

+ 1 - 1
en/_sources/runtime/python/onnxruntime/README.md.txt

@@ -180,7 +180,7 @@ Output: `List[str]`: recognition result
 
 ## Performance benchmark
 
-Please refer to [benchmark](https://github.com/alibaba-damo-academy/FunASR/blob/main/funasr/runtime/docs/benchmark_onnx.md)
+Please refer to [benchmark](https://github.com/alibaba-damo-academy/FunASR/blob/main/runtime/docs/benchmark_onnx.md)
 
 ## Acknowledge
 

BIN
en/objects.inv


+ 3 - 3
en/runtime/python/libtorch/README.html

@@ -149,7 +149,7 @@ pip install onnx onnxruntime <span class="c1"># Optional, for onnx quantization<
 </div>
 </div>
 <div class="section" id="install-the-funasr-torch">
-<h2>Install the <code class="docutils literal notranslate"><span class="pre">funasr_torch</span></code>.<a class="headerlink" href="#install-the-funasr-torch" title="Permalink to this headline"></a></h2>
+<h2>Install the <code class="docutils literal notranslate"><span class="pre">funasr_torch</span></code><a class="headerlink" href="#install-the-funasr-torch" title="Permalink to this headline"></a></h2>
 <p>install from pip</p>
 <div class="highlight-shell notranslate"><div class="highlight"><pre><span></span>pip install -U funasr_torch
 <span class="c1"># For the users in China, you could install with the command:</span>
@@ -166,7 +166,7 @@ pip install -e ./
 </div>
 </div>
 <div class="section" id="run-the-demo">
-<h2>Run the demo.<a class="headerlink" href="#run-the-demo" title="Permalink to this headline"></a></h2>
+<h2>Run the demo<a class="headerlink" href="#run-the-demo" title="Permalink to this headline"></a></h2>
 <ul>
 <li><p>Model_dir: the model path, which contains <code class="docutils literal notranslate"><span class="pre">model.torchscripts</span></code>, <code class="docutils literal notranslate"><span class="pre">config.yaml</span></code>, <code class="docutils literal notranslate"><span class="pre">am.mvn</span></code>.</p></li>
<li><p>Input: wav format file, supported formats: <code class="docutils literal notranslate"><span class="pre">str,</span> <span class="pre">np.ndarray,</span> <span class="pre">List[str]</span></code></p></li>
@@ -188,7 +188,7 @@ pip install -e ./
 </div>
 <div class="section" id="performance-benchmark">
 <h2>Performance benchmark<a class="headerlink" href="#performance-benchmark" title="Permalink to this headline"></a></h2>
-<p>Please refer to <a class="reference external" href="https://github.com/alibaba-damo-academy/FunASR/blob/main/funasr/runtime/python/benchmark_libtorch.md">benchmark</a></p>
+<p>Please refer to <a class="reference external" href="https://github.com/alibaba-damo-academy/FunASR/blob/main/runtime/docs/benchmark_libtorch.md">benchmark</a></p>
 </div>
 <div class="section" id="speed">
 <h2>Speed<a class="headerlink" href="#speed" title="Permalink to this headline"></a></h2>

+ 1 - 1
en/runtime/python/onnxruntime/README.html

@@ -305,7 +305,7 @@ pip install -e ./
 </div>
 <div class="section" id="performance-benchmark">
 <h2>Performance benchmark<a class="headerlink" href="#performance-benchmark" title="Permalink to this headline"></a></h2>
-<p>Please refer to <a class="reference external" href="https://github.com/alibaba-damo-academy/FunASR/blob/main/funasr/runtime/docs/benchmark_onnx.md">benchmark</a></p>
+<p>Please refer to <a class="reference external" href="https://github.com/alibaba-damo-academy/FunASR/blob/main/runtime/docs/benchmark_onnx.md">benchmark</a></p>
 </div>
 <div class="section" id="acknowledge">
 <h2>Acknowledge<a class="headerlink" href="#acknowledge" title="Permalink to this headline"></a></h2>