嘉渊 2 years ago
parent
commit
f97e0eb9ee

+ 1 - 1
docs/build_task.md

@@ -103,7 +103,7 @@ def build_model(cls, args, train):
         )
     return model
 ```
-This function defines the details of the model. Different speech recognition models can usually share the same speech recognition `Task`; all that remains is to define the specific model in this function. For example, a speech recognition model with a standard encoder-decoder structure is shown above. Specifically, it first defines each module of the model (encoder, decoder, etc.) and then combines these modules into a complete model. In FunASR, the model needs to inherit `AbsESPnetModel` (see `funasr/train/abs_espnet_model.py`); the main function that needs to be implemented is `forward`.
+This function defines the details of the model. Different speech recognition models can usually share the same speech recognition `Task`; all that remains is to define the specific model in this function. For example, a speech recognition model with a standard encoder-decoder structure is shown above. Specifically, it first defines each module of the model (encoder, decoder, etc.) and then combines these modules into a complete model. In FunASR, the model needs to inherit `FunASRModel` (see `funasr/models/base_model.py`); the main function that needs to be implemented is `forward`.
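
To make this contract concrete, here is a minimal, hypothetical sketch of such a model: it inherits `FunASRModel` and implements `forward`. The `(loss, stats, weight)` return convention, the `collect_feats` hook, and the encoder/decoder wiring are assumptions used only for illustration, not code taken from FunASR:
```python
from typing import Dict, Tuple

import torch

from funasr.models.base_model import FunASRModel


class ToyEncDecModel(FunASRModel):
    """Toy model: not FunASR code, just the inheritance/forward contract."""

    def __init__(self, encoder: torch.nn.Module, decoder: torch.nn.Module):
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder

    def forward(
        self,
        speech: torch.Tensor,
        speech_lengths: torch.Tensor,
        text: torch.Tensor,
        text_lengths: torch.Tensor,
    ) -> Tuple[torch.Tensor, Dict[str, torch.Tensor], torch.Tensor]:
        # encode the padded speech, then let the decoder produce a training loss
        enc_out, enc_lens = self.encoder(speech, speech_lengths)
        loss = self.decoder(enc_out, enc_lens, text, text_lengths)
        # (loss, stats, weight) is the assumed ESPnet-style return convention
        stats = {"loss": loss.detach()}
        weight = torch.tensor(speech.shape[0])  # batch size, used when averaging
        return loss, stats, weight

    def collect_feats(
        self, speech: torch.Tensor, speech_lengths: torch.Tensor, **kwargs
    ) -> Dict[str, torch.Tensor]:
        # hook used by the stats-collection stage; passthrough for simplicity
        return {"feats": speech, "feats_lengths": speech_lengths}
```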
 
 Next, we take `SANMEncoder` as an example to show how a custom encoder can be used as a component when defining a specific model; the corresponding code can be found in `funasr/models/encoder/sanm_encoder.py`. A custom encoder needs to inherit the common encoder class `AbsEncoder` and define a `forward` function that implements the forward computation of the encoder. After the encoder is defined, it also needs to be registered in the `Task`. A code example is given below:
 ```python

+ 1 - 1
docs_cn/build_task.md

@@ -102,7 +102,7 @@ def build_model(cls, args, train):
         )
     return model
 ```
-该函数定义了具体的模型。对于不同的语音识别模型,往往可以共用同一个语音识别`Task`,额外需要做的是在此函数中定义特定的模型。例如,这里给出的是一个标准的encoder-decoder结构的语音识别模型。具体地,先定义该模型的各个模块,包括encoder,decoder等,然后再将这些模块组合在一起得到一个完整的模型。在FunASR中,模型需要继承`AbsESPnetModel`,其具体代码见`funasr/train/abs_espnet_model.py`,主要需要实现的是`forward`函数。
+该函数定义了具体的模型。对于不同的语音识别模型,往往可以共用同一个语音识别`Task`,额外需要做的是在此函数中定义特定的模型。例如,这里给出的是一个标准的encoder-decoder结构的语音识别模型。具体地,先定义该模型的各个模块,包括encoder,decoder等,然后再将这些模块组合在一起得到一个完整的模型。在FunASR中,模型需要继承`FunASRModel`,其具体代码见`funasr/models/base_model.py`,主要需要实现的是`forward`函数。
 
 下面我们将以`SANMEncoder`为例,介绍如何在定义模型的时候,使用自定义的`encoder`来作为模型的组成部分,其具体的代码见`funasr/models/encoder/sanm_encoder.py`。对于自定义的`encoder`,除了需要继承通用的`encoder`类`AbsEncoder`外,还需要自定义`forward`函数,实现`encoder`的前向计算。在定义完`encoder`后,还需要在`Task`中对其进行注册,下面给出了相应的代码示例:
 ```python

+ 56 - 1
egs/aishell/conformer/run.sh

@@ -16,7 +16,6 @@ infer_cmd=utils/run.pl
 feats_dir="../DATA" #feature output dictionary
 exp_dir="."
 lang=zh
-dumpdir=dump/fbank
 feats_type=fbank
 token_type=char
 scp=wav.scp
@@ -143,4 +142,60 @@ if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
         } &
         done
         wait
+fi
+
+# Testing Stage
+if [ ${stage} -le 4 ] && [ ${stop_stage} -ge 4 ]; then
+    echo "stage 4: Inference"
+    for dset in ${test_sets}; do
+        asr_exp=${exp_dir}/exp/${model_dir}
+        inference_tag="$(basename "${inference_config}" .yaml)"
+        _dir="${asr_exp}/${inference_tag}/${inference_asr_model}/${dset}"
+        _logdir="${_dir}/logdir"
+        if [ -d ${_dir} ]; then
+            echo "${_dir} already exists. If you want to decode again, please delete this directory first."
+            exit 0
+        fi
+        mkdir -p "${_logdir}"
+        _data="${feats_dir}/data/${dset}"
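+        # split the test-set wav.scp into at most ${inference_nj} per-job key files for parallel decoding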
+        key_file=${_data}/${scp}
+        num_scp_file="$(<${key_file} wc -l)"
+        _nj=$([ $inference_nj -le $num_scp_file ] && echo "$inference_nj" || echo "$num_scp_file")
+        split_scps=
+        for n in $(seq "${_nj}"); do
+            split_scps+=" ${_logdir}/keys.${n}.scp"
+        done
+        # shellcheck disable=SC2086
+        utils/split_scp.pl "${key_file}" ${split_scps}
+        _opts=
+        if [ -n "${inference_config}" ]; then
+            _opts+="--config ${inference_config} "
+        fi
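+        # launch the decoding jobs in parallel; per-job logs and outputs are written under ${_logdir}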
+        ${infer_cmd} --gpu "${_ngpu}" --max-jobs-run "${_nj}" JOB=1:"${_nj}" "${_logdir}"/asr_inference.JOB.log \
+            python -m funasr.bin.asr_inference_launch \
+                --batch_size 1 \
+                --ngpu "${_ngpu}" \
+                --njob ${njob} \
+                --gpuid_list ${gpuid_list} \
+                --data_path_and_name_and_type "${_data}/${scp},speech,${type}" \
+                --key_file "${_logdir}"/keys.JOB.scp \
+                --asr_train_config "${asr_exp}"/config.yaml \
+                --asr_model_file "${asr_exp}"/"${inference_asr_model}" \
+                --output_dir "${_logdir}"/output.JOB \
+                --mode asr \
+                ${_opts}
+
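+        # merge the per-job outputs and score the character error rate (CER)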
+        for f in token token_int score text; do
+            if [ -f "${_logdir}/output.1/1best_recog/${f}" ]; then
+                for i in $(seq "${_nj}"); do
+                    cat "${_logdir}/output.${i}/1best_recog/${f}"
+                done | sort -k1 >"${_dir}/${f}"
+            fi
+        done
+        python utils/proce_text.py ${_dir}/text ${_dir}/text.proc
+        python utils/proce_text.py ${_data}/text ${_data}/text.proc
+        python utils/compute_wer.py ${_data}/text.proc ${_dir}/text.proc ${_dir}/text.cer
+        tail -n 3 ${_dir}/text.cer > ${_dir}/text.cer.txt
+        cat ${_dir}/text.cer.txt
+    done
 fi

+ 2 - 2
funasr/main_funcs/calculate_all_attentions.py

@@ -21,12 +21,12 @@ from funasr.modules.rnn.attentions import NoAtt
 from funasr.modules.attention import MultiHeadedAttention
 
 
-from funasr.train.abs_espnet_model import AbsESPnetModel
+from funasr.models.base_model import FunASRModel
 
 
 @torch.no_grad()
 def calculate_all_attentions(
-    model: AbsESPnetModel, batch: Dict[str, torch.Tensor]
+    model: FunASRModel, batch: Dict[str, torch.Tensor]
 ) -> Dict[str, List[torch.Tensor]]:
     """Derive the outputs from the all attention layers
 

+ 2 - 2
funasr/main_funcs/collect_stats.py

@@ -17,12 +17,12 @@ from funasr.fileio.datadir_writer import DatadirWriter
 from funasr.fileio.npy_scp import NpyScpWriter
 from funasr.torch_utils.device_funcs import to_device
 from funasr.torch_utils.forward_adaptor import ForwardAdaptor
-from funasr.train.abs_espnet_model import AbsESPnetModel
+from funasr.models.base_model import FunASRModel
 
 
 @torch.no_grad()
 def collect_stats(
-    model: AbsESPnetModel,
+    model: FunASRModel,
     train_iter: DataLoader and Iterable[Tuple[List[str], Dict[str, torch.Tensor]]],
     valid_iter: DataLoader and Iterable[Tuple[List[str], Dict[str, torch.Tensor]]],
     output_dir: Path,

+ 1 - 1
funasr/models/e2e_tp.py

@@ -29,7 +29,7 @@ else:
         yield
 
 
-class TimestampPredictor(AbsESPnetModel):
+class TimestampPredictor(FunASRModel):
     """
     Author: Speech Lab of DAMO Academy, Alibaba Group
     """

+ 1 - 1
funasr/tasks/diar.py

@@ -106,7 +106,7 @@ model_choices = ClassChoices(
         sond=DiarSondModel,
         eend_ola=DiarEENDOLAModel,
     ),
-    type_check=AbsESPnetModel,
+    type_check=FunASRModel,
     default="sond",
 )
 encoder_choices = ClassChoices(
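
For reference, a hedged sketch of what `type_check=FunASRModel` implies when registering a new model in this table; `MyDiarModel`, the `ClassChoices` import path, and the positional choice name are assumptions for illustration, not part of this commit:
```python
from funasr.models.base_model import FunASRModel
# ClassChoices import path assumed to mirror the ESPnet-style layout used by FunASR
from funasr.train.class_choices import ClassChoices


class MyDiarModel(FunASRModel):
    """Hypothetical model, present only to illustrate the type_check constraint."""

    def forward(self, speech, speech_lengths, **kwargs):
        ...

    def collect_feats(self, speech, speech_lengths, **kwargs):
        return {"feats": speech, "feats_lengths": speech_lengths}


# Only subclasses of FunASRModel pass type_check, so classes still deriving from
# the old AbsESPnetModel would need to be migrated before being registered here.
model_choices = ClassChoices(
    "model",  # assumed choice name
    classes=dict(my_diar=MyDiarModel),
    type_check=FunASRModel,
    default="my_diar",
)
```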