
Add custom prompt support for the CLI; update README

hellofinch, 1 year ago
Parent
Commit bc8dd24550
7 changed files with 156 additions and 36 deletions
  1. README.md (+28, -0)
  2. README_ja-JP.md (+30, -0)
  3. README_zh-CN.md (+29, -0)
  4. pdf2zh/converter.py (+3, -2)
  5. pdf2zh/high_level.py (+3, -2)
  6. pdf2zh/pdf2zh.py (+15, -1)
  7. pdf2zh/translator.py (+48, -31)

README.md (+28, -0)

@@ -173,6 +173,7 @@ In the following table, we list all advanced options for reference:
 | `-f`, `-c` | [Exceptions](#exceptions) | `pdf2zh example.pdf -f "(MS.*)"` |
 | `--share` | [Get gradio public link] | `pdf2zh -i --share` |
 | `-a` | [add authorization and custom login page] | `pdf2zh -i -a users.txt [auth.html]` |
+| `-pr` | [custom llm prompt] | `pdf2zh -pr [prompt.txt]` |

 <h3 id="partial">Full / partial document translation</h3>

@@ -252,7 +253,34 @@ Use `-t` to specify how many threads to use in translation:
 ```bash
 pdf2zh example.pdf -t 1
 ```
+<h3 id="prompt">Custom prompt</h3>
+Use `-pr` or `--prompt` to specify the prompt file used for LLM translation:
+```bash
+pdf2zh example.pdf -pr prompt.txt
+```
+
+
+Example `prompt.txt`:
+```
+[
+    {
+        "role": "system",
+        "content": "You are a professional,authentic machine translation engine.",
+    },
+    {
+        "role": "user",
+        "content": "Translate the following markdown source text to ${lang_out}. Keep the formula notation {{v*}} unchanged. Output translation directly without any additional text.\nSource Text: ${text}\nTranslated Text:",
+    },
+]
+```
+

+In the custom prompt file, three variables are available:
+|**Variable**|**Description**|
+|-|-|
+|`lang_in`|input language|
+|`lang_out`|output language|
+|`text`|text to be translated|
 <h2 id="todo">API</h2>

 ### Python

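The three variables are filled in with `string.Template` substitution: `pdf2zh/pdf2zh.py` wraps the file contents in a `Template`, and `pdf2zh/translator.py` calls `safe_substitute()` and evaluates the result as a Python literal (so trailing commas, as in the example above, are allowed). The sketch below is not part of this commit; it previews what a custom `prompt.txt` expands to, using hypothetical sample values for the three variables:

```python
# Sketch only (not in this commit): preview a custom prompt file the same way
# pdf2zh handles it -- wrap the file in string.Template, substitute the three
# documented variables, then evaluate the result as a Python literal.
from string import Template

with open("prompt.txt", encoding="utf-8") as f:  # hypothetical local file
    tmpl = Template(f.read())

context = {"lang_in": "en", "lang_out": "zh", "text": "Hello world"}  # sample values
messages = eval(tmpl.safe_substitute(context))  # the file must be a Python literal list of dicts
print(messages)
```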
README_ja-JP.md (+30, -0)

@@ -174,6 +174,7 @@ Python環境を事前にインストールする必要はありません
 | `-f`, `-c` | [例外](#exceptions) | `pdf2zh example.pdf -f "(MS.*)"` |
 | `--share` | [gradio公開リンクを取得] | `pdf2zh -i --share` |
 | `-a` | [ウェブ認証とカスタム認証ページの追加] | `pdf2zh -i -a users.txt [auth.html]` |
+| `-pr` | [カスタムビッグモデルのプロンプトを使用する] | `pdf2zh -pr [prompt.txt]` |

 <h3 id="partial">全文または部分的なドキュメント翻訳</h3>

@@ -254,6 +255,35 @@ pdf2zh example.pdf -f "(CM[^R]|(MS|XY|MT|BL|RM|EU|LA|RS)[A-Z]|LINE|LCIRCLE|TeX-|
 pdf2zh example.pdf -t 1
 ```

+<h3 id="prompt">Custom prompt</h3>
+(needs Japanese translation)
+Use `-pr` or `--prompt` to specify the prompt file used for LLM translation:
+```bash
+pdf2zh example.pdf -pr prompt.txt
+```
+
+
+Example `prompt.txt`:
+```
+[
+    {
+        "role": "system",
+        "content": "You are a professional,authentic machine translation engine.",
+    },
+    {
+        "role": "user",
+        "content": "Translate the following markdown source text to ${lang_out}. Keep the formula notation {{v*}} unchanged. Output translation directly without any additional text.\nSource Text: ${text}\nTranslated Text:",
+    },
+]
+```
+
+
+In the custom prompt file, three variables are available:
+|**Variable**|**Description**|
+|-|-|
+|`lang_in`|input language|
+|`lang_out`|output language|
+|`text`|text to be translated|
 <h2 id="todo">API</h2>

 ### Python

README_zh-CN.md (+29, -0)

@@ -173,6 +173,7 @@ USE_MODELSCOPE=1 pdf2zh
 | `-f`, `-c` | [例外规则](#exceptions) | `pdf2zh example.pdf -f "(MS.*)"` |
 | `--share` | [获取 gradio 公开链接] | `pdf2zh -i --share` |
 | `-a` | [添加网页认证和自定义认证页] | `pdf2zh -i -a users.txt [auth.html]` |
+| `-pr` | [使用自定义的大模型prompt] | `pdf2zh -pr [prompt.txt]` |

 <h3 id="partial">全文或部分文档翻译</h3>

@@ -252,6 +253,34 @@ pdf2zh example.pdf -f "(CM[^R]|(MS|XY|MT|BL|RM|EU|LA|RS)[A-Z]|LINE|LCIRCLE|TeX-|
 ```bash
 pdf2zh example.pdf -t 1
 ```
+<h3 id="prompt">自定义大模型prompt</h3>
+使用 `-pr` 或 `--prompt` 指定使用大模型翻译时使用的prompt文件。
+```bash
+pdf2zh example.pdf -pr prompt.txt
+```
+
+
+示例prompt.txt文件
+```
+[
+    {
+        "role": "system",
+        "content": "You are a professional,authentic machine translation engine.",
+    },
+    {
+        "role": "user",
+        "content": "Translate the following markdown source text to ${lang_out}. Keep the formula notation {{v*}} unchanged. Output translation directly without any additional text.\nSource Text: ${text}\nTranslated Text:",
+    },
+]
+```
+
+
+自定义prompt文件中,可以使用三个内置变量用来传递参数。
+|**变量名**|**说明**|
+|-|-|
+|`lang_in`|输入的语言|
+|`lang_out`|输出的语言|
+|`text`|需要翻译的文本|

 <h2 id="todo">API</h2>


pdf2zh/converter.py (+3, -2)

@@ -1,4 +1,4 @@
-from typing import Dict
+from typing import Dict,List

 from pdfminer.pdfinterp import PDFGraphicState, PDFResourceManager
 from pdfminer.pdffont import PDFCIDFont
@@ -136,6 +136,7 @@ class TranslateConverter(PDFConverterEx):
         resfont: str = "",
         noto: Font = None,
         envs: Dict = None,
+        prompt: List = None,
     ) -> None:
         super().__init__(rsrcmgr)
         self.vfont = vfont
@@ -151,7 +152,7 @@ class TranslateConverter(PDFConverterEx):
         for translator in [GoogleTranslator, BingTranslator, DeepLTranslator, DeepLXTranslator, OllamaTranslator, AzureOpenAITranslator,
                            OpenAITranslator, ZhipuTranslator, ModelScopeTranslator, SiliconTranslator, GeminiTranslator, AzureTranslator, TencentTranslator, DifyTranslator, AnythingLLMTranslator]:
             if service_name == translator.name:
-                self.translator = translator(lang_in, lang_out, service_model, envs=envs)
+                self.translator = translator(lang_in, lang_out, service_model, envs=envs,prompt=prompt)
         if not self.translator:
             raise ValueError("Unsupported translation service")


pdf2zh/high_level.py (+3, -2)

@@ -103,6 +103,7 @@ def translate_patch(
         resfont,
         noto,
         kwarg.get("envs", {}),
+        kwarg.get("prompt", []),
     )

     assert device is not None
@@ -226,7 +227,7 @@

     fp = io.BytesIO()
     doc_zh.save(fp)
-    obj_patch: dict = translate_patch(fp, envs=kwarg["envs"], **locals())
+    obj_patch: dict = translate_patch(fp, prompt=kwarg["prompt"], **locals())

     for obj_id, ops_new in obj_patch.items():
         # ops_old=doc_en.xref_stream(obj_id)
@@ -292,7 +293,7 @@

         doc_raw = open(file, "rb")
         s_raw = doc_raw.read()
-        s_mono, s_dual = translate_stream(s_raw, envs=kwarg.get('envs'), **locals())
+        s_mono, s_dual = translate_stream(s_raw, envs=kwarg.get('envs'), prompt=kwarg["prompt"], **locals())
         file_mono = Path(output) / f"{filename}-mono.pdf"
         file_dual = Path(output) / f"{filename}-dual.pdf"
         doc_mono = open(file_mono, "wb")

pdf2zh/pdf2zh.py (+15, -1)

@@ -11,6 +11,7 @@ import logging
 from typing import List, Optional
 from pdf2zh import __version__, log
 from pdf2zh.high_level import translate
+from string import Template


 def create_parser() -> argparse.ArgumentParser:
@@ -120,9 +121,14 @@ def create_parser() -> argparse.ArgumentParser:
         "-a",
         type=str,
         nargs="+",
-        default=["./users.txt", "./auth.html"],
         help="user name and password.",
     )
+    parse_params.add_argument(
+        "--prompt",
+        "-pr",
+        type=str,
+        help="user custom prompt.",
+    )

     return parser

@@ -169,6 +175,14 @@ def main(args: Optional[List[str]] = None) -> int:
         celery_app.start(argv=sys.argv[2:])
         return 0

+    if parsed_args.prompt:
+        try:
+            with open(parsed_args.prompt,'r',encoding='utf-8') as file:
+                content=file.read()
+            parsed_args.prompt=Template(content)
+        except Exception as e:
+            raise ValueError("prompt error.")
+
     translate(**vars(parsed_args))
     return 0


pdf2zh/translator.py (+48, -31)

@@ -17,7 +17,6 @@ from tencentcloud.tmt.v20180321.models import TextTranslateResponse

 import json

-
 def remove_control_characters(s):
     return "".join(ch for ch in s if unicodedata.category(ch)[0] != "C")

@@ -49,17 +48,25 @@ class BaseTranslator:
     def translate(self, text):
         pass

-    def prompt(self, text):
-        return [
-            {
-                "role": "system",
-                "content": "You are a professional,authentic machine translation engine.",
-            },
-            {
-                "role": "user",
-                "content": f"Translate the following markdown source text to {self.lang_out}. Keep the formula notation {{v*}} unchanged. Output translation directly without any additional text.\nSource Text: {text}\nTranslated Text:",  # noqa: E501
-            },
-        ]
+    def prompt(self, text, prompt):
+        if prompt:
+            context={
+                "lang_in":self.lang_in,
+                "lang_out":self.lang_out,
+                "text":text,
+            }
+            return eval(prompt.safe_substitute(context))
+        else:
+            return [
+                {
+                    "role": "system",
+                    "content": "You are a professional,authentic machine translation engine.",
+                },
+                {
+                    "role": "user",
+                    "content": f"Translate the following markdown source text to {self.lang_out}. Keep the formula notation {{v*}} unchanged. Output translation directly without any additional text.\nSource Text: {text}\nTranslated Text:",  # noqa: E501
+                },
+            ]
 
 
     def __str__(self):
     def __str__(self):
         return f"{self.name} {self.lang_in} {self.lang_out} {self.model}"
         return f"{self.name} {self.lang_in} {self.lang_out} {self.model}"
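A short usage sketch (not part of this commit) of the reworked `prompt(text, prompt)` method, assuming `BaseTranslator` can be exercised through a throwaway subclass as shown; `DummyTranslator` and the inline template are illustrative only:

```python
from string import Template
from pdf2zh.translator import BaseTranslator  # as modified in this diff

class DummyTranslator(BaseTranslator):  # hypothetical subclass, for demonstration only
    name = "dummy"

    def translate(self, text):
        return text

t = DummyTranslator("en", "zh", "dummy-model")

# prompt=None falls back to the built-in system/user message pair
print(t.prompt("Hello", None))

# a string.Template is substituted with lang_in/lang_out/text and eval'd into a message list
tmpl = Template('[{"role": "user", "content": "Translate to ${lang_out}: ${text}"}]')
print(t.prompt("Hello", tmpl))
```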
@@ -145,7 +152,7 @@ class DeepLTranslator(BaseTranslator):
     }
     lang_map = {"zh": "zh-Hans"}

-    def __init__(self, lang_in, lang_out, model, envs=None):
+    def __init__(self, lang_in, lang_out, model, envs=None, **kwargs):
         self.set_envs(envs)
         super().__init__(lang_in, lang_out, model)
         auth_key = self.envs["DEEPL_AUTH_KEY"]
@@ -166,7 +173,7 @@ class DeepLXTranslator(BaseTranslator):
     }
     lang_map = {"zh": "zh-Hans"}

-    def __init__(self, lang_in, lang_out, model, envs=None):
+    def __init__(self, lang_in, lang_out, model, envs=None, **kwargs):
         self.set_envs(envs)
         super().__init__(lang_in, lang_out, model)
         self.endpoint = self.envs["DEEPLX_ENDPOINT"]
@@ -193,19 +200,23 @@ class OllamaTranslator(BaseTranslator):
         "OLLAMA_MODEL": "gemma2",
     }

-    def __init__(self, lang_in, lang_out, model, envs=None):
+    def __init__(self, lang_in, lang_out, model, envs=None,prompt=None):
         self.set_envs(envs)
         if not model:
             model = self.envs["OLLAMA_MODEL"]
         super().__init__(lang_in, lang_out, model)
         self.options = {"temperature": 0}  # 随机采样可能会打断公式标记
         self.client = ollama.Client()
+        self.prompttext=prompt

     def translate(self, text):
+        print(len(self.prompt(text,self.prompttext)))
+        print(self.prompt(text,self.prompttext)[0])
+        print(self.prompt(text,self.prompttext)[1])
         response = self.client.chat(
             model=self.model,
             options=self.options,
-            messages=self.prompt(text),
+            messages=self.prompt(text,self.prompttext),
         )
         return response["message"]["content"].strip()

@@ -220,7 +231,7 @@ class OpenAITranslator(BaseTranslator):
     }

     def __init__(
-        self, lang_in, lang_out, model, base_url=None, api_key=None, envs=None
+        self, lang_in, lang_out, model, base_url=None, api_key=None, envs=None,prompt=None
     ):
         self.set_envs(envs)
         if not model:
@@ -228,12 +239,13 @@ class OpenAITranslator(BaseTranslator):
         super().__init__(lang_in, lang_out, model)
         self.options = {"temperature": 0}  # 随机采样可能会打断公式标记
         self.client = openai.OpenAI(base_url=base_url, api_key=api_key)
+        self.prompttext=prompt

     def translate(self, text) -> str:
         response = self.client.chat.completions.create(
             model=self.model,
             **self.options,
-            messages=self.prompt(text),
+            messages=self.prompt(text,self.prompttext),
         )
         return response.choices[0].message.content.strip()

@@ -247,7 +259,7 @@ class AzureOpenAITranslator(BaseTranslator):
     }

     def __init__(
-        self, lang_in, lang_out, model, base_url=None, api_key=None, envs=None
+        self, lang_in, lang_out, model, base_url=None, api_key=None, envs=None,prompt=None
     ):
         self.set_envs(envs)
         base_url = self.envs["AZURE_OPENAI_BASE_URL"]
@@ -261,12 +273,13 @@ class AzureOpenAITranslator(BaseTranslator):
             api_version="2024-06-01",
             api_key=api_key,
         )
+        self.prompttext=prompt

     def translate(self, text) -> str:
         response = self.client.chat.completions.create(
             model=self.model,
             **self.options,
-            messages=self.prompt(text),
+            messages=self.prompt(text,self.prompttext),
         )
         return response.choices[0].message.content.strip()

@@ -280,7 +293,7 @@ class ModelScopeTranslator(OpenAITranslator):
     }

     def __init__(
-        self, lang_in, lang_out, model, base_url=None, api_key=None, envs=None
+        self, lang_in, lang_out, model, base_url=None, api_key=None, envs=None,prompt=None
     ):
         self.set_envs(envs)
         base_url = "https://api-inference.modelscope.cn/v1"
@@ -288,6 +301,7 @@ class ModelScopeTranslator(OpenAITranslator):
         if not model:
             model = self.envs["MODELSCOPE_MODEL"]
         super().__init__(lang_in, lang_out, model, base_url=base_url, api_key=api_key)
+        self.prompttext=prompt


 class ZhipuTranslator(OpenAITranslator):
@@ -298,20 +312,21 @@ class ZhipuTranslator(OpenAITranslator):
         "ZHIPU_MODEL": "glm-4-flash",
     }

-    def __init__(self, lang_in, lang_out, model, envs=None):
+    def __init__(self, lang_in, lang_out, model, envs=None,prompt=None):
         self.set_envs(envs)
         base_url = "https://open.bigmodel.cn/api/paas/v4"
         api_key = self.envs["ZHIPU_API_KEY"]
         if not model:
             model = self.envs["ZHIPU_MODEL"]
         super().__init__(lang_in, lang_out, model, base_url=base_url, api_key=api_key)
+        self.prompttext=prompt

     def translate(self, text) -> str:
         try:
             response = self.client.chat.completions.create(
                 model=self.model,
                 **self.options,
-                messages=self.prompt(text),
+                messages=self.prompt(text,self.prompttext),
             )
         except openai.BadRequestError as e:
             if (
@@ -331,13 +346,14 @@ class SiliconTranslator(OpenAITranslator):
         "SILICON_MODEL": "Qwen/Qwen2.5-7B-Instruct",
     }

-    def __init__(self, lang_in, lang_out, model, envs=None):
+    def __init__(self, lang_in, lang_out, model, envs=None,prompt=None):
         self.set_envs(envs)
         base_url = "https://api.siliconflow.cn/v1"
         api_key = self.envs["SILICON_API_KEY"]
         if not model:
             model = self.envs["SILICON_MODEL"]
         super().__init__(lang_in, lang_out, model, base_url=base_url, api_key=api_key)
+        self.prompttext=prompt


 class GeminiTranslator(OpenAITranslator):
@@ -348,14 +364,14 @@ class GeminiTranslator(OpenAITranslator):
         "GEMINI_MODEL": "gemini-1.5-flash",
     }

-    def __init__(self, lang_in, lang_out, model, envs=None):
+    def __init__(self, lang_in, lang_out, model, envs=None,prompt=None):
         self.set_envs(envs)
         base_url = "https://generativelanguage.googleapis.com/v1beta/openai/"
         api_key = self.envs["GEMINI_API_KEY"]
         if not model:
             model = self.envs["GEMINI_MODEL"]
         super().__init__(lang_in, lang_out, model, base_url=base_url, api_key=api_key)
-
+        self.prompttext=prompt

 class AzureTranslator(BaseTranslator):
     # https://github.com/Azure/azure-sdk-for-python
@@ -366,7 +382,7 @@ class AzureTranslator(BaseTranslator):
     }
     lang_map = {"zh": "zh-Hans"}

-    def __init__(self, lang_in, lang_out, model, envs=None):
+    def __init__(self, lang_in, lang_out, model, envs=None, **kwargs):
         self.set_envs(envs)
         super().__init__(lang_in, lang_out, model)
         endpoint = self.envs["AZURE_ENDPOINT"]
@@ -397,7 +413,7 @@ class TencentTranslator(BaseTranslator):
         "TENCENTCLOUD_SECRET_KEY": None,
     }

-    def __init__(self, lang_in, lang_out, model, envs=None):
+    def __init__(self, lang_in, lang_out, model, envs=None, **kwargs):
         self.set_envs(envs)
         super().__init__(lang_in, lang_out, model)
         cred = credential.DefaultCredentialProvider().get_credential()
@@ -420,7 +436,7 @@ class AnythingLLMTranslator(BaseTranslator):
         "AnythingLLM_APIKEY": None,
     }

-    def __init__(self, lang_out, lang_in, model, envs=None):
+    def __init__(self, lang_out, lang_in, model, envs=None,prompt=None):
         self.set_envs(envs)
         super().__init__(lang_out, lang_in, model)
         self.api_url = self.envs["AnythingLLM_URL"]
@@ -430,9 +446,10 @@
             "Authorization": f"Bearer {self.api_key}",
             "Content-Type": "application/json",
         }
+        self.prompttext=prompt

     def translate(self, text):
-        messages = self.prompt(text)
+        messages = self.prompt(text,self.prompttext)
         payload = {
             "message": messages,
             "mode": "chat",
@@ -456,7 +473,7 @@ class DifyTranslator(BaseTranslator):
         "DIFY_API_KEY": None,  # 替换为实际 API 密钥
     }

-    def __init__(self, lang_out, lang_in, model, envs=None):
+    def __init__(self, lang_out, lang_in, model, envs=None, **kwargs):
         self.set_envs(envs)
         super().__init__(lang_out, lang_in, model)
         self.api_url = self.envs["DIFY_API_URL"]