
Remove ollama reference from documentation for now (#3959)

mamoodi 1 year ago
parent
commit
ef189d52a5

+ 2 - 2
docs/modules/usage/llms/groq.md

@@ -6,10 +6,10 @@ OpenHands uses LiteLLM to make calls to chat models on Groq. You can find their
 
 When running OpenHands, you'll need to set the following in the OpenHands UI through the Settings:
 * `LLM Provider` to `Groq`
-* `LLM Model` to the model you will be using. [Visit **here** to see the list of
+* `LLM Model` to the model you will be using. [Visit here to see the list of
 models that Groq hosts](https://console.groq.com/docs/models). If the model is not in the list, toggle
 `Advanced Options`, and enter it in `Custom Model` (i.e. groq/<model-name>)
-* `API key` to your Groq API key. To find or create your Groq API Key, [see **here**](https://console.groq.com/keys)
+* `API key` to your Groq API key. To find or create your Groq API Key, [see here](https://console.groq.com/keys)
 
 
 

+ 0 - 1
docs/modules/usage/llms/llms.md

@@ -54,7 +54,6 @@ We have a few guides for running OpenHands with specific model providers:
 * [Azure](llms/azure-llms)
 * [Google](llms/google-llms)
 * [Groq](llms/groq)
-* [ollama](llms/local-llms)
 * [OpenAI](llms/openai-llms)
 
 ### API retries and rate limits

+ 2 - 2
docs/modules/usage/llms/openai-llms.md

@@ -7,9 +7,9 @@ OpenHands uses LiteLLM to make calls to OpenAI's chat models. You can find their
 When running OpenHands, you'll need to set the following in the OpenHands UI through the Settings:
 * `LLM Provider` to `OpenAI`
 * `LLM Model` to the model you will be using.
-[Visit **here** to see a full list of OpenAI models that LiteLLM supports.](https://docs.litellm.ai/docs/providers/openai#openai-chat-completion-models)
+[Visit here to see a full list of OpenAI models that LiteLLM supports.](https://docs.litellm.ai/docs/providers/openai#openai-chat-completion-models)
 If the model is not in the list, toggle `Advanced Options`, and enter it in `Custom Model` (i.e. openai/<model-name>).
-* `API Key` to your OpenAI API key. To find or create your OpenAI Project API Key, [see **here**](https://platform.openai.com/api-keys).
+* `API Key` to your OpenAI API key. To find or create your OpenAI Project API Key, [see here](https://platform.openai.com/api-keys).
 
 ## Using OpenAI-Compatible Endpoints
 

+ 0 - 29
docs/modules/usage/troubleshooting/troubleshooting.md

@@ -17,7 +17,6 @@ Check out [Notes for WSL on Windows Users](troubleshooting/windows) for some tro
 ## Common Issues
 
 * [Unable to connect to Docker](#unable-to-connect-to-docker)
-* [Unable to connect to LLM](#unable-to-connect-to-llm)
 * [404 Resource not found](#404-resource-not-found)
 * [`make build` getting stuck on package installations](#make-build-getting-stuck-on-package-installations)
 * [Sessions are not restored](#sessions-are-not-restored)
@@ -47,33 +46,6 @@ OpenHands uses a Docker container to do its work safely, without potentially bre
 * If you are on a Mac, check the [permissions requirements](https://docs.docker.com/desktop/mac/permission-requirements/) and in particular consider enabling the `Allow the default Docker socket to be used` under `Settings > Advanced` in Docker Desktop.
 * In addition, upgrade your Docker to the latest version under `Check for Updates`
 
----
-### Unable to connect to LLM
-
-[GitHub Issue](https://github.com/All-Hands-AI/OpenHands/issues/1208)
-
-**Symptoms**
-
-```python
-  File "/app/.venv/lib/python3.12/site-packages/openai/_exceptions.py", line 81, in __init__
-    super().__init__(message, response.request, body=body)
-                              ^^^^^^^^^^^^^^^^
-AttributeError: 'NoneType' object has no attribute 'request'
-```
-
-**Details**
-
-[GitHub Issues](https://github.com/All-Hands-AI/OpenHands/issues?q=is%3Aissue+is%3Aopen+404)
-
-This usually happens with *local* LLM setups, when OpenHands can't connect to the LLM server.
-See our guide for [local LLMs](llms/local-llms) for more information.
-
-**Workarounds**
-
-* Check your `base_url` in your config.toml (if it exists) under the "llm" section
-* Check that ollama (or whatever LLM you're using) is running OK
-* Make sure you're using `--add-host host.docker.internal:host-gateway` when running in Docker
-
 ---
 ### `404 Resource not found`
 
@@ -115,7 +87,6 @@ the API endpoint you're trying to connect to. Most often this happens for Azure
   * If you're running inside the UI, be sure to set the `model` in the settings modal
   * If you're running headless (via main.py) be sure to set `LLM_MODEL` in your env/config
 * Make sure you've followed any special instructions for your LLM provider
-  * [ollama](/modules/usage/llms/local-llms)
   * [Azure](/modules/usage/llms/azure-llms)
   * [Google](/modules/usage/llms/google-llms)
 * Make sure your API key is correct

+ 0 - 5
docs/sidebars.ts

@@ -40,11 +40,6 @@ const sidebars: SidebarsConfig = {
               type: 'doc',
               label: 'Groq',
               id: 'usage/llms/groq',
-            },
-            {
-              type: 'doc',
-              label: 'Local/ollama',
-              id: 'usage/llms/local-llms',
             }
           ],
         },
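The `docs/sidebars.ts` hunk above drops the `Local/ollama` entry from the LLM providers list. A minimal sketch of the resulting Docusaurus-style items array (labels and ids are taken from the hunk; the surrounding category object and any sibling entries not shown in the diff are assumed):

```typescript
// Sketch of the providers list after this commit, not the full sidebars.ts.
// Item shape follows the Docusaurus doc-item convention visible in the hunk.
type DocItem = { type: 'doc'; label: string; id: string };

const llmProviderItems: DocItem[] = [
  { type: 'doc', label: 'Groq', id: 'usage/llms/groq' },
  // { type: 'doc', label: 'Local/ollama', id: 'usage/llms/local-llms' }, // removed by this commit
];

// Sanity check: the ollama entry is no longer present.
const hasOllama = llmProviderItems.some((i) => i.id === 'usage/llms/local-llms');
console.log(hasOllama); // false
```

Since Docusaurus builds the sidebar purely from this exported structure, deleting the object is enough to hide the page from navigation; the underlying `local-llms` doc file itself is untouched by this diff.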