
Update docs on LLM providers for consistency (#3738)

* Update docs on LLM providers for consistency

* Update headless command

* minor tweaks based on feedback

---------

Co-authored-by: Robert Brennan <contact@rbren.io>
Co-authored-by: Robert Brennan <accounts@rbren.io>
mamoodi, 1 year ago
Parent commit 60c5fd41ec

+ 10 - 7
docs/modules/usage/getting-started.md

@@ -2,16 +2,18 @@
 sidebar_position: 2
 ---

-# Getting Started
+# Getting Started

 ## System Requirements
+
 * Docker version 26.0.0+ or Docker Desktop 4.31.0+
 * You must be using Linux or Mac OS
   * If you are on Windows, you must use [WSL](https://learn.microsoft.com/en-us/windows/wsl/install)

 ## Installation
-The easiest way to run OpenHands is in Docker. Use `WORKSPACE_BASE` below to
-specify which folder the OpenHands agent should modify.
+
+The easiest way to run OpenHands is in Docker. You can change `WORKSPACE_BASE` below to point OpenHands to
+existing code that you'd like to modify.

 ```bash
 WORKSPACE_BASE=$(pwd)/workspace
@@ -32,19 +34,20 @@ You can also run OpenHands in a scriptable [headless mode](https://docs.all-hand
 or as an [interactive CLI](https://docs.all-hands.dev/modules/usage/how-to/cli-mode).

 ## Setup
+
 After running the command above, you'll find OpenHands running at [http://localhost:3000](http://localhost:3000).

 The agent will have access to the `./workspace` folder to do its work. You can copy existing code here, or change `WORKSPACE_BASE` in the
 command to point to an existing folder.

-Upon launching OpenHands, you'll see a settings modal. You must select an LLM backend using `Model`, and enter a corresponding `API Key`
+Upon launching OpenHands, you'll see a settings modal. You must select an LLM backend using `Model`, and enter a corresponding `API Key`.
 These can be changed at any time by selecting the `Settings` button (gear icon) in the UI.
-If the required `Model` does not exist in the list, you can manually enter it in the text box.
-
-![settings-modal](/img/settings-screenshot.png)
+If the required `Model` does not exist in the list, you can toggle `Use custom model` and manually enter it in the text box.

+<img src="/img/settings-screenshot.png" alt="settings-modal" width="340" />

 ## Versions
+
 The command above pulls the `0.9` tag, which represents the most recent stable release of OpenHands. You have other options as well:
 - For a specific release, use `ghcr.io/all-hands-ai/openhands:$VERSION`, replacing $VERSION with the version number.
 - We use semver, and release major, minor, and patch tags. So `0.9` will automatically point to the latest `0.9.x` release, and `0` will point to the latest `0.x.x` release.
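The installation command is cut off by the hunk above. As a hedged illustration only, a full invocation can be assembled from the flags shown in the example removed from `openai-llms.md` later in this same commit, with the image tag taken from the `0.9` note above; the exact flag set in the real docs may differ:

```shell
# Sketch of a full docker run command (flags assumed from elsewhere in this diff).
export WORKSPACE_BASE=$(pwd)/workspace

docker run -it \
    --pull=always \
    -e SANDBOX_USER_ID=$(id -u) \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
    -v $WORKSPACE_BASE:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app-$(date +%Y%m%d%H%M%S) \
    ghcr.io/all-hands-ai/openhands:0.9
```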

+ 5 - 16
docs/modules/usage/how-to/custom-sandbox-guide.md

@@ -1,30 +1,19 @@
 # Custom Sandbox

-The sandbox is where the agent does its work--instead of running commands directly on your computer
+The sandbox is where the agent does its work. Instead of running commands directly on your computer
 (which could be dangerous), the agent runs them inside of a Docker container.

-The default OpenHands sandbox comes with a
-[minimal ubuntu configuration](https://github.com/All-Hands-AI/OpenHands/blob/main/containers/sandbox/Dockerfile).
-Your use case may need additional software installed by default. In this case, you can build a custom sandbox image.
+The default OpenHands sandbox (`python-nodejs:python3.11-nodejs22`
+from [nikolaik/python-nodejs](https://hub.docker.com/r/nikolaik/python-nodejs)) comes with some packages installed,
+such as Python and Node.js, but your use case may need additional software installed by default.

 There are two ways you can do so:

-1. Use an existing image from docker hub. For instance, if you want to have `nodejs` installed, you can do so by using the `node:20` image
+1. Use an existing image from Docker Hub
 2. Creating your own custom docker image and using it

 If you want to take the first approach, you can skip the `Create Your Docker Image` section.

-For a more feature-rich environment, you might consider using pre-built images like **[nikolaik/python-nodejs](https://hub.docker.com/r/nikolaik/python-nodejs)**, which comes with both Python and Node.js pre-installed, along with many other useful tools and libraries, like:
-
-- Node.js: 22.x
-- npm: 10.x
-- yarn: stable
-- Python: latest
-- pip: latest
-- pipenv: latest
-- poetry: latest
-- uv: latest
-
 ## Setup

 Make sure you are able to run OpenHands using the [Development.md](https://github.com/All-Hands-AI/OpenHands/blob/main/Development.md) first.

+ 10 - 7
docs/modules/usage/llms/azure-llms.md

@@ -2,7 +2,7 @@

 ## Completion

-OpenHands uses LiteLLM for completion calls. You can find their documentation on Azure [here](https://docs.litellm.ai/docs/providers/azure)
+OpenHands uses LiteLLM for completion calls. You can find their documentation on Azure [here](https://docs.litellm.ai/docs/providers/azure).

 ### Azure openai configs

@@ -12,7 +12,7 @@ When running the OpenHands Docker image, you'll need to set the following enviro
 LLM_BASE_URL="<azure-api-base-url>"          # e.g. "https://openai-gpt-4-test-v-1.openai.azure.com/"
 LLM_API_KEY="<azure-api-key>"
 LLM_MODEL="azure/<your-gpt-deployment-name>"
-LLM_API_VERSION="<api-version>"          # e.g. "2024-02-15-preview"
+LLM_API_VERSION="<api-version>"              # e.g. "2024-02-15-preview"
 ```

 Example:
@@ -31,15 +31,18 @@ docker run -it \
 ghcr.io/all-hands-ai/openhands:main
 ```

-You can set the LLM_MODEL and LLM_API_KEY in the OpenHands UI itself.
+You can also set the model and API key in the OpenHands UI through the Settings.

 :::note
-You can find your ChatGPT deployment name on the deployments page in Azure. It could be the same with the chat model name (e.g. 'GPT4-1106-preview'), by default or initially set, but it doesn't have to be the same. Run openhands, and when you load it in the browser, go to Settings and set model as above: "azure/&lt;your-actual-gpt-deployment-name&gt;". If it's not in the list, enter your own text and save it.
+You can find your ChatGPT deployment name on the deployments page in Azure. By default it may be the same as the
+chat model name (e.g. 'GPT4-1106-preview'), but it doesn't have to be. Run OpenHands, and when you load it in the
+browser, go to Settings and set the model as above: "azure/&lt;your-actual-gpt-deployment-name&gt;".
+If it's not in the list, you can open the Settings modal, switch to "Custom Model", and enter your model name.
 :::

 ## Embeddings

-OpenHands uses llama-index for embeddings. You can find their documentation on Azure [here](https://docs.llamaindex.ai/en/stable/api_reference/embeddings/azure_openai/)
+OpenHands uses llama-index for embeddings. You can find their documentation on Azure [here](https://docs.llamaindex.ai/en/stable/api_reference/embeddings/azure_openai/).

 ### Azure openai configs

@@ -50,6 +53,6 @@ When running OpenHands in Docker, set the following environment variables using

 ```
 LLM_EMBEDDING_MODEL="azureopenai"
-LLM_EMBEDDING_DEPLOYMENT_NAME="<your-embedding-deployment-name>"        # e.g. "TextEmbedding...<etc>"
-LLM_API_VERSION="<api-version>"         # e.g. "2024-02-15-preview"
+LLM_EMBEDDING_DEPLOYMENT_NAME="<your-embedding-deployment-name>"   # e.g. "TextEmbedding...<etc>"
+LLM_API_VERSION="<api-version>"                                    # e.g. "2024-02-15-preview"
 ```
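For illustration, these embedding settings can be passed to the container with `-e`, the same way as the completion variables earlier on this page. This is only a sketch: the mounts, ports, and other flags from the completion example are omitted for brevity, and the API version value is the example one from above:

```shell
# Sketch: pass the Azure embedding settings as environment variables.
docker run -it \
    -e LLM_EMBEDDING_MODEL="azureopenai" \
    -e LLM_EMBEDDING_DEPLOYMENT_NAME="<your-embedding-deployment-name>" \
    -e LLM_API_VERSION="2024-02-15-preview" \
    ghcr.io/all-hands-ai/openhands:main
```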

+ 1 - 1
docs/modules/usage/llms/google-llms.md

@@ -2,7 +2,7 @@

 ## Completion

-OpenHands uses LiteLLM for completion calls. The following resources are relevant for using OpenHands with Google's LLMs
+OpenHands uses LiteLLM for completion calls. The following resources are relevant for using OpenHands with Google's LLMs:

 - [Gemini - Google AI Studio](https://docs.litellm.ai/docs/providers/gemini)
 - [VertexAI - Google Cloud Platform](https://docs.litellm.ai/docs/providers/vertex)

+ 5 - 5
docs/modules/usage/llms/llms.md

@@ -13,6 +13,11 @@ The following are verified by the community to work with OpenHands:
 * llama-3.1-405b / hermes-3-llama-3.1-405b
 * wizardlm-2-8x22b

+:::warning
+OpenHands will issue many prompts to the LLM you configure. Most of these LLMs cost money, so be sure to set spending
+limits and monitor usage.
+:::
+
 If you have successfully run OpenHands with specific LLMs not in the list, please add them to the verified list. We
 also encourage you to open a PR to share your setup process to help others using the same provider and LLM!

@@ -27,11 +32,6 @@ models driving it. However, if you do find ones that work, please add them to th

 ## LLM Configuration

-:::warning
-OpenHands will issue many prompts to the LLM you configure. Most of these LLMs cost money, so be sure to set spending
-limits and monitor usage.
-:::
-
 The `LLM_MODEL` environment variable controls which model is used in programmatic interactions.
 But when using the OpenHands UI, you'll need to choose your model in the settings window.


+ 1 - 1
docs/modules/usage/llms/local-llms.md

@@ -5,7 +5,7 @@ When using a Local LLM, OpenHands may have limited functionality.
 :::

 Ensure that you have the Ollama server up and running.
-For detailed startup instructions, refer to [here](https://github.com/ollama/ollama)
+For detailed startup instructions, refer to [here](https://github.com/ollama/ollama).
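As a quick sanity check before launching OpenHands, you can verify the server is reachable. This assumes Ollama's default port 11434 and its standard `/api/tags` endpoint, which lists installed models:

```shell
# Check that the Ollama server answers; adjust OLLAMA_HOST if you changed the port.
OLLAMA_HOST="${OLLAMA_HOST:-http://localhost:11434}"
curl -s "$OLLAMA_HOST/api/tags" || echo "Ollama is not reachable at $OLLAMA_HOST"
```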
 
 
 This guide assumes you've started ollama with `ollama serve`. If you're running ollama differently (e.g. inside docker), the instructions might need to be modified. Please note that if you're running WSL the default ollama configuration blocks requests from docker containers. See [here](#configuring-ollama-service-wsl-en).
 
 

+ 4 - 56
docs/modules/usage/llms/openai-llms.md

@@ -4,72 +4,20 @@ OpenHands uses [LiteLLM](https://www.litellm.ai/) to make calls to OpenAI's chat

 ## Configuration

-### Manual Configuration
-
-When running the OpenHands Docker image, you'll need to set the following environment variables:
-
-```sh
-LLM_MODEL="openai/<gpt-model-name>" # e.g. "openai/gpt-4o"
-LLM_API_KEY="<your-openai-project-api-key>"
-```
+When running the OpenHands Docker image, you'll need to choose a model and set your API key in the OpenHands UI through the Settings.

 To see a full list of OpenAI models that LiteLLM supports, please visit https://docs.litellm.ai/docs/providers/openai#openai-chat-completion-models.

 To find or create your OpenAI Project API Key, please visit https://platform.openai.com/api-keys.

-**Example**:
-
-```sh
-export WORKSPACE_BASE=$(pwd)/workspace
-
-docker run -it \
-    --pull=always \
-    -e SANDBOX_USER_ID=$(id -u) \
-    -e LLM_MODEL="openai/<gpt-model-name>" \
-    -e LLM_API_KEY="<your-openai-project-api-key>" \
-    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
-    -v $WORKSPACE_BASE:/opt/workspace_base \
-    -v /var/run/docker.sock:/var/run/docker.sock \
-    -p 3000:3000 \
-    --add-host host.docker.internal:host-gateway \
-    --name openhands-app-$(date +%Y%m%d%H%M%S) \
-    ghcr.io/opendevin/opendevin:0.8
-```
-
-### UI Configuration
-
-You can also directly set the `LLM_MODEL` and `LLM_API_KEY` in the OpenHands client itself. Follow this guide to get up and running with the OpenHands client.
-
-From there, you can set your model and API key in the settings window.
-
 ## Using OpenAI-Compatible Endpoints

 Just as for OpenAI Chat completions, we use LiteLLM for OpenAI-compatible endpoints. You can find their full documentation on this topic [here](https://docs.litellm.ai/docs/providers/openai_compatible).

-When running the OpenHands Docker image, you'll need to set the following environment variables:
+When running the OpenHands Docker image, you'll need to set the following environment variables using `-e`:

 ```sh
-LLM_BASE_URL="<api-base-url>" # e.g. "http://0.0.0.0:3000"
-LLM_MODEL="openai/<model-name>" # e.g. "openai/mistral"
-LLM_API_KEY="<your-api-key>"
+LLM_BASE_URL="<api-base-url>"   # e.g. "http://0.0.0.0:3000"
 ```

-**Example**:
-
-```sh
-export WORKSPACE_BASE=$(pwd)/workspace
-
-docker run -it \
-    --pull=always \
-    -e SANDBOX_USER_ID=$(id -u) \
-    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
-    -e LLM_BASE_URL="<api-base-url>" \
-    -e LLM_MODEL="openai/<model-name>" \
-    -e LLM_API_KEY="<your-api-key>" \
-    -v $WORKSPACE_BASE:/opt/workspace_base \
-    -v /var/run/docker.sock:/var/run/docker.sock \
-    -p 3000:3000 \
-    --add-host host.docker.internal:host-gateway \
-    --name openhands-app-$(date +%Y%m%d%H%M%S) \
-    ghcr.io/opendevin/opendevin:0.8
-```
+Then set your model and API key in the OpenHands UI through the Settings.
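As a sketch of what the new flow looks like, the base URL is passed to the container with `-e` while the model and key are entered in the UI. The port and host-gateway flags are assumed from the other examples in this commit, and the base URL value is a placeholder:

```shell
# Sketch only: point OpenHands at an OpenAI-compatible server, then choose the
# model and enter the API key in the UI Settings after the container starts.
docker run -it \
    -e LLM_BASE_URL="<api-base-url>" \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    ghcr.io/all-hands-ai/openhands:main
```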