@@ -12,9 +12,27 @@ When running the OpenDevin Docker image, you'll need to set the following enviro
LLM_BASE_URL="<azure-api-base-url>" # e.g. "https://openai-gpt-4-test-v-1.openai.azure.com/"
LLM_API_KEY="<azure-api-key>"
LLM_MODEL="azure/<your-gpt-deployment-name>"
-LLM_API_VERSION = "<api-version>" # e.g. "2024-02-15-preview"
+LLM_API_VERSION="<api-version>" # e.g. "2024-02-15-preview"
```

+Example:
+```bash
+docker run -it \
+--pull=always \
+-e SANDBOX_USER_ID=$(id -u) \
+-e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
+-e LLM_BASE_URL="x.openai.azure.com" \
+-e LLM_API_VERSION="2024-02-15-preview" \
+-v $WORKSPACE_BASE:/opt/workspace_base \
+-v /var/run/docker.sock:/var/run/docker.sock \
+-p 3000:3000 \
+--add-host host.docker.internal:host-gateway \
+--name opendevin-app-$(date +%Y%m%d%H%M%S) \
+ghcr.io/opendevin/opendevin
+```
+
+You can set the LLM_MODEL and LLM_API_KEY in the OpenDevin UI itself.
+
:::note
You can find your GPT deployment name on the deployments page in Azure. By default it may be the same as the chat model name (e.g. 'GPT4-1106-preview'), but it doesn't have to be. Run OpenDevin, and when it loads in the browser, go to Settings and set the model as above: "azure/<your-actual-gpt-deployment-name>". If it's not in the list, enter your own text and save it.
:::

@@ -32,6 +50,6 @@ When running OpenDevin in Docker, set the following environment variables using

```
LLM_EMBEDDING_MODEL="azureopenai"
-LLM_EMBEDDING_DEPLOYMENT_NAME = "<your-embedding-deployment-name>" # e.g. "TextEmbedding...<etc>"
-LLM_API_VERSION = "<api-version>" # e.g. "2024-02-15-preview"
+LLM_EMBEDDING_DEPLOYMENT_NAME="<your-embedding-deployment-name>" # e.g. "TextEmbedding...<etc>"
+LLM_API_VERSION="<api-version>" # e.g. "2024-02-15-preview"
```
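+
+For example, passing the embedding variables to the same `docker run` invocation shown earlier (a sketch; `<your-embedding-deployment-name>` is a placeholder you must replace with your own Azure deployment name):
+
+```bash
+docker run -it \
+--pull=always \
+-e SANDBOX_USER_ID=$(id -u) \
+-e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
+-e LLM_EMBEDDING_MODEL="azureopenai" \
+-e LLM_EMBEDDING_DEPLOYMENT_NAME="<your-embedding-deployment-name>" \
+-e LLM_API_VERSION="2024-02-15-preview" \
+-v $WORKSPACE_BASE:/opt/workspace_base \
+-v /var/run/docker.sock:/var/run/docker.sock \
+-p 3000:3000 \
+--add-host host.docker.internal:host-gateway \
+--name opendevin-app-$(date +%Y%m%d%H%M%S) \
+ghcr.io/opendevin/opendevin
+```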