@@ -1,42 +1,34 @@
# Local LLM Guide with Ollama server

-## 0. Install and Start ollama:
-run the following command in a conda env with CUDA etc.
+Ensure that you have the Ollama server up and running.
+For detailed startup instructions, refer to the [Ollama repository](https://github.com/ollama/ollama).

-Linux:
-```
-curl -fsSL https://ollama.com/install.sh | sh
-```
-Windows or macOS:
+This guide assumes you've started ollama with `ollama serve`. If you're running ollama differently (e.g., inside a Docker container), you may need to adjust these instructions.

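To double-check that the server is actually listening, you can probe its HTTP API. This is a quick sketch, not part of the patch above; it assumes the default address `http://localhost:11434`, which is where `ollama serve` listens unless `OLLAMA_HOST` is set:

```shell
# Probe the local Ollama server's model-list endpoint (sketch; default port assumed).
if curl -fsS --max-time 2 http://localhost:11434/api/tags >/dev/null 2>&1; then
  echo "Ollama server is up"
else
  echo "Ollama server is not reachable"
fi
```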
-- Download from [here](https://ollama.com/download/)
+## 1. Pull Models

-Then run:
-```bash
-ollama serve
-```
-
-## 1. Install Models:
Ollama model names can be found [here](https://ollama.com/library). For a small example, you can use
-the codellama:7b model. Bigger models will generally perform better.
+the `codellama:7b` model. Bigger models will generally perform better.

-```
+```bash
ollama pull codellama:7b
```

you can check which models you have downloaded like this:
-```
+
+```bash
~$ ollama list
NAME                             ID            SIZE    MODIFIED
-llama2:latest                    78e26419b446  3.8 GB  6 weeks ago
+codellama:7b                     8fdf8f752f6e  3.8 GB  6 weeks ago
mistral:7b-instruct-v0.2-q4_K_M  eb14864c7427  4.4 GB  2 weeks ago
starcoder2:latest                f67ae0f64584  1.7 GB  19 hours ago
```
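If you want to script this check rather than eyeball the list, one possibility (a sketch, assuming the `ollama` CLI is on your `PATH`) is to grep the output for the tag you pulled:

```shell
# Look for the example model in the local list (sketch; matches the NAME column).
if ollama list 2>/dev/null | grep -q "^codellama:7b"; then
  echo "codellama:7b is available"
else
  echo "codellama:7b not found; try: ollama pull codellama:7b"
fi
```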

-## 3. Start OpenDevin
+## 2. Start OpenDevin

Use the instructions in [README.md](/README.md) to start OpenDevin using Docker.
But when running `docker run`, you'll need to add a few more arguments:
+
```bash
--add-host host.docker.internal=host-gateway \
-e LLM_API_KEY="ollama" \
@@ -44,6 +36,7 @@ But when running `docker run`, you'll need to add a few more arguments:
```

For example:
+
```bash
# The directory you want OpenDevin to modify. MUST be an absolute path!
export WORKSPACE_DIR=$(pwd)/workspace
@@ -59,10 +52,12 @@ docker run \
ghcr.io/opendevin/opendevin:main
```

-You should now be able to connect to `http://localhost:3001/`
+You should now be able to connect to `http://localhost:3000/`
+
+## 3. Select your Model

-## 4. Select your Model
In the OpenDevin UI, click on the Settings wheel in the bottom-left corner.
-Then in the `Model` input, enter `ollama/codellama:7b`, or the name of the model you pulled earlier, and click Save.
+Then in the `Model` input, enter `ollama/codellama:7b`, or the name of the model you pulled earlier.
+If it doesn't show up in the dropdown, that's fine; just type it in. Click Save when you're done.

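The naming convention here is simply the Ollama tag prefixed with `ollama/`. A tiny illustration (the variable names are hypothetical, just for the sketch):

```shell
# Compose the model string OpenDevin expects from an Ollama tag.
MODEL_TAG="codellama:7b"          # any tag shown by `ollama list`
LLM_MODEL="ollama/${MODEL_TAG}"   # the value to enter in the Model input
echo "$LLM_MODEL"                 # prints: ollama/codellama:7b
```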

And now you're ready to go!