There are some error messages that get reported over and over by users. We'll try to make the install process easier and these error messages better in the future. But for now, you can look for your error message below and see if there are any workarounds.
For each of these error messages there is an existing issue. Please do not open a new issue--just comment there.
If you find more information or a workaround for one of these issues, please open a PR to add details to this file.
:::tip
If you're running on Windows and having trouble, check out our guide for Windows users.
:::
## Unable to connect to Docker

### Symptoms

```
Error creating controller. Please check Docker is running and visit `https://opendevin.github.io/OpenDevin/modules/usage/troubleshooting` for more debugging information.
```

```
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
```
### Details

OpenDevin uses a docker container to do its work safely, without potentially breaking your machine.
### Workarounds

* Run `docker ps` to ensure that docker is running
* Make sure you don't need `sudo` to run docker (see here)
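If it's not obvious which of these applies, a few shell checks can narrow it down. This is a minimal sketch assuming a default Linux install; paths and group setup may differ on macOS or Windows:

```bash
# Check that the Docker daemon is running and reachable from your user.
docker ps

# If that fails but `sudo docker ps` works, your user can't reach the
# Docker socket directly. On Linux, adding yourself to the docker group
# usually fixes this (log out and back in afterwards):
sudo usermod -aG docker $USER

# The Docker client talks to this socket by default on Linux:
ls -l /var/run/docker.sock
```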
## Unable to connect to SSH box

### Symptoms

```
self.shell = DockerSSHBox(
...
pexpect.pxssh.ExceptionPxssh: Could not establish connection to host
```
### Details

By default, OpenDevin connects to a running container using SSH. On some machines, especially Windows, this seems to fail.
### Workarounds

* Try adding `-e SANDBOX_TYPE=exec` to your `docker run` command to switch to the ExecBox docker container
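For example, if you start OpenDevin with `docker run`, the flag goes alongside your other `-e` options. Everything below except the `SANDBOX_TYPE` line is illustrative; keep whatever flags and image you already use:

```bash
# Hypothetical example: your existing docker run command, plus SANDBOX_TYPE.
docker run -it \
    -e SANDBOX_TYPE=exec \
    -e LLM_API_KEY \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    ghcr.io/opendevin/opendevin
```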
## Unable to connect to LLM

### Symptoms

```
  File "/app/.venv/lib/python3.12/site-packages/openai/_exceptions.py", line 81, in __init__
    super().__init__(message, response.request, body=body)
                              ^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'request'
```
### Details

This usually happens with local LLM setups, when OpenDevin can't connect to the LLM server. See our guide for local LLMs for more information.
### Workarounds

* Check that your `LLM_BASE_URL` is set correctly
* Make sure you're using `--add-host host.docker.internal:host-gateway` when running in docker
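As a sketch, a `docker run` pointed at an LLM server on the host machine might look like the following. The port is Ollama's default and the image tag is illustrative; substitute your own values:

```bash
# --add-host makes host.docker.internal resolve to the host machine, so an
# LLM server running on the host (e.g. Ollama on its default port 11434)
# is reachable from inside the container.
docker run -it \
    --add-host host.docker.internal:host-gateway \
    -e LLM_BASE_URL="http://host.docker.internal:11434" \
    -p 3000:3000 \
    ghcr.io/opendevin/opendevin
```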
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai.py", line 414, in completion
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai.py", line 373, in completion
response = openai_client.chat.completions.create(**data, timeout=timeout) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/openai/_utils/_utils.py", line 277, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/openai/resources/chat/completions.py", line 579, in create
return self._post(
^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1232, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 921, in request
return self._request(
^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1012, in _request
raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}
### Details

This happens when LiteLLM (our library for connecting to different LLM providers) can't find the API you're trying to connect to. Most often this happens for Azure or ollama users.
### Workarounds

* Check that you've set `LLM_BASE_URL` properly
* Check that the model is set properly, either in the settings modal or via `LLM_MODEL` in your env/config
* Make sure the endpoint actually responds, e.g. by testing it with `curl` as sketched below
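For example, a couple of `curl` checks can tell you whether the endpoint exists at all, independent of OpenDevin. The URLs below are placeholders for your own `LLM_BASE_URL`:

```bash
# A 404 from either of these means the endpoint itself is wrong,
# regardless of how OpenDevin is configured.
curl -i "$LLM_BASE_URL"

# For ollama, the default server lists its installed models here:
curl -i http://localhost:11434/api/tags
```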