OpenHands uses LiteLLM to make calls to OpenAI's chat models. You can find their full documentation on OpenAI chat calls here.
When running OpenHands, you'll need to set the following in the OpenHands UI through the Settings:
- LLM Provider to OpenAI
- LLM Model to the model you will be using. Visit here to see a full list of OpenAI models that LiteLLM supports. If the model is not in the list, toggle Advanced Options and enter it in Custom Model (e.g. openai/<model-name>).
- API Key to your OpenAI API key. To find or create your OpenAI Project API Key, see here.

Just as for OpenAI chat completions, we use LiteLLM for OpenAI-compatible endpoints. You can find their full documentation on this topic here.
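The openai/<model-name> form works because LiteLLM picks the provider from the prefix before the first slash in the model string and passes the remainder through as the model name. A minimal illustration of that convention (split_custom_model is a hypothetical helper written for this example, not part of LiteLLM's API):

```python
# Illustration only: LiteLLM-style model strings carry the provider as a
# prefix, e.g. "openai/gpt-4o" names the "openai" provider and model "gpt-4o".
# split_custom_model is a hypothetical helper, not part of LiteLLM's API.
def split_custom_model(custom_model: str) -> tuple[str, str]:
    """Split a Custom Model string into (provider, model-name)."""
    provider, _, model = custom_model.partition("/")
    return provider, model

print(split_custom_model("openai/gpt-4o"))           # ('openai', 'gpt-4o')
print(split_custom_model("openai/my-proxy/gpt-4o"))  # ('openai', 'my-proxy/gpt-4o')
```

Only the first slash matters, which is why proxy-prefixed names like openai/<proxy-prefix>/<model-name> still route through the OpenAI provider.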
If you're using an OpenAI proxy, you'll need to set the following in the OpenHands UI through the Settings:
- Enable Advanced Options
- Custom Model to openai/<model-name> (e.g. openai/gpt-4o or openai/<proxy-prefix>/<model-name>)
- Base URL to the URL of your OpenAI proxy
- API Key to your OpenAI API key
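If you run OpenHands headless or from the CLI rather than through the UI, the same proxy settings can be supplied in a config file instead. A sketch, assuming the standard [llm] section of OpenHands' config.toml (all values below are placeholders to replace with your own):

```toml
[llm]
# Placeholder values - substitute your proxy URL, model, and key.
model = "openai/gpt-4o"
base_url = "https://your-proxy.example.com/v1"
api_key = "sk-..."
```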