

OpenDevin: Code Less, Make More

🗂️ Table of Contents
  1. 🎯 Mission
  2. 🤔 What is Devin?
  3. 🐚 Why OpenDevin?
  4. 🚧 Project Status
  5. 🚀 Get Started
  6. ⭐️ Research Strategy
  7. 🤝 How to Contribute
  8. 🤖 Join Our Community
  9. 🛠️ Built With
  10. 📜 License

    🎯 Mission

    Project Demo Video

    Welcome to OpenDevin, an open-source project aiming to replicate Devin, an autonomous AI software engineer capable of executing complex engineering tasks and collaborating actively with users on software development projects. Through the power of the open-source community, this project aspires to replicate, enhance, and innovate upon Devin.

    ↑ Back to Top ↑

    🤔 What is Devin?

    Devin represents a cutting-edge autonomous agent designed to navigate the complexities of software engineering. It leverages a combination of tools such as a shell, code editor, and web browser, showcasing the untapped potential of LLMs in software development. Our goal is to explore and expand upon Devin's capabilities, identifying both its strengths and areas for improvement, to guide the progress of open code models.

    ↑ Back to Top ↑

    🐚 Why OpenDevin?

    The OpenDevin project is born out of a desire to replicate, enhance, and innovate beyond the original Devin model. By engaging the open-source community, we aim to tackle the challenges faced by Code LLMs in practical scenarios, producing works that significantly contribute to the community and pave the way for future advancements.

    ↑ Back to Top ↑

    🚧 Project Status

    OpenDevin is currently a work in progress, but you can already run the alpha version to see the end-to-end system in action. The project team is actively working on the following key milestones:

    • UI: Developing a user-friendly interface, including a chat interface, a shell demonstrating commands, and a web browser.
    • Architecture: Building a stable agent framework with a robust backend that can read, write, and run simple commands.
    • Agent Capabilities: Enhancing the agent's abilities to generate bash scripts, run tests, and perform other software engineering tasks.
    • Evaluation: Establishing a minimal evaluation pipeline that is consistent with Devin's evaluation criteria.

    After completing the MVP, the team will focus on research in various areas, including foundation models, specialist capabilities, evaluation, and agent studies.

    ↑ Back to Top ↑

    ⚠️ Caveats and Warnings

    • OpenDevin is still an alpha project. It is changing very quickly and is unstable. We are working on getting a stable release out in the coming weeks.
    • OpenDevin will issue many prompts to the LLM you configure. Most of these LLMs cost money--be sure to set spending limits and monitor usage.
    • OpenDevin runs bash commands within a Docker sandbox, so it should not affect your machine. But your workspace directory will be attached to that sandbox, and files in the directory may be modified or deleted.
    • Our default Agent is currently the MonologueAgent, which has limited capabilities, but is fairly stable. We're working on other Agent implementations, including SWE Agent. You can read about our current set of agents here.
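    Given the workspace caveat above, one way to limit risk is to point OpenDevin at a dedicated scratch directory rather than mounting a real project. The path below is just an example:

    ```shell
    # Create a dedicated scratch directory for OpenDevin to work in,
    # so it cannot modify or delete files in your real projects.
    mkdir -p "$HOME/opendevin-workspace"
    export WORKSPACE_BASE="$HOME/opendevin-workspace"
    ```

    You can then copy only the files you want the agent to touch into that directory before starting the container.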

    🚀 Get Started

    The easiest way to run OpenDevin is inside a Docker container.

    To start the app, run these commands, replacing $(pwd)/workspace with the path to the code you want OpenDevin to work with.

    # Your OpenAI API key, or any other LLM API key
    export LLM_API_KEY="sk-..."
    
    # The directory you want OpenDevin to modify. MUST be an absolute path!
    export WORKSPACE_BASE=$(pwd)/workspace
    
    docker run \
        -e LLM_API_KEY \
        -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
        -v $WORKSPACE_BASE:/opt/workspace_base \
        -v /var/run/docker.sock:/var/run/docker.sock \
        -p 3000:3000 \
        --add-host host.docker.internal=host-gateway \
        ghcr.io/opendevin/opendevin:0.3.1
    

    You'll find OpenDevin running at http://localhost:3000.

    If you want to use the (unstable!) bleeding edge, you can use ghcr.io/opendevin/opendevin:main as the image.

    See Development.md for instructions on running OpenDevin without Docker.

    Having trouble? Check out our Troubleshooting Guide.

    🤖 LLM Backends

    OpenDevin can work with any LLM backend. For a full list of the LLM providers and models available, please consult the litellm documentation.

    The LLM_MODEL environment variable controls which model is used in programmatic interactions. But when using the OpenDevin UI, you'll need to choose your model in the settings window (the gear wheel on the bottom left).

    The following environment variables might be necessary for some LLMs:

    • LLM_API_KEY
    • LLM_BASE_URL
    • LLM_EMBEDDING_MODEL
    • LLM_EMBEDDING_DEPLOYMENT_NAME
    • LLM_API_VERSION
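    As a concrete illustration, these variables can be exported before running the docker command above (and passed through with matching `-e` flags). The endpoint, key, and version values here are hypothetical placeholders, not defaults:

    ```shell
    # Hypothetical values for illustration only -- substitute your provider's own.
    export LLM_API_KEY="sk-..."                                # your API key
    export LLM_BASE_URL="http://host.docker.internal:8000/v1"  # e.g. a local OpenAI-compatible server
    export LLM_API_VERSION="2024-02-01"                        # some providers (e.g. Azure) require this
    ```

    Which of these are required depends entirely on the provider; many need only `LLM_API_KEY`.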

    We have a few guides for running OpenDevin with specific model providers. If you're using another provider, we encourage you to open a PR to share your setup!

    Note on Alternative Models: The best models are GPT-4 and Claude 3. Current local and open source models are not nearly as powerful. When using an alternative model, you may see long wait times between messages, poor responses, or errors about malformed JSON. OpenDevin can only be as powerful as the models driving it--fortunately folks on our team are actively working on building better open source models!

    Note on API retries and rate limits: Some LLMs have rate limits and may require retries. OpenDevin will automatically retry requests if it receives a 429 error or an API connection error. You can set the LLM_NUM_RETRIES, LLM_RETRY_MIN_WAIT, and LLM_RETRY_MAX_WAIT environment variables to control the number of retries and the time between them. By default, LLM_NUM_RETRIES is 5, LLM_RETRY_MIN_WAIT is 3 seconds, and LLM_RETRY_MAX_WAIT is 60 seconds.
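    For instance, a heavily rate-limited provider could be given more retries and longer backoff. The variable names come from the note above; the values are illustrative:

    ```shell
    # Retry up to 8 times, waiting between 5 and 120 seconds between attempts
    # (illustrative values; the defaults are 5 retries, 3s min, 60s max).
    export LLM_NUM_RETRIES=8
    export LLM_RETRY_MIN_WAIT=5
    export LLM_RETRY_MAX_WAIT=120
    ```

    Like the other LLM_* variables, these must also be passed into the container with `-e` flags for the docker invocation above.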

    ⭐️ Research Strategy

    Achieving full replication of production-grade applications with LLMs is a complex endeavor. Our strategy involves:

    1. Core Technical Research: Focusing on foundational research to understand and improve the technical aspects of code generation and handling.
    2. Specialist Abilities: Enhancing the effectiveness of core components through data curation, training methods, and more.
    3. Task Planning: Developing capabilities for bug detection, codebase management, and optimization.
    4. Evaluation: Establishing comprehensive evaluation metrics to better understand and improve our models.

    ↑ Back to Top ↑

    🤝 How to Contribute

    OpenDevin is a community-driven project, and we welcome contributions from everyone. Whether you're a developer, a researcher, or simply enthusiastic about advancing the field of software engineering with AI, there are many ways to get involved:

    • Code Contributions: Help us develop the core functionalities, frontend interface, or sandboxing solutions.
    • Research and Evaluation: Contribute to our understanding of LLMs in software engineering, participate in evaluating the models, or suggest improvements.
    • Feedback and Testing: Use the OpenDevin toolset, report bugs, suggest features, or provide feedback on usability.

    For details, please check this document.

    ↑ Back to Top ↑

    🤖 Join Our Community

    We now have both a Slack workspace for collaborating on building OpenDevin and a Discord server for discussing anything related, e.g., this project, LLMs, agents, etc.

    If you would like to contribute, feel free to join our community (note that there is now no need to fill in a form). Let's simplify software engineering together!

    🐚 Code less, make more with OpenDevin.

    Star History Chart

    🛠️ Built With

    OpenDevin is built using a combination of powerful frameworks and libraries, providing a robust foundation for its development. Here are the key technologies used in the project:

    FastAPI, uvicorn, LiteLLM, Docker, Ruff, MyPy, LlamaIndex, React

    Please note that the selection of these technologies is in progress, and additional technologies may be added or existing ones may be removed as the project evolves. We strive to adopt the most suitable and efficient tools to enhance the capabilities of OpenDevin.

    ↑ Back to Top ↑

    📜 License

    Distributed under the MIT License. See LICENSE for more information.

    ↑ Back to Top ↑