# Agent Framework Research

This folder can contain multiple implementations of `Agent`, which are used by the framework.

For example, `agenthub/monologue_agent`, `agenthub/metagpt_agent`, `agenthub/codeact_agent`, etc. Contributors from different backgrounds and with different interests can choose to contribute to any (or all!) of these directions.

## Constructing an Agent

The abstraction for an agent can be found here.

Agents are run inside a loop. At each iteration, `agent.step()` is called with a `State` input, and the agent must output an `Action`.
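Conceptually, the loop looks something like the sketch below. This is simplified pseudocode, not the actual controller implementation; `runtime.run` and `state.update` are placeholder names.

```python
# Simplified sketch of the agent loop; names other than agent.step are placeholders.
while not done:
    action = agent.step(state)          # the agent decides what to do next
    observation = runtime.run(action)   # the framework executes the action
    state.update(action, observation)   # the history grows for the next step
```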

Every agent also has `self.llm`, which it can use to interact with the LLM configured by the user. See the LiteLLM docs for the `self.llm.completion` interface.
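For example, inside `step()` an agent might query the LLM like this (a minimal sketch; `self.llm.completion` follows the LiteLLM/OpenAI chat-completions interface, and the message contents are placeholders):

```python
# Minimal sketch of calling the configured LLM from inside an agent.
messages = [
    {"role": "system", "content": "You are a helpful coding agent."},
    {"role": "user", "content": "List the files in the current directory."},
]
response = self.llm.completion(messages=messages)
raw_reply = response["choices"][0]["message"]["content"]
```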

## State

The state contains:

- A history of actions taken by the agent, as well as any observations (e.g. file content, command output) from those actions
- A list of actions/observations that have happened since the most recent step
- A `root_task`, which contains a plan of action
  - The agent can add and modify subtasks through the `AddTaskAction` and `ModifyTaskAction` (see the sketch after this list)
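For instance, inside `step()` an agent could read the recent history and extend the plan. This is only a sketch: the import path and the `AddTaskAction` fields shown here are assumptions and may differ in your checkout.

```python
# Sketch: inspect past actions/observations and extend the plan.
# The import path and AddTaskAction fields are assumptions.
from opendevin.events.action import AddTaskAction

def step(self, state: "State") -> "Action":
    # Each history entry pairs an action with the observation it produced.
    for action, observation in state.history:
        ...  # e.g. look for a failed command and decide on a follow-up

    # Add a new subtask under the root task.
    return AddTaskAction(parent="0", goal="reproduce the reported bug")
```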

## Actions

Here is a list of available Actions, which can be returned by `agent.step()`:

You can use `action.to_dict()` and `action_from_dict` to serialize and deserialize actions.
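For example, a round trip might look like this (a sketch; the import paths and the `CmdRunAction` class shown here are assumptions that may differ between versions):

```python
# Round-trip an action through its dict form (e.g. for logging or replay).
# Import paths are assumptions and may differ between versions.
from opendevin.events.action import CmdRunAction
from opendevin.events.serialization.action import action_from_dict

action = CmdRunAction(command="ls -la")
as_dict = action.to_dict()             # plain dict, safe to store as JSON
restored = action_from_dict(as_dict)   # back to a CmdRunAction instance
```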

## Observations

There are also several types of Observations. These are typically available in the step following the corresponding Action. But they may also appear as a result of asynchronous events (e.g. a message from the user, logs from a command running in the background).

Here is a list of available Observations:

You can use `observation.to_dict()` and `observation_from_dict` to serialize and deserialize observations.
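The same round trip works for observations (again a sketch; the class name and import paths are assumptions):

```python
# Round-trip an observation through its dict form.
# The class name and import paths are assumptions.
from opendevin.events.observation import CmdOutputObservation
from opendevin.events.serialization.observation import observation_from_dict

obs = CmdOutputObservation(command_id=1, command="ls", content="file.txt\n")
restored = observation_from_dict(obs.to_dict())
```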

## Interface

Every agent must implement the following methods:

### step

```python
def step(self, state: "State") -> "Action":
```

`step` moves the agent forward one step towards its goal. This probably means sending a prompt to the LLM, then parsing the response into an `Action`.
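A minimal `step()` might look like the sketch below. The prompt construction and response parsing are placeholders, and the action imports are assumptions; real agents build richer prompts and parse the reply more carefully.

```python
# Sketch of a minimal step(): build a prompt from the history,
# ask the LLM, and turn its reply into an Action.
# Action class imports are assumptions.
from opendevin.events.action import AgentFinishAction, CmdRunAction

def step(self, state: "State") -> "Action":
    messages = [{"role": "system", "content": "Reply with one shell command, or DONE."}]
    for action, observation in state.history:
        ...  # append prior actions/observations as chat messages

    reply = self.llm.completion(messages=messages)["choices"][0]["message"]["content"]
    if reply.strip() == "DONE":
        return AgentFinishAction()
    return CmdRunAction(command=reply.strip())
```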

### search_memory

```python
def search_memory(self, query: str) -> list[str]:
```

`search_memory` should return a list of events that match the query. This will be used for the recall action.

You can optionally just return `[]` for this method, meaning the agent has no long-term memory.
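For an agent with no long-term memory, that can be as simple as:

```python
def search_memory(self, query: str) -> list[str]:
    # No long-term memory: never return any matches.
    return []
```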