This folder may contain multiple implementations of Agent that can be used by the framework,
for example `agenthub/monologue_agent`, `agenthub/metagpt_agent`, `agenthub/codeact_agent`, etc.
Contributors from different backgrounds and interests can choose to contribute to any (or all!) of these directions.
The abstraction for an agent can be found here.
Agents are run inside a loop. At each iteration, `agent.step()` is called with a
`State` input, and the agent must output an `Action`.
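The control loop described above can be sketched as follows. This is a hypothetical, self-contained sketch: the class names mirror those in this document, but the `State` fields, `run()` helper, and toy agent are illustrative assumptions, not the framework's actual implementation.

```python
# Hypothetical sketch of the agent control loop; real classes in the
# framework have more fields and behavior than shown here.
class State:
    def __init__(self, goal):
        self.goal = goal
        self.history = []  # (action, observation) pairs so far


class AgentFinishAction:
    """Signals that the control loop should stop."""


class AgentThinkAction:
    def __init__(self, thought):
        self.thought = thought


class EchoAgent:
    """Toy agent: thinks once about its goal, then finishes."""

    def step(self, state):
        if not state.history:
            return AgentThinkAction(f"My goal is: {state.goal}")
        return AgentFinishAction()


def run(agent, state, max_iterations=10):
    # Each iteration calls agent.step() with the current State and
    # receives an Action back, exactly as described above.
    for _ in range(max_iterations):
        action = agent.step(state)
        if isinstance(action, AgentFinishAction):
            break
        state.history.append((action, None))  # observation arrives later
    return state


state = run(EchoAgent(), State("write a README"))
```

In a real run, the framework would execute each returned Action and record the resulting Observation into the history before the next `step()` call.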
Every agent also has a `self.llm` which it can use to interact with the LLM configured by the user.
See the LiteLLM docs for `self.llm.completion`.
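A typical use of `self.llm.completion` might look like the sketch below. The message format follows the OpenAI chat convention that LiteLLM accepts; `StubLLM` is a stand-in for the user-configured model so the example runs offline, and `MyAgent.ask` is a hypothetical helper, not part of the framework's API.

```python
# Hypothetical sketch: how an agent could call self.llm.completion.
class StubLLM:
    def completion(self, messages):
        # A real LLM wrapper would forward to litellm.completion(
        #     model=..., messages=messages) and return its response.
        return {"choices": [{"message": {"content": "run: ls"}}]}


class MyAgent:
    def __init__(self, llm):
        self.llm = llm

    def ask(self, prompt):
        # Messages use the chat format: a list of role/content dicts.
        messages = [{"role": "user", "content": prompt}]
        resp = self.llm.completion(messages=messages)
        return resp["choices"][0]["message"]["content"]


agent = MyAgent(StubLLM())
reply = agent.ask("What should I do next?")
```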
The state contains:

- `plan`, which contains the main goal; the plan can be modified via `AddTaskAction` and `ModifyTaskAction`

Here is a list of available Actions, which can be returned by `agent.step()`:
- `CmdRunAction` - Runs a command inside a sandboxed terminal
- `CmdKillAction` - Kills a background command
- `FileReadAction` - Reads the content of a file
- `FileWriteAction` - Writes new content to a file
- `BrowseURLAction` - Gets the content of a URL
- `AgentRecallAction` - Searches memory (e.g. a vector database)
- `AddTaskAction` - Adds a subtask to the plan
- `ModifyTaskAction` - Changes the state of a subtask
- `AgentThinkAction` - A no-op that allows the agent to add plaintext to the history (as well as the chat log)
- `AgentFinishAction` - Stops the control loop, allowing the user to enter a new task

You can use `action.to_dict()` and `action_from_dict` to serialize and deserialize actions.
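The serialization round-trip mentioned above could be sketched as follows. This is a minimal hypothetical version: the actual `to_dict` / `action_from_dict` helpers in the repo may use different field names and cover every action type.

```python
from dataclasses import dataclass, asdict

# Hypothetical sketch of action (de)serialization for one action type.
@dataclass
class CmdRunAction:
    command: str

    def to_dict(self):
        # Tag the dict with an action name so it can be routed back
        # to the right class on deserialization.
        return {"action": "run", "args": asdict(self)}


# Registry mapping action names to classes (only one shown here).
ACTION_TYPES = {"run": CmdRunAction}


def action_from_dict(d):
    cls = ACTION_TYPES[d["action"]]
    return cls(**d["args"])


action = CmdRunAction(command="ls -la")
roundtrip = action_from_dict(action.to_dict())
```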
There are also several types of Observations. These are typically available in the step following the corresponding Action. But they may also appear as a result of asynchronous events (e.g. a message from the user, logs from a command running in the background).
Here is a list of available Observations:
- `CmdOutputObservation`
- `BrowserOutputObservation`
- `FileReadObservation`
- `FileWriteObservation`
- `UserMessageObservation`
- `AgentRecallObservation`
- `AgentErrorObservation`

You can use `observation.to_dict()` and `observation_from_dict` to serialize and deserialize observations.
Every agent must implement the following methods:
`step`

```python
def step(self, state: "State") -> "Action"
```

`step` moves the agent forward one step towards its goal. This typically means
sending a prompt to the LLM, then parsing the response into an Action.
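A minimal `step` implementation might look like the sketch below. The prompt format and the `run:` parsing convention are illustrative assumptions, not the framework's actual protocol, and `StubLLM` stands in for the configured model.

```python
# Hypothetical sketch of a step() method: prompt the LLM, then parse
# the reply into an Action.
class CmdRunAction:
    def __init__(self, command):
        self.command = command


class AgentThinkAction:
    def __init__(self, thought):
        self.thought = thought


class ParserAgent:
    def __init__(self, llm):
        self.llm = llm

    def step(self, state):
        resp = self.llm.completion(
            messages=[{"role": "user", "content": f"Goal: {state['goal']}"}]
        )
        text = resp["choices"][0]["message"]["content"]
        # Convention assumed here: replies starting with "run:" become
        # shell commands; anything else becomes a thought.
        if text.startswith("run:"):
            return CmdRunAction(command=text[len("run:"):].strip())
        return AgentThinkAction(thought=text)


class StubLLM:
    def completion(self, messages):
        return {"choices": [{"message": {"content": "run: pytest"}}]}


action = ParserAgent(StubLLM()).step({"goal": "fix the failing test"})
```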
`search_memory`

```python
def search_memory(self, query: str) -> List[str]:
```

`search_memory` should return a list of events that match the query. This is used
for the recall action.
You can optionally just return `[]` for this method, meaning the agent has no long-term memory.
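A simple `search_memory` could be sketched with plain substring matching, as below; a real implementation might query a vector database instead, as the `AgentRecallAction` description suggests. The `MemoryAgent` class and its `remember` helper are hypothetical.

```python
from typing import List

# Hypothetical sketch of search_memory backed by an in-memory list.
class MemoryAgent:
    def __init__(self):
        self.events: List[str] = []

    def remember(self, event: str) -> None:
        self.events.append(event)

    def search_memory(self, query: str) -> List[str]:
        # Case-insensitive substring match; returns all matching events.
        return [e for e in self.events if query.lower() in e.lower()]


agent = MemoryAgent()
agent.remember("Ran `pytest` and saw 2 failures")
agent.remember("Opened README.md")
matches = agent.search_memory("pytest")
```

Returning an empty list for unknown queries (or always, for an agent with no long-term memory) satisfies the interface.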