
Fix CodeAct paper link (#1784)

https://arxiv.org/abs/2402.13463 is RefuteBench: Evaluating Refuting Instruction-Following for Large Language Models

https://arxiv.org/abs/2402.01030 is Executable Code Actions Elicit Better LLM Agents
Marshall Roch 1 year ago
parent
commit
64ee5d404d
1 changed file with 1 addition and 1 deletion

+ 1 - 1
docs/modules/usage/agents.md

@@ -8,7 +8,7 @@ sidebar_position: 3
 
 ### Description
 
-This agent implements the CodeAct idea ([paper](https://arxiv.org/abs/2402.13463), [tweet](https://twitter.com/xingyaow_/status/1754556835703751087)) that consolidates LLM agents’ **act**ions into a unified **code** action space for both _simplicity_ and _performance_ (see paper for more details).
+This agent implements the CodeAct idea ([paper](https://arxiv.org/abs/2402.01030), [tweet](https://twitter.com/xingyaow_/status/1754556835703751087)) that consolidates LLM agents’ **act**ions into a unified **code** action space for both _simplicity_ and _performance_ (see paper for more details).
 
 The conceptual idea is illustrated below. At each turn, the agent can: