Code source: https://github.com/langchain-ai/langgraph/blob/main/examples/agent_executor/human-in-the-loop.ipynb
from langchain import hub
from langchain.agents import create_openai_functions_agent
from langchain_openai.chat_models import ChatOpenAI
from langchain_community.tools.tavily_search import TavilySearchResults

tools = [TavilySearchResults(max_results=1)]

# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/openai-functions-agent")

# Choose the LLM that will drive the agent
llm = ChatOpenAI(model="gpt-3.5-turbo-1106", streaming=True)

# Construct the OpenAI Functions agent
agent_runnable = create_openai_functions_agent(llm, tools, prompt)

from typing import TypedDict, Annotated, List, Union
from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.messages import BaseMessage
import operator


class AgentState(TypedDict):
    # The input string
    input: str
    # The list of previous messages in the conversation
    chat_history: list[BaseMessage]
    # The outcome of a given call to the agent
    # Needs `None` as a valid type, since this is what this will start as
    agent_outcome: Union[AgentAction, AgentFinish, None]
    # List of actions and corresponding observations
    # Here we annotate this with `operator.add` to indicate that operations to
    # this state should be ADDED to the existing values (not overwrite it)
    intermediate_steps: Annotated[list[tuple[AgentAction, str]], operator.add]


from langchain_core.agents import AgentFinish
from langgraph.prebuilt.tool_executor import ToolExecutor

# This is a helper class that is useful for running tools
# It takes in an agent action, calls that tool and returns the result
tool_executor = ToolExecutor(tools)


# Define the agent
def run_agent(data):
    agent_outcome = agent_runnable.invoke(data)
    return {"agent_outcome": agent_outcome}


# Define the function to execute tools
def execute_tools(data):
    # Get the most recent agent_outcome - this is the key added in the `agent` above
    agent_action = data["agent_outcome"]
    response = input(f"[y/n] continue with: {agent_action}?")
    if response == "n":
        raise ValueError
    output = tool_executor.invoke(agent_action)
    return {"intermediate_steps": [(agent_action, str(output))]}


# Define logic that will be used to determine which conditional edge to go down
def should_continue(data):
    # If the agent outcome is an AgentFinish, then we return the `end` string
    # This will be used when setting up the graph to define the flow
    if isinstance(data["agent_outcome"], AgentFinish):
        return "end"
    # Otherwise, an AgentAction is returned
    # Here we return the `continue` string
    # This will be used when setting up the graph to define the flow
    else:
        return "continue"


from langgraph.graph import END, StateGraph

# Define a new graph
workflow = StateGraph(AgentState)

# Define the two nodes we will cycle between
workflow.add_node("agent", run_agent)
workflow.add_node("action", execute_tools)

# Set the entry point as `agent`
# This means that this node is the first one called
workflow.set_entry_point("agent")

# We now add a conditional edge
workflow.add_conditional_edges(
    # First, we define the start node. We use `agent`.
    # This means these are the edges taken after the `agent` node is called.
    "agent",
    # Next, we pass in the function that will determine which node is called next.
    should_continue,
    # Finally we pass in a mapping.
    # The keys are strings, and the values are other nodes.
    # END is a special node marking that the graph should finish.
    # What will happen is we will call `should_continue`, and then the output of that
    # will be matched against the keys in this mapping.
    # Based on which one it matches, that node will then be called.
    {
        # If `continue`, then we call the tool node.
        "continue": "action",
        # Otherwise we finish.
        "end": END,
    },
)

# We now add a normal edge from `action` to `agent`.
# This means that after `action` is called, the `agent` node is called next.
workflow.add_edge("action", "agent")

# Finally, we compile it!
# This compiles it into a LangChain Runnable,
# meaning you can use it as you would any other runnable
app = workflow.compile()

inputs = {"input": "北京今天的天气怎么样?", "chat_history": []}
for s in app.stream(inputs):
    print(list(s.values())[0])
print("----")ingFang SC", "Hiragino Sans GB", "Microsoft YaHei UI", "Microsoft YaHei", Arial, sans-serif;margin: 1.5em 8px;letter-spacing: 0.1em;color: rgb(63, 63, 63);">运行结果:ingFang SC", "Hiragino Sans GB", "Microsoft YaHei UI", "Microsoft YaHei", Arial, sans-serif;font-size: 1.2em;font-weight: bold;display: table;margin: 2em auto 1em;padding-right: 1em;padding-left: 1em;border-bottom: 2px solid rgb(15, 76, 129);color: rgb(63, 63, 63);">1. 代码解释ingFang SC", "Hiragino Sans GB", "Microsoft YaHei UI", "Microsoft YaHei", Arial, sans-serif;font-size: 1.2em;font-weight: bold;display: table;margin: 4em auto 2em;padding-right: 0.2em;padding-left: 0.2em;background: rgb(15, 76, 129);color: rgb(255, 255, 255);">1.1 总结使用LangGraph的步骤ingFang SC", "Hiragino Sans GB", "Microsoft YaHei UI", "Microsoft YaHei", Arial, sans-serif;margin: 1.5em 8px;letter-spacing: 0.1em;color: rgb(63, 63, 63);">上面代码其实没什么新鲜的,就是使用LangGraph的步骤,前面入门的文章(【AI Agent系列】【LangGraph】0. 快速上手:协同LangChain,LangGraph帮你用图结构轻松构建多智能体应用)中详细介绍过:ingFang SC", "Hiragino Sans GB", "Microsoft YaHei UI", "Microsoft YaHei", Arial, sans-serif;margin: 1.5em 8px;letter-spacing: 0.1em;color: rgb(63, 63, 63);">(1)首先创建一个图:workflow = StateGraph(AgentState)ingFang SC", "Hiragino Sans GB", "Microsoft YaHei UI", "Microsoft YaHei", Arial, sans-serif;margin: 1.5em 8px;letter-spacing: 0.1em;color: rgb(63, 63, 63);">(2)然后,往图中添加节点:workflow.add_node("agent",run_agent)
workflow.add_node("action",execute_tools)(3)再然后,添加边:
workflow.add_conditional_edges(
    "agent",
    should_continue,
    {
        "continue": "action",
        "end": END,
    },
)
workflow.add_edge("action","agent")(4)再然后,添加进入节点:workflow.set_entry_point("agent")
(5) Compile the graph: app = workflow.compile()
(6) Run it. The example uses the stream function; invoke works as well (see the short sketch below).
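As a minimal sketch (reusing the app and inputs defined in the code above), running the graph with invoke returns only the final state instead of streaming every node's update:

# Run the compiled graph once and read the final state back.
# stream() yields each node's state update as it happens; invoke() only
# returns the state after the graph reaches END.
final_state = app.invoke(inputs)
print(final_state["agent_outcome"])        # the final AgentFinish
print(final_state["intermediate_steps"])   # every (AgentAction, observation) pair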
Two nodes are defined, agent and action, each corresponding to a function:

• The run_agent function invokes the agent and collects its outcome.

• The execute_tools function first asks a human whether the tool call should go ahead; if so, it runs the corresponding tool, otherwise it aborts the program.
def run_agent(data):
    agent_outcome = agent_runnable.invoke(data)
    return {"agent_outcome": agent_outcome}

def execute_tools(data):
    agent_action = data["agent_outcome"]
    response = input(f"[y/n] continue with: {agent_action}?")
    if response == "n":
        raise ValueError
    output = tool_executor.invoke(agent_action)
return{"intermediate_steps":[(agent_action,str(output))]}下面来到本文的主要学习内容:节点间的信息是如何传递的,Graph中的State是如何更新的,
As mentioned before, one of LangGraph's core concepts is state. Every execution of the graph creates a state, which is passed between the nodes of the graph, and each node updates this state after it runs. LangGraph therefore behaves much like a state machine, and this is also how LangGraph passes information between nodes.
Here is the state definition from the code:
class AgentState(TypedDict):
    input: str
    chat_history: list[BaseMessage]
    agent_outcome: Union[AgentAction, AgentFinish, None]
    intermediate_steps: Annotated[list[tuple[AgentAction, str]], operator.add]

Just define an AgentState class that inherits from TypedDict. AgentState declares the pieces of information to pass around: input, chat_history, agent_outcome, and intermediate_steps. Before a node runs it can read these values from the state, and after it runs its return value is merged back into the state.
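One detail worth highlighting is the operator.add annotation on intermediate_steps. Here is a tiny stand-alone illustration of what that reducer does (plain Python, not LangGraph code; the values are made up):

import operator

# operator.add on two lists concatenates them. That is why annotating
# intermediate_steps with operator.add makes LangGraph APPEND each node's
# returned steps to the existing list, while un-annotated keys such as
# agent_outcome are simply overwritten by the latest value.
existing_steps = [("step-1", "observation-1")]
new_steps = [("step-2", "observation-2")]
print(operator.add(existing_steps, new_steps))
# [('step-1', 'observation-1'), ('step-2', 'observation-2')]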
agent_outcome is the outcome returned after the agent runs (an AgentAction or an AgentFinish; don't dwell on exactly what these are, it's enough to know they represent the agent's status).
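For the curious, a rough hand-built sketch of what the two objects carry (for illustration only; in the real run the agent constructs them, and the tool name shown here is just an example):

from langchain_core.agents import AgentAction, AgentFinish

# An AgentAction means "call this tool with this input next"
action = AgentAction(
    tool="tavily_search_results_json",   # example tool name
    tool_input={"query": "weather in Beijing"},
    log="calling the search tool",
)

# An AgentFinish means "done, here is the final answer"
finish = AgentFinish(
    return_values={"output": "It is sunny in Beijing today."},
    log="final answer",
)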
Let's look at how this is used in the code:

(1) First, hand this state class to the graph when creating it: workflow = StateGraph(AgentState)

(2) Then every node can use it.

Take the execute_tools node as an example:
def execute_tools(data):
    agent_action = data["agent_outcome"]
    response = input(f"[y/n] continue with: {agent_action}?")
    if response == "n":
        raise ValueError
    output = tool_executor.invoke(agent_action)
return{"intermediate_steps":[(agent_action,str(output))]}在执行前,先从状态中获取agent_outcome,使用这个值给用户提示,是否需要继续。如果继续,则执行工具获取执行结果。最后,执行结果被更新到状态的intermediate_steps字段中。下个节点就可以使用这个intermediate_steps信息了。这就完成了节点间的信息传递。
Overall this article is fairly simple. We first reviewed the basic steps for creating and using a LangGraph graph, and then studied how state is defined in LangGraph and how information is passed between nodes, which is the genuinely new material. To summarize the steps for passing custom information (a combined sketch follows the list):
(1) Define class AgentState(TypedDict)

(2) Pass it to the graph: workflow = StateGraph(AgentState)

(3) Where needed, read the state inside a node: agent_action = data["agent_outcome"]

(4) Where needed, update the state from a node by returning a partial result: return {"intermediate_steps": [(agent_action, str(output))]}
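Putting the four steps together, here is a minimal self-contained sketch (the names CounterState, add_one and report are invented for illustration) of defining a state, handing it to a graph, and letting nodes read and update it:

import operator
from typing import Annotated, TypedDict

from langgraph.graph import END, StateGraph

# (1) Define the state class
class CounterState(TypedDict):
    count: int                                     # overwritten on each update
    history: Annotated[list[str], operator.add]    # appended to, never overwritten

# (3)/(4) Nodes read the state through their argument and update it by
# returning a partial dict
def add_one(data):
    new_count = data["count"] + 1
    return {"count": new_count, "history": [f"count is now {new_count}"]}

def report(data):
    print(data["history"])
    return {"history": ["report: printed the history"]}

# (2) Pass the state class to the graph
workflow = StateGraph(CounterState)
workflow.add_node("add_one", add_one)
workflow.add_node("report", report)
workflow.set_entry_point("add_one")
workflow.add_edge("add_one", "report")
workflow.add_edge("report", END)

app = workflow.compile()
print(app.invoke({"count": 0, "history": []}))
# {'count': 1, 'history': ['count is now 1', 'report: printed the history']}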
This example is really there to show how to bring a human into the loop of a multi-agent interaction. The pattern is worth copying: just add an input() call at the appropriate place in a node and wait for the user's response.
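As a small variation on that idea (purely a sketch, not part of the original notebook), the confirmation prompt can be pulled out into a reusable wrapper so any node can be gated behind human approval:

def with_human_approval(node_fn, description):
    # Wrap a node function so the user must type "y" before it runs
    def gated(data):
        answer = input(f"[y/n] run {description}? ")
        if answer.strip().lower() != "y":
            raise ValueError(f"User aborted before running {description}")
        return node_fn(data)
    return gated

# Usage: gate the tool-execution node instead of hard-coding input() inside it
# workflow.add_node("action", with_human_approval(execute_tools, "the tool call"))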