This is how you create an agent executor in LangGraph, with functionality similar to LangChain's agent executor. Next we will explore the StateGraph interface in more depth, along with the different streaming methods for returning results.

4. Exploring the Chat Agent Executor

We will explore the chat agent executor in LangGraph, a tool designed for chat-based models. This executor is distinctive in that it operates entirely on a list of input messages, updating the agent's state over time by appending new messages to that list.

Let's walk through the setup:

4.1 Installing the packages:

As before, we need the LangChain package, LangChain OpenAI for the model, and the Tavily package for the search tool, and we set API keys for these services.

!pip install --quiet -U langchain langchain_openai tavily-python
import os
import getpass

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
os.environ["TAVILY_API_KEY"] = getpass.getpass("Tavily API Key:")
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = getpass.getpass("LangSmith API Key:")

4.2 Setting up the tools and the model:

We will use Tavily Search as our tool and set up a tool executor to invoke it. For the model, we will use the Chat OpenAI model from the LangChain integrations, making sure it is initialized with streaming enabled. This lets us stream tokens back and bind the functions we want the model to be able to call.

from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import ToolExecutor
from langchain.tools.render import format_tool_to_openai_function

tools = [TavilySearchResults(max_results=1)]
tool_executor = ToolExecutor(tools)

# We will set streaming=True so that we can stream tokens
# See the streaming section for more information on this.
model = ChatOpenAI(temperature=0, streaming=True)

functions = [format_tool_to_openai_function(t) for t in tools]
model = model.bind_functions(functions)

4.3 Defining the agent state:

The agent state is a simple dictionary with a key holding the list of messages. We annotate that key with `operator.add` so that any updates a node makes to the message list accumulate over time instead of overwriting it.

from typing import TypedDict, Annotated, Sequence
import operator
from langchain_core.messages import BaseMessage


class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], operator.add]
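To see why updates accumulate rather than overwrite, here is a minimal sketch of the `operator.add` reducer at work, using plain strings in place of `BaseMessage` objects (a simplification for illustration):

```python
import operator

# Each node returns {"messages": [new_message]}; because the messages key
# is annotated with operator.add, the update is concatenated onto the
# existing list instead of replacing it.
state = {"messages": ["human: what is the weather in sf"]}
update = {"messages": ["ai: <function_call tavily_search_results_json>"]}
state["messages"] = operator.add(state["messages"], update["messages"])
print(state["messages"])
# → ['human: what is the weather in sf', 'ai: <function_call tavily_search_results_json>']
```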

4.4 Creating the nodes and edges:

Nodes represent units of work, and edges connect the nodes. We need an agent node that calls the language model and gets a response, an action node that executes any tools the model asked for, and a function that decides whether we should continue calling tools or finish.

from langgraph.prebuilt import ToolInvocation
import json
from langchain_core.messages import FunctionMessage


# Define the function that determines whether to continue or not
def should_continue(state):
    messages = state['messages']
    last_message = messages[-1]
    # If there is no function call, then we finish
    if "function_call" not in last_message.additional_kwargs:
        return "end"
    # Otherwise if there is, we continue
    else:
        return "continue"


# Define the function that calls the model
def call_model(state):
    messages = state['messages']
    response = model.invoke(messages)
    # We return a list, because this will get added to the existing list
    return {"messages": [response]}


# Define the function to execute tools
def call_tool(state):
    messages = state['messages']
    # Based on the continue condition
    # we know the last message involves a function call
    last_message = messages[-1]
    # We construct a ToolInvocation from the function_call
    action = ToolInvocation(
        tool=last_message.additional_kwargs["function_call"]["name"],
        tool_input=json.loads(last_message.additional_kwargs["function_call"]["arguments"]),
    )
    # We call the tool_executor and get back a response
    response = tool_executor.invoke(action)
    # We use the response to create a FunctionMessage
    function_message = FunctionMessage(content=str(response), name=action.tool)
    # We return a list, because this will get added to the existing list
    return {"messages": [function_message]}

4.5 Building the graph:

We create a graph over the agent state, add nodes for the agent and the action, and set the entry point to the agent node. A conditional edge routes based on whether the agent should continue or finish, and a normal edge always routes from the action node back to the agent.

from langgraph.graph import StateGraph, END

# Define a new graph
workflow = StateGraph(AgentState)

# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("action", call_tool)

# Set the entrypoint as agent
# This means that this node is the first one called
workflow.set_entry_point("agent")

# We now add a conditional edge
workflow.add_conditional_edges(
    # First, we define the start node. We use agent.
    # This means these are the edges taken after the agent node is called.
    "agent",
    # Next, we pass in the function that will determine which node is called next.
    should_continue,
    # Finally we pass in a mapping.
    # The keys are strings, and the values are other nodes.
    # END is a special node marking that the graph should finish.
    # What will happen is we will call should_continue, and then the output of that
    # will be matched against the keys in this mapping.
    # Based on which one it matches, that node will then be called.
    {
        # If tools, then we call the tool node.
        "continue": "action",
        # Otherwise we finish.
        "end": END
    }
)

# We now add a normal edge from tools to agent.
# This means that after tools is called, agent node is called next.
workflow.add_edge('action', 'agent')

# Finally, we compile it!
# This compiles it into a LangChain Runnable,
# meaning you can use it as you would any other runnable
app = workflow.compile()

4.6 Compiling and using the graph:

After compiling the graph, we create an input dictionary with a messages key. Running the graph processes these messages, appending the AI response, the tool results, and the final output to the message list.

from langchain_core.messages import HumanMessage

inputs = {"messages": [HumanMessage(content="what is the weather in sf")]}
app.invoke(inputs)

4.7 Observing the execution:

With LangSmith, we can see the detailed steps our agent took, including the calls to OpenAI and the resulting outputs.

Streaming: LangGraph also provides streaming support, so results can be consumed node by node instead of waiting for the full run.
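The streaming loop follows the sketch below. `app.stream(inputs)` yields one dictionary per executed node, keyed by node name; here `fake_stream` is a hypothetical stand-in for it, so the pattern can be shown without API keys:

```python
# Stand-in for app.stream(inputs): yields one {node_name: state_update}
# dict per step, in execution order (agent -> action -> agent).
def fake_stream(inputs):
    yield {"agent": {"messages": ["<AIMessage with function_call>"]}}
    yield {"action": {"messages": ["<FunctionMessage with search results>"]}}
    yield {"agent": {"messages": ["<final AIMessage>"]}}

for output in fake_stream({"messages": ["what is the weather in sf"]}):
    # stream() yields dictionaries with output keyed by node name
    for key, value in output.items():
        print(f"Output from node '{key}':")
        print(value)
        print("---")
```

With a real compiled graph you would replace `fake_stream(...)` with `app.stream(inputs)`; the loop body stays the same.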

5. Adding a Human in the Loop

Let's modify the chat agent executor in LangGraph to include a "human in the loop" component, so that a human can approve each tool action before it executes.

Setup: the initial setup is unchanged, and no additional installs are needed. We create our tools, set up the tool executor, prepare the model, bind the tools to the model, and define the agent state, exactly as in the previous section.

Key modification, the call_tool function: the main change is in the call_tool function. We add a step in which the system prompts the user (that's you!) in the interactive IDE, asking whether to proceed with the given action. If the user answers "n", an error is raised and the process stops. This is our human validation step.

# Define the function to execute tools
def call_tool(state):
    messages = state['messages']
    # Based on the continue condition
    # we know the last message involves a function call
    last_message = messages[-1]
    # We construct a ToolInvocation from the function_call
    action = ToolInvocation(
        tool=last_message.additional_kwargs["function_call"]["name"],
        tool_input=json.loads(last_message.additional_kwargs["function_call"]["arguments"]),
    )
    # Ask the human for approval before executing the tool
    response = input(f"[y/n] continue with: {action}?")
    if response == "n":
        raise ValueError
    # We call the tool_executor and get back a response
    response = tool_executor.invoke(action)
    # We use the response to create a FunctionMessage
    function_message = FunctionMessage(content=str(response), name=action.tool)
    # We return a list, because this will get added to the existing list
    return {"messages": [function_message]}

Using the modified executor: when we run this modified executor, it asks for approval before executing any tool action. If we answer "y", it proceeds normally; if we answer "n", an error is raised and the process stops.

Output from node 'agent':
---
{'messages': [AIMessage(content='', additional_kwargs={'function_call': {'arguments': '{\n  "query": "weather in San Francisco"\n}', 'name': 'tavily_search_results_json'}})]}
---
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[10], line 4
      1 from langchain_core.messages import HumanMessage
      3 inputs = {"messages": [HumanMessage(content="what is the weather in sf")]}
----> 4 for output in app.stream(inputs):
      5     # stream() yields dictionaries with output keyed by node name
      6     for key, value in output.items():
      7         print(f"Output from node '{key}':")

This is a basic implementation. In the real world you would probably want to replace the raised error with a more graceful response, and use a friendlier interface than a Jupyter notebook. But it gives you a clear picture of how to add a simple yet effective human-in-the-loop component to a LangGraph agent.
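One way to soften the raised error: have the decline branch return a message the agent can react to, instead of killing the run. A minimal sketch, where `ask` and `run_tool` are hypothetical injectable stand-ins for `input()` and `tool_executor.invoke`:

```python
def call_tool_or_decline(action, run_tool, ask=input):
    # Ask the human first; any answer other than "n" approves the action.
    if ask(f"[y/n] continue with: {action}? ") == "n":
        # Return a refusal the model can see, rather than raising ValueError,
        # so the graph keeps running and the agent can change course.
        return f"Tool call {action!r} was declined by the user."
    return run_tool(action)

# Simulated decline: the stand-ins avoid real user input and real tool calls.
print(call_tool_or_decline("tavily_search", run_tool=lambda a: "sunny", ask=lambda p: "n"))
```

In the real call_tool you would wrap the returned string in a FunctionMessage, just as the approved path does, so the refusal lands in the message history.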

6. Managing the Agent's Steps

Let's look at modifying the chat agent executor in LangGraph to manipulate the agent's internal state as it processes messages.

This tutorial builds on the basic chat agent executor setup, so if you have not yet worked through the initial setup in the base notebook, do that first. Here we focus only on the new modifications.

Key modification, filtering messages: the main change we introduce is a way to filter the messages passed to the model. You can now customize which messages the agent considers. For example:

def call_model(state):
    # Only pass the five most recent messages to the model
    messages = state['messages'][-5:]
    response = model.invoke(messages)
    # We return a list, because this will get added to the existing list
    return {"messages": [response]}

This is a small but powerful addition that lets you control how the agent interacts with its message history and improve its decision making.

Using the modified executor: the implementation is straightforward. Only the input message differs, but the important point is that any logic you want to apply to the agent's steps can be plugged into this newly modified section.

This approach works well for the chat agent executor, but the same principle applies if you are using the standard agent executor.
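The `[-5:]` slice is just one choice; any filtering logic can go in that spot. For instance, a hypothetical filter that always keeps the first message (say, a system prompt) plus the most recent N:

```python
def filter_messages(messages, keep_last=5):
    # Short histories fit as-is; otherwise keep the first message plus
    # the last keep_last, so early instructions are never dropped.
    if len(messages) <= keep_last + 1:
        return list(messages)
    return [messages[0]] + messages[-keep_last:]

history = [f"msg{i}" for i in range(10)]
print(filter_messages(history))
# → ['msg0', 'msg5', 'msg6', 'msg7', 'msg8', 'msg9']
```

Inside call_model you would then write `messages = filter_messages(state['messages'])` in place of the slice.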

7. Forcing a Tool Call First

We will make a simple but effective modification to the chat agent executor in LangGraph to ensure a tool is always called first. This builds on the basic chat agent executor notebook, so make sure you have reviewed that background.

Key modification, forcing the tool call first: our focus here is to make the chat agent call a specific tool as its first action. To do this we add a new node, the "first model" node. This node is programmed to return a message instructing the agent to call a specific tool, the tavily_search_results_json tool, with the content of the latest message as the query.

# This is new - in the first call of the model we want to explicitly hard-code the action
from langchain_core.messages import AIMessage
import json


def first_model(state):
    human_input = state['messages'][-1].content
    return {
        "messages": [
            AIMessage(
                content="",
                additional_kwargs={
                    "function_call": {
                        "name": "tavily_search_results_json",
                        "arguments": json.dumps({"query": human_input})
                    }
                }
            )
        ]
    }
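The hard-coded `function_call` payload can be checked in isolation. This sketch mirrors first_model's `additional_kwargs` as a plain dict (no AIMessage needed) and confirms the arguments round-trip through JSON, which is what call_tool relies on when it runs `json.loads`:

```python
import json

def first_call_payload(human_input):
    # Mirrors the additional_kwargs that first_model hard-codes.
    return {
        "function_call": {
            "name": "tavily_search_results_json",
            "arguments": json.dumps({"query": human_input}),
        }
    }

payload = first_call_payload("what is the weather in sf")
# call_tool will json.loads these arguments to build the ToolInvocation.
print(json.loads(payload["function_call"]["arguments"]))
# → {'query': 'what is the weather in sf'}
```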

Updating the graph: we modify the existing graph to make this new "first_agent" node the entry point. This guarantees the first_agent node is always called first, followed by the action node. We keep the conditional edge from the agent to either the action or the end, and the normal edge from the action back to the agent. The key addition is a new edge from first_agent to action, ensuring the tool call happens right at the start.

from langgraph.graph import StateGraph, END

# Define a new graph
workflow = StateGraph(AgentState)

# Define the new entrypoint
workflow.add_node("first_agent", first_model)

# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("action", call_tool)

# Set the entrypoint as first_agent
# This means that this node is the first one called
workflow.set_entry_point("first_agent")

# We now add a conditional edge
workflow.add_conditional_edges(
    # First, we define the start node. We use agent.
    # This means these are the edges taken after the agent node is called.
    "agent",
    # Next, we pass in the function that will determine which node is called next.
    should_continue,
    # Finally we pass in a mapping.
    # The keys are strings, and the values are other nodes.
    # END is a special node marking that the graph should finish.
    # What will happen is we will call should_continue, and then the output of that
    # will be matched against the keys in this mapping.
    # Based on which one it matches, that node will then be called.
    {
        # If tools, then we call the tool node.
        "continue": "action",
        # Otherwise we finish.
        "end": END
    }
)

# We now add a normal edge from tools to agent.
# This means that after tools is called, agent node is called next.
workflow.add_edge('action', 'agent')

# After we call the first agent, we know we want to go to action
workflow.add_edge('first_agent', 'action')

# Finally, we compile it!
# This compiles it into a LangChain Runnable,
# meaning you can use it as you would any other runnable
app = workflow.compile()

Using the modified executor: when we run this updated executor, the first result comes back quickly because we bypass the initial language model call and invoke the tool directly. This is confirmed by watching the run in LangSmith, where we can see that the tool is the first thing called, followed by the final language model call.

This modification is a simple yet powerful way to guarantee that a specific tool is used immediately in your chat agent's workflow.


This article is reposted from the WeChat public account @ArronAI.
