Getting Started with Self Reflection in LangGraph
Simple Reflection (Basic Reflection)
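In Basic Reflection, one LLM chain generates a draft, a second chain critiques it, and the critique is fed back to the generator for another attempt; LangGraph loops the two until a stop condition fires. The rest of this section builds exactly that loop: a generate node, a reflect node, and a conditional edge between them.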
Implementation
Now, let's build it.
Setting up the environment with poetry
Environment setup is covered in detail in the poetry setup section.
1. Initialize poetry
Run the following command in your working directory:
poetry init
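This walks you through creating pyproject.toml interactively. If you just want the defaults, poetry init -n skips the prompts (a standard poetry flag, nothing specific to this tutorial).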
2. Add the libraries
- python-dotenv
- black
- isort
- langchain
- langchain-openai
- langgraph
poetry add python-dotenv black isort langchain langchain-openai langgraph
3. Check the directory layout
Once you've created main.py and chains.py (both written out below), the directory looks like this:
.
├── main.py
├── chains.py
├── poetry.lock
└── pyproject.toml
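You will also need a .env file in the project root (not shown above, since it should stay out of version control) so that load_dotenv() can pick up your OpenAI API key. OPENAI_API_KEY is the variable that langchain-openai reads by default:

OPENAI_API_KEY=sk-...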
Implementing the graph with LangGraph
In LangGraph, you build a graph by wiring together nodes and edges. The graph for this article looks like this:
+-----------+
| __start__ |
+-----------+
*
*
*
+----------+
| generate |
+----------+
... ...
. .
.. ..
+---------+ +---------+
| reflect | | __end__ |
+---------+ +---------+
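The solid edge from reflect back to generate is a fixed edge added with add_edge, while the dotted branch out of generate is a conditional edge: the should_continue function in main.py below decides at runtime whether to route to reflect for another round of critique or to __end__.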
main.py
from dotenv import load_dotenv
load_dotenv()
from typing import List, Sequence
from langchain_core.messages import BaseMessage, HumanMessage
from langgraph.graph import END, MessageGraph
from chains import generate_chain, reflect_chain
REFLECT = "reflect"
GENERATE = "generate"
def generation_node(state: Sequence[BaseMessage]):
    """
    Tweet-generation node. Uses generate_chain from chains.py to produce a tweet.

    Args:
        state (Sequence[BaseMessage]): The message history so far.

    Returns:
        BaseMessage: A message containing the generated tweet.
    """
    return generate_chain.invoke({"messages": state})

def reflection_node(messages: Sequence[BaseMessage]) -> List[BaseMessage]:
    """Critique node. Wraps the critique in a HumanMessage so the generator sees it as user feedback."""
    res = reflect_chain.invoke({"messages": messages})
    return [HumanMessage(content=res.content)]
builder = MessageGraph()
builder.add_node(GENERATE, generation_node)
builder.add_node(REFLECT, reflection_node)
builder.set_entry_point(GENERATE)
def should_continue(state: List[BaseMessage]):
    # `if len(state):` would end the graph on the very first pass, since the
    # state is never empty at this point. Instead, stop after six messages
    # (about three generate/reflect rounds); the cutoff is arbitrary, so tune it.
    if len(state) > 6:
        return END
    return REFLECT
builder.add_conditional_edges(GENERATE, should_continue)
builder.add_edge(REFLECT, GENERATE)
graph = builder.compile()
print(graph.get_graph().draw_mermaid())
graph.get_graph().print_ascii()
if __name__ == "__main__":
    print("Hello LangGraph")
    inputs = HumanMessage(content="""Make this tweet better:
    @LangChainAI
    - newly Tool Calling feature is seriously underrated.
    After a long wait, it's here- making the implementation of agents across different models with function calling.
    Made a video covering their newest blog post
    """)
    response = graph.invoke(inputs)
    print(response)
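Once both files are in place and the .env is set, run the script through poetry:

poetry run python main.py

The mermaid source and the ASCII diagram print first (those two calls run at import time), followed by the full message history that graph.invoke returns, with the final revised tweet at the end.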
chains.py
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI
reflection_prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a viral twitter influencer grading a tweet. Generate critique and recommendations for the user's tweet."
            " Always provide detailed recommendations, including requests for length, virality, style, etc.",
        ),
        MessagesPlaceholder(variable_name="messages"),
    ]
)
generation_prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a twitter techie influencer assistant tasked with writing excellent twitter posts."
            " Generate the best twitter post possible for the user's request."
            " If the user provides critique, respond with a revised version of your previous attempts.",
        ),
        MessagesPlaceholder(variable_name="messages"),
    ]
)
llm = ChatOpenAI()
generate_chain = generation_prompt | llm
reflect_chain = reflection_prompt | llm
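Before wiring these chains into the graph, it can help to sanity-check each one on its own. A minimal sketch (the prompt text here is an arbitrary example, not part of the tutorial):

from langchain_core.messages import HumanMessage

from chains import generate_chain, reflect_chain

# Ask the generation chain for a first draft
draft = generate_chain.invoke({"messages": [HumanMessage(content="Write a tweet introducing LangGraph")]})
print(draft.content)

# Feed the draft to the reflection chain and print its critique
critique = reflect_chain.invoke({"messages": [HumanMessage(content=draft.content)]})
print(critique.content)

Note that ChatOpenAI() with no arguments falls back to the library's default model; pass model="..." explicitly if you want a specific one.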