```python
# 2. Prepare the data and index
# Create a dummy knowledge-base file
import os

from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

os.makedirs("data", exist_ok=True)
with open("data/paul_graham_essay.txt", "w", encoding="utf-8") as f:
    f.write(
        "Paul Graham co-founded Y Combinator. After YC, he started painting. "
        "He spent most of 2014 painting. In March 2015, he started working on Lisp again."
    )

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
```
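The chat calls below use a `chat_engine` object that is created elsewhere in this walkthrough. As a minimal sketch (an assumption, not the original code), a condense-question chat engine with `verbose=True`, which matches the rewritten-query log shown later, could be built like this:

```python
# Assumed setup: a condense-question chat engine over the index.
# verbose=True makes the engine print the rewritten standalone query.
chat_engine = index.as_chat_engine(chat_mode="condense_question", verbose=True)
```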
```python
# First turn of the conversation
response = chat_engine.chat("What did Paul Graham do after YC?")
print(response)

# Second turn: a follow-up question.
# chat_engine remembers the context of the previous turn.
response_follow_up = chat_engine.chat("What about after that?")
print(response_follow_up)
```
When the second-turn follow-up `chat("What about after that?")` executes, `verbose=True` prints something like the following to the console, clearly demonstrating the power of query rewriting:
```
Querying with: What did Paul Graham do after he started painting after leaving YC?
```
The default prompt template LlamaIndex uses is shown below. It instructs the LLM, given the chat history and the new follow-up message, to rewrite the follow-up into a standalone question that captures all relevant context.
```
Given a conversation (between Human and Assistant) and a follow up message from Human, rewrite the message to be a standalone question that captures all relevant context from the conversation.
```
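If you need different rewriting behavior, this prompt can be swapped out via the engine's `condense_question_prompt` parameter. Below is a minimal sketch; the template wording is illustrative, but note that the condense prompt must expose the `{chat_history}` and `{question}` variables:

```python
from llama_index.core import PromptTemplate

# Illustrative custom template: same job as the default, different wording.
custom_condense_prompt = PromptTemplate(
    "Given the following conversation between a user and an assistant, "
    "and a follow up question from the user, "
    "rephrase the follow up question to be a standalone question.\n\n"
    "Chat History:\n{chat_history}\n"
    "Follow Up Question: {question}\n"
    "Standalone question:"
)

chat_engine = index.as_chat_engine(
    chat_mode="condense_question",
    condense_question_prompt=custom_condense_prompt,
    verbose=True,
)
```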
```python
# 2. Prepare the data and index
# Use the same data as in the LlamaIndex example
import os

os.makedirs("data", exist_ok=True)
with open("data/paul_graham_essay.txt", "w", encoding="utf-8") as f:
    f.write(
        "Paul Graham co-founded Y Combinator. After YC, he started painting. "
        "He spent most of 2014 painting. In March 2015, he started working on Lisp again."
    )
```