
Still confused by RAG evaluation? One article to walk through the full RAG evaluation workflow!


With the rapid development of AI, developers often face a paradox of choice: how do you pick the right prompt, and how do you trade off LLM quality against cost? Evaluation provides a structured process for making these decisions, which speeds up development.

The following walks through how to run evaluations with LangSmith and discusses each part of the workflow.

I. Prepare the Dataset

Create the dataset in the LangSmith UI by uploading a CSV.

https://smith.langchain.com/public/730d833b-74da-43e2-a614-4e2ca2502606/d
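If you prefer to create the dataset from code instead of uploading a CSV in the UI, the LangSmith SDK can do the same thing. A minimal sketch, assuming the same "input_question"/"output_answer" keys used later in this article; the example QA pair is hypothetical:

from langsmith import Client

client = Client()

# Create the dataset programmatically instead of via CSV upload in the UI
dataset = client.create_dataset(
    dataset_name="LCEL-QA",
    description="QA pairs about LCEL (LangChain Expression Language)",
)

# Hypothetical example pair; in practice these rows come from your CSV
client.create_examples(
    inputs=[{"input_question": "How do I compose two runnables in LCEL?"}],
    outputs=[{"output_answer": "Chain them with the | operator, e.g. prompt | llm."}],
    dataset_id=dataset.id,
)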

Here, make sure the OpenAI and LangSmith API keys are set.

import getpass
import os

# Enable LangSmith tracing and prompt for any missing API keys
os.environ["LANGCHAIN_TRACING_V2"] = "true"
if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API key: ")
if not os.environ.get("LANGCHAIN_API_KEY"):
    os.environ["LANGCHAIN_API_KEY"] = getpass.getpass("LangSmith API key: ")

Specify the dataset name.

### Dataset name
dataset_name = "LCEL-QA"

II. Create the Task

Next, use LangChain to build the retriever and index the relevant documents.

### INDEX

from bs4 import BeautifulSoup as Soup
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings
from langchain_community.document_loaders.recursive_url_loader import RecursiveUrlLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load docs
url = "https://python.langchain.com/v0.1/docs/expression_language/"
loader = RecursiveUrlLoader(
    url=url, max_depth=20, extractor=lambda x: Soup(x, "html.parser").text
)
docs = loader.load()

# Split into chunks
text_splitter = RecursiveCharacterTextSplitter(chunk_size=4500, chunk_overlap=200)
splits = text_splitter.split_documents(docs)

# Embed and store in Chroma
vectorstore = Chroma.from_documents(documents=splits, embedding=OpenAIEmbeddings())

# Index
retriever = vectorstore.as_retriever()
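Before wiring the retriever into the bot, a quick sanity check can confirm the index actually returns documents. A minimal sketch; the sample query is just an example:

# Sanity check: fetch documents for a sample query and peek at the first one
sample_docs = retriever.invoke("How do I build a RAG chain in LCEL?")
print(len(sample_docs), sample_docs[0].page_content[:200])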

Create the chatbot.

### RAG bot

import openai
from langsmith import traceable
from langsmith.wrappers import wrap_openai

class RagBot:

    def __init__(self, retriever, model: str = "gpt-4-0125-preview"):
        self._retriever = retriever
        # Wrapping the client instruments the LLM
        self._client = wrap_openai(openai.Client())
        self._model = model

    @traceable()
    def retrieve_docs(self, question):
        return self._retriever.invoke(question)

    @traceable()
    def invoke_llm(self, question, docs):
        response = self._client.chat.completions.create(
            model=self._model,
            messages=[
                {
                    "role": "system",
                    "content": "You are a helpful AI code assistant with expertise in LCEL."
                    " Use the following docs to produce a concise code solution to the user question.\n\n"
                    f"## Docs\n\n{docs}",
                },
                {"role": "user", "content": question},
            ],
        )

        # Evaluators will expect "answer" and "contexts"
        return {
            "answer": response.choices[0].message.content,
            "contexts": [str(doc) for doc in docs],
        }

    @traceable()
    def get_answer(self, question: str):
        docs = self.retrieve_docs(question)
        return self.invoke_llm(question, docs)

rag_bot = RagBot(retriever)

response = rag_bot.get_answer("How to build a RAG chain in LCEL?")
response["answer"][:150]

def predict_rag_answer(example: dict):
    """Use this for answer evaluation"""
    response = rag_bot.get_answer(example["input_question"])
    return {"answer": response["answer"]}

def predict_rag_answer_with_context(example: dict):
    """Use this for evaluation of retrieved documents and hallucinations"""
    response = rag_bot.get_answer(example["input_question"])
    return {"answer": response["answer"], "contexts": response["contexts"]}

III. Create Evaluators

The following covers four types of RAG evaluation.

1. Response vs reference answer

Goal: measure how similar/correct the RAG chain's answer is relative to the ground-truth answer.

Mode: uses a ground-truth (reference) answer supplied via the dataset.

Judge: uses an LLM-as-judge to assess answer correctness.

Reference: https://smith.langchain.com/hub/langchain-ai/rag-answer-vs-reference

from langchain import hub
from langchain_openai import ChatOpenAI

# Grade prompt
grade_prompt_answer_accuracy = prompt = hub.pull("langchain-ai/rag-answer-vs-reference")

def answer_evaluator(run, example) -> dict:
    """A simple evaluator for RAG answer accuracy"""

    # Get question, ground truth answer, RAG chain answer
    input_question = example.inputs["input_question"]
    reference = example.outputs["output_answer"]
    prediction = run.outputs["answer"]

    # LLM grader
    llm = ChatOpenAI(model="gpt-4-turbo", temperature=0)

    # Structured prompt
    answer_grader = grade_prompt_answer_accuracy | llm

    # Run evaluator
    score = answer_grader.invoke(
        {
            "question": input_question,
            "correct_answer": reference,
            "student_answer": prediction,
        }
    )
    score = score["Score"]

    return {"key": "answer_v_reference_score", "score": score}

from langsmith.evaluation import evaluate

experiment_results = evaluate(
    predict_rag_answer,
    data=dataset_name,
    evaluators=[answer_evaluator],
    experiment_prefix="rag-answer-v-reference",
    metadata={"version": "LCEL context, gpt-4-0125-preview"},
)
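The experiment also appears in the LangSmith UI under the given prefix. If you want to look at the per-example scores locally, a hedged sketch, assuming your langsmith version exposes a pandas view on the returned results and that pandas is installed:

# Inspect per-example results locally (assumes ExperimentResults.to_pandas is available)
df = experiment_results.to_pandas()
print(df.head())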
2. Response vs input

Goal: measure how well the generated response addresses the initial user input.

Mode: reference-free, since it compares the answer to the input question.

Judge: uses an LLM-as-judge to assess answer relevance, helpfulness, etc.

Reference: https://smith.langchain.com/hub/langchain-ai/rag-answer-helpfulness

# Grade prompt
grade_prompt_answer_helpfulness = prompt = hub.pull("langchain-ai/rag-answer-helpfulness")

def answer_helpfulness_evaluator(run, example) -> dict:
    """A simple evaluator for RAG answer helpfulness"""

    # Get question and RAG chain answer
    input_question = example.inputs["input_question"]
    prediction = run.outputs["answer"]

    # LLM grader
    llm = ChatOpenAI(model="gpt-4-turbo", temperature=0)

    # Structured prompt
    answer_grader = grade_prompt_answer_helpfulness | llm

    # Run evaluator
    score = answer_grader.invoke(
        {"question": input_question, "student_answer": prediction}
    )
    score = score["Score"]

    return {"key": "answer_helpfulness_score", "score": score}

experiment_results = evaluate(
    predict_rag_answer,
    data=dataset_name,
    evaluators=[answer_helpfulness_evaluator],
    experiment_prefix="rag-answer-helpfulness",
    metadata={"version": "LCEL context, gpt-4-0125-preview"},
)
3. Response vs retrieved docs

Goal: measure to what extent the generated response is grounded in the retrieved context.

Mode: reference-free, since it compares the answer to the retrieved context.

Judge: uses an LLM-as-judge to assess faithfulness, hallucination, etc.

Reference: https://smith.langchain.com/hub/langchain-ai/rag-answer-hallucination

# Prompt
grade_prompt_hallucinations = prompt = hub.pull("langchain-ai/rag-answer-hallucination")

def answer_hallucination_evaluator(run, example) -> dict:
    """A simple evaluator for generation hallucination"""

    # RAG inputs
    input_question = example.inputs["input_question"]
    contexts = run.outputs["contexts"]

    # RAG answer
    prediction = run.outputs["answer"]

    # LLM grader
    llm = ChatOpenAI(model="gpt-4-turbo", temperature=0)

    # Structured prompt
    answer_grader = grade_prompt_hallucinations | llm

    # Get score
    score = answer_grader.invoke(
        {"documents": contexts, "student_answer": prediction}
    )
    score = score["Score"]

    return {"key": "answer_hallucination", "score": score}
4. Retrieved docs vs input

Goal: measure how good the retrieved results are for this query.

Mode: reference-free, since it compares the question to the retrieved context.

Judge: uses an LLM-as-judge to assess relevance.

Reference: https://smith.langchain.com/hub/langchain-ai/rag-document-relevance

# Grade prompt
grade_prompt_doc_relevance = hub.pull("langchain-ai/rag-document-relevance")

def docs_relevance_evaluator(run, example) -> dict:
    """A simple evaluator for document relevance"""

    # RAG inputs
    input_question = example.inputs["input_question"]
    contexts = run.outputs["contexts"]

    # LLM grader
    llm = ChatOpenAI(model="gpt-4-turbo", temperature=0)

    # Structured prompt
    answer_grader = grade_prompt_doc_relevance | llm

    # Get score
    score = answer_grader.invoke(
        {"question": input_question, "documents": contexts}
    )
    score = score["Score"]

    return {"key": "document_relevance", "score": score}

experiment_results = evaluate(
    predict_rag_answer_with_context,
    data=dataset_name,
    evaluators=[docs_relevance_evaluator],
    experiment_prefix="rag-doc-relevance",
    metadata={"version": "LCEL context, gpt-4-0125-preview"},
)

IV. Evaluate Intermediate Steps

While evaluating the final output of a task is enough in many cases, sometimes you may want to evaluate intermediate steps of the pipeline.

For example, for retrieval-augmented generation (RAG), you might want to:

  • Evaluate the retrieval step, to make sure the correct documents are retrieved for the input query.

  • Evaluate the generation step, to make sure the correct answer is generated from the retrieved documents.


from langsmith.schemas import Example, Run
from langsmith.evaluation import evaluate

def document_relevance_grader(root_run: Run, example: Example) -> dict:
    """A simple evaluator that checks to see if retrieved documents are relevant to the question"""

    # Get specific steps in our RAG pipeline, which are noted with @traceable decorator
    rag_pipeline_run = next(
        run for run in root_run.child_runs if run.name == "get_answer"
    )
    retrieve_run = next(
        run for run in rag_pipeline_run.child_runs if run.name == "retrieve_docs"
    )
    contexts = "\n\n".join(
        doc.page_content for doc in retrieve_run.outputs["output"]
    )
    input_question = example.inputs["input_question"]

    # LLM grader
    llm = ChatOpenAI(model="gpt-4-turbo", temperature=0)

    # Structured prompt
    answer_grader = grade_prompt_doc_relevance | llm

    # Get score
    score = answer_grader.invoke(
        {"question": input_question, "documents": contexts}
    )
    score = score["Score"]

    return {"key": "document_relevance", "score": score}

def answer_hallucination_grader(root_run: Run, example: Example) -> dict:
    """A simple evaluator that checks to see the answer is grounded in the documents"""

    # RAG input
    rag_pipeline_run = next(
        run for run in root_run.child_runs if run.name == "get_answer"
    )
    retrieve_run = next(
        run for run in rag_pipeline_run.child_runs if run.name == "retrieve_docs"
    )
    contexts = "\n\n".join(
        doc.page_content for doc in retrieve_run.outputs["output"]
    )

    # RAG output
    prediction = rag_pipeline_run.outputs["answer"]

    # LLM grader
    llm = ChatOpenAI(model="gpt-4-turbo", temperature=0)

    # Structured prompt
    answer_grader = grade_prompt_hallucinations | llm

    # Get score
    score = answer_grader.invoke(
        {"documents": contexts, "student_answer": prediction}
    )
    score = score["Score"]

    return {"key": "answer_hallucination", "score": score}

experiment_results = evaluate(
    predict_rag_answer,
    data=dataset_name,
    evaluators=[document_relevance_grader, answer_hallucination_grader],
    metadata={"version": "LCEL context, gpt-4-0125-preview"},
)
