ingFang SC", "Hiragino Sans GB", "Microsoft YaHei UI", "Microsoft YaHei", "Source Han Sans CN", sans-serif, "Apple Color Emoji", "Segoe UI Emoji";font-size: 15px;line-height: 1.7;color: rgb(6, 7, 31);font-style: normal;font-variant-ligatures: normal;font-variant-caps: normal;font-weight: 400;letter-spacing: normal;orphans: 2;text-align: start;text-indent: 0px;text-transform: none;widows: 2;word-spacing: 0px;-webkit-text-stroke-width: 0px;white-space: normal;background-color: rgb(253, 253, 254);text-decoration-thickness: initial;text-decoration-style: initial;text-decoration-color: initial;">Retrieval-Augmented Generation(RAG)技术已经成为了一项革命性的突破。它打破了传统语言模型仅依赖预训练知识的局限,通过动态检索外部信息,生成更加相关和准确的回答。本文将详细介绍如何使用LangChain、FAISS和DeepSeek-LLM构建一个处理PDF文档、检索相关内容并生成智能响应的RAG系统。ingFang SC", "Hiragino Sans GB", "Microsoft YaHei UI", "Microsoft YaHei", "Source Han Sans CN", sans-serif, "Apple Color Emoji", "Segoe UI Emoji";color: rgb(5, 7, 59);font-weight: 600;font-size: 20px;border: none;line-height: 1.7;font-style: normal;font-variant-ligatures: normal;font-variant-caps: normal;letter-spacing: normal;orphans: 2;text-align: start;text-indent: 0px;text-transform: none;widows: 2;word-spacing: 0px;-webkit-text-stroke-width: 0px;white-space: normal;background-color: rgb(253, 253, 254);text-decoration-thickness: initial;text-decoration-style: initial;text-decoration-color: initial;">一、RAG技术概述ingFang SC", "Hiragino Sans GB", "Microsoft YaHei UI", "Microsoft YaHei", "Source Han Sans CN", sans-serif, "Apple Color Emoji", "Segoe UI Emoji";font-size: 15px;line-height: 1.7;color: rgb(6, 7, 31);font-style: normal;font-variant-ligatures: normal;font-variant-caps: normal;font-weight: 400;letter-spacing: normal;orphans: 2;text-align: start;text-indent: 0px;text-transform: none;widows: 2;word-spacing: 0px;-webkit-text-stroke-width: 0px;white-space: normal;background-color: rgb(253, 253, 254);text-decoration-thickness: initial;text-decoration-style: initial;text-decoration-color: initial;">RAG技术是一种结合了检索和生成能力的新型语言模型应用方式。其核心在于,首先使用一个检索器从知识库中获取与查询相关的文档片段,然后基于这些检索到的上下文,利用语言模型(LLM)生成回答。这种方式显著提高了回答的准确性和时效性,因为它能够实时地、基于事实地、动态地生成响应。ingFang SC", "Hiragino Sans GB", "Microsoft YaHei UI", "Microsoft YaHei", "Source Han Sans CN", sans-serif, "Apple Color Emoji", "Segoe UI Emoji";color: rgb(5, 7, 59);font-weight: 600;font-size: 20px;border: none;line-height: 1.7;font-style: normal;font-variant-ligatures: normal;font-variant-caps: normal;letter-spacing: normal;orphans: 2;text-align: start;text-indent: 0px;text-transform: none;widows: 2;word-spacing: 0px;-webkit-text-stroke-width: 0px;white-space: normal;background-color: rgb(253, 253, 254);text-decoration-thickness: initial;text-decoration-style: initial;text-decoration-color: initial;">二、技术栈介绍ingFang SC", "Hiragino Sans GB", "Microsoft YaHei UI", "Microsoft YaHei", "Source Han Sans CN", sans-serif, "Apple Color Emoji", "Segoe UI Emoji";font-size: 15px;line-height: 1.7;color: rgb(6, 7, 31);font-style: normal;font-variant-ligatures: normal;font-variant-caps: normal;font-weight: 400;letter-spacing: normal;orphans: 2;text-align: start;text-indent: 0px;text-transform: none;widows: 2;word-spacing: 0px;-webkit-text-stroke-width: 0px;white-space: normal;background-color: rgb(253, 253, 254);text-decoration-thickness: initial;text-decoration-style: initial;text-decoration-color: initial;">在构建 RAG 系统时,选择合适的技术工具至关重要。本文所介绍的系统使用了以下几种关键技术:ingFang SC", "Hiragino Sans GB", "Microsoft YaHei UI", "Microsoft YaHei", "Source Han Sans CN", sans-serif, "Apple Color Emoji", "Segoe UI Emoji";font-size: 
II. The Technology Stack

Choosing the right tools matters when building a RAG system. The system described in this article uses the following key components:

- LangChain: the bridge between the retriever and the language model. LangChain provides a set of convenient tools and interfaces for wiring different components together, letting developers focus on the system's logic.
- FAISS: short for Facebook AI Similarity Search, an efficient vector similarity search library. In the RAG system, FAISS stores the text embeddings and quickly finds the text chunks most similar to a query vector, which greatly speeds up retrieval.
- DeepSeek-LLM: the language model responsible for generating answers. With its strong language understanding and generation ability, DeepSeek-LLM produces high-quality answers on top of the retrieved context.
- Sentence Transformers: converts text into vector representations, that is, text embeddings. These vectors capture the semantics of the text and provide the basis for retrieval and matching (a quick embedding check is sketched right after this list).
- PyTorch: the deep learning framework that loads and runs the DeepSeek-LLM model, using GPU acceleration to improve inference speed.
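As a quick, optional illustration of what "text embedding" means here (this snippet is not in the original walkthrough), you can encode two sentences with the same all-mpnet-base-v2 model used later and compare them. It assumes the sentence-transformers package is installed.

```python
from sentence_transformers import SentenceTransformer, util

# Load the same embedding model used later in the article
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Each text becomes a 768-dimensional vector
vecs = model.encode(["FAISS stores vector embeddings.", "FAISS indexes dense vectors."])
print(vecs.shape)                       # (2, 768)

# Semantically similar sentences get a high cosine similarity
print(util.cos_sim(vecs[0], vecs[1]))
```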
III. Implementation Steps

1. Load the PDF document

First, we use LangChain's PyPDFLoader to load the PDF file and split it into smaller text chunks. This makes the subsequent vectorization step easier.

```python
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

pdf = "/kaggle/input/about-me-rag/about-me-rag.pdf"  # PDF file path

# Load the PDF
loader = PyPDFLoader(pdf)
documents = loader.load()

# Split the text into chunks
text_splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=50)
texts = text_splitter.split_documents(documents)
```

2. Vectorize the text

Next, we use Hugging Face Sentence Transformers to turn the text chunks into vector embeddings and store them in FAISS. This lets us efficiently retrieve the chunks relevant to a query later on.

```python
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS

# Load the embedding model
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")

# Create the vector store
vector_store = FAISS.from_documents(texts, embeddings)
vector_store.save_local("faiss_index")  # Save for reuse
```
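As an optional sanity check (a sketch, not part of the original walkthrough), you can reload the saved index and run a raw similarity search before involving the LLM; the query string here is only an example, and the exact load_local signature can vary between LangChain versions.

```python
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")

# Reload the index saved above (newer LangChain versions may also require
# allow_dangerous_deserialization=True here)
vector_store = FAISS.load_local("faiss_index", embeddings)

# Return the 3 chunks whose embeddings are closest to the query embedding
for doc in vector_store.similarity_search("What is this document about?", k=3):
    print(doc.page_content[:100])
```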
3. Load the DeepSeek-LLM model

The DeepSeek-LLM model generates the answers. We load it with the Hugging Face Transformers library and move it to the GPU, when one is available, to speed up inference.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_version = "deepseek-ai/deepseek-llm-7b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_version)
model = AutoModelForCausalLM.from_pretrained(model_version)

# Move the model to GPU if available
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
```

4. Define the RAG pipeline

Now we connect all of the components into a complete RAG pipeline. We use LangChain's RetrievalQA chain for this, which lets us specify the retriever, the language model, and the prompt template.

```python
from langchain.llms import HuggingFacePipeline
from transformers import pipeline
from langchain.prompts import PromptTemplate
from langchain.chains import RetrievalQA

# Define the retriever
retriever = vector_store.as_retriever(search_kwargs={"k": 3})  # Retrieve the top 3 chunks

# Create a Hugging Face pipeline for text generation
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=256,
    temperature=0.7,
)

# Wrap the pipeline for LangChain compatibility
llm = HuggingFacePipeline(pipeline=pipe)

# Define the prompt template
template = """Use the following context to answer the question. If unsure, say "I don't know."

Context: {context}

Question: {question}

Answer:"""
prompt = PromptTemplate(template=template, input_variables=["context", "question"])

# Define the RAG chain
rag_chain = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=retriever,
    chain_type_kwargs={"prompt": prompt},
    return_source_documents=True,
)
```

5. Query the system

Finally, we can send a query to the system and get an answer grounded in the PDF's content. The RAG pipeline retrieves the relevant text chunks and uses the DeepSeek-LLM model to generate the answer.

```python
query = "What is Singistic?"
result = rag_chain({"query": query})

# Extract the generated answer
answer = result["result"].split("Answer:")[1].strip()
print(answer)
```
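Because the chain was created with return_source_documents=True, the returned dict also carries the chunks that were retrieved for the query. Printing them (a small optional addition, not in the original walkthrough) is an easy way to verify which parts of the PDF grounded the answer.

```python
# Inspect the retrieved chunks that grounded the answer
for doc in result["source_documents"]:
    # PyPDFLoader records the page number of each chunk in its metadata
    print(doc.metadata.get("page"), "-", doc.page_content[:80])
```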