链载Ai
Title: RAG: Using a Local Vector Embedding Model in LangChain
Author: 链载Ai    Time: 1 hour ago
Embedding models are one of the key technologies behind effective retrieval and generation in a RAG system: they let the system handle complex language-understanding tasks and produce more accurate, more relevant output.

An embedding model converts text into vectors so that fast similarity search can be performed in a high-dimensional space, which is the foundation of retrieval in RAG. Through vectorization, the model can measure the semantic similarity between texts, finding semantically related documents even when the wording does not match exactly. Embedding models also help the system capture the context of the input query, which is essential for understanding user intent and retrieving the most relevant information.

This article shows how to use your own embedding model in LangChain, clearing the obstacles so you can quickly build RAG and Agent pipelines.

Environment Dependencies

The packages used in this example:

```shell
pip install torch langchain sentence_transformers
```
Model Selection

You can browse the MTEB leaderboard to find an embedding model suited to your own use case:

```
# Leaderboard
https://huggingface.co/spaces/mteb/leaderboard
```
This article uses bge-m3 as the running example: it produces 1024-dimensional embeddings, supports a maximum input length of 8192 tokens, and is a multilingual model whose quality is currently quite good. A later article will cover how to select and evaluate embedding models.
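Under the hood, retrieval ranks candidate documents by how similar their embedding vectors are to the query vector, most commonly via cosine similarity. The following is a minimal pure-Python sketch of that ranking step; the toy 3-dimensional vectors and document names are illustrative stand-ins for real 1024-dimensional bge-m3 output:

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (||a|| * ||b||)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" (hypothetical values for illustration only)
query = [1.0, 0.0, 1.0]
docs = {
    "doc_a": [0.9, 0.1, 0.8],
    "doc_b": [-1.0, 0.5, 0.0],
}

# Rank documents by similarity to the query, highest first
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]),
                reverse=True)
print(ranked[0])  # doc_a
```

Vocabulary overlap plays no role here: only the geometry of the vectors matters, which is why a good embedding model can retrieve documents that share no words with the query.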

Sample Code

Here is the code; it can be dropped directly into your own project:

```python
import torch
from typing import Any, List
from pydantic import Extra
from langchain.embeddings.base import Embeddings
from sentence_transformers import SentenceTransformer

device = 'cpu'


class CustomEmbedding(Embeddings):
    client: Any  #: :meta private:
    tokenizer: Any
    context_sequence_length: int = 512
    query_sequence_length: int = 512
    model_name: str = ''
    """Model name to use."""

    def __init__(self, **kwargs: Any):
        """Initialize the sentence_transformer."""
        # super().__init__(**kwargs)
        self.client = SentenceTransformer(
            'BAAI/bge-m3',
            device=device,
            trust_remote_code=True)
        self.context_sequence_length = 512
        self.query_sequence_length = 512

    class Config:
        extra = Extra.forbid

    @staticmethod
    def mean_pooling(model_output, attention_mask):
        # First element of model_output contains all token embeddings
        token_embeddings = model_output[0]
        input_mask_expanded = attention_mask.unsqueeze(-1).expand(
            token_embeddings.size()).float()
        return torch.sum(token_embeddings * input_mask_expanded, 1) / \
            torch.clamp(input_mask_expanded.sum(1), min=1e-9)

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        with torch.no_grad():
            embeddings = self.client.encode(texts)
            embeddings = embeddings.astype('float32')
            return embeddings.tolist()

    def embed_query(self, text: str) -> List[float]:
        return self.embed_documents([text])[0]


# Quick test
model = CustomEmbedding()
emb = model.embed_query("张三")
print(len(emb))
```
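The only contract LangChain relies on is the pair of methods `embed_documents` and `embed_query`; any object exposing them can be swapped in wherever an embedding model is expected. The sketch below demonstrates that contract with a hypothetical `FakeEmbedding` class that produces deterministic hash-based vectors, so it runs without downloading a model (the 8-dimensional output is illustrative; a real bge-m3 backend yields 1024 dimensions):

```python
import hashlib
from typing import List


class FakeEmbedding:
    """Stand-in implementing the same two-method contract as CustomEmbedding.

    Hypothetical example: vectors are derived from SHA-256 digests,
    purely to illustrate the interface without a model download.
    """
    dim = 8

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        out = []
        for t in texts:
            digest = hashlib.sha256(t.encode("utf-8")).digest()
            # Map the first `dim` digest bytes into [0, 1]
            out.append([b / 255.0 for b in digest[:self.dim]])
        return out

    def embed_query(self, text: str) -> List[float]:
        return self.embed_documents([text])[0]


emb = FakeEmbedding()
vec = emb.embed_query("张三")
print(len(vec))  # 8
```

Because the interface is this small, unit tests for a RAG pipeline can use a fake like this and swap in the real model only in integration tests.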
If it runs correctly, the output is 1024, meaning the query was embedded as a 1024-dimensional vector. This model can be used to replace the OpenAIEmbeddings from the previous article.
| Welcome to 链载Ai (https://www.lianzai.com/) |
Powered by Discuz! X3.5 |