Author: 链载Ai    Time: yesterday, 21:32
Title: Supercharge Your RAG! Relevant Segment Extraction (RSE) Gives Your AI Retrieval a Beginning and an End, No More Fragmentation

Still using traditional RAG (Retrieval-Augmented Generation) to retrieve a pile of scattered text fragments and making the LLM stitch them together itself? Come on, you're turning your AI into a jigsaw-puzzle master! Today, let's talk about how Relevant Segment Extraction (RSE) gives your RAG system coherent context, so the model can produce answers that are more reliable and better reasoned!


1. The "Fragmentation" Problem of Traditional RAG

First, a soul-searching question: have you ever watched an LLM get tortured into incoherence by a pile of text chunks with no beginning and no end?

The traditional RAG recipe goes like this:

  1. Split the document into small chunks, a few hundred characters each.
  2. When the user asks a question, retrieve the Top-K most relevant chunks, concatenate them into a context, and hand it to the LLM.
  3. The LLM does its best to "fill in the gaps" and produces an answer.

Looks reasonable, but there's a trap: relevant content is usually contiguous in the source document, yet what you retrieve is a piece from here and a piece from there. The context is broken, information is lost, and the model has to work much harder to make sense of it.
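To make the contrast concrete, here is roughly what that traditional pipeline looks like in code. A minimal sketch: it leans on the create_embeddings helper and SimpleVectorStore built later in this post, and the glue code is illustrative, not a fixed API.

def naive_topk_context(query, chunks, vector_store, top_k=5):
    # Embed the query and grab the K highest-scoring chunks.
    query_embedding = create_embeddings([query])[0]
    results = vector_store.search(query_embedding, top_k=top_k)
    # Fragments are concatenated in score order, not document order:
    # exactly the scattered-pieces problem described above.
    return "\n\n".join(result["document"] for result in results)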

A quick example: you ask, "What is explainable AI, and why does it matter?" Traditional RAG might hand back a handful of disconnected fragments, each one starting and ending mid-thought.

The model's reaction: what even is this? Am I supposed to reassemble these scraps into a paper myself? Exhausting!


2. RSE: Retrieve Whole Segments, Keep the Context Coherent

Enter Relevant Segment Extraction (RSE)! Its core idea is simple:

Relevant content tends to be contiguous within a document, so retrieve it segment by segment instead of making the model piece fragments together!

The RSE pipeline works like this:

  1. Chunk the document: same as before, split it into small chunks.
  2. Score relevance: compute a relevance score between each chunk and the user's query.
  3. Extract contiguous segments: use a maximum-subarray-style algorithm to find contiguous runs of high-scoring chunks and merge them into complete context segments.
  4. Rebuild the context: concatenate those segments and feed them to the LLM.

This way, the model receives contiguous context with a proper beginning and end. That's far easier to understand, and the output is correspondingly more reliable!


3. RSE Step by Step, in Code

3.1 PDF Text Extraction

First, use PyMuPDF (fitz) to pull all the text out of the PDF:

import fitz

def extract_text_from_pdf(pdf_path):
    mypdf = fitz.open(pdf_path)
    all_text = ""
    for page_num in range(mypdf.page_count):
        page = mypdf[page_num]
        text = page.get_text("text")
        all_text += text
    return all_text

3.2 Text Chunking

The art of chunking: 800 characters per chunk, no overlap, which keeps contiguous-segment reconstruction simple later.

def chunk_text(text, chunk_size=800, overlap=0):
    chunks = []
    for i in range(0, len(text), chunk_size - overlap):
        chunk = text[i:i + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks
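Putting extraction and chunking together might look like this; the PDF path is a placeholder:

# Illustrative usage; "example.pdf" is a placeholder path.
text = extract_text_from_pdf("example.pdf")
chunks = chunk_text(text, chunk_size=800, overlap=0)
print(f"Split into {len(chunks)} chunks of up to 800 characters each")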

3.3 Embeddings and the Vector Store

Use the OpenAI API (or any other embedding model) to turn each chunk into a vector and store it in a hand-rolled SimpleVectorStore:

import numpy as np

class SimpleVectorStore:
    def __init__(self, dimension=1536):
        self.dimension = dimension
        self.vectors = []
        self.documents = []
        self.metadata = []

    def add_documents(self, documents, vectors=None, metadata=None):
        if vectors is None:
            vectors = [None] * len(documents)
        if metadata is None:
            metadata = [{} for _ in range(len(documents))]
        for doc, vec, meta in zip(documents, vectors, metadata):
            self.documents.append(doc)
            self.vectors.append(vec)
            self.metadata.append(meta)

    def search(self, query_vector, top_k=5):
        query_array = np.array(query_vector)
        similarities = []
        for i, vector in enumerate(self.vectors):
            if vector is not None:
                # Cosine similarity between the query and each stored vector.
                similarity = np.dot(query_array, vector) / (
                    np.linalg.norm(query_array) * np.linalg.norm(vector)
                )
                similarities.append((i, similarity))
        similarities.sort(key=lambda x: x[1], reverse=True)
        results = []
        for i, score in similarities[:top_k]:
            results.append({
                "document": self.documents[i],
                "score": float(score),
                "metadata": self.metadata[i]
            })
        return results
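The create_embeddings helper used throughout the rest of the code isn't shown above, so here is a minimal sketch. It assumes an OpenAI-compatible client and the text-embedding-3-small model; both are assumptions, so swap in whatever embedding endpoint you actually use:

from openai import OpenAI

# Assumption: an OpenAI-compatible endpoint (the same client is reused for chat later).
client = OpenAI()

def create_embeddings(texts, model="text-embedding-3-small"):
    # Returns one embedding vector (a list of floats) per input text.
    response = client.embeddings.create(model=model, input=texts)
    return [item.embedding for item in response.data]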

3.4 Relevance Scoring

Score every chunk against the query, then subtract an irrelevance penalty to get each chunk's "value":

def calculate_chunk_values(query, chunks, vector_store, irrelevant_chunk_penalty=0.2):
    query_embedding = create_embeddings([query])[0]
    num_chunks = len(chunks)
    # Retrieve a score for every chunk, not just the top few.
    results = vector_store.search(query_embedding, top_k=num_chunks)
    relevance_scores = {result["metadata"]["chunk_index"]: result["score"] for result in results}
    chunk_values = []
    for i in range(num_chunks):
        score = relevance_scores.get(i, 0.0)
        # The penalty pushes irrelevant chunks below zero, so including
        # them in a segment costs more than it gains.
        value = score - irrelevant_chunk_penalty
        chunk_values.append(value)
    return chunk_values

3.5 Contiguous Segment Extraction (the Core of RSE)

Using the maximum-subarray idea, find contiguous runs of high-value chunks, keeping each segment to at most 20 chunks and the total to at most 30:

def find_best_segments(chunk_values, max_segment_length=20, total_max_length=30, min_segment_value=0.2):
    best_segments = []
    segment_scores = []
    total_included_chunks = 0
    # Greedily pick the highest-value non-overlapping segment until the
    # chunk budget is spent or no candidate clears the minimum value.
    while total_included_chunks < total_max_length:
        best_score = min_segment_value
        best_segment = None
        for start in range(len(chunk_values)):
            # Skip starts that fall inside an already-selected segment.
            if any(start >= s[0] and start < s[1] for s in best_segments):
                continue
            for length in range(1, min(max_segment_length, len(chunk_values) - start) + 1):
                end = start + length
                # Skip candidates that overlap any selected segment.
                if any(start < s[1] and end > s[0] for s in best_segments):
                    continue
                # A segment's value is the sum of its chunk values
                # (the maximum-subarray criterion).
                segment_value = sum(chunk_values[start:end])
                if segment_value > best_score:
                    best_score = segment_value
                    best_segment = (start, end)
        if best_segment:
            best_segments.append(best_segment)
            segment_scores.append(best_score)
            total_included_chunks += best_segment[1] - best_segment[0]
        else:
            break
    # Return the segments in document order.
    best_segments = sorted(best_segments, key=lambda x: x[0])
    return best_segments, segment_scores
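A quick sanity check with hand-made values shows the behavior: contiguous high-value runs merge into single segments, while chunks pushed below zero by the penalty act as walls between them. The numbers here are invented for illustration:

# Two high-relevance runs separated by strongly irrelevant chunks.
values = [0.7, 0.6, -0.6, -0.7, 0.5, 0.45, -0.3]
segments, scores = find_best_segments(values)
print(segments)  # [(0, 2), (4, 6)] -- two contiguous segments
print(scores)    # approximately [1.3, 0.95] -- summed chunk values per segment

Note that the deep dip in the middle is what keeps the two runs apart; if the dip were shallow (say -0.1 per chunk), one long bridging segment would sum higher than two short ones, and the algorithm would happily merge them.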

3.6 Segment Reconstruction and Formatting

Stitch the contiguous chunks back into complete context segments and format them for the LLM:

def reconstruct_segments(chunks, best_segments):
    reconstructed_segments = []
    for start, end in best_segments:
        segment_text = " ".join(chunks[start:end])
        reconstructed_segments.append({
            "text": segment_text,
            "segment_range": (start, end),
        })
    return reconstructed_segments

def format_segments_for_context(segments):
    context = []
    for i, segment in enumerate(segments):
        segment_header = f"SEGMENT {i+1} (Chunks {segment['segment_range'][0]}-{segment['segment_range'][1]-1}):"
        context.append(segment_header)
        context.append(segment['text'])
        context.append("-" * 80)
    return "\n\n".join(context)

3.7 Answer Generation

Hand the formatted context and the question to the LLM to generate the final answer:

def generate_response(query, context, model="meta-llama/Llama-3.2-3B-Instruct"):
    system_prompt = """You are a helpful assistant that answers questions based on the provided context.
The context consists of document segments that have been retrieved as relevant to the user's query.
Use the information from these segments to provide a comprehensive and accurate answer.
If the context doesn't contain relevant information to answer the question, say so clearly."""
    user_prompt = f"""
Context:
{context}

Question: {query}

Please provide a helpful answer based on the context provided.
"""
    # `client` is the same OpenAI-compatible client used for embeddings above.
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt}
        ],
        temperature=0
    )
    return response.choices[0].message.content
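Wiring everything together: a minimal end-to-end driver under the same assumptions as above (the sketched create_embeddings and client, plus a placeholder PDF path):

def rag_with_rse(pdf_path, query):
    # 1. Extract and chunk the document.
    text = extract_text_from_pdf(pdf_path)
    chunks = chunk_text(text, chunk_size=800, overlap=0)

    # 2. Embed and index the chunks, remembering each chunk's position.
    vectors = create_embeddings(chunks)
    metadata = [{"chunk_index": i} for i in range(len(chunks))]
    store = SimpleVectorStore()
    store.add_documents(chunks, vectors=vectors, metadata=metadata)

    # 3. Score chunks, extract contiguous segments, rebuild the context.
    chunk_values = calculate_chunk_values(query, chunks, store)
    best_segments, _ = find_best_segments(chunk_values)
    context = format_segments_for_context(reconstruct_segments(chunks, best_segments))

    # 4. Generate the final answer.
    return generate_response(query, context)

print(rag_with_rse("example.pdf", "What is explainable AI, and why does it matter?"))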

4. RSE vs. Traditional Top-K Retrieval: Which Wins?

Time for a head-to-head!
Same question for both: "What is explainable AI, and why does it matter?"

Test results

Even when we let the LLM judge the two answers itself, RSE came out ahead on staying on-topic, contextual coherence, and completeness of information!
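If you want to reproduce the head-to-head yourself, one simple harness is to build the index once, run both retrieval strategies over it, and compare the answers side by side. A sketch under the same assumptions as the rest of the code in this post:

def compare_topk_vs_rse(pdf_path, query, top_k=5):
    text = extract_text_from_pdf(pdf_path)
    chunks = chunk_text(text)
    store = SimpleVectorStore()
    store.add_documents(chunks, vectors=create_embeddings(chunks),
                        metadata=[{"chunk_index": i} for i in range(len(chunks))])

    # Baseline: Top-K fragments, concatenated in score order.
    query_vector = create_embeddings([query])[0]
    topk_context = "\n\n".join(
        result["document"] for result in store.search(query_vector, top_k=top_k)
    )

    # RSE: contiguous segments, in document order.
    chunk_values = calculate_chunk_values(query, chunks, store)
    best_segments, _ = find_best_segments(chunk_values)
    rse_context = format_segments_for_context(reconstruct_segments(chunks, best_segments))

    return {
        "topk_answer": generate_response(query, topk_context),
        "rse_answer": generate_response(query, rse_context),
    }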


5. RSE's Advantages, Summed Up

  1. Coherent context: no more fragmentation, so the model understands with less effort.
  2. Fuller information coverage: contiguous segments avoid dropping key surrounding context.
  3. Less irrelevant content: low-relevance chunks are penalized, cutting noise.
  4. Strong controllability: segment length and total length are tunable, flexibly adapting to different scenarios.

6. Where RSE Fits, and Advanced Ideas

RSE fits wherever relevant content runs in contiguous passages rather than isolated sentences.

Advanced ideas:
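One concrete direction, using only the knobs already exposed in the code above: keep per-scenario presets for the penalty and segment limits, and route a preset through scoring and extraction in one call. The preset values below are illustrative guesses, not benchmarked settings:

# Illustrative presets -- tune against your own corpus before trusting them.
RSE_PRESETS = {
    # Short FAQ-style documents: small segments, harsher penalty to cut noise.
    "faq":    {"penalty": 0.3, "max_segment": 5,  "max_total": 10},
    # Long reports or papers: allow longer contiguous passages.
    "report": {"penalty": 0.2, "max_segment": 20, "max_total": 30},
}

def rse_segments_with_preset(query, chunks, store, preset="report"):
    p = RSE_PRESETS[preset]
    chunk_values = calculate_chunk_values(
        query, chunks, store, irrelevant_chunk_penalty=p["penalty"])
    return find_best_segments(
        chunk_values,
        max_segment_length=p["max_segment"],
        total_max_length=p["max_total"])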


7. Closing: Give Your RAG a Beginning and an End, So the Model Stops "Filling in the Blanks"!

RSE isn't black magic; it's an engineering upgrade that makes retrieval match how humans actually read.
Stop making the LLM clean up after your fragmented retrieval. Arm your RAG system with RSE and get answers that are more reliable and better reasoned!





