PageIndex: A Reasoning-Based RAG Framework

I have recently been surveying various RAG techniques and came across PageIndex. Its approach struck me as well worth learning from, so I have put together the key points about PageIndex here.

1. What is PageIndex

PageIndex is a vectorless, reasoning-based information retrieval framework for knowledge retrieval from long, complex documents. Its design philosophy is to mimic the way a human expert reads a document and locates information: the document is structured into a tree, and a large language model reasons and navigates over that structure, yielding explainable, vector-free retrieval over long documents.

Its core features include:

  - Vectorless: no embedding model or vector database is required
  - Reasoning-based: an LLM navigates a tree-structured index of the document to decide where to look
  - Explainable: the navigation path through the tree shows why each section was retrieved
  - Built for long, complex documents rather than short passages

If you are interested, you can try it out on the official PageIndex website.

2. Why PageIndex is designed this way

Pain points of traditional vector-based RAG

Vector-based RAG relies on semantic embeddings and a vector database to identify relevant text chunks.

In the preprocessing stage, documents are first split into smaller chunks, each chunk is embedded into a vector space with an embedding model, and the resulting vectors are stored in a vector database such as Chroma or Pinecone.

At query time, the user query is embedded with the same embedding model, the vector database is searched for semantically similar chunks, and the top-k results are retrieved and used to build the model's input context.
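For concreteness, here is a minimal sketch of such a pipeline, assuming Chroma as the vector database and the OpenAI embeddings API; the chunk size, model name, and collection name are illustrative choices rather than anything prescribed by PageIndex:

import chromadb
from openai import OpenAI

openai_client = OpenAI()                              # assumes OPENAI_API_KEY is set in the environment
chroma_client = chromadb.Client()
collection = chroma_client.create_collection("docs")  # illustrative collection name

def embed(texts):
    # Embed a list of strings; the embedding model name here is an illustrative choice.
    resp = openai_client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [d.embedding for d in resp.data]

def index_document(doc_id, text, chunk_size=512):
    # Preprocessing stage: hard-split the document into fixed-size chunks and store them.
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    collection.add(
        ids=[f"{doc_id}-{i}" for i in range(len(chunks))],
        documents=chunks,
        embeddings=embed(chunks),
    )

def retrieve(query, k=5):
    # Query stage: embed the query with the same model and return the top-k similar chunks.
    result = collection.query(query_embeddings=embed([query]), n_results=k)
    return result["documents"][0]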

Although simple and effective for short texts, vector-based RAG faces several major challenges:

  1. Mismatch between the query and the knowledge space
  2. Semantic similarity is not the same as true relevance
  3. Hard chunking breaks semantic and contextual integrity
  4. Inability to integrate conversation history
  5. Difficulty handling cross-references within a document

3. How PageIndex addresses these pain points

Traditional vector-based RAG suffers from five major pain points: mismatch between the query and the knowledge space, semantic similarity not equaling true relevance, hard chunking breaking semantic integrity, inability to integrate chat history, and difficulty handling in-document cross-references. Reasoning-based RAG frameworks such as PageIndex mimic the way humans read long documents and combine a structured index with a dynamic reasoning process to address each of these problems, as follows:

  1. Addressing "mismatch between the query and the knowledge space": use reasoning to locate where the information lives, rather than relying on semantic similarity. Traditional vector RAG retrieves only by matching semantically similar text and cannot bridge the gap between query intent and information location; reasoning-based RAG lets the LLM reason over the document structure to decide where to search (a minimal sketch of such a navigation loop follows this list).
  2. Addressing "semantic similarity is not true relevance": focus on contextual relevance rather than surface semantics. In professional documents, passages can be semantically close yet vary greatly in actual relevance; reasoning-based RAG combines contextual understanding with structured navigation to select the information that is genuinely relevant.
  3. Addressing "hard chunking breaks semantic and contextual integrity": dynamically retrieve semantically coherent units instead of fixed-length chunks. Traditional vector RAG splits documents into fixed-length chunks (e.g. 512 tokens) to fit the embedding model, which easily breaks the logical flow; reasoning-based RAG retrieves complete semantic units such as sections.
  4. Addressing "inability to integrate chat history": multi-turn reasoning links the historical context to keep retrieval coherent. Rather than treating every query independently, reasoning-based RAG folds the chat history into the retrieval decision.
  5. Addressing "difficulty with in-document cross-references": follow references through the hierarchical index without extra preprocessing. Traditional vector RAG cannot resolve links such as "see Appendix G" or "refer to Table 5.3" because the citing sentence and the cited content are not semantically similar; reasoning-based RAG navigates directly to the referenced content via the structured index.
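As a rough illustration of points 1 and 4, the sketch below shows one hypothetical way an LLM could navigate the ToC tree while taking the chat history into account. The call_llm helper, the prompt wording, and the navigate_tree function are assumptions made for illustration; they are not the PageIndex API:

import json

def call_llm(prompt: str) -> str:
    # Placeholder for any chat-completion call; it should return the model's raw text reply.
    raise NotImplementedError

def navigate_tree(root, query, chat_history, max_depth=5):
    # Walk the ToC tree top-down, letting the LLM choose which child sections to open,
    # with the chat history included in every decision so follow-up questions stay coherent.
    selected, frontier = [], [root]
    for _ in range(max_depth):
        next_frontier = []
        for node in frontier:
            children = node.get("nodes", [])
            if not children:                          # leaf node: keep it as a retrieval candidate
                selected.append(node["node_id"])
                continue
            outline = [{"node_id": c["node_id"], "title": c["title"],
                        "summary": c.get("summary", "")} for c in children]
            prompt = (
                "Chat history:\n" + json.dumps(chat_history) + "\n\n"
                "Query: " + query + "\n\n"
                "Candidate sections:\n" + json.dumps(outline, indent=2) + "\n\n"
                'Reply with JSON like {"node_ids": [...]} listing the sections worth opening.'
            )
            chosen = set(json.loads(call_llm(prompt))["node_ids"])
            next_frontier += [c for c in children if c["node_id"] in chosen]
        if not next_frontier:
            break
        frontier = next_frontier
    return selected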

4. The ToC structure

Below is an example of the tree-structured index (a table-of-contents tree) that PageIndex builds for a document. Each node carries a title, a node_id, a summary, and the start/end page indices of the corresponding section:

{
    "structure": [
        {
            "nodes": [
                {
                    "title": "Abstract",
                    "node_id": "0001",
                    "summary": "This text discusses the increasing importance of fine-tuning large language models (LLMs) for human intent alignment, highlighting the need for efficient resource utilization. It contrasts Reinforcement Learning from Human or AI Preferences (RLHF/RLAIF), which is complex and unstable, with Direct Preference Optimization (DPO), a simpler alternative. The work introduces an active learning strategy for DPO, proposing an acquisition function that uses predictive entropy and the certainty of the implicit preference model to improve the efficiency and effectiveness of fine-tuning with pairwise preference data.",
                    "end_index": 1,
                    "start_index": 1
                },
                {
                    "nodes": [
                        {
                            "title": "3.1. Acquisition functions",
                            "node_id": "0005",
                            "summary": "### 3.1. Acquisition functions\n\nIn selecting scoring methods (step 8 in 1) we aim for options that are straightforward to implement and do not require modifications to the model architectures or the fine-tuning procedure itself. This allows for a drop in addition to existing implementations. As a result, we propose using the predictive entropy of  $p_{\\theta_t}(y|x)$  as well as a measure of certainty under the Bradley-Terry preference model, which leverages the implicit reward model in DPO.\n",
                            "end_index": 4,
                            "start_index": 3
                        }
                    ],
                    "title": "3 Active Preference Learning",
                    "node_id": "0004",
                    "summary": "This text introduces Active Preference Learning (APL), a machine learning paradigm for efficiently selecting the most informative data points during training, specifically within a pool-based active learning setting. The APL training procedure involves iteratively sampling prompts, generating pairs of completions using the current model, ranking these pairs with an acquisition function, selecting the highest-ranked pairs for preference labeling by an oracle, and then fine-tuning the model with these labeled preferences. This approach augments the standard DPO fine-tuning loop with an outer data acquisition loop, where the number of acquisition steps is determined by the labeling budget and batch size. A key difference from traditional active learning is the necessity of generating completions for acquired data before scoring, especially if the acquisition function requires them. The text also outlines crucial design considerations, including the selection of acquisition functions, fine-tuning implementation details, the choice of oracle, and experimental settings for sampling parameters. Algorithm 1 provides a detailed step-by-step breakdown of the entire APL procedure.",
                    "end_index": 3,
                    "start_index": 2
                }
            ]
        }
    ]
}
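As a small illustration (not part of PageIndex itself), a tree like the one above can be flattened into a node_id-to-node lookup, which is convenient once an LLM returns a list of node_ids during tree search; toc_json here is assumed to hold the JSON above as a string:

import json

def flatten_tree(nodes, index=None):
    # Recursively build a {node_id: node} lookup from the ToC structure shown above.
    if index is None:
        index = {}
    for node in nodes:
        if "node_id" in node:
            index[node["node_id"]] = node
        flatten_tree(node.get("nodes", []), index)
    return index

toc = json.loads(toc_json)                  # toc_json: the JSON document shown above, as a string
node_index = flatten_tree(toc["structure"])
print(node_index["0005"]["title"])          # -> "3.1. Acquisition functions"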

5. PageIndex retrieval methods

Document retrieval



PageIndex first uses your query to determine which documents are relevant. There are roughly three ways to do document retrieval; one of them has an LLM select relevant documents from their names and descriptions with a prompt like the following:

prompt = f"""
You are given a list of documents with their IDs, file names, and descriptions. Your task is to select documents that may contain information relevant to answering the user query.

Query: {query}

Documents: [
    {{
        "doc_id": "xxx",
        "doc_name": "xxx",
        "doc_description": "xxx"
    }}
]

Response Format:
{{
    "thinking": "<Your reasoning for document selection>",
    "answer": <Python list of relevant doc_ids>, e.g. ['doc_id1', 'doc_id2']. Return [] if no documents are relevant.
}}

Return only the JSON structure, with no additional output.
"""

ToC tree retrieval



The LLM reasons over the table-of-contents tree to identify the relevant nodes; once the contents of those nodes have been fetched, answer generation proceeds iteratively.

prompt = f"""
You are given a query and the tree structure of a document.
You need to find all nodes that are likely to contain the answer.

Query: {query}

Document tree structure: {PageIndex_Tree}

Reply in the following JSON format:
{{
  "thinking": <your reasoning about which nodes are relevant>,
  "node_list": [node_id1, node_id2, ...]
}}
"""

Beyond this, hybrid tree retrieval is also supported, for example recalling candidates at the chunk level first and then filtering them down to node-level results.
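One rough sketch of how such a hybrid scheme could work, with simple keyword overlap standing in for whatever chunk-level recall (BM25, embeddings, etc.) is actually used; the point is mapping recalled chunks back to candidate node_ids before the LLM tree search:

def chunk_tree(node_index, pages, chunk_size=512):
    # Split every node's text into chunks, remembering which node each chunk came from.
    chunks = []
    for node_id, node in node_index.items():
        text = fetch_node_text(node, pages)
        for i in range(0, len(text), chunk_size):
            chunks.append((node_id, text[i:i + chunk_size]))
    return chunks

def hybrid_recall(query, chunks, top_k=20):
    # Stand-in scorer using keyword overlap; a real system would use BM25 or embeddings.
    q_terms = set(query.lower().split())
    ranked = sorted(chunks, key=lambda c: len(q_terms & set(c[1].lower().split())), reverse=True)
    # Map the top chunks back to node_ids; these candidates then go into the LLM tree search.
    return {node_id for node_id, _ in ranked[:top_k]}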

6. References

  - PageIndex: Next-Generation Vectorless, Reasoning-based RAG (https://pageindex.ai/blog/pageindex-intro)
  - PageIndex official documentation (https://docs.pageindex.ai/)
  - RAG for Technical Manuals: Challenges & Solutions (https://pageindex.ai/blog/technical-manuals)
  - Vectorless RAG (https://docs.pageindex.ai/cookbook/vectorless-rag-pageindex)
  - Vision RAG (https://docs.pageindex.ai/cookbook/vision-rag-pageindex)





