
Title: [24/100 AI App Experiences] graphRAG: explain why Su Dongpo was optimistic

Author: 链载Ai    Time: 2025-12-2 10:52

┃ This post is fairly long, so first, a disclaimer

Experience project No. 24: graphRAG

┃ Summary of the experience

The "Data: Reports (XX, XX, XX, XX, XX)" notations in the answer point to specific entries in the final generated knowledge-graph file [create_final_community_reports.parquet]; each number is the ID of a community. Let's take the second-to-last paragraph, "Personal experience and personality", as a quick example:


Personal experience and personality

Su Dongpo's optimism and open-mindedness are also tied to his personal experience and character. He went through repeated demotions and exile, yet each time he managed to find joy in life and inspiration for his writing. His attitude toward life and his literary works had a profound influence on later generations, becoming a source of comfort and strength for many people facing adversity [Data: Reports (81, 85, 59, 44, 110)].

Look at 81: this community's theme is "Su Xun and Song-dynasty literature and politics", which covers Su Xun's influence on Su Dongpo.

Now look at 85: this community (a knowledge-graph term, "community") has the theme "The Su brothers and Song-dynasty culture", emphasizing how the Song cultural environment shaped the brothers Su Shi and Su Zhe.


What we see in these two communities is a summary of the source text along separate dimensions, synthesized from both the large language model (LLM) and the knowledge-graph structure; it is more precise than an answer from the LLM alone.
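By the way, you can verify these citations yourself: the report file is plain parquet and opens with pandas. A minimal sketch, assuming pandas and pyarrow are installed; <timestamp> is a placeholder for your actual run folder, and column names are as I found them in this graphrag version ↓

import pandas as pd

# Load the final community reports produced by the indexing run.
# <timestamp> is a placeholder for your own output folder name.
reports = pd.read_parquet(
    "sudongpo/output/<timestamp>/artifacts/create_final_community_reports.parquet"
)

# Each row is one community report; the 'community' column holds the ID
# cited as [Data: Reports (...)] in answers. IDs may be stored as strings
# depending on the version, so compare as strings to be safe.
for cid in ("81", "85"):
    row = reports.loc[reports["community"].astype(str) == cid].iloc[0]
    print(cid, "-", row["title"])   # the community's theme
    print(row["summary"], "\n")     # the LLM-written summary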

Back to the deployment and experience process (the process was grueling; getting it running was a delight).

┃ Deployment environment

Operating system (OS): either macOS or Windows works

Python version: 3.10 or 3.11 recommended

Deployment method: install graphRAG with pip, run locally

┃ Deployment steps

1. Create a clean Python environment with conda


conda create -n graphrag python=3.11
conda activate graphrag

2. Install GraphRAG with pip inside that isolated Python environment

pip install graphrag

3. Create a folder dedicated to graphrag tasks; I named mine graphrag. You can also run this in a command window ↓

mkdir graphrag

4. Create a folder for this task. Since I'm going after Lin Yutang's biography 苏东坡传 (The Gay Genius), I named the folder sudongpo (any name is fine; use English). Inside it, create a folder named input for this local graphrag exercise. Or just run the commands below ↓

cd graphrag
mkdir -p ./sudongpo/input

5. Open the 苏东坡传 txt file in an editor, save it as UTF-8 (it really must be UTF-8, otherwise you'd have to modify code), and place it in the input folder ↓
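If you'd rather script the conversion than click through an editor, a few lines of Python will do it. A sketch that assumes the downloaded txt is GB18030-encoded (common for Chinese e-texts; adjust the source encoding if yours differs):

# Re-encode the book to UTF-8 and drop it straight into the input folder.
# Assumes the source file is GB18030; change the encoding if yours differs.
with open("苏东坡传.txt", "r", encoding="gb18030", errors="replace") as src:
    text = src.read()
with open("sudongpo/input/苏东坡传.txt", "w", encoding="utf-8") as dst:
    dst.write(text)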

6. Initialize the workspace. When this finishes, two subfolders (output and prompts), a settings.yaml configuration file, and a .env file holding the LLM API key environment variable will appear under the sudongpo folder ↓

python -m graphrag.index --init --root ./sudongpo

7. In the sudongpo folder, open the freshly generated settings.yaml config file. Edit the config entries for the LLM text model and the embedding model as shown below. (A few extra words: the vast majority of graphrag deployments get stuck right here; if the run errors out midway, it's almost always a misconfigured setting. How do you get it right? Keep trying! And watch other people's config walkthrough videos!) The text model I used is deepseek-chat, and the embedding model is Zhipu's embedding-2. Feel free to swap in any other OpenAI-API-compatible models you prefer. Apply for the LLM API keys in advance and make sure the account has token balance.

I copied the keys straight into settings.yaml, which does carry an exposure risk; in practice you can put them in the .env file instead to reduce the chance of a leak (a sketch follows the config below). The full configuration is as follows ↓ (for reference):


# settings.yaml
encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: $paste your DeepSeek API key here, replacing both '$' characters$
  type: openai_chat # or azure_openai_chat
  model: deepseek-chat
  model_supports_json: false # recommended if this is available for your model.
  max_tokens: 4000
  request_timeout: 180.0
  api_base: https://api.deepseek.com/v1
  # api_version: 2024-02-15-preview
  # organization: <organization_id>
  # deployment_name: <azure_model_deployment_name>
  tokens_per_minute: 30000 # set a leaky bucket throttle
  requests_per_minute: 30 # set a leaky bucket throttle
  max_retries: 20
  max_retry_wait: 10.0
  sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
  concurrent_requests: 10 # the number of parallel inflight requests that may be made
  # temperature: 0 # temperature for sampling
  # top_p: 1 # top-p sampling
  # n: 1 # Number of completions to generate

parallelization:
  stagger: 0.3
  # num_threads: 50 # the number of threads to use for parallel processing

async_mode: threaded # or asyncio

embeddings:
  ## parallelization: override the global parallelization settings for embeddings
  async_mode: threaded # or asyncio
  llm:
    api_key: $paste your Zhipu API key here, replacing both '$' characters$
    type: openai_embedding # or azure_openai_embedding
    model: embedding-2
    api_base: https://open.bigmodel.cn/api/paas/v4
    # api_version: 2024-02-15-preview
    # organization: <organization_id>
    # deployment_name: <azure_model_deployment_name>
    # tokens_per_minute: 150_000 # set a leaky bucket throttle
    # requests_per_minute: 10_000 # set a leaky bucket throttle
    # max_retries: 10
    # max_retry_wait: 10.0
    sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
    concurrent_requests: 10 # the number of parallel inflight requests that may be made
    batch_size: 16 # the number of documents to send in a single request
    batch_max_tokens: 8191 # the maximum number of tokens to send in a single request
    # target: required # or optional

chunks:
  size: 1200
  overlap: 100
  group_by_columns: [id] # by default, we don't allow chunks to cross documents

input:
  type: file # or blob
  file_type: text # or csv
  base_dir: "input"
  file_encoding: utf-8
  file_pattern: ".*\\.txt$"

cache:
  type: file # or blob
  base_dir: "cache"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

storage:
  type: file # or blob
  base_dir: "output/${timestamp}/artifacts"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

reporting:
  type: file # or console, blob
  base_dir: "output/${timestamp}/reports"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

entity_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/entity_extraction.txt"
  entity_types: [organization, person, geo, event]
  max_gleanings: 1

summarize_descriptions:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/summarize_descriptions.txt"
  max_length: 500

claim_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  # enabled: true
  prompt: "prompts/claim_extraction.txt"
  description: "Any claims or facts that could be relevant to information discovery."
  max_gleanings: 1

community_reports:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/community_report.txt"
  max_length: 2000
  max_input_length: 8000

cluster_graph:
  max_cluster_size: 10

embed_graph:
  enabled: false # if true, will generate node2vec embeddings for nodes
  # num_walks: 10
  # walk_length: 40
  # window_size: 2
  # iterations: 3
  # random_seed: 597832

umap:
  enabled: false # if true, will generate UMAP embeddings for nodes

snapshots:
  graphml: false
  raw_entities: false
  top_level_nodes: false

local_search:
  # text_unit_prop: 0.5
  # community_prop: 0.1
  # conversation_history_max_turns: 5
  # top_k_mapped_entities: 10
  # top_k_relationships: 10
  # llm_temperature: 0 # temperature for sampling
  # llm_top_p: 1 # top-p sampling
  # llm_n: 1 # Number of completions to generate
  # max_tokens: 12000

global_search:
  # llm_temperature: 0 # temperature for sampling
  # llm_top_p: 1 # top-p sampling
  # llm_n: 1 # Number of completions to generate
  # max_tokens: 12000
  # data_max_tokens: 12000
  # map_max_tokens: 1000
  # reduce_max_tokens: 2000
  # concurrency: 32
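As mentioned above, the safer variant is to keep the keys out of settings.yaml: the .env file that --init generated already defines a GRAPHRAG_API_KEY variable, and settings.yaml values support ${...} environment-variable interpolation. A minimal sketch (ZHIPU_API_KEY is a name I chose for this example, not something graphrag defines):

# .env (sits next to settings.yaml; loaded automatically when the pipeline runs)
GRAPHRAG_API_KEY=your-deepseek-key
ZHIPU_API_KEY=your-zhipu-key

Then in settings.yaml, reference the variables instead of pasting the raw keys:

llm:
  api_key: ${GRAPHRAG_API_KEY}
embeddings:
  llm:
    api_key: ${ZHIPU_API_KEY}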

8. With the config in place, run the pipeline: it chunks and embeds 苏东坡传.txt, then keeps calling the inference model, combined with knowledge-graph techniques, to generate the knowledge-graph structure files (the most critical step) ↓

python -m graphrag.index --root ./sudongpo

9. If the window prints the information below, this step finished successfully. Every file generated here is worth reading through; it helps enormously in understanding how knowledge graphs and RAG come together in this project! If an error appears (it probably will), check the settings in settings.yaml and make sure the LLM API you're using still has balance available ↓
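A quick sanity check is to list the artifacts folder; with this graphrag version you should see parquet files such as create_final_entities.parquet and create_final_community_reports.parquet ↓

ls ./sudongpo/output/*/artifacts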

With that, deployment is done; next, we run a query and see how it performs.

┃ Running a query

Let's ask a question: why, despite being demoted again and again, could Su Dongpo stay optimistic and open-minded?

(graphRAG has two query modes: global query, which is what this post demonstrates, and local query, which is not covered here.)

python -m graphrag.query --root ./sudongpo --method global "苏东坡为什么一直被贬,还能乐观豁达?"

The query run and its result were as follows:

That concludes the query experience.
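For completeness: local search (not demonstrated in this post) uses the same entry point and differs only in the --method flag, e.g. ↓

python -m graphrag.query --root ./sudongpo --method local "苏东坡为什么一直被贬,还能乐观豁达?"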

Below I list, with thanks, the tools, references, and content creators I relied on for this experience, and recommend them to you; if you plan to try graphRAG, they are very helpful, even indispensable.


Tools used this time:

AI project experience No. 24: Made it!

&

Su Dongpo is my idol!





