Use NVIDIA driver 570.86.1x and CUDA 12.8. KTransformers must be version 0.2.2; the current 0.3 release still has many bugs.

git clone https://github.com/kvcache-ai/ktransformers.git
cd ktransformers
git submodule init
git submodule update
git checkout 7a19f3b
git rev-parse --short HEAD  # should print 7a19f3b
Note that `git submodule update` mainly pulls the projects under third_party. If your network is unreliable, you can instead download these projects from GitHub yourself and place them in the third_party directory:

[submodule "third_party/llama.cpp"]
path = third_party/llama.cpp
url = https://github.com/ggerganov/llama.cpp.git
[submodule "third_party/pybind11"]
path = third_party/pybind11
url = https://github.com/pybind/pybind11.git
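For the manual route, the clone commands can be derived directly from the `.gitmodules` entries above. A minimal sketch (the sample text mirrors the two entries quoted; on a real checkout you would read the repo's own `.gitmodules` file instead):

```python
import configparser

# Sample .gitmodules content, matching the two entries quoted above.
gitmodules = """\
[submodule "third_party/llama.cpp"]
path = third_party/llama.cpp
url = https://github.com/ggerganov/llama.cpp.git
[submodule "third_party/pybind11"]
path = third_party/pybind11
url = https://github.com/pybind/pybind11.git
"""

cfg = configparser.ConfigParser()
cfg.read_string(gitmodules)

# Print one `git clone` command per submodule, cloning each URL
# straight into the path that `git submodule update` would use.
for section in cfg.sections():
    path, url = cfg[section]["path"], cfg[section]["url"]
    print(f"git clone {url} {path}")
```

Running the printed commands from the repo root drops each dependency exactly where the build expects it.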
Download the model
modelscope download unsloth/DeepSeek-R1-GGUF --include "DeepSeek-R1-Q4_K_M/*" --cache_dir /home/user/new/models
uv is often described as "the Cargo for Python": a high-speed replacement for traditional tools such as pip, pip-tools, and virtualenv. It is faster than pip while still supporting pip's advanced features, including editable installs, git dependencies, local dependencies, and source distributions.

curl -LsSf https://astral.sh/uv/install.sh | sh
uv venv ./venv --python 3.11 --python-preference=only-managed
source venv/bin/activate
$ uv pip install flashinfer-python
$ export TORCH_CUDA_ARCH_LIST="8.6"
uv pip install https://github.com/ubergarm/ktransformers/releases/download/7a19f3b/ktransformers-0.2.2rc1+cu120torch26fancy.amd.ubergarm.7a19f3b.flashinfer-cp311-cp311-linux_x86_64.whl
uv pip install https://github.com/mjun0812/flash-attention-prebuild-wheels/releases/download/v0.0.5/flash_attn-2.6.3+cu124torch2.6-cp311-cp311-linux_x86_64.whl
Multi-GPU setups are supported, and `--optimize_config_path` allows finer-grained control over VRAM offloading.

PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True python3 ktransformers/server/main.py \
    --gguf_path /mnt/ai/models/unsloth/DeepSeek-R1-GGUF/DeepSeek-R1-UD-Q2_K_XL/ \
    --model_path deepseek-ai/DeepSeek-R1 \
    --model_name unsloth/DeepSeek-R1-UD-Q2_K_XL \
    --cpu_infer 16 \
    --max_new_tokens 8192 \
    --cache_lens 32768 \
    --total_context 32768 \
    --cache_q4 true \
    --temperature 0.6 \
    --top_p 0.95 \
    --optimize_config_path ktransformers/optimize/optimize_rules/DeepSeek-R1-Chat.yaml \
    --force_think \
    --use_cuda_graph \
    --host 127.0.0.1 \
    --port 8080
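Once the server is up on the `--host`/`--port` given above, it can be queried over HTTP. A minimal sketch, assuming the server exposes an OpenAI-compatible `/v1/chat/completions` endpoint (the endpoint path and payload shape here follow the OpenAI chat API convention and are assumptions, not taken from the KTransformers docs):

```python
import json
from urllib import request

def build_chat_request(prompt,
                       model="unsloth/DeepSeek-R1-UD-Q2_K_XL",
                       base_url="http://127.0.0.1:8080"):
    """Build an HTTP request for an assumed OpenAI-style chat endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.6,   # matches --temperature above
        "top_p": 0.95,        # matches --top_p above
    }
    return request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("Hello")
print(req.full_url)
# response = request.urlopen(req)  # run this only with the server running
```

The actual send is left commented out so the sketch can be inspected without a live server.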
# Install the extra build dependencies, including the CUDA toolchain, e.g.:
# sudo apt-get install build-essential cmake ...
source venv/bin/activate
uv pip install -r requirements-local_chat.txt
uv pip install setuptools wheel packaging

# The optional web app can be skipped in favor of alternatives such as `open-webui` or `litellm`
cd ktransformers/website/
npm install @vue/cli
npm run build
cd ../..

# With plenty of CPU cores and RAM, these can speed up the build significantly
# $ export MAX_JOBS=8
# $ export CMAKE_BUILD_PARALLEL_LEVEL=8

# Install flash_attn
uv pip install flash_attn --no-build-isolation

# Optional and experimental: use flashinfer instead of triton.
# Not recommended unless you are an advanced user with a working setup already.
# Install it with:
# $ uv pip install flashinfer-python

# Only for the following case:
# dual-socket Intel CPUs with >1 TB RAM, enough to hold two full copies
# of the model in memory (one copy per socket).
# Dual-socket AMD EPYC in NPS0 mode may not need this?
# $ export USE_NUMA=1

# Install ktransformers
KTRANSFORMERS_FORCE_BUILD=TRUE uv pip install . --no-build-isolation
KTRANSFORMERS_FORCE_BUILD=TRUE uv build
uv pip install ./dist/ktransformers-0.2.2rc1+cu120torch26fancy-cp311-cp311-linux_x86_64.whl
ktransformers --model_path /home/user/new/ktran0.2.2/ktransformers/models/deepseek-ai/DeepSeek-R1 --gguf_path /home/user/new/models/unsloth/DeepSeek-R1-GGUF/DeepSeek-R1-Q4_K_M --port 8080
If the build fails with an error like this:

/tmp/cc8uoJt1.s:23667: Error: no such instruction: `vpdpbusd %ymm3,%ymm15,%ymm1'
File "<string>", line 327, in build_extension
File "/usr/local/python3/lib/python3.11/subprocess.py", line 571, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--verbose', '--parallel=128']' returned non-zero exit status 1.
[end of output]
`vpdpbusd` is an AVX-VNNI instruction, so this error means the toolchain or CPU does not support AVX-VNNI. The fix is to disable the unsupported instruction sets when building llama.cpp:

-DLLAMA_NATIVE=OFF -DLLAMA_AVX=ON -DLLAMA_AVX2=ON -DLLAMA_AVX512=OFF -DLLAMA_AVXVNNI=OFF
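Whether AVX-VNNI should be on or off can be decided from the CPU's feature flags before building. A minimal sketch (the sample flags line is hypothetical, standing in for a CPU without AVX-VNNI; on Linux you would read the real line from /proc/cpuinfo):

```python
# Sample "flags" line from /proc/cpuinfo for a CPU lacking AVX-VNNI.
# On a real machine, replace this with e.g.:
#   sample_cpuinfo = next(l for l in open("/proc/cpuinfo") if l.startswith("flags"))
sample_cpuinfo = "flags : fpu vme avx avx2 fma"

def avxvnni_flag(cpuinfo_line):
    """Return the cmake switch matching the host's AVX-VNNI support."""
    flags = cpuinfo_line.split(":", 1)[1].split()
    return "-DLLAMA_AVXVNNI=" + ("ON" if "avx_vnni" in flags else "OFF")

print(avxvnni_flag(sample_cpuinfo))  # -DLLAMA_AVXVNNI=OFF
```

The same pattern extends to the other switches (avx, avx2, avx512f) if you want the whole flag set derived automatically.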
Closing thoughts
| Welcome to 链载Ai (https://www.lianzai.com/) | Powered by Discuz! X3.5 |