According to the official documentation, before deploying vLLM you first need Python 3.12 or later and gcc 12 or later, and you will generally want Anaconda installed for Python environment isolation; those steps are not repeated here.

Download the model from the ModelScope community: https://modelscope.cn/models/Qwen/QwQ-32B/files

sudo modelscope download --model Qwen/QwQ-32B --local_dir /home/data-local/qwq-32b

Be patient: the download took two or three hours on my end.

First attempt at serving it failed. Running

CUDA_VISIBLE_DEVICES=0 vllm serve --model /home/data-local/qwq-32b --served-model-name QWQ-32B --port 8000

throws an error; passing the model path as a positional argument instead of via --model fixes it:

CUDA_VISIBLE_DEVICES=0 vllm serve /home/data-local/qwq-32b --served-model-name QWQ-32B --port 8000

This time it works. The same pattern applies to other locally downloaded models, for example:

CUDA_VISIBLE_DEVICES=0 vllm serve /home/data-local/DeepSeek-R1-Distill-Qwen-7B --served-model-name Qwen-7B --port 8000

Once the server is up, test it through the OpenAI-compatible completions endpoint:

curl http://localhost:8000/v1/completions -H "Content-Type: application/json" -d '{
  "model": "QWQ-32B",
  "prompt": "你好",
  "max_tokens": 100
}'
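A quick way to confirm the server is up and to see exactly which model name clients must use is to query vLLM's OpenAI-compatible /v1/models endpoint. A minimal sketch using only the Python standard library, assuming the server runs on localhost:8000 as in the command above:

import json
import urllib.request

# List the models exposed by the vLLM OpenAI-compatible server
with urllib.request.urlopen("http://localhost:8000/v1/models") as resp:
    data = json.load(resp)

# Each entry's "id" is the value clients must pass as "model";
# it should match the --served-model-name given at launch, e.g. QWQ-32B
for model in data["data"]:
    print(model["id"])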
You can also call the service from Python with the openai client:

from openai import OpenAI

# Initialize the client (an api_key argument is required even though vLLM does not check it by default)
client = OpenAI(
    base_url="http://172.19.66.132:8000/v1",
    api_key="dummy",  # dummy key
)

# Ask the model to generate text; "model" must match the --served-model-name used at launch
response = client.completions.create(
    model="Qwen-1.5B",
    prompt="如何部署大语言模型?",
    max_tokens=200,
)

# The generated text is in response.choices[0].text
print(response.choices[0].text)
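Since QwQ-32B is a chat/reasoning model, the chat completions endpoint is usually a better fit than the plain completions endpoint. A sketch along the same lines, assuming the server from the earlier command (localhost:8000, served model name QWQ-32B); adapt base_url and model to your own deployment:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")

# Chat-style request against vLLM's OpenAI-compatible /v1/chat/completions endpoint
response = client.chat.completions.create(
    model="QWQ-32B",  # must match --served-model-name
    messages=[{"role": "user", "content": "如何部署大语言模型?"}],
    max_tokens=200,
)

# For chat completions the text lives in choices[0].message.content
print(response.choices[0].message.content)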