docs.siliconflow.cn/api-reference/chat-completions
pip3 install -U openai
Input parameter: max_tokens — the maximum length of the answer (including the chain-of-thought output), up to 16K.
Returned fields:
reasoning_content: the chain-of-thought content, a sibling of content.
content: the final answer.
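To make the field layout concrete, here is a minimal sketch of pulling both sibling fields out of a response body (the JSON shown is a hypothetical illustration, not a captured server reply):

```python
import json

# Hypothetical response body illustrating the two sibling fields.
raw = '''{"choices": [{"message": {
    "content": "final answer text",
    "reasoning_content": "chain-of-thought text"}}]}'''
msg = json.loads(raw)["choices"][0]["message"]
reasoning = msg["reasoning_content"]  # chain of thought
answer = msg["content"]               # final answer
```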
from openai import OpenAI

url = 'https://api.siliconflow.cn/v1/'
api_key = 'your_api_key'

client = OpenAI(
    base_url=url,
    api_key=api_key
)

# Send a request with streaming output
content = ""
reasoning_content = ""
messages = [
    {"role": "user", "content": "Who are the legendary Olympic athletes?"}
]
response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1",
    messages=messages,
    stream=True,  # enable streaming output
    max_tokens=4096
)

# Receive and process the response chunk by chunk
for chunk in response:
    if chunk.choices[0].delta.content:
        content += chunk.choices[0].delta.content
    if chunk.choices[0].delta.reasoning_content:
        reasoning_content += chunk.choices[0].delta.reasoning_content

# Round 2
messages.append({"role": "assistant", "content": content})
messages.append({"role": "user", "content": "Continue"})
response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1",
    messages=messages,
    stream=True
)
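The second streamed response above must be consumed the same way as the first. A minimal sketch of a reusable accumulator (the function name is hypothetical) that collects both fields from any streamed response:

```python
def collect_stream(response):
    """Accumulate the answer and the chain-of-thought text from a
    streamed response (any iterable of OpenAI-style delta chunks)."""
    content, reasoning = "", ""
    for chunk in response:
        delta = chunk.choices[0].delta
        # `reasoning_content` is specific to reasoning models; getattr
        # keeps this safe on chunks that lack the field entirely.
        content += getattr(delta, "content", None) or ""
        reasoning += getattr(delta, "reasoning_content", None) or ""
    return content, reasoning
```

With this helper, each round reduces to `content, reasoning_content = collect_stream(response)`.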
from openai import OpenAI

url = 'https://api.siliconflow.cn/v1/'
api_key = 'your_api_key'

client = OpenAI(
    base_url=url,
    api_key=api_key
)

# Send a request without streaming output
messages = [
    {"role": "user", "content": "Who are the legendary Olympic athletes?"}
]
response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1",
    messages=messages,
    stream=False,
    max_tokens=4096
)
content = response.choices[0].message.content
reasoning_content = response.choices[0].message.reasoning_content

# Round 2
messages.append({"role": "assistant", "content": content})
messages.append({"role": "user", "content": "Continue"})
response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1",
    messages=messages,
    stream=False
)
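The two-round pattern above generalizes to a loop. A minimal sketch (the function name is hypothetical) of a non-streaming multi-turn driver; note that, as in the examples above, only the assistant's `content` is appended back into `messages`, never `reasoning_content`:

```python
def run_turns(client, model, messages, user_turns, max_tokens=4096):
    """Drive several non-streaming rounds, feeding each answer back.
    Only `content` goes back into the history, not `reasoning_content`."""
    answers = []
    for user_text in user_turns:
        messages.append({"role": "user", "content": user_text})
        resp = client.chat.completions.create(
            model=model,
            messages=messages,
            stream=False,
            max_tokens=max_tokens,
        )
        answer = resp.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        answers.append(answer)
    return answers
```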
API key: make sure you authenticate with a valid API key.
Streaming output: streaming suits scenarios where the response should arrive incrementally; non-streaming suits scenarios where the complete response is fetched in one call.