ollama run qwq
git clone https://github.com/mannaandpoem/OpenManus
conda create -n open-manus python=3.12
My default base environment is already Python 3.12.9, so I simply use it directly.
cd OpenManus
# Set a pip mirror inside China
pip config set global.index-url https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
# Install dependencies
pip install -r requirements.txt
OpenManus needs an LLM API to be configured. Set it up as follows:
cp config/config.example.toml config/config.toml
# Global LLM configuration (example: DeepSeek official API)
[llm]
model = "deepseek-reasoner"
base_url = "https://api.deepseek.com/v1"
api_key = "sk-..."  # your DeepSeek API key
max_tokens = 8192
temperature = 0.0
# Note: multimodal support is not integrated yet, so this section can be left as-is for now
# Optional configuration for specific LLM models
[llm.vision]
model = "claude-3-5-sonnet"
base_url = "https://api.openai.com/v1"
api_key = "sk-..."
# Global LLM configuration (example: Alibaba Cloud DashScope, qwq-32b)
[llm]
model = "qwq-32b"
base_url = "https://dashscope.aliyuncs.com/compatible-mode/v1"
api_key = "sk-..."  # your DashScope API key
max_tokens = 8192
temperature = 0.0
# Note: multimodal support is not integrated yet, so this section can be left as-is for now
# Optional configuration for specific LLM models
[llm.vision]
model = "claude-3-5-sonnet"
base_url = "https://api.openai.com/v1"
api_key = "sk-..."
How to fill in the model field: use the value from the corresponding example above. Then test the setup:
python main.py
Enter a prompt; if no error is reported, the setup is working.
Note: when connecting to QWQ-32B, responses are slow because of the model's long "think" phase, so change the timeout in the ask_tool method to 600 (the default is 60 s).
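As a rough sketch only (the file location, function names, and key below are assumptions, not OpenManus's exact code), the change boils down to passing a larger per-request timeout to the OpenAI-SDK call that ask_tool makes:

```python
# Minimal, self-contained sketch of the idea; adapt it to the real ask_tool
# in OpenManus's LLM wrapper (e.g. app/llm.py -- location assumed).
import asyncio
from openai import AsyncOpenAI

# Same DashScope endpoint as in config.toml above; replace the key with your own.
client = AsyncOpenAI(
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
    api_key="sk-...",
)

async def ask_tool(messages, timeout=600):  # default of 60 s raised to 600 s
    # QWQ-32B's "think" phase can easily run past 60 s, so give the request more time.
    return await client.chat.completions.create(
        model="qwq-32b",
        messages=messages,
        timeout=timeout,
    )

async def main():
    resp = await ask_tool([{"role": "user", "content": "Hello"}])
    print(resp.choices[0].message.content)

asyncio.run(main())
```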
vi config/config.toml
```toml
# Global LLM configuration
[llm]
model = "qwq:latest"
base_url = "http://localhost:11434/v1"
api_key = "EMPTY"
max_tokens = 4096
temperature = 0.0

# Optional configuration for specific LLM models
[llm.vision]
model = "llava:7b"
base_url = "http://localhost:11434/v1"
api_key = "EMPTY"
```
The model name must be exactly the name of the model served by your local Ollama instance, otherwise you will get an error.
You can check the names with the ollama list command;
the correct value here is qwq:latest.
Note: api_key must be set to EMPTY (Ollama ignores it, but the field needs a non-empty placeholder), otherwise startup fails with:
API error: Connection error
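To verify both points before launching OpenManus, a minimal standalone check against the local Ollama endpoint can help (a sketch using the OpenAI Python SDK, which OpenManus also builds on; the prompt is just an example):

```python
# Sanity check for Ollama's OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="EMPTY",     # any non-empty placeholder; Ollama ignores it
)

resp = client.chat.completions.create(
    model="qwq:latest",  # must match a name shown by `ollama list`
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```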
Start OpenManus:
python main.py
vi config/config.toml
```toml
# Global LLM configuration
[llm]
model = "qwen2.5:latest"
base_url = "http://localhost:11434/v1"
api_key = "EMPTY"
max_tokens = 4096
temperature = 0.0

# Optional configuration for specific LLM models
[llm.vision]
model = "llava:7b"
base_url = "http://localhost:11434/v1"
api_key = "EMPTY"
```
vi config/config.toml
```toml
# Global LLM configuration
[llm]
model = "deepseek-r1:32b"
base_url = "http://localhost:11434/v1"
api_key = "EMPTY"
max_tokens = 4096
temperature = 0.0

# Optional configuration for specific LLM models
[llm.vision]
model = "llava:7b"
base_url = "http://localhost:11434/v1"
api_key = "EMPTY"
```
playwright install
This downloads the browser binaries Playwright needs; I have not looked into this part further yet.