ingFang SC", Cambria, Cochin, Georgia, Times, "Times New Roman", serif;display: table;padding: 0px 0.2em;color: rgb(255, 255, 255);background: rgb(0, 152, 116);">1、概述ingFang SC", Cambria, Cochin, Georgia, Times, "Times New Roman", serif;font-size: 17px;text-indent: 2em;letter-spacing: 0.1em;color: rgb(63, 63, 63);">Qwen3-Embedding嵌入模型是 Qwen 系列的最新(2025年6月)专有模型,专为文本嵌入和排序任务而设计。该系列基于 Qwen3 系列的密集基础模型,提供了全面的文本嵌入和重排序模型,支持各种规模(0.6B、4B 和 8B)。Qwen3嵌入模型继承了基础模型卓越的多语言能力、长文本理解和推理能力。Qwen3 嵌入模型系列在文本检索、代码检索、文本分类、文本聚类和双文本挖掘等多项文本嵌入和排序任务方面取得了显著进展。ingFang SC", Cambria, Cochin, Georgia, Times, "Times New Roman", serif;font-size: 17px;text-indent: 2em;letter-spacing: 0.1em;color: rgb(63, 63, 63);">ingFang SC", Cambria, Cochin, Georgia, Times, "Times New Roman", serif;font-size: inherit;color: rgb(0, 152, 116);">卓越的多功能性:嵌入模型在广泛的下游应用评估中取得了卓越的性能。8B 大小的嵌入模型在 MTEB 多语言排行榜中排名第一(截至 2025 年 6 月 5 日,得分70.58),而重排序模型在各种文本检索场景中表现出色。ingFang SC", Cambria, Cochin, Georgia, Times, "Times New Roman", serif;font-size: 17px;text-indent: 2em;letter-spacing: 0.1em;color: rgb(63, 63, 63);">ingFang SC", Cambria, Cochin, Georgia, Times, "Times New Roman", serif;font-size: inherit;color: rgb(0, 152, 116);">全面的灵活性:Qwen3 Embedding 系列为 Embedding 和 Reranking 模型提供了从 0.6B 到 8B 的全尺寸范围,可满足注重效率和有效性的各种用例。开发者可以无缝组合这两个模块。此外,Embedding 模型允许在所有维度上灵活地定义向量,Embedding 和 Reranking 模型均支持用户自定义指令,以增强特定任务、语言或场景的性能。ingFang SC", Cambria, Cochin, Georgia, Times, "Times New Roman", serif;font-size: 17px;text-indent: 2em;letter-spacing: 0.1em;color: rgb(63, 63, 63);">ingFang SC", Cambria, Cochin, Georgia, Times, "Times New Roman", serif;font-size: inherit;color: rgb(0, 152, 116);">多语言功能:得益于 Qwen3 模型的多语言功能,Qwen3 嵌入式系列支持超过 100 种语言。这涵盖了各种编程语言,并提供强大的多语言、跨语言和代码检索功能。ingFang SC", Cambria, Cochin, Georgia, Times, "Times New Roman", serif;font-size: 17px;text-indent: 2em;letter-spacing: 0.1em;color: rgb(63, 63, 63);">Qwen3-Embedding-8B具有以下特点:ingFang SC", Cambria, Cochin, Georgia, Times, "Times New Roman", serif;font-size: 17px;color: rgb(63, 63, 63);" class="list-paddingleft-1">• 嵌入维度:最大支持4096,支持32~4096自定义输出维度2、模型列表3、安装3.1 Ollama中安装由于Qwen未发布官方模型 ,因此需要使用第三方发布的模型,可以根据需要执行下面命令中的某些命令来下载与运行模型 # 0.6B模型 ollama run dengcao/Qwen3-Embedding-0.6B 8_0 ollama run dengcao/Qwen3-Embedding-0.6B:F16
# 4B model
ollama run dengcao/Qwen3-Embedding-4B:Q4_K_M
ollama run dengcao/Qwen3-Embedding-4B:Q5_K_M
ollama run dengcao/Qwen3-Embedding-4B:Q8_0
ollama run dengcao/Qwen3-Embedding-4B:F16
# 8B model
ollama run dengcao/Qwen3-Embedding-8B:Q4_K_M
ollama run dengcao/Qwen3-Embedding-8B:Q5_K_M
ollama run dengcao/Qwen3-Embedding-8B:Q8_0
ollama run dengcao/Qwen3-Embedding-8B:F16
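After pulling a model, you can sanity-check it through Ollama's HTTP embedding endpoint. Below is a minimal Python sketch; it assumes a default local Ollama instance on port 11434 and reuses the 0.6B tag pulled above:

import requests

# Request embeddings from the locally served model via Ollama's /api/embed endpoint
resp = requests.post(
    "http://localhost:11434/api/embed",
    json={
        "model": "dengcao/Qwen3-Embedding-0.6B:Q8_0",  # must match the tag you pulled
        "input": ["What is the capital of China?", "The capital of China is Beijing."],
    },
)
resp.raise_for_status()
embeddings = resp.json()["embeddings"]
print(len(embeddings), len(embeddings[0]))  # 2 vectors, one per input string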
Notes on the quantized variants:
• q8_0: Almost indistinguishable from float16. High resource usage and slow. Not recommended for most users.
• q5_k_m: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, otherwise Q5_K.
• q5_0: Original quantization method, 5-bit. Higher accuracy, higher resource usage, and slower inference.
• q4_k_m: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, otherwise Q4_K.
• q3_k_m: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, otherwise Q3_K.
• q2_k: Uses Q4_K for the attention.wv and feed_forward.w2 tensors, Q2_K for the other tensors.
As a rule of thumb, Q5_K_M is recommended because it preserves most of the model's performance; if you need to save some memory, Q4_K_M is a good alternative.

3.2 Downloading from Hugging Face

You can pre-download the required models locally with huggingface-cli (they can also be downloaded from ModelScope). The official Qwen models are available on Hugging Face:

huggingface-cli download Qwen/Qwen3-Embedding-0.6B-GGUF --local-dir /home/models/Qwen3-Embedding-0.6B-GGUF
huggingface-cli download Qwen/Qwen3-Embedding-4B-GGUF --local-dir /home/models/Qwen3-Embedding-4B-GGUF
huggingface-cli download Qwen/Qwen3-Embedding-8B-GGUF --local-dir /home/models/Qwen3-Embedding-8B-GGUF
huggingface-cli download Qwen/Qwen3-Embedding-0.6B --local-dir /home/models/Qwen3-Embedding-0.6B
huggingface-cli download Qwen/Qwen3-Embedding-4B --local-dir /home/models/Qwen3-Embedding-4B
huggingface-cli download Qwen/Qwen3-Embedding-8B --local-dir /home/models/Qwen3-Embedding-8B
huggingface-cli download Qwen/Qwen3-Reranker-0.6B-GGUF --local-dir /home/models/Qwen3-Reranker-0.6B-GGUF
huggingface-cli download Qwen/Qwen3-Reranker-4B-GGUF --local-dir /home/models/Qwen3-Reranker-4B-GGUF
huggingface-cli download Qwen/Qwen3-Reranker-8B-GGUF --local-dir /home/models/Qwen3-Reranker-8B-GGUF
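Once a model is downloaded, the local directory can be passed anywhere a Hub id is accepted, which avoids downloading at load time. A minimal sketch, assuming the paths from the commands above:

from sentence_transformers import SentenceTransformer

# Load from the pre-downloaded local directory instead of the Hugging Face Hub
model = SentenceTransformer("/home/models/Qwen3-Embedding-0.6B")
print(model.encode("hello world").shape)  # (1024,), the 0.6B model's embedding dimension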
4. Using the Models

The code below is fairly straightforward, so it is not explained in detail here; the key points are noted directly in the code comments.

4.1 Using SentenceTransformer

Version requirement:
• sentence-transformers>=2.7.0
from sentence_transformers import SentenceTransformer
# Load the model; a local path can be used to load a pre-downloaded copy
model = SentenceTransformer("Qwen/Qwen3-Embedding-8B")
# Enabling flash_attention_2 and setting `padding_side` to "left" can speed up
# model loading and inference:
# model = SentenceTransformer(
#     "Qwen/Qwen3-Embedding-8B",
#     model_kwargs={"attn_implementation": "flash_attention_2", "device_map": "auto"},
#     tokenizer_kwargs={"padding_side": "left"},
# )
# The queries and documents to embed
queries = [
    "What is the capital of China?",
    "Explain gravity",
]
documents = [
    "The capital of China is Beijing.",
    "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
]
# Encode the queries and documents into embeddings
query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)
# Compute the cosine similarity between each query and each document
similarity = model.similarity(query_embeddings, document_embeddings)
print(similarity)
# Output:
# tensor([[0.7493, 0.0751],
#         [0.0880, 0.6318]])
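As mentioned in the overview, the output dimension can be customized between 32 and 4096. In sentence-transformers this is exposed through the truncate_dim argument, and a per-call instruction can be supplied via prompt instead of the built-in prompt_name="query". A minimal sketch (the instruction wording is the same task description used in section 4.2):

from sentence_transformers import SentenceTransformer

# Truncate output embeddings to 256 dimensions (any value from 32 to 4096)
model = SentenceTransformer("Qwen/Qwen3-Embedding-8B", truncate_dim=256)

# Supply a custom task instruction in the "Instruct: ...\nQuery:" format
task_prompt = "Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery:"
embeddings = model.encode(["What is the capital of China?"], prompt=task_prompt)
print(embeddings.shape)  # (1, 256)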
4.2 Using Transformers

Version requirement:
• transformers>=4.51.0

import torch
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def last_token_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor:
    # With left padding, the last position of every row is a real token,
    # so the final hidden state can be taken directly.
    left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
    if left_padding:
        return last_hidden_states[:, -1]
    else:
        # With right padding, index each row by its true sequence length.
        sequence_lengths = attention_mask.sum(dim=1) - 1
        batch_size = last_hidden_states.shape[0]
        return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]
def get_detailed_instruct(task_description: str, query: str) -> str:
    return f'Instruct: {task_description}\nQuery:{query}'
# Each query must be accompanied by a one-sentence instruction describing the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
    get_detailed_instruct(task, 'What is the capital of China?'),
    get_detailed_instruct(task, 'Explain gravity')
]
# No instruction is needed for the retrieved documents
documents = [
    "The capital of China is Beijing.",
    "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun."
]
input_texts = queries + documents
tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen3-Embedding-8B', padding_side='left')
model = AutoModel.from_pretrained('Qwen/Qwen3-Embedding-8B')
# Enabling flash_attention_2 (with float16) can speed up model loading and inference:
# model = AutoModel.from_pretrained('Qwen/Qwen3-Embedding-8B', attn_implementation="flash_attention_2", torch_dtype=torch.float16).cuda()
max_length = 8192
# Tokenize the input texts
batch_dict = tokenizer(
    input_texts,
    padding=True,
    truncation=True,
    max_length=max_length,
    return_tensors="pt",
)
# Move inputs to the model's device (BatchEncoding.to works in place)
batch_dict.to(model.device)
outputs = model(**batch_dict)
embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# L2-normalize the embeddings so the dot product below equals cosine similarity
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T)
print(scores.tolist())
# Output:
# [[0.7493016123771667, 0.0750647559762001], [0.08795969933271408, 0.6318399906158447]]
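Since the tokenizer above uses padding_side='left', last_token_pool normally takes its fast path; the toy check below (hypothetical tensors, reusing the last_token_pool defined earlier) illustrates both branches:

import torch

hidden = torch.arange(2 * 4 * 3, dtype=torch.float32).reshape(2, 4, 3)

# Right-padded batch: row 0 has 3 real tokens, row 1 has 4
right_mask = torch.tensor([[1, 1, 1, 0], [1, 1, 1, 1]])
pooled = last_token_pool(hidden, right_mask)
assert torch.equal(pooled[0], hidden[0, 2]) and torch.equal(pooled[1], hidden[1, 3])

# Left-padded batch: the last position of every row is a real token
left_mask = torch.tensor([[0, 1, 1, 1], [1, 1, 1, 1]])
assert torch.equal(last_token_pool(hidden, left_mask), hidden[:, -1])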
4.3 Using vLLM

Version requirement:
• vllm>=0.8.5

import torch
from vllm import LLM
def get_detailed_instruct(task_description: str, query: str) -> str:
    return f'Instruct: {task_description}\nQuery:{query}'

# Each query must be accompanied by a one-sentence instruction describing the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
    get_detailed_instruct(task, 'What is the capital of China?'),
    get_detailed_instruct(task, 'Explain gravity')
]
# No instruction is needed for the retrieved documents
documents = [
    "The capital of China is Beijing.",
    "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun."
]
input_texts = queries + documents

model = LLM(model="Qwen/Qwen3-Embedding-8B", task="embed")
outputs = model.embed(input_texts)
embeddings = torch.tensor([o.outputs.embedding for o in outputs])
scores = (embeddings[:2] @ embeddings[2:].T)
print(scores.tolist())
# Output:
# [[0.7482624650001526, 0.07556197047233582], [0.08875375241041183, 0.6300010681152344]]
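Beyond offline embedding, vLLM can also serve the model behind an OpenAI-compatible endpoint. A minimal sketch, assuming the server was started with `vllm serve Qwen/Qwen3-Embedding-8B --task embed` and listens on vLLM's default port 8000:

from openai import OpenAI

# Point the standard openai client at vLLM's OpenAI-compatible server;
# the api_key value is arbitrary unless the server enforces one
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.embeddings.create(
    model="Qwen/Qwen3-Embedding-8B",
    input=["What is the capital of China?", "The capital of China is Beijing."],
)
print(len(resp.data[0].embedding))  # 4096 for the 8B model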