Today let's talk about the pre-training of BERT and GPT, which is the fourth step of building a large model: Pre-training. Pre-training is the first stage of training a large language model (such as BERT or GPT). Its core goal is to learn general language representations from massive amounts of unlabeled text through self-supervised learning, so that the model picks up grammar, semantics, common sense, and other basic abilities that later fine-tuning builds on.

BERT Pre-training: MLM and NSP

Built on a bidirectional Transformer-encoder architecture, BERT learns contextual semantics through two tasks: Masked Language Modeling (MLM) and Next Sentence Prediction (NSP). MLM randomly masks 15% of the input tokens and forces the model to predict the missing tokens from the context on both sides, breaking through the limitation of traditional unidirectional models; NSP decides whether a sentence pair is coherent, strengthening cross-sentence reasoning.

1. MLM (Masked Language Modeling)

In BERT pre-training, the model learns bidirectional context through the Masked Language Modeling (MLM) task: 15% of the tokens in the input text are randomly masked, and the model predicts each masked token from the context on its left and right.

(1) Task: randomly mask 15% of the tokens in the input text and ask the model to predict the masked tokens.
(2) Example: given the input "The cat sits on the [MASK]", the model should predict "[MASK]" as "mat". A minimal sketch of the masking step is shown right after this example.
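As a rough illustration (not BERT's exact recipe, which also leaves some of the selected tokens unchanged or swaps them for random tokens), the sketch below picks about 15% of the non-special tokens, replaces them with [MASK], and keeps the original ids as labels so that only masked positions contribute to the loss. The example sentence is made up.

```python
import torch
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
text = "The cat sits on the mat and the dog sleeps on the floor."
enc = tokenizer(text, return_tensors="pt")
input_ids = enc["input_ids"].clone()
labels = input_ids.clone()

# Never mask special tokens such as [CLS] and [SEP]
special = torch.tensor(
    tokenizer.get_special_tokens_mask(input_ids[0].tolist(), already_has_special_tokens=True)
).bool()

# Sample roughly 15% of the remaining positions
prob = torch.full(input_ids.shape, 0.15)
prob[0, special] = 0.0
masked = torch.bernoulli(prob).bool()

labels[~masked] = -100                       # un-masked positions are ignored by the MLM loss
input_ids[masked] = tokenizer.mask_token_id  # replace the selected tokens with [MASK]

print(tokenizer.decode(input_ids[0]))        # e.g. "[CLS] the cat sits on the [MASK] and the dog ..."
```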
2. NSP (Next Sentence Prediction)

Through the Next Sentence Prediction (NSP) task, BERT is given sentence pairs where, 50% of the time, the second sentence really follows the first and, the other 50% of the time, it is a random sentence. This trains the model to learn logical relationships between sentences, which improves performance on tasks such as question answering and text classification.

(1) Task: decide whether two sentences are consecutive (50% are consecutive, 50% are random); see the sketch after these examples.
(2) Positive example: "I like cats" + "They are cute."
(3) Negative example: "I like cats" + "The sky is blue."
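To make the 50/50 sampling concrete, here is a minimal sketch (not BERT's actual data pipeline; the toy corpus and the helper name make_nsp_pair are invented for illustration) that builds NSP training pairs: with probability 0.5 it keeps the true next sentence (label 0, "is next"), otherwise it substitutes a random other sentence (label 1, "not next").

```python
import random

# Toy corpus: sentences in their original document order
sentences = [
    "I like cats.",
    "They are cute.",
    "The sky is blue.",
    "It rains a lot here.",
]

def make_nsp_pair(i):
    """Build one NSP example starting from sentence i (hypothetical helper)."""
    first = sentences[i]
    if random.random() < 0.5:
        # Positive pair: keep the real next sentence -> label 0 ("is next")
        return first, sentences[i + 1], 0
    # Negative pair: pick a random sentence that is not the true next one -> label 1 ("not next")
    second = random.choice(sentences[:i] + sentences[i + 2:])
    return first, second, 1

for _ in range(3):
    print(make_nsp_pair(0))
```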
How to study MLM and NSP:
(1) Goal: understand the design motivation and core logic behind MLM and NSP.
(2) BERT paper: focus on Section 3 (pre-training task design) of "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding".
(3) Analogy: MLM is like playing a fill-in-the-blank game, where you guess the masked word from the surrounding context (e.g. "I like [MASK]" → "cats"); NSP is like judging whether two sentences come from the same passage (e.g. "I like cats" + "They are cute." are consecutive, while "I like cats" + "The sky is blue." are random).

Hands-on:
(1) Goal: understand the implementation details of MLM and NSP through code.
(2) Code: there is no need to implement anything from scratch; just call pre-trained models from the transformers library.

```python
from transformers import BertTokenizer, BertForMaskedLM, BertForNextSentencePrediction
import torch

# Load the pre-trained model and tokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model_mlm = BertForMaskedLM.from_pretrained("bert-base-uncased")                 # MLM head
model_nsp = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")  # NSP head
# Note: BertForPreTraining bundles both heads, matching the original joint pre-training setup

# Example input (MLM)
text = "The cat sits on the [MASK]."
inputs = tokenizer(text, return_tensors="pt")
outputs = model_mlm(**inputs)
# Locate the [MASK] position and take the highest-scoring token there
mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
predicted_token_id = torch.argmax(outputs.logits[0, mask_index], dim=-1).item()
print(tokenizer.decode([predicted_token_id]))  # predicted word (e.g. "mat")

# Example input (NSP): passing two sentences lets the tokenizer insert [SEP]
# and set token_type_ids correctly
sentence1 = "I like cats."
sentence2 = "They are cute."
sentence3 = "The sky is blue."
inputs_nsp = tokenizer(sentence1, sentence2, return_tensors="pt")      # positive pair
inputs_nsp_neg = tokenizer(sentence1, sentence3, return_tensors="pt")  # negative pair

# Class 0 = "sentence B follows sentence A", class 1 = "sentence B is random"
for name, batch in [("positive", inputs_nsp), ("negative", inputs_nsp_neg)]:
    logits = model_nsp(**batch).logits
    print(name, "-> predicted class:", torch.argmax(logits, dim=-1).item())
```

GPT Pre-training: CLM

GPT's causal language modeling (CLM) is unidirectional and autoregressive: it is good at generating coherent text but cannot use the text that comes after, so it is better suited to "creation" tasks; BERT's masked language modeling (MLM) is more like a bidirectional cloze test, better at contextual understanding and therefore suited to "understanding" tasks.

1. CLM (Causal Language Modeling)

In GPT pre-training, the model uses causal language modeling (CLM): it predicts the next token from the unidirectional context (the preceding text only), mathematically P(w_t | w_1, ..., w_{t-1}). Like word-by-word dictation or a typewriter, it can only see what has already been entered and generates the continuation token by token. All GPT-series models (GPT-1/2/3/4) are based on CLM, implemented through the Transformer's unidirectional (causal) attention mask.

(1) Task: predict the next token from the preceding text, similar to how a person reads text word by word.
(2) Example: given "The cat sits on the", the model should predict that the next token is "mat".

How to study CLM:
(1) Goal: understand the core concept of CLM, its mathematical formulation, and its relationship to GPT.
(2) GPT papers: focus on Section 2 (model architecture and pre-training task) of "Improving Language Understanding by Generative Pre-Training" (the original GPT-1 paper), and read "Language Models are Unsupervised Multitask Learners" (the GPT-2 paper) to see how CLM extends to generation tasks.
(3) Analogy: CLM is like writing an essay, where the next word can only depend on what you have already written (e.g. "The cat sits on the [?]"); you cannot look ahead or go back and revise.

Hands-on:
(1) Goal: understand the implementation details of CLM through code, including the Transformer's unidirectional attention mask. A small sketch of that mask follows the example below.
(2) Code: again, no need to implement anything from scratch; just call a pre-trained model from the transformers library.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the pre-trained model and tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Input text (CLM task)
input_text = "The cat sits on the"
inputs = tokenizer(input_text, return_tensors="pt")

# Generate the continuation token by token (greedy decoding by default)
outputs = model.generate(
    **inputs,
    max_length=20,
    num_return_sequences=1,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token; reuse EOS to silence the warning
)
print(tokenizer.decode(outputs[0]))  # full sentence, e.g. "The cat sits on the mat and sleeps."
```
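To make the "unidirectional attention mask" concrete, here is a minimal sketch (not GPT-2's internal implementation) of a causal mask: a lower-triangular boolean matrix in which position t may only attend to positions up to and including t; blocked entries are set to -inf before the softmax, so they receive zero attention weight. The attention scores here are random toy values.

```python
import torch

seq_len = 5

# Lower-triangular causal mask: row t can see columns 0..t only
causal = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))

# Toy attention scores (one sequence, one head), random values for illustration
scores = torch.randn(1, seq_len, seq_len)
scores = scores.masked_fill(~causal, float("-inf"))  # block attention to future positions
weights = torch.softmax(scores, dim=-1)

print(causal.int())   # 1 = visible, 0 = masked
print(weights[0])     # each row sums to 1 and is zero above the diagonal
```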