Overview
Set up the environment
Get access to Gemma
Gemma models are hosted on Kaggle. To use Gemma, request access to the model on Kaggle and accept the license terms.
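If you are running outside a Kaggle notebook, one common approach is to export your Kaggle API token as environment variables before loading the model, so the preset can be downloaded. This is a minimal sketch: `KAGGLE_USERNAME` and `KAGGLE_KEY` are the standard Kaggle API variables, and the values below are placeholders.

```python
import os

# Placeholder credentials: replace with your own Kaggle API token
# (created under "Settings -> API" on kaggle.com).
os.environ["KAGGLE_USERNAME"] = "your_kaggle_username"
os.environ["KAGGLE_KEY"] = "your_kaggle_api_key"
```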
Install dependencies
```python
# Install Keras 3 last. See https://keras.io/getting_started/ for more details.
!pip install -q -U keras-nlp
!pip install -q -U "keras>=3"
```
Select a backend
```python
import os

os.environ["KERAS_BACKEND"] = "jax"  # Or "torch" or "tensorflow".

# Avoid memory fragmentation on JAX backend.
os.environ["XLA_PYTHON_CLIENT_MEM_FRACTION"] = "1.00"
```
Import packages
```python
import keras
import keras_nlp
```
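As a quick sanity check, you can confirm that Keras 3 is installed and that the backend picked up the environment variable (an optional snippet; `keras.config.backend()` is the Keras 3 accessor for the active backend):

```python
print(keras.__version__)       # Expect a 3.x version.
print(keras.config.backend())  # Expect "jax" given the setting above.
```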
Define the prompt template
```python
template = "Instruction:\n{question}\n\nResponse:\n{answer}"
```
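To see what the model actually receives, you can render the template with a sample question. During fine-tuning the `answer` field carries the target text, while at inference time it is left empty so the model completes it (the question below is illustrative):

```python
print(template.format(
    question="What does the `?` operator do in Rust?",
    answer="",
))
# Instruction:
# What does the `?` operator do in Rust?
#
# Response:
```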
Load the model

```python
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma2_instruct_2b_en")
gemma_lm.summary()
```

Preprocessor: "gemma_causal_lm_preprocessor"
Model: "gemma_causal_lm"
Total params: 2,614,341,888 (9.74 GB)
Trainable params: 2,614,341,888 (9.74 GB)
Non-trainable params: 0 (0.00 B)
Inference before fine-tuning
Query the Rust knowledge covered in the book
```python
prompt = template.format(
    question="How can I overload the `+` operator for arithmetic addition in Rust?",
    answer="",
)
print(gemma_lm.generate(prompt, max_length=256))
```

Instruction:
How can I overload the `+` operator for arithmetic addition in Rust?

Response:

```rust
struct Point {
    x: f64,
    y: f64,
}

impl Point {
    fn new(x: f64, y: f64) -> Self {
        Point { x, y }
    }

    fn add(self, other: Point) -> Point {
        Point {
            x: self.x + other.x,
            y: self.y + other.y,
        }
    }
}

fn main() {
    let p1 = Point::new(1.0, 2.0);
    let p2 = Point::new(3.0, 4.0);
    let result = p1 + p2;
    println!("Result: ({}, {})", result.x, result.y);
}
```

**Explanation:**

1. **Struct Definition:** We define a `Point` struct to represent points in 2D space.
2. **`add` Method:** We implement the `+` operator for the `Point`

LoRA fine-tuning
Load the dataset
```python
import json

data = []
with open('/kaggle/input/rust-official-book/dataset.jsonl', encoding='utf-8') as file:
    for line in file:
        features = json.loads(line)
        # Format the entire example as a single string.
        data.append(template.format(**features))

# Optionally use only the first 100 training examples, to keep it fast.
# data = data[:100]
```
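Since the loop calls `template.format(**features)`, each line of `dataset.jsonl` is expected to be a JSON object with `question` and `answer` keys. The record below is a made-up example (the real dataset's wording will differ) showing how one line flows through the template:

```python
import json

# A hypothetical line from dataset.jsonl; the actual file contains
# Q&A pairs drawn from the Rust book.
sample = '{"question": "How do I declare a mutable variable in Rust?", "answer": "Use `let mut`, e.g. `let mut x = 5;`"}'
features = json.loads(sample)
print(template.format(**features))
```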
```python
# Enable LoRA for the model and set the LoRA rank to 4.
gemma_lm.backbone.enable_lora(rank=4)
gemma_lm.summary()
```

Preprocessor: "gemma_causal_lm_preprocessor"
Model: "gemma_causal_lm"
Total params: 2,617,270,528 (9.75 GB)
Trainable params: 2,928,640 (11.17 MB)
Non-trainable params: 2,614,341,888 (9.74 GB)
```python
# Limit the input sequence length to 512 (to control memory usage).
gemma_lm.preprocessor.sequence_length = 512

# Use AdamW (a common optimizer for transformer models).
optimizer = keras.optimizers.AdamW(
    learning_rate=5e-5,
    weight_decay=0.01,
)
# Exclude layernorm and bias terms from decay.
optimizer.exclude_from_weight_decay(var_names=["bias", "scale"])

gemma_lm.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=optimizer,
    weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
gemma_lm.fit(data, epochs=1, batch_size=1)
```
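If you want to keep the result of this training run, you can persist the weights before the session ends. This is a minimal sketch using the standard Keras 3 `save_weights` API (the file name is an arbitrary example); note that it saves the full model weights, including the LoRA parameters, rather than a separate adapter file:

```python
# Save the fine-tuned weights; Keras 3 requires the ".weights.h5" suffix.
gemma_lm.save_weights("gemma2_rust_finetuned.weights.h5")
```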
Inference after fine-tuning
Query the Rust knowledge covered in the book
```python
prompt = template.format(
    question="How can I overload the `+` operator for arithmetic addition in Rust?",
    answer="",
)
print(gemma_lm.generate(prompt, max_length=256))
```
Note that this tutorial fine-tunes on a small, rough dataset, trains for only one epoch, and uses a low LoRA rank. To get better responses from the fine-tuned model, you can try fine-tuning on a larger and cleaner dataset, training for more epochs, raising the LoRA rank, and adjusting hyperparameters such as the learning rate; a sketch follows.
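For example, a higher-capacity configuration might look like the following. This is an untested illustration that reuses this tutorial's own API calls; the specific values (rank 8, learning rate 2e-5, 3 epochs) are assumptions, not recommendations from the original:

```python
# Hypothetical configuration: higher LoRA rank and more epochs than above.
gemma_lm.backbone.enable_lora(rank=8)  # Higher rank = more trainable parameters.
gemma_lm.preprocessor.sequence_length = 512

optimizer = keras.optimizers.AdamW(learning_rate=2e-5, weight_decay=0.01)
optimizer.exclude_from_weight_decay(var_names=["bias", "scale"])

gemma_lm.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=optimizer,
    weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
gemma_lm.fit(data, epochs=3, batch_size=1)  # More passes over the data.
```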