2 min read · from r/MachineLearning

Trials and tribulations fine-tuning & deploying Gemma-4 [P]

Hey all,

Our ML team spent this week getting training and deployment working for Gemma-4, and we wanted to document everything we ran into along the way.

  • PEFT doesn't recognize Gemma 4's custom layers. Google wrapped vision/audio projections in a new ClippableLinear class that doesn't inherit from nn.Linear, so PEFT refuses to attach LoRA, even for text-only fine-tuning. Fix: unwrap the wrappers after loading weights but before calling PEFT.
  • SFTTrainer killed training silently. TRL hardcodes use_cache=False, which breaks Gemma 4's KV-sharing attention. Loss never converges and there's no error, just garbage gradients. Fixed upstream in transformers v5.5.2+.
  • DeepSpeed ZeRO-3 saves half-empty adapters. Training loss looks perfect, but the saved LoRA file has zero-element tensors for half the layers. The model acts like it was never fine-tuned. Workaround: don't use DeepSpeed for LoRA on Gemma 4.
  • No runtime LoRA serving anywhere. vLLM and SGLang don't yet support runtime LoRA adapters for Gemma 4's multimodal architecture. You have to merge weights and remap state-dict keys manually before serving.
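For the first issue, the unwrapping step can be sketched as a recursive module replacement. The real ClippableLinear internals aren't public, so the class below is a stand-in and its `.linear` attribute is an assumption — adjust to whatever the actual wrapper exposes:

```python
import torch.nn as nn

# Stand-in for Gemma 4's wrapper class; the real attribute names are
# assumptions -- check the actual implementation before using this.
class ClippableLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        return self.linear(x).clamp(-10.0, 10.0)

def unwrap_clippable(module: nn.Module) -> None:
    """Recursively replace ClippableLinear wrappers with their inner
    nn.Linear so PEFT recognizes them and can attach LoRA."""
    for name, child in module.named_children():
        if isinstance(child, ClippableLinear):
            setattr(module, name, child.linear)  # shares weights, no copy
        else:
            unwrap_clippable(child)
```

Call this after `from_pretrained` but before `get_peft_model`, as the post describes.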
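The ZeRO-3 bug is especially nasty because training loss looks fine; a cheap sanity check is to scan the saved adapter for zero-element tensors before serving. A sketch (the file path and safetensors loader in the comment are illustrative):

```python
import torch

def find_empty_tensors(state_dict: dict) -> list:
    """Return keys whose tensors hold zero elements -- the symptom of the
    DeepSpeed ZeRO-3 + LoRA save bug described above."""
    return [k for k, t in state_dict.items() if t.numel() == 0]

# Usage sketch:
# from safetensors.torch import load_file
# sd = load_file("adapter_model.safetensors")
# bad = find_empty_tensors(sd)
# if bad:
#     raise RuntimeError(f"{len(bad)} empty adapter tensors: {bad[:5]}")
```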
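The merge-and-remap workflow for serving can be sketched as below. PEFT's `merge_and_unload` is a real API; the key prefix to strip is an assumption based on how PEFT typically wraps base models ("base_model.model."), so verify it against your own checkpoint's keys:

```python
def remap_keys(state_dict: dict, prefix: str = "base_model.model.") -> dict:
    """Strip PEFT's wrapper prefix so keys match what the serving engine
    expects. The exact prefix can differ across PEFT versions."""
    return {
        (k[len(prefix):] if k.startswith(prefix) else k): v
        for k, v in state_dict.items()
    }

# Usage sketch (model/adapter names are illustrative):
# from peft import PeftModel
# merged = PeftModel.from_pretrained(base_model, "my-adapter").merge_and_unload()
# sd = remap_keys(merged.state_dict())
# Save sd, then point vLLM/SGLang at the merged checkpoint.
```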

There's much more detail in the blog post, but hopefully this is helpful on your Gemma-4 journey as well!

submitted by /u/FallMindless3563


Tagged with

#Gemma-4
#PEFT
#LoRA
#SFTTrainer
#DeepSpeed ZeRO-3
#ClippableLinear
#KV-sharing attention
#training loss
#multimodal architecture
#weights
#runtime LoRA serving