Google Research Unveils VaultGemma, a 1B-Param Differentially Private LLM

Ars Technica

Google Research unveiled VaultGemma, a 1-billion-parameter LLM trained with differential privacy. Using scaling laws the team derived for private training, they balanced noise, compute, and data to limit memorization of training examples while keeping performance comparable to non-private models of the same size. VaultGemma is experimental and aimed at smaller, specialized models; its weights are available on Hugging Face and Kaggle under a license that forbids misuse.
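
The article itself includes no code, but the mechanism behind differentially private training is worth a concrete look. Below is a minimal DP-SGD-style update step in PyTorch; it is a sketch under stated assumptions, not VaultGemma's actual training code (the function name, hyperparameters, and loop structure are all illustrative). Each example's gradient is clipped to a fixed norm so no single record can dominate the update, and Gaussian noise calibrated to that bound is added before the parameters move; this is the noise/compute/data tradeoff the scaling laws quantify.

```python
import torch

def dp_sgd_step(model, loss_fn, batch, lr=1e-3, clip_norm=1.0, noise_multiplier=1.0):
    """One illustrative DP-SGD step: per-example clipping, then Gaussian noise.
    (Hypothetical helper, not from the VaultGemma release.)"""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    # Per-example gradients: clipping bounds any single example's
    # influence on the update, which is what limits memorization.
    for x, y in batch:
        model.zero_grad()
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        grads = [p.grad.detach().clone() for p in params]
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (total_norm + 1e-6), max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)

    # Gaussian noise scaled to the clipping bound is what buys the formal
    # (epsilon, delta) privacy guarantee; more noise means less utility.
    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = torch.randn_like(p) * (noise_multiplier * clip_norm)
            p.add_(-(lr / len(batch)) * (s + noise))
```

In practice one would reach for a library such as Opacus rather than a hand-rolled loop, and track the cumulative privacy budget with a privacy accountant.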

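Since the weights are published on Hugging Face, loading them should follow the standard `transformers` pattern. The repo id below is an assumption (verify it against the actual Hugging Face listing), and the prompt is arbitrary:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/vaultgemma-1b"  # assumed repo id; verify on Hugging Face

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Differential privacy limits memorization by"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
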