Google's new AI model can chat without leaking your secrets

Google recently introduced VaultGemma, a new one-billion-parameter AI language model built to keep your personal info safe.
Instead of memorizing everything it sees, VaultGemma is trained with a privacy technique called sequence-level differential privacy, which adds carefully calibrated noise during training so no individual training sequence leaves a recoverable trace.
That means the model can't spit out private details later.
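To give a feel for the mechanism, differentially private training is typically done DP-SGD style: clip each example's gradient so no single example has outsized influence, then add Gaussian noise before updating the model. This is a minimal, hypothetical sketch of that idea (the `dp_noisy_gradient` helper is illustrative, not VaultGemma's actual training code; in sequence-level DP, each "example" here would be one training sequence):

```python
import numpy as np

def dp_noisy_gradient(per_example_grads, clip_norm=1.0,
                      noise_multiplier=1.0, rng=None):
    """Toy DP-SGD-style aggregation (illustrative only).

    1. Clip each example's gradient to L2 norm <= clip_norm,
       bounding any one example's contribution.
    2. Sum the clipped gradients and add Gaussian noise scaled
       to clip_norm, masking individual contributions.
    """
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down (never up) so the norm is at most clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm,
                       size=total.shape)
    return (total + noise) / len(per_example_grads)
```

The key privacy lever is that the noise is calibrated to the clipping bound, so the update looks statistically almost the same whether or not any one example was in the batch.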

VaultGemma works about as well as popular non-private models from 2020

VaultGemma gives up some raw capability to protect your data but still performs about as well as popular non-private models from 2020, like GPT-2.
Memorization tests show it doesn't leak its training data, so you get solid performance without worrying about your secrets.

Google also shared VaultGemma's model weights and training methods

Google made VaultGemma's model weights and training methods public to help researchers build safer AI for everyone.
If more companies follow this lead, future apps could handle your personal data much more securely—making things like chatbots or virtual assistants way less risky for your privacy.