Do Not Mimic My Voice: Speaker Identity Unlearning for Zero-Shot Text-to-Speech

Submitted to the Forty-second International Conference on Machine Learning

Abstract

The rapid advancement of Zero-Shot Text-to-Speech (ZS-TTS) technology has enabled high-fidelity voice synthesis from minimal audio cues, raising significant privacy and ethical concerns. Despite these threats to voice privacy, selectively removing the knowledge needed to replicate specific individuals' voices from pre-trained model parameters has remained unexplored. In this paper, we address the new challenge of speaker identity unlearning for ZS-TTS systems. To meet this goal, we propose the first machine unlearning frameworks for ZS-TTS, most notably Teacher-Guided Unlearning (TGU), designed to ensure that the model forgets designated speaker identities while retaining its ability to generate accurate speech for other speakers. Our proposed methods incorporate randomness to prevent consistent replication of forget speakers' voices, ensuring that unlearned identities remain untraceable. Additionally, we propose a new evaluation metric, speaker-Zero Retrain Forgetting (spk-ZRF), which assesses the model's ability to disregard prompts associated with forgotten speakers, effectively neutralizing its knowledge of these voices. Experiments conducted on a state-of-the-art model demonstrate that TGU prevents the model from replicating forget speakers' voices while maintaining high quality for other speakers.
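The abstract does not define spk-ZRF in full. As a hedged sketch, assuming it adapts the Zero Retrain Forgetting (ZRF) metric from the machine unlearning literature to speaker identity, it would compare the unlearned model M_u against a randomly initialized model M_r on the N_f prompts of forget speakers:

    \text{spk-ZRF} = 1 - \frac{1}{N_f} \sum_{i=1}^{N_f} \mathcal{JS}\left( M_u(x_i) \,\|\, M_r(x_i) \right)

where \mathcal{JS} is the Jensen-Shannon divergence between the two models' output (e.g., speaker-embedding) distributions for prompt x_i. A score near 1 would mean forget-speaker prompts elicit behavior as uninformative as that of a model never exposed to those voices; the paper's exact formulation may differ.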

Main Figure

The training procedure for the forget set in (b) the SGU framework and (c) the TGU framework, along with (a) the training procedure for the remain set in both SGU and TGU.
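As a minimal sketch of the procedures in (a)-(c), under assumptions of our own (the names tgu_step, student, teacher, and random_prompt, the callable model interface, and the L1 matching loss are all hypothetical, not the paper's exact objective): the frozen pre-trained teacher supplies the training target; on the remain set it is conditioned on the true speaker prompt, while on the forget set it is conditioned on a randomly drawn other speaker's prompt, so forget-speaker prompts never map to a consistent voice.

import torch
import torch.nn.functional as F

def tgu_step(student, teacher, text, prompt, random_prompt, is_forget):
    """One hypothetical TGU training step (all names are assumptions)."""
    with torch.no_grad():
        # Forget set (c): the target is the teacher's output for a *random*
        # speaker's prompt. Remain set (a): the target is the teacher's
        # output for the true prompt.
        guide = random_prompt if is_forget else prompt
        target = teacher(text, guide)
    # The student always conditions on the prompt it was actually given.
    pred = student(text, prompt)
    return F.l1_loss(pred, target)

The randomness mentioned in the abstract enters through random_prompt, which is re-sampled at every step, so the unlearned model cannot settle on any single replacement voice for a forgotten speaker.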

Demo Overview

This page provides samples of our approach in various “Forget Speaker” scenarios. Below, you will find audio prompts, ground-truth references, and outputs of our unlearning models.

Our Methods on Forget Speakers

Our methods effectively remove a Zero-Shot Text-to-Speech model's capability of mimicking the voices of requested Forget Speakers. The Forget Speakers can either be seen during pre-training (In-Domain) or entirely unseen (Out-of-Domain).

Model Pre-trained on LibriHeavy

Model Pre-trained on LibriTTS

We reproduce the results of Table 1 of our paper with different datasets to show generalizability. Unlearning 10 speakers at once required 10K steps, around 2% of the 500K steps needed to pre-train the model on LibriTTS.

Our Methods on Remain Speakers

While our methods effectively prevent synthesis of Forget Speakers' voices, they retain the zero-shot performance for all other Remain Speakers. Here, the Remain Speakers are unseen voices from LibriSpeech, tested in the zero-shot setting.

Baseline Comparison on Forget Speakers

Baseline Comparison on Remain Speakers

Handling Similar Speakers: Robustness

When a Remain Speaker's voice is very similar to a Forget Speaker's, surprisingly, even without any speaker classification step, our method still synthesizes the Remain Speaker faithfully while preventing synthesis of the Forget Speaker. Here, we identify three speakers from the LibriSpeech dataset with high similarity to Forget Speakers and evaluate model performance. Notice how the Remain Speakers are still well synthesized!
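To make the similarity selection concrete, below is one way such "high similarity" speakers could be found; this is a sketch under our own assumptions (the off-the-shelf ECAPA-TDNN verifier from SpeechBrain and cosine similarity over its embeddings are our choices, not necessarily the paper's actual procedure).

import torch
import torchaudio
from speechbrain.pretrained import EncoderClassifier  # moved to speechbrain.inference in newer releases

# Pretrained speaker-verification encoder (assumed dependency).
encoder = EncoderClassifier.from_hparams(
    source="speechbrain/spkrec-ecapa-voxceleb", savedir="ecapa"
)

def embed(path):
    """Speaker embedding for one utterance."""
    wav, sr = torchaudio.load(path)
    if sr != 16000:  # the VoxCeleb model expects 16 kHz input
        wav = torchaudio.functional.resample(wav, sr, 16000)
    return encoder.encode_batch(wav).squeeze()

def similarity(forget_wav, remain_wav):
    """Cosine similarity between two utterances' speaker embeddings."""
    a, b = embed(forget_wav), embed(remain_wav)
    return torch.nn.functional.cosine_similarity(a, b, dim=-1).item()

Ranking candidate LibriSpeech speakers by similarity(forget_utt, candidate_utt) and keeping the top three would yield the confusable Remain Speakers evaluated here.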

BibTeX

@inproceedings{anonymous2024do,
  title={Do Not Mimic My Voice: Speaker Identity Unlearning for Zero-Shot Text-to-Speech},
  author={Anonymous},
  booktitle={Submitted to the Forty-second International Conference on Machine Learning},
  year={2025},
  url={https://openreview.net/forum?id=1EgFJDjodU&noteId=1EgFJDjodU},
  note={under review}
}