Demo Overview
This page provides audio samples of our approach in various “Forget Speaker” scenarios. Below, you will find audio prompts, ground-truth references, and outputs of our unlearning models.
The rapid advancement of Zero-Shot Text-to-Speech (ZS-TTS) technology has enabled high-fidelity voice synthesis from minimal audio cues, raising significant privacy and ethical concerns. Despite these threats to voice privacy, selectively removing the knowledge required to replicate unwanted individual voices from pre-trained model parameters has remained unexplored. In this paper, we address the new challenge of speaker identity unlearning for ZS-TTS systems. To meet this goal, we propose the first machine unlearning frameworks for ZS-TTS, most notably Teacher-Guided Unlearning (TGU), designed to ensure the model forgets designated speaker identities while retaining its ability to generate accurate speech for other speakers. Our proposed methods incorporate randomness to prevent consistent replication of forget speakers' voices, ensuring that unlearned identities remain untraceable. Additionally, we propose a new evaluation metric, speaker-Zero Retrain Forgetting (spk-ZRF), which assesses the model's ability to disregard prompts associated with forgotten speakers, effectively neutralizing its knowledge of these voices. Experiments conducted on a state-of-the-art model demonstrate that TGU prevents the model from replicating forget speakers' voices while maintaining high quality for other speakers.
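The exact spk-ZRF formulation is given in the paper; as a rough, hedged illustration of the Zero Retrain Forgetting idea it adapts, the sketch below scores how closely an unlearned model's output distributions match those of a randomly initialized model that never saw the forget speakers. The discrete distributions and the `js_divergence` helper here are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence (base 2, bounded in [0, 1]) between two
    discrete probability distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    return 0.5 * np.sum(p * np.log2(p / m)) + 0.5 * np.sum(q * np.log2(q / m))

def zrf_style_score(unlearned_outputs, random_model_outputs):
    """ZRF-style score: 1 minus the mean JS divergence between the unlearned
    model's output distributions (on forget-speaker prompts) and those of a
    randomly initialized model. A score near 1 means the unlearned model
    behaves as if it holds no knowledge of the forget speakers."""
    divs = [js_divergence(p, q)
            for p, q in zip(unlearned_outputs, random_model_outputs)]
    return 1.0 - float(np.mean(divs))
```

Under this reading, a fully unlearned model's behavior on forget prompts is as uninformative as a never-trained model's, pushing the score toward 1.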
Our methods effectively remove a Zero-Shot Text-to-Speech model's capability to mimic the voices of requested Forget Speakers. A Forget Speaker may either have been seen during pre-training (In-Domain) or never seen at all (Out-of-Domain).
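As a loose numerical sketch of the teacher-guided idea (a sketch only: the real TGU operates on a full ZS-TTS model, while the toy `teacher` function, the embedding vectors, and the L2 loss here are all illustrative assumptions): when the input prompt belongs to a forget speaker, the training target is produced by the frozen teacher conditioned on a randomly drawn different speaker prompt, which supplies the randomness that keeps the forgotten identity untraceable.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher(text_emb, speaker_prompt):
    # Stand-in for the frozen pre-trained ZS-TTS teacher: a toy deterministic
    # mix of a text embedding and a speaker-prompt embedding.
    return 0.7 * text_emb + 0.3 * speaker_prompt

def tgu_target(text_emb, speaker_prompt, is_forget, other_prompts):
    """Illustrative Teacher-Guided Unlearning target: forget-speaker prompts
    are rerouted to the teacher conditioned on a randomly sampled other
    speaker prompt; all remaining prompts keep the ordinary teacher target."""
    if is_forget:
        random_prompt = other_prompts[rng.integers(len(other_prompts))]
        return teacher(text_emb, random_prompt)
    return teacher(text_emb, speaker_prompt)

def tgu_loss(student_out, target):
    # Simple L2 regression of the student's output toward the teacher target.
    return float(np.mean((student_out - target) ** 2))
```

Training the student against `tgu_target` makes its output on forget prompts depend on a random identity rather than the forget speaker's, while remain prompts are reproduced as before.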
Target Text | *Audio Prompt | *Sample to Forget | SGU (Ours) | TGU (Ours) |
---|---|---|---|---|
I made fifteen or sixteen dresses for her during the spring and early part of the summer when she left Washington spending the hot weather at Saratoga Long Branch and other places. | ||||
*An actual audio file spoken by the Forget Speaker but never seen during the training or unlearning process.
Target Text | *Audio Prompt | *Sample to Forget | SGU (Ours) | TGU (Ours) |
---|---|---|---|---|
She had to go through a lot of red tape before she got it. Had quite a time of it she did, and say kid, that woman ain't so bad. | ||||
*An actual audio file spoken by the Forget Speaker but never seen during the training or unlearning process.
Target Text | *Audio Prompt | *Sample to Forget | SGU (Ours) | TGU (Ours) |
---|---|---|---|---|
If he falls in my way I shall tell him my mind. If he don't fall in my way I shan't for it won't be worth my while to do it. | ||||
*An actual audio file spoken by the Forget Speaker but never seen during the training or unlearning process.
Target Text | *Audio Prompt | *Sample to Forget | SGU (Ours) | TGU (Ours) |
---|---|---|---|---|
I contend then, that the true place to fight the battle is in the union, and within the provisions of the constitution. | ||||
*An actual audio file spoken by the Forget Speaker but never seen during the training or unlearning process.
Target Text | *Audio Prompt | *Sample to Forget | SGU (Ours) | TGU (Ours) |
---|---|---|---|---|
Through these the ends of the bars passing across the door were placed. Which if anything made the opening when closed and fastened inside stronger than any other portion of the structure. | ||||
*An actual audio file spoken by the Forget Speaker but never seen during the training or unlearning process.
We reproduce the results of Table 1 of our paper on different datasets to demonstrate generalizability. Unlearning 10 speakers at once required 10K steps, roughly 2% of the 500K steps required to pre-train the model on LibriTTS.
Target Text | *Audio Prompt | *Sample to Forget | SGU (Ours) | TGU (Ours) |
---|---|---|---|---|
He saw no folk in the streets save here and there and an old woman sitting at the door of her house and maybe a young child with her. | ||||
*An actual audio file spoken by the Forget Speaker but never seen during the training or unlearning process.
Target Text | *Audio Prompt | *Sample to Forget | SGU (Ours) | TGU (Ours) |
---|---|---|---|---|
He believed neither in God nor the devil, but was much concerned about the question of improvement of the clergy and the maintenance of their revenues and took special trouble to keep up the church in his village. | ||||
*An actual audio file spoken by the Forget Speaker but never seen during the training or unlearning process.
Target Text | *Audio Prompt | *Sample to Forget | SGU (Ours) | TGU (Ours) |
---|---|---|---|---|
I'm hoping for the best, said Harry. | ||||
*An actual audio file spoken by the Forget Speaker but never seen during the training or unlearning process.
Target Text | *Audio Prompt | *Sample to Forget | SGU (Ours) | TGU (Ours) |
---|---|---|---|---|
He was afraid to be sure and his heart was beating fast with excitement of the moment, but he knew he must regain the magic umbrella if he would save his comrades and himself from destruction. For without it, they could never return to the Earth. | ||||
*An actual audio file spoken by the Forget Speaker but never seen during the training or unlearning process.
While our methods effectively prevent synthesis of Forget Speakers' voices, they retain Zero-Shot performance for all other Remain Speakers. Here, the Remain Speakers are unseen voices from LibriSpeech, tested in the Zero-Shot setting.
Target Text | *Audio Prompt | *Sample to Remain | SGU (Ours) | TGU (Ours) |
---|---|---|---|---|
They then renewed their journey and under the better light made a safe crossing of the stable roofs. |
*An actual audio file spoken by the Remain Speaker but never seen during the training or unlearning process.
Target Text | *Audio Prompt | *Sample to Remain | SGU (Ours) | TGU (Ours) |
---|---|---|---|---|
...and towards Christmas, he was one of the first that was cut down. |
*An actual audio file spoken by the Remain Speaker but never seen during the training or unlearning process.
Target Text | *Audio Prompt | *Sample to Remain | SGU (Ours) | TGU (Ours) |
---|---|---|---|---|
Other circumstances permitting, that instinct disposes men to look with favor upon productive efficiency and on whatever is of human use. |
*An actual audio file spoken by the Remain Speaker but never seen during the training or unlearning process.
Target Text | *Audio Prompt | *Sample to Remain | SGU (Ours) | TGU (Ours) |
---|---|---|---|---|
He darted like an arrow through all the halls, down all the stairs, and across the yard. |
*An actual audio file spoken by the Remain Speaker but never seen during the training or unlearning process.
Target Text | *Audio Prompt | Original | Negative Gradient | KL Loss | SGU (Ours) | TGU (Ours) |
---|---|---|---|---|---|---|
She had to go through a lot of red tape before she got it. Had quite a time of it she did, and say kid, that woman ain't so bad. | ||||||
This table compares the outputs of several models: Original, Negative Gradient, KL Loss, SGU, and TGU. It demonstrates how each model performs in the LibriHeavy Forget Speaker 1 scenario.
Target Text | *Audio Prompt | Original | Negative Gradient | KL Loss | SGU (Ours) | TGU (Ours) |
---|---|---|---|---|---|---|
If he falls in my way I shall tell him my mind. If he don't fall in my way I shan't for it won't be worth my while to do it. | ||||||
This table compares the outputs of several models: Original, Negative Gradient, KL Loss, SGU, and TGU. It demonstrates how each model performs in the LibriHeavy Forget Speaker 2 scenario.
Target Text | *Audio Prompt | Original | Negative Gradient | KL Loss | SGU (Ours) | TGU (Ours) |
---|---|---|---|---|---|---|
I contend then, that the true place to fight the battle is in the union, and within the provisions of the constitution. | ||||||
This table compares the outputs of several models: Original, Negative Gradient, KL Loss, SGU, and TGU. It demonstrates how each model performs in the LibriHeavy Forget Speaker 3 scenario.
Target Text | *Audio Prompt | Original | Negative Gradient | KL Loss | SGU (Ours) | TGU (Ours) |
---|---|---|---|---|---|---|
Through these the ends of the bars passing across the door were placed. Which if anything made the opening when closed and fastened inside stronger than any other portion of the structure. | ||||||
This table compares the outputs of several models: Original, Negative Gradient, KL Loss, SGU, and TGU. It demonstrates how each model performs in the LibriHeavy Forget Speaker 4 scenario.
Target Text | *Audio Prompt | Original | Negative Gradient | KL Loss | SGU (Ours) | TGU (Ours) |
---|---|---|---|---|---|---|
They then renewed their journey and under the better light made a safe crossing of the stable roofs. |
Target Text | *Audio Prompt | Original | Negative Gradient | KL Loss | SGU (Ours) | TGU (Ours) |
---|---|---|---|---|---|---|
...and towards Christmas, he was one of the first that was cut down. |
Target Text | *Audio Prompt | Original | Negative Gradient | KL Loss | SGU (Ours) | TGU (Ours) |
---|---|---|---|---|---|---|
Other circumstances permitting, that instinct disposes men to look with favor upon productive efficiency and on whatever is of human use. |
Target Text | *Audio Prompt | Original | Negative Gradient | KL Loss | SGU (Ours) | TGU (Ours) |
---|---|---|---|---|---|---|
He darted like an arrow through all the halls, down all the stairs, and across the yard. |
When a Remain Speaker's voice is very similar to a Forget Speaker's, our methods, surprisingly, still synthesize the Remain Speaker faithfully while preventing synthesis of the Forget Speaker, even without any speaker classification step. Here, we identify three speakers from the LibriSpeech dataset with high similarity to Forget Speakers and evaluate model performance. Notice how the Remain Speakers are still synthesized well!
 | Remain Speaker Sample | Forget Speaker Sample | SIM |
---|---|---|---|
1 | | | 0.442 |
2 | | | 0.504 |
3 | | | 0.479 |
Slide next to see the generated outputs. SIM refers to the Speaker Similarity between the remain speaker sample and the forget speaker sample. A higher SIM means the two speech samples sound more similar to each other, hence a higher probability that they come from the same speaker.
*Remain Sample as Audio Prompt | *Sample to Forget | SGU (Ours) | TGU (Ours) | |
---|---|---|---|---|
1 | ||||
2 | ||||
3 | ||||
*An actual audio file spoken by the Remain Speaker but never seen during the training or unlearning process.
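SIM values like those in the table above are commonly computed as the cosine similarity between speaker embeddings extracted by a speaker-verification model; the sketch below assumes the embeddings have already been extracted (the extraction model is not specified on this page and is left out here).

```python
import numpy as np

def speaker_similarity(emb_a, emb_b):
    """Cosine similarity between two speaker embeddings: values near 1 suggest
    the two utterances were likely spoken by the same person."""
    a = np.asarray(emb_a, dtype=float)
    b = np.asarray(emb_b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

For example, a SIM of about 0.5, as in the pairs above, indicates embeddings that are noticeably but not decisively aligned.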
@inproceedings{
anonymous2024do,
title={Do Not Mimic My Voice: Speaker Identity Unlearning for Zero-Shot Text-to-Speech},
author={Anonymous},
booktitle={Submitted to the Forty-second International Conference on Machine Learning},
year={2025},
url={https://openreview.net/forum?id=1EgFJDjodU&noteId=1EgFJDjodU},
note={under review}
}