StutterZero: Speech Conversion for Stuttering Transcription and Correction
Recorded: Dec. 3, 2025, 3:04 a.m.
Original:
[2510.18938] StutterZero and StutterFormer: End-to-End Speech Conversion for Stuttering Transcription and Correction
Electrical Engineering and Systems Science > Audio and Speech Processing
arXiv:2510.18938 (eess)
[Submitted on 21 Oct 2025 (v1), last revised 5 Nov 2025 (this version, v2)]
Abstract: Over 70 million people worldwide experience stuttering, yet most automatic speech systems misinterpret disfluent utterances or fail to transcribe them accurately. Existing methods for stutter correction rely on handcrafted feature extraction or multi-stage automatic speech recognition (ASR) and text-to-speech (TTS) pipelines, which separate transcription from audio reconstruction and often amplify distortions. This work introduces StutterZero and StutterFormer, the first end-to-end waveform-to-waveform models that directly convert stuttered speech into fluent speech while jointly predicting its transcription. StutterZero employs a convolutional-bidirectional LSTM encoder-decoder with attention, whereas StutterFormer integrates a dual-stream Transformer with shared acoustic-linguistic representations. Both architectures are trained on paired stuttered-fluent data synthesized from the SEP-28K and LibriStutter corpora and evaluated on unseen speakers from the FluencyBank dataset. Across all benchmarks, StutterZero had a 24% decrease in Word Error Rate (WER) and a 31% improvement in semantic similarity (BERTScore) compared to the leading Whisper-Medium model. StutterFormer achieved better results, with a 28% decrease in WER and a 34% improvement in BERTScore. The results validate the feasibility of direct end-to-end stutter-to-fluent speech conversion, offering new opportunities for inclusive human-computer interaction, speech therapy, and accessibility-oriented AI systems.
Subjects: Audio and Speech Processing (eess.AS); Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
Submission history: From Qianheng Xu
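The abstract's headline numbers are relative reductions in Word Error Rate. WER is the word-level edit distance between a reference transcript and a hypothesis, normalized by reference length; a minimal stdlib sketch (illustrative only, not the paper's evaluation code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # DP table: d[i][j] = edits needed to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                     # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j                     # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# A stuttered hypothesis adds one spurious token vs. the intended sentence:
print(wer("the cat sat", "th- the cat sat"))  # 1 insertion / 3 words ≈ 0.33
```

A "24% decrease in WER" in the abstract is relative: a baseline WER of, say, 0.25 would drop to about 0.19.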
Summarized:
This research paper, authored by Qianheng Xu, introduces StutterZero and StutterFormer, end-to-end speech conversion models designed to address the challenges stuttering poses for automatic speech recognition. The core innovation is a direct waveform-to-waveform approach that lets the models transcribe and convert stuttered speech into fluent speech simultaneously. The study trains on paired stuttered-fluent data synthesized from the SEP-28K and LibriStutter corpora and evaluates on unseen speakers from the FluencyBank dataset. StutterZero uses a convolutional-bidirectional LSTM encoder-decoder with attention mechanisms, while StutterFormer integrates a dual-stream Transformer with shared acoustic-linguistic representations, a key architectural difference that appears to drive its stronger results. Across all benchmarks, StutterZero achieved a 24% reduction in Word Error Rate (WER) and a 31% improvement in semantic similarity (BERTScore) relative to the Whisper-Medium baseline; StutterFormer did better still, with a 28% WER reduction and a 34% BERTScore improvement. These results support the feasibility of direct end-to-end stutter-to-fluent conversion and open possibilities for inclusive human-computer interaction, supportive speech therapy applications, and accessibility-focused AI systems. Notably, the dual-stream Transformer design outperformed the convolutional-BiLSTM design on every reported metric. Further evaluation on larger sets of real-world stuttered speech would help solidify the impact of the StutterFormer model.
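Both summarized architectures rely on attention to let the decoder weight encoder states when producing each output step. The paper's code is not reproduced here, so the following is a generic dot-product attention sketch in plain Python (hypothetical form, stdlib only), not the authors' implementation:

```python
import math

def attention(query, keys, values):
    """Dot-product attention: weight each value vector by the
    softmax-normalized similarity between the query and its key."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(scores)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Context vector: weighted sum of the encoder value vectors
    dim = len(values[0])
    context = [sum(w * v[d] for w, v in zip(weights, values)) for d in range(dim)]
    return context, weights

# Toy example: a decoder query attends over three encoder states
keys = values = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
context, weights = attention([1.0, 0.0], keys, values)
print(weights)  # weights sum to 1; the first (most similar) key gets the largest weight
```

In the LSTM encoder-decoder setting, the query would be the decoder's hidden state and the keys/values the encoder's per-frame outputs; the context vector then conditions the next prediction.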