LmCast :: Stay tuned in

StutterZero: Speech Conversion for Stuttering Transcription and Correction

Recorded: Dec. 3, 2025, 3:04 a.m.


[2510.18938] StutterZero and StutterFormer: End-to-End Speech Conversion for Stuttering Transcription and Correction


arXiv:2510.18938 (eess)

[Submitted on 21 Oct 2025 (v1), last revised 5 Nov 2025 (this version, v2)]
Title: StutterZero and StutterFormer: End-to-End Speech Conversion for Stuttering Transcription and Correction
Authors: Qianheng Xu

Abstract: Over 70 million people worldwide experience stuttering, yet most automatic speech systems misinterpret disfluent utterances or fail to transcribe them accurately. Existing methods for stutter correction rely on handcrafted feature extraction or multi-stage automatic speech recognition (ASR) and text-to-speech (TTS) pipelines, which separate transcription from audio reconstruction and often amplify distortions. This work introduces StutterZero and StutterFormer, the first end-to-end waveform-to-waveform models that directly convert stuttered speech into fluent speech while jointly predicting its transcription. StutterZero employs a convolutional-bidirectional LSTM encoder-decoder with attention, whereas StutterFormer integrates a dual-stream Transformer with shared acoustic-linguistic representations. Both architectures are trained on paired stuttered-fluent data synthesized from the SEP-28K and LibriStutter corpora and evaluated on unseen speakers from the FluencyBank dataset. Across all benchmarks, StutterZero had a 24% decrease in Word Error Rate (WER) and a 31% improvement in semantic similarity (BERTScore) compared to the leading Whisper-Medium model. StutterFormer achieved better results, with a 28% decrease in WER and a 34% improvement in BERTScore. The results validate the feasibility of direct end-to-end stutter-to-fluent speech conversion, offering new opportunities for inclusive human-computer interaction, speech therapy, and accessibility-oriented AI systems.


Comments: 13 pages, 5 figures

Subjects: Audio and Speech Processing (eess.AS); Artificial Intelligence (cs.AI); Computation and Language (cs.CL)

Cite as: arXiv:2510.18938 [eess.AS] (or arXiv:2510.18938v2 [eess.AS] for this version)
DOI: https://doi.org/10.48550/arXiv.2510.18938 (arXiv-issued DOI via DataCite)

Submission history: From: Qianheng Xu
[v1] Tue, 21 Oct 2025 17:54:36 UTC (9,663 KB)
[v2] Wed, 5 Nov 2025 00:00:48 UTC (9,657 KB)


This research paper, authored by Qianheng Xu, introduces StutterZero and StutterFormer, end-to-end speech conversion models designed to address the challenges stuttering poses for automatic speech recognition. The core innovation is a direct waveform-to-waveform approach: a single model converts stuttered speech into fluent speech while simultaneously transcribing it. Both models are trained on paired stuttered-fluent data synthesized from the SEP-28K and LibriStutter corpora and evaluated on unseen speakers from the FluencyBank dataset.
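One natural way to read "convert and transcribe at once" is a two-term training objective: a reconstruction loss against the fluent reference audio plus a transcription loss. The sketch below uses L1 over spectrogram frames and CTC over character logits with a 0.5 weighting; all three choices are assumptions for illustration, not the paper's stated loss.

```python
# Hedged sketch of a joint conversion + transcription objective.
import torch
import torch.nn.functional as F

def joint_loss(pred_mel, target_mel, char_logits, char_targets,
               input_lengths, target_lengths, lam=0.5):
    mel_loss = F.l1_loss(pred_mel, target_mel)            # fluent-audio term
    log_probs = char_logits.log_softmax(-1).transpose(0, 1)  # (T, B, V) for CTC
    ctc = F.ctc_loss(log_probs, char_targets, input_lengths,
                     target_lengths, blank=0)             # transcription term
    return mel_loss + lam * ctc

B, T, V, S = 2, 400, 32, 50
loss = joint_loss(torch.randn(B, T, 80), torch.randn(B, T, 80),
                  torch.randn(B, T, V), torch.randint(1, V, (B, S)),
                  torch.full((B,), T), torch.full((B,), S))
```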

StutterZero uses a convolutional bidirectional LSTM encoder-decoder with attention, while StutterFormer integrates a dual-stream Transformer with shared acoustic-linguistic representations, an architectural difference that appears to account for the performance gap between the two models.

Across the benchmarks, StutterZero achieved a 24% reduction in Word Error Rate (WER) and a 31% improvement in semantic similarity as measured by BERTScore, compared to the Whisper-Medium model. StutterFormer did better still, with a 28% decrease in WER and a 34% improvement in BERTScore. These results support the feasibility of direct end-to-end stutter-to-fluent conversion, and suggest that for this task the Transformer-based design outperforms the recurrent one. They also open possibilities for inclusive human-computer interaction, supportive speech therapy applications, and accessibility-focused AI systems. Further evaluation on real-world stuttered speech and larger datasets would help solidify the StutterFormer results.
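Note that the headline numbers are relative changes. The snippet below makes the arithmetic concrete; the baseline WER is hypothetical, since only percent changes versus Whisper-Medium are reported here.

```python
# WER is edit operations over reference length; the rest is relative arithmetic.
def wer(subs, dels, ins, n_ref_words):
    return (subs + dels + ins) / n_ref_words

baseline = 0.30  # hypothetical Whisper-Medium WER, for illustration only
print(f"StutterZero:   {baseline * (1 - 0.24):.3f}")  # 0.228
print(f"StutterFormer: {baseline * (1 - 0.28):.3f}")  # 0.216
```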