Abstract
Text-based speech editing (TSE) techniques allow users to edit the output audio by modifying the input text transcript rather than the audio itself. Despite much progress in neural network-based TSE, current methods focus on reducing the difference between the generated speech segment and the reference target within the editing region, while ignoring the segment's local and global fluency with respect to its context and the original utterance.
To maintain speech fluency, we propose a fluency-aware speech editing model, termed FluentEditor, which introduces fluency-aware training criteria into TSE training. Specifically, the acoustic consistency constraint encourages the transition between the edited region and its neighboring acoustic segments to be as smooth as in the ground truth, while the prosody consistency constraint ensures that the prosody attributes within the edited region remain consistent with the overall style of the original utterance.
The subjective and objective experimental results on VCTK demonstrate that our FluentEditor outperforms all advanced baselines in terms of naturalness and fluency. The audio samples and code are available at: https://github.com/ai-s2-lab/fluenteditor
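As a rough illustration of how such criteria can be attached to a standard reconstruction objective, the PyTorch-style sketch below combines an L1 mel reconstruction loss with two auxiliary terms. The helper names, the boundary-delta and mean-statistics features, and the weights `lambda_ac` / `lambda_pc` are illustrative assumptions and do not reproduce the exact formulation used in FluentEditor or its released code.

```python
# Minimal sketch of a fluency-aware TSE loss (assumptions, not the paper's exact objective).
# Mel tensors are shaped (batch, frames, n_mels); [left, right) marks the edited region,
# assumed to lie strictly inside the utterance (1 <= left < right < frames).
import torch
import torch.nn.functional as F


def acoustic_consistency_loss(pred_mel, gt_mel, left, right):
    """Match the frame-to-frame transitions at the edit boundaries to the ground truth."""
    def boundary_deltas(mel):
        return torch.stack([
            mel[:, left] - mel[:, left - 1],    # transition into the edited region
            mel[:, right] - mel[:, right - 1],  # transition out of the edited region
        ], dim=1)
    return F.mse_loss(boundary_deltas(pred_mel), boundary_deltas(gt_mel))


def prosody_consistency_loss(pred_mel, gt_mel, left, right):
    """Keep the average statistics of the edited region close to those of the whole utterance."""
    edited_stats = pred_mel[:, left:right].mean(dim=1)
    global_stats = gt_mel.mean(dim=1)
    return F.mse_loss(edited_stats, global_stats)


def fluency_aware_loss(pred_mel, gt_mel, left, right, lambda_ac=1.0, lambda_pc=1.0):
    """Reconstruction loss plus the two fluency constraints."""
    recon = F.l1_loss(pred_mel, gt_mel)
    ac = acoustic_consistency_loss(pred_mel, gt_mel, left, right)
    pc = prosody_consistency_loss(pred_mel, gt_mel, left, right)
    return recon + lambda_ac * ac + lambda_pc * pc
```

In the actual model one would expect the consistency terms to be computed from learned acoustic and prosody representations rather than raw mel statistics; the sketch only shows where the two constraints enter the total training loss.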
Speech Demo
Dataset: VCTK
Operations: Insertion and Replacement
1. FluentEditor performance in terms of Insertion and Replacement
Insertion
| Item_name | GT (Original_Text) | FluentEditor (Edited_Text) |
|---|---|---|
| p308_100 | We have no idea what caused the derailment. | We have absolutely no idea what caused the derailment. |
| p272_017 | Others have tried to explain the phenomenon physically. | Others have tried to explain the rare phenomenon physically. |