@phdthesis{aperez2022,
title = {Deep Neural Networks for Automatic Speech-To-Speech Translation of Open Educational Resources},
author = {Pérez González de Martos, Alejandro},
url = {http://hdl.handle.net/10251/184019},
doi = {10.4995/Thesis/10251/184019},
year = {2022},
date = {2022-06-15},
school = {Universitat Politècnica de València},
note = {Advisors: Alfons Juan Ciscar and Alberto Sanchis Navarro},
keywords = {automatic dubbing, cross-lingual voice cloning, educational resources, simultaneous machine interpretation, text-to-speech},
pubstate = {published},
tppubtype = {phdthesis}
}
@inproceedings{Pérez-González-de-Martos2021,
title = {Towards simultaneous machine interpretation},
author = {Alejandro Pérez-González-de-Martos and Javier Iranzo-Sánchez and Adrià {Giménez Pastor} and Javier Jorge and Joan-Albert Silvestre-Cerdà and Jorge Civera and Albert Sanchis and Alfons Juan},
doi = {10.21437/Interspeech.2021-201},
year = {2021},
date = {2021-01-01},
booktitle = {Proc. Interspeech 2021},
pages = {2277--2281},
address = {Brno, Czech Republic},
abstract = {Automatic speech-to-speech translation (S2S) is one of the most challenging speech and language processing tasks, especially when considering its application to real-time settings. Recent advances in streaming Automatic Speech Recognition (ASR), simultaneous Machine Translation (MT) and incremental neural Text-To-Speech (TTS) make it possible to develop real-time cascade S2S systems with greatly improved accuracy. On the way to simultaneous machine interpretation, a state-of-the-art cascade streaming S2S system is described and empirically assessed in the simultaneous interpretation of European Parliament debates. We pay particular attention to the TTS component, particularly in terms of speech naturalness under a variety of response-time settings, as well as in terms of speaker similarity for its cross-lingual voice cloning capabilities.},
keywords = {cross-lingual voice cloning, incremental text-to-speech, simultaneous machine interpretation, speech-to-speech translation},
pubstate = {published},
tppubtype = {inproceedings}
}