@inproceedings{valor2015efficient,
title = {Efficient Generation of High-Quality Multilingual Subtitles for Video Lecture Repositories},
author = {Valor Miró, Juan Daniel and Silvestre-Cerdà, Joan Albert and Civera, Jorge and Turró, Carlos and Juan, Alfons},
url = {http://link.springer.com/chapter/10.1007/978-3-319-24258-3_44
http://www.mllp.upv.es/wp-content/uploads/2016/03/paper.pdf
},
isbn = {978-3-319-24258-3},
year = {2015},
date = {2015-09-17},
booktitle = {Proc. of 10th European Conf. on Technology Enhanced Learning (EC-TEL 2015)},
pages = {485--490},
address = {Toledo (Spain)},
abstract = {Video lectures are a valuable educational tool in higher education to support or replace face-to-face lectures in active learning strategies. In 2007 the Universitat Politècnica de València (UPV) implemented its video lecture capture system, resulting in a high-quality educational video repository, called poliMedia, with more than 10,000 mini lectures created by 1,373 lecturers. Also, in the framework of the European project transLectures, UPV has automatically generated transcriptions and translations in Spanish, Catalan and English for all videos included in the poliMedia video repository. transLectures' objective responds to the widely recognised need for subtitles to be provided with video lectures, as an essential service for non-native speakers and hearing-impaired persons, and to allow advanced repository functionalities. Although high-quality automatic transcriptions and translations were generated in transLectures, they were not error-free. For this reason, lecturers need to manually review video subtitles to guarantee the absence of errors. The aim of this study is to evaluate the efficiency of the manual review process from automatic subtitles in comparison with the conventional generation of video subtitles from scratch. The reported results clearly indicate the convenience of providing automatic subtitles as a first step in the generation of video subtitles, with significant time savings of up to almost 75% when reviewing subtitles.},
keywords = {Automatic Speech Recognition, Docencia en Red, Efficient video subtitling, Polimedia, Statistical machine translation, video lecture repositories},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{Brouns2015,
title = {Supporting language diversity of European MOOCs with the EMMA platform},
author = {Brouns, Francis and Serrano Martínez-Santos, Nicolás and Civera, Jorge and Kalz, Marco and Juan, Alfons},
url = {http://www.emoocs2015.eu/node/55},
year = {2015},
date = {2015-01-01},
booktitle = {Proc. of the European MOOC Stakeholder Summit EMOOCs 2015},
pages = {157--165},
address = {Mons (Belgium)},
abstract = {This paper introduces the cross-language support of the EMMA MOOC platform. Based on a discussion of language diversity in Europe, we introduce the development and evaluation of automated translation of texts and subtitling of videos from Dutch into English. The development of an Automatic Speech Recognition (ASR) system and a Statistical Machine Translation (SMT) system is described. The resources employed and the evaluation approach are introduced. Initial evaluation results are presented. Finally, we provide an outlook on future research and development.},
keywords = {Automatic Speech Recognition, EMMA, Statistical machine translation},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{Turró2012,
title = {transLectures: Transcription and Translation of Video Lectures},
author = {Turró, Carlos and Juan, Alfons and Civera, Jorge and Orlič, Davor and Jermol, Mitja},
url = {http://oro.open.ac.uk/id/eprint/33640
http://hdl.handle.net/10251/54166},
year = {2012},
date = {2012-01-01},
booktitle = {Proc. of Cambridge 2012: Innovation and Impact - Openly Collaborating to Enhance Education},
pages = {543--546},
address = {Cambridge (UK)},
abstract = {transLectures is an FP7 project aimed at developing innovative, cost-effective solutions to produce accurate transcriptions and translations in large repositories of video lectures. This paper describes user requirements, first integration steps and evaluation plans for the transLectures case studies, VideoLectures.NET and poliMedia.},
keywords = {Automatic Speech Recognition, Statistical machine translation},
pubstate = {published},
tppubtype = {inproceedings}
}