The MLLP has released version 2.1 of the transLectures-UPV Platform (TLP) for the integration of automated transcription and translation technologies into media repositories.
The transLectures-UPV Platform (TLP) is an open-source software platform designed to integrate automated transcription and translation technologies into a media repository. It is developed by the MLLP research group at the Universitat Politècnica de València (UPV), and its main components are the TLP Database, Web Service, Ingest Service and Player. The MLLP is releasing TLP version 2.1, now available for download from the TLP page.
TLP has come a long way since its first releases in 2014. Some of the most important changes and improvements include: improved user management and authentication; support for user groups; improved collaborative editing functionalities; new advanced subtitle editing functionalities; new and improved media upload options; support for text-to-speech; and a much more powerful API, with new tools for API interaction.
A complete description of the functionalities offered by TLP 2.1 can be found in its documentation.
The most direct way to try what TLP offers is the MLLP transcription and translation platform, which integrates our own software TLP and TLK (the transLectures-UPV toolkit for speech recognition). Just open a trial account and start uploading media files to try our automatic subtitling technology; you can even integrate it with your own media repository.
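For repository integration, interaction with the platform's web service happens over HTTP. As a rough illustration only, the sketch below builds the kind of request a repository might send to upload a media file for automatic subtitling; the endpoint path, parameter names, and API-key scheme here are hypothetical assumptions, not the actual TLP API (see the TLP documentation for the real interface).

```python
# Hypothetical sketch of a media-upload call to a TLP-like web service.
# The endpoint ("ingest/new"), parameter names ("id", "lang", "api_key"),
# and base URL are illustrative assumptions, not the documented TLP API.
import urllib.parse


def build_upload_url(base_url: str, media_id: str, lang: str, api_key: str) -> str:
    """Build the request URL for a hypothetical media-ingest endpoint."""
    query = urllib.parse.urlencode(
        {"id": media_id, "lang": lang, "api_key": api_key}
    )
    return f"{base_url.rstrip('/')}/ingest/new?{query}"


# A repository-side integration would then POST the media file to this URL
# and later poll for the generated subtitles.
url = build_upload_url("https://example.org/api", "lecture-001", "es", "MY_KEY")
```

In a real integration, the request would carry the media file itself (e.g. as a multipart POST body) and the service would respond with an identifier for retrieving the transcription once processing finishes.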
The MLLP’s TLP and TLK are also being used as the basis for automatic subtitle generation in the educational media repositories of our own Universitat Politècnica de València (Polimedia) and of the Carlos III University of Madrid. So far, our technology has been used to generate subtitles for over 16,000 media files (more than 3,300 hours in total).
So download TLP 2.1 or try it in the MLLP transcription and translation platform. We’ll keep you updated on new developments and releases of our software; just keep checking our website and follow us on Twitter @mllpresearch!