
Google's latest Translate tool turns speech in one language directly into speech in another


Google Translate can now listen to speech in one language and produce an audio translation in the original speaker’s voice

  • The tool is able to convert language without the need for a text-based process
  • It also preserves the person’s original voice in the audio clip of the new language 
  • Currently, Google Translate’s system uses automatic speech recognition
  • This transcribes speech as text, which is then translated and synthesized as speech
  • Now it can directly translate speech from one language into speech in another language, without relying on a text representation in either language

Google has announced a new translate tool which converts one language into another while preserving the speaker’s original voice.

The tech giant’s new system works without first converting the speech to text.

The first of its kind, the tool is able to do this while retaining the voice of the original speaker and making it sound ‘more realistic’, the tech giant said. 

Google claims the system, dubbed ‘Translatotron’, will be able to retain the voice of the original speaker after translation while also understanding words better. 

Scroll down for video 

Google has announced that their new translate tool will convert one language into another without the intermediate text-based process. The first of its kind tool is able to do this while retaining the voice of the original speaker and making it sound ‘more realistic’

It can directly translate speech from one language into speech in another language, without relying on the intermediate text representation in either language, as is required in cascaded systems. 

‘Translatotron is the first end-to-end model that can directly translate speech from one language into speech in another language,’ Google wrote in a blog post.

Currently, Google Translate’s system uses three stages.

Automatic speech recognition, which transcribes speech as text; machine translation, which translates this text into another language; and text-to-speech synthesis, which uses this text to generate speech. 
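The three-stage cascade described above can be sketched as a simple chain of functions. This is only a toy illustration: the stage functions and the word table below are hypothetical stand-ins, not Google's actual models, which are trained neural networks for each stage.

```python
# Toy sketch of the cascaded (three-stage) pipeline: automatic speech
# recognition -> machine translation -> text-to-speech synthesis.
# Each stage here is a stub standing in for a trained model.

def automatic_speech_recognition(audio):
    # A real ASR model would transcribe the waveform; for the demo we
    # pretend the audio object already carries its transcript.
    return audio["transcript"]

def machine_translation(text, word_table):
    # Word-for-word lookup stands in for a trained translation model.
    return " ".join(word_table.get(w, w) for w in text.split())

def text_to_speech(text):
    # A real TTS model would synthesize a waveform from the text.
    return {"synthesized_from": text}

# Tiny Spanish -> English table, purely for illustration.
TABLE = {"hola": "hello", "mundo": "world"}

def cascaded_translate(audio):
    transcript = automatic_speech_recognition(audio)   # stage 1
    translated = machine_translation(transcript, TABLE)  # stage 2
    return text_to_speech(translated)                  # stage 3

result = cascaded_translate({"transcript": "hola mundo"})
print(result["synthesized_from"])  # hello world
```

Because each stage only sees the previous stage's output, an error made early in the chain propagates through the rest, which is one motivation for collapsing the pipeline into a single model.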

The tech giant now says it will use a single model without the need for text.  

‘This system avoids dividing the task into separate stages,’ the blog post by Google AI software engineers Ye Jia and Ron Weiss said. 

According to the company, this will mean faster translation speed and fewer errors. 

The system retains the speaker’s voice by using spectrograms, a visual representation of the soundwaves, as its input.

HOW DOES ‘TRANSLATOTRON’ WORK?

Translatotron is based on a sequence-to-sequence network which takes source spectrograms, a visual representation of the soundwaves, as input and generates spectrograms of the translated content in the target language.

It also makes use of two other separately trained components: a neural vocoder that converts output spectrograms to waveforms, and, optionally, a speaker encoder that can be used to maintain the character of the source speaker’s voice in the synthesized translated speech. 

During training, the sequence-to-sequence model uses a multitask objective to predict source and target transcripts at the same time as generating target spectrograms. 

However, no transcripts or other intermediate text representations are used during inference.
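The spectrogram input the model works from can be illustrated in a few lines of NumPy: slice a waveform into frames and keep the FFT magnitude of each frame. This is a minimal sketch of what a spectrogram is, not Translatotron's actual feature pipeline, which Google has not detailed in the post.

```python
import numpy as np

# Minimal magnitude spectrogram: frame the waveform, then take the
# FFT magnitude of each frame. Real systems typically add windowing,
# frame overlap and a mel filterbank on top of this.

def spectrogram(waveform, frame_len=256):
    n_frames = len(waveform) // frame_len
    frames = waveform[: n_frames * frame_len].reshape(n_frames, frame_len)
    return np.abs(np.fft.rfft(frames, axis=1))  # shape: (frames, freq bins)

# One second of a 1 kHz tone sampled at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1000 * t)

spec = spectrogram(tone)
print(spec.shape)  # (62, 129): 62 time frames, 129 frequency bins

# The loudest frequency bin should sit at 1 kHz:
peak_bin = int(spec[0].argmax())
print(peak_bin * sr / 256)  # 1000.0
```

Each row of the resulting array is a snapshot of which frequencies are present at that moment, which is the ‘visual representation of the soundwaves’ the article refers to.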

The system retains the speaker’s voice by using spectrograms, a visual representation of the soundwaves, as its input. It then generates these spectrograms, also relying on a neural vocoder and a speaker encoder, meaning the speaker’s voice stays the same once translated

It then generates these spectrograms, also relying on a neural vocoder and a speaker encoder, meaning the speaker’s vocal characteristics stay the same once translated. 

Google admitted that the system needs refining through further training of the algorithm. 

Sound clips published in the post were more ‘realistic’ than a machine voice, but still unmistakably computer-generated.  


