Did you just say that?

April 2, 2024

OpenAI's new Voice Engine proves once again that when they do something, they do it right: the sample audios provided are convincing.

The technology itself is not new. Some of you may have tried voice cloning with ElevenLabs or heard examples of it; hearing yourself speak a language you don't even know is amazing. However, ElevenLabs requires a recording of at least 60 seconds, preferably several minutes, to create a similar-sounding clone. Voice Engine reportedly gets by with around 15 seconds.

More than just voices

Beyond communication across language barriers, the application scenarios include several that fall under the heading of "AI for good": reading assistance for people who cannot read, whether children or adults, and therapeutic applications for people with communication deficits, to name just two examples.

However, the new voice model carries risks, especially with regard to deepfakes. OpenAI is aware of this, which is why Voice Engine has been kept under wraps until now. But for how long? And won't someone else release a similarly powerful tool if OpenAI doesn't?

Deepfakes are becoming ever easier to create; we have to keep that in mind whenever we deal with information. But voice clones can also intrude on direct, personal contact. Will we perhaps need code words for our private conversations in the future, much like passwords for digital services?
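To make the password analogy concrete, here is a purely hypothetical sketch in Python of how a shared code word could be handled like a password: never stored in plain text, only as a salted hash that later attempts are checked against. The function names, the code word, and the parameters are illustrative assumptions, not any existing scheme.

```python
import hashlib
import hmac
import os

# Hypothetical sketch: a spoken code word treated like a password.
# Only a salted hash is kept, just as services store password hashes.

def store_code_word(code_word: str) -> tuple[bytes, bytes]:
    """Derive a salted hash of the shared code word for later checks."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", code_word.encode(), salt, 100_000)
    return salt, digest

def verify_code_word(attempt: str, salt: bytes, digest: bytes) -> bool:
    """Compare an attempt against the stored hash in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = store_code_word("blue heron at dawn")  # made-up code word
print(verify_code_word("blue heron at dawn", salt, digest))  # True
print(verify_code_word("grey heron at dusk", salt, digest))  # False
```

Of course, in a live phone call the check would happen in our heads rather than in code, but the principle is the same: the secret proves identity, and a voice clone alone cannot supply it.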

One thing is clear: no one can avoid generative AI. Even those who do not yet use it themselves are just as exposed to its risks as everyone else. It's high time to get to grips with it.
