
I’m a security expert – Android, iPhone users warned they ‘can’t trust their ears’ as eerie AI call raids banks


CYBERSECURITY experts are warning billions of Android and iPhone users that they may not be able to trust their own ears, as scammers use AI-cloned voices to try to raid their bank accounts. 

As artificial intelligence continues to develop, Russian cybersecurity and anti-virus software provider Kaspersky Lab is warning people about scammers using deepfake technology in phone calls. 

Cybersecurity experts are warning smartphone users of scammers using voice deepfakes. Credit: Getty
The scams use fake audio recordings in an attempt to steal money and personal data. Credit: Getty
The technology compresses two recordings together. Credit: Niral Shah/Stanford/K. Qian, Y. Zhang, S. Chang, et al

In a recent blog post, the cybersecurity company highlighted voice deepfakes, also known as “voice cloning” or “voice conversion.” 

According to the company, this technology is based on “autoencoders,” neural networks that compress input data into a compact internal representation and then learn to decompress it, restoring the original data. 
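To make that compress-and-restore idea concrete, below is a minimal sketch of an autoencoder. PyTorch and all of the layer sizes here are illustrative assumptions; Kaspersky's post does not name a framework or a specific model.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Toy autoencoder: squeeze the input through a small bottleneck,
    then learn to reconstruct the original from that compact code."""

    def __init__(self, input_dim=80, bottleneck_dim=16):
        super().__init__()
        # Compress input data into a compact internal representation
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 64), nn.ReLU(),
            nn.Linear(64, bottleneck_dim),
        )
        # Learn to decompress it back, restoring the original data
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck_dim, 64), nn.ReLU(),
            nn.Linear(64, input_dim),
        )

    def forward(self, x):
        code = self.encoder(x)      # compact representation
        return self.decoder(code)   # attempted reconstruction

model = AutoEncoder()
frames = torch.randn(8, 80)  # e.g. eight spectrogram frames of audio
loss = nn.functional.mse_loss(model(frames), frames)  # reconstruction error
```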

In practice, the AI program is first given two audio recordings: one containing the words to be spoken, and the other containing the voice the scammer wants to imitate. 

Next, the system determines what was said in the first recording and how the voice in the second recording sounds, including its inflections and accent. 


Then, the system combines the two compressed representations to generate the voice from the second recording saying the words from the first. 
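Putting those steps together, a hedged sketch of the whole pipeline might look like this. Every name below is hypothetical and the architecture is deliberately simplified; published voice-conversion models such as AutoVC follow this two-encoder pattern with far more machinery, plus a vocoder to turn the output frames back into audio.

```python
import torch
import torch.nn as nn

# Hypothetical components for illustration only. Inputs are
# (batch, frames, 80) spectrogram tensors.
content_encoder = nn.GRU(input_size=80, hidden_size=64, batch_first=True)
speaker_encoder = nn.GRU(input_size=80, hidden_size=32, batch_first=True)
decoder = nn.Linear(64 + 32, 80)  # combined codes -> output spectrogram frames

def convert(source_spec, target_spec):
    """Generate the words of `source_spec` in the voice of `target_spec`."""
    content, _ = content_encoder(source_spec)   # WHAT was said, frame by frame
    _, speaker = speaker_encoder(target_spec)   # HOW the target voice sounds
    speaker = speaker[-1]                       # one code vector per utterance
    # Attach the speaker code to every content frame, then decode
    speaker = speaker.unsqueeze(1).expand(-1, content.size(1), -1)
    return decoder(torch.cat([content, speaker], dim=-1))

fake = convert(torch.randn(1, 120, 80), torch.randn(1, 200, 80))
print(fake.shape)  # torch.Size([1, 120, 80]): frames for a vocoder to voice
```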

While this technology might seem harmless to some, or even the makings of a good prank, it can be very dangerous in the wrong hands. 


Kaspersky Lab detailed that scammers have been using this technology for years to target companies and individuals worldwide. 

In 2019, for example, criminals used AI software to create a fraudulent money transfer request supposedly from the chief executive officer of an energy firm in the United Kingdom. 

Not only did the scammers use the technology to make the initial request over the phone, but they also faked two additional calls to confirm the first transfer and request a second. 

Because the AI-generated voice mimicked the CEO’s accent, staff were not initially suspicious. 

There are a number of actions smartphone users can take to protect themselves from these types of scams. 

If you suspect that a phone call you received is AI-generated or a deepfake, Kaspersky Lab suggests listening for the following signs (a rough automated check for the first is sketched after the list): 

  • Is the voice monotone?
  • Is the voice unintelligible?
  • Are there strange noises in the background or throughout the call?
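
As a very rough illustration of the first sign, the sketch below estimates how flat a recording's pitch is using the open-source librosa library. The threshold is an arbitrary assumption, and low pitch variance is only a weak hint, never proof of a deepfake.

```python
import librosa
import numpy as np

def sounds_monotone(path, flat_threshold_hz=15.0):
    """Crude heuristic: flag recordings whose pitch barely varies.
    `flat_threshold_hz` is an arbitrary, illustrative cutoff."""
    y, sr = librosa.load(path, sr=16000)
    # pyin estimates the fundamental frequency (pitch) of each frame
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )
    spread = np.nanstd(f0)  # unvoiced frames are NaN, so ignore them
    print(f"Pitch standard deviation: {spread:.1f} Hz")
    return spread < flat_threshold_hz

# Example with a hypothetical file name:
# if sounds_monotone("suspicious_call.wav"):
#     print("Unusually flat voice: treat this call with caution")
```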

Additionally, the cybersecurity company recommends double-checking information through other channels, such as confirming a money transfer request with the supposed sender in person. 

If the person on the call claims to be from a particular company, always do independent research to verify their claims. 

Finally, Kaspersky Lab recommends people stay calm if they are suspicious, saying: “Remember that surprise and panic are what scammers rely on most.”


