- Mar 9, 2019
If deepfakes were a disease, this would be a pandemic. Artificial Intelligence (AI) now generates deepfake voices at a scale and quality that have bridged the uncanny valley.
Fraud is increasingly fueled by voice deepfakes. An analysis by Pindrop, using a 'liveness detection tool', examined 130 million calls in Q4 2024 and found a 173% increase in the use of synthetic voice compared to Q1. This growth is expected to continue as AI models like Respeecher (used legitimately in movies, video games, and documentaries) become able to change pitch, timbre, and accent in real time, effectively adding emotion to a mechanically produced voice. Synthesized voice has successfully crossed the so-called uncanny valley.
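Production systems like Respeecher rely on neural voice conversion, but the idea of "changing pitch" can be illustrated at the signal level. The sketch below (hypothetical helper names, a toy resampling approach, not how real voice changers preserve duration and timbre) shifts the pitch of a pure tone and verifies the shift by estimating the dominant frequency:

```python
import math

SAMPLE_RATE = 16_000  # samples per second

def sine_wave(freq_hz, duration_s, rate=SAMPLE_RATE):
    """Generate a pure tone as a list of float samples."""
    n = int(duration_s * rate)
    return [math.sin(2 * math.pi * freq_hz * i / rate) for i in range(n)]

def pitch_shift(samples, factor):
    """Naive pitch shift: resample by `factor` with linear interpolation.
    factor > 1 raises pitch but also shortens the clip; real voice
    changers use phase vocoders or neural vocoders to avoid that."""
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += factor
    return out

def dominant_freq(samples, rate=SAMPLE_RATE):
    """Estimate frequency by counting upward zero crossings
    (adequate for a pure tone, not for real speech)."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)
    return crossings * rate / len(samples)
```

For example, shifting a 220 Hz tone by a factor of 1.5 yields a tone whose estimated dominant frequency is about 330 Hz. Real speech is far harder: shifting pitch this way also speeds up the audio and distorts timbre, which is precisely the problem neural voice-conversion models solve.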
The 'uncanny valley', described in 1970 by Japanese robotics engineer Masahiro Mori, is the sharp dip in human affinity that occurs when an artificial likeness becomes almost, but not quite, human; affinity rises again as the likeness approaches full realism. Mori observed that movement accentuates the effect. He was writing about robots, but the observation applies equally to synthesized speech today. Deepfake synthesis has now reached the far side of that valley, where initial distrust gives way to active and increasing acceptance. At this quality, a human can no longer reliably distinguish a deepfake voice from a real one.
The AI Arms Race: Deepfake Generation vs. Detection
With voice deepfakes indistinguishable from real speech, fraud is spiking—and most organizations aren't prepared for the scale of deception.