
AI Voice Cloning Tech Stirs Confusion Amid Sudan Civil War



Amid Sudan's civil war, a TikTok campaign impersonating Omar al-Bashir, the country's former president, has emerged through voice cloning technology. The AI voice clone has garnered hundreds of millions of views, adding online confusion to a country already being torn apart by worsening violence.

The campaign began in late August, when an anonymous account started posting phony recordings claimed to be the voice of the former president.

Omar al-Bashir, who is charged with war crimes, has not been seen in public for a year and is reported to be in critical condition. His whereabouts remain a mystery even as the anonymous TikTok account publishes recordings in his name. Although the voice is eerily similar to Bashir's, his deteriorating health raises doubts about the tapes' authenticity. That uncertainty makes the situation in Sudan, a country already gripped by civil conflict, even worse.

This development is an example of how technology is being used in contemporary warfare. Artificial intelligence (AI) can produce realistic synthetic voices, which are then deployed for a variety of purposes in the ongoing war. The potential effects are wide-ranging, including disinformation campaigns and psychological warfare.

Investigations into the legitimacy of the leaked recordings impersonating Omar al-Bashir have uncovered the use of voice conversion software. Analysis of the audio waveforms revealed matching patterns of speech and silence, indicating that Bashir's voice had been imitated with voice cloning technology. This use of voice cloning in the context of the Sudanese civil war raises concerns about the manipulation of audio content and its potential effect on how the public perceives and understands the conflict.
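As a rough illustration of that kind of analysis, the sketch below reduces two recordings to their patterns of speech and silence and measures how closely the patterns align; near-identical timing across supposedly independent recordings suggests one was derived from the other. It assumes the librosa audio library, and the file names and energy threshold are illustrative placeholders, not details from the actual investigation.

```python
# A minimal sketch of speech/silence pattern comparison, assuming librosa.
import librosa
import numpy as np

def speech_silence_mask(path, sr=16000, hop=512, threshold_db=-35.0):
    """Return a per-frame boolean mask: True where the frame contains speech."""
    y, _ = librosa.load(path, sr=sr)
    rms = librosa.feature.rms(y=y, hop_length=hop)[0]   # frame-level energy
    db = librosa.amplitude_to_db(rms, ref=np.max)       # loudness relative to peak
    return db > threshold_db                            # speech vs. silence

def pattern_agreement(path_a, path_b):
    """Fraction of frames where both clips are speaking or both are silent."""
    a, b = speech_silence_mask(path_a), speech_silence_mask(path_b)
    n = min(len(a), len(b))                             # compare the overlap only
    return np.mean(a[:n] == b[:n])

# Illustrative file names: a leaked clip and a candidate source recording.
score = pattern_agreement("leaked_clip.wav", "source_broadcast.wav")
print(f"speech/silence agreement: {score:.2%}")
```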

Experts warn that this is a clear example of how quickly bogus information can spread over social media with the help of modern technologies like AI voice cloning. The mystery surrounding Bashir's location makes an already turbulent situation even more precarious as clashes between the military and numerous militia groups continue.


The Bright Side

Cloned voices can convey a wide spectrum of emotions, from fear and rage to love and boredom. This is a far cry from the artificial speech of the past, which was robotic, rigid, unnatural, and obviously machine-generated.

Because voice cloning can convert an actor's lines into several languages, film production companies may no longer need to engage foreign-language performers to produce versions of their films for other countries.

The greatest positive impact may be in medicine, where the technology can aid people with speech impairments. Imagine producing artificial voices for individuals who need help speaking. Or consider a person with throat cancer who must have their larynx removed but can record their voice before the operation to produce a voice clone that sounds like them.

The Downside

It should come as no surprise that this technology holds great potential for abuse by cybercriminals. After the failure of Silicon Valley Bank in March 2023, a phony audio clip purporting to capture U.S. President Joe Biden was released, in which he instructs his team to "use the full force of the media to calm the public." Fact-checkers eventually exposed the clip as a fabrication, but by then the audio had already been heard by millions of people and was well on its way to spreading fear.

AI voice generators can be used to impersonate not only famous people and figures of authority but also ordinary individuals. In vishing (voice phishing) attacks, online fraudsters pose as regular people to deceive their victims.

Major Players

VoiceCopy is far from the only voice cloning program available. Many others exist, including Voice.ai, Speechify, Resemble AI, Play.ht, ElevenLabs, and Murf AI.
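For a sense of how such services are typically driven, the hedged sketch below makes a single HTTP call that turns text into speech using a chosen (and possibly cloned) voice. The endpoint shape follows ElevenLabs' public text-to-speech API; the API key and voice ID are placeholders, not real credentials.

```python
# A minimal sketch of calling a cloud text-to-speech / voice-cloning API.
# Endpoint shape per ElevenLabs' public API; key and voice ID are placeholders.
import requests

API_KEY = "YOUR_API_KEY"        # placeholder credential
VOICE_ID = "YOUR_VOICE_ID"      # placeholder: ID of a stock or cloned voice

resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={"text": "Hello from a synthetic voice."},
    timeout=30,
)
resp.raise_for_status()

with open("output.mp3", "wb") as f:  # the API responds with raw audio bytes
    f.write(resp.content)
```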

Additionally, Samsung has brought AI voice cloning to Bixby, its voice assistant. Users of the feature can clone their voice for use when taking phone calls: they type a message, and the cloned voice speaks it to the person on the other end of the line.

Samsung is not the only company embracing voice cloning for mobile devices. With its upcoming iOS 17 update, Apple will roll out Personal Voice, an accessibility feature built on the same concept.

As these technologies develop, some protections should be put in place around voice cloning:

It is fairly simple to clone voices: con artists can build their clones from audio samples in the public domain and use them to con your friends. To protect yourself and the people you know, do not share audio snippets on open platforms.

Be wary if you receive a call from an unknown number, especially if the caller asks for money or personal information, even if you believe the voice belongs to a family member or acquaintance. To confirm the person's identity, ask a few questions that only the two of you could answer.

Opt in and Opt out Procedures

Facial recognition technology is frequently used at airport checkpoints to verify that the person presenting matches the license photo whose name also appears on the boarding pass.

These checkpoints display very clear signs informing people that their biometric data is being collected, what it will be used for, where and how it will be stored, and what alternatives are available if they do not wish to consent. The same opt-in/opt-out consent processes that have become customary for facial recognition must be provided whenever there is an attempt or intention to record a person's voice. Only then can people retain control over their unique biological identifiers.
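To make that concrete, here is a minimal sketch of what a stored consent record could capture, mirroring the disclosures described above: what is collected, its purpose, how it is stored, and the alternative offered. The field names are illustrative assumptions, not an established standard.

```python
# An illustrative consent record for voice-data collection; field names are
# assumptions for this sketch, not a regulatory or industry standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class VoiceConsentRecord:
    subject_id: str             # who is being recorded
    purpose: str                # what the voice data will be used for
    storage: str                # where and how it will be preserved
    opted_in: bool              # the explicit opt-in / opt-out decision
    alternative_offered: str    # the fallback for those who decline
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = VoiceConsentRecord(
    subject_id="traveler-1042",
    purpose="voice-based identity verification at a checkpoint",
    storage="encrypted at rest; deleted after 24 hours",
    opted_in=False,
    alternative_offered="manual document check by an agent",
)
print(record)
```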

Multi-factor authentication is not a perfect security measure: it can complicate user authentication, and texts sent to cell phones can still be intercepted. It can, however, add a second level of verification for firms that use voice recognition as a biometric authentication method.
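As an example of that layering, the sketch below combines a voice check with a time-based one-time password (TOTP) using the pyotp library. The voice-matching function is a stand-in placeholder for a real speaker-verification model.

```python
# A minimal sketch of two-factor authentication layered on voice biometrics.
# The voice check is a placeholder; the second factor uses pyotp's TOTP codes.
import pyotp

def voice_matches(sample: bytes, enrolled_print: bytes) -> bool:
    """Placeholder: a real system would run a speaker-verification model here."""
    return sample == enrolled_print  # stand-in comparison for this sketch

def authenticate(sample: bytes, enrolled_print: bytes,
                 totp_secret: str, submitted_code: str) -> bool:
    # Factor 1: does the voice match the enrolled voiceprint?
    if not voice_matches(sample, enrolled_print):
        return False
    # Factor 2: even a perfect voice clone fails without the current code.
    return pyotp.TOTP(totp_secret).verify(submitted_code)

# Enrollment: generate a per-user secret and share it with the user's
# authenticator app; at login, verify the code it displays.
secret = pyotp.random_base32()
print("current code:", pyotp.TOTP(secret).now())
```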
