When she heard her daughter's cries for help over the phone, the panicked mother nearly paid the "kidnapper" a ransom. But the kidnapper was a fraud, and the daughter's voice on the line was an AI clone. With the latest AI technology, scammers need only three seconds of someone's voice to replicate it almost exactly, dialect and tone included.
While people were still marveling at how convincingly AI can fake paintings, scammers had already put AI voice imitation to work. In Canada, at least eight victims were cheated out of about 1 million yuan in just three days by scammers imitating the voices of their loved ones pleading for emergency help, clones so convincing that not even a mother could hear the difference.
With AI, a scruffy man behind a keyboard can become a seductress in seconds, strangers can effortlessly impersonate your friends and relatives, and the moment money changes hands, you lose.
From blockchain to AI, whenever a new technology arrives, fraudsters are the first to charge the frontier. Ordinary people find these fakes ever harder to guard against, and the technology giants who opened this Pandora's box cannot simply keep running with their eyes covered; they must fight magic with magic.
AI voice-imitation scams emerge: eight people in Canada cheated out of about 1 million yuan
"Mom! Help me!" The cry of her daughter came from the other end of the phone, and the mother panicked at once. Before you can get over it, you can hear an unfamiliar man on the phone yelling, "Tilt your head back and lie down. The next strange man spoke, "Your daughter is here with me, you dare to call the police or notify anyone, I will stun her and take her to Mexico."
Over the line, the frantic mother could faintly hear her daughter crying, "Mom, please help me, help me." The "kidnapper" then named his terms: $1 million for her release. When the mother said she had no such sum, he revised the demand: "$50,000 for your daughter."
This scene actually played out recently in Arizona. According to NBC, it was an AI voice-imitation scam: the girl's voice on the other end of the line was, in fact, an AI clone. Fortunately, the mother calmed down and reached her husband, who confirmed their daughter was safe, and she escaped the scam.
Recalling the details of the call afterwards, the mother felt a wave of fear: even once it was clear this had been a phone scam, she realized she had never for a moment doubted her daughter's voice.
At a time when artificial intelligence is accelerating technological change, criminals are wielding AI like magic, committing fraud with the most advanced tools available. How advanced? According to Subbarao Kambhampati, a computer science professor at Arizona State University who specializes in AI, a voice can be replicated almost exactly from just three seconds of original audio.
Many people have already been cheated. On April 2, CCTV aired a video report on Canadian criminals who used AI to synthesize the voices of victims' loved ones; at least eight people fell for the scam within three days, losing about 1 million yuan in total. Most of the victims were elderly, and their feedback was that "the voice on the phone was simply identical to my loved one's."
Even the "voice print" system, which was previously considered secure, could be easily cracked. In March, Guardian Australia reporter Nick Everside said he used artificial intelligence to make his own voice and successfully accessed his Centrelink self-service account. And just two years ago, the Australian Service claimed in a report that "voice prints are as secure as fingerprints".
AI is evolving at breakneck speed. Back in 2019, the Wall Street Journal reported on a scammer who used AI voice synthesis to impersonate a company boss and trick the managing director of a subsidiary into transferring 220,000 euros. At the time, though, the scammer still had to assemble a corpus of the impersonated person's speech for training and polishing. Now AIGC has made voice cloning all but effortless, and even dialect and intonation pose no problem.
For now, China has seen no large-scale cases of AI voice-imitation fraud, but scams using AI-generated fake images have already occurred repeatedly.
In February this year, news of a "yacht maid party on Suzhou's Jinji Lake" circulated online. The promotional material featured photos of women in maid costumes, and the organizer openly quoted a price of 3,000 yuan per attendee. The brazenly advertised risqué party soon drew the internet police, who subsequently disclosed that the maid photos were essentially all AI-generated.
After many such incidents, netizens have gradually learned to be wary: a picture is not necessarily proof. Nowadays, even audio, even video, cannot be taken at face value.
Anti-forgery technology can't keep up with AI's "high imitations"; fraud prevention needs an upgrade
Asked "What types of AI fraud are there?", ChatGPT offered several answers: using AI to generate fake websites and social media accounts that impersonate legitimate organizations and individuals to harvest information and money; using AI-generated synthetic voices to commit wire fraud; applying deep learning to victims' personal information to craft personalized scams; and so on.
But even ChatGPT underestimates scammers. For fraudsters running elaborate schemes, AI serves as an underlying layer, spawning an endless stream of new fraud methods.
Consider "pig-butchering" scams augmented with AI, which are no less harmful. Such scams abound on dating and matchmaking sites: crooks fabricate identities and personas to get close to victims, win their trust, and then take their money.
In the past, scammers would steal photos from the internet to disguise themselves, and a suspicious victim could expose them with a reverse image search. But when scammers generate their pictures with AI, every image is one of a kind. Worse, with AI drawing, a scammer can even produce photos of the persona making whatever gesture the victim requests, making the disguise all the more convincing.
On social media, it has likewise become easier for a scruffy man to pose as a beautiful woman for money. Some AIGC applications can generate seductive, explicit photos and videos to lure users into nude chats or into paying for more intimate material. In the past, producing such visuals took real effort; now it is all handed to AI, and once someone steps into the trap, the scammers reel them in.
Scarier still, with many AI tools now open-sourced, the cost of and barriers to committing fraud keep dropping, and tutorials sit freely on the web. Their publishers intend them for learning, but they cannot stop unscrupulous people from turning them to perverse ends.
As AI technology evolves, what people hear and see will become ever less trustworthy. Avoiding scams will take more than knowing the tricks and staying alert; it will also take advanced technology to defeat magic with magic.
Facing the harm of deepfakes, Google began mounting a deliberate defense as early as 2019. It first recruited a group of actors to record videos, then applied publicly available deepfake techniques such as face-swapping to the footage, and analyzed the data comprehensively so that developers could study the algorithms and logic behind deepfake technology and, on a "know yourself and know your enemy" basis, identify and block deepfake content more accurately.
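To make the approach concrete, here is a minimal sketch of how a corpus of paired real and face-swapped videos can be put to work: fine-tuning an off-the-shelf image model to classify frames as real or fake. This is only an illustration, not Google's actual pipeline; the folder layout, model choice, and hyperparameters below are all assumptions.

```python
# Illustrative sketch: fine-tune a stock CNN as a binary real/fake
# frame classifier. The "frames/real" and "frames/fake" folder layout
# and all hyperparameters are assumptions, not any vendor's pipeline.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# ImageFolder infers the two classes from the subfolder names.
data = datasets.ImageFolder("frames", transform=tf)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

# Start from ImageNet weights and swap in a two-class head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```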
In 2020, Microsoft launched a tool called "Video Authenticator," which analyzes a video's blending boundaries and grayscale elements frame by frame in real time and produces a confidence score to help users judge whether the content is authentic.
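Microsoft has not published Video Authenticator's internals, so the sketch below is only a toy stand-in for the general idea of scoring frames by their boundary artifacts: it uses plain OpenCV gradient statistics, whereas a real detector relies on trained models. The scoring heuristic here is purely an assumption for illustration.

```python
# Toy per-frame "boundary strength" scorer, NOT Microsoft's method.
import cv2
import numpy as np

def frame_scores(video_path: str) -> list[float]:
    """Return one score per frame based on grayscale gradient spread.

    Face-swap blending can leave subtle seams; a crude proxy is the
    spread of the Laplacian response, which reacts to sharp edges.
    """
    cap = cv2.VideoCapture(video_path)
    scores = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        lap = cv2.Laplacian(gray, cv2.CV_64F)
        scores.append(float(np.std(lap)))
    cap.release()
    return scores
```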
Not long ago, OpenAI also released a classifier for AI-generated text, but by the numbers in its own blog post the detector correctly flags only 26% of AI-written text, and it performs markedly worse in languages other than English. Measured against ChatGPT's power, the company's work on detecting fakes does not yet run deep.
With AI cutting both ways, the battle between good and evil never ends. It is sobering that in 2018, the State University of New York developed an AI forensic tool that identified forged faces by predicting whether the subject blinks, with a claimed accuracy of 99%; the tool soon failed because deepfake technology simply evolved past it.
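The blink-based idea itself is easy to illustrate. A common building block is the eye aspect ratio (EAR) computed from facial landmarks: it collapses toward zero when the eye closes, so a clip in which the EAR never dips is suspicious, since early deepfakes were trained mostly on open-eyed photos and rarely blinked. The sketch below assumes six (x, y) landmarks per eye in the usual dlib ordering; the SUNY tool used a trained recurrent model, so treat this as a simplified stand-in.

```python
# Simplified blink check, not the SUNY tool. Assumes a landmark
# detector (e.g. dlib) already supplies 6 points per eye per frame.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|) for a (6, 2) array."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical distance 1
    v2 = np.linalg.norm(eye[2] - eye[4])  # vertical distance 2
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
    return (v1 + v2) / (2.0 * h)

def blinks_per_minute(ear_series: list[float], fps: float,
                      threshold: float = 0.2) -> float:
    """Count open-to-closed transitions and normalize by clip length."""
    closed = [e < threshold for e in ear_series]
    blinks = sum(1 for a, b in zip(closed, closed[1:]) if not a and b)
    minutes = len(ear_series) / fps / 60.0
    return blinks / minutes if minutes > 0 else 0.0
```

A clip whose blinks-per-minute falls far below a human baseline (very roughly 15 to 20) would have been flagged; newer deepfakes blink convincingly, which is exactly why such single-cue detectors age so quickly.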
As the technology giants open Pandora's box, the world needs them to hold the chains that keep the devil in check.