US researchers have reportedly used OpenAI’s voice API to create an AI-powered phone fraud agent that can be used to drain victims’ crypto wallets and bank accounts.
The Register reported that computer scientists at the University of Illinois Urbana-Champaign (UIUC) used OpenAI’s GPT-4o model in conjunction with a number of other freely available tools to build an agent that, they claim, can autonomously perform the actions required for various phone-based scams.
Telephone scams, in which perpetrators pose as businesses or government organizations, target approximately 18 million Americans each year and cost around $40 billion, according to UIUC Assistant Professor Daniel Kang.
GPT-4o’s voice API allows users to send text or audio and have the model respond in kind. It also costs very little to run, which Kang says breaks down a major barrier to entry for fraudsters looking to steal personal information such as bank account details or Social Security numbers.
In fact, according to a paper co-authored by Kang, the average cost of a successful scam is just $0.75.
In the course of their research, the team ran a number of different experiments, including cryptocurrency transfers, gift card fraud, and user credential theft. The overall average success rate across the various scams was 36%, with most failures attributable to AI transcription errors.
“The design of our agent is not complex,” Kang says. “We implemented it in just 1,051 lines of code, with the majority of the code dedicated to processing the real-time audio API.
“This simplicity is consistent with previous research showing that dual-use AI agents can easily be created for tasks such as cybersecurity attacks.”
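For readers wondering what “processing the real-time audio API” involves: OpenAI’s Realtime API streams JSON events over a WebSocket, with audio passed back and forth as base64-encoded chunks. The sketch below is not the researchers’ agent; it is a minimal, benign illustration assuming the publicly documented event names (session.update, response.create, response.audio.delta) and the third-party `websockets` Python package.

```python
# Minimal sketch of a GPT-4o Realtime API session (illustrative, not the
# UIUC researchers' code). Assumes OPENAI_API_KEY is set in the environment.
import asyncio
import base64
import json
import os

import websockets  # pip install websockets

URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"
HEADERS = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "OpenAI-Beta": "realtime=v1",
}

async def main() -> None:
    # `extra_headers` is the argument name in websockets releases before
    # v14; newer releases call it `additional_headers`.
    async with websockets.connect(URL, extra_headers=HEADERS) as ws:
        # Configure the session; `instructions` acts as the system prompt.
        await ws.send(json.dumps({
            "type": "session.update",
            "session": {
                "modalities": ["text", "audio"],
                "instructions": "You are a helpful phone assistant.",
                "voice": "alloy",
            },
        }))
        # Ask the model to speak. A real voice agent would instead stream
        # caller audio in via input_audio_buffer.append/.commit events.
        await ws.send(json.dumps({
            "type": "response.create",
            "response": {"instructions": "Greet the caller."},
        }))
        audio_chunks = []
        async for message in ws:
            event = json.loads(message)
            if event["type"] == "response.audio.delta":
                # Audio arrives as base64-encoded PCM chunks.
                audio_chunks.append(base64.b64decode(event["delta"]))
            elif event["type"] == "response.done":
                break
        print(f"Received {len(audio_chunks)} audio chunks")

asyncio.run(main())
```

Even once reconnection logic, error handling, and telephony glue are added, it is easy to see how a complete agent of this kind fits in roughly a thousand lines, as Kang describes.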
He continued: “Voice fraud already causes billions of dollars in damages, and we need comprehensive solutions to reduce the impact of such fraud, including at the telephone provider level, the AI provider level (e.g. OpenAI), and the policy/regulatory level.”
The Register reported that OpenAI’s detection systems did flag the UIUC experiments, and that the company moved to reassure users, saying it “uses multiple layers of safeguards to reduce the risk of API abuse.”
It also warned: “Reusing or distributing output from the Service for purposes of spamming, misleading, or harming others is a violation of our Acceptable Use Policy, and we actively monitor potential misuse.”