Robocalls can be intensely annoying. They can also be highly effective at convincing people to part with their money. If they were not, the companies behind them simply would not bother making them.
AI has exacerbated the problem. As in every other area of life, people are only beginning to understand its potential, for better and for worse, and it is likely to become an even more potent tool for those who know how to use it. Unfortunately, that includes many who will use it for unscrupulous ends.
There are already plenty of examples of fraudsters using AI-generated voice messages to scam people out of money. A scammer might, for instance, ring someone up pretending to be their child or a friend in desperate need of cash. There have also been numerous cases of AI robocalls being used to dissuade certain groups of people from voting.
The Federal Communications Commission is considering a proposal to criminalize AI-generated robocalls. If passed, the rule would give state attorneys general the power to pursue those who use these calls. It would be an important addition to existing laws, such as those requiring telemarketers to obtain approval before calling someone.
People seeking to defraud you won’t always obey the law
Fraud is a crime, but it can be highly rewarding, so plenty of criminals are willing to ignore the laws that restrict them or look for ways around them.
If you face problems resulting from a robocall you’ve received, you may not be able to tell whether it was made by a human or generated by AI. Either way, with appropriate legal guidance, you may be able to take action to resolve any issues it has caused you.