More and more companies are using artificial intelligence (AI) to detect fraud and scams, says the U.S. Federal Trade Commission (FTC).
For example, Google uses AI to detect online fraud and spam in search results and to filter spam, malware and phishing in Gmail.
Similarly, Microsoft uses artificial intelligence to filter phishing and spam in Outlook.
The FTC notes that representatives of cybersecurity companies claim AI tools can also "track patterns in fraudulent emails using large data sets and remove fraudulent emails from people's inboxes."
In addition, third-party vendors offer AI-based scam detection, such as tools to identify tax-fraud websites.
Facebook claims to use AI tools to address various types of fraud, although, as the FTC has reported, scams on Facebook and other social networks have continued to increase.
Specifically, Facebook says it uses machine learning to identify scams and imposters on Messenger and generally degrades content associated with fraud, including: links to suspicious cloaking domains (which could involve financial scams); pages predicted to be spam (which could involve fake ads, fraud and security risks); and exaggerated health claims.
It also uses automated scam detection systems in the Facebook Marketplace, although their effectiveness is uncertain at best.
Some academic research has focused on using artificial intelligence to address these harms, including a publicly funded project in the UK to detect fake dating profiles, two connected studies on detecting undisclosed influencer affiliations, and studies on detecting email spam.
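The email-spam studies mentioned above typically rely on text classifiers trained on labeled message corpora. As a rough illustration of the underlying idea (not the method of any particular study), here is a minimal naive Bayes classifier on a toy corpus; the messages and labels are invented for the example:

```python
# Illustrative naive Bayes spam classifier on a tiny, invented corpus.
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def train(messages):
    """messages: list of (text, label) pairs, label is 'spam' or 'ham'."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        counts[label].update(tokenize(text))
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        # log prior probability of the class
        score = math.log(totals[label] / sum(totals.values()))
        n = sum(counts[label].values())
        for word in tokenize(text):
            # per-word likelihood with Laplace (add-one) smoothing
            score += math.log((counts[label][word] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

training = [
    ("claim your free prize now", "spam"),
    ("free money click now", "spam"),
    ("meeting agenda for tomorrow", "ham"),
    ("lunch tomorrow with the team", "ham"),
]
counts, totals = train(training)
print(classify("free prize money", counts, totals))  # prints "spam"
```

Production systems use far larger datasets and richer features, but the core pattern-matching step is the same: score an incoming message against statistics learned from previously labeled mail.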
AI tools could help investigate whether companies engage in online behavior that harms competitors.
In 2021, for example, investigative journalists at The Markup used a machine learning tool to examine whether "Amazon systematically placed its own brands and exclusives ahead of better-known, higher-scoring brands."
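The Markup's actual analysis involved a trained machine-learning model over scraped search data; a much simpler descriptive check of the same question could compare how often a retailer's own brand wins the top search slot against its overall share of listings. The data below is entirely hypothetical:

```python
# Simplified sketch of a ranking audit with invented data: does the
# retailer's own brand take the top search slot more often than its
# overall share of listings would suggest?

search_results = [
    # one ordered result list per query: (product, is_own_brand)
    [("A-cable", True), ("B-cable", False), ("C-cable", False)],
    [("A-lamp", True), ("D-lamp", False)],
    [("E-mouse", False), ("A-mouse", True), ("F-mouse", False)],
    [("A-desk", True), ("G-desk", False), ("H-desk", False)],
]

# how many queries the own brand "wins" (appears first)
top_own = sum(1 for results in search_results if results[0][1])
# own brand's share of all listings, as a baseline
own_listings = sum(1 for r in search_results for _, own in r if own)
total_listings = sum(len(r) for r in search_results)

top_rate = top_own / len(search_results)
base_rate = own_listings / total_listings
print(f"top-slot rate: {top_rate:.2f}, baseline share: {base_rate:.2f}")
```

A large gap between the two rates would not prove self-preferencing on its own, but it shows the kind of measurable question such tools let investigators ask at scale.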
A word of caution for consumer-protection and competition enforcers, however: it makes little sense to spend limited resources acquiring AI tools without first deciding exactly what to do with them.
It would be wiser to determine what an agency wants to find or learn, and then assess which available tools, AI or otherwise, are best suited to that task.
Doing so would also require personnel capable of deploying those tools and evaluating their responsible use.