Have you ever wondered whether the photo or video you’re viewing, the audio you’re listening to or the article you’re reading is real? Artificial intelligence (AI) can blur the line between authentic and inauthentic content. For fraud perpetrators, the ability to create deepfakes, voice clones or machine-generated communications can make their scams far more compelling and effective.
Although using AI for criminal purposes is a clear-cut abuse of the technology, even well-intentioned businesses can violate the Federal Trade Commission (FTC) Act. Section 5 of the FTC Act prohibits “unfair or deceptive acts or practices,” including any material representation, omission or practice that’s likely to mislead consumers acting reasonably under the circumstances. Here’s how to reduce the risk of violations.
Limit the Technology’s Risk
Using AI can help improve products, increase production efficiency and enable your company to stand out in a crowded marketplace. But AI use can also lead to misrepresentations and unintentional violations of the FTC Act.
If you design AI-based solutions, set aside time to consider how they could be abused. Suppose you’re designing an application that uses AI to analyze a voice and create a new recording that mimics that individual. How might a fraudster use the technology to engage in illegal activity? If you can envision how someone might abuse your app, criminals certainly can too. Don’t rush a product to market only to take risk-management measures after customers (and criminals) start using it. Embed controls in your AI products before release.
For example, when developing a voice cloning application (see the code sketch after this list), you might want to:
- Secure consent from the individuals whose voices will be cloned,
- Embed a watermark in the audio noting it was generated by cloning, and
- Limit the number of voices a user can clone.
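A minimal sketch of what those three controls might look like in code appears below. The consent registry, per-user cap and watermark helper are illustrative assumptions rather than a real library’s API, and a production system would keep this state in durable storage and embed the watermark in the audio signal itself:

```python
# Pre-release controls for a hypothetical voice cloning service.
# All names and thresholds here are illustrative assumptions.

MAX_CLONED_VOICES = 3                 # illustrative per-user cap
CONSENT_REGISTRY: set[str] = set()    # speakers with documented consent
CLONE_COUNTS: dict[str, int] = {}     # distinct voices cloned per account

def embed_watermark(audio: bytes) -> bytes:
    # Placeholder: a real watermark would be embedded in the audio
    # signal itself, not prepended as a byte tag.
    return b"AI-CLONED:" + audio

def clone_voice(user_id: str, speaker_id: str, sample: bytes) -> bytes:
    """Refuse to clone without consent, enforce a per-user quota,
    and watermark whatever the service produces."""
    # Control 1: consent must be on file for the person being cloned.
    if speaker_id not in CONSENT_REGISTRY:
        raise PermissionError(f"No documented consent for speaker {speaker_id}")
    # Control 2: cap how many distinct voices one account can clone.
    if CLONE_COUNTS.get(user_id, 0) >= MAX_CLONED_VOICES:
        raise PermissionError("Per-user voice cloning limit reached")
    CLONE_COUNTS[user_id] = CLONE_COUNTS.get(user_id, 0) + 1
    synthetic = sample  # stand-in for the actual cloning model
    # Control 3: mark the output as synthetic before returning it.
    return embed_watermark(synthetic)
```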
Robust user authentication and verification, analytics to detect abuse and a strict data retention policy can also help mitigate AI’s inherent marketplace risk.
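As one illustration of a strict data retention policy, the sketch below deletes uploaded voice samples once they age past a fixed window. The 30-day window and in-memory store are assumptions chosen for illustration, not legal guidance:

```python
# Retention sweep for a hypothetical store of uploaded voice samples.
from datetime import datetime, timedelta, timezone

RETENTION_WINDOW = timedelta(days=30)  # illustrative window, not legal advice
SAMPLE_STORE: dict[str, tuple[bytes, datetime]] = {}  # id -> (audio, uploaded_at)

def purge_expired_samples(now: datetime | None = None) -> int:
    """Delete voice samples older than the retention window; return the count."""
    now = now or datetime.now(timezone.utc)
    expired = [sample_id for sample_id, (_, uploaded_at) in SAMPLE_STORE.items()
               if now - uploaded_at > RETENTION_WINDOW]
    for sample_id in expired:
        del SAMPLE_STORE[sample_id]
    return len(expired)
```

Running a sweep like this on a schedule keeps old biometric data from accumulating, which limits both abuse and breach exposure.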
Responsibility to Customers
Although the technology for identifying AI-generated content is improving every day, it often lags behind the technology used to evade detection. So consumers may not know when AI is used or be able to detect it, and that shouldn’t be their responsibility. It’s better for your company to disclose AI use, both to preserve customer loyalty and to avoid negative media coverage.
The same goes for using AI in advertising. Suppose, for instance, that your ads use AI to create an image, a voice or written content. If you don’t disclose the use of AI and consumers believe none was involved, you could attract regulatory scrutiny. In other words, if your company’s ads mislead, you could face FTC enforcement action.
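One way to make that disclosure systematic rather than ad hoc is to block publication of any AI-generated asset that lacks a disclosure. The sketch below assumes a hypothetical AdAsset record and publishing step; it isn’t a real advertising API:

```python
# Enforce an AI-use disclosure on hypothetical ad assets before release.
from dataclasses import dataclass, field

DEFAULT_DISCLOSURE = "This content was created with the assistance of AI."

@dataclass
class AdAsset:
    body: str
    ai_generated: bool
    disclosures: list[str] = field(default_factory=list)

def prepare_for_publication(asset: AdAsset) -> AdAsset:
    """Attach a disclosure to AI-generated content before it ships."""
    if asset.ai_generated and not asset.disclosures:
        asset.disclosures.append(DEFAULT_DISCLOSURE)
    return asset
```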
Be Proactive and Talk to Experts
Deceiving consumers isn’t your company’s objective. But you must be proactive and act responsibly when using AI in products, services and advertising. Take time to evaluate how the technology could mislead customers and violate the FTC Act. Consult your attorney, and contact us with questions about how to embed checks and balances and limit the technology’s risk.
© 2023