Artificial intelligence considerations for large merchants

Artificial intelligence (AI) is rapidly transforming the payments industry, offering new opportunities and challenges for large merchants. AI can enhance various aspects of the payments business, such as transaction processing, customer communications, fraud detection, and innovation. However, AI also poses risks for these businesses, including regulatory complexity, ethical and social implications, cybersecurity threats, and class action litigation risk. This article provides an overview of the terminology, use cases, legal and regulatory landscape, principles, and risks and mitigation strategies that large merchants should consider when deploying AI systems in the payments space.

AI terminology

AI is the capability of a computer system to mimic human cognitive functions, such as learning and problem-solving, and to perform tasks such as predictive modelling and analytics. AI systems use maths and logic to simulate the reasoning people use to learn from new information and make decisions. AI systems are designed to operate with varying levels of autonomy and can be used on a standalone basis or as a component of a product.

Generative AI is a category of AI that uses learning algorithms to create a range of content, such as text, images, audio and video. Generative AI can be used for various purposes, including content creation, personalisation, recommendation and prediction. It relies on foundation models such as GPT (generative pre-trained transformer) large language models, which are trained to understand and generate natural language.
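
As a minimal illustration of how this works in practice – assuming the open-source Hugging Face transformers library and the small GPT-2 model, rather than any particular vendor's product – a pre-trained language model can draft text from a prompt:

    # Minimal text-generation sketch with a small pre-trained GPT-style model.
    # Assumes the Hugging Face `transformers` library and its GPT-2 weights;
    # the prompt is illustrative only.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    draft = generator(
        "Dear customer, thank you for contacting our payments support team.",
        max_new_tokens=40,
    )
    print(draft[0]["generated_text"])

In a production setting, such drafts would typically pass through human review and content filters before reaching a customer.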

AI use cases in the payments space

While AI has been getting a lot of press lately, the technology has been used in the payments industry for years. For instance, fraud detection systems rely heavily on AI to detect anomalies (and thus potential fraud) in consumers’ spending patterns. What is different today is the power of AI, particularly generative AI, to transform a business that is already extremely data-driven – and, along with it, increasing regulatory scrutiny and class action risk. Merchants are expanding their use of generative AI for purposes such as fraud prevention, transaction processing, data management and analytics, customer interactions and communications, and compliance. AI enables companies to leverage large volumes of data, automate complex tasks, enhance decision-making, and create new products and services.
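
To make the fraud-detection use case concrete, the sketch below uses scikit-learn's IsolationForest to flag transactions that deviate from a cardholder's spending pattern; the features, data and contamination rate are simplified assumptions, not a production fraud model.

    # Minimal anomaly-detection sketch for card transactions.
    # Assumes scikit-learn is installed; features and data are illustrative.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Toy transaction features: [amount_usd, hour_of_day, merchant_category_code]
    history = np.array([
        [42.50, 12, 5411],   # grocery
        [18.99, 9, 5814],    # fast food
        [65.00, 18, 5411],
        [22.10, 13, 5814],
        [55.75, 19, 5411],
    ])

    # Train on the cardholder's historical spending pattern.
    model = IsolationForest(contamination=0.1, random_state=0)
    model.fit(history)

    # Score a new transaction: predict() returns -1 for anomalies, 1 for inliers.
    new_txn = np.array([[2500.00, 3, 7995]])  # large amount, 3am, gambling MCC
    if model.predict(new_txn)[0] == -1:
        print("Flag for manual review, anomaly score:",
              model.decision_function(new_txn)[0])

Real systems combine many more signals (device fingerprints, velocity checks, network features) and feed the scores into rules engines and human review queues.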

Current legal and regulatory landscape

There is no comprehensive federal AI law in the US yet, but there are various existing laws and regulations – such as the Electronic Communications Privacy Act, the Computer Fraud and Abuse Act, the Federal Trade Commission Act, and the Fair Credit Reporting Act – that may apply to AI systems in the payments industry. The US Consumer Financial Protection Bureau published a circular confirming that federal consumer financial laws and adverse action requirements apply regardless of the technology used.

In addition, several US jurisdictions – including California, Colorado, New York and the District of Columbia – have enacted or proposed laws or regulations that specifically address AI. These measures may impose requirements or restrictions on the use of AI systems, such as obligations to test for bias, provide transparency and explainability, and ensure data quality and security.

Plaintiffs’ lawyers are gearing up for class action lawsuits, particularly where AI is used to process personal information. For example, voice authentication and fraud prevention may require the use of voiceprints – a potential form of biometric data and a constant source of class action litigation – which would require appropriate up-front consent and privacy disclosures. Automated decision-making can also expose large merchants to claims of discrimination.
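
One practical mitigation is to make consent a technical precondition rather than a policy alone. The sketch below is hypothetical – the consent store and enrolment function are invented for illustration, not a real library API – but it shows the idea of refusing to create a voiceprint without documented, up-front consent:

    # Hypothetical consent gate for biometric voiceprint enrolment.
    # `consent_records` and `enroll_voiceprint` are illustrative stand-ins.
    from datetime import datetime, timezone

    consent_records: dict[str, dict] = {}  # customer_id -> consent metadata

    def record_biometric_consent(customer_id: str, disclosure_version: str) -> None:
        """Store affirmative, up-front consent with the disclosure shown."""
        consent_records[customer_id] = {
            "disclosure_version": disclosure_version,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }

    def enroll_voiceprint(customer_id: str, audio_sample: bytes) -> None:
        # Refuse to process biometric data without documented consent.
        if customer_id not in consent_records:
            raise PermissionError(
                f"No biometric consent on file for {customer_id}; "
                "obtain consent before creating a voiceprint."
            )
        ...  # hand off to the voice-authentication system

    record_biometric_consent("cust-123", disclosure_version="2024-05")
    enroll_voiceprint("cust-123", audio_sample=b"")  # proceeds; consent on file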

AI principles and best practices

To ensure the trustworthiness and responsible use of AI systems, as well as to reduce regulatory and litigation risk, large merchants should adopt and implement AI principles and best practices that align with a generally accepted risk management framework. One such framework is the National Institute of Standards and Technology (NIST) AI Risk Management Framework, which provides a comprehensive and flexible approach to managing AI risks. The framework identifies seven characteristics of trustworthy AI systems that should guide their design, development and deployment: AI systems should be valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed.
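
Teams operationalising the framework sometimes track each characteristic per AI system. The structure below is an illustrative sketch of such a checklist – the fields and review approach are assumptions, not part of NIST’s publication:

    # Illustrative per-system checklist of the NIST AI RMF trustworthiness
    # characteristics; the record format is an assumption, not NIST's.
    from dataclasses import dataclass, field

    NIST_CHARACTERISTICS = (
        "valid and reliable",
        "safe",
        "secure and resilient",
        "accountable and transparent",
        "explainable and interpretable",
        "privacy-enhanced",
        "fair, with harmful bias managed",
    )

    @dataclass
    class TrustworthinessAssessment:
        system_name: str
        # Map each characteristic to reviewer notes; unreviewed items stay None.
        findings: dict = field(
            default_factory=lambda: {c: None for c in NIST_CHARACTERISTICS}
        )

        def open_items(self) -> list[str]:
            """Characteristics not yet reviewed for this system."""
            return [c for c, note in self.findings.items() if note is None]

    assessment = TrustworthinessAssessment("chargeback-triage-model")
    assessment.findings["privacy-enhanced"] = "PII minimised; tokenised inputs."
    print(assessment.open_items())  # the six characteristics still to review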

AI risks and mitigation

AI systems pose various risks and challenges for large merchants, such as:

  • Legal and regulatory risks: AI systems may violate or conflict with existing or emerging laws or regulations, such as those related to privacy, data protection, consumer protection, anti-discrimination, antitrust, intellectual property or cybersecurity.
  • Ethical and social risks: AI systems may have negative or unintended impacts on human values, rights, dignity, autonomy or well-being, such as those related to fairness, accountability, transparency, explainability, privacy or safety.
  • Cybersecurity risks: AI systems may be vulnerable to cyberattacks, such as data breaches, unauthorised access, manipulation or sabotage, which may compromise the confidentiality, integrity or availability of the AI systems or the data they use or produce. Threat actors are also using AI to supercharge their attacks, including via deepfakes and other data manipulation techniques.
  • Talent and resource risks: AI systems may require specialised skills, knowledge or expertise that is scarce or in high demand, such as that held by data scientists, engineers or analysts, which may limit the ability of large merchants to develop, deploy or maintain their AI systems.

What else large merchants can do now

To leverage the benefits and mitigate the risks of AI systems in the payments industry, merchants should take the following actions:

  • Adopt and implement an enterprise-wide AI policy or code of conduct that defines the objectives, scope, roles and responsibilities of the use of AI systems.
  • Establish a cross-functional AI committee to review use cases and impact assessments and provide guidance and oversight for the design, development and deployment of AI systems (an illustrative intake record is sketched after this list).
  • Establish protocols for reporting to the board of directors, and provide training to key stakeholders, including information technology staff, senior management and board members.
  • Consider external assessments of material AI systems.
  • Enhance oversight of service providers, and place strict contractual requirements on their use of AI, including risk assessments and follow-up.
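
An intake record can tie several of these actions together. The sketch below is hypothetical – every field name and routing rule is an illustrative assumption – but it shows how use cases involving biometric data, automated decision-making or a missing impact assessment might be routed to the committee:

    # Hypothetical AI use-case intake record for committee review; all fields
    # and the escalation rule are illustrative assumptions, not a standard.
    from dataclasses import dataclass

    @dataclass
    class AIUseCaseIntake:
        use_case: str                   # e.g. "generative chat for dispute intake"
        business_owner: str
        processes_personal_data: bool
        processes_biometric_data: bool  # triggers consent/disclosure review
        automated_decision_making: bool # triggers bias-testing review
        third_party_vendor: str | None  # triggers contract/oversight review
        impact_assessment_done: bool

        def escalate_to_committee(self) -> bool:
            """Route higher-risk use cases to the cross-functional AI committee."""
            return (
                self.processes_biometric_data
                or self.automated_decision_making
                or not self.impact_assessment_done
            )

    intake = AIUseCaseIntake(
        use_case="voice authentication for account access",
        business_owner="payments-platform",
        processes_personal_data=True,
        processes_biometric_data=True,
        automated_decision_making=True,
        third_party_vendor="ExampleVoiceCo",  # hypothetical vendor
        impact_assessment_done=False,
    )
    print(intake.escalate_to_committee())  # True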

The authors thank partner Michael Bahar and associate Kristi Thielen for their contributions.