5 AI Ethics Questions Marketers Must Ask

The Gist

  • FTC focus. The FTC has begun to outline what constitutes fraudulent AI use and has already settled an ecommerce case involving deceptive claims about AI usage.
  • Risk complexity. The fleeting nature of service delivery combined with the scale of AI makes identifying fraud risk complicated for marketers.
  • Ethical inquiry. Marketers should consider the ethical implications of their AI strategies to ensure customer trust and compliance.

It’s an understatement to say there has been a rapid introduction of AI-based products and services. The public’s adoption of tools such as ChatGPT, Propensity, and Bard — now Gemini — has created immense curiosity about almost anything AI-related or AI-infused, raising important AI ethics considerations.

AI Ethics: Balancing Curiosity and Truth

Marketers of products that include AI are looking to leverage that curiosity. But when does courting customers cross the line into false advertising? What ethical concerns become a risk to how customers experience an AI-based product or service?

With AI, marketers must be more direct in identifying and explaining the benefits of AI-based offerings. Campaign tactics that do not clearly explain a product or service’s benefits can mislead customers by setting expectations that cannot be met.

Related Article: AI, Privacy & the Law: Unpacking the US Legal Framework

An FTC Warning

Like leaders across many organizations and industries, the FTC has been keeping an eye on AI developments, with a particular concern for marketplace transparency. Last year, Michael Atleson, an attorney for the Federal Trade Commission (FTC) Division of Advertising Practices, posted an FTC notice online warning that AI-based products are often overhyped and that enthusiasm should be balanced with a modicum of caution.

The FTC’s Four Key AI Ethics Questions

The notice outlines four key questions the FTC will use to examine the validity of AI-based solutions:

  • Are you exaggerating what your AI product can do?

  • Are you promising that your AI product does something better than a non-AI product?

  • Are you aware of the risks?

  • Does the product actually use AI at all? 

Related Article: Is AI Executive Order a Data Privacy Compass for Customer Experience?

AI Value: Proving Enhancement and Impact

All of these are terrific questions. I imagine the last question will come up frequently among marketers of AI-influenced solutions, as many popular software solutions incorporate AI tools at a feverish pace.

Yet proving the value of a product enhanced with AI will also be particularly difficult. How does a consumer know that an enhancement has made their experience with a product or service significantly better?

Related Article: FTC Won’t Tolerate Generative AI Deception in Marketing, Customer Service

FTC Cracks Down on AI Misrepresentation

One case already demonstrates the major risk of not delivering AI-based experiences as promised. This past February, three business coaches settled an FTC claim that they deceived affluent ecommerce clients with unfounded promises of increased earnings from their consulting. Part of their offering involved operating online stores on the clients’ behalf on marketplaces such as Walmart and Amazon, and the coaching was advertised to include AI-powered services for those stores. In the end, the vast majority of clients did not achieve the promised earnings, and Walmart and Amazon suspended many of the stores for policy violations. Under the FTC settlement, the accused coaches had to return nearly $21 million in assets and accept permanent bans from consulting in the ecommerce space.

Related Article: Executive Order on AI: A Needed Step or Kitchen-Sink AI Governance?

AI Ambiguity: Navigating Benefits and Claims

As described in Atleson’s notice, AI encompasses a number of different frameworks, which makes its benefits difficult to define precisely. Bad actors often exploit this ambiguity to sell ineffective products to unsuspecting customers. Consumers must be able to measure or compare benefit claims. For example, a health drink can claim to contain iron as an ingredient, but it may not contain enough iron to actually benefit the body; consumers can compare the quantity of iron in one drink against another by reading the nutrition labels.

But many offerings are services, and customer experiences with a service are fleeting. It can be more difficult to determine whether outcomes were delivered from such transient experiences.

Related Article: AI in Customer Experience: The Impact on Customer Journey

5 AI Ethics Questions Marketers Should Ask About Their Strategies

So, what should marketers consider when deploying AI ethically, in a way that ensures customers notice a benefit rather than a deficit, as in the FTC case?

Answering the following questions can highlight the areas marketers should focus on to uphold ethical customer experiences.

AI Ethics Question No. 1: What Social Aspects of an AI Feature Should Be the Algorithms’ Responsibility? 

The answer to this question addresses decisions where human intervention is essential. AI features should not be solely responsible for making decisions that have significant social impacts. AI algorithms are trained on massive datasets that can reflect societal biases. The ability to process many decisions quickly based on biased data can amplify discriminatory or unfair outcomes.

Personnel decisions such as hiring or firing an employee, or services with customer-facing approvals such as consumer bank loans, are prime examples where having human intervention in an automated AI-based process is beneficial.
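The gating described above can be sketched in code as a simple routing rule: decisions in socially significant categories always go to a human reviewer, while low-impact decisions may be automated. This is a minimal, hypothetical sketch; the category names, threshold, and function are illustrative, not from any particular product.

```python
# Human-in-the-loop sketch: decisions with significant social impact
# (illustrative categories) are queued for human review instead of
# being finalized by the model alone.

HIGH_IMPACT = {"hiring", "termination", "loan_approval"}

def route_decision(decision_type: str, model_score: float,
                   threshold: float = 0.5) -> dict:
    """Return an action; high-impact decision types always require a human."""
    if decision_type in HIGH_IMPACT:
        return {"action": "human_review", "model_score": model_score}
    # Low-impact decisions may be automated on the model's score alone.
    action = "approve" if model_score >= threshold else "decline"
    return {"action": action, "model_score": model_score}

print(route_decision("loan_approval", 0.92))          # routed to a person
print(route_decision("product_recommendation", 0.92)) # automated
```

The key design choice is that the category check comes before the score check, so no model confidence level can bypass human review for a high-impact decision.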

Related Article: AI in Marketing: Balancing Creativity and Algorithms for Marketers

AI Ethics Question No. 2: What Personal Information Should AI Have Access to in Order to Determine an Outcome?

The answer should be related to how personally identifiable data is processed within a marketer’s organization. The amount of personal information accessed by an AI algorithm should be limited to what is necessary to achieve its intended purpose. For example, if an AI is recommending products to a customer, it might need access to the customer’s purchase history, but it would not need access to the customer’s social security number.

Effective data privacy is about permission. An organization must ensure that an AI model consistently processes only the customer data it has permission to access.
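One common way to enforce that limit in code is an explicit allow-list of fields per purpose, so the model never sees data it has no permission to use. A hedged sketch follows; the field names and the permission mapping are illustrative assumptions, not a real schema.

```python
# Data-minimization sketch: only fields allow-listed for a given purpose
# are passed onward to an AI model. Field names are illustrative.

PERMITTED_FIELDS = {
    "product_recommendation": {"customer_id", "purchase_history"},
    "support_routing": {"customer_id", "open_tickets"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Strip a customer record down to the fields permitted for `purpose`."""
    allowed = PERMITTED_FIELDS.get(purpose, set())
    return {k: v for k, v in record.items() if k in allowed}

customer = {
    "customer_id": "c-123",
    "purchase_history": ["shoes", "socks"],
    "ssn": "000-00-0000",  # never needed for recommendations
}

# The SSN is dropped before the record ever reaches the model.
print(minimize(customer, "product_recommendation"))
```

Because an unknown purpose maps to an empty allow-list, a new AI feature gets access to nothing by default until someone deliberately grants it.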

Related Article: The 2024 AI Roadmap for Marketers

AI Ethics Question No. 3: How Do I Ensure That AI Is Not Training on Biased Data and Does Not Perpetuate Discriminatory Practices?

This type of question gets to the heart of many AI usage debates that have gained publicity over the years, such as the application of facial recognition. Concerns about AI decisions based on biased data are also why developers are working on modeling techniques to reduce bias. 

One example I profiled is Latimer, a research-focused large language model designed to incorporate cultural data in its training in order to reduce cultural discrimination in AI models.

Marketers should familiarize themselves with the latest advances in AI, such as research on the optimal deployment of retrieval augmented generation (RAG), to understand their practical options for ensuring unbiased training in their AI applications.
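At a high level, retrieval augmented generation means retrieving vetted documents relevant to a query and grounding the model's prompt in them, rather than relying only on whatever the base model absorbed during training. The toy sketch below uses keyword overlap as the retriever; the corpus and scoring are illustrative assumptions, and real deployments use vector embeddings.

```python
# Toy RAG sketch: retrieve the most relevant vetted document for a query
# and build a grounded prompt from it. The keyword-overlap scorer stands
# in for the embedding search a real system would use.
import re

CORPUS = [
    "Our return policy allows refunds within 30 days of purchase.",
    "Premium members receive free shipping on all orders.",
]

def tokens(text: str) -> set:
    """Lowercased word tokens, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, corpus=CORPUS) -> str:
    """Return the corpus document sharing the most words with the query."""
    q = tokens(query)
    return max(corpus, key=lambda doc: len(q & tokens(doc)))

def build_prompt(query: str) -> str:
    """Ground the model's prompt in a retrieved, vetted document."""
    context = retrieve(query)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is the return policy?"))
```

Grounding answers in a curated corpus is one practical lever for the bias concern above: the marketer controls what goes into the corpus, rather than inheriting whatever the base model was trained on.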

AI Ethics Question No. 4: How Will Customers Understand How AI-Powered Recommendations or Decisions Are Made?

Helping customers understand AI-based outcomes is essential for ensuring that a customer experience with AI feels authentic and makes reasonable sense, unlike the FTC ecommerce case in which promises were excessive and unmet.

There are many simple actions marketers can take along the purchase process to highlight customers’ options for managing an AI interface. For example, opt-out choices should be prominent, so customers know where they can decline an interaction with an AI chatbot if they wish, and transparent, so they understand what each option does.
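Honoring that opt-out choice can be as simple as checking a stored preference before any AI interaction begins. The sketch below is hypothetical; the in-memory preference store stands in for whatever consent system an organization actually uses.

```python
# Sketch: honor a stored opt-out before starting an AI chat session.
# The in-memory dict stands in for a real consent/preference store.

preferences = {"c-123": {"ai_chat_opt_out": True}}

def start_chat(customer_id: str) -> str:
    """Route to a human agent if the customer has opted out of AI chat."""
    if preferences.get(customer_id, {}).get("ai_chat_opt_out"):
        return "human_agent"  # respect the customer's stored choice
    return "ai_chatbot"

print(start_chat("c-123"))  # customer opted out: human agent
print(start_chat("c-999"))  # no opt-out on file: AI chatbot
```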

AI Ethics Question No. 5: Are There Potential Societal Impacts From the AI Influence in Our Product or Service?

Taking time to brainstorm the potential societal impact should be a required part of product or service development when AI is involved. For example, embedding AI in many processes could automate work to the point that some positions are eliminated. A key question is whether the people affected live in areas where the brand has already made significant investment; eliminating jobs there would undercut previous efforts to bolster the local economy and invite high-profile negative attention. It is important to map out potential impacts, such as in a mind map, and to determine steps to mitigate them.

Final Thoughts

AI-infused marketing technology has rapidly become a key element in meeting customers’ expectations in the marketplace. Marketers are now charged with ensuring that those expectations are met with the highest standards.