How to Navigate Your Brand Around the Double-Edged Sword of AI

The rapid pace of AI innovation has created a double-edged sword for companies, making it hard to build trust and position their brands authentically within the conversation. There are many reputational landmines to navigate, from accusations of “AI-washing” to charges of irresponsible innovation.

Mission North’s Brand Expectations Index examines the impact of AI on the reputation and relevance of brands. We found that 47% of Americans see AI creating a “more dangerous future” rather than “a better future,” while 55% say they don’t trust AI startups and companies.

Despite this cynical climate, it’s clear that the financial future of many companies rests on their ability to stand out in the AI economy. Last year, AI drove the vast majority of the S&P 500’s 24% growth, while one in four VC dollars invested in U.S. startups went to AI-first companies.

The stakes are high for CEOs (and their communications and marketing teams) to get AI positioning right. This holds true whether you’re an AI juggernaut like Google defending your position, a challenger like OpenAI vying for dominance, a startup fighting to stand out, or a legacy, non-tech brand that needs to reposition itself for AI with shareholders.

One critical step is for companies to define their first principles of AI storytelling, with the goal of minimizing risk, strengthening their market position and maximizing their opportunities around AI. Here are four principles for successfully navigating this tricky landscape.

Principle 1: Recalibrate Your AI Narrative for Different Audiences

With trust in AI and the companies developing it so low, it’s crucial to put authentic, accessible spokespeople at the heart of your narrative and communicate in a way that meets each stakeholder group where it is. For example, your investors likely have very different expectations and feelings about your AI story than your employees and customers do.

It’s critical to address stakeholders’ top concerns about AI head-on. With economic issues and layoffs so heavily associated with AI, your spokespeople should communicate openly to both employees and external audiences about these concerns. This could include showcasing how your AI innovations and investments have helped employees.

A great example is Mark Surman, President of the Mozilla Foundation, who has emerged as a trusted voice through contributed articles and congressional discussions surrounding the responsible use of AI.

Principle 2: Build Trust with Data, Actions and Evidence 

Recalibrating your AI communications for different stakeholders requires an understanding of where they each stand on AI. Data and tangible proof points are essential to better understand your audiences and earn their trust.

Owned research can be a powerful barometer of where your audiences fall along the continuum of AI anxiety and optimism. Survey your core audiences to understand their sentiment around AI, and use the findings to substantiate why your products and services matter and how AI can amplify and evolve them for your stakeholders. For example, Canva’s Marketing & AI Report demystifies generative AI for marketers and creative professionals and educates them on the technology’s biggest benefits and risks.

From there, use proof points to combat skepticism and cut through the “AI-washing.” All thought leadership and earned and paid media storytelling should be anchored in the specific investments you are making in AI, centered on minimizing risks for customers and maximizing their benefits. For example, Snowflake bolstered its investment in R&D to help customers better test their AI data models. This Wall Street Journal article brings that investment to life for customers, investors and employees.

<split-lines>"Owned research can be a powerful barometer of where your audiences fall along the continuum of AI anxiety and optimism."<split-lines>

Principle 3: Carefully Calculate Your Risks and Avoid Surprises

Companies must be prepared to address their audiences when (not if) their AI risks become a real issue. Build an AI crisis plan that proactively assesses areas of risk, stakeholder expectations and societal impact. The plan should account for key audiences, what they expect of you, and how you’d communicate with them around sensitive news or in the event of a crisis. Having it in place will enable you to communicate transparently and preemptively, which is critical to earning the benefit of the doubt when things go wrong.

The media, markets and policymakers hate AI surprises. This is why major players like Google and OpenAI favor frequent, iterative product releases over big-bang launches: an intentional strategy designed to bring stakeholders along on the journey without any major surprises.

Many companies have come under fire for casually rolling out or boasting about their use of AI without proactively addressing the why and how. Last year, Levi Strauss & Co. announced it was testing AI-generated models with more diverse body types and skin tones. The backlash was swift, with critics accusing the company of a lazy attempt at DEI that took jobs away from real models. Asking the simple question “what could go wrong?” would almost certainly have surfaced that risk and prompted up-front communication about the company’s intentions.

<split-lines>"It’s important to build an AI crisis plan that proactively assesses areas of risk, stakeholder expectations and societal impact."<split-lines>

Principle 4: Self-Govern Your AI Innovations

Ungoverned AI was one of the top risks of 2024 cited by Eurasia Group, a leading geopolitical risk advisory firm. In the absence of regulatory governance, companies must work to de-risk their AI developments and maintain their license to innovate through self-governance.

Companies should communicate clearly and proactively about their AI governance structure. This should include its core charter and operating principles, as well as the names and credentials of all governing members. This signals to policymakers, regulators and users that you are committed to responsible AI. OpenAI’s launch of a grant program aimed at democratizing AI governance is a great example.

<split-lines>"In the absence of regulatory governance, companies must work to de-risk their AI developments and maintain their license to innovate through self-governance."<split-lines>

The Power of Innovation Storytelling

Companies must find ways to position themselves within a broader AI economy that shows no signs of slowing down, while keeping intention and authenticity at the forefront. True, there are numerous reputational risks involved, but they are worth navigating.

Leaders must take a strategic approach to earn the benefit of the doubt with the public and build the trust of regulators, investors and customers. This applies to any form of innovation storytelling. The stakes are high, but it is not an impossible task. Cracking it will involve a mix of credible spokespeople, proof points, a shrewd assessment of the risks, and policies that lead with transparency and self-governance.
