Lessons From ChatGPT on Building Trust in Disruptive Tech


ChatGPT’s story is staggering. Within two months of launching its prototype, it eclipsed 100 million monthly active users, making it the fastest-growing application in history. During that time, its creator, OpenAI, also closed a $10 billion investment from Microsoft, embedded its core tech into Bing, and officially sparked the generative AI wars with Google.

What happens next has major ramifications not just for OpenAI and the generative AI space, but for any disruptive technology. Building trust and communicating transparently are crucial parts of introducing a technology of this magnitude with intention and care. There is a big difference in communication strategy for an early-stage startup versus a business valued at approximately $30 billion that birthed the most consequential form of generative AI the world has ever seen.

OpenAI’s new capital and market position bring incredible responsibility, and the company needs to do much more to earn the trust of users, policymakers and institutions as it continues to innovate. This will require a different communications playbook than the one it’s used to date.

What’s the best way to build trust in new, disruptive technologies? My colleague, Mission North’s SVP of Corporate Reputation Nick Maschari, asked ChatGPT. Here’s a summary of its response, edited for brevity:

  1. Be transparent about how the technology works and where its biggest risks lie, publishing clear information about both its capabilities and the ways it could be misused.
  2. Address concerns about the technology with clear and concise information that owns the risks and explains the steps underway to safeguard society. 
  3. Demonstrate the benefits via case studies, testimonials, and other evidence that show how the technology has helped improve lives.
  4. Engage users, employees and other stakeholders to participate in the development and testing of the technology.
  5. Provide support, training and technical assistance for users to help ensure they use it effectively and safely.
  6. Work with policymakers and regulators to make sure the technology complies with relevant laws and regulations.

If I critiqued OpenAI’s communications as an early-stage startup using this framework, I’d say the company crushed it, having implemented a majority of these tactics to varying degrees. But if I critique its communications approach with its new capital and market position in mind, I’d say there’s much more work to do to earn trust and permission from stakeholders. Here are a few places to start, which other disruptive tech companies can also use to level up their communications strategies as they scale.

"[OpenAI] needs to do much more to earn the trust of users, policymakers and institutions as it continues to innovate. This will require a different communications playbook..."

Innovate and Iterate With a Strong Point of View

In a recent StrictlyVC interview with Connie Loizos, OpenAI CEO Sam Altman said he believes the most responsible way to introduce a technology of this societal significance is to do it very iteratively. Trickling imperfect versions into the market and gathering user feedback will allow OpenAI to solve the right problems and earn the goodwill of policymakers and others. Launching a perfectly baked product in three years, according to Altman, would be bad for society: we couldn't cope with that level of disruption all at once, and trust would be impossible to build.

I agree. Yet while an iterative and imperfect release strategy is responsible, new versions of the technology must be released with a clear point of view. This means proactively and preemptively communicating about each release's most evident risks. That point of view should serve as the throughline across all content and outbound communications.

"While an iterative and imperfect release strategy is responsible, new versions of the technology must be released with a clear point of view."

Be a Resource to Your User Community

Since launching ChatGPT more than three months ago, OpenAI has published three customer success stories and a smattering of blog content. That’s not enough. Users (and society) would benefit from many more, and much deeper, content resources around best practices for ChatGPT. For any mission-driven tech company, this should include industry-specific research and insights made accessible to non-researchers, Q&As with experts, practical tutorials, and case studies. In addition to owned content, companies should curate helpful content created by their user communities.

Build Advocates and Alliances

The best way to attract new advocates and build trust with your broader user community is to develop and promote the voices of your most influential advocates. For ChatGPT, this might include Cherie Shields, a high school English teacher in Sandy, Ore., or John Villasenor, a law professor at UCLA. Future owned content and earned media should feature the voices of the educators, creators, marketers, communicators, lawyers and business leaders who are paving the way for generative AI and humanity to flourish. Convening user meetups and industry roundtable discussions to better understand the biggest issues and opportunities will help build a strong community that’s more likely to advocate on behalf of the technology.

"The best way to attract new advocates and build trust with your broader user community is to develop and promote the voices of your most influential advocates."

Ask Policymakers and Regulators for Permission, Don’t Beg for Forgiveness

Lawmakers and regulators are generally skeptical of the pace of technological change and innovation. Don’t wait until you need something from elected and government officials to build relationships with them. Start early, understand what matters to them, and be responsive to their concerns and the concerns of their constituents. By bringing them along on your innovation journey, and even training staffers on how they can benefit from a technology like ChatGPT, tech leaders can cement their permission to keep innovating.

Major innovation always comes with potential risks and tradeoffs. To build the trust and understanding necessary to release disruptive technology responsibly, companies must focus on clear, authentic and ongoing communication with stakeholders. There’s no doubt that communication has played a role in ChatGPT’s meteoric rise. But to continue developing the technology responsibly, its leaders must take these crucial steps to ensure its transformative impact is a positive one.
