Chat got your tongue? Lessons for AI cheerleaders and skeptics


With the emergence of platforms like DALL-E, and particularly since the launch of ChatGPT last November, we have witnessed a wave of public interest in AI systems, generative media, and their implications for the workplace. While AI has tremendous potential as part of a communications toolkit, there are valid concerns about overreliance and misuse, ranging from plagiarism to disinformation campaigns to worker replacement. We have already seen cases where hasty use of AI has led to disaster for communicators, journalists, and others.

We are also seeing a growing number of webinars aimed at helping professionals better understand AI and generative media, including “Why I’m not (too) scared of Chat GPT” at the University of Minnesota and “ChatGPT & DALL-E: What Generative AI means for journalism” hosted by the Associated Press. While these were intended primarily for educators and newsrooms, respectively, much of the discussion is relevant to a broader audience. Here are some key takeaways that may be useful to communicators:

  • Be transparent and upfront with your collaborators and audience about when and how you use AI. Consider developing a disclosure statement as part of an AI policy.
  • Always double-check AI outputs, and never rely on them for a finished product.
  • Communication tasks that appear best suited for automation or assistance by AI: Content discovery, document analysis, translation, transcription, text summaries, SEO, routine social media updates, newsletters, comment moderation, content transformation, alert personalization, and coding.
  • AI limitations: Accuracy, bias, provenance, anthropomorphism, privacy, costs, copyright, plagiarism, terms of service violations.
  • AI “checkers” like GPTZero may help determine whether a piece of text was machine-generated. For communicators, such tools could flag disinformation and AI bots, helping to prevent missteps during a crisis (a minimal detection sketch follows this list).
  • Learning how to prompt AI effectively is essential to getting useful results (a prompting example also follows this list).
  • Human biases (including racial and gender biases) will be reproduced in AI outputs to the extent that they are present in the inputs.
  • Best practices for AI are already emerging in fields like journalism, higher education, and law. One example is the Partnership on AI’s (PAI) Responsible Practices for Synthetic Media. Communicators should consult these best practices while developing their own.
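
To make the “checkers” point concrete, here is a minimal sketch of how a communications team might screen incoming text against an AI-detection service. Everything service-specific here is an assumption for illustration: the endpoint URL, the auth header, and the response field are placeholders, not the actual API of GPTZero or any other product, so consult your vendor's documentation before building on this.

```python
import requests

# Hypothetical AI-detection service. The URL, auth header, and response
# field below are placeholders for illustration, NOT a real vendor's API.
DETECTOR_URL = "https://detector.example.com/v1/classify"
API_KEY = "YOUR_API_KEY"

def ai_likelihood(text: str) -> float:
    """Return an assumed 0-1 score for how likely `text` is machine-generated."""
    resp = requests.post(
        DETECTOR_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["ai_probability"]  # assumed response field

if __name__ == "__main__":
    suspicious = "Paste the statement, tip, or comment you want to screen here."
    score = ai_likelihood(suspicious)
    # Treat the score as one signal among many, never as proof.
    print(f"Estimated likelihood of AI generation: {score:.0%}")
```

A score from any detector is one signal among many, never proof; human judgment still makes the call.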
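And to illustrate the prompting point, the sketch below contrasts a vague request with a structured one that states the role, audience, format, length, and constraints. It uses OpenAI's Python library; the model name, sample text, and prompt wording are illustrative assumptions, and the pattern, not the specific API, is the point.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

press_release = "Paste the source text you want summarized here."

# Vague: "Summarize this." Structured: spell out role, audience,
# format, length, and a guardrail against unverifiable claims.
prompt = (
    "You are an assistant for a communications team. Summarize the press "
    "release below in three plain-language bullet points for a general "
    "audience, under 60 words total. Do not add any claim that is not "
    "supported by the text itself.\n\n"
    f"Press release:\n{press_release}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; use whatever your team has approved
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Per the takeaways above, the output would still need human review, and disclosure, before it reaches an audience.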

AI is not now, and will not be in the future, a substitute for sound judgment and tactful communication. Stay informed, use discretion, and consult with your team as you decide whether and when it can be useful to you.