Guidance on the Use of Generative Artificial Intelligence (AI)

Artificial Intelligence (AI) can be a helpful marketing and communications tool. Generative AI and Large Language Models (LLMs) have permeated all facets of the workplace, enabling new ways to create and manage work. University Relations provides this guidance on using the technology and will update it as AI and its uses evolve. This guidance does not address academic use by students or faculty, which is discussed in the AI and ChatGPT in Teaching: Context and Strategies document.

Background

The use of AI is personal to each individual. At the University of Minnesota, it is not a replacement for our marketing and communications staff but augments the work we do. 

  • Traditional AI, like spellcheck or voice assistants, uses specific sets of rules and data to complete a task. 
  • Generative AI is focused on creating “new” content based on the information it receives. It can assist marketing and communications professionals at all stages of a project from development to release, both in writing and design. 
  • Large Language Models are types of AI programs that recognize and generate human-like text. They are embedded into Generative AI systems to provide a word-based output (essentially giving you an answer). These programs are trained on the data they receive, which means that fact-checking and close attention to meaning and implications are critical to their use as a tool.

Guidelines 

These guidelines are not comprehensive and are subject to change as we learn more about AI capabilities and security.

General use

Review, verify, and modify all Generative AI-produced content.

AI output is based on the data it's trained on and the information you provide, and it fills gaps as it sees fit, even with incorrect information. Double-check that sources are correct and refer to a content expert whenever possible. AI output should never be your final product.

Ensure use of AI complies with University data privacy and information security policies.

Follow OIT's Artificial Intelligence guidance to confirm the proper use of public data. No University data should be entered into a Generative AI engine — all content provided to these systems is used to train them, increasing known and unknown security risks. Additionally, any tools being used must be reviewed by OGC and OIT before you accept their Terms and Conditions.

Further questions about AI safety can be directed to University Information Security at security@umn.edu.

Maintain ethical use of AI across your department or team.

Your teams should have open conversations about how AI is used in your work. New technologies improve through experimentation and discussion that keep safety and efficiency top of mind.

Operational Tasks

AI can assist with productivity and organizational tasks.

Generative AI can help manage your workflow by taking notes, drafting emails, creating checklists, and more. 

Note that meetings or documents covering confidential or sensitive material are subject to Data Access and Privacy Requests.

Be transparent in your use of AI.

Before using a program that requires recording voices or faces for a meeting or presentation, disclose your intended use to participants so they can make an informed choice about their likeness becoming part of AI data. 

Writing and Editing

Utilize Generative AI for initial draft work.

AI can assist with idea creation, brainstorming, and research gathering, offering new ways to write about a topic and engage your audience. AI tools can also find information to prepare for a project or interview and bolster strategy.

We recommend using it for introductory questioning and doing further research individually to corroborate or expand on its findings.

Use Generative AI for grammar, structure, and voice assistance.

AI programs can help edit for grammar, length, clarity, tone, and style. They can also format a document or slide deck with proper headings, spacing, and citations. Remember to review the output and edit content to align with the University's editorial style.

There are some instances where Generative AI should not be used.

At this time, we advise that AI not be used in the creation of institution-specific content (e.g., leadership messaging) or information regarding the immediate health and safety of our community (e.g., updates and triage).

Images and Video

Generative AI can assist with the implementation of accessibility features.

There are many ways to utilize AI programs to jumpstart accessibility measures, such as transcription and captioning, but avoid developing alternative text or interpreting charts or graphs with AI. It is imperative to closely review the output so the content is accurate for those who rely on these services.

Further inquiries about accessibility accommodation resources can be found on the Disability Resource Center website.
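The close-review step above can be built into your captioning workflow. As a minimal sketch, assume a transcription tool returns caption segments with confidence scores (the field names here are hypothetical, not from any specific product): segments the tool is least sure about are routed to a human before publishing.

```python
# A minimal sketch, assuming a hypothetical transcription tool that
# returns caption segments with per-segment confidence scores.

def flag_for_review(segments, threshold=0.85):
    """Return caption segments whose confidence falls below the
    threshold so a human can verify them before publishing."""
    return [s for s in segments if s["confidence"] < threshold]

segments = [
    {"start": 0.0, "text": "Welcome to the University of Minnesota.", "confidence": 0.97},
    {"start": 3.2, "text": "Today's topic is generate of AI.", "confidence": 0.61},
]

needs_review = flag_for_review(segments)  # the garbled low-confidence segment
```

Automated flags like this narrow the review, but they do not replace it: a confidently wrong caption will pass any threshold, so a full human pass remains the standard.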

Generative AI content should not be published as your own work, nor should AI be credited as a sole contributor.

AI tools should be supplemental to the work you do. Because of the nature of AI, the work must be cited. The University Libraries have developed a Citing ChatGPT and other LLMs page where you can learn more.

Be mindful of bias in image creation with Generative AI.

AI tools can reflect biases present in their training data. Use equitable and sound decision-making during your review process and publish content reflecting the University's diversity, equity, and inclusion standards.

There are some instances where Generative AI should not be used.

Generative AI should not be used to modify University trademarks, mascots, or other brand assets without explicit permission from University Relations.

For the Web

AI tools can support data and analytics tasks.

AI can assist with data interpretation, pattern recognition, and search engine optimization.

Coding

AI can improve workflow by automating routine tasks, reviewing for code quality, aiding in debugging a complex issue, and recommending different approaches to building web applications. AI should be used along with human expertise to ensure code reliability, security, and maintainability. 
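One concrete way to pair AI with human expertise is to treat AI-drafted code as untrusted until it passes tests you write yourself. The sketch below assumes an AI assistant drafted a small URL-slug helper (a hypothetical example, not an approved workflow); the developer's own checks gate it before merging.

```python
# A minimal sketch of the human-verification step. Suppose an AI
# assistant drafted this helper (hypothetical example); the reviewer
# writes checks rather than trusting the output as-is.
import re

def slugify(title):
    """AI-drafted helper: lowercase a page title and join its
    alphanumeric words with hyphens for use in a URL."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

# Human review: unit checks written by the developer before merging.
assert slugify("Guidance on the Use of AI") == "guidance-on-the-use-of-ai"
assert slugify("  Extra   spaces  ") == "extra-spaces"
assert slugify("") == ""
```

Tests catch the routine failures; security and maintainability review of AI-suggested code still requires a human reader.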

OIT has not selected a preferred University-approved AI tool to be used for this purpose.

Frequently Asked Questions

How can I improve the output from Generative AI?

Make sure your prompts are specific and direct. If you are using a tool built on OpenAI's ChatGPT, it is also recommended to give positive feedback when it returns a correct response, which can improve its future outputs.

Learn more from the Prompt Engineering Guide and Axios HQ.
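As an illustration of "specific and direct," compare a vague request with one that states the task, audience, length, and tone explicitly. The helper and prompt text below are hypothetical examples, not an official template.

```python
# Illustrative only: a hypothetical helper that assembles a specific,
# direct prompt from explicit constraints rather than a vague request.

def build_prompt(task, audience, length, tone):
    """Combine explicit constraints into one direct prompt string."""
    return (
        f"{task} "
        f"Audience: {audience}. "
        f"Length: {length}. "
        f"Tone: {tone}."
    )

vague = "Write something about our event."

specific = build_prompt(
    task="Draft an announcement for a campus sustainability fair.",
    audience="undergraduate students",
    length="about 60 words",
    tone="friendly and informative",
)
```

The specific version gives the model far less room to fill gaps on its own, which is exactly where errors tend to creep in.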

What tools does the University recommend I use?

Use tools that are credible and openly detail how they handle data and privacy. A product's popularity can be a useful barometer: widely used tools tend to draw on more diverse data and have stronger quality control.

Is there a University policy on the use of Generative AI tools?

There is currently no official policy on the use of Generative AI. Departments and teams are encouraged to create guidelines that best fit their work while continuing to explore these tools cautiously.

Are there University-supported AI tools?

The University is not entering into any contracts regarding Generative AI until further institutional, state, and federal policies are established.

Where can I find the University’s data policies?

You can find all University policies in the University Policy Library.

What should I not create with the assistance of Generative AI?

We recommend abiding by the University guidelines referenced in this resource, those of your area of expertise, and the manuals of style relevant to your work.

Will I get in trouble for using Generative AI?

You are responsible for published content that includes AI-generated material. Anything produced by AI should be vetted for accuracy, tone, and appropriateness.