AI Taskforce Recommended AI Guidelines

Generative Artificial Intelligence (GenAI)

Generative artificial intelligence (GenAI), also referred to as Generative AI or simply AI, describes algorithmic models designed to create new content based on the data used to train the model (known as training data). Popular GenAI tools (e.g., Copilot, ChatGPT, Google Gemini, and DALL-E) are trained to produce new content. For instance, ChatGPT can generate human-like text, DALL-E can create images from textual descriptions, and other tools can produce music or video content.

GenAI tools operate by processing text prompts provided by the user, typically in the form of questions or descriptions, to generate results. Users can refine these results with additional prompts to adjust the tone, style, or format of the generated content.
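
To make the prompt-and-refine interaction above concrete, here is a minimal sketch using the OpenAI Python SDK as one example interface; the model name is an illustrative placeholder, and this is not an endorsement of any particular tool or a KU-specific configuration.

    from openai import OpenAI

    client = OpenAI()  # assumes an API key is already configured in the environment

    # Initial prompt: ask the model for a first draft.
    messages = [{"role": "user", "content": "Draft a two-sentence overview of the KU AI guidelines."}]
    first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)  # placeholder model name
    draft = first.choices[0].message.content

    # Refinement prompt: adjust the tone and format of the generated content.
    messages += [
        {"role": "assistant", "content": draft},
        {"role": "user", "content": "Rewrite that overview in plain language as a bulleted list."},
    ]
    revised = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(revised.choices[0].message.content)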

Applications of GenAI span many fields, including creative arts, education, healthcare, and business, where they can assist in content creation, data analysis, personalized learning, and other tasks. This technology has the potential to transform industries and enhance productivity. However, its adoption at KU presents significant challenges, including privacy and security concerns, academic integrity issues, potential biases, intellectual property risks, ethical implications, and environmental costs. Despite these challenges, KU aims to harness GenAI’s potential while maintaining ethical standards and core values, aligning with its mission to educate leaders, build healthy communities, and make world-changing discoveries.

Ethics and Responsible Use of AI Tools

AI tools should be used in a manner consistent with the University’s academic, teaching, learning, and research missions. Users of GenAI tools developed or deployed at the University of Kansas should therefore adhere to the following principles:

Users should retain agency in their use of GenAI tools, with the ability to overrule, alter, or ignore a tool’s output based on their own judgment. Potential for harm should be anticipated and mitigated to the extent possible.

The use of GenAI tools should align with the values of the University of Kansas. 

The potential for bias in a GenAI tool should be assessed, and efforts should be made to mitigate it. Users should not rely on any output that exhibits bias.

GenAI tools should be used in accordance with applicable data protection laws and existing University policy. This includes but is not limited to student data protected by FERPA (the Family Educational Rights and Privacy Act), medical data protected by HIPAA (the Health Insurance Portability and Accountability Act), and data protected by the GDPR (General Data Protection Regulation).

Users should report issues they encounter with a GenAI tool. Users should also evaluate and verify for accuracy the information generated by a GenAI tool.

GenAI Tools should operate and be used in a manner that is open, understandable, and accountable to all individuals.

There should be evidence that an AI system can perform the task for which it is designed. Rigorous scientific methods should be used during the development of the system. The AI system should be tested throughout its life to ensure the benefits that accrue from its use outweigh any risks or potential harms.


Expectation and Guidance for Using GenAI Tools

All users are responsible for understanding the benefits and opportunities, as well as the risks and limitations associated with the use of GenAI tools. All users should keep the following guidance in mind:

  • The use of GenAI tools is subject to University policies, standards, procedures, guidelines, and regulations; to the faculty, staff, and student manuals; and to codes of conduct. Users must not use GenAI tools for illegal, discriminatory, or defamatory purposes.
  • Users must not use GenAI tools to create or distribute malware or spam.
  • Students should ensure their use of GenAI tools in coursework complies with their instructor’s GenAI policy in their syllabus.
  • Users have the same degree of responsibility for any content they produce using GenAI tools, including any mistakes made by the GenAI tool (e.g., “hallucinations” or factual errors), as they would for content produced without GenAI assistance.
  • Users are responsible for verifying the accuracy of any content created by a GenAI tool.
  • Users should clearly disclose when research content (e.g., analysis, figures, proofs) has been created using GenAI tools (e.g., labeling or citing figures).
  • Users performing research should follow all applicable rules, including but not limited to those of publishers, conferences, funding agencies, and professional organizations, when using GenAI tools.
  • Users are generally not expected to acknowledge the use of GenAI tools for assistance with tasks such as grammatical editing, drafting website copy, and generating boilerplate language for letters, so long as they review the content before use, attribution is not otherwise required, and the use of GenAI tools is not otherwise prohibited. Many concerns around GenAI tools and intellectual property rights remain unresolved. Users should keep in mind that disclosing unpublished research results or findings to such tools could affect the University’s ability to pursue intellectual property rights.
  • Users should be aware that University data entered into an unapproved GenAI tool lacks University privacy protection and is comparable to sharing that data publicly. Because GenAI tools continue to learn by collecting and storing the data they receive, any information shared may later appear in output provided to other users. Accordingly, data should not be entered into any GenAI tool unless 1) the data is classified as Public under the applicable Data Classification Policy, or 2) the use complies with the following rules: 

GenAI Tools may be used on non-public data in the following circumstances:

  • Microsoft Copilot may be used from KU Medical Center or University of Kansas Health System computers only for public, sensitive, confidential, or restricted data, as defined in the KUMC Data Classification Policy, when the user is signed in with their @kumc.edu account and the shield at the top right corner of the page appears, indicating that Enterprise Data Protection applies to the chat.
  • The Office of Research Informatics manages a Databricks instance for researchers that can provide HIPAA compliant GenAI tools.
    • Databricks Foundation Model APIs can be provisioned by Research Informatics to use provisioned throughput models that are compliant. Research Informatics may also provide access to an Azure OpenAI API that is HIPAA compliant (see the usage sketch after this list).
  • Microsoft Copilot may be used with public or sensitive data, as defined in the Data Classification and Handling Policy, when the user is signed in with their @ku.edu account and the shield at the top right corner of the page appears, indicating that Enterprise Data Protection applies to the chat.
  • To use Copilot with confidential data, first consult with departmental Technology Support Staff.
  • Copilot with data protection guarantees that a user’s prompts and Copilot’s responses:
    • Are encrypted with barriers against unauthorized access to data;
    • Have local access administrative settings to match KU IT’s identity, permission, and sensitivity controls; and
    • Are not used to train the foundation or language AI models underlying Copilot or other Microsoft or OpenAI tools.
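
For researchers whose access has been provisioned by Research Informatics, an Azure OpenAI deployment is typically reached through the standard openai Python SDK. The sketch below is illustrative only: the endpoint URL, API version, and deployment name are placeholders supplied by Research Informatics when access is granted, and nothing in the sketch changes the data-handling rules above.

    import os
    from openai import AzureOpenAI

    # All values below are placeholders; Research Informatics supplies the real
    # endpoint, API version, and deployment name when access is provisioned.
    client = AzureOpenAI(
        azure_endpoint="https://<provisioned-resource>.openai.azure.com/",
        api_key=os.environ["AZURE_OPENAI_API_KEY"],  # credential issued with provisioned access
        api_version="2024-02-01",  # example API version; use the one specified for your deployment
    )

    response = client.chat.completions.create(
        model="<provisioned-deployment-name>",  # the deployment assigned to your project
        messages=[{"role": "user", "content": "Summarize the methods section of this study protocol."}],
    )
    print(response.choices[0].message.content)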

For any other use of GenAI tools not covered in this guidance, please reach out to the Chief Information Security Officer for information. 


Additional Guidance for Faculty Members and Instructors

Instructors may choose to teach about or use GenAI tools as appropriate to course, level, and discipline. The following guidelines are suggested for such situations:

  • Communicate expectations to students about coursework and the uses of GenAI tools.
  • Communication may include statements about such use in the class syllabus and reiteration of classroom policy when framing relevant coursework. Remind students about their academic integrity obligations and the practical consequences of academic integrity violations.
  • Use of GenAI plagiarism detection tools is not recommended, as their accuracy is not guaranteed. GenAI plagiarism checkers (e.g., Turnitin and GPTZero) may return false positives and introduce bias against non-native English speakers and students with disabilities.
  • Exercise caution when using GenAI tools in research activities. Review any policies and rules regarding GenAI use provided by the organization funding your research.

The University of Kansas encourages responsible learning, inquiry, and experimentation with GenAI tools. It is essential to remember that GenAI is a tool, and we remain responsible for the outcomes of its use. Technology may enhance our productivity, but it does not replace our judgment or diminish our responsibility.