Responsible AI


Guiding Principles

Artificial Intelligence (AI) should be used in ways that support the University's missions of teaching and learning, research, and service. The University's approach to AI is grounded in principles that emphasize fairness and responsible use.

  • Human Centeredness: AI can help or even act on our behalf, but people stay in charge. Human review helps ensure AI benefits people and prevents harm.
  • Value Alignment: AI use should reflect and advance the values and mission of the University.
  • Equitability: AI may reflect or amplify bias. Bias should be identified and mitigated to avoid unfair outcomes, and biased results should not be relied upon. 
  • Privacy, Security & Safety: AI use should protect people, data, and systems. Follow university policies, use approved tools, and safeguard sensitive data.
  • Accountability: Responsibility for outcomes rests with people. Report concerns or issues and verify important outputs for accuracy.
  • Transparency: AI use should be open and understandable to those affected.
  • Validity: Testing and reliability are critical. AI should be evidence-based, dependable, and appropriate for its intended use over time.

Policies & Requirements

The use of artificial intelligence at KU and KUMC is governed by existing university policies, many of which are technology-neutral and apply across different tools and platforms. These policies apply based on the activity and data involved, rather than the specific technology used, and may address areas such as data privacy, security, research integrity, human subjects research, academic conduct, accessibility, records management, procurement, and acceptable use of university systems.

For official policies, see:

Responsible AI: Risk and Advisory Bodies

Several university bodies support responsible AI use at KU, providing strategic coordination, advisory guidance, and risk-focused oversight, including:

The KU AI Taskforce was established to provide initial strategic guidance and coordination for the responsible use of artificial intelligence across academics, administration, and research at the University of Kansas and the University of Kansas Medical Center.

The Taskforce has completed its charge and concluded its work. Key outcomes included:

  • Establishing foundational principles for responsible and ethical AI use at KU 
  • Developing initial guidance for faculty, staff, and students on appropriate AI use 
  • Identifying institutional risks and opportunities associated with emerging AI technologies 
  • Informing early policy development and alignment with legal, privacy, and compliance obligations 
  • Supporting cross-campus coordination and shared understanding of AI impacts

With these foundations in place, KU has transitioned from exploratory governance to a more focused, risk-based oversight model that supports responsible AI use while enabling research breakthroughs, innovation, and excellence in teaching and learning.

The Taskforce’s work was informed by faculty and staff from across the university.

Taskforce Committee Members

The Council’s work is informed by the NIST Artificial Intelligence Risk Management Framework (AI RMF) and emphasizes a practical, risk-based approach to AI use that supports KU’s academic, research, and operational missions while protecting individuals and institutional interests.

Council scope and responsibilities include:

  • Assessing privacy, security, and compliance risks associated with AI systems and use cases 
  • Providing advisory guidance on high-risk or novel AI applications 
  • Supporting consistent, risk-based decision making across KU and KUMC 
  • Aligning institutional AI practices with applicable law, policy, and recognized standards 
  • Maintaining and refining AI risk assessment processes and guidance 

The Council does not approve individual research projects or replace existing academic, research, or IT governance bodies.

The KUMC AI Steering Committee provides strategic leadership and coordination for the responsible use of artificial intelligence across academics, administration, and research at the University of Kansas Medical Center.

The Committee supports AI innovation while emphasizing ethical use, institutional alignment, and responsible implementation. It advises on AI strategy, policy development, and resource needs, and serves as the primary coordinating body for AI initiatives at KUMC.

The Committee’s work is supported operationally by the KUMC Artificial Intelligence Resource Center, which provides AI expertise, tools, and research support.