IT 160 – Safe & Ethical Adoption of Artificial Intelligence (AI)

Number
IT 160
Department
Finance & Administration
Effective
Last Revision
Last Reviewed

Purpose

College of Western Idaho (CWI) is committed to leading Idaho in the safe, ethical, and innovative use of artificial intelligence. By adopting AI responsibly, CWI aims to enhance teaching and learning, improve institutional efficiency, strengthen services for students and the community, and help prepare Idaho’s workforce for the future. 

CWI’s goal in adopting AI is not to replace people, but to empower faculty, staff, and students with tools that augment human work, improve access to information, and support high-quality educational experiences. 

This policy establishes expectations, responsibilities, and governance structures for the responsible use of AI technologies at CWI. 

Scope

This policy applies to: 

  • All CWI employees, including full-time, part-time, and adjunct employees, as well as contractors and temporary staff.
  • All AI-related activities, systems, and tools used for college business, instruction, research, operations, or services.
  • Any AI tools or systems used in student-facing or community-facing contexts. 

Student use of AI for academic work is governed by the Academic Integrity Policy and instructor or departmental guidelines. 
 

Definitions

Artificial Intelligence (AI) Tools: Software, platforms, or systems that use machine learning, natural language processing, predictive analytics, or similar techniques to analyze data, generate content, or automate tasks. 

AI-Assisted Work: Any work, content, communication, or decision produced with the assistance of an AI tool. 

High-Risk AI Use: Any use of AI tools involving restricted or confidential data, as defined in IT Policy 150, or any use that significantly affects individuals’ access to services, academic outcomes, or employment decisions. 

Scaled or Student-Facing AI Use: AI tools deployed broadly across multiple departments or used directly by students or community members (e.g., chatbots, tutoring systems, automated communication tools).

Policy

Responsible Use & Training 

CWI employees are expected to: 

  • Use AI tools ethically and safely in alignment with College policies.
  • Complete annual training on AI and emerging technology.
  • Ensure that AI-assisted work remains accurate, appropriate, and compliant with institutional policies, standards, and legal requirements. 

Employees retain full responsibility for the quality and trustworthiness of any work produced with AI assistance.

Accountability, Oversight, & Transparency

The following standards apply to all AI use:

Responsibility for Output

Employees are responsible for reviewing, verifying, and validating AI-generated content or decisions before using or distributing them.

Oversight Plans for High-Risk & Scaled Use

Departments implementing scaled, student-facing, or high-risk AI tools must document an oversight and risk-management plan and share it with the AI Committee. Plans must address: 

  • Intended purpose and scope
  • Data classification and handling
  • Human oversight processes
  • Accuracy, bias, or accessibility concerns
  • Evaluation and monitoring procedures

The AI Committee serves in an advisory role only.

Notification & Transparency 

Users must be clearly informed when:

  • They interact with an AI system (e.g., chatbot).
  • They are being recorded or monitored by an AI-enabled feature (e.g., meeting capture tools), when such notification is feasible.

Human Escalation Option

When AI is used to provide services, such as chatbots, virtual assistants, or automated messaging, users must be provided with clear, accessible instructions for contacting a human representative at any time.

Disclosure of AI-Generated Content

Content or decisions generated mainly or entirely by AI must be labeled or disclosed as such. Deans and department leaders are responsible for establishing discipline-specific disclosure standards within their areas.

Prohibited Uses

The following uses of AI are prohibited:

  • Inputting restricted or confidential data into unapproved or consumer-grade AI tools.
  • Allowing AI systems to make final decisions that significantly affect individuals (e.g., academic standing, access to services, employment decisions) without meaningful human review.
  • Deploying AI tools that misrepresent themselves as human.
  • Using AI in ways that violate privacy laws, accessibility requirements, intellectual property rights, or College policies.
  • Using AI tools in discriminatory, biased, or harmful ways.

Governance & Authority

AI governance and decision-making occur through existing College structures: 

  • Information Technology (IT): Approves AI tools; enforces data security, privacy, and technical requirements.
  • Department Leads and Supervisors: Monitor AI use within their units; ensure employee compliance with this policy.
  • Deans and Executive Leadership: Provide oversight for student-facing, institution-wide, high-risk, or high-impact AI use.
  • Human Resources and Supervisory Processes: Enforce compliance with this policy using existing corrective or disciplinary procedures.
  • AI Committee: The AI Committee is chartered through the President’s Cabinet and serves in an advisory, research, and coordinating capacity. This Committee provides strategic coordination and support for AI adoption across the institution, but does not approve, deny, or enforce AI use.