
Artificial Intelligence

This subject guide provides information, suggested readings, and resources on the topic of artificial intelligence.

AI Use Policies in Higher Education

University policies and policy recommendations regarding the use of AI. 

Washburn University does not have an institutional policy specifically addressing artificial intelligence; however, Section 7 of the Faculty Handbook bears on claiming authorship of AI-generated content:

C. Academic Improprieties 
An academic impropriety is any student action that undermines, or could reasonably be interpreted as undermining, the presumption that the academic work being produced or submitted by a student is his or her own…


[Academic Irregularities, B.]

(ii) Failure to acknowledge the incorporation of another person's work into one's own, including the failure to properly identify as such material that is being paraphrased or quoted.

(iii) Failure to document properly all works consulted, paraphrased, or quoted.


Individual instructors or departments may have specific policies as part of their syllabi or student expectations materials.

Always seek clarification from instructors regarding acceptable use of AI tools. 


Washburn University School of Law Generative Artificial Intelligence (AI) Interim Policy



Ethical Artificial Intelligence

In 2021, the United Nations Educational, Scientific and Cultural Organization (UNESCO) produced the first global standard for AI ethics, the Recommendation on the Ethics of Artificial Intelligence. (See also Key Facts for abbreviated policy actions.)

 


UNESCO's 10 Principles of Human Rights-Centered AI Ethics

1. Proportionality and Do No Harm

The use of AI systems must not go beyond what is necessary to achieve a legitimate aim. Risk assessment should be used to prevent harms that may result from such uses.

2. Safety and Security

Unwanted harms (safety risks) as well as vulnerabilities to attack (security risks) should be avoided and addressed by AI actors.

3. Right to Privacy and Data Protection

Privacy must be protected and promoted throughout the AI lifecycle. Adequate data protection frameworks should also be established.

4. Multi-stakeholder and Adaptive Governance & Collaboration

International law & national sovereignty must be respected in the use of data. Additionally, participation of diverse stakeholders is necessary for inclusive approaches to AI governance.

5. Responsibility and Accountability

AI systems should be auditable and traceable. There should be oversight, impact assessment, audit and due diligence mechanisms in place to avoid conflicts with human rights norms and threats to environmental wellbeing.

6. Transparency and Explainability

The ethical deployment of AI systems depends on their transparency & explainability (T&E). The level of T&E should be appropriate to the context, as there may be tensions between T&E and other principles such as privacy, safety and security.

7. Human Oversight and Determination

Member States should ensure that AI systems do not displace ultimate human responsibility and accountability.

8. Sustainability

AI technologies should be assessed against their impacts on ‘sustainability’, understood as a set of constantly evolving goals including those set out in the UN’s Sustainable Development Goals.

9. Awareness & Literacy

Public understanding of AI and data should be promoted through open & accessible education, civic engagement, digital skills & AI ethics training, media & information literacy.

10. Fairness and Non-Discrimination

AI actors should promote social justice, fairness, and non-discrimination while taking an inclusive approach to ensure AI’s benefits are accessible to all.

AI Ethics in the News
