FAQs: Generative AI in Teaching, Learning, & Technology Guidelines

ATUS and Learning Systems are exploring how generative artificial intelligence tools, such as ChatGPT, are being used in higher education. We are coordinating with relevant offices and entities as we continue to learn more about the implications for teaching, learning, technology, policies, and procedures.

Updated 9/14/23: Check this FAQ often as we will continue to update it with what we know. This information is subject to change.

Background Information

  • While AI can be used for everything from chat-based support to talking with devices (e.g., "Hey, Siri" or Alexa), in higher education we are most aware of AI being used to generate content. More specifically, we are concerned with students using generative AI tools to create content that they submit as their own work, and with the tools available to detect that content.
  • According to the Washington State Office of the CIO’s Interim Generative AI Guidelines, “Generative artificial intelligence (AI) is a technology that can create content, including text, images, audio, or video, when prompted by a user. Generative AI systems learn patterns and relationships from massive amounts of data, which enables them to generate new content that may be similar, but not identical, to the underlying training data. The systems generally require a user to submit prompts that guide the generation of new content.” (Adapted slightly from U.S. Government Accountability Office Science and Tech Spotlight: Generative AI) A brief illustration of prompting appears after this list.
  • Guidelines for generative AI practices at the state level are currently under review. On August 8, 2023, the Washington State CIO issued Interim Generative AI Guidelines for all state agencies, including higher education, describing how generative AI and large language model (LLM) tools should be used by state employees and within state systems.
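For readers unfamiliar with how prompting works, the minimal sketch below sends a prompt to a publicly available generative AI service and prints the generated text. This is an illustration only, not a WWU-supported workflow: it assumes the third-party openai Python package, an API key, and the example model name gpt-4o-mini.

```python
# Minimal sketch of prompting a generative AI system (illustration only).
# Assumes the third-party "openai" package and an API key in the
# OPENAI_API_KEY environment variable; not a WWU-supported tool.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; any available model works
    messages=[
        {
            "role": "user",
            "content": "Summarize the causes of the 1929 stock market crash in 100 words.",
        }
    ],
)

# The service returns newly generated text shaped by the prompt rather than
# a passage retrieved verbatim from its training data.
print(response.choices[0].message.content)
```

The same pattern underlies chat interfaces such as ChatGPT: a prompt goes in, and new text comes out that was not copied from a fixed source, which is part of what makes detection difficult.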

Whether an instructor chooses to discourage, limit, or embrace AI use in their courses, designing course content with AI in mind and communicating a clear message to students is invaluable. See the TLCo-op post for ideas on syllabus language, assignment and course design, and more.

AI-Generated Content

  • Faculty lead the effort in determining student violations of academic integrity. In the opinion of ATUS, the instructor needs to state when generative AI tools may and may not be used. A best practice is to clearly outline AI policies, alongside plagiarism policies, in your course syllabus as well as on relevant assignments.
  • If the instructor has allowed student use of AI, the allowed use should be made explicit within the assignment and/or syllabus, including the extent, methods, citations, and appropriateness of use.
  • If an instructor prohibits student use of AI and a student ignores or subverts this prohibition within the course, doing so may constitute a violation of academic integrity and be treated as plagiarism (see also Syllabi@WWU).
  • If violations occur, please follow the Ensuring Academic Honesty Policy and Procedure. Questions can be directed to academichonestyboard@wwu.edu.

Because generative AI creates its material from a model of data, it may not be possible to fully eliminate copyrighted elements from AI-generated content. The Washington State Office of the CIO’s Interim Generative AI Guidelines note that “...state personnel should conduct due diligence to ensure no copyrighted material is published without appropriate attribution or the acquisition of necessary rights. This includes content generated by AI systems, which could inadvertently infringe upon existing copyrights.”

AI systems and the algorithms they use can impart cultural, economic, identity, and social biases from the source materials used as training data. Anyone using AI-generated content should ensure that unintended or undesirable instances of bias, or even potentially offensive or harmful material, are changed or removed (with grateful attribution to the City of Seattle’s Interim Policy).

AI Detection Tools

  • Through 12/31/23, WWU’s plagiarism-detection tool is Simcheck by TurnItIn, integrated with the Canvas learning management system. This tool includes AI detection and reports both plagiarism and AI usage to instructors who activate it on assignments in Canvas. When it is enabled on assignments with the Submission Type set to "Text Entry" or "File Uploads," instructors can view the plagiarism report in the gradebook and click through to the AI detection report for each student; one way to check submission types across a course is sketched after this list. Please note: the Simcheck plagiarism-detection tool is not available for Discussions, Quizzes, or Surveys.
  • As with plagiarism detection reports, no single product is sufficient to determine that a submission is 100% free of AI-generated content. See Simcheck AI Writing Detection.
  • To continue this function, ATUS is evaluating an upgrade to Originality by TurnItIn for use in 2024.
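For instructors or support staff who manage many Canvas assignments, the sketch below is one hypothetical way to list which assignments in a course use the submission types the plagiarism/AI report can cover (the Canvas API names for "Text Entry" and "File Uploads" are online_text_entry and online_upload). It uses the public Canvas REST API; the institution URL, course ID, and access token shown are placeholders, and enabling the tool itself is still done in each assignment's settings in the Canvas interface.

```python
# Sketch: flag Canvas assignments whose submission types can carry a
# plagiarism/AI report (online text entry or file upload).
# The URL, course ID, and token below are placeholders.
import requests

CANVAS_URL = "https://<your-institution>.instructure.com"
COURSE_ID = 12345
TOKEN = "YOUR_CANVAS_ACCESS_TOKEN"

ELIGIBLE_TYPES = {"online_text_entry", "online_upload"}  # Text Entry / File Uploads

resp = requests.get(
    f"{CANVAS_URL}/api/v1/courses/{COURSE_ID}/assignments",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"per_page": 100},
)
resp.raise_for_status()

for assignment in resp.json():
    types = set(assignment.get("submission_types", []))
    status = "report possible" if types & ELIGIBLE_TYPES else "not covered (e.g., quiz or discussion)"
    print(f"{assignment['name']}: {status}")
```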

When instructors have enabled plagiarism/AI detection in Canvas assignments, students are presented with the following: “I agree to the tool's End-User License Agreement. This submission is my own, unique writing and work. I understand that if I use ideas or words that are not my own, without giving appropriate credit/citations, I could be reported for violating the Academic Honesty Policy.” Students must check the box next to this statement in order to submit their work.

  • Caution should be used when choosing a freely available AI detection tool (such as GPTZero) or using other non-WWU-authorized, subscription-based AI detection tools. To avoid disclosing any protected information, work submitted to these tools should be fully anonymized (see the sketch after this list for one illustration of redacting identifiers).
  • ATUS recommends beginning an evaluation with the plagiarism detection tool built into Canvas assignments. Then, as long as the student work is fully anonymized, selections from the student’s submitted work could secondarily be put into other available AI-detection tools. Using several tools can provide a fuller picture of the submitted work and identify key areas that may need further scrutiny or clarification from the student.
  • Student works, as part of the student’s educational record, are protected by FERPA privacy laws as well as confidentiality laws. In Canvas assignments where plagiarism/AI detection is implemented, students are presented with an opportunity to agree to terms (see “What do students agree to” above). Protections are in place with contracted products such as Canvas and TurnItIn because they have agreed to protect WWU FERPA data and WWU has designated them as “school officials.”
  • Note that instructors should not submit sensitive, confidential, or personally identifiable data about students to a generative AI system.
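As one illustration of what "fully anonymized" can mean in practice, the hypothetical helper below strips common identifiers (e-mail addresses, ID-like strings such as a letter followed by a run of digits, and names supplied by the instructor) from an excerpt before it is pasted into a third-party detection tool. The patterns are examples only and will not catch every identifier; the redacted text should still be reviewed by hand before it leaves WWU systems.

```python
# Hypothetical helper: redact common student identifiers from an excerpt
# before submitting it to a third-party AI-detection tool.
# Patterns are illustrative; always review the output manually.
import re


def redact(text: str, known_names: list[str]) -> str:
    """Replace e-mail addresses, ID-like strings, and known names."""
    # E-mail addresses (e.g., student@wwu.edu)
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    # ID-like strings: an optional letter followed by 7 or more digits
    text = re.sub(r"\b[A-Za-z]?\d{7,}\b", "[ID]", text)
    # Names the instructor knows appear in the submission
    for name in known_names:
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
    return text


excerpt = "Jane Doe (W01234567, jane.doe@wwu.edu) argues that..."
print(redact(excerpt, known_names=["Jane Doe"]))
# -> "[NAME] ([ID], [EMAIL]) argues that..."
```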

Copyright law provides that students hold the copyright to works they create and submit as their academic work. Thus, without express written consent, student works cannot be used for research, publication, etc. While the use of student works for AI detection does not infringe upon their copyright, work submitted to these tools should still be fully anonymized. As noted above, ATUS recommends beginning any evaluation with the plagiarism detection tool built into Canvas assignments.

  • Because of the evolving nature of AI and AI detection, critics of AI detection have raised concerns that detection tools may improperly flag work by neurodiverse students or students who write in non-standard English. If a student's work has been inappropriately flagged for AI usage and you are unsure whether the student has an accommodation through the DAC, please contact the Disability Access Center.
  • If you are a student who has received an academic honesty violation for using AI and you believe it is in error, please follow the appeal process for students: Academic Honesty - Students.