FAQs: Generative AI Guidelines

Updated August 16, 2024

ATUS and Learning Systems are exploring how generative artificial intelligence tools, such as ChatGPT, are being used in higher education. We are coordinating with relevant offices and entities as we continue to learn more about the implications for teaching, learning, technology, policies, and procedures.


Background Information

  • While AI can be used for everything from chat-based support to voice assistants (e.g., "Hey, Siri" or Alexa), in higher education we are most aware of AI being used to generate content. More specifically, we are concerned with students using AI generation tools to create content that they submit as their own work, and with the tools available to detect that content.
  • According to the Washington State Office of the CIO’s Interim Generative AI Guidelines, “Generative artificial intelligence (AI) is a technology that can create content, including text, images, audio, or video, when prompted by a user. Generative AI systems learn patterns and relationships from massive amounts of data, which enables them to generate new content that may be similar, but not identical, to the underlying training data. The systems generally require a user to submit prompts that guide the generation of new content.” (Adapted slightly from U.S. Government Accountability Office Science and Tech Spotlight: Generative AI)
  • Guidelines for generative AI practices at the state level are currently under review. On August 8, 2023, the Washington State CIO issued Interim Generative AI Guidelines for all state agencies, including higher education, covering how artificial intelligence (generative AI and large language models) should be used by state employees and within state systems.

Whether an instructor chooses to discourage, limit, or embrace AI use in their courses, designing course content with AI in mind and communicating a clear message to students is invaluable. See the following TLCo-op post for ideas on syllabus language, assignment and course design, and more:


AI-Generated Content

  • Faculty lead the effort in determining student violations of academic integrity. In the opinion of ATUS, the instructor needs to state when generative AI tools can or cannot be used. A best practice is to clearly outline AI policies, alongside plagiarism policies, in your course syllabus as well as on relevant assignments.
  • If the instructor has allowed student use of AI, the allowed use should be made explicit within the assignment and/or syllabus, including the extent, methods, citations, and appropriateness of use.
  • If an instructor prohibits student use of AI and a student ignores or subverts this within the course, it may result in a violation of academic integrity and is considered plagiarism (See also Syllabi@WWU).
  • If violations occur, please follow the Ensuring Academic Honesty Policy and Procedure. Questions can be directed to academichonestyboard@wwu.edu
  • AI-generated content used in an official State capacity should include transparency of authorship. This includes citing content that is AI generated or edited, and disclosing details of its review (how the material was reviewed and edited, by whom, and the date the system was used).
  • See the style guide required for the specific discipline/assignment for recommendations. 
  • Students may be required to include the prompt used when using AI generators. Generally, it should identify which parts of the work were generated or summarized, which generator was used, and what prompt was used. It could also include subsequent iterations employed to finalize the work and how it was revised by the student.

Because generative AI creates its material from a model of data, it may not be possible to fully eliminate copyrighted elements from AI-generated content. The Washington State Office of the CIO’s Interim Generative AI Guidelines note that “...state personnel should conduct due diligence to ensure no copyrighted material is published without appropriate attribution or the acquisition of necessary rights. This includes content generated by AI systems, which could inadvertently infringe upon existing copyrights.” Follow guidance, noted above, for citing generated work.

AI systems and the algorithms they use can impart cultural, economic, identity, and social biases from the source materials used as training data. Anyone using AI-generated content should ensure that unintended or undesirable instances of bias, or even potentially offensive or harmful material, are changed or removed (with grateful attribution to City of Seattle, Interim Policy).

AI Detection Tools

  • AI detection was made available to Western through Turnitin's Simcheck product for a limited time in 2023. That function was discontinued by the vendor and is now available only through an advanced product line, leaving Western without an enterprise tool to detect AI writing. ATUS and the CIIA have been evaluating Turnitin's new product line and similar tools. In addition, we continue to work with faculty committees to understand the university's need for AI detection.
    • Note that we still have the Simcheck plagiarism detection tool which reports plagiarism flags to instructors who activate it in assignments in Canvas.
  • If you wish your voice to be heard on this subject, please feel free to write to us at CanvasHelp@wwu.edu. We are sharing this input with the faculty committees involved in these discussions. Instructors can also reach out to their representative on the Academic Technology Committee.
  • In the absence of an enterprise tool for AI detection, instructors may find other tools helpful in their investigation of student work, as long as the tools are used appropriately to protect student personal information. We describe an overall approach to AI detection and some alternative tools in the following page:

When instructors have enabled plagiarism detection in Canvas assignments, students are presented with the following: “I agree to the tool's End-User License Agreement. This submission is my own, unique writing and work. I understand that if I use ideas or words that are not my own, without giving appropriate credit/citations, I could be reported for violating the Academic Honesty Policy.” Students must check the box next to this statement in order to submit their work. AI generated content is not considered "unique writing and work" and will be considered plagiarism if not allowed and cited in a course.

Academic Technology & User Services (ATUS) and the Center for Instructional Innovation & Assessment (CIIA) recommend a multiple-measures approach when evaluating student work. Educators are the closest to their instructional content and their students. Evaluation, therefore, should begin with instructors using their own best judgment about the student's writing.

First, review writing evaluation methods. See: Evaluating Student Work When AI Is Suspected.

Important Notes

  • Do not share personally identifying information: Use caution when choosing a third-party AI detection tool to avoid disclosing any protected information. Work submitted to these tools should be fully anonymized, per the State's Interim Generative AI Guidelines.
  • Work flagged by AI detection tools is not definitively AI-generated. Due to the rapidly changing nature of generative AI, there is no foolproof method of detecting AI plagiarism.
  • Always follow your research with a non-accusatory conversation with your student. Non-native English speakers and beginning writers might use simple sentence structures, or phrasing and tone that differ from standard English. This can cause AI detection tools to mistake their writing for an attempt to use AI writing tools.

For information on Canvas and third-party AI detection tools, see "Using Third-Party AI-Detection Tools to Analyze Student Work" on this page: Evaluating Student Work When AI Is Suspected.  

Support at WWU

  • Student works, as part of their educational record, are protected by FERPA privacy laws as well as confidentiality laws. In Canvas assignments where plagiarism/AI detection are implemented, students are presented with an opportunity to agree to terms (see “What do students agree to” above). Protections are in place with contracted products such as Canvas and TurnItIn when they have agreed to protect WWU FERPA data and WWU has designated them as “school officials.” As long as the student work is fully anonymized, selections from the student’s submitted work can be put into other available AI-detection tools for further evaluation.
  • Note that instructors should not submit sensitive, confidential, or personally identifiable data about students to a generative AI system.

Copyright law states that students hold the copyright to works created and submitted as their academic work. Thus, without express written consent, student works cannot be used for research, publication, etc. While the use of student works for AI detection does not infringe upon their copyright, it is worth noting that work submitted to these tools should be fully anonymized. ATUS recommends beginning an evaluation using the plagiarism detection tool built into Canvas assignments.

  • Because of the evolving nature of AI and AI detection, critics have raised concerns about improper AI detection for neurodiverse students or students who write in non-standard English. If a student's work has been inappropriately flagged for AI usage and you are unsure whether the student has an accommodation through the DAC, please contact the Disability Access Center.
  • If you are a student who has received an academic honesty violation for using AI that you believe is in error, please follow the appeal process for students: Academic Honesty - Students. See also Academic Uses of AI for Students.

Employee Use of AI


If an employee elects to use generative AI in their work, writing, image processing, or data analysis, we suggest that they exercise certain practices and precautions. According to the August 2023 interim guidelines published by the CIO of the State of Washington, the “use of AI should support the state’s work in delivering better and more equitable services and outcomes to its residents.”

Prior to selecting an AI for a project or task, consult with your department leadership and colleagues.

When creating content such as a department webpage, ask “how,” “what,” and “why” when deciding whether to use AI. How will AI more effectively produce the desired outcome of the project? What kind of data will be used to generate the content? And why would AI be a better choice for creating this content? These three questions can help you decide which AI to use, as well as determine whether AI will produce a better output for the project. AI does not always produce the best results, and even though it is seen as a way to save time, researching and evaluating the generated content can sometimes be more time-consuming.

Never use sensitive or personally-identifiable information (PII, educational records, or pre-published research). For more information about PII, see the FERPA Toolkit - WWU Teaching Handbook.

We suggest you research and cross-check generated content and consider the following:

  • Think critically: Who is being represented? Is it primarily privileged demographics? Are there common stereotypes? In other words, if you were asked to picture someone for a specific role, is the person AI is generating what you would immediately picture (for example, does it generate “a construction worker” as a white, masculine, able-bodied man)?
    • Fix: Use specificity in your prompts to generate more diverse outputs, such as by specifying aspects of the desired person that push away from stereotypes.
  • Check for credibility: Some AI systems hallucinate sources when providing citations (automatically or when prompted), citing sources that don’t exist or are fake.
    • Fix: Evaluate your sources: check whether the author and site are real, whether the piece is satire, and whether the information is accurate.
  • Read laterally: If the AI doesn’t provide sources to check, pick out its arguments and assumptions and look for other credible articles on these topics to see if the information matches.

See also:

AI and Information Literacy: Assessing Content (Texas A&M University, Dec. 2023)

Consider using AI systems that can ground their generative output in a collection of documents. These systems allow you to upload documents and websites for the AI to reference before relying on its pre-trained data, which can reduce hallucinations and produce more targeted, accurate responses.
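To illustrate the idea (this is a generic sketch, not a description of any WWU-supported product): such systems first retrieve the passage from your uploaded collection that best matches the question, then include it in the prompt so the model answers from your sources before falling back on its training data. The retrieval step below uses naive keyword overlap for simplicity; real systems use more sophisticated matching, but the overall flow is the same. The resulting prompt would then be sent to whichever generator is actually in use.

```python
# Sketch of document-grounded ("retrieval-augmented") generation.
# Scoring here is simple keyword overlap, purely for illustration.

def score(question: str, passage: str) -> int:
    """Count question words that also appear in the passage."""
    q_words = set(question.lower().split())
    p_words = set(passage.lower().split())
    return len(q_words & p_words)

def retrieve(question: str, passages: list[str]) -> str:
    """Pick the passage most relevant to the question."""
    return max(passages, key=lambda p: score(question, p))

def build_prompt(question: str, passages: list[str]) -> str:
    """Ground the prompt in the best-matching local document."""
    context = retrieve(question, passages)
    return (f"Answer using only this source:\n{context}\n\n"
            f"Question: {question}")

docs = [
    "Simcheck reports plagiarism flags in Canvas assignments.",
    "FERPA protects student educational records from disclosure.",
]
prompt = build_prompt("What does FERPA protect?", docs)
```

Because the generator is instructed to answer from the supplied passage, its output stays closer to your own material than to whatever its training data happens to contain.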


Many AI generators save threads, but only for a set amount of time. 

When using generative image creation, consider saving the prompt (if it is not too long) as the file name. The Adobe Firefly image generator does this by default.

Consider creating a folder named after the AI image generator used to create the image.
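The two record-keeping tips above can be combined in a small script. This is a hypothetical sketch, not a tool WWU provides: it trims the prompt to a filesystem-safe name and files the image under a folder named for the generator.

```python
# Sketch: save a generated image as <generator>/<prompt>.png,
# sanitizing the prompt so it is safe to use as a file name.
import re
from pathlib import Path

def prompt_to_filename(text: str, max_len: int = 60) -> str:
    """Keep letters, digits, spaces, and hyphens; trim length."""
    safe = re.sub(r"[^A-Za-z0-9 -]", "", text).strip()
    return safe[:max_len].rstrip() or "untitled"

def save_path(generator: str, prompt: str, ext: str = "png") -> Path:
    """Build <generator>/<sanitized prompt>.<ext>; create the folder."""
    folder = Path(prompt_to_filename(generator))
    folder.mkdir(exist_ok=True)  # folder named after the generator
    return folder / f"{prompt_to_filename(prompt)}.{ext}"

path = save_path("Adobe Firefly", "a lighthouse at dawn, watercolor")
# path.name == "a lighthouse at dawn watercolor.png"
```

Truncating and sanitizing the prompt matters because very long prompts, or prompts containing punctuation, can produce invalid file names on some operating systems.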

Note: the use of generative AI as a Washington state employee may result in creating a public record.

For a generative AI product comparison and other details pertaining to student use of AI, see: Academic Uses of AI for Students.