Office of Teaching Evaluation and Assessment Research


TEACHING CRITICAL AI LITERACY:

Advice for the New Semester

Prepared by Lauren M. E. Goodlad and Sharon Stoerger in collaboration with the AI Roundtable Advisory Council and with support from the Office of Teaching Evaluation and Assessment Research. A PDF version of this webpage is also available for your convenience.

This webpage offers a provisional set of resources to help instructors make informed decisions and equip themselves for productive discussions as they prepare for a new semester. We provide

  1. a brief introduction to “artificial intelligence,”
  2. a discussion of critical AI literacy for educators and students,
  3. a discussion of the implications of “generative AI” tools for academic integrity,
  4. suggestions for updating syllabi, and
  5. a short list of potential resources.

“AI” is a complicated subject with many far-reaching implications: we have sought to strike a balance between brevity and comprehensiveness.

We invite you to reach out to the advisory council’s members with questions or for additional information.

1. What is Artificial Intelligence (AI)?

Artificial Intelligence (AI) has become a common term for an emerging set of computer technologies that affect individuals, communities, societies, and the environment at a global scale. Although the term “AI” was coined in the 1950s, the field has undergone multiple transformations and, until recently, was familiar to the general public largely as a theme of science fiction.

AI research returned to public discussion in the 2010s when a number of innovations in “deep learning” became possible—largely because of the availability of human-generated data on the internet and through networked devices at an unprecedented scale. At around the same time, these technologies began to power widespread applications including voice assistants, recommendation systems, and automated driver assistance. When technologists speak of “deep learning” (DL), which is a type of “machine learning” (ML), the learning in question denotes a computer model’s ability to “optimize” for useful predictions while “training” on data through updating the weights in an elaborate set of statistical calculations. The learning is deep because of the multiple computational layers in the very large models that DL involves.
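To make “learning as weight optimization” concrete, here is a minimal sketch in Python (our own toy illustration, not production code): a model with a single weight repeatedly adjusts that weight to reduce its prediction error on a handful of data points. Deep learning systems adjust billions of weights across many layers, but the underlying principle of iteratively reducing error is the same.

```python
# Toy illustration of machine "learning": nudge a single weight until the
# model's predictions match the pattern in the data (here, y = 2x).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs
w = 0.0             # the model's one "weight," initialized arbitrarily
learning_rate = 0.05

for step in range(100):            # the "training" loop
    for x, y in data:
        prediction = w * x         # the model's guess
        error = prediction - y     # how far off the guess is
        gradient = 2 * error * x   # direction that reduces squared error
        w -= learning_rate * gradient  # update the weight

print(f"learned weight: {w:.3f}")  # approaches 2.0, the pattern in the data
```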

The most heavily promoted forms of “AI” today are large language models (LLMs) such as OpenAI’s ChatGPT, Google’s Bard, and Anthropic’s Claude 2.[1] Meta’s latest language model, LLaMA 2, is open source, as is BLOOM, an LLM created through a collaboration among more than 1,000 researchers. All of these systems are multi-layered (“deep”) statistical models that predict probable word sequences in response to a prompt even though they do not “understand” language in any human-like sense. Through the intensive mining, modeling, and memorization of vast stores of data “scraped” from the internet, reinforced by the labor of vast numbers of human gig workers, text generators synthesize a few paragraphs at a time that resemble writing authored by humans. This machine-generated text is not directly “plagiarized” from some original, and it is usually grammatically and syntactically well-crafted.
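To see what “predicting probable word sequences” means in practice, consider the following toy sketch (our own illustration, not code from any actual LLM): it builds a tiny “bigram” model by counting which word follows which in a miniature corpus, then generates text by repeatedly choosing the most probable next word. Real LLMs use neural networks trained on vastly larger corpora, but their outputs are likewise driven by statistical likelihood rather than comprehension.

```python
# A toy "bigram" language model: count which word most often follows each
# word in a tiny corpus, then generate text by always emitting the most
# probable next word. (Illustrative sketch only.)
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
next_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    next_counts[current_word][next_word] += 1

# Generate a short continuation from a one-word prompt.
word = "the"
output = [word]
for _ in range(5):
    word = next_counts[word].most_common(1)[0][0]  # most probable next word
    output.append(word)

print(" ".join(output))  # "the cat sat on the cat"
```

The output looks fluent, yet it is produced entirely by frequency counts: the program has no notion of what a “cat” or a “mat” is.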

However, machine-generated content is often factually incorrect. Moreover, when users prompt an LLM to provide sources for information, the cited sources may be wrong or completely fabricated. Likewise, while chatbots are sometimes marketed as question-answering systems (like Apple’s Siri) and as effective replacements for search engines, LLMs are pre-trained models that do not search the web.[2] The novelist and technologist Ted Chiang has likened a chatbot’s compressed model of its training data to a “blurry JPEG.” A related limitation to consider is that data models are expensive to retrain: for example, although OpenAI’s GPT-4 was released in March 2023, its original training involved textual data gathered through 2021. This means that an LLM’s training data may not include the most recent developments or insights in various fields.

The training data for LLMs is enormous, constituting most of the “scrapable” internet; as models have grown successively larger, their predictive capacities have extended beyond the generation of human-like text. For example, they can now answer some questions in basic math (though they still make errors on simple tasks such as three-digit multiplication), and, in the hands of knowledgeable programmers, they can aid in the writing of computer code. Large image models (LIMs), which are trained on visual images scraped from the internet, are text-to-image generators that respond to a user’s textual prompt with a visual output predicted to correspond to it. Because these commercial models are proprietary, researchers have no means of determining precisely what training data was involved, what tasks were prioritized for human reinforcement (and under what working conditions), or how much energy and water is required for training and use.

These data-driven technologies are now collectively designated as “generative AI,” and their impressive affordances are sometimes talked up as if they were both “god-like” and likely to lead to imminent catastrophe.[3] This peculiar tension between the need to regulate new technologies in order to reduce their harms and the tendency of some prominent voices to emphasize so-called existential risks at the expense of actually existing harms is one of several ways in which “hype” about chatbots and other predictive “AI” systems complicates the education of students and the public at large. At Rutgers, the Critical AI @ Rutgers initiative is among many groups, national and international, which hold that promoting critical AI literacy is the best possible answer to a dynamic landscape.

2. What is Critical AI Literacy?

Chatbots and other modes of generative AI are controversial because of a number of problems that may be hard (or impossible) to eradicate: these include embedded biases, lack of transparency, built-in surveillance, environmental footprint, and more. Teaching critical AI literacy thus includes helping students to learn about the existing and potential harms of these tools, whether instructors use them in their teaching or not. To be sure, acquiring in-depth literacy takes time for both educators and students. In the best possible case, students and instructors will learn from each other as they discuss common concerns and experiences.[4]

Chief among our concerns are the actually existing harms of chatbots and other “generative” tools, and of the systems and practices on which they depend.

In an educational setting, chatbots also create particular challenges for academic integrity, a subject to which we now turn.

3. Implications for Academic Integrity

The code of conduct at Rutgers states that students must ensure “that all work submitted in a course, academic research, or other activity is the student’s own and created without the aid of impermissible technologies, materials, or collaborations.” For policies on “generative” tools, this language puts special emphasis on the identification of permissible technologies and on the question of whether a given tool impedes the learning goals of the course (including the submission of suitable work that is “the student’s own”).

At Rutgers, learning goals vary widely across and within schools, disciplines, majors, and levels of difficulty.


Of course, students will have their own views on the topic.

The good news is that all of these situations offer opportunities to teach and enhance critical AI literacy, whether AI tools are used or not.

Several points are, however, worth emphasizing as you gear up for the new semester.

4. Suggestions for Updating Your Syllabus to Clarify Course Policies and Learning Goals

Whether an instructor wishes to build in the use of chatbots for certain assignments, allow students to experiment with them as they wish, or prohibit their use, we recommend clarifying these policies on syllabi and discussing them with students. Explain how you reached a decision that comports with the learning goals for the course. Consider discussing how chatbots work and the various problems described on this webpage (see potential resources below, including this recorded lecture by computational linguist Emily M. Bender). Members of Rutgers’ advisory council are available to offer specific suggestions for teaching your students critical AI literacy in line with any of the syllabus suggestions below.

I. For instructors who do not want students to use AI tools for their course

When specifying on one’s syllabus that the use of chatbots and other AI tools is not permissible, instructors should be as clear as possible and may wish to refer to the Rutgers code of conduct, cited above, in doing so. Given that AI tools are (or may soon be) incorporated seamlessly into platforms such as Google Docs, grammar-checking tools such as Grammarly, and software suites such as Microsoft Office, a clear and specific statement is the best possible way to communicate with your students. In addition, you may wish to ask students to submit a statement of academic integrity along with their assignments.

For example,

In concert with Rutgers’ code of conduct, which mandates “that all work submitted in a course, academic research, or other activity is the student’s own and created without the aid of impermissible technologies, materials, or collaborations,” this course has been designed to promote your learning, critical thinking, skills, and intellectual development without reliance on unauthorized technology, including chatbots and other forms of “artificial intelligence” (AI). Although you may use search engines, spell-check, and simple grammar-check in crafting your assignments, you will be asked to submit your written work with the following statement: “I certify that this assignment represents my own work. I have not used any unauthorized or unacknowledged assistance or sources in completing it, including free or commercial systems or services offered on the internet or text-generating systems embedded into software.” Please consult with your instructor if you have any questions about the permissible use of technology in this class.

Below is some alternative or additional language for syllabi, developed at the University of Toronto.

II. For instructors who wish to permit use of AI tools in particular circumstances

When specifying on one’s syllabus that the use of chatbots and other AI tools is permissible in certain circumstances, instructors should be as clear as possible and may wish to refer to the Rutgers code of conduct, cited above, in doing so. Bear in mind that students may be using these tools for different purposes in different classes, so it is important to be specific in describing the particular usages you allow or encourage. Given that AI tools are (or may soon be) incorporated seamlessly into platforms such as Google Docs, grammar-checking tools such as Grammarly, and software suites such as Microsoft Office, a clear and specific statement that lays out permissible usages is the best possible way to communicate with your students.

For example, an instructor who does not want AI tools to be used in conjunction with written work but who wants to encourage students to do probing research on model content might consider the following statement:

In concert with Rutgers’ code of conduct, which mandates “that all work submitted in a course, academic research, or other activity is the student’s own and created without the aid of impermissible technologies, materials, or collaborations,” this course has been designed to promote your learning, critical thinking, skills, and intellectual development without reliance on unauthorized technology, including chatbots and other forms of “artificial intelligence” (AI). Although you may use search engines, spell-check, and simple grammar-check in crafting your assignments, you will be asked to submit your written work with the following statement: “I certify that this assignment represents my own work. I have not used any unauthorized or unacknowledged assistance or sources in completing it, including free or commercial systems or services offered on the internet or text-generating systems embedded into software.”

A partial exception to this policy is an authorized exploration of model bias, which we will conduct in Week X in order to build your critical AI literacy.

Please consult with your instructor if you have any questions about the permissible use of technology in this class.

(As above, our recommendation is that any instructor assigning work that involves mandatory use of an AI tool consider developing an option for students who have data privacy or other concerns.)

Once again, we are sharing some alternative or additional language that was developed at the University of Toronto.

III. For instructors who wish to permit or encourage use of AI tools

When specifying on one’s syllabus that the use of chatbots and other AI tools is permissible (or encouraged), instructors should be as clear as possible about how this decision comports with the learning goals for their course and may wish to refer to the Rutgers code of conduct, cited above, in doing so. Instructors may also want to emphasize critical AI literacy including the importance of recognizing that current AI tools are subject to bias, misinformation, and “hallucinations” (as discussed above). Given that AI tools are (or may soon be) incorporated seamlessly into platforms such as Google Docs, grammar-checking tools such as Grammarly, and software suites such as Microsoft Office, a clear and specific statement about AI tools is the best possible way to communicate with your students.

For example, an instructor who encourages the use of AI tools in the course as a way to provide students with the opportunity to gain experience working with emerging technologies and to enhance their understanding of the topic might consider the following statement:

In concert with Rutgers’ code of conduct, which mandates “that all work submitted in a course, academic research, or other activity is the student’s own and created without the aid of impermissible technologies, materials, or collaborations,” this course has been designed to help you develop knowledge and gain emerging skills that will be useful to you as workplace professionals. AI tools may be used as an aid in the creative process, but with the understanding that this should be accompanied by critical thinking and reflection. Students who choose to use these tools are responsible for any errors or omissions resulting from their use. They will also be required to provide as an appendix the prompts used, the generated output, and a thoughtful reflection on the outcomes. When appropriate, students may also be asked to consider the environmental and social costs of using the tools.

(As above, our recommendation is that any instructor assigning work that involves mandatory use of an AI tool consider developing an option for students who have data privacy or other concerns.)

Some instructors who permit use of AI tools for written assignments implement syllabus statements like these, developed at the University of Toronto.

5. A Short List of Resources You Might Wish to Read or to Share with Your Students

This webpage already includes many resources that you might enjoy or might wish to share with or assign to your students. Here we provide a very short list. As this is a living document, we plan to continue to update it with additional resources as they become available.



Footnotes

[1] Though sometimes described as a “start-up,” OpenAI is valued at about $30 billion and has been funded partly through multi-billion dollar investments from Microsoft in exchange for a 49% stake. Anthropic was founded by former OpenAI employees who disagreed with the direction being taken by OpenAI.

[2] See, for example, the MIT Technology Review’s criticism of replacing search engines with chatbots and, for a research paper on the topic, Shah and Bender (2022). On the various technical challenges of incorporating chatbots into search engines (as in Microsoft’s Bing and Google’s Bard), see, for example, Vincent (2023).

[3] For a point-by-point critique of the “reckless exaggeration” such writing may entail, see Noah Giansiracusa’s response to an opinion essay in the New York Times.

[4] See Conrad’s “Blueprint for an AI Bill of Rights for Educators and Students” for a useful framework for enabling instructors to teach critical AI literacy (including a basis for discussing such “rights” with your students).

Created in August 2023
