ChatGPT: Friend or Foe? Maybe Both.

An image of a network to symbolize ChatGPT.

(NOTE: This article is a slightly abbreviated and edited version of a blog originally published in May 2023.)

In 2006, British mathematician and entrepreneur Clive Humby proclaimed that “data is the new oil.”

At the time, his enthusiastic (if somewhat exaggerated) comment reflected the fervor and faith in the then-expanding internet economy. And his metaphor had some weight, too. Like oil, data can be collected (or perhaps one should say extracted), refined, and sold. Both are in high demand, and just as the inappropriate or excessive use of oil has deleterious effects on the planet, so may the reckless use of data.

The newest oil concerning many, one that is shaking up the knowledge workplace, is ChatGPT. Released by OpenAI in November 2022, ChatGPT combines chatbot functionality with a very clever language model. Or to be more precise, the GPT in its name stands for Generative Pre-trained Transformer.

Global Campus previously published a blog about robots in the workplace. One of the concerns raised then was that of AI taking away our jobs. But perhaps, now, the even bigger concern is AI doing our writing, generating our essays, or even our TV show scripts. That is, many are worried about AI substituting for both our creative and critical thinking.

Training Our AI Writing Helper

ChatGPT is not an entirely new technology. Experts have long integrated large language models into customer service chatbots, Google searches, and autocomplete email features. The ChatGPT of today is an updated version of GPT-3, which has been around since 2020. But ChatGPT’s origins go further back. Almost 60 years ago, MIT’s Joseph Weizenbaum rolled out ELIZA: the first chatbot. Named after Eliza Doolittle, this chatbot mimicked a Rogerian therapist by (perhaps annoyingly) rephrasing statements as questions. If someone typed, for instance, “My father hates me,” it would reply with another question: “Why do you say your father hates you?” And so on.
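
To give a flavor of just how simple ELIZA’s trick was, here is a minimal, hypothetical Python sketch of the pattern-matching-and-reflection approach (the two rules and the word list below are invented for illustration; the real program used a much larger script of ranked patterns):

```python
import re

# ELIZA-style reflection: swap first-person words for second-person ones,
# so "my father hates me" becomes "your father hates you."
REFLECTIONS = {"my": "your", "me": "you", "i": "you", "am": "are"}

def reflect(phrase):
    return " ".join(REFLECTIONS.get(word, word) for word in phrase.lower().split())

def respond(statement):
    # Two toy rules standing in for ELIZA's full script of patterns.
    match = re.match(r"i feel (.*)", statement, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    match = re.match(r"my (.*)", statement, re.IGNORECASE)
    if match:
        return f"Why do you say your {reflect(match.group(1))}?"
    return "Please tell me more."

print(respond("My father hates me"))  # Why do you say your father hates you?
```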

The current ChatGPT’s immense knowledge and conversational ability are indeed impressive. To acquire these skills, ChatGPT was “trained on huge amounts of data from the Internet, including conversations.” An encyclopedia of text-based data was combined with a “machine learning technique called Reinforcement Learning from Human Feedback (RLHF),” in which human trainers provided the model with conversations in which they played both the AI chatbot and the user. In other words, this bot read a lot of text and practiced mimicking human conversations. Its responses, however, are not based on knowing the answers, but on predicting which words will come next in a sequence.
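
To make that “predict the next word” idea concrete, here is a minimal, hypothetical Python sketch of next-token sampling (the tiny probability table is invented for illustration; a real model scores tens of thousands of possible tokens with a neural network conditioned on the entire conversation):

```python
import random

# Toy next-token distributions keyed by the two preceding words.
# A real language model computes these probabilities on the fly.
NEXT_TOKEN_PROBS = {
    ("data", "is"): {"the": 0.7, "a": 0.1, "everywhere": 0.2},
    ("is", "the"): {"new": 0.6, "best": 0.4},
    ("the", "new"): {"oil": 0.65, "electricity": 0.35},
}

def generate(prompt, max_steps=3):
    tokens = prompt.lower().split()
    for _ in range(max_steps):
        context = tuple(tokens[-2:])      # condition on the last two words
        dist = NEXT_TOKEN_PROBS.get(context)
        if dist is None:                  # no prediction for this context
            break
        words, weights = zip(*dist.items())
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("Data is"))  # e.g. "data is the new oil"
```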

The result of this training is a chatbot almost indistinguishable from a human voice. And it’s getting better, too. As the chatbot engages with more users, its tone and conversations become increasingly lifelike (OpenAI).

Using ChatGPT for Mundane Writing Tasks

Many have used, tested, and challenged ChatGPT. Although one can’t say for certain that the bot always admits its mistakes, it definitely rejects inappropriate requests. It will deliver some clever pick-up lines. However, it won’t provide instructions for cheating on your taxes or on your driver’s license exam. And if you ask it what happens after you die, it is suitably dodgy.

But what makes ChatGPT so popular, and some would say dangerous, is the plethora of text-based documents it can produce, such as the following:

  • Long definitions
  • Emails and letters
  • Scripts for podcasts and videos
  • Speeches
  • Basic instructions
  • Quiz questions
  • Discussion prompts
  • Lesson plans
  • Learning objectives
  • Designs for rubrics
  • Outlines for reports and proposals
  • Summaries of arguments
  • Press releases
  • Essays

And this is only a short list of its talents. There are people who have used this friendly bot to construct emails to students, quiz questions, and definitions. The internet is also awash with how-to articles on using ChatGPT to write marketing copy, novels, and speeches. Noy and Zhang even claim that this “generative writing tool increases the output quality of low-ability workers while reducing their time spent, and it allows high-ability workers to maintain their quality standards while becoming significantly faster.”

Below are examples of two onerous writing tasks assigned to ChatGPT: a reference letter and learning goals.

An overly wordy reference letter generated by ChatGPT.
Learning goals created by ChatGPT after being instructed to use Bloom’s taxonomy for a Sci-Fi course.

Recognizing ChatGPT’s Limited Knowledge

Despite helping writers with mundane tasks, this artificial intelligence helper does have its limitations. First of all, it is only as wise as its instructions. For instance, the effusive reference letter above resulted from a prompt that gave no guidance about length or tone. ChatGPT just threw everything into the written soup.

This AI helper also makes mistakes. In fact, right on the first page, OpenAI honestly admits that its chatbot “may occasionally generate incorrect information, and produce harmful instructions or biased content.” It also has “limited knowledge of the world and events after 2021.”

And it reveals these gaps, often humorously.

For instance, when prodded to provide information on several well-known professors from various departments, it came back with wrong answers. In fact, it actually misidentified one well-known department chair as a Floridian famous for his philanthropy and footwear empire. In this case, ChatGPT not only demonstrated “limited knowledge of the world” but also generated incorrect information. As academics, writers, and global citizens, we should be concerned about releasing more fake information into the world.

Taking these and other errors into consideration, one wonders on what data, exactly, ChatGPT was trained. Did it, for instance, just skip over universities? Academics? Respected academics with important accomplishments? As we know, what the internet prioritizes says a lot about what it and its users value.

Creating Errors

There are other limitations. OpenAI’s ChatGPT can’t write a self-reflection or decent poetry. And because it is not connected to the live internet, it cannot summarize recent online content.

It also can’t approximate the tone of this article, which shifts between formal, informal, and colloquial. Nor can it whimsically insert allusions or pop culture references.

To compensate for its knowledge gaps, ChatGPT generates answers that are incorrect or only partially correct.

In the case of generating mistakes, ChatGPT does mimic the human tendency to fumble, to tap dance around an answer, and to make up material rather than humbly admit ignorance.

Passing Along Misinformation

Because it was trained on text-based data, which might have been incorrect in the first place, ChatGPT often passes this fakery along. It also (as the example above shows) has a tendency to fabricate references and quotations.

It can also spread misinformation. (Misinformation, unintentional false or inaccurate information, is different from disinformation: the intentional spread of untruths to deceive.)

The companies CNET and Bankrate discovered this glitch the hard way. For months, they had been duplicitously publishing AI-generated informational articles under human bylines. When this unethical behavior was discovered, it drew the ire of the internet.

CNET’s stories even contained both plagiarism and factual mistakes, or what Jon Christian at Futurism called “bone-headed errors.” Christian humorously drew attention to mathematical mistakes that were delivered with all the panache of a financial advisor. For instance, the article claimed that “if you deposit $10,000 into a savings account that earns 3% interest compounding annually, you’ll earn $10,300 at the end of the first year.” In reality, you’d be earning only $300.
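
The arithmetic is easy to verify; here is a quick calculation (a sketch, using the figures quoted in the article) that separates the ending balance from the interest actually earned:

```python
principal = 10_000
rate = 0.03  # 3% interest, compounded annually

balance = principal * (1 + rate)       # 10300.0 -- the balance after one year
interest_earned = balance - principal  # 300.0   -- what you actually earn

print(f"Balance: ${balance:,.2f}; interest earned: ${interest_earned:,.2f}")
```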

All three screwups . . . highlight a core issue with current-generation AI text generators: while they’re legitimately impressive at spitting out glib, true-sounding prose, they have a notoriously difficult time distinguishing fact from fiction.

Jon Christian

Revealing Biases

And ChatGPT is not unbiased either. First, this bot has a strong US leaning. For instance, when prompted to write about the small town of Wingham, Ontario, it generated some sunny, nondescript prose. However, it omitted the town’s biggest claim to fame: it is the birthplace of Nobel Prize-winning author Alice Munro.

This bias stems from ChatGPT being trained on data pulled from the internet. Thus, it reflects all the prejudices of those who wrote and compiled this information.

Safiya Umoja Noble’s exposé of the corrupt algorithms behind the data that ChatGPT was trained on.

This problem was best articulated by Safiya Umoja Noble in her landmark book Algorithms of Oppression. In this text, she challenges the notion that search engines are value-neutral, exposing their hegemonic norms and the consequences of their various sexist, racist biases. ChatGPT, to be sure, is also affected by, if not infected with, these biases.

What really made me lose confidence in ChatGPT is when I asked if the United States ever had a president with African ancestry, and it answered no, then apologized after I reminded the chatbot about Barack Obama.

Jamaal Abdul-Alim, Education Editor, The Conversation

Despite agreeing with Noble’s and Abdul-Alim’s very serious concerns, and thinking that ChatGPT can be remarkably dumb at times, many may not want to smash the algorithmic machines anytime soon. Furthermore, there are writers who do use this bot to generate working definitions of unfamiliar technical terms encountered in their work. For instance, it can help non-experts understand the basics of such concepts as computational fluid dynamics and geospatial engineering. Still, many professionals choose not to rely on it, or trust it, too much.

Letting Robots Do Your Homework

But it is students’ trust in and reliance on ChatGPT that is causing chaos and consternation in the education world.

Many recent cases of cheating were connected to one of this bot’s most popular features: its impressive ability to generate essays in seconds. For instance, it constructed a seven-paragraph comparison/contrast essay on Impressionism and Post-Impressionism in under a minute.

And the content of this essay, though vague, does hold some truth: “Impressionism had a profound impact on the art world, challenging traditional academic conventions. Its emphasis on capturing the fleeting qualities of light and atmosphere paved the way for modern art movements. Post-impressionism, building upon the foundations of impressionism, further pushed the boundaries of artistic expression. Artists like Georges Seurat developed the technique of pointillism, while Paul Gauguin explored new avenues in color symbolism. The post-impressionists’ bold experimentation influenced later art movements, such as fauvism and expressionism.”

With a few modifications and some fact-checking, this text would fit comfortably into an introductory art textbook. Or maybe a high-school or college-level essay.

Sounding the Alarm About ChatGPT

Very shortly after people discovered this essay-writing feature, stories of academic integrity violations flooded the internet. An instructor in an R1 STEM grad program reported that several students had cheated on a project report milestone: “All 15 students are citing papers that don’t exist.” An alarming article in The Chronicle of Higher Education, written by a student, warned that educators had no idea how much students were using AI. The author rejected the claim that AI’s voice is easy to detect: “It’s very easy to use AI to do the lion’s share of the thinking while still submitting work that looks like your own.”

And it’s not just a minority of students using ChatGPT either. In a study.com survey of 200 K-12 teachers, 26% had already caught a student cheating by using this tool. In a BestColleges survey of 1,000 current undergraduate and graduate students (March 2023), 50% of students admitted to using AI for some portion of their assignment, 30% for the majority, and 17% had “used it to complete an assignment and turn it in with no edits.”

Soon after publications like Forbes and Business Insider began pushing out articles about rampant cheating, the internet was buzzing. An elite program at a Florida high school reported a chatbot “cheating scandal.” But probably the most notorious episode involved a student who used the bot to write an essay for his Ethics and Artificial Intelligence course. Sadly, the student did not really understand the point of the assignment.

Incorporating ChatGPT in the Classroom

According to a Gizmodo article, many school systems have banned ChatGPT, such as those in New York City, Los Angeles, Seattle, and Fairfax County, Virginia.

But a growing number of teachers aren’t that concerned. Many don’t want to ban ChatGPT altogether. Eliminating this tool from educational settings, they caution, will do far more harm than good. Instead, they argue that teachers must set clearer expectations about writing and cheating. They should also create ingenious assignments that students can’t hack with their ChatGPT writing coach, as well as construct learning activities that reveal the tool’s limitations.

Others have suggested that the real problem is teachers relying on methods of assessment that are too ChatGPT-hackable: weighty term papers and final exams on generic topics. Teachers may need to rethink their testing strategies, or as that student from the Chronicle asserted, “[M]assive structural change is needed if our schools are going to keep training students to think critically.”

Sam Altman, CEO of OpenAI, also doesn’t agree with all the hand-wringing about ChatGPT cheating. He blithely suggested that schools need to “get over it.”

Generative text is something we all need to adapt to . . . . We adapted to calculators and changed what we tested for in math class, I imagine. This is a more extreme version of that, no doubt, but also the benefits of it are more extreme, as well.

Sam Altman

Read MTU’s own Rod Bishop’s guidance on ChatGPT in the university classroom. And think about your stance on this little AI writing helper.

Symposium Brings Together MTU and MSU Researchers

Presenters, organizers, and some attendees of the second MTU / MSU collaborative research symposium pose for a group photo.

Developing novel approaches to fighting disease. Using machine learning and computational methods to solve epidemiological problems and improve patient health. Applying new technologies to intervene in disease. These are just a few of the challenges, and ambitious solutions, facing biomedicine now and in the future. These topics, and several others, were addressed at a recent invitation-only collaborative research symposium between MTU and MSU.

On Friday, Oct. 27, 2023, groups of researchers from Michigan Technological University and Michigan State University met in a collaborative research symposium.

Entitled Engineering the Future of Human Health II: Biomedicine in the 4th Industrial Revolution, this event was held in Michigan Tech’s Memorial Union Building.

David Lawrence, vice president for Global Campus and continuing education, opens the symposium.

The symposium preceded the Upper Peninsula Medical Conference (UPMC), put on by MTU’s Health Research Institute, which focused on diverse approaches to health challenges affecting rural communities. It marked the second collaborative research symposium between the two universities: the Michigan State University College of Human Medicine hosted the first on March 13, 2023, in MSU’s beautiful Secchia Center in Grand Rapids, Michigan.

Delivering Short Talks With A Big Impact

For these symposiums, the goals continue to be learning about each other’s work and investigating areas of shared objectives, mutual interests, and possible research projects between MTU and MSU. But perhaps the even greater purpose is that of these institutions combining forces (and resources) to tackle the most challenging health-related issues of the coming decades.

Dr. Jeremy Prokop begins the symposium with his presentation.

To disseminate as much research as possible, presenters kept their talks brief. In total, 12 researchers from MTU and 11 from MSU delivered rapid-fire, ten-minute presentations in six consecutive sessions exploring the state of biomedicine in the era of Industry 4.0:

  • Computational Health Science (Session 1)
  • Big Data in Healthcare (Session 2)
  • Kinesiology and Physiology (Session 3)
  • Neural Control and Disease (Session 4)
  • Metabolic Disease (Session 5)
  • Chemical Biology (Session 6)

This structure provided opportunities for researchers not only to learn from each other, but also to explore possible connections between their fields.

And the fields were, indeed, diverse. Professionals at this multidisciplinary event came from applied computing, biological sciences, biomedical engineering, chemical engineering, chemistry, computer science and engineering, kinesiology and integrative physiology, pediatrics and human development, and quantitative health sciences. Overall, the quality of the research and breadth of disciplines spoke to the depth of expertise at this symposium and to the challenges and opportunities facing the future of biomedicine.

There was also a concurrent combined poster session with the UPMC that featured research from several MSU and MTU students, as well as a few professors.

Exploring Connections Between MTU and MSU

Throughout the symposium, there were several salient connections both within and between sessions. For instance, many experts presented on novel treatments for conditions and/or diseases affecting public health, such as diabetes, cancer, cystic fibrosis, neurodegenerative disorders, and lack of activity. Dr. Ping “Peter” Wang (MSU) tackled integrating bioengineering into Type 1 diabetes treatment. And Dr. Marina Tanasova (MTU, Session 6), after summarizing the role of glucose transporters (GLUTs) in various diseases, focused on targeting these GLUTs in cancer therapy. Dr. Ashutosh Tiwari (MTU, Session 4) analyzed the role of protein aggregates (misfolded proteins) in the cellular toxicity central to neurodegenerative diseases.

Another common thread was responding to the continuing public health crisis of Covid-19. For example, the symposium began with Dr. Jeremy Prokop (MSU, Corewell Health) describing his long-running research project on genotyping various Covid-19 variants. He then shifted to how the immunosuppression connected to Covid-19 is associated with the emergence of other viruses, such as Epstein-Barr (EB) and the Human Papillomavirus (HPV).

Throughout the symposium, several experts also assessed the leveraging of artificial intelligence and computational approaches to address health ailments. Dr. Hoda Hatoum (MTU, Session 1) presented on experimental and computational approaches to model cardiovascular diseases and therapies.

There were also presentations on more low-tech, but nonetheless impressive, methods for improving patient outcomes. Dr. William Cooke (MTU, Session 3) demonstrated how a rather simple impedance-threshold breathing device can reduce hemorrhaging. And Dr. Steve Elmer (MTU, Session 3) presented on using Blood Flow Restriction (BFR) to increase exercise intensity without taxing joints.

Dr. Matthew Harkey (MSU, Session 3) presented research on using ultrasound and biomechanics to assess arthritis.

Dr. Steve Elmer (MTU, Session 3) delivered both a talk and a poster.

Targeting the Youth Mental Health Crisis in Michigan

CHI Program Director Dr. Guy Hembroff spoke on using AI to improve the mental health of youth (MTU, Session 2). He began by stressing some startling statistics from Youthgov on suicide in the 15-24 age group. Most striking was the fact that “taking one’s life is the second leading cause of death for youths.”

Dr. Guy Hembroff in Session 2.

Hembroff proposed a number of strategies for using artificial intelligence to track, intervene on, and improve the mental health of youth.

First, he articulated that AI may be employed not only to enhance preventative mental health measures, but also to provide safe, responsive data.

Or to put it another way, through wearables, daily mental health check-ins, and user feedback, youth could have personalized, responsive mental health treatment delivered right to them. In short, Hembroff outlined a protocol for providing inexpensive, effective tools that quickly monitor and respond to youth in crisis, reduce the need for reactionary care, and prevent mental illness from spiraling into suicide.

There is another positive effect of this AI-assisted mental health plan: gamifying the activity of tracking one’s mental health. Youth are, after all, constantly interacting with their phones. Thus, this gamification could help reduce the stigma associated with reporting depression, anxiety, and other mental illnesses.

Symposium Goals: Promoting Networking and Sharing Research

Hembroff’s talk captured one of the main threads of the symposium: using ingenious, cost-effective, computational approaches to solve crucial health issues. But all of the research was impressive, and there were several expert scientific communicators, such as Zhiying “Jenny” Shan (MTU, Session 5), who walked the audience through her research on extracellular vesicles and blood pressure regulation.

But you can learn more about the depth and breadth of the research by examining the event schedule.

In the closing remarks for the symposium, Dr. Christopher Contag (MSU) further elaborated on the connections between these presentations and the opportunities for collaborative research. First, he summarized some commonalities, such as further analyzing cardiovascular disease, studying extracellular vesicles as diagnostic markers, developing strategies for early intervention, and creating a Long Covid research center.

In addition, Dr. Contag focused on the importance of learning the language of cells and communicating with them: that is, this research is about “not just asking them what they’re saying, but telling them what to do.” He saw this communication as central to modulating the immune system and to controlling disease states.

Dr. Christopher Contag (MSU) delivers the closing remarks.

“I think we’re all focused on distributed healthcare and using our approaches and innovation to reduce health disparities. It’s a theme that’s shared between the two universities.”

Dr. Christopher Contag, Director of the Institute for Quantitative Health Science and Engineering (IQ) and Chair of the Department of Biomedical Engineering in the College of Engineering (MSU)

Moving Beyond This Symposium

For Engineering the Future of Human Health II, MTU’s cosponsors were David Lawrence, vice president for Global Campus and continuing education; Dr. Sean J. Kirkpatrick, professor and department chair, Biomedical Engineering; Dr. Caryn Heldt, professor in Chemical Engineering and director of the Health Research Institute; and Dr. William H. Cooke, professor and department chair, Kinesiology and Integrative Physiology. And for MSU, Dr. Adam Alessio, Departments of Computational Mathematics, Science, and Engineering, Biomedical Engineering and Radiology; and Dr. Bin Chen, associate professor, Department of Pediatrics and Human Development took on the roles of cosponsors.

This collaborative symposium is crucial to the MTU Global Campus mission of helping Michigan Technological University grow partnerships with other higher-ed institutions and participate in multidisciplinary research that tackles pressing biomedical challenges.

The next step, then, is instituting collaborative research working groups. Furthermore, the two universities hope to pool both talent and resources to build an MSU / MTU translational research center in Grand Rapids, MI. Of this center, David Lawrence articulated its two main objectives: “first, developing cutting-edge health technologies through advanced applied biomedical research; and, second, but equally important, ultimately improving the health of the citizens of Michigan and those of the nation.”

Readers can also learn more about this event in the coverage by TV6.
