In 2006, British mathematician and entrepreneur Clive Humby proclaimed that “data is the new oil.”
At the time, his enthusiastic (if not exaggerated) comment reflected the fervor and faith in the then-expanding internet economy. And his metaphor had some weight, too. Like oil, data can be collected (or perhaps one should say extracted), refined, and sold. Both are in high demand, and just as the inappropriate or excessive use of oil has deleterious effects on the planet, so may the reckless use of data.
Right now, the newest oil concerning many, one that is shaking up the knowledge workplace, is ChatGPT. Released by OpenAI in November 2022, ChatGPT combines chatbot functionality with a very clever language model. Or, to be more precise, the GPT in its name stands for Generative Pre-trained Transformer.
I previously wrote a blog about robots in the workplace. One of the concerns I raised then was AI taking away our jobs. But perhaps, now, the even bigger concern is AI doing our writing, generating our essays, and substituting for both our creative and critical thinking. Thus, in this first blog on everyone’s favorite chatbot, I summarize the origins of this technology. Adopting the perspectives of both writer and teacher, I touch not only on the limitations of ChatGPT, but also on its nefarious potential for spreading disinformation and threatening academic integrity.
Training Our AI Writing Helper
It should be noted that ChatGPT is not an entirely new technology. That is, large language models have long been integrated into customer service chatbots, Google searches, and autocomplete email features. The ChatGPT we know is an updated version of GPT-3, which has been around since 2020. But if we want to be really picky, we can trace its origins to almost 60 years ago, when MIT’s Joseph Weizenbaum rolled out ELIZA: the first chatbot. Named after Eliza Doolittle, this chatbot mimicked a Rogerian therapist by (perhaps annoyingly) reflecting users’ statements back as questions. If a user typed, for instance, “My father hates me,” it would reply with a question: “Why do you say your father hates you?”
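For the curious, the trick behind ELIZA is surprisingly simple. Here is a minimal, hypothetical sketch of that kind of reflection in Python (my own toy illustration, not Weizenbaum’s actual script, which used a much richer set of pattern rules):

```python
import re

# A toy, hypothetical sketch of ELIZA-style reflection -- not Weizenbaum's
# actual program. Match a "My ..." statement, swap first-person words for
# second-person ones, and bounce the statement back as a question.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(fragment: str) -> str:
    # Swap pronouns word by word.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(statement: str) -> str:
    match = re.match(r"my (.+)", statement.strip(" ."), re.IGNORECASE)
    if match:
        return f"Why do you say your {reflect(match.group(1))}?"
    return "Please, go on."

print(respond("My father hates me"))  # -> Why do you say your father hates you?
```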
The current ChatGPT’s immense knowledge and conversational ability are indeed impressive. To acquire these skills, ChatGPT was “trained on huge amounts of data from the Internet, including conversations.” This encyclopedia of text-based data was combined with a “machine learning technique called Reinforcement Learning from Human Feedback (RLHF), in which human trainers provided the model with conversations in which they played both the AI chatbot and the user.” In other words, this bot read a lot of text and practiced mimicking human conversations. Its responses, nonetheless, are not based on actually knowing the answers, but on predicting what words will come next in a series.
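If “predicting what words will come next” sounds abstract, a toy sketch may help. The snippet below is my own illustration and assumes nothing about ChatGPT’s internals: it predicts the next word purely from bigram counts. Real models replace the counting with a neural network trained on billions of tokens, but the objective is the same.

```python
from collections import Counter, defaultdict

# A toy next-word predictor: count which word follows which in some
# training text, then predict the most frequent follower.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    # Return the most frequent follower of `word` in the training text.
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat' (seen twice, vs 'mat'/'fish' once each)
```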
The result of this training is that this chatbot is almost indistinguishable from the human voice. And it’s getting better, too. As the chatbot engages with more users, its tone and conversations become increasingly lifelike. OpenAI claims that this “dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.”
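That dialogue format is easiest to see in OpenAI’s chat API, where the whole conversation is resent each turn as a list of role-tagged messages. Here is a hedged sketch using the openai Python package’s v1 interface (the model name and the questions are my own assumptions, and an API key is required):

```python
# Sketch of the chat "dialogue format": requires `pip install openai`
# and an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "user", "content": "Who painted Impression, Sunrise?"},
]
reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
messages.append({"role": "assistant", "content": reply.choices[0].message.content})

# Because the whole conversation is resent each turn, the model can
# resolve the pronoun in this follow-up question.
messages.append({"role": "user", "content": "When did he paint it?"})
reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(reply.choices[0].message.content)
```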
Using ChatGPT for Mundane Writing Tasks
I’ve used, tested, and challenged ChatGPT. Although I can’t say for certain that it always admits its mistakes, it definitely rejects inappropriate requests. It will deliver some clever pick-up lines, but it won’t provide instructions for cheating on your taxes or on your driver’s license exam. And if you ask it what happens after you die, it is suitably dodgy.
But what makes ChatGPT so popular, and some would say dangerous, is the plethora of text-based documents it can write, such as the following:
- Long definitions
- Emails and letters
- Scripts for podcasts and videos
- Speeches
- Basic instructions
- Quiz questions
- Discussion prompts
- Lesson plans
- Learning objectives
- Designs for rubrics
- Outlines for reports and proposals
- Summaries of arguments
- Press releases
- Essays
And this is just the short list of its talents. I know people who have used this friendly bot to construct emails to students, quiz questions, and definitions. The internet is also awash with how-to articles on using ChatGPT to write marketing copy, novels, and speeches. An MIT working paper (Noy and Zhang) even found that office workers were far more productive when they used ChatGPT for mundane writing tasks, such as press releases, emails, and short reports.
“College-educated professionals performing mid-level professional writing tasks experience substantial increases in productivity when given access to ChatGPT . . . . The generative writing tool increases the output quality of low-ability workers while reducing their time spent, and it allows high-ability workers to maintain their quality standards while becoming significantly faster.”
I’ll take their findings with a grain of salt. That is, just as I don’t trust Grammarly to catch subject-verb agreement errors, I don’t trust ChatGPT to write my emails or my press releases. So I put it to work on two of the tasks I previously encountered as a writing instructor.
Constructing Learning Goals
First, I gave ChatGPT this heavy-handed command: “Please generate five learning goals for an introductory course on Science Fiction. Make sure that you do not use the words ‘understand’ or ‘know’ when constructing these goals. Also, please rely on Bloom’s taxonomy.”
In a few seconds, out popped five learning goals using several of Bloom’s verbs: analyze, evaluate, apply, create, and compare and contrast.
On my second attempt, I asked it to put these goals in order of ascending complexity, a request with which it quickly complied.
(Truthfully, no Sci-Fi course could live up to these goals, but this task was a fun one nonetheless.)

Generating Reference Letters
Then, I assigned it a task common to many academics: writing a reference letter.
When I taught Professional and Technical Communication at MTU, students would make requests for these letters. These requests would usually arrive at the end of the semester, an unfortunate time when I (and every other instructor) was bone-tired from grading. It turns out that ChatGPT could have helped (however badly) with this task.
I say “badly” because ChatGPT is only as smart as its user. In my case, I didn’t specify the length of the reference letter. So I watched as the bot dutifully churned out an 8-paragraph, ridiculously detailed, effusive letter, one no reasonable human would write, let alone read or believe.
As I construct this blog, I feel sorry for any employers, admissions officers, and scholarship officials who might have to wade through these over-the-top ChatGPT reference letters.
May the force be with you all.

Recognizing ChatGPT’s Limited Knowledge
Despite helping us with onerous writing tasks, this artificial intelligence helper does have its limitations. In fact, right on the first page, OpenAI honestly admits that its chatbot “may occasionally generate incorrect information, and produce harmful instructions or biased content.” It also has “limited knowledge of world and events after 2021.”
And it reveals these gaps, often humorously.
For instance, when asked to provide information about my husband, Dr. Adam Wellstead, a prolific scholar and public policy expert at MTU, ChatGPT was stumped. It gave this apologetic reply: “I do not have specific information about a person named Dr. Adam Wellstead. It’s possible that he is a relatively new academic or professional who emerged after that time, or he may be a private individual without significant public recognition.” So not true.
When prodded to provide information on several well-known professors from various departments, it came back with similar answers. In fact, it misidentified one well-known department chair as a Floridian famous for his philanthropy and footwear empire. In this case, ChatGPT not only demonstrated “limited knowledge of the world” but also generated incorrect information. As academics, writers, and global citizens, we should be concerned about releasing more disinformation into the world.
Taking into consideration these and other errors, one wonders on what data, exactly, ChatGPT was trained. Did it, for instance, just skip over universities? Academics? Respected academics with important accomplishments? As we know, what the internet prioritizes says a lot about what it and its users value.
Creating Mistakes
There are other limitations. ChatGPT can’t write a self-reflection or decent poetry. And because it is not connected to the live internet, it cannot summarize recent online content.
It also can’t approximate the tone of this article, which shifts among formal, informal, and colloquial registers. Or whimsically insert allusions and pop culture references.
To compensate for its knowledge gaps, ChatGPT generates answers that are incorrect or only partially correct.
In the case of generating mistakes, ChatGPT does mimic the human tendency to fumble, to tap dance around an answer, and to make up material rather than humbly admit ignorance.
Because it was trained on text-based data, some of which might have been incorrect in the first place, ChatGPT often passes this fakery along. That is, it also (as the example above shows) has a tendency to fabricate references and quotations.
The companies CNET and Bankrate found out about this glitch the hard way. For months, they had been duplicitously publishing AI-generated articles as ordinary informational articles under a byline. When this unethical behavior was discovered, it drew the ire of the internet.
CNET’s stories even contained both plagiarism and factual mistakes, or what Jon Christian at Futurism called “bone-headed errors.” Christian humorously drew attention to mathematical mistakes delivered with all the panache of a financial advisor. For instance, one article claimed that “if you deposit $10,000 into a savings account that earns 3% interest compounding annually, you’ll earn $10,300 at the end of the first year.” In reality, you’d earn only $300.
All three screwups . . . highlight a core issue with current-generation AI text generators: while they’re legitimately impressive at spitting out glib, true-sounding prose, they have a notoriously difficult time distinguishing fact from fiction.
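For the record, the corrected arithmetic is a one-liner. Here it is as a quick Python check (nothing ChatGPT-specific, just compound interest at the quoted figures):

```python
# $10,000 at 3% compounded annually: after one year the BALANCE is
# $10,300, so the interest EARNED is $300 -- not $10,300, as CNET's
# AI-written article claimed.
principal = 10_000
rate = 0.03

balance = principal * (1 + rate) ** 1  # balance after one year
interest_earned = balance - principal

print(balance)          # 10300.0  (what you have)
print(interest_earned)  # 300.0    (what you earned)
```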
Revealing Biases
And ChatGPT is not unbiased, either. First, this bot has a strong US leaning. In my experience, it often gives incomplete replies about that foreign country called Canada. For instance, when prompted to generate an article on the small Southern Ontario town of Wingham, it produced some sunny, nondescript prose suitable for unknowing tourists. However, it omitted the town’s biggest claim to fame: it is the birthplace of Nobel Prize-winning author Alice Munro.
ChatGPT was also trained on data pulled from the internet, which means it reflects the prejudices of those who wrote and compiled this information. This problem was best articulated by Safiya Umoja Noble in her landmark book Algorithms of Oppression, which I have previously reviewed. In this text, she challenges the idea that search engines are value-neutral, exposing their hegemonic norms and the consequences of their sexist, racist biases. ChatGPT, to be sure, is also affected by, if not infected with, these biases.
Don’t get me wrong. I’m not a Luddite. Despite agreeing with Noble’s concerns, and thinking that ChatGPT can be remarkably dumb at times, I won’t be smashing the algorithmic machines anytime soon. Furthermore, I admit to using this bot to generate definitions of unfamiliar technical terms encountered in my work. AI has helped me grasp the basics of computational fluid dynamics and geospatial engineering, concepts far beyond my expertise. Still, after my two decades as a teacher, I choose neither to rely on it nor to trust it.
Letting Robots Do Your Homework
But it is students’ trust in and reliance on one of ChatGPT’s features that is causing chaos and consternation in the education world.
That is, many recent cases of cheating are connected to one of this bot’s most popular features: its impressive ability to generate essays in seconds. For instance, I asked it to write a 7-paragraph comparison/contrast essay on Impressionism and Post-Impressionism. Then, I watched as the little cursor dutifully typed out the product in under a minute.
And the content of this essay, though vague, does hold some truth: “Impressionism had a profound impact on the art world, challenging traditional academic conventions. Its emphasis on capturing the fleeting qualities of light and atmosphere paved the way for modern art movements. Post-impressionism, building upon the foundations of impressionism, further pushed the boundaries of artistic expression. Artists like Georges Seurat developed the technique of pointillism, while Paul Gauguin explored new avenues in color symbolism. The post-impressionists’ bold experimentation influenced later art movements, such as fauvism and expressionism.”
With a few modifications and checking of facts, this text would fit comfortably into an introductory art textbook. Or maybe a high-school or a college-level essay.
I, too, could have let robots do my homework. That is, I could have challenged ChatGPT to write an article on the pros and cons of itself. Although it surely would have produced this essay in under a minute, I doubt it would have been self-reflective or double-checked its references.
Sounding the Alarm About ChatGPT
Very shortly after people discovered this essay-writing feature, stories of academic integrity violations flooded the internet. An instructor at an R1 STEM grad program confessed that several students had cheated on a project report milestone. “All 15 students are citing papers that don’t exist,” he lamented. An alarming article from The Chronicle of Higher Education, written by a student, warned that educators had no idea how much students were using AI. The author disputed the claim that AI’s voice is easy to detect. “In reality, it’s very easy to use AI to do the lion’s share of the thinking while still submitting work that looks like your own.”
And it’s not just a minority of students using ChatGPT, either. In a study.com survey of 200 K-12 teachers, 26% had already caught a student cheating with this tool. In a BestColleges survey of 1,000 current undergraduate and graduate students (March 2023), 50% admitted to using AI for some portion of an assignment, 30% for the majority of one, and 17% to having “used it to complete an assignment and turn it in with no edits.”
Soon, publications like Forbes and Business Insider began pushing out articles about rampant cheating, and the internet was buzzing. A chatbot “cheating scandal” was reported in an elite program at a Florida high school. But probably the most notorious episode involved a student who used this bot to write an essay for his Ethics and Artificial Intelligence course. Sadly and ironically, the student did not really understand the point of the assignment.
Incorporating ChatGPT in the Classroom
According to a Gizmodo article, many school districts have banned ChatGPT, including those in New York City, Los Angeles, Seattle, and Fairfax County, Virginia.
But there is still a growing body of teachers who aren’t that concerned. Many don’t want to ban ChatGPT altogether. Eliminating this tool from educational settings, they caution, will do far more harm than good. Instead, they argue that teachers must set clearer expectations about writing and cheating. They should also create ingenious assignments that students can’t hack with their ChatGPT writing coach, as well as design learning activities that reveal this tool’s limitations.
Others have suggested that the real problem is teachers relying on methods of assessment that are too ChatGPT-cheatable: weighty term papers and final exams. Teachers may need to rethink their testing strategies, or as that student from the Chronicle asserted, “[M]assive structural change is needed if our schools are going to keep training students to think critically.”
Sam Altman, CEO of OpenAI, also doesn’t agree with all the hand-wringing about ChatGPT cheating. He blithely suggested that schools need to “get over it.”
Generative text is something we all need to adapt to . . . . We adapted to calculators and changed what we tested for in math class, I imagine. This is a more extreme version of that, no doubt, but also the benefits of it are more extreme, as well.
However, I’ll save my response to Altman and other content for the follow-up(s) to this blog. I’ve reached out to my teacher friends and compiled some research. Part II of ChatGPT, then, will summarize strategies for embedding ChatGPT in the classroom.
Fellow writers, I might also suggest tips for using AI assistants ethically, so that, yes, robots don’t take away your jobs.