
MTU’s GI Science Program Promotes Data-Driven, Yet Inclusive Solutions

Through his workshop on drones, Parth Bhatt helped bring GI Science to Suriname.

Bringing GI Science to Suriname

Dr. Parth Bhatt, an assistant teaching professor and researcher in the College of Forest Resources and Environmental Science (CFRES), lives and breathes Geographic Information Science. Bhatt, a team of researchers, and other MTU representatives recently returned from Suriname, South America. There, they led an immersive, three-day workshop in Forest Field Research Methods at Anton de Kom University of Suriname’s Centre for Agricultural Research (CELOS).

Suriname, endowed with vast tropical rainforests and rich biodiversity, faces several pressing technological, environmental, and socio-political challenges. The country’s geography also makes it vulnerable to the effects of climate change, such as severe flooding and storms.

There is also more obvious human-caused damage to Suriname’s delicate ecosystem. Between 2019 and 2022, artisanal and small-scale gold mining (ASGM) increased by 47%. This growth led to significant deforestation and environmental degradation: the region lost approximately 25 square kilometers of rainforest. Suriname’s remoteness further complicates regular data collection, hindering effective policy development and environmental protection efforts.

Exacerbating these issues is a serious skills gap. As Bhatt acknowledges, “a major challenge [Suriname] faces is a shortage of highly trained professionals to help manage and preserve these resources effectively. Strengthening educational and research collaborations can help bridge this gap by providing expertise in conservation, remote sensing, and sustainable resource management.”

In Suriname, Bhatt and the rest of the team worked to bridge this gap. While there, Bhatt led workshops on using drones to collect geospatial data in the country’s rainforests. This hands-on experience with UAVs (unmanned aerial vehicles) exemplifies the benefits of applying emerging technologies to natural resource management.

Ongoing Challenges in Geographical Information Science

Most obviously, these workshops demonstrated how Geographic Information Science provides practical approaches for managing natural resources. To Bhatt, though, these technologies “are more than just tools—they’re gateways to understanding our world in ways that truly matter.”

Bhatt’s online certificates, offered through CFRES, certainly help with this understanding. Their coursework addresses the complexities of applying GI Science to natural resource management in the US, Suriname, and beyond.

As an example, let’s take Dr. Bhatt’s inaugural online certificate from Michigan Tech Global Campus: Foundations in Geographic Information Science for Natural Resources.

GI Science Challenge #1: Working with Variable Data Sets

Data sets often vary in resolution, format, projection, and accuracy. This is especially true when researchers combine historical data with newer sources (e.g., satellite vs. drone imagery). Because of these variations, it is often difficult to model ecosystems reliably or to make consistent decisions across jurisdictions and time spans.

Furthermore, when it comes to geospatial information, there are additional difficulties in handling the volume, variety, and velocity of data. GI scientists must contend with a stream of heterogeneous data from sensors, satellites, smartphones, and social media. And they must collect and streamline this data while also producing real-time analytics and visualizations.

GI Science Challenge #2: Contending with Uneven Data Quality and Uncertainty

To complicate things further, geographic data often come from multiple sources. Researchers must juggle information from satellites, GPS, surveys, user-generated content (e.g., OpenStreetMap), and government records. Each source may differ in accuracy, resolution, update frequency, and metadata standards, leading to uneven quality and results. For instance, combining high-resolution satellite imagery with outdated census data might produce misleading conclusions in a land-use change analysis.

There is also the problem of uncertainty and inconsistency in spatial data. This problem is especially tricky when boundaries or attributes are interpreted subjectively (e.g., informal settlement boundaries). Inconsistent quality can also result from human error, different measurement techniques, and varying classification systems.

Classification, for instance, is variable. Organizations, datasets, and researchers might categorize geographic features differently, even when referring to the same types of objects or areas. One land-cover dataset might classify land as “forest,” “urban,” “agriculture,” and “water.” Another might use these categories: “deciduous forest,” “coniferous forest,” “low-density urban,” “high-density urban,” and “irrigated cropland.”
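To see how analysts reconcile such mismatches in practice, here is a minimal sketch in Python of a crosswalk that collapses a detailed land-cover scheme into a coarser, shared one. (The class names and mappings are hypothetical, not drawn from any particular dataset.)

```python
# Crosswalk from a detailed land-cover scheme to a coarse one.
# All labels and mappings here are hypothetical.
CROSSWALK = {
    "deciduous forest": "forest",
    "coniferous forest": "forest",
    "low-density urban": "urban",
    "high-density urban": "urban",
    "irrigated cropland": "agriculture",
}

def harmonize(labels):
    """Map detailed labels onto the coarse scheme.

    Labels without a crosswalk entry are flagged rather than guessed,
    since silent reclassification is itself a source of error.
    """
    return [CROSSWALK.get(label, "UNRESOLVED: " + label) for label in labels]

print(harmonize(["coniferous forest", "irrigated cropland", "wetland"]))
# ['forest', 'agriculture', 'UNRESOLVED: wetland']
```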

FW5550 (Geographic Information Science and Spatial Analysis)

Some of the course’s key topics address these challenges.

  • Metadata Standards and Quality Assessment: FW5550 emphasizes understanding metadata, particularly what they reveal about a dataset’s provenance, processing, and reliability.
  • Spatial Data Models and Structures: Students learn how different types of spatial data (raster vs. vector, continuous vs. discrete) are structured, so that they can recognize the limitations and strengths of each. This skill is crucial when merging data from multiple sources that have inconsistent formats or resolutions.
  • Data Integration and Overlay Analysis: The course stresses combining datasets from multiple origins. It addresses inconsistencies in classification systems, temporal mismatches, and spatial resolution, and it covers practical techniques of reclassification, resampling, and transformation (sketched in the example just after this list).
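As one illustration of what resampling involves, here is a minimal sketch using the open-source rasterio library. The file name and scale factor are hypothetical, and this is an analogy to the course’s techniques rather than actual course material.

```python
import rasterio
from rasterio.enums import Resampling

# Resample a (hypothetical) categorical land-cover raster to twice its
# resolution so it can be overlaid on a finer-grained dataset.
with rasterio.open("coarse_landcover.tif") as src:
    scale = 2  # hypothetical upsampling factor
    data = src.read(
        out_shape=(src.count, src.height * scale, src.width * scale),
        # Nearest-neighbor preserves categorical class codes; bilinear or
        # cubic interpolation would invent meaningless "in-between" classes.
        resampling=Resampling.nearest,
    )
    # Rescale the affine transform so each new pixel keeps correct
    # map coordinates.
    transform = src.transform * src.transform.scale(
        src.width / data.shape[-1],
        src.height / data.shape[-2],
    )
print(data.shape, transform)
```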

GI Science Challenge #3: Collecting Data in the Field

Gathering data in the real world is messy. Thus, another challenge is collecting accurate, up-to-date, and context-sensitive data in varied environments. Researchers must contend with several obstacles, such as poor signal in forests, variable terrain, and multipath interference.

Multipath interference is a common and important source of error in Geographic Information Science, particularly in GPS/GNSS data collection. It occurs when a GPS signal bounces off surfaces (buildings, water, terrain, dense forest canopies) before reaching the receiver, causing delays and inaccuracies in the position calculation. (If you’ve ever run in a dense forest with a Garmin watch that beeps out an impossibly fast 6-minute mile followed by an annoyingly slow 13-minute one, you’ve experienced this phenomenon.)

In other words, collecting data in the real world means accounting for environmental context, positional accuracy, and uncertainty. Researchers must therefore understand how to quantify and mitigate locational error in spatial datasets. This need is especially acute in high-precision applications, such as autonomous navigation: drones used in forest-fire management, for instance, must quickly get exactly where they need to be. Furthermore, field-collected data must also be integrated with other geospatial datasets: aerial/satellite imagery, census records, or remote sensing products.
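To make “quantifying locational error” concrete, here is a minimal sketch that estimates the horizontal scatter of repeated GPS fixes logged at a single spot, say, under a dense canopy. The coordinates are invented, and this is an illustration rather than FW5554 course material.

```python
import math

# Hypothetical repeated GPS fixes (lat, lon in decimal degrees) logged
# while standing still; multipath interference makes them scatter.
fixes = [
    (47.11953, -88.54621),
    (47.11961, -88.54633),
    (47.11949, -88.54615),
    (47.11958, -88.54640),
]

mean_lat = sum(lat for lat, _ in fixes) / len(fixes)
mean_lon = sum(lon for _, lon in fixes) / len(fixes)

# Equirectangular approximation: adequate over a few dozen meters.
M_PER_DEG_LAT = 111_320.0
m_per_deg_lon = M_PER_DEG_LAT * math.cos(math.radians(mean_lat))

# Root-mean-square horizontal error relative to the mean position.
sq_err = [
    ((lat - mean_lat) * M_PER_DEG_LAT) ** 2
    + ((lon - mean_lon) * m_per_deg_lon) ** 2
    for lat, lon in fixes
]
rms_m = math.sqrt(sum(sq_err) / len(sq_err))
print(f"Horizontal RMS error: {rms_m:.1f} m")
```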

How FW5554 (GPS Field Techniques) Helps Students Address the Complexities of Data Collection

This hands-on course, which focuses on GPS technology and its applications, emphasizes data collection, processing, and management. Students gain practical experience with various GPS units, learning to ensure data accuracy and quality. They also get experience integrating GPS data with GIS platforms, which is vital for working with UAVs and IoT devices.

Some of the course’s key features include the following:

  • Data Collection in the Real World: Students work with state-of-the-art handheld Trimble GPS units and industry-standard mobile applications, such as FieldMaps, Survey123, and QuickCapture (the modern GeoApps), all of which are crucial for their portfolios. They thus gain hands-on experience using GPS devices and collecting precise spatial data in challenging, obstacle-filled settings.
  • Positional Accuracy and Uncertainty: The course covers differential correction techniques and the use of real-time kinematic (RTK) positioning, which are both essential for high-accuracy mapping.
  • Integration of Field Data with Other Geospatial Data: Students learn how to format, import, and manage GPS data in GIS platforms, such as ArcGIS. The course also prepares students to handle data transformation, projection alignment, and temporal matching, which are increasingly important for multi-source data fusion in GI Science. The emphasis on GPS and mobile mapping technologies gives learners a strong base for adapting to newer geospatial tools (drones, IoT, GIS apps); a brief sketch of this import-and-reproject workflow follows the list.
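Here is a minimal sketch of that import-and-reproject workflow using the open-source geopandas library. The file names, column names, and UTM zone are hypothetical; the course itself teaches these skills in platforms such as ArcGIS.

```python
import geopandas as gpd
import pandas as pd

# Hypothetical field log: GPS points exported as a CSV with
# site_id, lat, and lon columns (WGS84 decimal degrees).
df = pd.read_csv("field_points.csv")

gdf = gpd.GeoDataFrame(
    df,
    geometry=gpd.points_from_xy(df["lon"], df["lat"]),
    crs="EPSG:4326",  # GPS receivers report positions in WGS84
)

# Reproject to a (hypothetical) project CRS, UTM zone 16N, so the field
# points and the project's imagery share one coordinate system.
gdf_utm = gdf.to_crs("EPSG:32616")
gdf_utm.to_file("field_points_utm.gpkg", driver="GPKG")
```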

The pictures below, taken from Dr. Bhatt’s trip to Suriname, represent the challenges of collecting data in the field while respecting the input of local knowledge.


GI Science Challenge #4: Ensuring Human-Centered and Participatory GI Science

Data of any kind is neither neutral nor free of bias. Therefore, one ongoing challenge for GI Science is making data collection more inclusive, especially of underrepresented communities. For inclusive GI Science to happen, though, GIS interfaces and tools must be user-friendly. If they are, participatory mapping, community engagement, and Indigenous mapping can deepen both the collection and analysis of spatial data.

How FW4545 (Map Design with GIS) Helps Make GI Science More Inclusive

This course teaches the principles of effective map-making. It also focuses on clear communication for decision-making and inclusive natural resource management. That is, students learn advanced visualization techniques to create accessible, informative maps for diverse audiences, supporting participatory approaches.

The course also treats ethical issues in GI Science, such as geoprivacy, data anonymization, equity, and bias in spatial algorithms. On the responsible use of spatial data, it highlights opportunities to empower local and Indigenous communities by integrating traditional knowledge.

GI Science Challenge #5: Addressing the Effects of Climate Change

Overall, the curriculum of Dr. Bhatt’s first online certificate, Foundations in GI Science for Natural Resources, emphasizes applying GI Science to monitor and analyze changing natural systems. By engaging with real-world datasets and case studies, students develop the skills to update and interpret GIS models. They become adept at analyzing environmental conditions, ongoing trends, and the impacts of climate change.

They also learn to integrate ecological and climatic data. In doing so, they develop comprehensive analyses and predictive models so that they can make informed decisions in natural resource management.

Integrating remote sensing techniques with GIS is also stressed. This skill is pivotal to monitoring deforestation, tracking wildlife movements, and assessing fire risks. Also, through the program’s emphasis on the societal applications of GI Science, students learn how to engage with communities, incorporate local knowledge, and support collaborative natural resource management.

GI Science at MTU: Looking Forward.

All in all, Michigan Technological University’s Online Graduate Certificate in Foundations in Geographic Information Science for Natural Resources is structured to build foundational GIS skills while addressing common technical barriers.

This certificate is just the first of three stackable certificates that will constitute Michigan Tech’s forthcoming Online Master of Geographic Information Science (MGIS) program. The subsequent certificates will delve deeper into advanced GI Science and remote sensing topics. Their content will further equip students to navigate and use modern GIS tools and technologies as they apply them to natural resource management.

Currently, Dr. Bhatt is running two courses from the first certificate this summer: FW5550 (Geographic Information Science) and FW5554 (GPS Field Techniques). In Fall 2025, three courses will be available: FW5550 and FW5554, as well as FW5553 (Python Programming for GIS). This last course comes from the soon-to-be-released second certificate: Advanced Geographic Information Science for Natural Resources.

And he’s proud of these courses and their graduates, too. He enjoys giving his students “hands-on experience with spatial technologies while exploring their real-world applications, from environmental monitoring in the forests and wetlands to solving local and global resource challenges.”

Through Michigan Tech’s global learning opportunities and hands-on programs, I’ve been able to offer a valuable education to students, which helps them not only transform curiosity into capability, but also data into meaningful change. 

Dr. Parth Bhatt

Learn More About Michigan Tech’s Online GI Science Program.

If you’re interested in diving deeper into this online program and discovering how it can align with your specific career goals or research interests, please attend our virtual (Zoom) information session.

This session, the third installment of our Third Thursday Series, will discuss admissions requirements, program details, and career trajectories. Even better: you’ll also get to meet (and introduce yourself to) the program’s main instructor and director, the dynamic Parth Bhatt.

DETAILS:

Date: Thursday, May 15, 2025

Time: 11:30 AM – 12:15 PM (ET)

Location: Zoom

ChatGPT: Friend or Foe? Maybe Both.

An image of a network to symbolize ChatGPT.

(NOTE: This article is a slightly abbreviated and edited version of a blog originally published in May 2023.)

In 2006, British mathematician and entrepreneur Clive Humby proclaimed that “data is the new oil.”

At the time, his enthusiastic (if not exaggerated) comment reflected the fervor and faith in the then-expanding internet economy. And his metaphor had some weight, too. Like oil, data can be collected (or perhaps one should say extracted), refined, and sold. Both are in high demand. And just as the inappropriate or excessive use of oil has deleterious effects on the planet, so may the reckless use of data.

Recently, the newest oil concerning many, one that is shaking up the knowledge workplace, is ChatGPT. Released by OpenAI in November 2022, ChatGPT combines chatbot functionality with a very clever language model. To be more precise, the GPT in its name stands for Generative Pre-trained Transformer.

Global Campus previously published a blog about robots in the workplace. One of the concerns raised then was that of AI taking away our jobs. But perhaps, now, the even bigger concern is AI doing our writing, generating our essays, or even our TV show scripts. That is, many are worried about AI substituting for both our creative and critical thinking.

Training Our AI Writing Helper

ChatGPT is not an entirely new technology. Experts have long integrated large language models into customer service chatbots, Google searches, and autocomplete email features. The ChatGPT of today is an updated version of GPT-3, which has been around since 2020. But ChatGPT’s origins go further back. Almost 60 years ago, MIT’s Joseph Weizenbaum rolled out ELIZA, the first chatbot. Named after Eliza Doolittle, this chatbot mimicked a Rogerian therapist by (perhaps annoyingly) rephrasing statements as questions. If someone typed, for instance, “My father hates me,” it would reply with another question: “Why do you say your father hates you?” And so on.

The current ChatGPT’s immense knowledge and conversational ability are indeed impressive. To acquire these skills, ChatGPT was “trained on huge amounts of data from the Internet, including conversations.” An encyclopedia of text-based data was combined with a “machine learning technique called Reinforcement Learning from Human Feedback (RLHF),” in which human trainers provided the model with conversations in which they played both the AI chatbot and the user. In other words, this bot read a lot of text and practiced mimicking human conversations. Its responses, nonetheless, are based not on knowing the answers, but on predicting which words will come next in a sequence.
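To make “predicting which words will come next” concrete, here is a toy sketch of next-word prediction: a bigram counter that always picks the most frequent follower. ChatGPT’s neural network is vastly more sophisticated and is trained on far more text, but the underlying task is the same.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then predict the
# most frequent follower. A toy stand-in for next-token prediction.
corpus = "the cat sat on the mat and the cat slept".split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def predict_next(word):
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'cat' -- it follows 'the' twice, 'mat' once
```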

The result of this training is a chatbot that is almost indistinguishable from a human voice. And it’s getting better, too. As the chatbot engages with more users, its tone and conversations become increasingly lifelike (OpenAI).

Using ChatGPT for Mundane Writing Tasks

Many have used, tested, and challenged ChatGPT. Although one can’t say for certain that the bot always admits its mistakes, it definitely rejects inappropriate requests. It will deliver some clever pick-up lines. However, it won’t provide instructions for cheating on your taxes or on your driver’s license exam. And if you ask it what happens after you die, it is suitably dodgy.

But what makes ChatGPT so popular, and some would say dangerous, is the plethora of text-based documents it can produce, such as the following:

  • Long definitions
  • Emails and letters
  • Scripts for podcasts and videos
  • Speeches
  • Basic instructions
  • Quiz questions
  • Discussion prompts
  • Lesson plans
  • Learning objectives
  • Designs for rubrics
  • Outlines for reports and proposals
  • Summaries of arguments
  • Press releases
  • Essays

And this is only a short list of its talents. People have used this friendly bot to construct emails to students, quiz questions, and definitions. The internet is also awash with how-to articles on using ChatGPT to write marketing copy, novels, and speeches. Noy and Zhang even claim that this “generative writing tool increases the output quality of low-ability workers while reducing their time spent, and it allows high-ability workers to maintain their quality standards while becoming significantly faster.”

Below are examples of two onerous writing tasks assigned to ChatGPT: a reference letter and learning goals.

ChatGPT’s overly wordy reference letter.
Learning goals created by ChatGPT after it was instructed to use Bloom’s taxonomy for a Sci-Fi course.

Recognizing ChatGPT’s Limited Knowledge

Despite helping writers with mundane tasks, this artificial intelligence helper does have its limitations. First of all, it is only as wise as its instructions. For instance, the effusive reference letter above resulted from its having received no guidance about length or tone. ChatGPT just threw everything into the written soup.

This AI helper also makes mistakes. In fact, right on the first page, OpenAI honestly admits that its chatbot “may occasionally generate incorrect information, and produce harmful instructions or biased content.” It also has “limited knowledge of the world and events after 2021.”

And it reveals these gaps, often humorously.

For instance, when prodded to provide information on several well-known professors from various departments, it came back with wrong answers. In fact, it actually misidentified one well-known department chair as a Floridian famous for his philanthropy and footwear empire. In this case, ChatGPT not only demonstrated “limited knowledge of the world” but also incorrect information. As academics, writers, and global citizens, we should be concerned about releasing more fake information into the world.

Taking these and other errors into consideration, one wonders what data, exactly, ChatGPT was trained on. Did it, for instance, just skip over universities? Academics? Respected academics with important accomplishments? As we know, what the internet prioritizes says a lot about what it and its users value.

Creating Errors

There are other limitations. OpenAI’s ChatGPT can’t write a self-reflection or decent poetry. And because it is not connected to the live internet, it cannot summarize recent online content.

It also can’t approximate the tone of this article, which shifts among formal, informal, and colloquial registers. Nor can it whimsically insert allusions or pop culture references.

To compensate for its knowledge gaps, ChatGPT generates answers that are incorrect or only partially correct.

In the case of generating mistakes, ChatGPT does mimic the human tendency to fumble, to tap dance around an answer, and to make up material rather than humbly admit ignorance.

Passing Along Misinformation

Because it was trained on text-based data, which might have been incorrect in the first place, ChatGPT often passes this fakery along. That is, it has a tendency (as the example above shows) to fabricate references and quotations.

It can also spread misinformation. (Misinformation, unintentional false or inaccurate information, is different from disinformation: the intentional spread of untruths to deceive.)

The companies CNET and Bankrate discovered this glitch the hard way. For months, they had been duplicitously publishing AI-generated informational articles as human-written ones under a byline. When this unethical behavior came to light, it drew the ire of the internet.

CNET’s stories even contained both plagiarism and factual mistakes, or what Jon Christian at Futurism called “bone-headed errors.” Christian humorously drew attention to mathematical mistakes delivered with all the panache of a financial advisor. For instance, one article claimed that “if you deposit $10,000 into a savings account that earns 3% interest compounding annually, you’ll earn $10,300 at the end of the first year.” In reality, you’d earn only $300.
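A two-line check of the compound-interest arithmetic shows the gap:

```python
principal, rate = 10_000, 0.03

balance = principal * (1 + rate)  # 10300.0: the account balance after one year
interest = balance - principal    # 300.0: what you actually earn
print(f"balance = {balance:.2f}, interest earned = {interest:.2f}")
```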

All three screwups. . . . highlight a core issue with current-generation AI text generators: while they’re legitimately impressive at spitting out glib, true-sounding prose, they have a notoriously difficult time distinguishing fact from fiction.

Jon Christian

Revealing Biases

And ChatGPT is not unbiased either. For one, this bot has a strong US leaning. When prompted to write about the small town of Wingham, Ontario, for instance, it generated some sunny, nondescript prose. However, it omitted the town’s biggest claim to fame: it is the birthplace of Nobel Prize-winning author Alice Munro.

This bias stems from ChatGPT having been trained on data pulled from the internet. Thus, it reflects the prejudices of those who wrote and compiled that information.

Safiya Umoja Noble’s Algorithms of Oppression exposes the biased algorithms on which tools like ChatGPT were trained.

This problem was best articulated by Safiya Umoja Noble in her landmark book Algorithms of Oppression.

In this text, she challenges the idea that search engines are value-neutral, exposing their hegemonic norms and the consequences of their sexist and racist biases. ChatGPT, to be sure, is also affected by, if not infected with, these biases.

What really made me lose confidence in ChatGPT is when I asked if the United States ever had a president with African ancestry, and it answered no, then apologized after I reminded the chatbot about Barack Obama.

Jamaal Abdul-Alim, Education Editor, The Conversation

Despite agreeing with Noble’s and Abdul-Alim’s very serious concerns, and conceding that ChatGPT can be remarkably dumb at times, many may not want to smash the algorithmic machines anytime soon. Furthermore, there are writers who do use this bot to generate correct definitions of unfamiliar technical terms encountered in their work. For instance, it can help non-experts understand the basics of concepts such as computational fluid dynamics and geospatial engineering. Still, many professionals choose not to rely on it, or trust it, too much.

Letting Robots Do Your Homework

But it is students’ trust in and reliance on OpenAI’s chatbot that is causing chaos and consternation in the education world.

That is, many 2022 cases of cheating were connected to one of this bot’s most popular features: its impressive ability to generate essays in seconds. For instance, it constructed a 7-paragraph comparison/contrast essay on Impressionism and Post-Impressionism in under a minute.

And the content of this essay, though vague, does hold some truth: “Impressionism had a profound impact on the art world, challenging traditional academic conventions. Its emphasis on capturing the fleeting qualities of light and atmosphere paved the way for modern art movements. Post-impressionism, building upon the foundations of impressionism, further pushed the boundaries of artistic expression. Artists like Georges Seurat developed the technique of pointillism, while Paul Gauguin explored new avenues in color symbolism. The post-impressionists’ bold experimentation influenced later art movements, such as fauvism and expressionism.”

With a few modifications and some fact-checking, this text would fit comfortably into an introductory art textbook. Or maybe into a high-school or college-level essay.

Sounding the Alarm About ChatGPT

Very shortly after people discovered this essay-writing feature, stories of academic integrity violations flooded the internet. An instructor at an R1 STEM grad program confessed that several students had cheated on a project report milestone. “All 15 students are citing papers that don’t exist.” An alarming article from The Chronicle of Higher Education, written by a student, warned that educators had no idea how much students were using AI. The author rejected the claim that AI’s voice is easy to detect. “It’s very easy to use AI to do the lion’s share of the thinking while still submitting work that looks like your own.”

And it’s not just a minority of students using ChatGPT either. In a study.com survey of 200 K-12 teachers, 26% had already caught a student cheating by using this tool. In a BestColleges survey of 1,000 current undergraduate and graduate students (March 2023), 50% of students admitted to using AI for some portion of their assignment, 30% for the majority, and 17% had “used it to complete an assignment and turn it in with no edits.”

Soon after publications like Forbes and Business Insider began pushing out articles about rampant cheating, the internet was buzzing. An elite program at a Florida high school reported a chatbot “cheating scandal.” But probably the most notorious episode involved a student who used this bot to write an essay for his Ethics and Artificial Intelligence course. Sadly, the student did not really understand the point of the assignment.

Incorporating ChatGPT in the Classroom

According to a Gizmodo article, many school systems have banned ChatGPT, including those in New York City, Los Angeles, Seattle, and Fairfax County, Virginia.

But there is still a growing body of teachers who aren’t that concerned. Many don’t want to ban ChatGPT altogether. Eliminating this tool from educational settings, they caution, will do far more harm than good. Instead, they argue that teachers must set clearer expectations about writing and cheating. They should also create ingenious assignments that students can’t hack with their ChatGPT writing coach, as well as construct learning activities that reveal this tool’s limitations.

Others have suggested that the real problem is teachers’ reliance on methods of assessment that are too ChatGPT-hackable: weighty term papers and final exams on generic topics. Teachers may need to rethink their testing strategies, or, as that student from the Chronicle asserted, “[M]assive structural change is needed if our schools are going to keep training students to think critically.”

Sam Altman, CEO of OpenAI, also doesn’t agree with all the hand-wringing about ChatGPT cheating. He blithely suggested that schools need to “get over it.”

Generative text is something we all need to adapt to . . . . We adapted to calculators and changed what we tested for in math class, I imagine. This is a more extreme version of that, no doubt, but also the benefits of it are more extreme, as well.

Sam Altman

Read MTU’s own Rod Bishop’s guidance on ChatGPT in the university classroom. And think about your stance on this little AI writing helper.
