
ChatGPT: Friend or Foe? Maybe Both.

An image of a network to symbolize ChatGPT.

(NOTE: This article is a slightly abbreviated and edited version of a blog originally published in May 2023.)

In 2006, British mathematician and entrepreneur Clive Humby proclaimed that “data is the new oil.”

At the time, his enthusiastic (if not exaggerated) comment reflected the fervor and faith in the then-expanding internet economy. And his metaphor had some weight, too. Like oil, data can be collected (or maybe one should say extracted), refined, and sold. Both are in high demand, and just as the inappropriate or excessive use of oil has deleterious effects on the planet, so may the reckless use of data.

Recently, the newest oil, the one now concerning many and shaking up the knowledge workplace, is ChatGPT. Released by OpenAI in November 2022, ChatGPT combines chatbot functionality with a very clever language model. Or to be more precise, the GPT in its name stands for Generative Pre-trained Transformer.

Global Campus previously published a blog about robots in the workplace. One of the concerns raised then was that of AI taking away our jobs. But perhaps, now, the even bigger concern is AI doing our writing, generating our essays, or even our TV show scripts. That is, many are worried about AI substituting for both our creative and critical thinking.

Training Our AI Writing Helper

ChatGPT is not an entirely new technology. That is, experts have long integrated large language models into customer service chatbots, Google searches, and autocomplete e-mail features. The ChatGPT of today is an updated version of GPT-3, which has been around since 2020. But ChatGPT’s origins go further back. Almost 60 years ago, MIT’s Joseph Weizenbaum rolled out ELIZA: the first chatbot. Named after Eliza Doolittle, this chatbot mimicked a Rogerian therapist by (perhaps annoyingly) rephrasing statements as questions. If someone typed, for instance, “My father hates me,” it would reply with another question: “Why do you say your father hates you?” And so on.
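For the curious, here is a minimal, purely illustrative Python sketch of the pronoun-swapping echo trick at ELIZA’s core (Weizenbaum’s actual script was considerably more elaborate, and the helper below is invented for the example):

```python
# A toy sketch of ELIZA's trick: swap the pronouns in the user's
# statement and echo it back as a question.
PRONOUN_SWAPS = {"my": "your", "i": "you", "me": "you", "am": "are"}

def eliza_reply(statement: str) -> str:
    words = statement.lower().strip(".!?").split()
    swapped = " ".join(PRONOUN_SWAPS.get(word, word) for word in words)
    return f"Why do you say {swapped}?"

print(eliza_reply("My father hates me."))
# -> Why do you say your father hates you?
```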

The current ChatGPT’s immense knowledge and conversational ability are indeed impressive. To acquire these skills, ChatGPT was “trained on huge amounts of data from the Internet, including conversations.” An encyclopedia of text-based data was combined with a “machine learning technique called Reinforcement Learning from Human Feedback (RLHF),” a technique in which human trainers provided the model with conversations in which they played both the AI chatbot and the user. In other words, this bot read a lot of text and practiced mimicking human conversations. Its responses, nonetheless, are not based on knowing the answers, but on predicting what words will come next in a series.
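To make that last point concrete, here is a toy sketch of next-word prediction in Python. It is nothing like the transformer under ChatGPT’s hood, and the tiny corpus is invented, but it shows the basic move of picking a likely next word from what came before:

```python
from collections import Counter, defaultdict

# A toy next-word predictor: count which word follows which in a tiny
# corpus, then always emit the most frequent successor. ChatGPT is
# vastly more sophisticated, but the underlying move is the same:
# score candidate next words and pick among the likeliest.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word: str) -> str:
    return successors[word].most_common(1)[0][0]

word = "the"
for _ in range(4):
    print(word, end=" ")
    word = predict_next(word)
# -> the cat sat on
```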

The result of this training is a chatbot almost indistinguishable from a human voice. And it’s getting better, too. As the chatbot engages with more users, its tone and conversations become increasingly life-like (OpenAI).

Using ChatGPT for Mundane Writing Tasks

Many have used, tested, and challenged ChatGPT. Although one can’t say for certain that the bot always admits its mistakes, it definitely rejects inappropriate requests. It will deliver some clever pick-up lines. However, it won’t provide instructions for cheating on your taxes or on your driver’s license exam. And if you ask it what happens after you die, it is suitably dodgy.

But what makes ChatGPT so popular, and some would say dangerous, is the plethora of text-based documents it can produce, such as the following:

  • Long definitions
  • Emails and letters
  • Scripts for podcasts and videos
  • Speeches
  • Basic instructions
  • Quiz questions
  • Discussion prompts
  • Lesson plans
  • Learning objectives
  • Designs for rubrics
  • Outlines for reports and proposals
  • Summaries of arguments
  • Press releases
  • Essays

And this is only a short list of its talents. That is, there are people who have used this friendly bot to construct emails to students, quiz questions, and definitions. The internet is also awash with how-to articles on using ChatGPT to write marketing copy, novels, and speeches. Noy and Zhang even claim that this “generative writing tool increases the output quality of low-ability workers while reducing their time spent, and it allows high-ability workers to maintain their quality standards while becoming significantly faster.”

Below are examples of two onerous writing tasks assigned to ChatGPT: a reference letter and learning goals.

An example of a very wordy reference letter generated by ChatGPT.

An example of learning goals created by ChatGPT after it was instructed to use Bloom’s taxonomy for a Sci-Fi course.

Recognizing ChatGPT’s Limited Knowledge

Despite helping writers with mundane tasks, this artificial intelligence helper does have its limitations. First of all, it is only as wise as its instructions. For instance, the effusive reference letter above resulted from it having no guidance about length or tone. ChatGPT just threw everything in the written soup.

This AI helper also makes mistakes. In fact, right on the first page, OpenAI honestly admits that its chatbot “may occasionally generate incorrect information, and produce harmful instructions or biased content.” It also has “limited knowledge of the world and events after 2021.”

And it reveals these gaps, often humorously.

For instance, when prodded to provide information on several well-known professors from various departments, it came back with wrong answers. In fact, it actually misidentified one well-known department chair as a Floridian famous for his philanthropy and footwear empire. In this case, ChatGPT not only demonstrated “limited knowledge of the world” but also incorrect information. As academics, writers, and global citizens, we should be concerned about releasing more fake information into the world.

Taking into consideration these and other errors, one wonders what data, exactly, ChatGPT was trained on. Did it, for instance, just skip over universities? Academics? Respected academics with important accomplishments? As we know, what the internet prioritizes says a lot about what it and its users value.

Creating Errors

There are other limitations. OpenAI’s ChatGPT can’t write a self-reflection or decent poetry. And because it is not connected to the live internet, it cannot summarize recent online content.

It also can’t approximate the tone of this article, which shifts between formal and informal and colloquial. Or whimsically insert allusions or pop culture references.

To compensate for its knowledge gaps, ChatGPT generates answers that are incorrect or only partially correct.

In the case of generating mistakes, ChatGPT does mimic the human tendency to fumble, to tap dance around an answer, and to make up material rather than humbly admit ignorance.

Passing Along Misinformation

Being trained on text-based data, which might have been incorrect in the first place, ChatGPT often passes this fakery along. That is, it also (as the example above shows) has a tendency to generate or fabricate fake references and quotations.

It can also spread misinformation. (Misinformation, unintentional false or inaccurate information, is different from disinformation: the intentional spread of untruths to deceive.)

The companies CNET and Bankrate discovered this glitch the hard way. For months, they had been duplicitously publishing AI-generated informational articles as human-written articles under a byline. When this unethical behavior was discovered, it drew the ire of the internet.

CNET’s stories even contained both plagiarism and factual mistakes, or what Jon Christian at Futurism called “bone-headed errors.” Christian humorously drew attention to mathematical mistakes that were delivered with all the panache of a financial advisor. For instance, the article claimed that “if you deposit $10,000 into a savings account that earns 3% interest compounding annually, you’ll earn $10,300 at the end of the first year.” In reality, you’d be earning only $300.

All three screwups. . . . highlight a core issue with current-generation AI text generators: while they’re legitimately impressive at spitting out glib, true-sounding prose, they have a notoriously difficult time distinguishing fact from fiction.

Jon Christian
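The arithmetic is easy to check. A few lines of Python confirm that $10,300 is the year-end balance, not the interest earned:

```python
# Checking the arithmetic behind CNET's claim: $10,000 at 3% interest,
# compounded annually, for one year.
principal = 10_000
rate = 0.03

balance = principal * (1 + rate)       # balance at the end of year one
interest_earned = balance - principal  # what you actually "earn"

print(f"Year-end balance: ${balance:,.2f}")         # $10,300.00
print(f"Interest earned:  ${interest_earned:,.2f}") # $300.00
```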

Revealing Biases

And ChatGPT is not unbiased, either. First, this bot has a strong US leaning. For instance, it was prompted to write about the small town of Wingham, ON. In response, it generated some sunny, nondescript prose. However, it omitted this town’s biggest claim to fame: being the birthplace of Nobel Prize-winning author Alice Munro.

This bias stems from ChatGPT being trained on data pulled from the internet. Thus, it reflects all the prejudices of those who wrote and compiled this information.

Noble’s exposé of the corrupt algorithms that shaped the data ChatGPT was trained on.

This problem was best articulated by Safiya Umoja Noble in her landmark book Algorithms of Oppression.

In this text, she challenges the idea that search engines are value-neutral, exposing their hegemonic norms and the consequences of their various sexist, racist biases. ChatGPT, to be sure, is also affected by, if not infected with, these biases.

What really made me lose confidence in ChatGPT is when I asked if the United States ever had a president with African ancestry, and it answered no, then apologized after I reminded the chatbot about Barack Obama.

Jamaal Abdul-Alim, Education Editor, The Conversation

Despite agreeing with Noble’s and Abdul-Alim’s very serious concerns, and thinking that ChatGPT can be remarkably dumb at times, many may not want to smash the algorithmic machines anytime soon. Furthermore, there are writers who do use this bot to generate correct definitions of unfamiliar technical terms encountered in their work. For instance, it can help non-experts understand the basics of such concepts as computational fluid dynamics and geospatial engineering. Still, many professionals choose not to rely on it, or trust it, too much.

Letting Robots Do Your Homework

But it is students’ trust in and reliance on ChatGPT that is causing chaos and consternation in the education world.

That is, many recent cases of cheating have been connected to one of this bot’s most popular features: its impressive ability to generate essays in seconds. For instance, it constructed a 7-paragraph comparison/contrast essay on Impressionism and Post-Impressionism in under a minute.

And the content of this essay, though vague, does hold some truth: “Impressionism had a profound impact on the art world, challenging traditional academic conventions. Its emphasis on capturing the fleeting qualities of light and atmosphere paved the way for modern art movements. Post-impressionism, building upon the foundations of impressionism, further pushed the boundaries of artistic expression. Artists like Georges Seurat developed the technique of pointillism, while Paul Gauguin explored new avenues in color symbolism. The post-impressionists’ bold experimentation influenced later art movements, such as fauvism and expressionism.”

With a few modifications and some fact-checking, this text would fit comfortably into an introductory art textbook. Or maybe a high-school or a college-level essay.

Sounding the Alarm About ChatGPT

Very shortly after people discovered this essay-writing feature, stories of academic integrity violations flooded the internet. An instructor at an R1 STEM grad program confessed that several students had cheated on a project report milestone. “All 15 students are citing papers that don’t exist.” An alarming article from The Chronicle of Higher Education, written by a student, warned that educators had no idea how much students were using AI. The author rejected the claim that AI’s voice is easy to detect. “It’s very easy to use AI to do the lion’s share of the thinking while still submitting work that looks like your own.”

And it’s not just a minority of students using ChatGPT either. In a study.com survey of 200 K-12 teachers, 26% had already caught a student cheating by using this tool. In a BestColleges survey of 1,000 current undergraduate and graduate students (March 2023), 50% of students admitted to using AI for some portion of their assignment, 30% for the majority, and 17% had “used it to complete an assignment and turn it in with no edits.”

Soon after publications like Forbes and Business Insider began pushing out articles about rampant cheating, the internet was buzzing. An elite program in a Florida high school reported a chatbot “cheating scandal.” But probably the most notorious episode was a student who used this bot to write an essay for his Ethics and Artificial Intelligence course. Sadly, the student did not really understand the point of the assignment.

Incorporating ChatGPT in the Classroom

According to a Gizmodo article, many schools have forbidden ChatGPT, such as those in New York City, Los Angeles, Seattle, and Fairfax County, Virginia.

But there is still a growing body of teachers who aren’t that concerned. Many don’t want to ban ChatGPT altogether. Eliminating this tool from educational settings, they caution, will do far more harm than good. Instead, they argue that teachers must set clearer writing expectations about cheating. They should also create ingenious assignments that students can’t hack with their ChatGPT writing coach, as well as construct learning activities that reveal this tool’s limitations.

Others have suggested that the real problem is that of teachers relying on methods of assessment that are too ChatGPT-hackable: weighty term papers and final exams on generic topics. Teachers may need to rethink their testing strategies, or as that student from the Chronicle asserted, “[M]assive structural change is needed if our schools are going to keep training students to think critically.”

Sam Altman, CEO of OpenAI, also doesn’t agree with all the hand-wringing about ChatGPT cheating. He blithely suggested that schools need to “get over it.”

Generative text is something we all need to adapt to . . . . We adapted to calculators and changed what we tested for in math class, I imagine. This is a more extreme version of that, no doubt, but also the benefits of it are more extreme, as well.

Sam Altman

Read MTU’s own Rod Bishop’s guidance on ChatGPT in the university classroom. And think about your stance on this little AI writing helper.


Robots in the Workplace

Two large orange robotic arms in a factory setting.

Robots at Work

A robotic guard dog (or robodog) stationed in an abandoned warehouse relentlessly chases intruders across a barren, post-apocalyptic landscape. Armed with tracking weapons, highly sophisticated sensors, and artificial intelligence, this robodog does not give up its hunt easily.

To avoid spoilers, that is about all I will say about “Metalhead,” the fifth, and arguably, most terrifying episode of season one of the series Black Mirror. Although many have debated the episode’s meaning, one possible interpretation is a gruesome picture of what might happen if evolved, intelligent, unchecked robots ruled the workplace. And if they took their jobs, well, maybe a little too seriously.

The good news is that there are currently no rogue robodogs guarding warehouses and going on killing sprees. However, robots have been in industry for more than half a century. The effects of this integration, though certainly less sinister, have troubled a few. That is, one of the most popular Google searches is this question, or variations of it: “Will robots take our jobs?”

The answer is complicated: yes, no, and they already have. And the situation might be better or worse than you think.

Making Manufacturing Easier

When many of us contemplate robots in the workplace, we might think of Amazon. This company operates over 100,000 robots on its various warehouse floors. Autonomous mobile robots (AMRs) pick, sort, and transport orders; robotic arms pack items; and autonomous ground vehicles navigate the huge warehouses.

However, on the global stage, Amazon is somewhat of a bit player. Foxconn, a Taiwanese electronics manufacturer, currently has over 300,000 robots in use for assembling its products. These robots help create phones, computers, tablets, and gaming consoles for companies such as Amazon, Microsoft, and Samsung.

But the electronics industry was not the first to integrate robots into the workplace: the automotive industry was. It took a chance on and then popularized the first industrial robot: Unimate.

Unimate was the creation of Joseph Engelberger, whom many call the father of robotics. Inspired by Isaac Asimov and his vision of robot helpers, Engelberger strove to create robots that would improve manufacturing while making workers’ lives easier.

In 1959, General Motors installed the first Unimate prototype, Unimate #001, in its Trenton, New Jersey plant. Weighing a whopping 2,700 pounds, this robot’s primary job was die casting.

The original Unimate, the first industrial robot.

And only a decade later, GM’s rebuilt factory in Lordstown, Ohio, housed an army of spot-welding robots. These robots could build 110 cars an hour, which was double the manufacturing rate at that time.

Choosing the Right Robots for the Job (or Jobs)

An automated machine that does just one thing is not a robot. It is simply automation. A robot should have the capability of handling a range of jobs at a factory.

Joseph Engelberger

Perhaps Engelberger’s dream is best satisfied by articulated robots, equipped for several jobs. With their flexibility, dexterity, and reach, these robots are adept at assembling, packaging, palletizing, welding, and more. Palletizing robots perform perhaps one of the most annoying and dangerous tasks in a warehouse environment: stacking stuff. These hefty robotic arms spend all day neatly piling items onto pallets.

Other common robots include the SCARA (Selective Compliance Articulated Robot Arm), which performs actions between two parallel planes or assembles vertical products. Delta robots (spider robots) excel at high-speed actions involving light loads.

And then there are Cartesian robots, or gantry robots. They “have an overhead structure that controls the motion in the horizontal plane and a robotic arm that actuates motion vertically. They can be designed to move in x-y axes or x-y-z axes. The robotic arm is placed on the scaffolding and can be moved in the horizontal plane.” A Cartesian robot also has an end effector or machine tool attached to its arm, depending on its function. This article goes into greater detail about the four types of robots that manufacturers should know and use.
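To make the independent-axis idea concrete, here is a small, purely illustrative Python sketch of a gantry-style move command. The travel limits are invented for the example, and real gantry controllers are far more involved:

```python
from dataclasses import dataclass

# A toy model of a Cartesian (gantry) robot: each axis is an
# independent linear actuator, so a move is just three coordinates
# clamped to the gantry's travel limits (values below are invented).
@dataclass
class CartesianRobot:
    x_max: float = 2.0  # meters of travel along the overhead structure
    y_max: float = 1.5
    z_max: float = 0.8  # vertical travel of the arm

    def move_to(self, x: float, y: float, z: float):
        clamp = lambda value, hi: max(0.0, min(value, hi))
        return (clamp(x, self.x_max), clamp(y, self.y_max), clamp(z, self.z_max))

robot = CartesianRobot()
print(robot.move_to(1.2, 0.4, 1.5))  # z exceeds travel -> (1.2, 0.4, 0.8)
```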

The automotive industry, like much of manufacturing, uses robots to spot-weld, pick, paint, and palletize: boring, yet dangerous jobs. Jeff Moore, Volvo’s vice president of manufacturing in the Americas, says that welding, “with all the heat and sparks and high current and things is a natural spot to be looking at where you can more heavily automate.” However, for intricate work on the assembly line, such as attaching hoods, bumpers, and fenders, “the human touch has a lot of advantages.”

Integrating Robots and Automation

But these metal workers do not just assemble cars and create heavy-duty products. Robots and automation also assist in other industries, such as in agriculture and food production.

Helping With Agriculture and Food Production

In agriculture, for instance, robots may plant, harvest, spray crops, control weeds, analyze soil, and monitor crops. And when it comes to agricultural equipment, some of the biggest players are John Deere, AGCO, CNH Industrial, and Kubota. These companies are investing in robotics and automation, as well as in tractors, drones, and data analytics, to improve efficiency and crop yield and to reduce costs. Recently, for instance, Trimble and Horsch collaborated to build an autonomous sprayer.

And in food production, robots might slice, package, and label products at a much more rapid rate than humans. For instance, the global food production and processing company Cargill heavily uses robotic automation. It invented the first robotic cattle herder. Cargill and Tyson Foods, in fact, are also moving heavily into automation and cobots for meat production.

Lucy and Ethel working on an assembly line at a chocolate factory.

In one of the more famous and humorous episodes of I Love Lucy, Lucy and Ethel get employment at a candy factory. Their job: keeping up with increasing production and quickly wrapping candy as it rolls down the belt. They fail miserably as the line picks up, shoving candy in their mouths, their pockets, and even their dresses. Well, thanks to robots, inadequately trained (and slower than ideal) humans will no longer have to keep pace by eating the profits. Their tasks might be made easier by cobots.

Recently, “cobots,” or modular, agile, collaborative robots, have been the focus of robot manufacturers. Rather than replace workers, cobots work alongside their human coworkers. Armed with sensors and sophisticated feedback equipment, cobots respond to changes in the workflow and help their human partners perform tasks accurately and safely. Some experts predict that the cobot market (currently valued at $1.1 billion) will expand to $9.2 billion by 2028.
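That forecast implies a steep growth curve. A quick back-of-the-envelope calculation in Python (assuming the $1.1 billion figure is a 2022 valuation, so six years of growth) shows the implied annual growth rate:

```python
# If the cobot market grows from $1.1 billion to $9.2 billion by 2028,
# what compound annual growth rate (CAGR) does that imply?
# (Assumes the $1.1B figure is a 2022 valuation: six years of growth.)
start_billions, end_billions, years = 1.1, 9.2, 6

cagr = (end_billions / start_billions) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # -> 42.5%
```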

Performing Tedious and Dangerous Tasks

Robots can also complete tasks that are too tedious for humans, such as inspecting pipelines or sorting items. Additionally, they can monitor and analyze data in real time, allowing workers to make better-informed decisions. In the oil and gas industry, for instance, robots inspect both pipelines and wells.

And it is not just repetitive and boring tasks, either. That is, another argument in favor of robots in the workplace is that they can perform hazardous tasks, such as working in extreme temperatures and dangerous environments, and cleaning up harmful materials.

One of the most recently developed robots that might be fit for these tasks is MARVEL, appropriately named because of its superhero abilities. MARVEL is an acronym for Magnetically Adhesive Robot for Versatile and Expeditious Locomotion.

The brainchild of a research team from the Korea Advanced Institute of Science and Technology (KAIST), this robot is equipped with magnetic foot pads that can be turned on or off.

Researchers and MARVEL at KAIST

With these specialized feet, MARVEL can rapidly climb steel walls and ceilings at speeds of 50 cm to 70 cm a second. Its design and speed make it appropriate for several tricky tasks requiring nimbleness, such as performing inspections and maintenance on high structures (bridges, buildings, ships, and transmission towers).

Imagine, for a second, MARVEL safely performing maintenance on the Houghton lift bridge while it is still operational. No need to block off one lane and slow down the flow of traffic. No need to be late for work!

Taking Our Jobs? Maybe.

We are approaching a time when machines will be able to outperform humans at almost any task. I believe that society needs to confront this question before it is upon us: if machines are capable of doing almost any work humans can do, what will humans do?

Moshe Vardi

One of the most obvious downsides to incorporating robots in the workplace is that they will lead to job losses. That is, some experts estimate that as many as 20 million jobs will be lost as companies continue to rely on automation.

Critiquing Robots and Automation

Futurist and New York Times best-selling author Martin Ford has probably been the most vocal about the negative economic and social impacts of automation and robotics.

He has written Rule of the Robots: How Artificial Intelligence Will Transform Everything (2021), Architects of Intelligence: The Truth About AI and the People Building It (2018), and Rise of the Robots: Technology and the Threat of a Jobless Future (2015).

Ford has argued that automation and robotics will result in job losses, wage stagnation, and widening inequality. These effects, which will be felt most acutely by low-skilled and middle-skilled workers, will also weaken worker bargaining power.

Cover of Martin Ford’s book Rise of the Robots.

Alleviating These Problems

But there are solutions. That is, Ford has advocated that governments should prepare for and then take steps to address the issues posed by robotics and automation. Governing bodies could provide better access to education and new job training, invest in infrastructure, promote job-sharing, and provide more generous unemployment benefits.

To alleviate inequities caused by increasing automation, Ford has urged governments to create tax incentives that encourage employers to hire people and train them in the use of robots, or for companies to invest in robots designed to complement rather than replace human workers (such as cobots). He has also supported a basic monthly income for citizens so that everyone has a decent standard of living. How would this monthly income be funded? By taxing companies that use robots, or by taxing the robots themselves.

MIT professors Erik Brynjolfsson and Andrew McAfee, who wrote The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, also summarized the second machine age and evaluated it in terms of its benefits (“bounty”) and its increasing inequality (“spread”). After stating that the spread of technology is causing greater inequality, they proposed some similar policy interventions.

Defending Robots in the Workplace

Critics of Ford, McAfee, and Brynjolfsson, such as economists Lawrence Summers and Robert Gordon, and industry expert Jeff Bezos, take a contrary perspective. They argue that robots and automation will create more jobs than they destroy. These technologies, they contend, will also lead to advanced productivity and efficiency, improved demand for goods and services, and, therefore, increased employment. Robots can also help reduce costs, which could lead to increased profits for companies and more jobs overall.

Summers takes a slightly different stand, affirming that robots could increase production and therefore benefit the economy and improve employment. However, governments should still invest in education and job training to ensure that workers have the skills needed to take advantage of the opportunities created by both automation and robotics.

Futurists at the Information Technology and Innovation Foundation (ITIF) have sung the praises of robots and automation for years. Their experts contend that robots and automation will enhance productivity and reshape global supply chains. New production systems, they claim, will bring more (not less) manufacturing work to the United States.

And then there are the numbers, which currently don’t look that frightening. According to the International Federation of Robotics, in the United States, there were only 255 robotic units per 10,000 employees. Although 47% of CEOs are investing in robots (according to a poll by Forbes, Xometry, and Zogby), robots still have only a 2% presence in industry.

Whatever the industry, it is obvious that robots can increase both efficiency and safety. They can work 24/7. They won’t tire during a 16-hour shift, get repetitive stress injuries, or have fatigue-related workplace accidents. Robots can also increase output capacity by helping American manufacturers save on utilities and worker resources, so that they can compete more effectively with offshore companies.

Preparing for an Automated and Robotic Future

Robotic arm in a lab at Michigan Tech.

This blog has just scratched the surface of robots in the workplace. That is, it didn’t discuss robotic doctors, such as the impressive da Vinci Surgical System. Also, I don’t pretend to be an expert here, just an ex-Sci-Fi teacher fascinated with the robotic present and future.

Those who want to prepare for a future in robotics and automation can learn more by taking several educational paths at Michigan Tech. MTU offers majors and minors in computer engineering, data acquisition and industrial control, electrical and computer engineering, mechanical engineering, and robotics engineering.

More specifically, there is mechatronics: a field of engineering that combines mechanical, electrical, and computer engineering to create systems that can interact with the physical world. Mechatronic systems consist of sensors, actuators, and control systems. These systems are fundamental to creating robots and other automated systems. Students in this program can also join the Robotics Systems Enterprise “to solve real-world engineering problems.”
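For a taste of what “sensors, actuators, and control systems” means in practice, here is a minimal, illustrative Python sketch of the feedback loop found in many mechatronic systems. The PID gains and the toy “plant” below are invented for the example:

```python
# A toy sensor -> controller -> actuator loop: a PID controller compares
# a sensor reading to a setpoint and computes the actuator command.
def pid_step(setpoint, measurement, state, kp=1.0, ki=0.1, kd=0.05, dt=0.01):
    error = setpoint - measurement            # sensor reading vs. target
    state["integral"] += error * dt
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

state = {"integral": 0.0, "prev_error": 0.0}
position = 0.0  # what the sensor would report

for step in range(5):
    command = pid_step(setpoint=1.0, measurement=position, state=state)
    position += command * 0.01  # toy actuator/plant response
    print(f"step {step}: position = {position:.3f}")
```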

Through Global Campus, Michigan Tech also offers several related online graduate certificates in artificial intelligence in healthcare, manufacturing engineering, the safety and security of autonomous cyber-physical systems, and security and privacy in healthcare. And if you’re interested in earning an online master’s degree, please check out our MS in Electrical and Computer Engineering or our online Mechanical Engineering programs, both MS and PhD.