
Five Ways Artificial Intelligence is Transforming Healthcare

A cellphone against an image of a heart and a robotic hand, meant to suggest the connectedness of artificial intelligence in healthcare.

In the Centre for the Fourth Industrial Revolution’s Top Emerging Technologies of 2023 report, the organization named AI-facilitated healthcare the 10th top global trend. After stressing how the COVID-19 pandemic magnified the shortcomings of healthcare systems around the globe, the report suggested that improving healthcare is one of the most important goals for AI systems. But how, exactly, can artificial intelligence help, if not transform, healthcare?

1. Providing Diagnostic Assistance

“Errors and discrepancies in radiology practice are uncomfortably common, with an estimated day-to-day rate of 3–5% of studies reported, and much higher rates reported in many targeted studies.”

Adrian P. Brady, “Error and Discrepancy in Radiology: Inevitable or Avoidable?”

By leveraging machine learning algorithms to analyze medical data and images, whether CT scans, X-rays, or retinal images, Artificial Intelligence can play a crucial role in their interpretation.

Although AI can’t replace the expertise of radiologists, it can certainly supplement it. Trained on vast data sets of diseases and anomalies, AI excels at recognizing patterns and abnormalities in medical images, often with more efficiency, speed, and consistency than its human counterparts. AI systems also maintain a steady level of performance regardless of factors like fatigue or distraction, which can affect human radiologists, so they may provide more reliable results throughout the day and across different cases. In emergency situations, this consistency could make the difference between life and death.
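To make the idea concrete, here is a minimal, hypothetical sketch (not a clinical tool) of how such a pattern recognizer is typically built: a convolutional network pretrained on ordinary photographs is adapted to classify medical images as normal or abnormal. The model choice, labels, and dummy data below are illustrative assumptions, not a description of any deployed radiology system.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on natural images, then swap its final layer
# for a two-class head: 0 = normal, 1 = abnormal.
model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
model.classifier = nn.Linear(model.classifier.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a random dummy batch; a real system would
# iterate over thousands of expert-labeled studies (CT, X-ray, retinal images).
images = torch.randn(4, 3, 224, 224)   # stand-in for a batch of preprocessed scans
labels = torch.tensor([0, 1, 0, 1])    # stand-in labels from radiologists' reads

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"Training loss on the dummy batch: {loss.item():.3f}")
```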

AI can also integrate radiological findings with electronic health records (EHRs) and other clinical data, helping to provide a more comprehensive, holistic analysis. And it can assist with quality control by flagging potential errors and inconsistencies in medical images that could lead to misdiagnoses.

2. Aiding in Drug Discovery and Development

An image of a lab worker who is using artificial intelligence to discover drugs.
Artificial intelligence can help researchers create new drugs.

AI is also transforming drug discovery and development by leveraging advanced computational techniques to analyze large datasets, predict molecular interactions, and streamline various stages of the drug development pipeline.

Algorithms can assess large data sets (genomic, proteomic, and clinical) not only to identify proteins and genes associated with diseases, but also to pinpoint targets that will respond to clinical intervention.

Machine learning can also predict the binding affinity of molecules to target proteins. This prediction allows researchers to prioritize and then test possible drug compounds. In other words, AI can help both streamline and reduce the cost of pharmaceutical research.
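As a simple illustration of this kind of screening, here is a hedged sketch (using RDKit and scikit-learn, with entirely made-up molecules and affinity values) of how a model might be trained on known compounds and then used to rank new candidates by predicted binding affinity. It is a generic example, not DeepMind’s or any company’s actual pipeline.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestRegressor

def fingerprint(smiles: str) -> np.ndarray:
    """Convert a SMILES string into a fixed-length Morgan fingerprint vector."""
    mol = Chem.MolFromSmiles(smiles)
    bits = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=1024)
    return np.array(list(bits))

# Hypothetical training data: a few small molecules with invented affinity scores.
train_smiles = ["CCO", "CC(=O)O", "c1ccccc1", "CCN(CC)CC", "CC(C)Cc1ccccc1"]
train_affinity = [5.1, 4.3, 6.0, 5.5, 7.2]   # fabricated pKd-style values

X = np.array([fingerprint(s) for s in train_smiles])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, train_affinity)

# Rank new candidate compounds so the most promising ones are tested in the lab first.
candidates = ["CCOC(=O)C", "c1ccncc1"]
scores = model.predict(np.array([fingerprint(s) for s in candidates]))
for smiles, score in sorted(zip(candidates, scores), key=lambda pair: -pair[1]):
    print(f"{smiles}: predicted affinity {score:.2f}")
```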

For instance, in 2020, DeepMind’s AI system AlphaFold made headlines by accurately predicting 3D protein structures from amino acid sequences. According to its website, AlphaFold regularly “achieves accuracy that is competitive with experiment.” Why is this accuracy important? Understanding the 3D structure of proteins assists researchers in designing drugs that can interact more precisely with specific proteins. The outcome is more effective and targeted treatments.

Artificial intelligence can also optimize the design of clinical trials by analyzing patient data, summarizing biomedical literature, identifying suitable patient populations, and predicting potential challenges. In addition, AI helps investigate drug repurposing, predict drug toxicity, and identify disease biomarkers.

3. Personalizing Healthcare

Personalized medicine, also known as precision medicine, involves tailoring preventive care, medical treatment, and interventions to an individual’s characteristics. These might include such factors as genetics, lifestyle, and environment.

Artificial Intelligence steps in by analyzing large datasets and generating predictions and insights that enable more personalized, targeted, and effective healthcare.

An image of diabetes treatment devices to indicate how artificial intelligence can help personalize diabetes treatment.
Precision medicine can help create more effective, individualized diabetes treatment plans.

In diabetes treatment, for instance, precision healthcare considers an individual’s lifestyle factors, such as diet, exercise habits, and stress levels. Doctors then use AI to create personalized dietary plans and exercise regimens to manage blood sugar levels effectively. Diabetes treatment might also involve using wearable devices and health trackers to monitor and adjust lifestyle interventions based on real-time data.

Take Continuous Glucose Monitoring (CGM) systems, which provide real-time data on glucose levels throughout the day. Precision medicine utilizes this data to adjust treatment plans dynamically. Thus, Artificial Intelligence allows for more accurate insulin dosing based on the person’s unique glucose patterns.
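A minimal sketch of what acting on real-time CGM data can look like appears below. It simply summarizes a day of invented glucose readings the way a decision-support tool might, computing time-in-range against the commonly used 70–180 mg/dL target and flagging an upward trend. It is an illustration only, not dosing logic or medical advice.

```python
from statistics import mean

# Hypothetical CGM readings (mg/dL) sampled over one day.
readings = [95, 102, 118, 135, 160, 178, 190, 204, 198, 172, 150, 128]

LOW, HIGH = 70, 180   # widely used target range for time-in-range reporting
time_in_range = 100 * sum(LOW <= g <= HIGH for g in readings) / len(readings)

# Compare the average of the most recent readings with the earlier average to
# detect a sustained upward drift worth reviewing with a clinician.
recent, earlier = mean(readings[-3:]), mean(readings[:-3])
trend = "rising" if recent > earlier + 20 else "stable or falling"

print(f"Time in range: {time_in_range:.0f}%")
print(f"Recent trend: {trend}")
```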

In precision medicine, AI may also assess an individual’s genetic makeup in conjunction with their medical records. The goal is predicting how a patient may respond to certain medications and treatments. And in nutrigenomics, AI amasses a wealth of genetic, molecular, and nutritional data to make personalized dietary recommendations. In other words, machine intelligence helps create personalized plans that also better engage patients in their healthcare journeys.

4. Using Predictive Analytics to Improve Patient Outcomes

A smart phone held in front of a computer. In this image, predictive analytics are being used in healthcare.
Predictive analytics is one tool that can improve healthcare.

Predictive analytics uses various statistical algorithms, machine learning techniques, and data mining processes to analyze historical data and make predictions.

The goal of predictive analytics is identifying patterns, trends, and relationships within data to forecast future behavior or outcomes.

Predictive analytics, then, leverages the power of data so that organizations can make informed predictions and better decisions while reducing their risk.

For instance, AI can assess large datasets to identify both patterns and risk factors associated with disease. Thus, it helps healthcare providers estimate the likelihood of certain individuals developing conditions and diseases. They can then suggest preventative measures, such as lifestyle interventions and early screenings.
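Here is a hedged, self-contained sketch of that idea: a simple risk model (logistic regression from scikit-learn) trained on a handful of fabricated patient records, then used to flag new patients for early screening. The features, threshold, and data are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [age, BMI, smoker (0/1)] and whether the
# condition later developed (1) or not (0). All values are invented.
X = np.array([
    [34, 22.0, 0], [51, 31.5, 1], [46, 27.3, 0], [63, 29.8, 1],
    [29, 24.1, 0], [58, 33.2, 1], [41, 26.0, 0], [67, 30.4, 1],
])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score new patients and surface anyone above a chosen risk threshold.
new_patients = np.array([[55, 32.0, 1], [38, 23.5, 0]])
for features, risk in zip(new_patients, model.predict_proba(new_patients)[:, 1]):
    action = "recommend early screening" if risk > 0.5 else "routine follow-up"
    print(f"age={features[0]:.0f}, predicted risk={risk:.2f} -> {action}")
```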

AI also assists in monitoring patients, such as in chronic disease management. By continuously analyzing patient data, such as vital signs and other health metrics, it can detect deteriorating health conditions earlier. With this data, healthcare providers can intervene more quickly and reduce the risk of complications.

Medication adherence is obviously crucial to patient outcomes. AI can help predict and improve adherence by analyzing patient data and identifying factors that may influence a patient’s ability to follow prescribed regimens.

In other words, predictive analytics, when combined with early intervention, can improve patient outcomes while creating a more proactive and efficient healthcare system. Learn more about the growing importance of data analytics to healthcare.

5. Acting as Virtual Health Assistants

Artificial Intelligence-powered virtual assistants are also used throughout healthcare. In these roles, they provide information, answer questions, help in preliminary diagnoses, improve patient engagement, and direct inquirers to healthcare services.

DeepScribe is one such trusted assistant: an AI medical scribe. After extracting information from the conversation during a doctor’s visit, it uses AI to create a medical note. This note “syncs directly with the provider’s electronic health record system, so all the provider has to do is review the documentation and sign off at the end of the day.”

A smartphone can help you access AI health apps.
AI mental health apps are easily accessed on smartphones.

Artificial Intelligence helpers are used in telepsychiatry and with predictive analytics to identify at-risk individuals. But it is AI’s use in chatbots for mental health support that might be one of its more important functions. Why?

The National Council for Mental Wellbeing, which announced a mental health crisis in the US, found that 56% of Americans seek some form of mental health care. However, the US has a surprising lack of resources. In the same survey, 74% of people stated that they didn’t believe these services were accessible to everyone, whereas 41% said options were limited. And because of high cost and inadequate health insurance, 25% of those surveyed admitted that they often had to choose between getting therapy and buying necessities.

Access to healthcare is also affected by a person’s location and income. That is, those who live in rural areas and those with lower incomes are less likely to seek mental health care. For many, there is also the stigma of getting help. In a recent study, 42% of employees confessed to hiding their anxiety from their employer. They were worried about being judged, demoted, or, at worst, fired.

Accessing Therapy Through Artificial Intelligence

But chatbots, which turn out to be good for things other than writing your term papers, could help in this mental health crisis.

That is, chatbots are often the first line of treatment for those with limited access to and funds for therapy. According to a Forbes Health article, the cost of therapy for those without insurance ranges between $100 and $200, depending on your location. And 26 million Americans (7.9% of the population) don’t have health insurance.

Chatbots, then, help address the cost, accessibility, and privacy barriers to therapy. These apps are available when people are at work or done for the day: ready whenever they’re needed. A study by Woebot, in fact, found that 65% of their app’s use occurred outside normal hours, with the highest usage between 5 and 10 PM. Although there are health hotlines and hospitals, most doctors’ offices and clinics close at 5 PM or earlier, so people who can’t leave their jobs to get help are often left stranded. (For employers, it turns out that these apps also reduce work downtime.)

A screenshot of Wysa's website, which shows the scale of impact of this artificial intelligence cognitive behavioral therapist.
Wysa summarizes its chatbot’s scale of impact.

Two of the most popular AI chatbots are Wysa and Woebot, both accessible through mobile apps.

Using principles of Cognitive Behavioral Therapy (CBT), these chatbots provide mental health support. They help users manage their stress and anxiety by tracking mood, offering coping strategies and mindfulness exercises, and engaging in conversations.

In other words, these chatbots create an anonymous, safe space to talk about worries and stressors, working towards de-escalating them. Others include Youper and Replika, the “AI companion that cares.”

Keeping Pace With Artificial Intelligence in Healthcare

There is no doubt that Artificial Intelligence will continue to integrate (if not infiltrate) healthcare systems, inform new technologies, and guide medical interventions. Indeed, this blog has just scratched the surface of AI’s potential. There is still much to say, for example, about AI solutions for bolstering healthcare infrastructure and services in developing nations.

Those wanting to make a difference in data-driven healthcare, and to ensure that artificial intelligence is used responsibly, securely, and ethically, require specialized advanced education.

Besides its online certificate and MS in Applied Statistics, Michigan Technological University offers several online health informatics programs through its Global Campus. Along with Foundations in Health Informatics and an Online Public Health Informatics certificate, the Michigan Tech Global Campus has two other certificates and a graduate degree. In these health informatics programs, students also get access to HIMSS, an interdisciplinary society that unites people striving to improve the global health ecosystem.

If you’d like additional information about these programs, reach out to the designated Graduate Program Assistant, Margaret Landsberger, at margaret@mtu.edu. Or request information about one or more of MTU’s Health Informatics Programs.

ChatGPT: Friend or Foe? Maybe Both.

An image of a network to symbolize ChatGPT.

(NOTE: This article is a slightly abbreviated and edited version of a blog originally published in May 2023.)

In 2006, British mathematician and entrepreneur Clive Humby proclaimed that “data is the new oil.”

At the time, his enthusiastic (if not exaggerated) comment reflected the fervor and faith in the then-expanding internet economy. And his metaphor had some weight, too. Like oil, data can be collected (or maybe one should say extracted), refined, and sold. Both are in high demand, and just as the inappropriate or excessive use of oil has deleterious effects on the planet, so may the reckless use of data.

Recently, the newest “oil” concerning many, one that is shaking up the knowledge workplace, is ChatGPT. Released by OpenAI in November 2022, ChatGPT combines chatbot functionality with a very clever language model; the GPT in its name stands for Generative Pre-trained Transformer.

Global Campus previously published a blog about robots in the workplace. One of the concerns raised then was that of AI taking away our jobs. But perhaps, now, the even bigger concern is AI doing our writing, generating our essays, or even our TV show scripts. That is, many are worried about AI substituting for both our creative and critical thinking.

Training Our AI Writing Helper

ChatGPT is not an entirely new technology. That is, experts have long integrated large language models into customer service chatbots, Google searches, and autocomplete email features. The ChatGPT of today is an updated version of GPT-3, which has been around since 2020. But ChatGPT’s origins go further back. Almost 60 years ago, MIT’s Joseph Weizenbaum rolled out ELIZA: the first chatbot. Named after Eliza Doolittle, this chatbot mimicked a Rogerian therapist by (perhaps annoyingly) rephrasing users’ statements as questions. If someone typed, for instance, “My father hates me,” it would reply with a question: “Why do you say your father hates you?” And so on.
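ELIZA’s trick was essentially pattern matching. The toy sketch below reproduces only the flavor of the example above: it matches a statement against a few hand-written patterns and rephrases it as a question. (The real ELIZA used a far larger script of keyword rules; the rules here are invented for illustration.)

```python
import re

RULES = [
    (r"my (.+) hates me", "Why do you say your {0} hates you?"),
    (r"i am (.+)", "How long have you been {0}?"),
    (r"i feel (.+)", "Why do you feel {0}?"),
]

def respond(statement: str) -> str:
    """Rephrase a user's statement as a question, ELIZA-style."""
    text = statement.lower().strip(".!? ")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."

print(respond("My father hates me."))   # -> Why do you say your father hates you?
print(respond("I feel anxious."))       # -> Why do you feel anxious?
```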

The current ChatGPT’s immense knowledge and conversational ability are indeed impressive. To acquire these skills, ChatGPT was “trained on huge amounts of data from the Internet, including conversations.” An encyclopedia of text-based data was combined with a “machine learning technique called Reinforcement Learning from Human Feedback (RLHF),” in which human trainers provided the model with conversations in which they played both the AI chatbot and the user. In other words, this bot read a lot of text and practiced mimicking human conversations. Its responses, nonetheless, are not based on knowing the answers, but on predicting what words will come next in a sequence.
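That “predicting what comes next” objective can be shown with a deliberately tiny example. The sketch below builds bigram counts over a made-up sentence and always continues with the most likely next word; large language models do something vastly more sophisticated, but the underlying goal, next-token prediction, is the same idea.

```python
from collections import Counter, defaultdict

corpus = (
    "the doctor reviewed the scan and the doctor wrote a note "
    "and the nurse reviewed the note"
).split()

# Bigram counts: for each word, how often each next word follows it.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def continue_text(start: str, length: int = 6) -> str:
    """Greedily extend a prompt by always picking the most likely next word."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the"))
```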

The result of this training is that this chatbot is almost indistinguishable from a human voice. And it’s getting better, too. As the chatbot engages with more users, its tone and conversations become increasingly lifelike (OpenAI).

Using ChatGPT for Mundane Writing Tasks

Many have used, tested, and challenged ChatGPT. Although one can’t say for certain that the bot always admits its mistakes, it definitely rejects inappropriate requests. It will deliver some clever pick-up lines. However, it won’t provide instructions for cheating on your taxes or on your driver’s license exam. And if you ask it what happens after you die, it is suitably evasive.

But what makes ChatGPT so popular, and some would say dangerous, is the plethora of text-based documents it can produce, such as the following:

  • Long definitions
  • Emails and letters
  • Scripts for podcasts and videos
  • Speeches
  • Basic instructions
  • Quiz questions
  • Discussion prompts
  • Lesson plans
  • Learning objectives
  • Designs for rubrics
  • Outlines for reports and proposals
  • Summaries of arguments
  • Press releases
  • Essays

And this is only a short list of its talents. People have used this friendly bot to construct emails to students, quiz questions, and definitions. The internet is also awash with how-to articles on using ChatGPT to write marketing copy, novels, and speeches. Noy and Zhang even claim that this “generative writing tool increases the output quality of low-ability workers while reducing their time spent, and it allows high-ability workers to maintain their quality standards while becoming significantly faster.”

Below are examples of two onerous writing tasks assigned to ChatGPT: a reference letter and learning goals.

ChatGPT reference letter.
AI writes a very wordy reference letter
Example of learning goals generated by ChatGPT
Here is an example of content created by ChatGPT after being instructed to use Bloom’s taxonomy to create learning goals for a Sci-Fi course.

Recognizing ChatGPT’s Limited Knowledge

Despite helping writers with mundane tasks, this artificial intelligence helper does have its limitations. First of all, it is only as wise as its instructions. For instance, the effusive reference letter above resulted from a prompt that gave no guidance about length or tone. ChatGPT just threw everything into the written soup.

This AI helper also makes mistakes. In fact, right on the first page, OpenAI honestly admits that its chatbot “may occasionally generate incorrect information, and produce harmful instructions or biased content.” It also has “limited knowledge of the world and events after 2021.”

And it reveals these gaps, often humorously.

For instance, when prodded to provide information on several well-known professors from various departments, it came back with wrong answers. It even misidentified one department chair as a Floridian famous for his philanthropy and footwear empire. In this case, ChatGPT not only demonstrated its “limited knowledge of the world” but also generated incorrect information. As academics, writers, and global citizens, we should be concerned about releasing more fake information into the world.

Taking into consideration these and other errors, one wonders on what data, exactly, ChatGPT was trained. Did it, for instance, just skip over universities? Academics? Respected academics with important accomplishments? As we know, what the internet prioritizes says a lot about what it and its users value.

Creating Errors

There are other limitations. OpenAI’s ChatGPT can’t write a self-reflection or decent poetry. And because it is not connected to the live internet, it cannot summarize recent online content.

It also can’t approximate the tone of this article, which shifts among formal, informal, and colloquial registers. Nor can it whimsically insert allusions or pop culture references.

To compensate for its knowledge gaps, ChatGPT generates answers that are incorrect or only partially correct.

In the case of generating mistakes, ChatGPT does mimic the human tendency to fumble, to tap dance around an answer, and to make up material rather than humbly admit ignorance.

Passing Along Misinformation

Because it is trained on text-based data, which might have been incorrect in the first place, ChatGPT often passes this fakery along. That is, it also (as the example above shows) has a tendency to fabricate references and quotations.

It can also spread misinformation. (Misinformation, unintentional false or inaccurate information, is different from disinformation: the intentional spread of untruths to deceive.)

The companies CNET and Bankrate discovered this glitch the hard way. For months, they had been duplicitously publishing AI-generated informational articles under bylines, as if they were human-written. When this unethical behavior was discovered, it drew the ire of the internet.

CNET’s stories even contained both plagiarism and factual mistakes, or what Jon Christian at Futurism called “bone-headed errors.” Christian humorously drew attention to mathematical mistakes that were delivered with all the panache of a financial advisor. For instance, the article claimed that “if you deposit $10,000 into a savings account that earns 3% interest compounding annually, you’ll earn $10,300 at the end of the first year.” In reality, you’d be earning only $300.
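For anyone who wants to check the math, here is the arithmetic the AI-written article got wrong, worked out in a few lines of Python:

```python
# $10,000 at 3% interest compounded annually: after one year the balance is
# $10,300, so the interest *earned* is $300 -- not $10,300.
principal = 10_000
rate = 0.03

balance_after_one_year = principal * (1 + rate)
interest_earned = balance_after_one_year - principal

print(f"Balance after one year: ${balance_after_one_year:,.2f}")  # $10,300.00
print(f"Interest earned:        ${interest_earned:,.2f}")         # $300.00
```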

All three screwups . . . highlight a core issue with current-generation AI text generators: while they’re legitimately impressive at spitting out glib, true-sounding prose, they have a notoriously difficult time distinguishing fact from fiction.

Jon Christian

Revealing Biases

And ChatGPT is not unbiased either. First, this bot has a strong US leaning. For instance, when it was prompted to write about the small town of Wingham, Ontario, it generated some sunny, nondescript prose. However, it omitted this town’s biggest claim to fame: being the birthplace of Nobel Prize-winning author Alice Munro.

This bias stems from ChatGPT being trained on data pulled from the internet. Thus, it reflects all the prejudices of those who wrote and compiled this information.

Noble exposes corrupt algorithms. ChatGPT was trained on these.
Noble’s exposé of algorithms

This problem was best articulated by Safiya Umoja Noble in her landmark book Algorithms of Oppression.

In this text, she challenges the idea that search engines are value-neutral, exposing their hegemonic norms and the consequences of their various sexist, racist biases. ChatGPT, to be sure, is also affected by, if not infected with, these biases.

What really made me lose confidence in ChatGPT is when I asked if the United States ever had a president with African ancestry, and it answered no, then apologized after I reminded the chatbot about Barack Obama.

Jamaal Abdul-Alim, Education Editor, The Conversation

Despite agreeing with Noble’s and Abdul-Alim’s very serious concerns, and thinking that ChatGPT can be remarkably dumb at times, many may not want to smash the algorithmic machines anytime soon. Furthermore, there are writers who do use this bot to generate correct definitions of unfamiliar technical terms encountered in their work. For instance, it can help non-experts understand the basics of such concepts as computational fluid dynamics and geospatial engineering. Still, many professionals choose not to rely on it, or trust it, too much.

Letting Robots Do Your Homework

But it is students’ trust in and reliance on ChatGPT that is causing chaos and consternation in the education world.

That is, many 2022 cases of cheating were connected to one of this bot’s most popular features: its impressive ability to generate essays in seconds. For instance, it constructed a 7-paragraph comparison/contrast essay on Impressionism and Post-Impressionism in under a minute.

And the content of this essay, though vague, does hold some truth: “Impressionism had a profound impact on the art world, challenging traditional academic conventions. Its emphasis on capturing the fleeting qualities of light and atmosphere paved the way for modern art movements. Post-impressionism, building upon the foundations of impressionism, further pushed the boundaries of artistic expression. Artists like Georges Seurat developed the technique of pointillism, while Paul Gauguin explored new avenues in color symbolism. The post-impressionists’ bold experimentation influenced later art movements, such as fauvism and expressionism.”

With a few modifications and some fact-checking, this text would fit comfortably into an introductory art textbook. Or maybe a high-school or college-level essay.

Sounding the Alarm About ChatGPT

Very shortly after people discovered this essay-writing feature, stories of academic integrity violations flooded the internet. An instructor at an R1 STEM grad program confessed that several students had cheated on a project report milestone. “All 15 students are citing papers that don’t exist.” An alarming article from The Chronicle of Higher Education, written by a student, warned that educators had no idea how much students were using AI. The author rejected the claim that AI’s voice is easy to detect. “It’s very easy to use AI to do the lion’s share of the thinking while still submitting work that looks like your own.”

And it’s not just a minority of students using ChatGPT either. In a study.com survey of 200 K-12 teachers, 26% had already caught a student cheating by using this tool. In a BestColleges survey of 1,000 current undergraduate and graduate students (March 2023), 50% of students admitted to using AI for some portion of their assignment, 30% for the majority, and 17% had “used it to complete an assignment and turn it in with no edits.”

Soon after publications like Forbes and Business Insider began pushing out articles about rampant cheating, the internet was buzzing. An elite program in a Florida high school reported a chatbot “cheating scandal.” But probably the most notorious episode involved a student who used this bot to write an essay for his Ethics and Artificial Intelligence course. Sadly, the student did not really understand the point of the assignment.

Incorporating ChatGPT in the Classroom

According to a Gizmodo article, many schools have banned ChatGPT, such as those in New York City, Los Angeles, Seattle, and Fairfax County, Virginia.

But there is a growing body of teachers who aren’t that concerned. Many don’t want to ban ChatGPT altogether. Eliminating this tool from educational settings, they caution, will do far more harm than good. Instead, they argue that teachers must set clearer expectations about writing and cheating. They should also create ingenious assignments that students can’t hack with their ChatGPT writing coach, as well as construct learning activities that reveal the tool’s limitations.

Others have suggested that the real problem is teachers relying on methods of assessment that are too ChatGPT-hackable: weighty term papers and final exams on generic topics. Teachers may need to rethink their testing strategies, or, as that student from the Chronicle asserted, “[M]assive structural change is needed if our schools are going to keep training students to think critically.”

Sam Altman, CEO of OpenAI, also doesn’t agree with all the hand-wringing about ChatGPT cheating. He blithely suggested that schools need to “get over it.”

Generative text is something we all need to adapt to . . . . We adapted to calculators and changed what we tested for in math class, I imagine. This is a more extreme version of that, no doubt, but also the benefits of it are more extreme, as well.

Sam Altman

Read MTU’s own Rod Bishop’s guidance on ChatGPT in the university classroom. And think about your stance on this little AI writing helper.