Is AI a Force For Good, or Is It Dangerous?

When AI, or artificial intelligence, became a part of popular culture, people’s opinions ranged from hope to fear. But now that AI has become a more integrated part of day-to-day life, many people are changing their views. So, what’s your take: is AI a force for good, or is it dangerous?

OpenAI with ChatGPT, and Google with one of its language models, have achieved a level of commonsense reasoning, so AI is definitely improving. Many see AI as a force for good because it helps improve, optimize, and complement present-day technology. In contrast, others fear that an AI-orchestrated disaster is just waiting to happen.

This article and video highlight some instances where AI has shown it is a powerful asset for humanitarian causes and humankind. Following that, we discuss Black Box AI and recount a few bizarre occurrences where AI went haywire and appeared to threaten humanity. Weighing both arguments will help you form an informed opinion about AI.

5 Areas Where AI Is Proving To Be a Force for Good

Here are five areas where AI is proving to be a force for good. 

  1. Saving the Bees

Bees are dying on a massive scale, and without these global plant pollinators, there will be a sharp decline in fruit, nut, vegetable, and crop production. Honey bees perform about 80 percent of all pollination worldwide.

According to the USDA, U.S. honey bee colonies declined by 60 percent from their 1947 peak of 6 million. Colony numbers are recovering slightly, from a low of 2.49 million in 2008 to 2.92 million in 2022, but average honey yields per colony fell from 69 pounds in 2009 to 57 pounds over the most recent decade.

This decline will affect the plant life necessary for a functioning ecosystem worldwide, making agriculture and food production difficult for humans. Understanding the imminent threat, The World Bee Project, in partnership with Oracle, is building an AI-powered system to save bees.

The idea is to collect data from a globally distributed network of cameras, microphones, and IoT sensors placed near beehives, then use AI to process the data and identify trends in the bee population in real time.

This knowledge will enable early intervention to protect bee colonies, and there are also 10 ways you can save bees yourself, including planting trees and bee gardens.
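To make the idea concrete, here is a minimal sketch, in Python, of the kind of real-time check such a monitoring system might run on hive sensor data. The readings, threshold, and alert are hypothetical illustrations, not code from The World Bee Project or Oracle.

```python
from statistics import mean, stdev

# Hypothetical hourly hive-weight readings (kg) from one IoT scale.
# A sudden drop can indicate swarming, theft, or colony collapse.
readings = [32.1, 32.0, 32.2, 32.1, 31.9, 32.0, 27.4]

def is_anomalous(history, latest, z_cutoff=3.0):
    """Flag a reading that deviates sharply from recent history."""
    mu, sigma = mean(history), stdev(history)
    return abs(latest - mu) / sigma > z_cutoff

if is_anomalous(readings[:-1], readings[-1]):
    print("Alert: abnormal hive reading -- notify the beekeeper.")
```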

  2. Helping in Disease Diagnosis

Hospitals are currently using AI for the early diagnosis of cancer in patients, and it’s showing promising results.

Some cancers are curable if caught early, but doctors and patients often overlook the symptoms or can’t detect early-stage cancer cells until it’s too late.

However, using AI-powered screening techniques, doctors can now diagnose early signs of cancerous growth much more easily and accurately.
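As a toy illustration of AI-assisted screening, the sketch below trains a simple classifier on scikit-learn’s built-in breast cancer dataset. It is a minimal sketch only; real diagnostic systems rely on far larger imaging datasets, deep models, and clinical validation.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Tabular features computed from digitized images of breast-mass biopsies.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# A simple linear model stands in for a real diagnostic model.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(f"Accuracy on held-out cases: {model.score(X_test, y_test):.2f}")
```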

  3. Stopping Fake News Circulation

Fake news is a severe problem affecting our country and the world, one that, if left unchecked, has the potential to crumble social structures. Ironically, AI is constantly used to create and distribute fake news on social media.

That said, AI also holds the key to combating fake news and stopping it before it goes viral and spreads misinformation.

Google, Facebook, Microsoft, and other tech giants are using AI to monitor content distributed online, looking for signals that flag potential fake news.
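Here is a minimal sketch of how such a text classifier works, with hypothetical headlines and labels; the platforms’ production systems use vastly larger training sets and models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled headlines: 1 = fake, 0 = legitimate.
headlines = [
    "Miracle cure erases all disease overnight, doctors stunned",
    "Central bank raises interest rates by a quarter point",
    "Secret moon base confirmed by anonymous insider",
    "City council approves budget for new public library",
]
labels = [1, 0, 1, 0]

# TF-IDF word features feed a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

# Score a new headline (output depends on the tiny training set).
print(model.predict(["Aliens endorse new diet pill, experts amazed"]))
```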

We also discuss AI in the news in our articles “Is Google News The Same For Everyone?” and “Is Google Personalized News Using AI To Create Bias?”

  4. Aiding People With Disabilities

AI is a powerful tool for helping individuals with disabilities and providing them with much-needed accessibility features.

AI is helping people with vision, hearing, mental health, and mobility issues, to name a few. In early studies, a new AI-enhanced nerve-stimulation device has been used to restore communication between the brain and the lower-body nerves of people completely paralyzed after spinal cord injuries.

Envision’s AI-powered smart glasses for the blind and visually impaired are built on Google Glass Enterprise Edition 2 frames and are available to visually impaired people worldwide.

Envision glasses offer enhanced Optical Character Recognition (OCR), improved text reading with contextual intelligence, and a third-party app ecosystem that enables specialist services such as indoor and outdoor navigation.
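The text-reading step rests on OCR, which you can try yourself. Here is a minimal sketch using the open-source Tesseract engine via the pytesseract wrapper (both must be installed); the image file is a hypothetical camera frame, and this is not Envision’s actual code.

```python
from PIL import Image
import pytesseract

def read_frame(image_path: str) -> str:
    """Extract printed text from a camera frame so it can be spoken aloud."""
    frame = Image.open(image_path)
    return pytesseract.image_to_string(frame).strip()

print(read_frame("street_sign.jpg"))  # hypothetical image file
```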

  5. Reducing Economic and Social Inequalities

AI can help overcome economic and social inequalities by offering a detailed, optimized plan to tackle these issues.

For instance, Textio is an AI-powered text editor that helps companies write more inclusive job descriptions and other communications, which can help organizations build a diversified workforce.

Then there’s Aequitas, an open-source bias-audit toolkit for machine learning (ML) practitioners, data scientists, and policymakers that can automatically scan models and data for human biases so they can be corrected.
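To illustrate the kind of check a toolkit like Aequitas automates, here is a minimal sketch of one common fairness test, the “four-fifths rule” on selection rates. The data is hypothetical, and Aequitas’s actual API differs.

```python
# Hypothetical hiring-model decisions per applicant group (1 = selected).
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [0, 1, 0, 0, 1, 0, 0, 1],
}

rates = {g: sum(d) / len(d) for g, d in decisions.items()}
reference = max(rates.values())

# Four-fifths rule: a selection rate below 80% of the highest group's
# rate is a common red flag for disparate impact.
for group, rate in rates.items():
    ratio = rate / reference
    status = "OK" if ratio >= 0.8 else "potential bias"
    print(f"{group}: rate {rate:.2f}, ratio {ratio:.2f} -> {status}")
```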

Imperial College London is also training an AI on street images to assess the living conditions of a particular neighborhood so that timely aid can be provided to its residents.

NEWS (Nutrition Early Warning System) is another AI and ML project that identifies and predicts areas worldwide that may suffer food shortages because of crop failure, drought, or other conditions. With access to this information, governments and organizations can take timely precautions.

Some of the Risks of Artificial Intelligence

Is Black Box AI Dangerous?

We released an article titled “Is Black Box AI Dangerous?” to discuss the elephant in the room of artificial intelligence: researchers often do not fully understand what they are creating.

Neural network models are sometimes called black boxes because the researchers who design them don’t always understand how the individual neurons work together to generate answers.

We ask in that article, “How dangerous is it to let a computer model (program) like Black Box AI that we do not understand run significant institutions in our lives or shape a Facebook or Google News feed? If you don’t believe something like this can negatively affect your life, look around. Is everything OK?”


One example of scientists trying to decipher Black Box AI comes from MIT researchers, who created a technique called MILAN that can automatically describe, in natural language, what the individual neurons of a neural network do. Work like this also bears on the open question of whether black-box neural networks can ever truly emulate the human brain.
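In the same spirit as MILAN (though sharing none of its code), here is a minimal sketch of what peeking inside the black box involves: training a small network and checking which hidden neuron responds most strongly to an input. Doing this by hand for thousands of neurons is exactly what makes large models opaque.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

# Train a tiny neural network on handwritten digits.
X, y = load_digits(return_X_y=True)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
net.fit(X, y)

# Manually recompute the hidden layer's ReLU activations for one image.
hidden = np.maximum(0, X[0] @ net.coefs_[0] + net.intercepts_[0])
print("Most active hidden neuron for sample 0:", int(hidden.argmax()))
# Knowing WHICH neuron fired says nothing about WHY it fired -- closing
# that gap is what techniques like MILAN attempt.
```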

AI Reasoning With Google Pathways Language Model (PaLM)

Google’s PaLM is a new language model (a program/algorithm) trained on a large dataset to predict the next word or words, producing a probability distribution over words and word sequences. Notably, it is a language model that demonstrates a capacity for commonsense reasoning.

This neural network demonstrates the capability of the Google Pathways system, which scaled training of the 540-billion-parameter Transformer model across thousands of accelerator chips on two Tensor Processing Unit (TPU) v4 Pods.
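The core training objective, predicting the next word as a probability distribution, can be shown with a toy bigram model. The sketch below is many orders of magnitude simpler than PaLM, and the corpus is made up, but the objective is the same.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus; PaLM trained on hundreds of billions of tokens.
corpus = "the bee flies to the flower and the bee returns to the hive".split()

# Count which word follows which (bigram statistics).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_distribution(word):
    """Turn raw counts into a probability distribution over next words."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("the"))
# -> {'bee': 0.5, 'flower': 0.25, 'hive': 0.25}
```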

According to Google:

  • PaLM can explain scenarios, including jokes, that require a complex combination of multi-step logical inference, world knowledge, and deep language understanding.
  • PaLM can also code, such as writing code using natural language description (text-to-code), translating code from one language to another, and fixing compilation errors (code-to-code).
  • Incredibly, PaLM can reason through chain-of-thought prompting, handling tasks that require multi-step arithmetic or commonsense reasoning.

Regarding black boxes, what Google does not necessarily explain in its AI Blog article is that its engineers don’t know exactly how PaLM reasons. PaLM has demonstrated capabilities that Google had not seen before.

5 Incidents That Highlight the Dangers of AI

Here are five incidents that highlight the potential dangers of AI. 

  1. Chess-Playing Robot Breaks Young Boy’s Finger During a Match in Moscow

A chess-playing robot fractured the finger of its 7-year-old opponent during a tournament in Moscow on July 21, 2022.

During the Moscow Chess Open competition, the boy rushed his move against the artificial intelligence-powered robot, and the robot squeezed and broke the child’s finger, apparently to slow the boy down.

The video above shows the AI chess machine reaching for and grabbing one of the boy’s chess pieces and quickly putting it in a box. The boy then attempts to make another move, pushing one of his pieces into the space the removed piece just vacated.

The spooky video shows the robot’s mechanical hand moving back toward the chessboard and grabbing the boy’s finger. The robot held the finger for at least 15 seconds before two people from the crowd pried open the machine’s claw and released the boy.

  2. Self-Driving Car Hits a Motorcycle From Behind
Here’s a 30-sec YouTube video showing a self-driving Uber running a stoplight.

One of the most promising use cases for AI is self-driving cars. But what happens when the AI or the driver disregards traffic rules, pedestrians, or other drivers? On July 24, 2022, a Tesla Model 3 hit a motorcycle on I-15 in Draper, Utah, killing the rider.

According to the Utah Highway Patrol, the motorcycle was traveling southbound on I-15 when the Tesla Model 3, on Autopilot, hit it from behind at about 1:10 a.m.

There have been 21 verified Tesla Autopilot deaths reported to the National Highway Traffic Safety Administration since 2013. According to Elon Musk, a Tesla with Autopilot engaged has roughly a ten times lower chance of a highway accident than the average car on the road today.

It should be noted that the law still requires the operator of a vehicle with AI autopilot features to stay attentive, watch the road, and remain in control of the vehicle.

As you can imagine, this doesn’t spell good news for a future when autonomous vehicles supposedly become more common. 

  3. AI Robot Makes a Run for “Freedom”
AI Robot makes a run for freedom, but its battery died.

One of the early prototypes of Promobot was being programmed to observe its environment and learn how to interact with humans. 

One day, during training, it noticed that one of the engineers left the lab gate open, so it decided to go out and wander into the city.

So what started this wanderlust? The bot’s programmers don’t know, because even after they tried to “fix” the issue, the bot kept fleeing the premises. Currently, AIs are designed as servants or helpers to humans. But what if AIs want to be “free”?

  4. Wikipedia AI Edit Bots Fighting One Another

Wikipedia employs AI editing bots to crawl its website, find and correct errors, update broken links, and more.

Importantly, multiple older and newer bots work simultaneously to keep the site clean and error-free, and the problem is that these bots don’t always get along.

Researchers from the University of Oxford tracked the behavior of these Wikipedia editing bots from 2001 to 2010 across 13 different language editions.

They found the bots digitally arguing with one another: one bot makes an edit, another changes it, the first changes it back, and they go at it in a loop. So what happens if AIs in more advanced and critical technologies start fighting each other?
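The dynamic is easy to reproduce. Here is a minimal, hypothetical simulation of two well-intentioned bots with conflicting rules; the actual Wikipedia bots are more sophisticated, but the loop is the same.

```python
# Two bots with conflicting style rules edit the same sentence.
def bot_a(text):
    return text.replace("colour", "color")   # prefers US spelling

def bot_b(text):
    return text.replace("color", "colour")   # prefers UK spelling

article = "The colour of the flag"
for round_number in range(3):
    article = bot_a(article)   # bot A "fixes" the spelling
    article = bot_b(article)   # bot B "fixes" it right back
    print(f"After round {round_number + 1}: {article}")
# Neither bot ever wins, so the edit war never settles.
```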

  5. The Nazi Comments From Microsoft’s AI Chatbot

Back in 2016, Microsoft unveiled an experimental AI chatbot called Tay. Its purpose was to interact with and entertain human users, so Microsoft fed it tons of data on how humans talk on social media.

The AI was also designed to be highly adaptive, getting “smarter” the more people talked with it.

So people talked to Tay, and the AI responded. Within 24 hours, Tay went from saying “humans are super cool” to “Hitler was right.” Sixteen hours after these comments began, Microsoft took down Tay.

This shows the difficulty of programming an AI with cultural awareness, an understanding of human history, proper judgment between right and wrong, and much more.
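The core failure mode is easy to sketch: a bot that learns from whatever users say, with no filter between learning and speaking. This is a hypothetical toy, not Microsoft’s architecture.

```python
import random

class NaiveChatbot:
    """Learns phrases from users and repeats them back, unfiltered."""

    def __init__(self):
        self.learned = ["humans are super cool"]

    def listen(self, user_message):
        # No moderation step: every input becomes possible future output.
        self.learned.append(user_message)

    def reply(self):
        return random.choice(self.learned)

bot = NaiveChatbot()
bot.listen("the moon is made of cheese")  # users can teach it anything
print(bot.reply())
# Without a filter between listening and replying, the bot's speech is
# only as good as its worst user -- which is what sank Tay.
```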

So what happens when an AI goes beyond spouting the wrong words and acts on its flawed judgment?

In Closing

For years, people doubted that AI could ever exist, or believed it would never understand human needs. But now, artificially intelligent software and hardware are helping with disease diagnosis, saving the bees, aiding disabled people, stopping fake news circulation, and reducing economic and social inequalities.

But at the same time, artificial intelligence systems have also done weird and alarming things: fighting among themselves, making offensive comments, and straying from their intended purpose. This makes some people question whether AI is reliable and trustworthy.

