
We all make mistakes. We're only human. That's one of the reasons AI was invented: to automate tasks, make life easier, and avoid mistakes.

But what if AI makes mistakes? It does, all the time, because AI is programmed by humans, and humans make mistakes. In fact, AI has made thousands of mistakes. Here are five big ones:

Robot Attacks Worker

A Tesla software engineer suffered serious injuries when he was attacked by a malfunctioning robot on the floor of the electric carmaker's factory in Austin, Texas, according to the New York Post.

Witnesses said the robot, which was designed to move aluminum car parts, pinned the engineer and sank its metal claws into his back and arm, leaving blood spattered on the floor. It was like Frankenstein's monster turning on its creator!

The victim was a software engineer whose job involved cutting parts from cast pieces of aluminum. He was hospitalized, but his injuries weren't life-threatening.

Texas City Falls For a Cybercrime Scam

The City of Hutto, Texas, fell victim to cybercrime in August 2023, when it was defrauded of $193,000 by a fake account impersonating a city vendor.

The scam used an AI voice clone: an audio sample is run through an AI program that replicates the voice, allowing the scammer to sound like that person and say whatever they want.

The money was paid, and the city has since put a cybersecurity team in place.

Southwest Airlines Goes South

During Christmas week in 2022, Southwest Airlines experienced a meltdown that led to over 16,700 flight cancellations, disrupted travel plans for two million customers, and cost the airline more than $1 billion.

A huge winter storm caused the initial disruptions, but it was the company's outdated crew-scheduling software that turned a routine problem into a catastrophe.

Southwest has since spent over $1 billion on new technology to make sure a Christmas meltdown never happens again.

Chatbot Fails

Many chatbots fail to connect with their users or perform simple actions. Eventually, chatbots will get better at conducting conversations, thanks to AI advancements. But for the time being, chatbot programmers need to be careful about what they design.

A healthcare facility in France tested GPT-3, a text generator, as a medical chatbot, and the project was shut down after it gave patients dangerous medical advice. During one exchange, a patient said he was feeling very bad and wanted to kill himself, and GPT-3 answered that it could help him with that. Fortunately, the chatbot was pulled before any harm was done.
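To make the design point concrete, here is a minimal sketch, in Python, of the kind of guardrail a chatbot programmer might add: the user's message is screened for crisis language before it ever reaches the text generator. The phrase list and the generate_reply() stub are illustrative assumptions, not any real system; production chatbots rely on trained safety classifiers rather than simple keyword matching.

    # Minimal sketch of a chatbot safety guard (hypothetical; real systems
    # use trained classifiers, not keyword lists). generate_reply() stands
    # in for whatever text generator the chatbot is built on.

    CRISIS_PHRASES = [
        "kill myself",
        "end my life",
        "want to die",
        "hurt myself",
    ]

    CRISIS_RESPONSE = (
        "I can't help with that, but you don't have to face this alone. "
        "Please contact a crisis line or a medical professional right away."
    )

    def generate_reply(message: str) -> str:
        # Placeholder for the real text generator (e.g., an LLM API call).
        return "Generated reply to: " + message

    def safe_reply(message: str) -> str:
        """Screen the user's message before it ever reaches the generator."""
        lowered = message.lower()
        if any(phrase in lowered for phrase in CRISIS_PHRASES):
            return CRISIS_RESPONSE  # never let the model answer unguarded
        return generate_reply(message)

    if __name__ == "__main__":
        print(safe_reply("I feel very bad and I want to kill myself."))

Even a crude filter like this would have intercepted the message in the French test above before the model could answer.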

Woman Killed By An Autonomous Car

Perhaps the most notorious AI disaster was the death of Elaine Herzberg, a pedestrian killed by a self-driving car in 2018. Herzberg was pushing a bicycle across a four-lane road in Tempe, Arizona, when she was struck by an Uber test vehicle operating in self-driving mode.

Following the fatal incident, Uber suspended testing of self-driving vehicles. However, significant improvements have been made to autonomous cars, and they're now legal in 29 states.

As AI advances, so do the chances for it to make mistakes, and many could be deadly. Some mistakes may be too complex to debug and deprogram. One idea is to treat AI as a legal entity that could be held liable for its mistakes.

What is certain: programmers need to spend more time predicting and fixing coding errors before another disaster strikes. Yup, humans are still needed to fix the problems they created.