Not long ago, the abbreviation AI stood for things like Adobe Illustrator, Army Intelligence, Air India, and so on. Today, AI stands for only one thing: artificial intelligence.
AI is hot and it's trending everywhere. Some think AI began recently, with the introduction of ChatGPT. Others believe it started with Amazon's Alexa, Apple's Siri, or GPS. In fact, AI has been around much longer than that.
First, let's define what AI is. Simply put, AI is the ability of machines to perceive, synthesize, and process information, as opposed to the intelligence displayed by humans and animals.
With the help of Harvard University research, we compiled a timeline of AI milestones showing how the field evolved into the formidable presence it is today.
Who Invented AI?
The conceptual foundations of AI were laid in England in the early 1950s. Alan Turing, a British scientist and mathematician, explored the mathematical possibility of artificial intelligence. Turing proposed a radical idea: since humans use available information and reason to solve problems and make decisions, why can't machines do the same thing? This was the framework of his 1950 paper, Computing Machinery and Intelligence, in which he wrote about intelligent machines and how to test their intelligence.
Alan Turing is considered "the Father of AI." In 1954, in the midst of his groundbreaking work, he was found dead in his bed, poisoned by cyanide. The official verdict was suicide.
AI in the 1950s
Before 1949, computers couldn't store commands, only execute them. They could be told what to do but couldn't remember what they had done. They were also very expensive: in the early 1950s, leasing a computer could cost up to $200,000 a month. But as the price of computing came down, research and experimentation ramped up.
Computers flourished in the 1950s. They could store more information and became faster, cheaper, and more accessible. Machine learning algorithms also improved. Still, these computers were too weak to exhibit intelligence.
The founding event of artificial intelligence as a field was the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), held in 1956. This research laid the foundations for what we now consider the science of AI. Those who attended would become the leaders of AI research for decades. Many of them predicted that a machine as intelligent as a human being would exist in no more than a generation, and they were given millions of dollars to make this vision come true. Government agencies like the Defense Advanced Research Projects Agency (DARPA) funded AI research at many institutions.
AI in the 1960s
The 1960s brought us the first robots, and robotics is considered a branch of AI. Unimate, the first industrial robot, was invented by American inventor George Devol and worked on a General Motors assembly line in New Jersey. In 2003, Unimate was inducted into the Robot Hall of Fame.
In 1966, Richard Greenblatt, a programmer at MIT, built a knowledge-based chess-playing program called Mac Hack. It was good enough to achieve a class-C rating in tournament play. Greenblatt has also been called the world’s first hacker.
AI in the 1970s
In 1970, the first anthropomorphic robot (a robot shaped like a human), WABOT-1, was built in Japan. It featured moveable limbs, the ability to see, and the ability to converse.
But AI was subject to many critiques and financial setbacks in the 1970s. Researchers didn't fully understand the problems they faced. Their optimism raised expectations impossibly high, and when the promised results failed to materialize, funding for AI disappeared. Plus, limited computer power made it difficult to accomplish anything useful. Despite these difficulties, new ideas were explored in logic programming, commonsense reasoning, and other areas.
AI in the 1980s
In the late 1980s, government programs like DARPA's Strategic Computing Initiative cut AI funding down to a trickle. New leadership at DARPA determined that AI was not "the next wave" and directed funds toward other projects that seemed more likely to produce immediate results.
By the end of the 1980s, the first commercial wave of AI had effectively ended: over 300 AI companies shut down, went bankrupt, or were acquired.
AI in the 1990s
The field of AI, now more than half a century old, finally achieved some of its original goals. There were major advances in all areas, with significant progress in machine learning, intelligent tutoring, case-based reasoning, multi-agent planning, scheduling, reasoning, data mining, natural language understanding and translation, vision, virtual reality, games, and other topics.
The memory and speed of computers caught up with expectations in the 1990s and, in many cases, surpassed them. In 1997, IBM's Deep Blue computer defeated world chess champion Garry Kasparov.
AI in the 21st Century
In the first decade of the 21st century, cheaper, faster computers and advanced machine learning techniques solved many longstanding problems. AI applications grew rapidly across many fields, including business, medicine, education, research, avionics, industrial robotics, and more.
SRI International released Siri as a stand-alone virtual assistant app for Apple's iOS in 2010. Apple bought Siri later that year and introduced it on the iPhone 4S in 2011.
The recent introduction of ChatGPT has turned the world upside down with all its abilities. ChatGPT is a natural language processing tool driven by AI that lets you have human-like conversations. It can answer questions and assist with tasks like composing emails, essays, and code.
GPS (Global Positioning System) navigation also relies on AI. GPS was invented in 1978 but didn't take off until the 2000s. Now it's in every smartphone.
Google uses AI every time a user enters a search query, and the technology is constantly learning and improving.
Today and Tomorrow
Today, AI software and devices help us throughout our daily lives. From the moment we wake up to when we go to bed at night, AI drives much of what we do. The future possibilities of AI are endless. But as artificial intelligence gets more intelligent, there are warning signs everywhere. Even Elon Musk warns us about the dangers of AI.
The use of AI will continue to expand. Marc Gyongyosi, the founder of Onetrack.AI, says: “People need to learn about programming like they learn a new language. And they need to do that as early as possible because it really is the future. If you don’t know coding, and if you don’t know programming, it’s only going to get more difficult.”