You may or may not know it, but you interact with some form of AI daily. For instance, if you use Gmail, the automatic filtering that keeps pesky sales and promo emails out of your inbox is a form of AI.
If you own a smartphone, it probably fills out your calendar with the help of Google Assistant or Siri, another form of AI. And if you drive regularly, that driver-assist feature you simply cannot live without is yet another.
As human beings, we stand on the edge of a crucial moment in our history. Thanks to advancements in technology, we are now living in the age of artificial intelligence. While we still have a long way to go before machines can match the sentient ones portrayed in science fiction, the development of algorithms that can learn, understand human language, and mimic particular aspects of our brains has led to major advances.
As a result, AI is being utilized in hundreds of fields and industries. The machines that we utilize today have been getting smarter and smarter, which implies that AI is no longer a pipe dream but a technology that continues to become increasingly integrated into nearly every facet of modern life.
From suggesting which books one might like to purchase online to powering the virtual assistants that inhabit our smartphones and smart speakers, AI touches our lives in far more ways than most people realize. But where did it all start? Here are key moments in the history of artificial intelligence:
Computational machines and calculators are introduced
Back in the 17th century, scientists and philosophers such as Hobbes and Descartes were already suggesting that all human reasoning could be reduced to simple computations, which meant that in the future this task could be carried out by machines.
This point of view is what has driven AI research to become the force that it is today. It is also this train of thought that led to the invention of the first computational machines, or calculators. Calculators were not sold for public use until some 200 years later, around the same time that Charles Babbage and Ada Lovelace began theorizing about the world's very first computers.
Isaac Asimov proposes the Three Laws of Robotics
Isaac Asimov was one of the most popular and respected authors of science fiction. Over five decades as a professional writer, he authored more than 400 books, including some of the best-known works on artificial intelligence, and earned the respect of scientists, educators, and reviewers alike. He also proposed the Three Laws of Robotics, also known as Asimov's laws, which shaped how people thought about machines operating alongside humans.
Although Asimov's laws were primarily an organizing principle and a theme for most of his robot-based fiction, they played a major role in shifting perceptions about robots and their place in human society.
In 1983, the Toronto Star asked Asimov to predict what the world would look like in 2019. He made predictions about computerization, which were accurate for the most part. He also made predictions about education as well as space travel, but these predictions turned out to be overly optimistic.
He did, however, predict that computers would disrupt common work habits and replace old jobs with considerably different ones. He also stated that robotics would slowly kill off routine clerical and assembly jobs, which has already happened. These predictions about the future of computerization helped pave the way for AI to become a reality.
Alan Turing proposes the Turing Test
Alan Turing is often referred to as the father of theoretical computer science and artificial intelligence. Turing, who was way ahead of his time, spent a huge portion of his career investigating whether machines could think like humans. He started exploring this possibility in the 1950s, at a time when computers were still new and the term "artificial intelligence" had not even been coined yet.
All scientific claims need to be backed up by solid evidence, a fact that Turing knew all too well. It is for this reason that Turing published a paper in 1950 in which he proposed a test he called the imitation game, designed to assess whether a machine could exhibit intelligent behavior indistinguishable from a human's. Today, this test is known as the Turing test.
AI is finally born
The field of AI was born in 1956 at the Dartmouth Summer Research Project on Artificial Intelligence. In the early 50s, scientists referred to the field by various names, such as thinking machines, automata theory, or cybernetics.
In 1955, however, a young mathematics professor at Dartmouth College called John McCarthy decided to organize a small conference where ideas about thinking machines could be shared, developed and clarified.
McCarthy gave the new field they were about to discuss the name artificial intelligence. The workshop he organized ran for six weeks, and it was here that the field of artificial intelligence took the form that would grow into what it is today.
It was thanks to the ideas discussed at this conference that programs such as Joseph Weizenbaum's chatterbot ELIZA (1966) were created. ELIZA became one of the very first programs to fool some users into believing they were conversing with a human, an early brush with the Turing test. Around this time, WABOT-1, the first full-scale intelligent humanoid robot in the world, was created in Japan.
Over the coming years, technologies and trends at the cutting edge of artificial intelligence will explode. Allowing these silicon brains into human society is bound to have profound implications, not only for the way we do things but also for the way we learn.
AI will shape human relationships, our jobs and careers, and our societies. As Alan Perlis, one of the earliest pioneers of computer science, put it: "A year spent in artificial intelligence is enough to make one believe in God."