This content originally appeared on HackerNoon and was authored by Neha Pant
While AI has created quite a stir in the past few years, the groundwork for AI development was laid as early as the start of the 20th century.
The Evolution of AI
With the recent hype around AI, some of us may feel it’s a recent phenomenon. It’s not. It all began in the early 20th century, though substantial work in the field only started around the middle of the century. It was then that the visionary mathematician Alan Turing conceived of intelligent machines—a dream since realized through technologies like machine learning and natural language processing.
This evolution story of over 100 years is an intriguing one. It’s akin to an acorn planted in a field, watered and nourished, and slowly growing into a mighty oak. To understand how the early groundwork evolved into the intelligent technology we see today, let’s dive into the history of artificial intelligence.
The Early 1900s
Before any concept or idea of scientific interest is explored by scientists, it is born in the minds of artists. The same is true of the idea of “artificial humans” or “robots,” which germinated in the mind of the Czech playwright Karel Capek and culminated in his 1920 hit play “R.U.R. - Rossum’s Universal Robots.” He is credited with coining the term “robot,” from the Czech robota, meaning forced labor.
We rave about Google’s self-driving cars, but did you know that driverless cars first hit the streets in 1925? The first to roll through New York City were radio-controlled cars released by Houdina Radio Control. It took 90 years for Alphabet Inc., Google’s parent company, to launch Waymo, which now provides commercial robotaxi services—a marvel of intelligent technology.
Once an idea starts in one part of the world, it quickly travels and gains new dimensions. In 1929, Japanese professor Makoto Nishimura created, with some success, what Karel Capek had envisioned in his play—a robot! This Japanese marvel, called Gakutensoku, meaning “learning from nature’s laws,” could move its head and hands and change its facial expressions, all driven by an air-pressure mechanism.
It appears that the human stream of consciousness was collectively dreaming about machines that could not just emulate human work but also do things that humans couldn’t. H.G. Wells, the futurist author who conceived concepts like time travel, invisibility, bio-engineering, and more, predicted in 1937 that “the whole human memory can be, and probably in short time will be, made accessible to every individual” and that “any student, in any part of the world, will be able to sit with his [microfilm] projector in his own study, at his or her convenience, to examine any book or document, in a replica." We can safely assume that he was dreaming about the future computers that we use today.
Later, in 1949, the American computer scientist Edmund Callis Berkeley published the book “Giant Brains, or Machines That Think.” It was the first book to describe Simon, the prototype of what can be termed the first personal computer. The most exciting part of the book for science and technology aficionados was its survey of the pioneering mechanical brains (early computers) of the time—MIT’s differential analyzer, Harvard’s IBM sequence-controlled calculator, the Moore School’s ENIAC, and Bell Laboratories’ relay calculator.
The 1950s—The Time When AI Was Born
While the early 1900s were a time when humans were still imagining intelligent machines or taking only small steps toward building them, the 1950s were when real strides were made. It was, by far, the most invigorating period in the history of AI.
You need to be a child at heart if you wish to think as freely and creatively as a child, and playing games is one of the best ways to keep that child alive. But Alan Turing went a step further when he devised “The Imitation Game,” popularly known as the Turing Test, in his seminal 1950 paper “Computing Machinery and Intelligence.” The game was designed to evaluate whether a machine could behave in ways indistinguishable from a human. In modern times, we use a reverse Turing Test, the “Completely Automated Public Turing test to tell Computers and Humans Apart,” or CAPTCHA, to verify that a human and not a bot is operating the machine.
It was in 1948 that Turing started writing a program for a computer to play chess. In 1952, he tried to implement it on the Ferranti Mark 1. Unfortunately, the computer could not run the program, so Turing activated his inner child and played the game himself, working through the algorithm’s instructions by hand. Much later, Russian chess grandmaster and former World Chess Champion Garry Kasparov reviewed the game’s record and described it as “a recognizable game of chess.”
But the fascination with games was not restricted to Turing. Another pioneer in computer gaming and artificial intelligence, Arthur Samuel, introduced the world’s first successful self-learning program for playing checkers, the “Samuel Checkers-Playing Program,” in 1952. Samuel saw the potential of games in artificial intelligence research, as they made it easy to evaluate a computer’s performance against a human’s. His brilliance was further exemplified when he popularized the term “machine learning” in 1959, building on research he had started in 1949.
The mid-1950s were bustling with research and work around artificial intelligence. In fact, the term “artificial intelligence,” or “AI” as it is popularly called, was coined in 1955 by another dazzling mind and one of the founders of AI as a discipline—John McCarthy. He coined the term in a co-authored proposal, but it gained popularity at the 1956 summer workshop he organized at Dartmouth College, said to have been attended by the leading computing minds of the time. He didn’t stop there: he further refined his thoughts on AI by inventing the programming language Lisp in 1958, then published “Programs with Common Sense” in 1959, describing the Advice Taker, a program intended to solve problems by manipulating sentences.
All these inventions and discoveries of the 1950s were followed by other pathbreaking work through the 1960s and 70s. Chief among these was the interactive program ELIZA. Developed by Joseph Weizenbaum in 1965, the program could carry on a dialogue in English, much like today’s chatbots! What was most curious about it was that many people attributed human-like feelings to it—a quality that is still elusive and debatable.
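ELIZA’s conversational trick was pattern matching: it scanned the user’s sentence for keywords, then filled a canned response template with fragments of what the user had typed. The snippet below is a minimal, illustrative sketch of that style of rule in Python, with made-up patterns and responses rather than Weizenbaum’s original script.

```python
import re
import random

# A tiny, ELIZA-style rule set (illustrative, not Weizenbaum's original):
# each pattern maps to response templates, and "{0}" is filled with the
# text captured by the pattern's group.
RULES = [
    (re.compile(r"i need (.+)", re.IGNORECASE),
     ["Why do you need {0}?", "Would getting {0} really help you?"]),
    (re.compile(r"i am (.+)", re.IGNORECASE),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"because (.+)", re.IGNORECASE),
     ["Is that the real reason?", "What else could explain {0}?"]),
]

FALLBACKS = ["Please tell me more.", "How does that make you feel?"]

def reply(user_input: str) -> str:
    """Return an ELIZA-style response by applying the first matching rule."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(match.group(1).rstrip(".!?"))
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(reply("I am feeling a bit overwhelmed"))
    print(reply("I need a vacation"))
```

Even a handful of such rules can feel surprisingly conversational, which goes some way toward explaining why people ascribed feelings to the program.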
The “Deep Learning” methodology used in AI development today was conceived in 1968 by the Soviet mathematician Alexey Ivakhnenko in his work on the “Group Method of Data Handling,” published in the journal “Avtomatika.” It may sound unbelievable, but the rapid technological advancements we see today stand on the shoulders of slow and steady work done through the mid-20th century.
The 1980s—The Time of Heightened Action
But things didn’t stay slow and steady. They picked up steam in the 1980s, when interest in AI, and the funding and research that came with it, grew by leaps and bounds. This period saw the development of expert systems: programs that replicated the decision-making abilities of human experts in specific fields.
1980 was the year of the first-ever conference of the Association for the Advancement of Artificial Intelligence, or AAAI, which was founded in 1979. With its 38th annual conference held this year, the association continues to foster AI research and the exchange of scientific ideas among practitioners worldwide. But in 1984, the AAAI warned of an “AI winter,” a period of sluggish research brought on by waning interest in AI.
Before that warning, however, the Japanese Ministry of International Trade and Industry had allocated an astounding 850 million dollars to the Fifth Generation Computer Project in 1981. The 10-year project was ambitious, ahead of its time, and a commercial failure—a perfect example of hype outrunning the zeitgeist. It did, however, exemplify the Japanese philosophy of ikigai, as its scientists channeled their zeal into giving a real boost to the development of concurrent logic programming.
The Late 1980s and Early 1990s—An AI Winter?
Unlike most weather forecasts, the AAAI’s prediction of an AI winter proved correct. The end of the Fifth Generation project was one cause of the loss of interest and investment in artificial intelligence in the late 1980s. But other setbacks in the expert systems and machine markets contributed to the disinterest, including the collapse of specialized Lisp-based hardware in 1987 in the face of cheaper alternatives from IBM and Apple.
However, not everything was bleak during this period. Professor Judea Pearl, father of the slain American journalist Daniel Pearl and winner of the 2011 Turing Award, published “Probabilistic Reasoning in Intelligent Systems” in 1988. A champion of the probabilistic approach to artificial intelligence and the inventor of Bayesian networks, Pearl was a revolutionary thinker whose Bayesian models became significant tools for important work in engineering and the natural sciences.
The Jabberwock may be a scary fictional character from Lewis Carroll’s Through the Looking-Glass, but its purpose was the same as that of Jabberwacky, a chatbot developed by Rollo Carpenter in 1988—to delight and entertain. Jabberwacky was built to simulate natural human conversation in an entertaining and humorous way.
1993-2011—The Rise of the Sleeping Giant
Winter isn’t long, especially in the subtropics, and as it leaves, spring ushers in blooms loved by one and all. Something similar happened with the AI winter. While the ominous prediction of the end of the human era in Vernor Vinge’s 1993 essay “The Coming Technological Singularity” may have scared the best of us, the paper also predicted that within 30 years we’d “have the technology to create superhuman intelligence.” Thirty years on, we may not have achieved what he predicted, but we appear to be headed in that direction.
In 1997, the world saw the defeat of Garry Kasparov, the reigning world chess champion, at the hands of IBM’s Deep Blue, the first computer program to beat a reigning world champion at chess. The event was a watershed moment in the history of AI and became the creative influence for many a book and film. In the same year, Dragon released NaturallySpeaking 1.0, also known as DNS, speech recognition software that ran on Windows.
The year 2000 saw further progress with the development of Kismet, a robot that could simulate human emotions. Kismet and other robots like it were the brainchild of Cynthia Breazeal, then still a student at the MIT Artificial Intelligence Lab. The dreams of the playwrights of the early 1900s were starting to take shape.
Yuri Gagarin’s space odyssey and Neil Armstrong’s moon landing were worth celebrating, but so was the arrival on Mars of Spirit and Opportunity, the two U.S. rovers launched in 2003. They successfully operated on the planet without human intervention—a major win for the world of technology.
Twitter, Facebook, and Netflix—today’s tech giants—started using AI in their advertising and UX algorithms almost two decades ago, in 2006. The signs that AI would really take off were becoming visible. Deep Blue’s win in 1997 had offered a glimpse of IBM’s AI research, and IBM Watson showed how far that research had come when it defeated Brad Rutter and Ken Jennings on Jeopardy! in 2011.
It was IBM’s Watson that made industries from retail to financial services consider the possibility of deploying AI in business. 2011 was also the year Apple launched Siri, its virtual assistant.
2012 to Now—The Big Leaps in AI
Time is a continuum, and everything we do, even AI, is part of it. We’ve seen how the baby steps of the early 1900s led to the major discoveries and inventions of the 2000s. As the first quarter of the 21st century draws to a close, we can see how far AI has come, especially in the past decade or so.
One of these journeys is that of the deep learning model. From its humble conception in 1968, the approach reached giant scale in 2012, when Google’s Jeff Dean and Andrew Ng connected 16,000 computer processors to create one of the largest neural networks for machine learning, with more than a billion connections. They fed the network 10 million random thumbnail images from YouTube videos, and something extraordinary happened—the network learned to recognize cats on its own. Cat lovers can make some noise!
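The principle behind that experiment is unsupervised feature learning: a network trained only to reconstruct unlabeled inputs ends up discovering the structure hidden in them. The sketch below is a toy NumPy autoencoder illustrating that principle at miniature scale, on synthetic data invented for the example; it is a stand-in for the idea, not the architecture Dean and Ng actually built.

```python
import numpy as np

# Toy autoencoder: learn a compressed representation of unlabeled data.
# Purely illustrative of unsupervised feature learning, not Google's system.
rng = np.random.default_rng(0)

# Fake "images": 200 samples of 64 values each, generated from 4 hidden
# factors so there is real structure for the network to discover.
factors = rng.normal(size=(200, 4))
mixing = rng.normal(size=(4, 64))
data = factors @ mixing + 0.05 * rng.normal(size=(200, 64))

n_input, n_hidden = 64, 4
W_enc = rng.normal(scale=0.1, size=(n_input, n_hidden))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(n_hidden, n_input))   # decoder weights
lr = 0.05

for epoch in range(500):
    hidden = np.tanh(data @ W_enc)      # compressed features
    recon = hidden @ W_dec              # reconstruction of the input
    error = recon - data

    # Backpropagate the reconstruction error through both layers.
    grad_dec = hidden.T @ error / len(data)
    grad_hidden = (error @ W_dec.T) * (1 - hidden ** 2)
    grad_enc = data.T @ grad_hidden / len(data)

    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

    if epoch % 100 == 0:
        print(f"epoch {epoch}: reconstruction error {np.mean(error ** 2):.4f}")
```

Run it and the printed reconstruction error should fall as the four hidden units pick up the four factors that generated the data—the same kind of structure discovery that, at vastly larger scale, surfaced “cat” features from unlabeled YouTube frames.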
The other journey is that of robots. From Karel Capek’s Rossum’s Universal Robots to Makoto Nishimura’s Gakutensoku to Joseph Weizenbaum’s chatbot ELIZA to Cynthia Breazeal’s Kismet, the robot had come a long way. But it was Sophia, the humanoid robot with realistic human features and expressions unveiled in 2016, that became the first “robot citizen.” Surprised? Don’t be.
Some things rock our boat, and some rock humanity. One such thing happened in 2017, when the tech world was left awestruck by the behavior of two Facebook chatbots that seemed to have developed their own negotiation language. It was incomprehensible to humans, yet it showed enough structure to convince AI engineers that it wasn’t random babble but a language the bots actually understood. While this didn’t herald the arrival of the singularity Vernor Vinge had predicted, the bots’ negotiations, including feigning interest in an item only to concede it later as a bargaining ploy, left humans gaping at the future possibilities of machines.
In March 2020 came the first wave of the Covid-19 pandemic, which disrupted work as we knew it. As companies scrambled for ways to work remotely, OpenAI launched its third Generative Pre-trained Transformer, or GPT-3 as it is popularly called, in May 2020. GPT-3 is one of the largest language models, able to perform tasks with heightened accuracy thanks to its increased capacity and parameter count—175 billion parameters, roughly 10x more than its closest competing model at the time, Turing-NLG.
Since then, newer, improved versions, GPT-3.5 and GPT-4, have been launched, and a future GPT-5 has been widely anticipated, with promises of more advanced AI, empathy, confidentiality, dynamic customization, and step-by-step chain-of-thought reasoning.
Our journey has brought us to the present day, where AI permeates every aspect of our lives. From virtual assistants anticipating our needs to self-driving cars navigating city streets, the once-fantastical dreams of AI pioneers have become our daily reality.
But, as we turn the page to the future, the plot thickens with ethical dilemmas, societal impact, and the quest for artificial general intelligence – the ultimate frontier. The saga of AI continues to unfold, captivating our imagination and challenging our understanding of what it means to create intelligence from silicon and code.
And so, the tale of Artificial Intelligence, a mesmerizing blend of human ingenuity and technological evolution, marches forward into the unknown, inviting us all to witness the next chapters of this riveting narrative.
Appendix:
- https://www.forbes.com/sites/gilpress/2021/05/19/114-milestones-in-the-history-of-artificial-intelligence-ai/?sh=2f563b0474bf
- https://www.tableau.com/data-insights/ai/history
- https://www.researchgate.net/publication/334539401_A_Brief_History_of_Artificial_Intelligence_On_the_Past_Present_and_Future_of_Artificial_Intelligence
- https://www.researchgate.net/publication/328703834_Historical_Evolution_Current_Future_Of_Artificial_intelligence_AI
- https://ourworldindata.org/brief-history-of-ai
- https://www.techtarget.com/searchEnterpriseAI/tip/The-history-of-artificial-intelligence-Complete-AI-timeline
- https://edoras.sdsu.edu/~vinge/misc/singularity.html
- https://en.wikipedia.org/wiki/Fifth_Generation_Computer_Systems
- https://www.ibm.com/watson?mhsrc=ibmsearch_a&mhq=watson
- https://computerhistory.org/profile/john-mccarthy/
- https://history.computer.org/pioneers/samuel.html
- https://spectrum.ieee.org/the-short-strange-life-of-the-first-friendly-robot#toggle-gdpr
- https://monoskop.org/images/b/bc/Berkeley_Edmund_Callis_Giant_Brains_or_Machines_That_Think.pdf
- https://www.gutenberg.org/files/59112/59112-h/59112-h.htm
- https://en.wikipedia.org/wiki/Turing_Award
- https://en.wikipedia.org/wiki/Judea_Pearl
- https://en.wikipedia.org/wiki/H._G._Wells
- https://en.wikipedia.org/wiki/John_McCarthy_(computer_scientist)
- https://en.wikipedia.org/wiki/Cynthia_Breazeal
- https://www.springboard.com/blog/data-science/machine-learning-gpt-3-open-ai/
- https://www.cnet.com/tech/services-and-software/what-happens-when-ai-bots-invent-their-own-language/
- https://www.theatlantic.com/technology/archive/2017/06/artificial-intelligence-develops-its-own-non-human-language/530436/
- https://www.independent.co.uk/life-style/facebook-artificial-intelligence-ai-chatbot-new-language-research-openai-google-a7869706.html