In 1939, one of the largest world's fairs of its time was held in New York City, with the theme “The World of Tomorrow”. It was an opportunity for different countries to display their technological achievements and show the world what the future might look like. At that fair, Westinghouse Electric Corporation, an American company, gave a public demonstration of an early humanoid robot named Elektro. The robot weighed around 270 pounds and had 48 electric relays in its metal body. It could distinguish red and green light, respond to voice commands, and speak around 700 words. Of course, Elektro was not nearly as sophisticated as today's advanced humanoid robot Sophia. Still, Elektro was a radical machine for its time, one that demonstrated humanity's ambition to master intelligent machines. Andrew Ng, a widely respected AI leader of our generation, has called Artificial Intelligence the new electricity: just as electricity transformed every major industry 100 years ago, Artificial Intelligence is going to transform every major industry today, but at a much faster pace. Several business and world leaders have called data the new oil of the AI age. Organizations with the deepest and most comprehensive data sets in their fields will have a huge competitive advantage in building better, smarter self-learning algorithms.
To understand the current state of AI, it helps to go back and study how the field has evolved over the years. The concept of modern Artificial Intelligence has been around for decades: mathematicians and scientists have long held the belief that human thinking and intelligence can be modeled with computation. The field of modern computing started with Charles Babbage's mechanical Analytical Engine in the 1840s. In the 1850s, George Boole, an English mathematician and logician, developed Boolean logic, in which every expression evaluates to either True or False. This abstraction of logic was the first step toward giving computers the ability to reason. In 1936, Alan Turing introduced the idea of the Turing machine, reducing logical reasoning to symbol manipulation so that a machine could solve problems by manipulating strings of ones and zeroes; in doing so, he also laid the foundation of modern computing. In 1948, Claude Shannon published a landmark paper, “A Mathematical Theory of Communication”, which showed that information of any kind could be represented in binary. This had profound implications for AI, suggesting that we might break down human logic and replicate the functioning of the human brain with computing technology. That possibility was demonstrated in 1955 by the first AI program, the “Logic Theorist”. AI development accelerated during the 1950s and 1960s, and it was during that period that the term “Artificial Intelligence” was coined by John McCarthy, who is often considered one of the founders of modern AI. Over the following decades, scientists continued researching how to abstract human logic and behavior, making new advances in the field.
As computers became more capable every year, and neural networks and other AI advances started to come through, research accelerated, giving birth to modern computer-based AI.
Machine learning algorithms teach machines to learn in a way loosely analogous to humans: just as the human brain takes time to learn, the algorithms are designed so that results improve as more training iterations are run. Once trained, deep learning models can extract useful patterns from data. Machine learning is seeing a huge surge of interest among researchers, developers and businesses. The key factors behind the recent advances are the availability of big data (from the social web, IoT and mobile), cheaper and more powerful storage and compute in the cloud, advances in tools, and the investments flowing into AI. Machine learning uses algorithms to analyze data and derive meaningful predictions and conclusions, and data is the most important ingredient in the process.
Algorithm learning takes the following different forms:
- Supervised Learning
- Augmented Supervised Learning
- Semi-Supervised Learning
- Reinforcement Learning
- Unsupervised Learning
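To make the contrast between these learning forms concrete, here is a minimal sketch of two of them — supervised and unsupervised learning — using only the standard library. The toy data, function names and the 1-nearest-neighbor and 2-means choices are illustrative, not drawn from any particular framework.

```python
# --- Supervised learning: labeled examples -> a predictor ---
# Training data: (feature, label) pairs. A 1-nearest-neighbor
# "model" memorizes them and labels new points by proximity.
labeled = [(1.0, "small"), (1.2, "small"), (8.0, "large"), (9.1, "large")]

def predict(x):
    # Return the label of the closest training example.
    return min(labeled, key=lambda pair: abs(pair[0] - x))[1]

# --- Unsupervised learning: unlabeled data -> structure ---
# Simple 1-D 2-means clustering: no labels, just grouping.
points = [1.0, 1.2, 8.0, 9.1]

def two_means(data, iters=10):
    a, b = min(data), max(data)  # initial centroids
    for _ in range(iters):
        # Assign each point to its nearest centroid, then recenter.
        ca = [x for x in data if abs(x - a) <= abs(x - b)]
        cb = [x for x in data if abs(x - a) > abs(x - b)]
        a = sum(ca) / len(ca)
        b = sum(cb) / len(cb)
    return a, b

if __name__ == "__main__":
    print(predict(1.1))       # labeled like its nearby "small" examples
    print(two_means(points))  # two cluster centers emerge without labels
```

The difference in the inputs is the point: the supervised predictor needs the labels up front, while the clustering routine discovers the two groups from the raw values alone.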
Humans typically learn quickly from a few examples and life experiences. In contrast, most machines today still need large amounts of data and examples to become good at prediction. Errors in predictions are passed back through the neural network during training to help increase accuracy. Tools and frameworks such as TensorFlow, PyTorch, MXNet, CNTK, Caffe, Keras and Theano are making it easier to develop machine learning and deep learning models. New compute hardware is emerging as well: Google's TPU, for example, is a custom ASIC (Application-Specific Integrated Circuit) specialized for machine learning. The main machine-learning schools of thought are inductive reasoning, connectionism, evolutionary computation, Bayesian inference and analogical modeling, and all of these tribes are working hard to make progress.
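The idea that prediction errors are passed back to adjust a model can be sketched in a few lines of plain Python. This is gradient descent on a single weight, with made-up data and learning rate; real frameworks like those named above apply the same error-feedback loop to millions of parameters.

```python
# Training pairs generated by the "true" rule y = 3x,
# which the model does not know in advance.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

w = 0.0    # the single parameter we are learning
lr = 0.05  # learning rate: how strongly each error nudges w

for epoch in range(200):
    for x, y in data:
        pred = w * x         # forward pass: make a prediction
        error = pred - y     # how wrong was it?
        w -= lr * error * x  # pass the error back: adjust w to reduce it

print(round(w, 2))  # w has converged close to the true value 3.0
```

Each pass over the data shrinks the error, so after enough iterations the learned weight approaches 3.0 — the "more simulations, better results" behavior described earlier.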
Developments in AI have led to many new startups that leverage the technology to bring innovative products and solutions to market. AI and robotics are powering a range of improvements in human life, from drones delivering emergency medical kits in remote areas to transforming farming to increase crop yields. There has been exciting progress in face and speech recognition, image classification, text-to-speech generation, machine translation, transcription and recommendations. Not everyone is excited about AI: as routine jobs and tasks get automated, there will be a lot of pain in the short term. People will have to be re-skilled, which is not easy and takes time, and our educational systems will need to be overhauled to provide relevant skill development and training. In the long term, AI will help the world economy grow and create new jobs. Many of today's jobs didn't exist two decades ago, and the same will be true two decades from now. So should we worry about the increasing impact of AI? Not at the moment. The road to artificial general intelligence is long and complicated. We should embrace the technology and think of new ways that AI-powered apps and machines can bring positive change to our world.