An updated version of Marc Andreessen’s famous quote, “Software is eating the world,” might be “AI is eating software.” We tend to think of AI in incremental terms, and we urgently need to change that, because our approach to AI will mark the difference between linear thinking and transformational thinking. Most organizations want to use AI to cut costs and do what they are already doing faster; that is the incremental approach. Instead, we need to focus on the next-level use of AI: exponentially transforming the way we have been doing things by creating entirely new systems.
For example, Amazon Go (Amazon’s cashierless retail store) isn’t using AI simply to remove the role of the cashier; it is designing a new, data- and information-driven retail experience. Instead of layering AI on top of existing processes, Amazon Go turns the average grocery-shopping trip into an experience-driven activity built on data, an understanding of people and behavior, and store design and layout. Similarly, the objective of driverless autonomous vehicles is not merely to eliminate the cost of the driver, but to change the way we travel, redesign the entire transportation industry, and create ripple effects in the e-commerce and delivery industries.
The four drivers of AI: Why now?
In 1956, IBM demonstrated Arthur Samuel’s checkers program, which was among the world’s first successful self-learning programs. Similarly, in 1957, Frank Rosenblatt built the Perceptron, an electronic device that used cameras and a neural network to look at objects and determine what they were, and that could learn from examples and experience. So if artificial intelligence and machine learning have been around for decades, why are they only making their impact felt now?
Back then, AI researchers abandoned these efforts for lack of data, storage, and network infrastructure, focusing instead on rule-based AI, which was more convenient and faster to develop. However, the exponential growth of data in the last few years has served as fuel for making AI better and more efficient. This development has gone hand in hand with new ways to collect and process data (think GPS, phones, cameras, and sensors); the challenge is that most of it is still unstructured, and we are still formalizing the processes to collect, categorize, and process data effectively. Here are the four drivers of artificial intelligence that underpin most of the AI advancement and innovation of the last few years:
Increase in computing powers of machines
Moore’s law, the observation that computing power rises and prices fall at a steady exponential pace, held true for several decades but no longer does. The exponential rise in computing speed and power over the past decade or so is hard to fully comprehend, as are the advances still to come in the years ahead.
Rise of data-based AI
The other critical development is the speed with which rule-based AI has been replaced by data-based AI, which is what makes machine learning possible. Instead of hand-coding ‘if/then’ rules, we now show machines how to learn and carry out processes on their own, and they get better at it with time. For example, the more a driverless car drives on the road, the better it gets.
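The contrast between the two approaches can be shown with a small sketch (a hypothetical illustration, not from the masterclass): the same AND decision implemented once as hand-coded ‘if/then’ rules, and once as a single perceptron of the kind Rosenblatt built, trained purely from labeled examples.

```python
# Rule-based AI: the programmer hand-codes every case as if/then rules.
def rule_based_and(a, b):
    if a == 1 and b == 1:
        return 1
    return 0

# Data-based AI: a single perceptron learns the same behavior from
# labeled examples using the classic perceptron learning rule.
def train_perceptron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    bias = 0.0
    for _ in range(epochs):
        for (a, b), label in examples:
            pred = 1 if w[0] * a + w[1] * b + bias > 0 else 0
            err = label - pred
            # Nudge weights and bias toward the correct answer.
            w[0] += lr * err * a
            w[1] += lr * err * b
            bias += lr * err
    return w, bias

# Labeled data stands in for hand-written rules.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, bias = train_perceptron(examples)

def learned_and(a, b):
    return 1 if w[0] * a + w[1] * b + bias > 0 else 0

# Both versions now agree, though only one was shown examples.
for (a, b), label in examples:
    assert rule_based_and(a, b) == label
    assert learned_and(a, b) == label
```

The learned version never sees an explicit rule; better or more plentiful examples simply make it better, which is exactly why data has become the fuel for modern AI.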
Advances in deep learning
Today, apps are becoming better than doctors at detecting skin cancer, and technology is helping farmers decide the best times to sow and harvest. Deep learning has enabled machines to understand and comprehend the world in new ways. Until very recently, however, this wasn’t the case.
For instance, it took a long time for machines to distinguish between similar-looking objects (like kittens and puppies, or motorbikes and bicycles). Finally, in 2010, the AI community decided to crowdsource the challenge, releasing millions of labeled images and asking software developers around the world to build systems that could tell them apart. The objective was to exceed the 95 percent accuracy of a human, but the best software that year reached only 75 percent. While that was good enough for playing simple games, it wasn’t good enough for real-world applications such as powering driverless cars.
In 2011, the accuracy levels showed no improvement, and that dampened sentiment in the tech industry; experts began predicting that we were decades away from attaining human-like levels of accuracy. In 2012, however, the level jumped to 84 percent with SuperVision (a system based on deep learning), and in 2013 it rose further to 90 percent; the error rate has been declining ever since, and innovation took off.
In 2016, the error rate was just 2.5 percent, and interestingly, this improvement was achieved with the same basic concepts behind Rosenblatt’s 1957 Perceptron: layers and neural networks. Furthermore, all these technologies are open source, but the catch is that the data used to power them is not free. The flip side of deep learning is that the technology has started learning to do things on its own, and there really is no telling how smart machines can become.
The rise of cloud computing
Cloud computing is the fourth and final driver, and it has enabled AI to learn from other AI. Instead of developing intelligent systems separately and pitting them against one another, cloud technology lets them work with, and learn from, each other. In the current ecosystem, when one driverless car completes a trip successfully, all driverless vehicles become more intelligent and efficient. Similarly, the more applications there are and the more people use them, the better they get.
In today’s day and age, customer information and data are akin to gold, because that is what makes AI transparent and frictionless. How organizations collect and process data will drive and change the way we work in the future, and ultimately determine what the future of work is all about.
(This article is based on the Masterclass ‘The four drivers of Artificial Intelligence’ by Hod Lipson, Professor of Engineering and Data Science, Columbia University, at TechHR 2019.)