If you've been paying attention to technology trends over the past few years, you've heard the words “artificial intelligence” (AI) with increasing frequency. On one side there is speculation about AI’s potential and how it will transform our future. On the other side we hear about the less exciting prospects: AI eliminating jobs or turning against us in some apocalyptic scenario. It’s hard to find the truth between these polar opposites. For organizations trying to stay on top of technology trends and stay in business, ignoring the AI trend can look like a sure way to fall behind. Companies such as Amazon, Facebook, and Google have leveraged their data like a precision weapon against any competitor. Is it a matter of keep up or become irrelevant? And how much of this weaponized data falls under the umbrella of AI mastery?
AI is not a new field; it has been a research topic since the earliest days of computer science. Back then, much research focused on how to make an “intelligent” decision with very little data. The Turing test was seen as the ultimate realization: a machine that could fool people into thinking they were having a natural conversation with a human. But algorithms can’t easily match the depth of experience humans gather as they go through life; compare a conversation with a 5-year-old child to one with a 10-year-old. Simply put, human intelligence is backed by enormous amounts of accumulated data applied to a single situation to make a decision, and until recently computers couldn’t compete with that storage capacity. The past 20 years in computer science have seen a lot of energy put into processing the large volumes of data we collect. In that time, machine learning techniques have enabled computer scientists to analyze data at a scale that started to make a real difference.
The machine learning currently undergoing widespread adoption is a lot less intelligent than the data scientists who create the actual algorithms. Whereas AI could be creating insight, machine learning has been about finding it. Those lines are blurring quickly with the types of algorithms being used, but there is very little actual learning happening at scale. Most machine learning is estimating probabilities using statistical methods: will customer A buy a widget, given that other customers similar to them also purchased this widget? Where AI could potentially add to this solution is, for example, a voice-driven assistant that learns about the customer in real time while using a backlog of machine learning data to influence decisions. This is a Turing test for retail. The real-time element is very compelling: it takes the uncertainty out of the transaction with humanlike insight. If you have used any voice-recognition software to type on your computer, you know this dream is still some way out. It is getting much closer, though.
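To make the “customers similar to them also purchased this widget” idea concrete, here is a minimal sketch of that kind of statistical estimate: score the new customer against past customers by similarity, then use the purchase behavior of the nearest few as the probability. The customer names, data, and the `widget_probability` helper are all hypothetical illustrations, not a real recommender system.

```python
# Hypothetical sketch: estimate the probability that a new customer buys a
# widget from the purchase histories of their most similar past customers.
from math import sqrt

# Each row: a customer's history over three products (1 = bought), plus a
# final column recording whether that customer bought the widget.
history = {
    "ana":   ([1, 0, 1], 1),
    "ben":   ([1, 0, 1], 1),
    "carla": ([0, 1, 0], 0),
    "dev":   ([1, 1, 1], 0),
}

def cosine(u, v):
    """Cosine similarity between two purchase vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def widget_probability(new_customer, k=2):
    """Fraction of the k most similar past customers who bought the widget."""
    ranked = sorted(
        history.values(),
        key=lambda entry: cosine(new_customer, entry[0]),
        reverse=True,
    )
    return sum(bought for _, bought in ranked[:k]) / k

# A new customer whose purchases look like ana's and ben's:
print(widget_probability([1, 0, 1]))  # → 1.0
```

There is no learning here in any deep sense, which is the point of the paragraph above: the “intelligence” is a similarity calculation plus a frequency count over historical data.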
The hype cycle, a term coined by the American research and information technology firm Gartner, is a graphical representation of how products move through a market. It is a clear way of showing the progression of a technology with respect to adopters and market forces. The trough of disillusionment is a place almost every technology visits in its lifecycle; arriving there is inevitable. For AI, there was a lot of hype generated by early adopters doing amazing things. That has created anxiety inside other companies trying to compete, who fear they are falling further behind. The reality is that a small group has been successful, but that success has not become generally attainable for the larger group of companies. I jokingly say that Google has more doctors than a hospital, and that the type and intensity of its research are hard to match. What this has done is push the promise of AI to an unsustainable apogee. As organizations attempt to emulate that success, they run into project failure, or results far short of what they hoped for in initial planning. The result is a general disdain for the technology, and into the trough we go. The effect is a build-up of reverse hype, and for the early adopters and the true believers, it can be demoralizing.
I believe the pull out of the trough will come in one of two ways, or perhaps a combination of the two: cloud-based services, or a complete enveloping of AI into the tools we already use.
The first path out is cloud-based services that include deeper integration with machine learning already operating at scale. This avoids the up-front cost of building infrastructure and developing your own baseline expertise. The challenge there will be learning when to apply AI techniques to your business problems. A lower barrier to entry is the key to any exit from the trough, and that’s exactly what cloud companies are promising. Many already offer some sort of “AI as a service,” but the initial offerings still require a baseline level of knowledge about what AI can do for your business. Those services do very little to help you properly apply the technology and find a real return on investment (ROI). A quick look at the roll call of start-ups attacking this problem suggests it is a gap that will be filled soon.
The second path out is a lot more subtle. We already build applications and consumer services. What if those existing tools and products just became smarter? An example is a shopping cart that learns about the customer in real time and can converse with consumers as they buy products. This is not a specific AI initiative that has to be understood on its own, but rather an enhancement to something we already do and use. In general, I feel this is the more likely direction for AI: it will slowly wash like a wave over what we are already familiar with. It’s happening in the auto industry; it will happen in the IT and application economy.
Unless you are an AI researcher, your own experience may lead you to agree with my premise here. We are heading into a period when the suggestion “Let’s use AI” will be met in project meetings with a lot of skepticism. Those days will end, but until you see the path out of the trough, choose wisely and proceed carefully.