After existing in the dreams of science fiction authors for centuries, in recent years artificial intelligence (AI) has quickly started to become a reality.
The computer processing power available today, combined with the explosion in the amount of data available to us in a digital world, means smart self-teaching machines are now commonplace. However, they are often hidden away behind services or web interfaces where we may not even notice them, unless we know what we’re looking for.
But behind the scenes at Google, Facebook, Netflix or any of the hundreds of organizations that have deployed this revolutionary technology, vast data warehouses and lightning-fast processing units crunch through huge volumes of information to make this a reality. Here’s an overview of the technology that goes into the natural language processing, image recognition, recommendation and prediction engines used in today’s cutting-edge AI.
Data Collection
AI depends on the data it gathers. Just as our brains take in huge amounts of information from the world around us and use it to make observations and draw conclusions, AI can’t function without information to learn from.
In the AI technology stack, this data can come from a number of places. Thanks to the ongoing rollout of the Internet of Things, millions of devices worldwide – from industrial-scale machinery to the smartphones we carry everywhere we go – are connected and able to talk to each other. The data collection layer of an AI stack is composed of software that interfaces with these devices, as well as web-based services that supply third-party data, from marketing databases containing contact information to news, weather and social media application programming interfaces (APIs). Virtual personal assistants allow data to be collected from human speech, with natural language recognition converting spoken words into data, whether they are picked up as background conversation or issued as commands directly to a machine.
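As a concrete (and much simplified) illustration of this collection layer, the sketch below normalizes a JSON payload of the kind a hypothetical device API might return. The device ID, field names and readings are all invented for the example.

```python
import json

# Hypothetical payload from an IoT device API; names and values are illustrative.
RAW_PAYLOAD = """
{
  "device_id": "sensor-042",
  "readings": [
    {"ts": "2024-05-01T09:00:00Z", "temp_c": 21.4},
    {"ts": "2024-05-01T09:05:00Z", "temp_c": 21.9},
    {"ts": "2024-05-01T09:10:00Z", "temp_c": null}
  ]
}
"""

def normalize(payload: str) -> list:
    """Flatten a device payload into one record per reading,
    dropping readings whose value is missing."""
    data = json.loads(payload)
    return [
        {"device": data["device_id"], "timestamp": r["ts"], "temp_c": r["temp_c"]}
        for r in data["readings"]
        if r["temp_c"] is not None
    ]

records = normalize(RAW_PAYLOAD)
```

A real collection layer would also handle authentication, retries and schema changes, but the core job – turning heterogeneous device output into uniform records – looks much like this.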
Data Storage
Once you’ve collected data or set up streams so that data is pouring into your AI-enabled organization in real time, you need somewhere to put it. Because AI data is usually Big Data, it needs a lot of storage space, and that storage needs to be quick to access.
Often this is where cloud technology plays a leading role. Some organizations use technologies such as Hadoop and Spark to build their own distributed storage and processing clusters capable of handling these vast volumes of information. Often, however, third-party cloud infrastructure – such as Amazon Web Services or Microsoft Azure – provides a more suitable solution. These platforms enable organizations to scale storage up or down as needed, saving money, and they provide a host of methods for integrating with analytics services.
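To make the storage layer a little more concrete, here is a minimal sketch of the Hive-style, date-partitioned directory layout (`year=/month=/day=`) commonly used in data lakes, which lets engines such as Spark skip irrelevant files at query time. The dataset name and record are invented, and the sketch writes to a local temporary directory rather than real cloud storage.

```python
import json
import os
import tempfile
from datetime import datetime, timezone

def partition_path(root: str, dataset: str, ts: datetime) -> str:
    # Hive-style partitioning: encoding the date in the directory path
    # lets query engines prune whole folders without opening any files.
    return os.path.join(
        root, dataset,
        "year={:04d}".format(ts.year),
        "month={:02d}".format(ts.month),
        "day={:02d}".format(ts.day),
    )

def write_record(root: str, dataset: str, record: dict, ts: datetime) -> str:
    """Append one record, as a line of JSON, to the file for its date partition."""
    path = partition_path(root, dataset, ts)
    os.makedirs(path, exist_ok=True)
    file_path = os.path.join(path, "part-0000.json")
    with open(file_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return file_path

root = tempfile.mkdtemp()
ts = datetime(2024, 5, 1, tzinfo=timezone.utc)
written = write_record(root, "sensor_readings", {"temp_c": 21.4}, ts)
```

On a cloud platform the same layout would simply live under an object-store prefix instead of a local directory.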
Data Processing and Analytics
The area of data processing and analytics is probably what most people consider the most important element of artificial intelligence – although without the rest of the stack (collection, storage and output), any insights it produces will be severely limited.
AI processing encompasses machine learning, deep learning, image recognition, natural language processing, sentiment analytics and recommendation engines – all the buzzwords we’re used to hearing when organizations wax lyrical about how smart and cognitive their technology is.
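To ground one of those buzzwords, the sketch below shows the first step of a toy recommendation engine: cosine similarity over a hand-made rating matrix finds the user whose tastes most resemble a given user’s, which a real system would then use to suggest items. The users, items and ratings are all invented.

```python
import math

# Toy user-item rating matrix (1-5 stars); all names and scores are illustrative.
ratings = {
    "alice": {"film_a": 5, "film_b": 4, "film_c": 1},
    "bob":   {"film_a": 4, "film_b": 5, "film_c": 2},
    "carol": {"film_a": 1, "film_b": 2, "film_c": 5},
}

def cosine(u: dict, v: dict) -> float:
    """Cosine similarity between two sparse rating vectors."""
    shared = set(u) & set(v)
    dot = sum(u[i] * v[i] for i in shared)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v)

def most_similar(user: str) -> str:
    """Return the other user whose ratings are closest to `user`'s."""
    others = [(cosine(ratings[user], ratings[o]), o) for o in ratings if o != user]
    return max(others)[1]
```

Production recommendation engines work on millions of users and use far more sophisticated models, but neighbour-finding of this kind is the intuition behind collaborative filtering.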
These algorithms are often provided in the form of services that are either accessed through a third-party API, deployed on a public or private cloud or run “on the metal” in a private data center, data lake or, in the case of edge analytics, at the point of data collection itself (for example, within sensor or data capture hardware).
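Edge analytics in particular can be surprisingly lightweight. The sketch below, with invented window size and threshold, shows the kind of rolling-window check that could run at the point of collection, flagging anomalous readings before any data leaves the device.

```python
from collections import deque

class EdgeAnomalyDetector:
    """Minimal sketch of edge analytics: flag readings that stray too far
    from the recent average, using only a tiny rolling window of history
    (small enough to run inside sensor hardware)."""

    def __init__(self, window: int = 3, tolerance: float = 3.0):
        self.history = deque(maxlen=window)  # only the last `window` readings
        self.tolerance = tolerance           # allowed deviation from the mean

    def check(self, value: float) -> bool:
        """Return True if `value` is anomalous relative to recent readings."""
        anomalous = bool(self.history) and abs(
            value - sum(self.history) / len(self.history)
        ) > self.tolerance
        self.history.append(value)
        return anomalous

det = EdgeAnomalyDetector()
flags = [det.check(v) for v in [20.0, 20.5, 19.8, 27.0, 20.2]]
```

Only the flagged events – not the full stream of readings – would then need to be sent upstream, which is the main appeal of processing at the edge.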
The power, flexibility and self-learning capabilities of these algorithms are what really differentiate the latest wave of AI from what has come before – together with the increase in the amount of data available. Today, the increase in raw power comes from the deployment of graphics processing units (GPUs), processors originally designed for the heavy-duty task of generating sophisticated computer visuals. Their prowess at parallel mathematics makes them ideal for repurposing as data-crunchers. A new wave of processing units designed specifically for AI workloads should deliver a further leap in performance in the near future.
Data Output and Reporting
If the aim of your AI strategy is to get machines working together more efficiently and effectively (perhaps for predictive maintenance or for minimizing power or resource use), then the output layer is the technology that communicates insights from your AI processing to the systems that will act on them. Other insights may be intended for humans to take action on – for example, sales assistants using handheld terminals to read recommendations relating to the customers standing in front of them. In some cases the output takes the form of charts, graphics and dashboards. Virtual personal assistant technology, such as Apple’s Siri and Microsoft’s Cortana, can often play a role here, too. These products use natural language generation to convert digital information into human language – which, alongside visuals, is the most easily understood and acted-upon form of output for a human.
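Natural language generation in commercial assistants is far more sophisticated, but at its simplest it can be sketched as template filling over computed figures. The store name and revenue numbers below are invented.

```python
def describe_performance(store: str, revenue: float, prior: float) -> str:
    """Turn raw figures into a human-readable sentence - a much simplified
    version of what commercial natural language generation tools do."""
    change = (revenue - prior) / prior * 100
    direction = "up" if change >= 0 else "down"
    return (
        "Revenue at {} was {:,.0f}, {} {:.1f}% on the previous period."
        .format(store, revenue, direction, abs(change))
    )

summary = describe_performance("the Leeds store", 125_000, 118_000)
```

The same pattern – compute a figure, choose the right wording, fill a template – scales up to the report-writing tools that turn dashboards into narrative summaries.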