
The Coming Cognitive Augmentation Era

The theme of this book is how, in the coming years, mass adoption of cognitive systems technology will bring about cultural, social, and political changes. Research and development in cognitive systems is producing artificial systems, called cogs, able to perform high-level cognition. Until now, humans have had to do all of the thinking. However, soon, our computers, handheld devices, cars, and all other objects we interact with on a daily basis will be able to perform human-level thinking with us and without us. The future will belong to those better able to partner with and collaborate with these devices. Over the next fifty years, the infusion of intelligence into our lives will bring about as many changes as the computer/Internet/social media revolutions have over the last fifty years.

This is a bold prediction. Why should we believe such a prognostication? As described in Chapter 1, technological revolutions do not happen overnight. Rather, they are the confluence of several different lines of technological development. The coming cognitive systems era is no different. The cognitive systems revolution is the convergence of the following:

  • Deep Learning/Machine Learning
  • Big Data
  • Internet of Things
  • Natural Language Interfaces
  • Open-Source AI
  • Cloud-Based Services
  • Social Media
  • The Connected Age

Deep Learning/Machine Learning

Programming a computer to learn has been the goal of artificial intelligence researchers since the early days of artificial intelligence as a discipline. Arthur Samuel coined the term machine learning in 1959 and helped develop some of the first computer programs that improved over time with experience (Samuel, 1959).

As described in more detail in Chapter 4, in the 1960s and 1970s, researchers first attempted to use symbolic reasoning to achieve machine learning. However, representing knowledge as a set of symbols is difficult due to the quantity of knowledge required for intelligence and the "fuzziness" (ambiguity) of knowledge itself. In the 1970s and 1980s, expert systems captured knowledge obtained from experts in the form of sets of production rules (if-then logic statements). However, these knowledge stores were not learned by the computer (Carbonell, 1983). Instead, humans captured and carefully engineered experts' knowledge and the nuances of that knowledge. This knowledge engineering effort required a large amount of time, effort, and resources.

In the 1970s, a new way to do machine learning became prominent: neural networks. Inspired by the human brain, a neural network is a collection of highly interconnected units, each connection carrying a weight that determines its strength. The network is presented with a stimulus, which creates an array of signals throughout the network. The weights are adjusted to bring the network's response closer to the ideal, and then the network is presented with another stimulus. Over time, the weights associated with the connections are tweaked until the network's response to a class of stimuli is firm. When the network is presented with a stimulus slightly different from the ones used in the training set, it is still able to recognize it as belonging to a certain class of stimuli. This makes neural networks quite robust in the face of incomplete data, which is exactly what the real world supplies.
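The weight-tweaking loop described above can be sketched in a few lines. This is a minimal, illustrative single-unit network (a perceptron) trained on invented data; the `respond` and `train` names are ours, not from any particular library.

```python
import random

def respond(weights, stimulus):
    # The unit's response: a weighted sum of input signals, thresholded to 0/1.
    total = sum(w * s for w, s in zip(weights, stimulus))
    return 1 if total > 0 else 0

def train(examples, epochs=20, rate=0.1):
    random.seed(0)
    weights = [random.uniform(-0.5, 0.5) for _ in range(len(examples[0][0]))]
    for _ in range(epochs):
        for stimulus, ideal in examples:
            error = ideal - respond(weights, stimulus)
            # Tweak each connection weight toward the ideal response.
            weights = [w + rate * error * s for w, s in zip(weights, stimulus)]
    return weights

# Training set: recognize the logical OR of two input signals
# (a linearly separable class of stimuli, so a single unit suffices).
examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
weights = train(examples)
print([respond(weights, s) for s, _ in examples])  # [0, 1, 1, 1]
```

After training, the unit also responds sensibly to stimuli it never saw, which is the robustness to incomplete data noted above.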

However, as with expert systems, training neural networks required large amounts of careful knowledge engineering to select positive and negative training examples and to provide feedback to the network. As a result, by the 1990s, machine learning research by any method had largely been abandoned.

Advances in computer speeds, reduction in costs, availability of enormous datasets, and new statistical-based algorithms led to a resurgence of interest in machine learning during the 1990s and into the 2000s. With much lower barriers to entry and fueled by the "dot com boom" of the late 1990s, many startup companies emerged tackling tasks such as natural language understanding, computer vision, image classification, handwriting recognition, and data analytics.

Deep learning employs multiple layers of neural networks, allowing the extraction of higher-level features from a stimulus (instead of just one response from a single-layer network). Allowing a computer to learn multiple things at different levels of abstraction led to the deep learning revolution. By 2011 and 2012, deep learning systems were demonstrating superhuman performance in handwriting recognition and image classification. In 2012, a deep neural network named AlexNet won the annual ImageNet Challenge, beating all competitors by a substantial margin (Krizhevsky et al., 2017). The success of this deep neural network garnered interest not only within the artificial intelligence community but across the entire technology industry.
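The layering idea can be sketched as follows. The weights here are hand-picked rather than learned, purely to show how each layer's output becomes the next layer's input, so that deeper layers respond to higher-level combinations of features.

```python
def layer(weights, inputs):
    # One layer: each output unit is a rectified weighted sum of its inputs.
    return [max(0.0, sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

# A 4-pixel "stimulus" flows through two hidden layers and an output layer.
stimulus = [0.0, 1.0, 1.0, 0.0]
hidden1 = layer([[1, -1, 0, 0], [0, 0, -1, 1], [0, 1, 1, 0]], stimulus)  # low-level features
hidden2 = layer([[1, 1, 0], [0, 0, 1]], hidden1)                         # combinations of features
output  = layer([[0.5, 0.5]], hidden2)                                   # final response
print(hidden1, hidden2, output)  # [0.0, 0.0, 2.0] [0.0, 2.0] [1.0]
```

A single-layer network could only produce one such weighted response; stacking layers is what lets deep networks build abstractions on top of abstractions.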

A startup called DeepMind began in 2010. Using convolutional deep neural networks combined with a reinforcement learning algorithm called Q-learning (Watkins, 1989), DeepMind built computer systems able to learn how to play 1980s-style video games without training sets. The system learned how to play simply by watching video games being played. This is called unsupervised learning and is an important component of deep learning and the cognitive systems revolution. With unsupervised learning and self-supervised learning, there is no need for the time- and effort-consuming knowledge engineering of previous machine learning eras. This opens up the horizon considerably.
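The Q-learning update at the heart of these systems can be sketched in its original tabular form. The corridor "game" below is invented for illustration; DeepMind's systems replace the table with a deep convolutional network fed by screen images, but the update rule is the same idea.

```python
import random

# Toy task: walk right along a 5-cell corridor to reach a reward at the end.
N_STATES, GOAL = 5, 4
ACTIONS = [+1, -1]          # step right, step left
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.1
random.seed(0)

for _ in range(200):        # 200 self-played episodes, no training set
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Core Q-learning update: nudge Q(s, a) toward the observed reward
        # plus the discounted value of the best follow-up action.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The learned policy steps right (+1) from every state toward the reward.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(policy)
```

No examples of "correct" play are ever given; the system discovers the winning behavior purely from the reward signal, which is what removes the knowledge engineering burden.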

Google purchased DeepMind in 2014. In 2016, a system called AlphaGo defeated the reigning world champion in Go, a game vastly more complex than Chess. In 2017, a version called AlphaGo Zero learned how to play Go by playing games with itself and not relying on any data from human games. AlphaGo Zero exceeded the capabilities of AlphaGo in only three days. Also in 2017, a generalized version of the learning algorithm called AlphaZero was developed capable of learning any game. AlphaZero achieved expert-level performance in the games of Chess, Go, and Shogi after only a few hours of unsupervised self-learning (DeepMind, 2018a; DeepMind, 2018b; ChessBase, 2018).

Using unsupervised deep learning, future cognitive systems will be able to learn on their own and acquire expert-level knowledge and capability in periods of time that will astound us. These systems will not be programmed to perform in a certain domain; they will acquire that capability on their own and in their own way. Although still in its early stages of development, unsupervised learning is a technology whose importance cannot be overstated. Along with the other technologies described in this chapter, unsupervised learning will lead to systems able to learn and perform at expert levels in virtually any domain. This will drive the cost of producing such systems down far enough to make them affordable to the masses, leading to the democratization of expertise.


Big Data

If cognitive systems of the future are going to be consuming information and learning on their own, consideration needs to be given to where this information will come from. Recently, advances in handling and processing enormous amounts of unstructured data of various types have led to the big data industry. Big data can be characterized by the following (Gain, 2016):

Volume: Amount of data (exabyte- and petabyte-sized stores)

Variety: Type of data (text, images, audio, video, etc.)

Velocity: Speed of data (generation, acquisition, processing)

Veracity: Quality of data (accuracy, reliability, validity, etc.)

Value: Worth of data (return on investment)

The field of big data involves ways to store, process, and analyze data stores too large and too unstructured to be handled with traditional database technology. The goal is to extract meaningful and useful knowledge from these vast data stores. Often, big data analytics exposes patterns and connections in the data that humans have no reason to expect are there.

There are many examples of big data analytics. For example, Walmart handles more than one million customer transactions per hour, and its analytics process databases measured in exabytes. Walmart continuously analyzes buying behavior so it can accurately configure each store, and the products offered there, to meet the demands of the customers nearby.
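The kind of buying-behavior analysis just described can be sketched in miniature as market-basket mining: counting which products appear together in transactions. The transactions below are invented; real analytics runs the same idea over exabyte-scale stores with distributed frameworks rather than an in-memory loop.

```python
from collections import Counter
from itertools import combinations

# Invented shopping baskets, each a set of purchased products.
transactions = [
    {"bread", "milk", "diapers"},
    {"beer", "diapers", "chips"},
    {"bread", "milk"},
    {"beer", "diapers"},
    {"milk", "diapers", "beer"},
]

# Count every pair of products that co-occur in a basket.
pairs = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pairs[pair] += 1

# The most frequent pair is a connection nobody asked the data for.
print(pairs.most_common(1))  # [(('beer', 'diapers'), 3)]
```

Surfacing such unexpected co-occurrences is exactly the "patterns humans have no reason to expect" payoff of big data analytics.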

The Internet and social media provide examples of using big data. For example, NetBase analyzes people's postings on various social media platforms to determine their sentiment on a topic (www.netbase.com). Better than knowing that a customer bought your product is knowing why the customer bought it and how they and others feel about it.

Cognitive systems are currently looking at various big data stores and detecting patterns and relationships in the data. Through unsupervised learning as discussed in the previous section, cognitive systems will obtain knowledge on their own from these sources.

Internet of Things

Today, over four billion people use the Internet. Internet usage, especially social media and video, has exponentially increased the number of bytes humans generate each year. Humans are currently generating hundreds of exabytes (billions of gigabytes) per year, and the trend will continue into the foreseeable future, with the total amount of data generated doubling within two years and, within a few years, doubling every year. This alone gives big data systems zettabytes of data to process and analyze. However, another technology is coming that promises to increase data generation even more: the Internet of Things.
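A back-of-the-envelope sketch shows how quickly such doubling compounds. The starting figure of 33 zettabytes is an assumption (a commonly cited industry estimate for 2018), used only to illustrate the arithmetic of a two-year doubling period.

```python
# Project total data generated under a doubling-every-two-years trend.
total_zb = 33.0                     # assumed starting point, in zettabytes
for year in range(2018, 2026):
    print(year, round(total_zb, 1), "ZB")
    total_zb *= 2 ** 0.5            # doubling every 2 years ~= x1.41 per year
```

Eight years of this trend multiplies the total sixteenfold (about 528 ZB by 2026 under these assumptions), and a shift to doubling every year would compound twice as fast again.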

The Internet of Things (IoT), also known as ubiquitous computing, refers to Internet connectivity being extended to devices and everyday objects such as appliances, automobiles, bedrooms, and toilets (Weiser, 1991). Making everything in our lives Internet addressable enables new kinds of applications and technology. It also promises to add even more data to the amount already generated by humans.

However, there is a difference. IoT data will capture much more meaningful data about us and our behavior. When a person watches a movie streamed from one of the Internet streaming services, the action generates on the order of a couple of gigabytes of data. How much can be learned about the person who is watching the movie? Certainly, a number of things can be learned from the time of day, day of the week, movie genre, etc. In fact, services use these data now to better serve their clientele. However, consider the difference between those two gigabytes of data and two gigabytes of data from the person's toilet and bedroom. These kinds of data open up new possibilities for learning about a person. Simply by interacting with our devices in everyday life, we put out enough biomedical information to equal or exceed that of an office visit with the doctor.

The cognitive systems era will see the development of systems able to monitor the continuous stream of data we exude and detect things about us using unsupervised deep learning. The IoT will bring cognitive systems into intimate contact with us.
