
Examples of cognitively inspired systems and application of the Minimal Cognitive Grid

Abstract

Given the proposal presented in the book so far, this chapter describes some practical applications of the Minimal Cognitive Grid by showing how it allows one to position different types of artificial systems within the landscape formed by the cognitive design approach. Examples of artificial models of cognition and cognitive architectures will be presented and compared with examples of functionalist AI systems that, despite being called instances of “cognitive computing”, cannot be considered realistic models of our cognition.

Modern AI systems: cognitive computing?

As sketched out in the first chapter, Artificial Intelligence (AI) is nowadays mainly focused on building systems able to execute, in an efficient and effective way, specialized tasks in a variety of domains, ranging from machine translation to autonomous cars and robotics. Even the two most successful AI systems of the last decade, namely IBM Watson (Ferrucci et al., 2013) and the Alpha Go system (Silver et al., 2017) developed by Google, can indeed be assigned to the category of narrow AI systems. In addition to being narrow, however, they have another important quality: they are “Super-Human” AI systems. “Super-Human” because they have outperformed humans in two different complex tasks: question answering (Watson) and Go (Alpha Go), a popular strategic game - mostly known in Asia - that is much more complex than chess from the point of view of the combinatorial explosion of possible moves.

Let’s explore the details: the Watson question-answering system was able to defeat human champions of the game Jeopardy! (a very famous TV quiz show in the USA, consisting of answering rich natural language questions covering a broad range of general knowledge) by integrating several AI techniques that mainly resort to probabilistic strategies to determine its output. In the original version of Jeopardy! there are three human contestants; the game in which Watson successfully participated consisted of two human players plus the IBM system.

Alpha Go, on the other hand, is a deep neural network system incorporating reinforcement learning strategies (i.e., strategies similar to the “operant conditioning” adopted in the behaviourist tradition of psychology and described in the previous chapters, see footnote 17 in Chapter 1) that defeated the human world champion of Go: the South Korean Lee Sedol. The system was developed by DeepMind, a company acquired by Google that specializes in the development of novel deep learning techniques. Alpha Go achieved such results after being trained for several thousand hours on previous games played by human champions of Go. It relied on a Monte Carlo Tree Search (MCTS)[1] strategy and on two different specialized deep neural networks using reinforcement learning: a policy network, deciding which move to make, and a value network that evaluates the position of the pieces on the board.[2]
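
To give a concrete sense of the search procedure summarized in note [1], the following is a minimal, purely illustrative Python sketch of MCTS with UCB1 selection, applied to a toy “take the last stone” game rather than to Go. The Node class, the toy game functions and all parameter values are hypothetical choices made for this example; the actual Alpha Go pipeline additionally guides the search with the policy and value networks described above.

```python
# Minimal MCTS sketch (selection / expansion / simulation / backpropagation)
# on a toy game: players alternately take 1-3 stones, whoever takes the last
# stone wins. Purely illustrative; not DeepMind's implementation.
import math
import random


class Node:
    """One node of the game tree explored by MCTS."""
    def __init__(self, state, parent=None):
        self.state = state        # here: (stones_left, player_to_move)
        self.parent = parent
        self.children = []
        self.visits = 0
        self.wins = 0.0


def legal_moves(state):
    stones, _ = state
    return [m for m in (1, 2, 3) if m <= stones]


def apply_move(state, move):
    stones, player = state
    return (stones - move, 1 - player)


def is_terminal(state):
    return state[0] == 0


def rollout_winner(state):
    """Random playout to the end; the player who takes the last stone wins."""
    while not is_terminal(state):
        state = apply_move(state, random.choice(legal_moves(state)))
    return 1 - state[1]


def select_child(node, c=1.4):
    """UCB1: trade off win rate (exploitation) against visit counts (exploration)."""
    return max(
        node.children,
        key=lambda ch: ch.wins / ch.visits
        + c * math.sqrt(math.log(node.visits) / ch.visits),
    )


def mcts(root_state, iterations=2000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while the current node is fully expanded.
        while node.children and len(node.children) == len(legal_moves(node.state)):
            node = select_child(node)
        # 2. Expansion: add one previously untried child, if any.
        if not is_terminal(node.state):
            tried = {child.state for child in node.children}
            untried = [m for m in legal_moves(node.state)
                       if apply_move(node.state, m) not in tried]
            if untried:
                child = Node(apply_move(node.state, random.choice(untried)), parent=node)
                node.children.append(child)
                node = child
        # 3. Simulation: random playout from the selected/expanded node.
        win = rollout_winner(node.state)
        # 4. Backpropagation: a node is credited when the player who moved
        #    into it (the opponent of its player-to-move) wins the playout.
        while node is not None:
            node.visits += 1
            if win == 1 - node.state[1]:
                node.wins += 1
            node = node.parent
    # The recommended move is the most-visited child of the root.
    return max(root.children, key=lambda ch: ch.visits).state


if __name__ == "__main__":
    # Player 0 to move with 10 stones: MCTS returns the suggested next state.
    print(mcts((10, 0)))
```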

Such systems are important for the purposes of this book since they constitute a recent example of inappropriate labelling coming from the “cognitive” vocabulary and ascribed to successful AI systems. These systems, indeed, have often been claimed in recent years, both in the media and in scientific AI conferences, to be members of the class of “cognitive computing” systems. This attribution has usually been justified in light of the impressive results obtained and, in the case of the Watson system, due to its ability to deal with questions in natural language. However, as already mentioned in Chapter 2 and as should be clearer at this point of the book, this qualification is incorrect and misleading since these systems have been built according to a “functional” approach and without any cognitive constraint able to justify the expression “cognitive computing” attached to them. In other words, these systems do not have any explanatory role with respect to the analogous human functions that they aim to replicate. This “new story” is actually not so new in its development. In particular, if we look back at the case of the IBM chess machine Deep Blue, which in 1997 defeated the then chess World Champion Garry Kasparov by exploiting its computational strengths (in particular its memory storage capacity and, among other sophisticated technicalities, the minimax algorithm[3] used to decide its next move), we have a similar situation: the result obtained by the system was impressive from an AI perspective but was not particularly significant from the point of view of the psychological realism of the simulation. Nonetheless, the understandable enthusiasm for this impressive achievement similarly risked leading to improper ascriptions of other super “intelligent” faculties to the IBM program. Of course, the fact that such new AI systems, in the same way as Deep Blue, cannot be defined as “cognitive systems” or “cognitive computing systems” is not a diminutio from a technological or engineering point of view since - as stated from the beginning - they represent the state of the art in their respective fields and are very sophisticated AI technologies. However, the sloppy and propagandistic attribution of expressions coming from the “cognitive science” vocabulary to AI systems legitimately adopting “machine-oriented” heuristic solutions/approaches to solve problems is a source of conceptual confusion not only among non-AI experts but also within the AI community itself.[4]
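
As a complement to note [3], here is a minimal, self-contained sketch of the max/min alternation that minimax performs on a game tree, written over an explicit toy tree of static evaluations. It is only an illustration of the general procedure: the nested-list tree is a hypothetical example, and Deep Blue’s real search added pruning, a hand-tuned evaluation function and dedicated hardware.

```python
# Minimax over an explicit toy game tree: leaves carry static evaluations,
# internal nodes are lists of children. Illustrative only.
def minimax(node, maximizing=True):
    if not isinstance(node, list):
        return node  # a leaf: return its static evaluation
    values = [minimax(child, not maximizing) for child in node]
    # The maximizing player picks the best branch, the opponent the worst one.
    return max(values) if maximizing else min(values)


if __name__ == "__main__":
    # Two plies: our move (max) followed by the opponent's reply (min).
    game_tree = [[3, 5], [2, 9], [0, 7]]
    print(minimax(game_tree))  # -> 3: the opponent would spoil the 9 and the 7
```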

Given this clarification, let us “test” the Minimal Cognitive Grid presented above on these two systems. The first point concerns the functional/structural ratio of the proposed model. In both cases, no structural constraint has been assumed about mental or brain processes. Therefore, on this aspect such systems are completely functional. From a generality perspective both systems are task specific: Watson can do Question Answering and not, for example, Machine Translation or Computer Vision tasks. Similarly, the architecture of Alpha Go (and of its successor Alpha Go Zero) can learn to play Go but cannot perform other tasks. It is worth noting, however, that a generalization of the two original versions of the Alpha Go architecture - called Alpha Zero - has successfully been able to learn, at a super-human level, other board games like chess and shogi (a strategic game very famous in Japan). All these games, however, are very similar in nature, although they have different degrees of complexity (with Go being the most complex one and, thus, in a certain sense, already “subsuming” the other two games). As a consequence, the classical transfer learning problem affecting deep network architectures - namely the fact that the models learnt by these architectures are task specific and cannot be generalized to other problems - remains unresolved.

With regard to the last point of our Minimal Cognitive Grid, the performance match, these systems have achieved, as mentioned, super-human performances. So at first glance, it seems that they perform very well on this dimension. As mentioned before, however, the performance match dimension, in order to be relevant from a biological/cognitive perspective, should consider both success and error cases. Since the successes of both systems are well known and have been described above, let us focus on some of the errors produced by them. Upon closer inspection it appears that both these systems, or better: their underlying technological solutions, have produced - in some cases - very strange and “sub-human” errors. “Sub-human” since no one, including seven- to eight-year-old children, would have produced the kind of errors that the underlying architectures of such systems have provided in certain situations. Let us look in more detail at what these “errors” look like: the very famous case involving the Watson system during its Jeopardy! match against the two human champions is what we can call the “Toronto dilemma”. In particular, during the game, one of the questions proposed to both the system and the two humans was, “What is the US city whose largest airport is named after a World War II hero and whose second largest airport after a World War II battle?”[5] While the human champions quite easily provided the correct answer, “Chicago”, the response provided by the probabilistic engine of Watson was, incredibly, “Toronto”. Now, it is not necessary to be a Jeopardy! champion to know that Toronto is not a US city! IBM engineers tried for months to understand what triggered that answer and why Watson replied in that way, but were unsuccessful - this is an example of how “opacity” is also a feature of probabilistic symbolic systems.

Let us consider now the Alpha Go system. For this system there is no direct report of a similar non-human failure during its matches against Lee Sedol. From this point of view, however, it should be noted that dealing with language carries a greater risk of the kind of errors presented above, since understanding language is a more complicated task than playing a board game, even a complex one like Go. It involves, indeed, a plethora of different complex subproblems (from anaphora resolution to semantic ambiguity, etc.) that do not have a definitive solution in the context of natural language processing. In the context of Go, on the other hand, it is also difficult for humans to evaluate possible erroneous moves played by the system since the strategies employed are not interpretable from our perspective. As a matter of fact, the learning strategies of the Alpha Go system and its successors showed a better modelling capability of the task environment (i.e., the board of the game and the different positioning patterns of the pieces at each point in time). The “acquired” knowledge about their environment, a crucial element for the exhibition of intelligent behaviour as shown in Simon’s ant metaphor, included the analysis of different “optimal patterns” on the whole board. This global view enabled the system to explore unknown positioning patterns (i.e., moves and situations never played by any human player) and to find, also in these situations, optimal decision strategies. Another interesting finding that came from the analysis of the Alpha Go results is that this system was somehow able to treat what is considered a strategic reasoning game as a vision game: the use of the so-called convolutional networks - a particular type of deep network mainly used in computer vision and adopted in the Alpha Go architecture in both the “policy” and “value” networks - for the correct identification and categorization of different game patterns is paradigmatic in this respect.

It is exactly from this strength, however, that potential risks for systems like Alpha Go and other deep learning systems (including those used in autonomous cars) also arise. Convolutional neural networks, indeed, raised many criticisms when, upon being applied to an image recognition task in the Google Photos application, they mislabelled a black couple as gorillas,[6] an incomprehensible error to the human eye. Additionally, it has subsequently been demonstrated how relatively easy it is to deliberately fool these networks by using “adversarial networks” (another class of deep nets that are able to learn what minimal set of pixels to change in order to make the attacked network generate a wrong classification with respect to the original, correct one). In particular, a study by Su et al. (2017) showed how, in certain cases, a single-pixel change could lead to completely different categorizations (the study shows, for example, how a picture originally labelled as “Egyptian cat” was subsequently classified as a “bath towel”; similarly, a “giant panda” was later classified as a “vulture”). Of course these single-pixel changes are practically invisible to the human visual system and, as such, this kind of “sub-human” error is symptomatic of the fact that - as in the case of Watson - the underlying components of the Alpha Go system (and therefore the system as a whole) cannot be qualified as structural models.
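
To make the idea of such minimal perturbations more concrete, the following sketch performs a naive random search for a single-pixel change that flips the decision of a toy linear classifier. Everything in it (the toy_classifier, the image size, the number of trials) is a hypothetical stand-in chosen for this illustration: Su et al. actually used differential evolution against trained deep convolutional networks, not this brute-force loop.

```python
# Illustrative "one-pixel" perturbation search against a toy classifier.
# Not the method of Su et al. (2017); only a sketch of the underlying idea.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8 * 8 * 3))          # toy weights: 3 classes, 8x8 RGB


def toy_classifier(image):
    """Return class scores for an (8, 8, 3) image with values in [0, 1]."""
    return W @ image.reshape(-1)


def one_pixel_attack(image, true_label, trials=5000):
    """Search for a single-pixel change that alters the predicted class."""
    for _ in range(trials):
        x = rng.integers(0, image.shape[0])
        y = rng.integers(0, image.shape[1])
        candidate = image.copy()
        candidate[x, y] = rng.random(3)       # overwrite one pixel's RGB value
        if np.argmax(toy_classifier(candidate)) != true_label:
            return candidate, (x, y)          # adversarial image found
    return None, None


if __name__ == "__main__":
    img = rng.random((8, 8, 3))
    label = int(np.argmax(toy_classifier(img)))
    adv, pixel = one_pixel_attack(img, label)
    if adv is not None:
        print(f"Changing pixel {pixel} flips class {label} to "
              f"{int(np.argmax(toy_classifier(adv)))}")
    else:
        print("No single-pixel flip found in this toy run")
```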

To sum up: by applying our Minimal Cognitive Grid as a tool to analyze the two systems, it becomes evident why they both cannot be considered cognitive systems. They are actually two admirable examples of functional AI that make no structural assumptions. With regard to the generality criterion: at first glance, Alpha Go (and, in particular, its successor Alpha Zero) seems to perform better than Watson on this dimension. But that is only an illusion. As mentioned, in fact, the “transfer” towards other games was obtained for board games that were of the same type as Go but less complex (and, as such, were somehow “subsumed” by the kind of strategies put in place for Go itself). In other words, if Watson had also been successfully applied to the famous game Who wants to be a millionaire?, which presents various simplifications with respect to Jeopardy!, we would similarly not have taken that as evidence of a more general capability of the system.

With reference to the performance: both the underlying architectures adopted by the two systems have shown super-human strengths and sub-human limitations. Therefore, even if it is true that in Watson such limitations were “directly” manifested during the game, while in the case of Alpha Go they were only “indirectly” shown through the functionality of its core components used in other applications, from the analysis of their types of errors it seems evident that both these systems cannot play any kind of explanatory role with respect to the corresponding human cognitive mechanisms that they model in an AI setting.

In the following section, we will explore a completely different kind of artificial system, one we briefly mentioned in the first chapter of the book: cognitive architectures.

  • [1] Monte Carlo Tree Search (MCTS) is a strategy for obtaining optimal decisions that is adopted in a variety of AI applications. In the context of Go, the game is represented as a tree (a game tree). With this problem configuration, MCTS is able to assign to each node of the game tree a statistical value that highlights the most interesting nodes in the tree (thus avoiding the combinatorial explosion of possible moves). The values are assigned by simulations (that start randomly and are later adjusted via backward updates). In the context of Go, MCTS plays a role similar to the one played by the so-called “minimax algorithms” (see below) in the game of chess.
  • [2] Its successor, Alpha Go Zero, introduced some architectural novelties (e.g., the combination of the two neural networks into a single, larger one) as well as the capability of learning from self-play rather than from a huge training set represented by thousands of hours of expert games.
  • [3] The minimax is a decision procedure used to minimise the loss function in a worst-case scenario. Similarly to MCTS, this procedure - when applied to the game of chess - establishes at each node of the game (also formalized as a “tree”) which branch leads to a position of maximum advantage for the system and minimum advantage for its adversary. Of course this evaluation cannot be done on the whole “game tree” via “brute force” algorithms (since this leads to a combinatorial explosion) and different “bounds” can be applied. Deep Blue, for example, was able to “see” and apply this procedure from 12 to 40 turns ahead with respect to the current situation in the game (see Hsu, 1999). Kasparov, of course, did not have this possibility.
  • [4] Anecdotal facts are not scientifically relevant and there is no exception here. However, sometimes they can help give an idea of the state of affairs. Personally, I remember at least a couple of recent occasions, one at the International Joint Conference on Artificial Intelligence with a very famous researcher in NLP, and another one during a “Cognitive Computing Symposium” organised at ESCOP 2015 (sponsored by IBM Research, to which I was kindly invited by Antonis Kakas, Loizos Michael, and Irene-Anna Diakidoy) where I expressed these basic concerns and my words came as a surprise to some (too many) researchers in the audience (including those from IBM).
  • [5] The video of this moment of the Jeopardy! game is also available on YouTube: https://www.youtube.com/watch?v=Y2wQQ-xSE4s&t=3s.
  • [6] As reported by the MIT Technology Review in November 2018, the strategy used by Google to “fix” this problem has been one of censoring image tags relating to many primates: https://www.technologyreview.com/2018/01/11/146257/google-photos-still-has-a-problem-with-gorillas/.
 