
The next steps

Abstract

This concluding chapter synthesizes the main issues presented in the book and tries to provide a roadmap for the coming years of cognitive AI research, by suggesting fields where the cognitive design approach can provide valuable input for the realization of better AI systems.

The road travelled

In the previous pages of this volume, we introduced and touched upon themes that have been debated for decades by computer scientists, philosophers, engineers, psychologists, and, more generally, by scholars working in the interdisciplinary fields of Artificial Intelligence (AI) and Cognitive Science. The time has come to put the main themes in order and to identify some of the most relevant ideas conveyed in the book.

The first idea concerns the observation that, from a modelling perspective, there is no definitive "winning" method in the "sciences of the artificial" (to use an expression coined by Herbert Simon). As we have seen, different approaches are useful for modelling certain classes of cognitive phenomena, but no single approach can account for all aspects of cognition. As a consequence, the road travelled so far suggests that all the different types of models and modelling paradigms are needed, and that an important goal for current and future research should be identifying how to integrate them in a scientifically principled, non-ad hoc way. In this respect, the research area of cognitive architectures - given its more than 40 years of experience with the challenges of realizing integrated intelligent systems - could be a field that gains major attention from the AI and cognitive robotics communities (which are nowadays mainly focused on building super "intelligent" but narrow applications). This research area will also certainly retain its central role within the computational cognitive science/cognitive modelling community.

The second main idea condensed in the book is that it is not sufficient for an artificial system to achieve human (or super-human) level performance in specific tasks for it to earn the label of "cognitive system".
In particular, we have seen how "functional" systems (in the sense explained in the book) cannot be considered artificial models of cognition unless they are additionally equipped with "structural constraints". Without these, they cannot play any explanatory role with respect to the analogous natural systems executing the same tasks and manifesting the same behaviour. In a period of AI hype and propaganda, driven mainly by the media and the marketing departments of companies and academia rather than by scientific and technological results, it is important to keep in mind these basic but important distinctions, in order to avoid unfounded conclusions based on a wrong and manipulated perception of scientific facts. Among the facts to keep in mind are, at least, the following: (1) we are still far from building machines able to exhibit, in a satisfactory way, human-level abilities (and this holds even if we consider employing machines not as our "peers" but just as useful artificial companions; consider how minimal the help provided by robotics and AI systems was during the first wave of the COVID-19 crisis, despite the big headlines in newspapers); (2) we are still far from understanding many aspects of how the mind and brain work. Of course, some progress has been made in the last few decades, but a full account requires more research and joint effort.

The third main idea proposed in the book concerns the need for a comparative tool for evaluating the "structural accuracy" of different AI systems (in particular, those biologically or cognitively inspired). Despite the enormous amount of literature devoted to this theme, a real methodological and practical operationalization of what such an expression really means has been lacking, and this has produced a large body of literature that treats the expression in a way too ambiguous and vaguely defined to have scientific impact.
To fill this gap, I have proposed the Minimal Cognitive Grid (MCG): a simple methodological tool that can be used to practically project and compare different kinds of artificial systems along the "structural accuracy" dimension. As reviewed in the previous chapters, this tool provides a non-subjective, graded evaluation that allows both quantitative and qualitative analyses of the cognitive adequacy and the human-like performance of artificial systems in both single- and multi-tasking settings. In principle (and in prospect), the psychometric characterization of one of its composing dimensions (in particular, the "performance match") could also be useful for evaluating human-level performance in both narrow and unrestricted settings.

The fourth idea defended in the book, which was in some sense the fil rouge of the entire narrative, is that in order to avoid the myopic euphoria about AI described above and, on the contrary, to make real progress from the scientific point of view, we need a renewed pact and collaboration between AI and Cognitive Science researchers. In particular, such collaboration is needed both to address some of the main limitations of modern AI technologies and to improve the understanding of our mental and brain activities via computational simulations. This collaboration will probably not need to extend across the two entire fields (for example, systems that will not communicate or interact with humans but only with other machines do not require any "cognitive interface"); however, there are research areas where the lack of progress is probably due, apart from the intrinsic complexity of the problems to be addressed, to the fact that the two areas nowadays do not talk to each other.
