
The Personal Cog Revolution

None of the existing systems described above are personal cogs, meaning the system neither engages in personalized collaboration with its human user nor builds a history with that user. Cogs developed so far are tools a user employs to partially automate part of their work. We think the future will belong to personal cogs. Cogs should know the human they are working with the way a human colleague or co-worker would. A personal cog should remember past interactions with the human. Personal cogs should also know and understand the human's professional, personal, and social context and use this knowledge, together with their memory of past experiences, to serve the human better in the future by tailoring interaction and cognitive services accordingly. When cogs become personal in nature they will become our colleagues, and we predict this will be a flex point in cognitive systems history.

Forbus and Hinrichs (2006) describe companion cognitive systems as software collaborators that help their users work through complex arguments, automatically retrieve relevant precedents, and provide cautions and counter-indications as well as supporting evidence. Companions assimilate new information, generate and maintain scenarios and predictions, and continually adapt and learn about the domains they work in, their users, and themselves. Companions operate continuously over weeks or months at a time and must be capable of high-bandwidth interaction with their human partners. Forbus and Hinrichs recognize such a companion would need to maintain several models:

• Situation and domain models: the current problem and relevant knowledge

• Task and dialogue models: the shared task of the human/computer partnership

• User models: the preferences, habits, and utilities of the human partner(s)

• Self-models: the companion's own understanding of its abilities and preferences
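These four models can be pictured as a simple per-session data structure a companion keeps for each user. The sketch below is purely illustrative; the class and field names (CompanionModels, SituationModel, and so on) are our own assumptions and are not taken from Forbus and Hinrichs (2006).

```python
from __future__ import annotations
from dataclasses import dataclass, field

# Illustrative only: class and field names are hypothetical, not from Forbus and Hinrichs (2006).

@dataclass
class SituationModel:            # current problem and relevant knowledge
    current_problem: str = ""
    relevant_knowledge: list[str] = field(default_factory=list)

@dataclass
class TaskDialogueModel:         # shared task of the human/computer partnership
    shared_task: str = ""
    dialogue_history: list[str] = field(default_factory=list)

@dataclass
class UserModel:                 # preferences, habits, and utilities of the human partner(s)
    preferences: dict[str, str] = field(default_factory=dict)
    habits: list[str] = field(default_factory=list)

@dataclass
class SelfModel:                 # the companion's understanding of its own abilities
    abilities: list[str] = field(default_factory=list)
    limitations: list[str] = field(default_factory=list)

@dataclass
class CompanionModels:           # everything the companion keeps for one user session
    situation: SituationModel = field(default_factory=SituationModel)
    task: TaskDialogueModel = field(default_factory=TaskDialogueModel)
    user: UserModel = field(default_factory=UserModel)
    self_model: SelfModel = field(default_factory=SelfModel)
```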

A distributed system of agents manages the session with a human in a companion role. Each agent in the system performs a specific function. We next envision a synthetic colleague going beyond the Forbus/Hinrichs model.

The Synthetic Colleague Model

Our goal in this chapter is to envision a cog able to serve as a personal professional research colleague. We call this research cog Synclair. Synclair is based on the Model of Expertise discussed in Chapter 7 and is shown in Fig. 11-1.

To be an effective colleague, it is critical for Synclair to know the researcher at both a professional and personal level. To do this, Synclair maintains a number of models of and about the researcher; these are shown in Fig. 11-1.

Fig. 11-1: Synthetic research colleague (Synclair).

As described below, the Mgesture and Mbody models enable Synclair to read and understand the researcher's gestures and body language. The dialog model Mdialog allows Synclair to tailor the ongoing visual, textual, voice, and mixed reality dialog to the current activity (Mactivity), current context, and the researcher's preferences. Over time, the human researcher and Synclair will develop their own unique style of collaboration (Mcollaborate). Synclair maintains and constantly evolves the collaboration and dialog models with experience (KE). Overall, Synclair knows details about the researcher's research agenda and portfolio (Mresearch), recognizing the researcher may have more than one research agenda at any given time.
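One way to picture this collection of models is as a single profile object Synclair maintains per researcher. The sketch below is a hedged illustration: the names ResearcherProfile and record_interaction are hypothetical and the models are reduced to plain dictionaries and lists.

```python
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical sketch of the models Synclair keeps about a single researcher.
@dataclass
class ResearcherProfile:
    m_gesture: dict = field(default_factory=dict)      # gesture vocabulary (Mgesture)
    m_body: dict = field(default_factory=dict)         # body-language cues (Mbody)
    m_dialog: dict = field(default_factory=dict)       # dialog preferences (Mdialog)
    m_activity: dict = field(default_factory=dict)     # recognized activities (Mactivity)
    m_collaborate: dict = field(default_factory=dict)  # evolved collaboration style (Mcollaborate)
    m_research: dict = field(default_factory=dict)     # research agenda(s) and portfolio (Mresearch)
    k_episodic: list = field(default_factory=list)     # episodic memory of interactions (KE)

    def record_interaction(self, summary: str) -> None:
        """Append a timestamped interaction so later dialog can refer back to it."""
        self.k_episodic.append((datetime.now(), summary))

profile = ResearcherProfile()
profile.record_interaction("Reviewed a new CFP on cognitive systems together.")
```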

Social and Professional Network Assistant

Synclair knows and understands the researcher's social environment (Msocial) and professional environment (Mpro). These models contain names, contact information, and other information about friends, family, co-workers, and professional colleagues. Managing one's social and professional network is an important task for a researcher, and maintaining relationships requires constant attention. Synclair can periodically send emails, tweets, or social media postings on the researcher's behalf. As described later, Synclair is intimately familiar with the researcher's ongoing research agenda, so Synclair can send out notices to colleagues about the researcher's recent work, links to articles and papers, etc. These notices can be sent automatically, as defined in the Msocial and Mpro models, or sent upon request by the researcher.

Synclair also performs other tasks such as sending holiday or special occasion cards and notices, extending invitations to meetings and presentations, and, in general, serving as the editor of the researcher's virtual newsletter.
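A minimal sketch of how such notices might be dispatched is shown below. It treats the Msocial and Mpro models simply as contact lists and Mresearch as a list of publications; the function name announce_recent_work and the field names are assumptions, not an actual Synclair interface.

```python
from datetime import date

# Hypothetical sketch: notify contacts drawn from Msocial and Mpro about new items in Mresearch.
def announce_recent_work(m_social, m_pro, m_research, send):
    recent = [item for item in m_research["publications"]
              if item["date"] >= date.today().replace(month=1, day=1)]   # this year's work
    contacts = m_social["contacts"] + m_pro["contacts"]
    for person in contacts:
        if person.get("wants_updates", True):          # preference stored in the model
            for item in recent:
                send(person["email"], f"New paper: {item['title']} ({item['link']})")

# Example use with a stub sender that just prints the message.
announce_recent_work(
    {"contacts": [{"name": "A. Friend", "email": "friend@example.org"}]},
    {"contacts": [{"name": "B. Colleague", "email": "colleague@example.org"}]},
    {"publications": [{"title": "Cognitive Systems", "link": "http://example.org/paper",
                       "date": date.today()}]},
    send=lambda to, msg: print(to, "<-", msg),
)
```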

Conference and Journal Publications and Grants Assistant

Publishing papers and articles in conferences, journals, and magazines is a vital task for a researcher. The administrivia associated with this is formidable. Synclair can assimilate and monitor all calls for participation (CFPs) and requests for proposals (RFPs) relevant to the researcher's research agenda (Mresearch) and publications model (Mpubs) and assist in answering and tracking those submissions. Furthermore, Synclair can assist the researcher in meeting the rather demanding schedules and deadlines for reports and updates when conducting funded research.

Writing a grant proposal is often a nearly overwhelming task for a researcher. Most grant proposals require an extensively detailed definition of the project, the participants, outcomes, budget, institutional qualification, and supporting letters and materials. Synclair can assist in gathering and maintaining this information as well as keeping track of deadlines and other requirements associated with a grant proposal.
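The CFP/RFP matching and deadline tracking described above could be approximated by simple keyword overlap against the research agenda. The sketch below is illustrative only; the function names match_cfps and upcoming_deadlines and the data layout are our own assumptions.

```python
from datetime import date, timedelta

# Hypothetical sketch: match calls for participation against the research agenda (Mresearch)
# and surface approaching deadlines. Not an actual Synclair interface.

def match_cfps(cfps, agenda_keywords, min_overlap=2):
    """Return CFPs whose topic lists share at least min_overlap keywords with the agenda."""
    matches = []
    for cfp in cfps:
        overlap = set(cfp["topics"]) & set(agenda_keywords)
        if len(overlap) >= min_overlap:
            matches.append((cfp["name"], sorted(overlap)))
    return matches

def upcoming_deadlines(cfps, within_days=30):
    """List CFP deadlines falling within the next within_days days."""
    horizon = date.today() + timedelta(days=within_days)
    return [(c["name"], c["deadline"]) for c in cfps
            if date.today() <= c["deadline"] <= horizon]

cfps = [
    {"name": "Cognitive Systems Conf.", "topics": ["cognitive systems", "agents", "dialog"],
     "deadline": date.today() + timedelta(days=14)},
    {"name": "Robotics Workshop", "topics": ["manipulation", "SLAM"],
     "deadline": date.today() + timedelta(days=90)},
]
print(match_cfps(cfps, ["cognitive systems", "dialog", "synthetic colleagues"]))
print(upcoming_deadlines(cfps))
```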

Multi-Modal Interface

Synclair will certainly interact with researchers via spoken natural language. Dialog with Synclair will be conversational in nature. Optionally, Synclair may listen to the researcher's casual conversations (e.g., a phone call or office visit with a colleague) and extract details of interest to the research agenda (Mresearch) or any field/topic models being maintained (Mfield and Mtopic). Synclair must hear, listen, and learn from ambient conversation much as a "human in the room" does.

Dialog will also be contextual in nature, lasting over an extended period of time, even across significant gaps (minutes, days, weeks). Conversation with cogs must be as natural as speaking with a fellow human colleague. For example, the researcher may pose a question to Synclair and refer to a previous conversation, even one from days before. Synclair maintains episodic memory (KE) of every interaction and is able to include this knowledge in its question answering, dialog, and collaboration abilities to understand the researcher's reference.
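The episodic memory (KE) that lets Synclair resolve references to earlier conversations can be sketched as a timestamped log searched by keyword. The EpisodicMemory class below is our own illustrative construction, not the book's formal mechanism.

```python
from datetime import datetime

# Illustrative sketch of an episodic memory (KE): every interaction is logged with a
# timestamp so a later reference ("that paper we discussed last week") can be grounded.
class EpisodicMemory:
    def __init__(self):
        self.episodes = []                      # list of (timestamp, text) pairs

    def record(self, text, when=None):
        self.episodes.append((when or datetime.now(), text))

    def recall(self, keyword):
        """Return past episodes mentioning the keyword, oldest first."""
        return [(t, txt) for t, txt in self.episodes if keyword.lower() in txt.lower()]

memory = EpisodicMemory()
memory.record("Discussed the draft on gesture recognition with the researcher.")
memory.record("Researcher asked for recent AR papers.")
print(memory.recall("gesture"))
```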

However, natural language conversation is only a small portion of Synclair's information bandwidth. Synclair can acquire and deliver information from and to virtually any form of digital communication (vastly exceeding the capabilities of a human colleague). Synclair will produce and consume text, email, graphics, pictures, animations, and videos, as well as listen to sounds and music. Some of this information will be found by searching the Internet and other sources, but some of it will be encountered organically in the ambient environment surrounding the researcher.

Synclair will also observe the researcher via video camera and other sensors and be able to respond to gestures and body language. Synclair will be able to read the researcher's body language, as described in Chapter 10, but also be able to understand gestures. A gesture is "a movement of part of the body, especially a hand or the head, to express an idea or meaning" (Lexico, 2019). Beginning with Myron Krueger's artistic experimentation with a light box and cameras in the mid-1980s, gesture recognition has evolved over 30 years (Krueger et al., 1985). Also in the mid-1980s, the DataGlove became the first commercially available glove, designed for the Apple Macintosh and using a magnetic positioning and orientation system. In 1986, the NASA Ames Research Center used the DataGlove in a virtual environment project to test its functionality in environments, such as space or other planets, proving too dangerous for direct human interaction. The use of magnetic tracking was well suited for the potential environmental unknowns in an extraterrestrial setting (Zimmerman et al., 1987).

Microsoft's introduction of the Kinect game controller on the Xbox 360 in 2011 was a big jump from gesture recognition research to commercial applications. Kinect used body movements and voice commands to provide a natural user interface. The Leap Motion Controller, designed to read sign language and released in 2013, was a 4-inch x 1.5-inch device able to view an area up to one meter from the device (Potter et al., 2013). BMW has recently released a gesture recognition tool for limited use. The user is able to control the volume, navigation, recent calls, and the turning on and off of the center screen in an automobile (Burns, 2019). Motions can be detected as far away as two feet from the screen/scanner. A user resting an arm on the center arm rest in the front seat can point a finger at the screen and make a spinning motion to control the volume. Not having to look at a touchscreen and manually turn a dial or push a button certainly allows greater safety while driving. Synclair can use similar technology to facilitate gesture-based communication and control on the part of the researcher.

Many recent systems use cameras on the device to detect gestures. Most major smartphone manufacturers are planning to include gesture recognition in near-term models of their products (Goode, 2018). The Samsung Galaxy S4 introduced SmartPause in 2013. SmartPause paused a video when the smartphone detected users were not looking at the screen and restarted the video when users looked back at the screen. The Google Nest Hub detects hand gestures to start and stop content being displayed on the device.

Synclair will observe and respond to the researcher's body language and gestures as defined in the Mgesture and Mbody models. This is necessary for Synclair to understand the researcher's context so it can tailor interaction, dialog, and collaboration with the researcher. For example, if the researcher enters the office and begins typing on the computer keyboard hurriedly, but does not sit down, Synclair can infer the researcher is busy and would therefore refrain from beginning a lengthy dialog. If, however, Synclair observes the researcher sitting down at the desk, leisurely leafing through some items on the desk, and sipping coffee, it can infer the researcher is in a relaxed state and intends to be at the desk for some time, and it might determine it is a good time to discuss some recent work Synclair has discovered.

Synclair uses its observations (T) and activity models (Mactivity) to judge the researcher's immediate intentions. Synclair then uses the researcher's dialog model (Mdialog) to tailor its interactions with the researcher. For example, Synclair may have learned the researcher does not want the early morning routine to be interrupted with information about recently discovered new work. Over time, Synclair continually modifies the activity and dialog models, forming a unique and evolving relationship with the researcher.
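This kind of judgment, deciding from observed behavior and learned preferences whether now is a good moment to start a dialog, can be caricatured as a few rules over the activity and dialog models. The function and cue names below are hypothetical; a real system would learn such rules rather than hard-code them.

```python
# Hypothetical rule-of-thumb sketch: combine observed cues (stand/sit, typing pace, coffee)
# with a learned dialog preference (Mdialog) to decide whether to open a conversation.

def good_time_to_talk(observation, m_dialog):
    """observation: dict of behavioral cues; m_dialog: learned dialog preferences."""
    if m_dialog.get("no_interruptions_before", 0) > observation.get("hour", 12):
        return False                                   # e.g., leave the morning routine alone
    if observation.get("standing") and observation.get("typing_quickly"):
        return False                                   # researcher looks busy
    if observation.get("seated") and observation.get("relaxed"):
        return True                                    # leisurely at the desk
    return False

print(good_time_to_talk({"hour": 10, "seated": True, "relaxed": True},
                        {"no_interruptions_before": 9}))
```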

Other forms of multi-modal interaction are: augmented reality (AR), enhanced reality (ER), mixed reality (MR), and virtual reality (VR). Virtual reality involves a completely artificial visual field, usually in 3D, viewed by a person using a head-mounted device obstructing any view of the real world.

Augmented reality refers to computer-generated information provided as a visual overlay on one's view of the real world (Caudell and Mizell, 1992). An example is the heads-up display (HUD) in an aircraft. As the pilot views reality through the HUD, computer-generated information is displayed onto the HUD and incorporated into the pilot's view.

In AR, the digital information is not considered part of the real world but instead is superimposed onto the real world. However, in mixed reality, digital information is placed in the field of view so it appears as if it is in the real world thereby blending artificial and real objects (Milgram and Kishino, 1994). An example of MR is a computer-generated character appearing to sit at a table alongside real humans.

In AR and MR, the added digital information is not visible unless one looks through an enabling device. In enhanced reality, digital information is displayed into the real world and is visible without the need of any special tool or appliance (naked-eye visibility). An example would be a mobile robot projecting an animated pathway on the floor in front of it to convey to people that it is safe to walk there.

Synclair is able to communicate with the researcher via a combination of AR, ER, MR, and VR. In fact, the beginnings of this technology can be seen today in devices like Google Glass, Microsoft HoloLens, and Oculus Rift. There are also many AR and MR apps and games on smartphones and tablets in use today.
