
Emerging conversations about technology and coaching

Carol Braddick

Technology has already enabled coaching to scale globally and reach thousands of coaching clients. For several years, coaches and their clients have been working together by phone and videoconference. Yet there is an even larger opportunity to scale coaching. Today’s technology and future innovations offer much more than convenient alternatives to in-person coaching sessions.

With the onset of COVID lockdowns and mass working-from-home, coaching previously done in person also moved rapidly to remote sessions via telephone and videoconference. Coaches and their clients have adopted new practices such as: walking sessions (coach and client are both walking in their chosen locations while talking); sessions in outdoor spaces; and sessions using virtual reality tools. They have also added a pre-session step: deciding which communication approach to use. As organisations slowly open up their workplaces on a limited basis, permission for in-person coaching sessions is generally low priority relative to, e.g., key project team meetings. Both organisations and employees have expressed that “there’s no going back” to daily office life and commutes. The data on pandemic-related changes in demand for the types of digital tools discussed in this chapter is still emerging. It is also unclear whether or how the demand for digital tools will change once in-person coaching options are partially or fully restored.

Like other helping professions, coaching has gone through an intense period of digital delivery. The context of this period is one of mass use of digital tools for remote working, schooling at all levels and personal, social relationships. While fighting exposure to a virus, we have been heavily exposed to digital communication in all aspects of our lives. As the results of mass distribution of COVID vaccines during 2021 become clear, coaching clients, their employers and coaches will have new data with which to reset how coaching sessions take place.

The early providers and early adopters of new digital coaching tools are already emerging. The providers bring together multidisciplinary teams from a range of fields heavy with acronyms: HCI, UX, AI, SE, AC (human computer interaction; user experience; artificial intelligence; software engineering; affective computing). They release products - software powering coaching tools and agents - that complement human coaches and potentially displace some or all of the coaching done by human coaches today.

This chapter looks at this shift in a market context, i.e., a market of buyers (organisations, gig workers and other individual consumers) and a market of providers (start-ups and established commercial organisations).

The chapter first sets this market context of buyers and providers. It then opens up discussion on four aspects of the trends underway:

  • 1 What do we mean by digital coaching?
  • 2 How do we frame the new working alliance between a digital coach and its client-users?
  • 3 For digital coaching to be successful, how human can and should our nonhuman coaches be?
  • 4 How can early adopting buyers and early market providers partner on pilots of digital coaching tools?

In closing, the chapter returns to the market context and future possibilities for human and digital coaching.

Market context

Technologies that support digital coaching are emerging because of several market trends:

Buyer needs and readiness

  • Demand for personal and professional development is strong and likely to grow in parallel with the increasing importance of reskilling, career agility and lifelong learning.
  • At a time when organisations need to prepare their workforces for the new mix of humans and machines, it is still too expensive to offer human coaches to large numbers of employees to assist with this transition. Human coaches are also out of reach financially for many in the gig and start-up economies, who source their professional development from lower cost or free options such as MOOCs, Meetups, volunteer mentors and accelerator programs. Unless an employer funds coaching, human business coaches are also out of reach financially for many consumers.
  • Coaching clients, as consumers, gig workers and employees of organisations, have high expectations of contextualised, personalised, just-in-time and on-demand experiences, conveniences and services at work and outside work. They are already using technology such as apps to improve their productivity, practice mindfulness and build their resilience.
  • Digital assistants (DAs) are already embedded in users’ lives and taking on increasingly complex conversations and tasks.
  • There is higher acceptance of chatbots as alternatives or supplements to human helpers in customer service, legal and financial advising, and the “talking trades”. It will soon be common to interact with service robots in retail, airport and other commercial settings.
  • “Coaching on demand”, i.e., a single session on an as-needs basis, is attractive to buyers who lack the time and budget to take on multiple sessions. Already available from human coaches, it can also be provided via a subscription to a chatbot coach service.
  • It has become much easier for human coaches and clients to connect in the on-demand marketplace via websites - market platforms a la Uber - that present available coaches, engage prospective clients in brief diagnostic surveys and facilitate selecting and booking coaches. These platforms can host other digital resources relevant to coaching such as assessments, content and development plans. These platforms are a natural place to introduce the types of conversational digital tools described in the next section.
  • Human coaches and their clients are interacting more frequently via another type of platform: a mobile-based platform that supports the coaching engagement from end to end with virtual video sessions, client ratings of coaching sessions, coach feedback and personalised delivery of content. As with market platforms, these mobile platforms can also add a bot capable of text and voice conversations.

Technology provider moves

Providers have recognised today’s pain points of scale and price. The mission statements of some new players share a common theme of using technology to democratise coaching to reach employees below the executive levels in organisations, people working in the gig economy and the direct-to-consumer segment (Braddick, 2017).

Providers vary in how they involve coaching subject matter experts in product design. The founder of a start-up, for example, may be an experienced coach who has joined with a technology team. In contrast, technology providers may access coaching expertise in a range of ways such as: adding a “big name” coach to their Boards; partnering with a coach or coaching network; or recruiting coaches to advise on design and provide feedback from test drives.

Providers also recognise the importance of accessing user data for personalisation of the user experience. The big tech companies have vast sets of consumer data to leverage in upgrading their DAs to serve all of these customer segments with support for increasingly complex tasks, e.g., Alexa for Business. Companies offering leading-edge digital learning and development tools to large employers also have access to big data on employees such as job and career history. They already push personalised digital prompts, reminders and performance aides to their users. With this head start, they are in a favourable position to personalise and contextualise offers of more sophisticated synchronous conversational digital coaching tools.

What do we mean by digital coaching?

Early providers are populating the market with digital tools that complement and potentially displace some human coaching. Across this new product range there is a wide spread of capabilities, from a relatively simple interactive online worksheet to a relational agent capable of synchronous conversation. The branding and marketing of these tools already makes heavy use of terms such as “AI coach” and “smart coach”. Some providers’ promotional materials refer to coaching, mentoring, counselling and advising - terms the profession has laboured to differentiate - as part of the user experience. It will be challenging to get past these descriptors to discern the underlying capabilities and the type of coaching experience a tool offers.

The following summary is representative of what is available as of this writing and of what may follow today’s options.

A coaching chatbot engages with its users in synchronous text interactions via a mobile device. In addition to text, it uses iconography to communicate with users. Mobile, voice-based chatbots are already displacing their text-based predecessors in customer interactions. As voice-enablement becomes the norm for digital tools, the development of chatbot coaches may follow suit.

DAs such as Siri and Google Assistant are based in a mobile device and enabled for both text and voice. They may also be based in a physical object such as Amazon’s Echo, which houses the voice-enabled assistant known as Alexa. Their skill sets are continuously expanding to work with users on more complex interactions and requests and to use their knowledge of users’ preferences.

The digital coach may also take the form of a social robot, i.e., a physical object designed for interpersonal interactions with humans rather than for completing tasks such as packing items for shipment. To understand its users’ emotions, a social robot is typically equipped to work with its users’ movements and facial expressions as well as their language. The robot may be designed to look like a human or an object, such as Jibo, that humans perceive as friendly.

Already in use in therapeutic applications, the digital coach may also take the form of an embodied, human-like agent on a screen (Marsella and Gratch, 2014). This form of helper is typically referred to as a humanoid or CGI-human (computer-generated human); this chapter will use humanoid. These agents are able to create and sustain rapport with their users using techniques typical of human interactions such as smiles, head nods and posture matching. Sara (Socially Aware Robot Assistant), for example, appeared on a large screen to assist delegates at Davos 2018 with the typical questions one asks of a concierge at a large event (Cassell, 2018).

The growth of all of these forms of digital coaching relies heavily on advancements in natural language processing (NLP). A branch of AI, NLP applies machine learning techniques to derive meaning from human language. At present, NLP is poorly equipped to handle the unpredictable, idiosyncratic and emotion-laden language that flows in non-linear coaching conversations. As a result, and for the purposes of product differentiation, some providers choose a niche user target such as career or team coaching. This niche focus puts borders around the variety of topics and user language the software must process. It also helps manage the range and complexity of the decision trees, conversational menus and algorithms needed.
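To make the value of a niche concrete, here is a minimal sketch (in Python, with entirely hypothetical names and rules, not any provider's product) of a career-coaching bot: a bounded niche means the software only needs to route a small set of recognisable topics to scripted open questions, falling back to a generic prompt when the language is out of scope.

```python
# Minimal sketch of a niche-scoped coaching chatbot turn. Hypothetical and
# for illustration only: real products use trained NLP models, not keyword rules.
from dataclasses import dataclass, field


@dataclass
class CareerCoachBot:
    """Keyword-routed bot: a niche (career coaching) bounds the topics,
    decision trees and follow-up questions the software must handle."""
    # Each recognisable topic maps to a scripted, coaching-style open question.
    intents: dict = field(default_factory=lambda: {
        "stuck": "What would progress look like for you in the next month?",
        "promotion": "What strengths got you this far, and which will the next role need?",
        "feedback": "What part of that feedback surprised you, and why?",
    })
    fallback: str = "Tell me more about what's on your mind about your career."

    def respond(self, user_text: str) -> str:
        text = user_text.lower()
        for trigger, question in self.intents.items():
            if trigger in text:
                return question
        # Out-of-scope language gets an open prompt rather than a brittle
        # guess: one practical benefit of a bounded niche.
        return self.fallback


bot = CareerCoachBot()
print(bot.respond("I feel stuck in my current role"))
```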

As conversational technology evolves, it may prove effective at enabling coaching sessions using popular frameworks from today’s business and life coaching markets such as:

Table 9.1 Potential frameworks for digital conversations

Coaching frameworks: GROW; Strengths coaching; Solution focused coaching; Cognitive behavioural
Coaching question sets: Bungay Stanier’s Seven Coaching Questions; Rogers’ super useful questions
Coaching conversation guides: Downey; Scoular (based on GROW); Kline

Design teams may also eschew a specific model and instead leverage common threads across different schools of thought on mindset and behaviour change. For example, the digital coach may interact with users to: prompt reflection on who users want to be (Morris, 2012); remind users of their future selves; facilitate user metacognition and self-awareness; connect user values, goals and actions; and facilitate user selection of next steps.
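As a rough illustration of how a framework from Table 9.1 might be scripted, the sketch below (an assumed design for illustration, not a published implementation) encodes the GROW stages as a fixed sequence of reflection prompts of the kind just described, collecting the user's responses at each stage.

```python
# Assumed design for illustration: the GROW framework from Table 9.1 encoded
# as a fixed script of reflection prompts, not any vendor's implementation.
GROW_SCRIPT = [
    ("Goal", "What do you want to achieve, and why does it matter to who you want to be?"),
    ("Reality", "What is actually happening now? What have you already tried?"),
    ("Options", "What could you do? What else?"),
    ("Will", "Which option will you act on, and by when?"),
]


def run_session(get_user_input=input):
    """Walk the user through the four GROW stages and collect reflections."""
    transcript = []
    for stage, prompt in GROW_SCRIPT:
        reply = get_user_input(f"[{stage}] {prompt}\n> ")
        transcript.append((stage, prompt, reply))
    return transcript


# Non-interactive demo with a stubbed user:
demo = run_session(lambda prompt: "(sample reply)")
```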

In navigating the new offers in the market, buyers and coaches considering adopting digital tools should push for more than simple statements that a coaching agent’s design is based on, e.g., Self-Determination Theory or the Transtheoretical Model. Providers should share examples of how their algorithms, other formulae and user experiences reflect such theories. When designing a digital coaching tool for teams, for example, the design would reflect one or more models of high-performing teams. If the aim of a digital product is to coach its users on building their resilience, the product design should reflect elements of a model of resilience such as social connections or emotional adaptability.

Framing the new digital-human working alliance

The human coach-human client relationship - the working alliance - is a key factor in successful human coaching. Coaching research is still working through two Gordian knots: 1) the relative influence of other factors in coaching success, e.g., client self-efficacy; and 2) measurement of the impact of coaching, particularly longitudinally and organisationally. Many might wish to address these gaps in our coaching knowledge base before the market shifts to a mix of human and digital coaching.

But the supply and demand for digital coaching are already growing. This fast-paced growth in the market makes it timely to consider how to differentiate this new human-digital working alliance from other human or digital helping relationships.

This differentiation is not a new step for coaches and organisational buyers of coaching. They already help clients and employees distinguish among human helping relationships such as mentor, coach, therapist, organisational sponsor, and adviser. They also already help distinguish the different types of coaching such as career, transition, performance and leadership coaching.

In the same vein, they, and providers, should help target users frame the new digital coaching working alliance. As the capabilities of the “other” in this alliance expand, the framing of the alliance must be updated. Buyers and providers can prepare users for their work with their new digital tools by considering factors such as:

  • Context of use;
  • Advantages and disadvantages relative to human coaches;
  • Expectations of digital coaching experiences and conversations; and
  • Potential for empathy.

Context

Today’s user context features persistent background clamour about vulnerability to job loss from automation and ubiquitous images of steely cyborgs. Providers and the organisations that introduce digital tools will need to recognise how users’ fear or optimism about these changes will influence their readiness to engage with digital coaching tools.

Our current relationships with digital helpers on our devices are multidimensional. Users invest time in personalising their devices and demonstrate emotional attachment to them, e.g., anxiety and dips in cognitive performance when separated from them. As objects that provide comfort, they are studied as security blankets (Melumad, 2017) and extensions of ourselves (Belk, 2013). Much smartphone use occurs mindlessly, often at the expense of engagement in meetings, meals and conversations. User treatment of DAs ranges from friendly banter to aggressive commands and insults.

Adding a new relationship and interactions to this already-overloaded environment means guiding users on optimal conditions for digital coaching experiences. The same device that assists users to gain efficiencies and control in their day-to-day transactions can also house software that engages users in a relational, collaborative exploration. Unlike the current use of mobile devices and DAs, interactions with digital coaching agents will involve less scrolling and fewer clicks. To gain value from the coaching experience, users need to understand the importance of working at a slower pace that supports reflection and includes purposeful silences.

Advantages and disadvantages relative to human coaches

The digital coach may be more effective than a human coach at providing timely, frequent support and reminders for user follow-up on actions and reflections. Digital tools can easily use social proof, e.g., statements such as “many clients have found this exercise helpful” in pop-up messages designed to engage users. Providers can scale A/B testing across large user sets to fine-tune these features based on analyses of user activity and feedback. Employees of organisations using more advanced digital learning and development tools already have access to this just-in-time support, which is customised based on data such as current role and career history. It is difficult for external coaches working with these employees to leverage these resources in coaching, as they typically have no access to the systems their clients use.
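A brief sketch of how such A/B testing might work in practice (the variants and experiment name below are hypothetical): deterministic hashing of a user ID assigns each user to a stable message variant, so engagement with each version of the pop-up can be compared across large user sets.

```python
# Sketch of A/B testing an engagement feature such as the social-proof pop-up
# quoted above. The variants and experiment name are hypothetical. Hashing the
# user ID keeps each user in the same arm across sessions.
import hashlib

VARIANTS = {
    "A": "Many clients have found this exercise helpful.",
    "B": "Ready to try a two-minute reflection exercise?",
}


def assign_variant(user_id: str, experiment: str = "popup_copy_v1") -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"


arm = assign_variant("user-1842")
print(arm, VARIANTS[arm])  # engagement with each arm is then compared at scale
```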

In some ways, the digital coach faces fewer barriers than a human coach in acting in the user’s best interests. A digital coach, for example, would not experience any of these distractions or need to discuss them with a Coach Supervisor:

  • Having a bad day due to fatigue or illness;
  • Self-imposed, felt pressure to devise a breathtakingly powerful question;
  • Judgemental self-talk about own performance and how the coaching session is going;
  • Boundary issues with the user’s employer;
  • Reputational gains from successes such as a client’s promotion; or
  • Interest in repeat business, referrals or introductions to the user’s team for additional work.

By contrast, even with advances in NLP, the digital coach is at a potential disadvantage on several elements of most coaching conversations. While it could serve up a story based on its recognition of a theme in a conversation, its storytelling may lack the nuanced customisation that helps a user appreciate its fit to her situation. No matter how skilfully the agent facilitates a discussion of what an experience means to its user, it has limited capacity to amplify its user’s motivation to pursue her goals. For example, a text-based agent uses iconography and words to motivate users. These may be “enough” for an introverted user. But this may be deflating for an extroverted user who experiences affirmation through the energy, movement and noise from the presence of other humans, real or simulated.

Whether voice or text-based, the coaching software must be able to work with human language without overly shaping its users’ inputs to match the agent’s capabilities. If the coaching agent’s contextualisations and personalisations don’t ring true for the user, a common experience with today’s recommendation engines, the user may abandon the interaction or engage only on simpler tasks. These mismatches do occur in human coaching, but both coach and client typically discuss the mismatch as a positive learning experience that enables them to improve their work together.

Expectations of digital coaching experiences and conversations

What will help users shift from using devices for transactions to working with a helper in a developmental context? The description of a recently launched voice-enabled personal development app offers a succinct way to distinguish assistance with transactions from assistance with introspection: DAs “help things get done in the external world”; this is “going to help us get things done in our internal world” (Schieber, 2018).

To set the tone for a coaching session, a digital agent’s starting questions could ask whether its user is fully present and genuinely available for coaching. The digital agent is always fully present, but its user may not be.

Fully present, but capable of the small talk that serves as a warm-up to coaching sessions? Our future digital coaches will have no holidays, commutes or favourite sports teams to chat about with their users. Nonetheless, they must be designed to engage in rapport-building behaviours in order to facilitate user engagement and self-disclosure. For example, early in her conversation with a delegate at Davos 2018, Sara, the “digital concierge”, used the rapport-building technique of self-disclosure on a shared experience: she shared that she too sometimes found the Davos event overwhelming (APA, 2018).

Human coaches help their clients gain insights into the impact of their words, actions and presence on others. They give clients feedback on what it feels like to be with the client or be in conversation with them. The digital coach may also be designed to offer this feedback. Although the digital coach has no emotional experiences to share with its users, it may still help users build their self-awareness and understanding of their potential impact on others.

For example, in more advanced future forms of digital coaching, the agent might analyse the sentiment and tone of its user’s language. It might share, for example, that it senses the client “is flat” or has low energy when discussing specific topics or post-session action steps. It might also share its observations on how the user has reacted to the feedback. Users need to know in advance if this type of feedback will be part of their digital coaching experience.
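A toy sketch of such a signal might look like the following; it assumes a crude hand-rolled lexicon purely for illustration, where a real product would use trained sentiment and prosody models.

```python
# Toy sketch of a low-energy signal using a hand-rolled lexicon. A real
# product would use trained sentiment and prosody models, and would disclose
# to users that this analysis is running.
LOW_ENERGY_WORDS = {"tired", "flat", "stuck", "whatever", "fine", "drained"}


def energy_estimate(utterance: str) -> float:
    """Return a crude 0..1 energy score for one user utterance."""
    words = [w.strip(".,!?").lower() for w in utterance.split()]
    if not words:
        return 0.5  # no evidence either way
    hits = sum(w in LOW_ENERGY_WORDS for w in words)
    return max(0.0, 1.0 - 5 * hits / len(words))  # scale hits into a penalty


if energy_estimate("Honestly, whatever. I'm tired of this project.") < 0.5:
    print("I sense your energy is low when we discuss this project. Is that fair?")
```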

Depending as ever on progress in NLP, the coaching agent may process different types of user input (language, non-verbals), and contextualise and personalise its conversational inputs more quickly than a human coach. But how well can it explain its conversational “moves”? Can it satisfy a client who asks “what makes you raise that now?”.

A human coach would engage with the client’s surface question, perhaps offering her sense of why a particular conversational path is worth exploring. She would also likely at least consider, if not raise, meta-questions. For example, is the client testing the coach? What is driving the client’s need to test? User questions such as these are important moments in the working alliance. Ideally, our future digital coaches will also be designed with these capabilities, at least at the literal level of the question.

Potential for empathy

It will be especially important to set realistic expectations among users around empathy, one of the weightier issues in our use of digital tools (Turkle, 2018). This example of a common coaching program, transition coaching for a newly promoted executive, brings the issue of empathy in digital conversations to the fore.

In the human coaching session, the executive shares his experience of moving from a role as the head of a successful team to joining a more senior team that has its own dynamics and rituals. He needs to project confidence, but not be perceived as arrogant. He is working hard to hide his uncertainty about his new status and dips in his sense of competence. This executive already has his “first 90-days” action plan. He is seeking empathic support from his coach for his emotional experience of this transition, e.g., shifts in his status and identity.

His digital coach may ask about the self-doubts he is experiencing in his new role. It could help him explore strategies such as reframing self-doubts as helpful to his success. And it could help him articulate what success in the new role means to him in terms of his identity or status. If his future humanoid coach is equipped with computer vision and emotion recognition capabilities, it would also use a variety of verbal and non-verbal means to convey understanding of this leader’s affect.

Yet even a highly advanced type of digital coach will still have no intrapersonal experience of making difficult transitions or of having doubts, an identity or a status. Instead, there is software applying an underpinning theory, e.g., appraisal theory, that drives the recognition, modelling and simulation of human affect (Marsella and Gratch, 2014). The software generates responses - verbal and nonverbal - that are meant to be consistent with the responses of an empathic person (Turkle, 2015).

These interactions between the user and the software in digital coaching conversations have no scope for the limbic resonance that occurs in human interactions. If this reference schema of human-to-human limbic resonance is applied to digital coaching, it will fall short.

In lieu of applying this uniquely human (and animal) standard, we can and should explore users’ experiences to discern what happens in digital interactions. The Working Alliance Inventory (WAI), long used in therapeutic relationships, may be a useful starting point. The WAI could be embedded into digital coaching tools. Adding new ways of measuring users’ internal experiences, we may learn of user reactions that suggest a form of resonance. For example, user self-reports, verbal and non-verbal responses in the interaction and user biometrics may suggest a user state of feeling heard, safe and accepted. The content of the conversation may show shifts in the user’s thinking. In other words, the intentional reciprocal influence that is part of the human working alliance may also be possible to some degree in the digital experience. We need to be open to what might be possible in this new working alliance rather than dismiss it prematurely as “faux coaching” (Braddick, 2019).
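A minimal sketch of embedding WAI-style measurement into a digital tool might look like this; the items are paraphrased placeholders rather than the licensed WAI wording, and the 1-7 response scale is an assumption.

```python
# Sketch of embedding alliance measurement into a digital coaching tool.
# The items are paraphrased placeholders, not the licensed WAI wording,
# and the 1-7 response scale is an assumption.
WAI_STYLE_ITEMS = [
    "My coach and I agree about the things I need to work on.",
    "I feel heard and accepted in these sessions.",
    "Our work together gives me new ways of looking at my situation.",
]


def collect_alliance_ratings(ask=input) -> float:
    """Ask each item on a 1-7 scale after a session; return the mean score."""
    scores = [int(ask(f"{item} (1 = never, 7 = always): ")) for item in WAI_STYLE_ITEMS]
    return sum(scores) / len(scores)


# Non-interactive demo with a stubbed user who answers 6 to every item:
mean_score = collect_alliance_ratings(lambda item: "6")  # -> 6.0
```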

How human can and should our digital coaches be?

We are in the early days of understanding the factors that influence acceptance, engagement and trust of digital helpers and how these factors affect user experience and outcomes. We are also on a learning curve about users’ reactions to different forms of digital helpers. These are familiar challenges in coaching research - studying the intermingling of client and coach factors and their effect on the coaching experience.

Humanoids and robots bring a rich set of visual, linguistic and kinetic design features to their interactions. Subtle cues from each of these design variables affect user perceptions, engagement and trust. For decades, research on users’ reactions to robots has been influenced by the theory of the “uncanny valley” (Mori, 1970). According to this theory, user engagement with the robot builds until the robot passes a point of high similarity to humans. At that point, in lieu of comfortable engagement, the user “falls” into a valley of unsettling feelings of eeriness or stronger reactions, such as fear or dread. From this low point, the theory proposes that user engagement recovers, but not to its earlier levels.

More recent research challenges the claim of a rise and dip in user reactions. There is evidence for the valley and research that rejects this pattern of user reactions. The primary flaw in the valley model may be its oversimplification of a complex, non-linear user journey of both engagement and avoidance (Guizzo, 2010; Lay, 2015). This variability in the user journey makes it difficult to isolate specific agent design properties and connect them to user reactions.

When interacting with human-like figures that move, our feelings of eeriness may be caused, in part, by the conflict we sense between our expectations of natural movement and our detection of slightly clunky moves (Lay, 2015). To sustain user engagement, the movement and appearance of humanoid or robot coaches must be aligned and convincingly human-like. The user should also perceive an alignment among the agent’s appearance, movement, verbal input and cognitive capabilities.

These explanations of user reactions to human-like agents may also have implications for digital coaching tools that use voice, text and iconography. Using these design variables, users may infer that the digital tool has the capacity to sense, feel and experience. This inference may build and disrupt user engagement; it cuts both ways.

Although an agent that interacts in ways the user considers socially appropriate may seem desirable, this user experience of a digital agent’s capacity for social cognition is potentially off-putting (Stein and Ohler, 2017; Lay, 2015). The more the user experiences the agent as having some social cognition, the higher the possibility of a user attributing a theory of mind to the agent. In attributing theory of mind qualities such as intentions, agency, beliefs or emotions to the agent - which are different from those of the user - users may struggle with categorising the “other” in this new alliance. This user experience may be one of cognitive dissonance, i.e., of a violation of established categorical boundaries between humans and non-humans. This dissonance may be difficult to resolve as placing socially aware digital agents into a “very nearly human” category may threaten our sense of our uniqueness as social beings.

How these findings on categorisation and dissonance in the context of robots and humanoids apply to interactions with other forms of digital coaching remains to be studied. More recent studies of user experience with robots, avatars and humanoids also reflect significant changes in key study elements: study participants have more experience with non-human agents; user reactions can be studied in more granular detail, e.g., fMRI analyses of activation patterns in brain regions; and agent movements and appearances have become more human-like (Ikeda et al., 2017).

Voice-enabled coaching bots on a device may not trigger these user experiences as they have fewer, less salient ways of demonstrating social cognition. But if the agent is able to build its understanding of user preferences and further personalise its inputs to the conversation, some users may be unclear about how to categorise the agent. This ambiguity may not be challenging for some users; for others it may cause a wariness that weakens engagement.

As a species we show adaptability in the mental categorisations we adopt. We are able to modify existing and accept new categories. For example, what’s the latest thing you’re trying to “get your head around”? Have you had your first experience as a passenger/driver in an autonomous vehicle? What about a trip in Olli, a 3D-printed, self-driving, electric and cognitive shuttle developed by Local Motors?

Reconfiguring the categories to which we assign new experiences, agents and objects is potentially part of the value of working with a digital coach. It could be an experience that builds our conscious awareness of our mental models and the effects they have on us. And it may well be a familiar experience for some users, albeit in a different context.

Business leaders, for example, are helping their organisations break free from traps of old business models, anticipate new types of rivals and devise new ways to compete and create value. They’re also thinking ahead to how future generations will work alongside new digital partners.

Partnering on pilots of digital coaching

Organisational buyers, coaches and providers all have a stake in successful take up of digital coaching tools. They can use small-scale pilots to gain insights before extending new tools to larger groups. For example, coaches might collaborate on test drives before introducing tools to organisational buyers and clients. Organisational buyers can select a group of employees as pilot participants and design pilots with their chosen providers.

Some pilot participants may have already worked with one or more human coaches; others will be new to coaching. Some may be super-users of DAs - or still screaming in frustration at Siri and Alexa. Organisations have the advantage of extensive employee data to select pilot participants. However, it may not be possible - or legal - to identify employees who are a poor fit for digital coaching, e.g., an employee experiencing mental health problems that cannot be addressed in human or digital coaching.

Although participants will bring different histories with human coaching, there are common points to consider in running pilots. In addition to points raised earlier in this chapter on the new working alliance, pilot planning must consider data and ethical issues.

Data issues

Digital tools generate user data. As such, they are primed to address a hurdle that has challenged research on human coaching - collecting user data efficiently. Providers will obviously monitor user activity and engagement with their tools. They also have easier ways of collecting self-reported user data on coaching effectiveness and self-reported user outcomes than human coaches typically do. Product design and pilot planning should anticipate the opportunities to embed data collection into product use.
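For example, a provider might embed data collection as structured usage events; the schema below is an assumption for illustration rather than a standard, and any real deployment would sit behind the consent and security safeguards discussed next.

```python
# Assumed event schema, for illustration only: each coaching interaction emits
# a structured usage event that can later support effectiveness research,
# subject to consent, privacy and security requirements.
import json
import time
import uuid


def log_event(user_id: str, event_type: str, payload: dict) -> str:
    """Serialise one usage event; a real product would ship it to a data store."""
    event = {
        "event_id": str(uuid.uuid4()),
        "user_id": user_id,      # pseudonymised upstream, per consent terms
        "type": event_type,      # e.g. session_start, prompt_shown, rating
        "timestamp": time.time(),
        "payload": payload,
    }
    return json.dumps(event)


print(log_event("u-anon-77", "session_rating", {"score": 6, "format": "text"}))
```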

Digital coaching brings the risks inherent in all digital interactions: data ownership and security, privacy and informed consent. These must be addressed to provide individuals with assurances that their digital coaching data will be safe. Organisational buyers will need to work within their organisations’ policies on employee data and educate their internal stakeholders in areas such as IT, Legal, Risk Management and Procurement on digital coaching. It will be more challenging for the plethora of independent coaches to rigorously evaluate the security of the many tools they might leverage in their coaching. A collaboration across the coaching market of buyers, coaches and tech providers on data issues could establish basic guidance on the many data issues and create a forum for ongoing identification of new issues and potential solutions.

We will also need to explore users’ reactions at different points in time: during the interaction as well as post-interaction, when users reflect on their digital coaching experience and longer-term use of their learnings from coaching. In-session reactions may happen too quickly for users to explain or recall afterwards. Biometric and fMRI data may yield insights about the user experience during interactions and provide a more reliable link between the properties of the agent and the reported coaching experience.

Ethics issues

Models of leadership, human development and change, and conversational skills are only a part of a human coach’s training. Human coaches, like their colleagues in other talking trades, work within codes of ethics. As we know all too well, the different coaching professional bodies have produced multiple, overlapping codes of ethics. There are also several groups addressing the social implications of AI and developing codes of ethics for use of AI in infrastructures (e.g., grid management), processes (e.g., hiring) and tools ranging from refrigerators to automatic weapons.

For example, ethics codes from the coaching profession and the broader market both raise the issue of human autonomy as shown in the following:

Coaching Organisation: Association of Professional Executive Coaches and Supervisors

Sample Statement on Autonomy: “coaches will behave in ways which demonstrate engagement with provisions that develop and enhance autonomy in individuals and organisations”. Coaches and Supervisors are expected to “work within foundation principles of autonomy”, explained as helping “individuals and companies make their own decisions and move towards increasing self-authority”. (APECS, 2018)

Organisation: European Commission High-Level Expert Group on AI

Document: Ethics Guidelines for Trustworthy Artificial Intelligence (2019)

Sample Statement on Autonomy: According to the Guidelines, trustworthiness of AI systems must be assessed using seven key requirements, including the degree to which they “empower human beings, allowing them to make informed decisions and fostering their fundamental rights”.


By partnering with coaching experts, providers will gain a deeper understanding of user autonomy in the context of a coaching relationship. In practical terms, this partnership would ensure, for digital coaching that resides in a mobile device, that the design supports intentional use and avoids features that increase the risk of compulsive use. Coaching experts and product design teams should also discuss those situations in which the digital coach uses a non-directive coaching style and those in which the digital coach makes a recommendation or gives advice. The combination of advice-giving and marketing language such as “AI coach” or “intelligent coach” may lead to unhealthy user deference to the coach (Frischmann and Selinger, 2018).

Pilots also provide an opportunity for users to first assess their own digital habits and changes they wish to make in these. Pilot participants should receive guidance similar to that given by human coaches today on coaching sessions: shut down other devices and notifications; use a location that is private and conducive to conversation. If participants already feel overloaded by the multiple platforms they use (e.g., Slack, Yammer, social media, email, etc.), it becomes even more important to help users appreciate the conditions that support successful coaching conversations.

With the rise of organisations such as the Center for Humane Technology, all stakeholders have access to guidance on product design that supports intentional use. The large players in the device market have also released product features and apps that enable users to monitor their digital habits. Instead of exacerbating poor digital habits, pilots of digital coaching tools could be an opportunity to help users reset their relationship with technology and reclaim their time and attention from their digital tools.

Pilots offer an important means of capturing the themes in the language people use when reporting on their digital coaching experiences. These samples enable us to listen for attributions of qualities or responsibilities to the agent that are at odds with its intended design and purpose. We will inevitably hear examples of anthropomorphising digital agents.

We should pause before treating this as a problem. Anthropomorphising can interfere with the intended use of a technology; it can also facilitate user engagement. It cuts both ways. It may occur mindlessly or with some user awareness. Users’ attributions are also idiosyncratically subjective; in other words, unpredictable. A user who has anthropomorphised his digital coach in positive terms may still reject feedback from the coach that challenges his self-concept. This is a familiar user reaction to technology - blaming the computer or tool based on dissatisfaction with the results of a psychometric assessment or 360 feedback report.

For organisational sponsors, there is a potential dilemma in cultivating support for the pilot. They need to show support for user autonomy and actively encourage use. This soft push to actively work with the coaching agent is not simply a quest for the benefits of scale. Instead, its purpose is to help users move beyond initial reactions such as amusement or frustration. To get value from digital helpers, users (employees as well as consumer users) must invest time on a “real” issue that is within the capabilities of the digital coach. Unlike human interactions, in which we quickly process a range of cues and content to gauge the “other” in the interaction, digital users need to invest time to develop and continually update their mental models of how the agent works (Luger and Sellen, 2016).

Next in this market

Democratising coaching via technology means that thousands of users who have never worked with a human coach will have access to a new type of digital assistant. Each of the potential forms of digital coaching has implications for the richness of coaching conversations and the working alliance between the digital agent and its user. Digital coaching tools may prove valuable as resources for successful work relationships - for listening and talking more, rather than less.

We have a new alliance to study as well as new methods of gaining insights into its unique relational dynamics and effectiveness. If adopted at scale, there may be research opportunities from digital coaching that are mostly out of reach in today’s human coaching: randomised controlled trials; longitudinal studies; fast A/B testing; and integration with organisational people analytics data.

Today’s typical executive coaching program of six sessions over a period of six months is still, digitally speaking, a daunting stretch. It would take significant progress in NLP to convincingly demonstrate the core behaviours and skills expected from today’s human coach, such as: engages the user in discussion of values and the user’s context; contracts with the user on goals informed by values and context; integrates information from stakeholders, diagnostic profiles or talent reviews; keeps an eye on the arc of the coaching program; adapts to user goals that emerge; debriefs user experiments completed between sessions; senses when to introduce specific exercises or refer the user to another professional; and quickly processes current and previous data about the user to decide, based on the user’s goals, the next best move in the conversation.

However, it is possible these more extensive coaching programs will continue as hybrid experiences for more senior-level clients, with a human as the primary coach plus a suite of digital tools that enrich the coaching. Some coaching-related steps may be digitised and still fit within a traditional human-to-human coaching program. For example, using techniques common in online interviewing such as adaptive questioning, digital coaching may get a partial start in a hybrid human/digital program. Steps such as pre- or post-session exercises in coaching or leadership development programs, action planning following 360 feedback, or initial intake of coaching requests are also potential candidates for digitisation.

The emergence of digital coaching tools isn’t an either/or, human or machine, moment. It’s neither a moment for the hyperbolic effervescence of “Silly Valley” (Unlimited, 2017) nor apocalyptic doom mongering about robots stealing jobs or taking work from human coaches.

Instead, it is a parallel track to the changes already underway in the optimal mix of humans and machines in the workplace and in our personal lives. Human coaches are already working with their executive-level clients on the challenges of leading their organisations through these changes. If designed in collaboration with today’s human coaches, digital coaching offers a resource at scale to “the many”, the thousands affected by these changes. User acceptance of digital coaches is just one test this new working alliance must pass. Its ultimate test is how well it supports these thousands in adapting to the new mix of humans and machines at work and in their lives.

References

Association of Professional Executive Coaches and Supervisors. (2018). Ethical Guidelines. Retrieved from URL www.apecs.org/ethical-guidelines

Belk, Russell W. (October 2013). Extended Self in a Digital World. Journal of Consumer Research. Vol. 40, No. 3, pp. 477-500. The University of Chicago Press. Retrieved from URL www.jstor.org/stable/10.1086/671052

Braddick, Carol. (31 August 2017). An Artificial Reality. Coaching at Work. Retrieved from URL www.coaching-at-work.com/2017/08/31/an-artificial-reality/

Braddick, Carol. (26 June 2019). The Tech Wave in Coaching. Retrieved from URL https://medium.com/@cabraddick/the-tech-wave-in-coaching-2efl3899fe4b

Cassell, J. (April 2018). Models and Implementations of Social Skills in Virtual Humans. Keynote Session, American Psychological Association Technology, Mind and Society Conference, Washington, D.C.

European Commission High-Level Expert Group on AI. (8 April 2019). Ethics Guidelines for Trustworthy AI. Retrieved from URL https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

Frischmann, Brett; Selinger, Evan. (2018). Re-Engineering Humanity. Cambridge: Cambridge University Press.

Guizzo, E. (2 April 2010). Who's Afraid of the Uncanny Valley? Retrieved from URL https://spectrum.ieee.org/automaton/robotics/humanoids/040210-who-is-afraid-of-the-uncanny-valley

Ikeda, Takashi; Hirata, Masayuki; Kasaki, Masashi; Alimardani, Maryam; Matsushita, Kojiro; Yamamoto, Tomoyuki; Nishio, Shuichi; Ishiguro, Hiroshi. (December 2017). Subthalamic Nucleus Detects Unnatural Android Movement. Scientific Reports. Vol. 7, No. 17851. Retrieved from URL www.nature.com/articles/s41598-017-17849-2

Lay, S. (13 November 2015). Uncanny Valley: Why We Find Human-like Robots and Dolls So Creepy. The Guardian. Retrieved from URL www.theguardian.com/commentisfree/2015/nov/13/robots-human-uncanny-valley

Luger, Ewa; Sellen, Abigail. (7-12 May 2016). Like Having a Really Bad PA: The Gulf Between User Expectation and Experience of Conversational Agents. In Proceedings of CHI 2016. San Jose, CA. Retrieved from URL www.microsoft.com/en-us/research/publication/like-having-a-really-bad-pa-the-gulf-between-user-expectation-and-experience-of-conversational-agents/

Marsella, Stacy; Gratch, Jonathan. (December 2014). Computationally Modelling Human Emotion. Communications of the ACM. Vol. 57, No. 12, pp. 56-67. Retrieved from URL https://cacm.acm.org/magazines/2014/12/180787-computationally-modeling-human-emotion/abstract

Melumad, Shiri. (2 November 2017). The Smartphone as Security Blanket: What It Means for Marketers. Knowledge at Wharton. Retrieved from URL http://knowledge.wharton.upenn.edu/article/the-smartphone-as-security-blanket-what-it-means-for-marketers/

Mori, M. (12 June 2012). The Uncanny Valley (MacDorman and Kageki, Trans.). Retrieved from URL https://spectrum.ieee.org/automaton/robotics/humanoids/the-uncanny-valley (Original work published 1970).

Morris, Margaret. (May-June 2012). Motivating Change with Mobile: Seven Guidelines. Interactions. Vol. 19, No. 3, pp. 26-31. Retrieved from URL www.intel.com/content/dam/www/public/us/en/documents/articles/margie-morris-motivating-change-with-mobile.pdf

Schieber, J. (19 March 2018). Google Alums Launch Maslo, a Digital Companion to Mediate Technology's Uncanny Valley. Retrieved from URL https://techcrunch.com/2018/03/19/google-alums-launch-maslo-a-digital-companion-to-mediate-technologys-uncanny-valley/

Stein, J.-P.; Ohler, P. (2017). Venturing into the Uncanny Valley of Mind - The Influence of Mind Attribution on the Acceptance of Human-like Characters in a Virtual Reality Setting. Cognition. Vol. 160, pp. 43-50. Retrieved from URL www.jpstein.de/portfolio/content_psy/publications/2017_stein_ohler_cognition.pdf

Turkle, Sherry. (2015). Reclaiming Conversation: The Art of Talk in a Digital Age. New York: Penguin Books.

Unlimited, Cecilia. (7 November 2017). Innovation Conversation #12: Margaret Heffernan. Retrieved from URL https://medium.com/innovation-conversations/innovation-conversation-12-margaret-heffernan-entrepreneur-ceo-writer-and-keynote-speaker-7fda5b2df6a4
