Tailoring Safer Systems

Reaching optimal Human Performance through effective System Design

Designing automation for complex socio-technical systems in a way that enables optimal human performance is a challenging endeavour. Especially in safety-critical environments, humans may need to adapt quickly to changing levels of demand, complexity and uncertainty in order to maintain performance, efficiency and safety of operations. Under these conditions, humans may benefit from automation. In most cases, automation is designed to take over low-value tasks, i.e. tasks that are simple and easy to automate. Designing automation that supports humans with cognitively demanding tasks such as problem solving and complex decision-making is more challenging, for several reasons. First, it requires an understanding of all high-level tasks and underlying (human) cognitive functions, of the extent to which these tasks are currently supported by automation, and of what resources humans need to execute them. Second, automating tasks requires rethinking the distribution of (cognitive) functions between humans and automation at a higher level: what organizational structures are required, and how cognition is shared between humans and automation (i.e. how humans are able to work effectively with automation). Third, it needs to be understood how automation should be designed so that it can optimally support humans in managing complex tasks, in particular when decision-making or problem solving is required under rapidly changing demands, high complexity, and uncertainty. Creating automation to support humans therefore requires a deep understanding of the strategies humans adopt when engaging in complex problem solving and decision-making, and of what they need as automation support. This article provides an overview of how to tackle these challenges.

Step 1: Understanding tasks and underlying (cognitive) functions of a system

We have to consider that, in most cases, we do not develop systems from scratch. Rather, we build upon existing systems to improve safety, efficiency, or other performance dimensions. This means we have to understand which tasks and underlying (cognitive) functions currently exist and which functions are already supported by automation, in order to identify possibilities to further automate complete tasks or underlying (cognitive) functions, or to improve existing automated functions.

In order to identify what automation optimally supports the human in complex tasks (ensuring human-centric decision-making), we first need to identify all tasks and corresponding (cognitive) functions. We also need to identify the current allocation of tasks (and underlying cognitive functions) between humans and automation. Some tasks may be allocated to humans with various levels of automation support, and some may be allocated fully to automation. Tasks may also be allocated dynamically to humans or automation. It is necessary to understand how changing the allocation of tasks may affect the overall system in terms of interdependencies between humans and automation. A Cognitive Function Analysis (CFA) (Boy, 1998) is an important instrument for Human Factors Engineers and Designers (e.g. UX Engineers) to build an understanding of all tasks and underlying functions of a system, and of the implications of changing the allocation of functions between humans and automation. When doing a CFA, it is important to use a wide range of techniques, including interviews, observations and documentation study. Interviews and observations are essential because, in most cases, humans will have evolved to use the system differently than intended, which often is not documented.
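As an illustration, the output of such a task and function inventory can be captured in a simple data model that records the current allocation of each (cognitive) function. The sketch below is a hypothetical Python example; the task, function names and `Allocation` categories are illustrative assumptions, not part of the CFA method itself:

```python
from dataclasses import dataclass, field
from enum import Enum

class Allocation(Enum):
    HUMAN = "human"            # performed by the human operator
    AUTOMATION = "automation"  # fully automated
    SHARED = "shared"          # human performs the function with automation support
    DYNAMIC = "dynamic"        # allocated at runtime depending on conditions

@dataclass
class CognitiveFunction:
    name: str
    allocation: Allocation
    resources: list[str] = field(default_factory=list)  # information/tools needed to execute it

@dataclass
class Task:
    name: str
    functions: list[CognitiveFunction]

# Illustrative inventory for one high-level task (invented example)
task = Task("Resolve traffic conflict", [
    CognitiveFunction("Detect conflict", Allocation.SHARED, ["radar track data", "conflict alert"]),
    CognitiveFunction("Assess severity", Allocation.HUMAN, ["trajectory prediction"]),
    CognitiveFunction("Select resolution", Allocation.HUMAN, ["aircraft performance data"]),
])

def functions_by_allocation(task: Task, allocation: Allocation) -> list[str]:
    """List the functions of a task under a given allocation, e.g. to spot automation candidates."""
    return [f.name for f in task.functions if f.allocation == allocation]

print(functions_by_allocation(task, Allocation.HUMAN))
```

Such a model makes it easy to query which functions are still fully allocated to humans and might be candidates for (further) automation support.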

Step 2: Understanding the impact of function allocation on system stability

Changing the allocation of functions between humans and automation may affect system stability (Straussberger et al., 2008). When automating functions currently allocated to humans, it therefore needs to be assessed what impact redesigning human and machine cognitive functions through increased automation will have on the overall stability of the socio-technical system. This ultimately determines the system's resilience in responding to all operational demands. Stability exists at various layers: it is the result of organizational structures linked to procedures and technical systems, and it reflects a system's ability to recover after disturbance. The stability of socio-technical systems is defined through two processes (Straussberger et al., 2008):

    1. Global socio-cognitive stability
    2. Local socio-cognitive stability

Global socio-cognitive stability is concerned with the appropriateness of functions allocated to humans or automation, and with the pace of information flows and related coordination, achieved by designing appropriate structures linked to:

    • Authority
    • Responsibility
    • Controllability
    • Ability

Issues may arise if these structures have not been adequately designed. For example, humans may have formal responsibility but lack the controllability or ability to execute certain tasks or high-level functions. Or functions may become fully allocated to automation while humans remain formally responsible for them, without the control or ability to intervene in their execution. Issues may also arise when functions are dynamically allocated to humans or automation, or delegated to the system by humans, and the conditions that must be met for delegation are not transparent to humans or are simply not defined.
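A minimal sketch of how such mismatches could be flagged systematically, assuming a simplified model in which authority, responsibility, controllability and ability are recorded per function (all names and rules here are illustrative, not a defined method):

```python
from dataclasses import dataclass

@dataclass
class FunctionStructure:
    """Structures attached to one function in the allocation (illustrative model)."""
    name: str
    responsibility: bool   # the human is formally responsible
    authority: bool        # the human may decide or intervene
    controllability: bool  # the human can actually influence execution
    ability: bool          # the human has the skills/resources to execute

def stability_issues(fs: FunctionStructure) -> list[str]:
    """Flag mismatches of the kind described above, e.g. responsibility without control."""
    issues = []
    if fs.responsibility and not fs.controllability:
        issues.append(f"{fs.name}: responsible but no controllability")
    if fs.responsibility and not fs.ability:
        issues.append(f"{fs.name}: responsible but lacks ability")
    if fs.authority and not fs.controllability:
        issues.append(f"{fs.name}: authority without means of control")
    return issues

# An operator is responsible and has authority, but cannot actually intervene
print(stability_issues(FunctionStructure("Conflict resolution", True, True, False, True)))
```

Running such checks over the full function inventory can reveal where formal responsibility has drifted apart from actual control, one of the global stability issues described above.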

Local socio-cognitive stability refers to humans' workload, situation awareness, and ability to make appropriate decisions and take action. It relies mainly on humans' ability to understand automation and to build a mental model of the system. Automated systems need to be designed such that humans are able to predict (anticipate) the responses of automated systems to human input, receive adequate feedback, and regain authority if needed (Boy, 1998). The transparency of automated functions also needs to be considered, so that humans can develop a valid mental model of the system, its functions, and its behaviour.

Ensuring both global and local socio-cognitive stability creates a common frame of reference, supporting joint situation awareness between humans and automated systems.

Step 3: Design automation to support expert decision-making

Designing automation to support human macro-cognitive functions starts with understanding how human operators respond to high levels of complexity and uncertainty. Humans may need to adapt to changing demands, which requires anticipating, extrapolating into the future, and creating an assessment based on experience. They may also need to plan ahead and build capacity to manage situations in the near future, and to engage in strategies for dealing with future demands and unexpected situations. Such strategies may be aimed at either reducing or managing complexity and uncertainty. Examples of complexity and uncertainty management strategies include (Corver & Grote, 2016):

    • Anticipatory thinking (extrapolating the current situation into the future based on past experience and observed deviations)
    • Adaptive planning (i.e. creating back-up plans)
    • Weighing pros and cons of different options (comparing alternative solutions)
    • Forestalling (improving readiness, e.g. to manage resources for future demands)
    • Reducing uncertainty (e.g. increasing the accuracy and reliability of data through the integration and validation of information from different sources)
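As a toy illustration of automation support for anticipatory thinking, the sketch below linearly extrapolates a monitored parameter and alerts the operator when the anticipated value leaves an expected band. All function names, parameters and thresholds are assumptions for illustration, not taken from the cited studies:

```python
def extrapolate(history: list[float], steps_ahead: int) -> float:
    """Project the latest trend forward (simple linear extrapolation over the last two samples)."""
    if len(history) < 2:
        return history[-1]
    trend = history[-1] - history[-2]
    return history[-1] + trend * steps_ahead

def deviation_alert(history: list[float], expected: float,
                    tolerance: float, steps_ahead: int = 3) -> bool:
    """Alert the operator when the anticipated value leaves the expected band."""
    predicted = extrapolate(history, steps_ahead)
    return abs(predicted - expected) > tolerance

# e.g. a separation distance trending downwards: the trend predicts 5.0, outside the band
print(deviation_alert([10.0, 9.0, 8.0], expected=10.0, tolerance=3.0))
```

A real support tool would of course use far richer trajectory models, but the principle is the same: extrapolate the current situation into the future and cue the operator about deviations early enough to act.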

Understanding these strategies is the starting point for designing useful automation that supports human operators' decision-making and task execution in highly dynamic situations with high levels of complexity. The following questions should be asked: What information is required, from which sources, and with what accuracy? What cues do human operators need to be adequately alerted about deviations, so that they can respond quickly and appropriately? What do humans consider when analyzing a situation and engaging in complex decision-making? Automated support tools can be designed to help humans filter and cluster information where it is needed, extrapolate into the future, be alerted when the situation deviates, and make complex decisions based on operational trade-offs (Corver & Grote, 2016). Finally, an understanding of tasks and information needs can support the design of automation that helps humans cluster, integrate and filter information from different sources for improved and quicker decision-making.

In summary, identifying human macro-cognitive strategies allows us to understand how automation can support human needs, and ultimately to increase the overall performance of a system.

———————————————————————————————————————————————————

References

Corver, S.C. & Grote, G. (2016). Uncertainty management in en route air traffic control: a field study exploring controller strategies and requirements for automation. Cognition, Technology & Work.

Boy, G. (1998). Cognitive function analysis. Westport, CT: Ablex, Greenwood Publishing Group.

Straussberger, S., et al. (2008). PAUSA for the future – A synthesis of Phase 1. June 2008. Final Report.

Sifra Corver

Human Factors Expert
