The use and creation of Artificial Intelligence (AI) have steadily increased over the past few decades. Many AIs are created to reduce labor costs, generate solutions faster than humans can, and perform tasks beyond human capabilities. AIs are often designed to work in human teams. Unfortunately, AIs that are not designed with human interaction in mind may fail: humans may be less likely to adopt the new technology, the technology may be used in ways inconsistent with its design, and other failures can arise from mismatches between humans' implicit expectations and the explicit design requirements. Therefore, the design of AIs should include requirements for interacting with humans.

We propose a multi-disciplinary model for AI-human teams. The key motivation for developing this model is the prospect of AI-enabled teams in which humans and AIs together achieve human-defined objectives. Perhaps AIs can free humans to focus on tasks humans excel at, alleviating some of the burden of mundane work. Perhaps AIs can do work that humans accomplish less efficiently or accurately, or cannot do at all, such as vigilance tasks and processing large amounts of data quickly and consistently (e.g., eliminating transposition errors).

For future developers and researchers to build AIs that work successfully in human teams, we propose a multilevel model. We expand the input-mediator (process)-output-input (IMOI) model (Ilgen, Hollenbeck, Johnson, & Jundt, 2005; Kozlowski & Ilgen, 2006) into an IPEOI model: an input-process-emergent state-output-input model. First, we describe the inputs to the team, such as individual skills, personality, and task constraints. Next, we describe team processes, which are dynamic interactions over time, such as communication and conflict (Kozlowski, 2015). We distinguish team processes, interactions that unfold over time, from emergent states (e.g., Marks et al., 2001; Kozlowski et al., 2016), or “constructs that characterize properties of the team that are typically dynamic in nature and vary as a function of team context, inputs, processes, and outcomes” (Marks et al., 2001). Emergence occurs when a higher-level phenomenon comes into being through interactions at a lower level (Cronin et al., 2011). Team outputs include performance and satisfaction; they can be assessed at the end of the team's life but also at points throughout it (e.g., planning the next search, finding the next victim, assessing the victim's state). We also include feedback loops, such that outputs become inputs later in the life of the team. Finally, our model incorporates task and mission factors, time factors, and a multilevel structure spanning individual humans, individual AIs, teams, and organizational and societal levels.
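To make the cycle concrete, the following is a minimal, illustrative Python sketch of the IPEOI structure. All names (TeamInputs, TeamProcesses, EmergentStates, TeamOutputs, run_episode, feedback) and the placeholder arithmetic are our hypothetical choices for exposition; the model itself is conceptual and prescribes no particular implementation.

```python
from dataclasses import dataclass

# Illustrative sketch of the IPEOI cycle. All names and numbers here are
# hypothetical; the published model prescribes no implementation.

@dataclass
class TeamInputs:
    """Inputs at multiple levels: individual humans, individual AIs,
    and task/mission constraints."""
    human_skills: dict[str, float]     # e.g., {"navigation": 0.8}
    ai_capabilities: dict[str, float]  # e.g., {"image_triage": 0.9}
    task_constraints: list[str]        # task and mission factors

@dataclass
class TeamProcesses:
    """Dynamic interactions unfolding over time (Kozlowski, 2015)."""
    communication_events: int = 0
    conflict_events: int = 0

@dataclass
class EmergentStates:
    """Team properties that emerge from lower-level interactions and vary
    with context, inputs, processes, and outcomes (Marks et al., 2001)."""
    cohesion: float = 0.5

@dataclass
class TeamOutputs:
    """Outcomes assessed during and at the end of the team's life."""
    performance: float = 0.0
    satisfaction: float = 0.0

def run_episode(inputs: TeamInputs) -> TeamOutputs:
    """One episode: inputs -> processes -> emergent states -> outputs.
    The arithmetic is a placeholder, not an empirical claim."""
    processes = TeamProcesses(communication_events=10, conflict_events=2)
    states = EmergentStates(
        cohesion=0.5 + 0.01 * (processes.communication_events
                               - processes.conflict_events)
    )
    performance = states.cohesion * (sum(inputs.human_skills.values())
                                     + sum(inputs.ai_capabilities.values()))
    return TeamOutputs(performance=performance, satisfaction=states.cohesion)

def feedback(inputs: TeamInputs, outputs: TeamOutputs) -> TeamInputs:
    """The final 'I' in IPEOI: outputs become inputs to the next episode,
    here by (hypothetically) nudging skill estimates upward."""
    updated = {k: min(1.0, v + 0.05 * outputs.performance)
               for k, v in inputs.human_skills.items()}
    return TeamInputs(updated, inputs.ai_capabilities, inputs.task_constraints)

if __name__ == "__main__":
    team = TeamInputs({"navigation": 0.8}, {"image_triage": 0.9},
                      ["time-limited search"])
    for episode in range(3):  # repeated episodes over the team's life
        result = run_episode(team)
        team = feedback(team, result)
        print(f"episode {episode}: performance={result.performance:.2f}")
```

Running the loop illustrates the model's cyclical character: each episode's outputs shift the next episode's inputs, which is the feedback link that distinguishes IPEOI (and IMOI before it) from a one-shot input-process-output view.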

Our model aims to inform the design of AI teammates that are human-centered within the context of a teaming system, with the hope of mitigating potential failures. Acknowledging human teaming requirements and designing AIs capable of understanding the human narrative will allow AIs built for human teams to work with people successfully. Training humans to work with their AI teammates will also reduce the likelihood that they use the technology for unintended purposes, and will help them maintain appropriate expectations of AI teamwork and taskwork capabilities.
