As military personnel, and people more broadly, become increasingly dependent on AI systems, there is a real danger that, should those systems malfunction, their users will not be able to adapt in time to prevent disaster. A malfunction may overwhelm the human "user" as they struggle to understand the situation and environment they find themselves in, stripped of the very AI-enabled tools and systems they have come to rely on for that purpose.
DARPA cites the example of deadly aircraft crashes caused by pilots failing to assess their situation and respond quickly and effectively after automated systems failed. Thus, DARPA believes it is necessary to develop "human-machine interfaces (HMI) that allow humans to maintain situational awareness of highly automated and autonomous systems so that they can adapt in the face of unforeseen circumstances".
According to Bart Russell from DARPA’s Defense Sciences Office:
“As highly-automated machines and AI-enabled systems have become more and more complicated, the trend in HMI development has been to reduce cognitive workload on humans as much as possible. Unfortunately, the easiest way to do this is by limiting information transfer. Reducing workload is important, because an overloaded person cannot make good decisions. But limiting information erodes situational awareness, making it difficult for human operators to know how to adapt when the AI doesn’t function as designed. Current AI systems tend to be brittle – they don’t handle unexpected situations well – and warfare is defined by the unexpected. […]
We need HMIs that do a better job of exchanging information between the system and the human. […] It’s not about how fast you press a button, or the ergonomics of your cockpit, it’s about how well you perceive the information that’s coming to you and does that help you develop sufficient understanding of systems processes, status against the machine’s performance envelope, and the context in which it’s operating to still complete a mission despite off-nominal conditions.”
To address this, DARPA announced its new Enhancing Design for Graceful Extensibility (EDGE) program on 21 May. The program aims to tackle this issue in HMI development by helping to craft systems that allow for better situational awareness and, by extension, improve humans' ability to adapt to unexpected situations. This means making sure that the human tasked with interacting with an AI system understands its basic workings, knows when it is likely to fail, and is able to actively see "the system's status against its performance envelope (i.e., if it's in its 'comfort zone,' or near the edges of its speed, range, etc.)". DARPA has scheduled a proposers day for 1 June.