Bad UI or Bad UX? The Real Story Behind the Hawaii Missile False Alarm.

The Hawaii missile false alarm was blamed on a moron operator and bad UI design. But what’s the real story?

One Saturday morning in January 2018, every phone in the state of Hawaii beeped and buzzed and displayed this official emergency message: “BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL.” A similar Emergency Alert System warning also interrupted television programming.

There was statewide confusion, alarm, and panic. Families ducked into their bathtubs. Hotel guests were ushered into basements. 911 dispatch systems were overwhelmed. Hundreds of thousands of people wondered if they were going to die that morning.

It turned out to be a false alarm.

Within fifteen minutes, word had begun to get out from various official sources that there was no threat. Thirty-eight minutes after the initial alert was issued, an official false alarm notice went out to everyone’s cellphones.

The Big Question

Once the panic subsided and everyone caught their breath, the big question was: how did this happen?

The Governor of Hawaii issued a statement reporting that the false alarm was not the result of hacking or other malicious intent. The story was simple: during a routine drill at the Hawaii Emergency Management Agency (HI-EMA), the false alarm was caused by “human error” when an employee “hit the wrong button.”

The dots were connected, and a narrative immediately formed.

Bad UI as the Culprit

In the days that followed the missile scare, the blame was largely pinned on bad user interface design. Something like a cottage industry of content popped up on the subject of HI-EMA’s bad UI: from snide comments and gifs to vague think pieces and thoughtful analyses of the importance of usability.

This in itself was a reminder of how the zeitgeist has changed over the past few decades: the public at large now seems to recognize that “human error” tends to mean “bad interface”. Humans make mistakes, but the computer applications we use at home and at work—even when we have been “trained” on how to use them—often invite our mistakes or exacerbate them.

Of course, there were still people who declared the guy who issued the false alert to be a “moron”—it is the internet, after all. And there was plenty of mocking and scorn rightly and wrongly directed at officials who floundered in the 38 minutes after the alert went out.

On the whole, however, the narrative around the cause of the false alarm was bad UI. Much was written about it, fueled in part by “screenshots” of the UI in question that showed up days later. Confusingly, each of the two UI snips released publicly was quickly determined not to be the actual interface; however, the version that depicted a dropdown menu was declared to be “similar” to the real thing. (The actual interface would not be distributed for security reasons.)

Despite the confusion, the pretty-close-to-actual UI samples depicted an interface that was easy to criticize, and it was rightly evaluated as problematic and error-prone. The event was a teachable moment about bad UI.

Then we learned that the missile alert wasn’t triggered by accident.

It Wasn’t the Bad UI After All

By the end of January 2018, state and federal investigations revealed the reason why an operator at the Hawaii Emergency Management Agency triggered the infamous alarm: he thought there was a real threat.

It wasn’t a mistake due to an error-prone UI; it was an intentional action made by a confused man who thought he was warning his fellow citizens of impending danger and doing his job as he had been trained.

But, as we know, there was no actual threat; it was a drill.

So, what the heck?

A Different Kind of Human Error

Initially, the world assumed—and not without reason—that the missile alert mishap was caused by one kind of user error, where the user’s action doesn’t match their intention (what usability folks call a “slip”). But—surprise!—the actual cause was a different kind of user error, where the user’s intention itself is incorrect (a “mistake”).

The employee’s intention was set incorrectly by something that happened just a couple of minutes before he sent out the alert.

This is where our story of human error winds up containing more old-fashioned human error than originally thought:

The guy in question was sitting in a room full of people who all heard the same test exercise recording, played to them over a speakerphone. Everyone else in the room understood it to be a drill; this guy did not. (As investigators reported, it wasn’t the first time he’d confused exercises with reality.)

There were errors up the chain that probably helped cause this man’s confusion. While the speakerphone message correctly began with “EXERCISE, EXERCISE, EXERCISE,” the body of the message incorrectly included live missile alert language, including “THIS IS NOT A DRILL.”

Once this guy mistook the drill for a live threat (and didn’t talk to his coworkers about it), there was no process in place to stop his misconception from becoming the alert that went out to the masses. He went back to his workstation, intentionally selected the live alert option from the menu, and selected “Yes” at the confirmation prompt. Done.

It did not matter that the UI was stodgy, or that confirmation prompts aren’t the best safeguard against mistaken actions. The degree to which one particular UI screen “sucked” was not the issue that morning.

The user error wasn’t the kind the world had assumed it was. But either way, the systems in place—both people-powered systems and computer-driven systems—were not built to handle the outcome swiftly or gracefully.

What Can We Learn from All This?

Even though we were collectively wrong about the exact cause, many of the lessons we learned and conclusions we drew along the way are still applicable… except that we need to look deeper. Perhaps we should start associating “human error” with “bad system” instead of “bad interface”.

In the end, a general lesson to draw is that robust scenario design and task analysis are important when we create and improve our software and processes. We should look broadly at whole systems—not narrowly at one software application—and appreciate the complexity of all the human-human, human-computer, and computer-computer interactions in that system.

We should look for ways to minimize errors where we can, but we should always design with the assumption that errors will happen (of both kinds!), and then ensure that processes and functionality for dealing with those errors are an inherent part of the design.
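To make that last point concrete, here is a minimal sketch in Python of what designing for both kinds of error might look like in an alert-issuing workflow. This is not HI-EMA’s actual software, and every name in it is hypothetical: drills are always labeled as drills, a live alert requires sign-off from a second person, and a retraction path is built in from the start rather than improvised 38 minutes later.

# Hypothetical sketch only: not HI-EMA's real system; all names are invented.
# It illustrates a workflow that assumes both kinds of user error will happen.

from dataclasses import dataclass
from datetime import datetime
from enum import Enum
from typing import List, Optional


class AlertMode(Enum):
    DRILL = "drill"  # internal exercise, always labeled as such
    LIVE = "live"    # real public alert


@dataclass
class AlertRequest:
    mode: AlertMode
    message: str
    requested_by: str
    approved_by: Optional[str] = None
    sent_at: Optional[datetime] = None


class AlertSystem:
    def __init__(self) -> None:
        self.outbox: List[str] = []  # stands in for the public broadcast channel
        self.last_live_alert: Optional[AlertRequest] = None

    def request(self, mode: AlertMode, message: str, operator: str) -> AlertRequest:
        """Stage an alert. Nothing is broadcast at this step."""
        return AlertRequest(mode=mode, message=message, requested_by=operator)

    def approve_and_send(self, req: AlertRequest, approver: str) -> None:
        """Live alerts need a second, different person; drills go out labeled as drills."""
        if req.mode is AlertMode.LIVE:
            if approver == req.requested_by:
                raise PermissionError("A live alert must be approved by a second person.")
            req.approved_by = approver
            req.sent_at = datetime.now()
            self.last_live_alert = req
            self.outbox.append(req.message)
        else:
            # A drill message is prefixed so it can never read as a real warning.
            self.outbox.append("EXERCISE, EXERCISE, EXERCISE: " + req.message)

    def retract_last_live_alert(self, operator: str) -> None:
        """Error recovery is part of the design: the correction goes out over the
        same channel as the original alert, immediately."""
        if self.last_live_alert is None:
            raise RuntimeError("There is no live alert to retract.")
        self.outbox.append(
            "FALSE ALARM. The previous alert was issued in error and has been "
            "retracted by " + operator + ". There is no threat."
        )
        self.last_live_alert = None


if __name__ == "__main__":
    system = AlertSystem()
    req = system.request(AlertMode.LIVE, "BALLISTIC MISSILE THREAT INBOUND.", "operator_1")
    system.approve_and_send(req, approver="supervisor_1")  # second person required
    system.retract_last_live_alert("supervisor_1")         # retraction built in from day one
    print("\n".join(system.outbox))

The specifics here don’t matter; the point is that the second-person check targets the wrong-intention error, the drill labeling reduces the chance of confusion in the first place, and the retraction function assumes that some error will eventually get through anyway.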

