
What Inhibits Good Cybersecurity and Privacy Behaviors?

I direct the SPUD (Security, Privacy, Usability and Design) Lab at Georgia Tech — and not just because I like potatoes. Our rallying call: How can we design systems that encourage better cybersecurity and privacy behaviors? Answering this question is important [1], and increasingly so as technology continues to bridge the gap between the cyber and physical.

The first step in designing systems that encourage better cybersecurity behaviors is to understand the existing barriers: What stops people from practicing good cybersecurity behaviors today?

The field of usable privacy and security has explored this question for years. From a close reading of much of this prior work, my colleagues and I identified three interrelated, high-level barriers that may explain why advice about security and privacy is often ignored and why many security and privacy tools go largely unused: awareness, knowledge, and motivation.

[Figure: the security sensitivity stack]

In what follows, I provide some evidence for each of these barriers based on prior work.

Awareness

First, many people may not be aware of the security threats that are relevant to the data and devices they’d like to protect. They may also be unaware of the tools available to protect themselves against those threats.

An early study found that insufficient awareness of security issues led users to construct their own models of security threats, models that were often incorrect. Another study found that many people, even “experts”, lack awareness of basic security principles, leading to mistakes such as using a social security number as a password. A third study found that many people are unaware of the points at which their security and/or privacy may be compromised as their data travels through the internet.

People who are unaware of a threat or the available tools to protect themselves against that threat cannot take measures to avoid the threat and defend themselves.

A good amount of extant work in usable privacy and security recognizes the awareness problem, but the solutions I know of primarily center on warnings and notifications.

Knowledge

The second inhibitory barrier to good security behaviors is knowledge: people may lack the ability to properly practice good security behaviors because they do not know when, why, and how to act.

Security tools are often too complex for even the aware and motivated to operate, suggesting that many people lack the specialized knowledge to actually utilize them. If I gave you my public key, would you immediately know how to use it to send me an encrypted message?
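To make that gulf concrete, here is a minimal sketch of what “just use my public key” actually entails, written in Python against the third-party cryptography library. The key file name and message are hypothetical stand-ins, and I assume the key is RSA:

```python
# A minimal sketch of encrypting a short message with someone's RSA public
# key, using the third-party `cryptography` library (pip install cryptography).
# "alice_public_key.pem" is a hypothetical stand-in for a key file the
# recipient actually shared with you.
import base64

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

# Step 1: load the recipient's public key from a PEM-encoded file.
with open("alice_public_key.pem", "rb") as f:
    public_key = serialization.load_pem_public_key(f.read())

# Step 2: encrypt with OAEP padding (the recommended scheme). Note that RSA
# can only encrypt short payloads directly; real tools encrypt a symmetric
# key this way and use it to encrypt the actual message.
ciphertext = public_key.encrypt(
    b"Meet me at noon.",
    padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(),
        label=None,
    ),
)

# Step 3: the raw bytes are not printable, so encode them before sending.
print(base64.b64encode(ciphertext).decode())
```

Even this happy path assumes you already know what PEM, OAEP, and base64 are, which is exactly the kind of specialized knowledge most people do not have.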

Applying some basic HCI terminology here, there is a wide gulf of execution for most security features for most people. One common piece of advice for avoiding phishing scams is to make sure that the webpage URL matches your expectations — but many people cannot distinguish legitimate from fraudulent URLs, nor forged from legitimate email headers. Likewise, security features in widely used Microsoft products such as Windows XP, Internet Explorer, Outlook Express, and Word are difficult for lay users to navigate. More generally, many people hold “folk” models of computer security that are often misguided, and they use these incorrect models to justify ignoring security advice.
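A short sketch in Python, with made-up URLs, illustrates why the URL advice is hard to follow: only the hostname determines who you are actually talking to, and a fraudulent URL can embed a trusted brand name anywhere else.

```python
# Why "check the URL" is harder than it sounds. The URLs below are made-up
# examples for illustration only.
from urllib.parse import urlsplit

urls = [
    "https://www.paypal.com/signin",                          # legitimate-looking
    "https://www.paypal.com.account-check.example/signin",    # lookalike subdomain
    "https://example.com/paypal.com/signin",                  # brand name in the path
]

for url in urls:
    # Only the hostname identifies the server; everything after the first
    # "/" is controlled by whoever owns that hostname.
    print(f"{urlsplit(url).hostname:<45} <- {url}")
```

All three contain the string "paypal.com", but only the first is actually served by paypal.com; spotting the difference requires reading hostnames right to left, a skill most people were never taught.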

Knowledge is perhaps the most widely acknowledged and addressed inhibitory barrier to good security behaviors. Indeed, an argument can be made that the fundamental goal of usable security, to date, has been to lower the knowledge barrier to practicing good security behaviors. One of the seminal papers in usable privacy and security, in fact, highlights just how difficult it used to be for a regular person to send encrypted e-mails. Another argues that it is silly to blame only people for this lack of knowledge — the designers of these unusable interfaces should share the blame.

Motivation

Finally, even people who are aware of security and privacy threats and able to use preventive tools to combat those threats often lack the motivation to utilize security features to protect themselves.

This lack of motivation is not entirely surprising: stringent security measures are often antagonistic to the end user’s goal at any given moment. For example, if you want to access your Facebook account, a complex password or two-factor authentication might delay you for what feels like an intolerable amount of time.

Negative experiences with, or impressions of, security behaviors can also dampen motivation. In a survey of over 200 security experts and non-experts, one study found that the overlap between what non-experts do and what experts recommend is very thin. Indeed, while experts value keeping software up to date, non-experts reported being skeptical of the effectiveness of updates or avoided them because of prior negative experiences.

Other work has found that people can have a defeatist attitude towards cybersecurity, believing that if an attacker wanted access to their data, they would get it irrespective of any countermeasures taken.

Low motivation may also be symptomatic of a deeper root cause: many security threats remain abstract to most individuals. Bob may know, conceptually, that there are security risks to reusing the same simple password across accounts, but he may not believe that he, himself, is in danger of experiencing a breach. Some have argued that this perspective may even be economically rational: the expected cost, in monetized time, of following security advice can be higher than the expected loss one would suffer if one’s account actually were compromised.
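As a back-of-the-envelope illustration of that argument (the numbers below are entirely made up; real values vary from person to person), compare the annualized cost of following a piece of advice with the expected annual loss it prevents:

```python
# Back-of-the-envelope version of the "rational rejection" argument, with
# entirely made-up numbers: the point is the structure of the comparison,
# not the specific values.
minutes_per_day = 3          # assumed daily time spent on password hygiene
hourly_wage = 25.0           # assumed value of the user's time, in $/hour
days_per_year = 365

annual_cost_of_advice = minutes_per_day / 60 * hourly_wage * days_per_year

p_breach = 0.01              # assumed yearly probability of a compromise
loss_if_breached = 200.0     # assumed cost of recovering from one, in $

expected_annual_loss = p_breach * loss_if_breached

print(f"Cost of following the advice: ${annual_cost_of_advice:,.2f}/year")
print(f"Expected loss if ignored:     ${expected_annual_loss:,.2f}/year")
# With these numbers, the advice costs ~$456/year to avoid an expected
# ~$2/year loss: exactly the asymmetry the argument points to.
```

Plug in your own estimates and the conclusion can flip, but for many people and many pieces of advice, the time cost plausibly dominates.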

Finally, the benefits of security features are often invisible: users rarely register the absence of a breach that would have occurred without a security or privacy tool. In all, it is unsurprising that many users lack the motivation to explicitly use security tools: doing so means incurring a frustrating complication to everyday interactions in order to prevent an unlikely threat, with little way to know whether the tool was actually effective. More generally, people often reject security and privacy tools when they expect or experience them to be costly.

Conclusion

In sum, prior work in usable security suggests that there are at least three large obstacles inhibiting the widespread use of security tools: awareness of security threats and tools, knowledge of how to use security tools, and the motivation to use them.

I refer to these barriers as the security sensitivity stack for ease of discussion, as together they capture how likely a user is to seek information about and use security tools. In future posts, I hope to write more about the GT SPUD Lab’s approach to addressing these barriers. Spoiler alert: making security more social, more fun, more concrete, and more contextually aware seems to work well :)

Footnotes

[1] If you don’t believe me, perhaps I’ll explain why in a separate post, but I’m going to assume you’re reading this post because you’re already convinced.


Thanks for reading! If you think you or your company could benefit from my expertise, I’d be remiss if I didn’t alert you to the fact that I am an independent consultant and accepting new clients. My expertise spans UX, human-centered cybersecurity and privacy, and data science.

If you read this and thought: “whoah, definitely want to be spammed by that guy”, there are three ways to do it:

You can also do none of these things, and we will all be fine.


This post is licensed under CC BY 4.0 by the author.