
Grand Challenges in Sociotechnical Cybersecurity

Take a step back, and ask yourself: What are the largest societal problems that relate to cybersecurity today?

Depending on your background, you may come up with very different answers. Computer security researchers may offer technical answers relating to formally verifiable code, homomorphic encryption, or the structure of the internet. Tech-savvy policy makers may think about the “going dark” problem: the tension between law enforcement wanting a “backdoor” into encryption algorithms and cryptographers’ insistence that weakening encryption for anyone weakens encryption for everyone. Social psychologists may think about how to better structure organizations so that security is a more valued part of the executive decision-making process in corporations and government.

These are all very different problems. Each is hard and important. So, what happens if you pull together a group of researchers in the computer and social sciences, pair them with policy makers, and ask them all to agree upon a few “grand” challenges in sociotechnical cybersecurity?

I was fortunate enough to participate in a workshop with exactly that premise: the Computing Research Association’s “Sociotechnical Cybersecurity Grand Challenge” workshop in 2017. More specifically, this was a “planning workshop” meant to produce a set of topics to discuss, as well as an agenda, for a later workshop whose goal will be to develop four grand challenges for sociotechnical cybersecurity. I thought I’d recap some of what we discussed, though I’d like to add one large disclaimer: this post is very much filtered through my own experience at the workshop, and I was only exposed to a small subset of all of the interesting conversations that were happening.

The workshop took place at the University of Maryland, College Park over two days. Most people in attendance were well-established faculty in the computer and social sciences from a variety of universities all over the country. Also in attendance were government and industry researchers in key roles at their respective organizations.

We started with a brief discussion of what makes a grand challenge. The definition we eventually arrived at: a challenge that is important and will require multiple years of significant, non-incremental, cross-disciplinary work to address. Equipped with this basic understanding, we spent the remainder of our time uncovering key challenges in cybersecurity as each of us understood them and synthesizing these varied thoughts into a set of problem areas within which a grand challenge might lurk.

We began this process with a series of panels organized around white papers that had been solicited a few months earlier. There were three panels: one on cybercrime, another on metrics and measures, and a third on individuals and norms.

The cybercrime panel focused on our (lack of) understanding of cybercrime. What even is a cybercrime? Is cyberbullying a cybercrime? We currently have no comprehensive typology that delineates the boundaries between cybercrime, physical crime, and nasty use-cases of technology that are not specifically “crimes”. This matters: without a clear definition of cybercrime, we cannot collect clear metrics on whether the problem is getting better or worse. Likewise, under what jurisdiction does cybercrime fall? When one is robbed in the physical world, the next step is generally obvious: call 911. There is no equivalent for when one’s identity is stolen online.

The metrics and measures panel discussed the need for measurable constructs that help us answer seemingly simple questions: What is “good” cybersecurity and what is “bad” cybersecurity? Are employees complying with organizational cybersecurity policies? What measures matter in diagnosing suspicious cybersecurity activity? What behaviors do cybercriminals and “hackers” partake in, and how are those behaviors changing over time? In many ways, the questions asked in this panel are the most fundamental: without a clear measure of security, how can we know what to do next?

Finally, the individuals and norms panel (of which I was a part) discussed our general lack of understanding of the social and behavioral components of cybersecurity. Security evolved in a tradition of military and high-stakes corporate use. Accordingly, security protocols and systems have been developed assuming: (1) that people always act optimally in the interest of security; and (2) that individuals make their security decisions in a vacuum, unaffected by the behaviors of others (likely because everyone is assumed to act optimally anyway). Of course, both of these assumptions are untrue, especially once we consider that it’s not just the military and corporations using computing systems anymore: it’s everyone.

Drawing on the panelists’ original work as well as the broader discussion with the larger group, we distilled four broad themes, and breakout groups formed around each to identify areas within which a grand challenge could be hiding. We continued these discussions for the rest of the workshop (two half-day sessions) to finally arrive at a number of problem areas that could constitute “grand challenges” in sociotechnical cybersecurity. I can’t remember all of the problem areas, but some that were discussed include (paraphrasing, and in no particular order beyond the order in which they came to mind):

  • How can we make security seamless, so that it just “fits in” with our lives? An example is single sign-on, which drastically reduces the number of logins we need to perform. How can we replicate that more broadly without, in turn, reducing security? Of course, the challenge here is that security is, itself, a seam: it sits at the interface between humans and technology.
  • How can we create better, adaptable models of adversary behavior through collaborations with social scientists? As computer scientists, we like to think in terms of formal threat models so that we can partition the problem space and create solutions with guarantees. But real-world attackers are adaptable and clever, break many of the assumptions we make, and are rarely accurately described by formal threat models. Could better models of human behavior help us build systems that are more resilient to attackers?
  • What can we do about the “going dark” problem? Loosely, this is the problem of law enforcement wanting privileged access to encrypted data when given permission through lawful processes (e.g., search warrants). In the physical world, we have all agreed upon a lawful process by which authorities may search and seize evidence to arrest bad actors. In the virtual world, however, encryption can undercut the established expectations of search and seizure to which we have grown accustomed. Cryptographers, of course, argue that “privileged” access is the same as designing for weakness, and weaknesses are universal: any malicious party will likely be able to exploit a backdoor.
  • How can we create “cybersecurity hygiene” habits that increase people’s awareness, motivation, and knowledge of cybersecurity threats and of methods to counteract those threats? We brush our teeth and buckle our seatbelts thanks to years of national awareness campaigns and doctors’ instruction. Can we replicate that for good cybersecurity practices? If so, how, given that the landscape of cyberthreats changes so rapidly?
  • How can we incentivize good security at the organizational level? Currently, C-suite executives have little incentive to prioritize security over new features, which introduce code bloat and, with it, more security vulnerabilities. Chief security officers are also often organizationally embedded in odd ways that make it difficult for them to enact meaningful changes in product decisions. Should we view this problem as needing more carrots (rewards for good security) or more sticks (regulations and fines for bad security)?
  • What better measures and metrics can we track to answer simple questions about cybercrime like “Are cybercrimes getting more or less frequent?” and “Are we combating cybercrime better this year than last year?” If and when we have these better measures, how can we aggregate them at the national scale? If we were to create a National Cyber Crime Reporting Bureau, what sort of data would it need to collect, and how can we incentivize the collection of such data from private institutions that may not want to share details of the breaches they have experienced?
  • How can we create better systems for data stewardship? Large companies make unilateral decisions about personal data collection, retention, and processing, and individuals have very little agency to act on their privacy and security concerns with respect to these practices. How can we create sociotechnical systems that allow more influential entities to act as stewards for consumer privacy protections?
  • How can we create security tools and systems that are more aware of, and responsive to, human social behavior? Currently, security systems are designed with little understanding of social norms: for example, is it really appropriate for family members to each have their own secret password to access their accounts on a shared Xbox? We also know from more general technology adoption models that observability helps diffuse technology, yet security technology is not observable at all.
  • How can we better inform users of the consequences and outcomes of their security (in)action? Currently, the benefits of good security behaviors are abstract while the costs are concrete. For example, enabling two-factor authentication immediately and forevermore adds tens of seconds to logging in to your e-mail account, but its benefits only possibly come to fruition sometime in the future (if and when an attacker tries to access one’s account). Moreover, if security works well, nothing happens. Accordingly, motivating non-experts to behave securely is often difficult. How can we more clearly connect security (in)actions to their consequences?

There were probably many other interesting questions discussed: I had just one cross-sectional view of the entire workshop, as I was siloed in panels and breakout groups that restricted my view. My understanding of what other groups spoke about is based on the final reports given toward the end of the workshop, though I’m sure those reports did not fully capture the breadth of interesting ideas discussed. Importantly, these questions are not simply food for thought; they’re calls to action for the researchers, designers, and practitioners of our community. What do you think are the biggest security and privacy problems facing our society today?


Thanks for reading! If you think you or your company could benefit from my expertise, I’d be remiss if I didn’t alert you to the fact that I am an independent consultant and accepting new clients. My expertise spans UX, human-centered cybersecurity and privacy, and data science.



This post is licensed under CC BY 4.0 by the author.