USABLE at the 2019 Symposium on Usable Privacy and Security (SOUPS)

September 19, 2019
SOUPS conference logo

The USABLE team facilitated a half-day workshop this past August at the 2019 Symposium on Usable Privacy and Security (SOUPS) in Santa Clara, California. The workshop, entitled “Designing for the Extremes of Risk,” was an interactive exploration of what it means to work with at-risk communities, particularly in the context of designing and developing privacy and security tools. The nearly 25 attendees represented a diverse range of professions, from academics and UX researchers to product teams and engineers.

The workshop began with a conversation about “high-risk” individuals and communities. After defining high-risk, the project team provided an overview of the USABLE project and an introduction to the UX Feedback Collection Guidebook (to be publicly released in October 2019). The half-day workshop concluded with four breakout sessions that dove deeper into relevant topics:

  • Working with High-Risk Communities: Barriers and Solutions,
  • Trends and Themes Across Demographics of High-Risk Users,
  • Building a Community of Practice, and
  • Usable Security for Developers: Integrating UX into the Process.

In addition to organizing and facilitating this half-day workshop, the project team attended an afternoon session organized by the Workshop on Inclusive Privacy and Security (WIPS). The WIPS team referenced USABLE personas during one of their activities, and one breakout group used the USABLE persona Marina, from Russia, as the template for their 3D storyboard (pictured below).

Marina storyboard

The next two days of the conference were focused on technical SOUPS sessions. For a quick summary of relevant papers and presentations, check out this blog post we co-authored with our partners in the usability space: https://simplysecure.org/blog/SOUPS-2019

Below we have highlighted a few main takeaways from our engagements at SOUPS this year.

  • From a product perspective, it can be difficult to identify at-risk communities, connect with them, and build the level of trust needed to collect accurate feedback. Using trusted intermediaries, such as digital security trainers, can provide tool teams with access to relevant feedback without jeopardizing the safety of at-risk communities.
  • Tool teams can consider offering incentives (such as small stipends) for individuals who participate in user research. If offering incentives, be sure to research local implications for end users. For example, will accepting a stipend impact any government support that low-income participants may be receiving?
  • International events and UX convenings are a good entry point for interested parties to meet at-risk users and begin to build trust. It is helpful for people outside the existing Internet Freedom (IF) community to understand what events exist, who attends each one, and what the purpose of each event is.
  • Always be clear about what will be shared from meetings or gatherings (participant list, notes, attribution, etc.) and set clear ground rules from the beginning to foster a sense of trust among attendees.
  • Co-design is most effective when it is implemented throughout the entire process. Utilize the co-design process not just for feature or tool development, but also for developing the larger feedback collection process.
  • User personas allow design and development teams to understand users without requiring direct access or communication.
  • User engagements or any feedback collection activities should always take place in a trusted environment or location.
  • When designing alerts, it is important to consider the cross-cultural interpretation of the language and design. Images and graphics can convey a more universal message and serve users with limited literacy.
  • There is no universal catalog of usability bugs in security tools, analogous to the Common Vulnerabilities and Exposures (CVE) catalog for security vulnerabilities. Are there ways to identify and/or automate the testing of usability failures by referencing “chaos engineering” style approaches (see https://en.wikipedia.org/wiki/Chaos_engineering#10-18_Monkey)?
  • Reframe “edge cases” as “stress cases.” Account for how people operate under stress, as this is a more universally applicable approach. Though levels of stress can differ, all users face stress at some point. (Credit to 18F staff, who originally shared this reframing.)