Background
For nearly six months, my team and I had been working on an experience that helps data security specialists set up and maintain their internal controls for regulatory compliance. As our release inched closer, we set a goal to circulate our latest end-to-end experience amongst business partners, internal SMEs, and customers to understand whether the experience delivered enough value for a first release, or whether any missing experiences would prevent users from successfully setting up their controls.
As my squad does not have a dedicated researcher, I did a lot of the heavy lifting for this project.
Need statement
As a product team, we need to confirm that the experience we’ve created meets users’ most critical needs for compliance, ahead of our first release.
Users
Scott, the Data Security Admin
Jobs to be done
- Works with a non-technical Compliance Specialist to understand requirements for different regulations
- Maps compliance requirements to tools through “if this, then that” rules
- Generates compliance reports and supports the audit process
- Understands where the current setup falls short of meeting compliance
- Monitors sessions, collects and reviews data about user activity
Steps to proving regulatory compliance
A previous round of research showed that Scott and his team typically go through the following steps to prove regulatory compliance:
Our end-to-end experience would need to cover all of these steps for us to understand whether our designs met MVP expectations. However, the primary scope of the redesign focused on “Implementing technical controls”.
Process
Create research plan
I created a research plan that outlined the objectives of the study, our current questions and assumptions, and our methods for evaluating our solution.
For this study, we had access to 2 Business Partners, 3 internal SMEs, and, unfortunately, only 1 customer of our legacy product. I flagged to management the risk that we were not getting enough input from people who closely aligned with our target user.
Create testing materials
Next, I created a script for a usability test that would cover the entire process of setting up and maintaining compliance controls. By giving participants the chance to walk through the process from start to finish, my team and I would be able to see firsthand how they interacted with the prototype, and ask targeted questions along the way to address our assumptions. Participants had 15 tasks to complete in total.

Running the sessions
My teammate ran the sessions while the rest of our design squad listened in and took notes. In previous rounds of research, I had introduced my team to a method for capturing notes during interviews that I’d learned about in Jake Knapp’s book, Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days. In a shared virtual collaboration space, I created a grid with participant names on one axis and tasks on the other. Each observer gathered notes in the appropriate cell, making it easier to condense and organize notes during synthesis. (After seeing this practice in our squad, the Research team in my organization was impressed as well and adopted it as part of their Research Guidelines.)
Synthesis
In previous research rounds, I had a tough time trying to wrangle the team into doing group synthesis virtually, so I decided to try something new: I volunteered to do some preliminary affinity mapping to break down the vast amount of data we had and make it easier for others to consume.
I went task by task, comparing the responses from each participant and clustering them to identify common themes. Then I wrote a summary statement for each cluster so that others could grasp the key takeaway at a glance.
Once I had completed my preliminary affinity mapping, I held a working session with the rest of the design team to give them a chance to get to know the findings and share their point of view on which points to surface to the wider product team.

Finally, I compared everyone’s comments and takeaways to see what we all felt were the most critical points to bring back to the product team. We received a lot of positive feedback about the experience, but inevitably learned of areas where the experience needed to be improved.
Quotes from participants
Research playback
I used the research playback as an opportunity to highlight the riskiest findings that needed a high-level discussion across all members of the product team. This allowed us to surface some big gaps in the experience, including missing content and scalability concerns we had not previously been aware of. At the end, I summarized the quicker fixes the design team would need to make ahead of the release.
Key findings
Outcomes
Although we had access to only one customer, my team and I took advantage of whatever feedback we could get ahead of the release. Seeing the experience through another’s eyes helped us identify areas where we as a product team had gaps or could improve usability. Based on this feedback, we made changes to content, eliminated a dashboard that didn’t provide enough value, and put more emphasis on ‘getting started’ moments.
Lessons learned
Although I would still like to find a successful way of synthesizing virtually as a group, given the amount of data we had, it was very helpful that I did an initial round of synthesis for my team. See quotes below!
“Blown away. How did you do all this yourself” -Teammate 1
“Just wanna say Mallory smashed refining all the research o.0 !!!” -Teammate 2
Ideally, I would like to have included product managers and developers in this process, but they were only working on this project part-time and didn’t have the bandwidth to participate.