responsive web app
Survey administrators have to sift through thousands of responses to find real insight. Bridge Engage simplifies their work through a human-centered approach to data design.
BRIDGE ENGAGE is an enterprise SaaS solution in the nascent employee engagement market. The core product is a survey tool built to probe for challenges in the people space and gather insights about them. Every question contains an area for an employee to comment freely, and as you might imagine, responses pile up quickly. Through internal use and pilot clients, we found that an average survey generates 3-5 comments per user.
Engage does an excellent job of breaking survey and response data down at a high level. Still, it’s not always easy for a user to get from an abstracted factor score to actionable insight. It’s far easier to respond to individual feedback, but the number of responses received can easily reach into the thousands. With that many comments, it becomes impossible to respond to everyone. And it’s an even tougher job to synthesize that data manually.
Another challenge is looking at feedback objectively. If you’re the CEO, for example, people are commenting on you and your performance, and it’s tough not to take a ding every time someone says something negative about you. Abstracting individual comments into themes and topics helps avoid the psychological trap of recency bias.
At Instructure, our CEO read the comments. Every one. And we worked with like-minded companies that work very hard to meet challenges in the people space head-on.
INTERVIEWING HR LEADERS at our client companies was crucial. These individuals were already working on similar challenges, and we wanted to see how they were doing it, what tools they used, and what outcomes they were hoping to achieve.
One such interview was with our own internal People Team. Instructure has a history with both homegrown and vendor-supplied engagement solutions. Our C-team read every comment submitted, every quarter, and they knew firsthand the pain of trying to discover signal in the noise. Comments on an Instructure engagement survey average between 3,000 and 3,500. Other interviewees voiced similar concerns.
Setting aside any specific attachment to survey data, we recognized through our interviews that we needed to be sympathetic to different levels of comfort with charted data. Because Engage is built entirely around data and insights, we needed affordances that explained specific charts and the data behind them (and what was missing, if anything). But our team also believed in the importance of access to raw data: the charts exist to help a user pursue a direction of inquiry, not to obscure the data itself.
In other areas of Engage, we emphasize “closing the loop” through the UI. So we explored that idea within the comments report and tested a prototype of an anonymous conversation feature. It tested so well that the feature needed almost no iteration to release.
THE TEAM'S biggest challenge was finding a way to work through a large number of comments systematically to collect insight. Our engineers came up with the solution: Amazon machine learning services. One service scores each comment and sends back a confidence interval. A second service scans comments in aggregate to pull out common topics. And because every question belongs to a factor, we can graph these scores in groups. Being able to chart numerical data meant we could give users a sense of scale and direction before they get to the comments themselves.
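The write-up doesn’t name the exact Amazon services, so here’s a minimal sketch of how the per-comment scoring and factor grouping might work, assuming AWS Comprehend for sentiment. The comment shape and the `factor` field are hypothetical, for illustration only.

```python
import boto3
from collections import defaultdict

# Assumption: AWS Comprehend stands in for the unnamed Amazon service.
comprehend = boto3.client("comprehend", region_name="us-east-1")

def score_comment(text: str) -> float:
    """Score one comment: positive minus negative confidence, in [-1, 1]."""
    resp = comprehend.detect_sentiment(Text=text, LanguageCode="en")
    scores = resp["SentimentScore"]  # per-label confidence values
    return scores["Positive"] - scores["Negative"]

def average_score_by_factor(comments):
    """Group net sentiment by factor so scores can be charted in groups.

    Each comment is assumed (hypothetically) to look like:
    {"factor": "Leadership", "text": "..."}
    """
    buckets = defaultdict(list)
    for comment in comments:
        buckets[comment["factor"]].append(score_comment(comment["text"]))
    return {factor: sum(vals) / len(vals) for factor, vals in buckets.items()}
```

Topic extraction would run separately over the full comment set (Comprehend exposes this as an asynchronous topic-detection job), feeding the common themes shown alongside the factor scores.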
But it took a few tries to get to a confident solution. We tried a few things:
Population pyramids give more weight to large teams, so a large high-performing (or low-performing) team could mask real problems elsewhere in the organization. (There is a second reason this chart fails: using color alone to denote state fails any reasonable test of accessibility.)
We also tried a stacked area chart, thinking it might be interesting to plot sentiments against each other so we could see the impact of one on another. Comparison is, after all, a great way to gain context. But this chart failed at every level: our users could not quickly ascertain what they should be looking for, and stacking seemed to weight one factor over another.
Bubble charts do a better job of helping users draw a conclusion. Users can guess where the actual data point might be, but without practice, it’s not much more accurate than playing pin the tail on the donkey. Because the chart should point a user toward a direction worth investigating, we found we needed more precision.
With each prototype, we tested small tasks on groups of users. Each failed, and ultimately we settled on a two-axis frequency chart because it can quickly show both how positive or negative something is, and how frequently employees rate that factor. Later we layered in the data from a bubble chart so that when a user investigates a particular factor sentiment score, they see how the data distributes across teams. In response to feedback, we also added historical data from the previous survey so our users could measure the impact of their work.
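For a sense of what that two-axis view encodes, here’s a rough sketch with made-up numbers (matplotlib for convenience): each factor is plotted by its average sentiment and how many comments it received.

```python
import matplotlib.pyplot as plt

# Illustrative data only: average net sentiment (x) and comment count (y) per factor.
factors = {
    "Leadership":   (-0.21, 412),
    "Compensation": (-0.05, 268),
    "Growth":       (0.34, 530),
}

fig, ax = plt.subplots()
for name, (sentiment, count) in factors.items():
    ax.scatter(sentiment, count)
    ax.annotate(name, (sentiment, count))

ax.axvline(0, linewidth=1)  # neutral boundary between negative and positive
ax.set_xlabel("Average comment sentiment (negative to positive)")
ax.set_ylabel("Number of comments")
ax.set_title("Factor sentiment vs. comment frequency (illustrative)")
plt.show()
```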
But it still wasn’t as clear to our users as we’d hoped. We knew from interviews that our users were more interested in outcomes than specific data points. So we continued by building helpers—easy buttons to help users quickly find the highs and lows.
Actions speak louder than words.
Another thing we pursued was making comments more accessible to our admins while maintaining anonymity for the employee. Engage never reveals the identity of the individual commenter, but responses to that person are known. So if, for example, a user has an issue with an easy resolution, an admin can pop in and provide direction. And the admin is known so that the commenter can trust the response.
An employee’s real-life context can quickly affect how they answer the survey. If something negative happens to the employee, a natural response is to answer survey questions with a negative mindset. But our admins can’t know that context (survey responses are anonymous), so we wanted to build in a feature that threads responses together. If someone poo-poohs every answer, our admins can string those responses together in the report view and decide what to do in context. And that context is still valuable even when the responses vary.
Threading comments helps the admin gain context for specific responses.
ONE ADDITIONAL REQUIREMENT: as an Instructure product, one of the criteria for release is accessibility. Through ideation, design, and development, we focused first on getting the data and charting right. But we didn't skip the sometimes-unseen work. WCAG AAA FTW!
I've often felt like my design output didn't account for the WHOLE problem because different companies or teams did not prioritize accessibility work. So although this doesn't appear to be much, I'm proud to have worked with teams at Instructure that take this work seriously. It IS important.
We built the core Engage product in 12 months. It's actively used internally at Instructure and has been sold into at least 10 Bridge accounts. I’m highlighting one of the more challenging reports here, and I have also written about one of our other essential features in Manager Reports.
The Figma prototype. Click around: these screens are almost exactly what we prototyped with clients and delivered to the engineering team.
If an ad blocker disables the Figma embed, view it here: Comment Reporting Prototype
Credits
Project Type: Web application
My Role: UX, research, visual design
© Copyright 2024 Don Carroll
Get in touch: don@sbx.cr