This project was a three-month UX research and design consulting engagement. The client was a small, early-stage fintech startup that had created three interrelated software products. While the client cared about user feedback, the organization did not have robust processes in place for conducting UX research. My team was asked to conduct research to evaluate all three of the client's products, and to provide recommendations and instruction on how the client could set up their own UX research practice as their business scales up.
My team consisted of me, another product designer, a principal product designer, and a product manager. While the principal designer was the overall project lead, I was the documentation lead. This meant I was responsible for making sure all of our final deliverables were completed appropriately and delivered on time, as well as for overseeing the creation and delivery of any UX research process resources that my team developed for the client. In addition, I naturally took on the role of workshop facilitator for all of our internal workshops, and I kicked off building the project roadmap with the rest of the team.
NOTE
Images below are modified
In order to protect the privacy of my client, I have removed the client's brand colors, typography, logos, and identifying sample data in the images below. As a result, the examples presented in this case study are not exactly what I delivered for the client, but they provide a close enough representation of my work.
This project had four main phases. The first phase primarily focused on gathering the resources that my team would need to move forward. This included learning more about the client’s business and desired goals for the project, identifying unknowns in our knowledge, and making plans to gain more information to fill any gaps.
The second phase delved deeper into understanding the client’s internal operations, so that we could establish their level of UX maturity and then provide growth recommendations. The third phase focused on the products themselves, while the fourth phase encompassed user feedback and perspectives.
The goal of the first phase was to identify gaps in our knowledge and more deeply understand the project's needs. To this end, I ran a workshop to guide the team in creating a research plan for the project. I started by setting up a collaborative brainstorming space in FigJam for the team, with the end goal of determining the research activities that we would conduct, the people who would need to be involved, and the questions that we would try to answer through our research.
We already knew that this would be a UX research project, so one of the main questions was which research methods would provide the results that the client wanted. Our client specifically mentioned having heuristic evaluations done on their product line, so we knew to include that activity. We also knew that the client wanted to learn how to extract and synthesize research data to produce product insights, and that they wanted feedback on their own internal processes.
With this knowledge, we determined that user interviews, surveys, and usability tests would provide the desired feedback from users, while conducting stakeholder interviews and surveys would give us the insight we needed on the current state of the client’s organization. For research methods to evaluate the product itself, we selected heuristic evaluations, as well as competitive analysis and usability testing.
From here, I created a preliminary roadmap for the project. Using the research activities that the team selected, as well as information on the three focus areas of the project (product, users, and business), I was able to develop a framework for the flow of research activities. I then shared this framework with the team, and we discussed timelines for the activities and deliverables that would be required. From there, the team’s product manager was able to create a formal backlog of activities in Jira, and pre-scope sprints all the way to the final day of the project.
With the roadmap in place, the first focus area to dig into was the client's business, specifically its UX operations. Before we could conduct stakeholder interviews, my team needed to establish what we wanted to learn from those conversations. I created a brainstorming activity and facilitated the associated workshop for my team to determine which questions were critical for us to get unblocked and make progress. As part of the activity, I led the team to identify what we didn't know, prioritize those unknowns in light of our project goals, and finally organize the highest-priority items into the script for the stakeholder interviews.
As we began scheduling interviews, we learned that the client team was split up by product line: each of the three products had its own product manager and associated director. To learn more about their processes, my team attempted to set up interviews with stakeholders on each of the three products; however, the lead for one of those products was facing a looming deadline for a major release and was reluctant to take the time to speak with us. After informing our primary stakeholder, we were asked to drop that product and focus only on the remaining two.
Scheduling stakeholder interviews on the other two products went quite smoothly, and in total my team was able to conduct six interviews, three for each product. We used Dovetail to collect video recordings and transcripts of all of the calls, which we then used for thematic analysis. I set up a FigJam space for myself and the other two designers on the team to collaboratively determine a set of codes that we would all use in the analysis. Once we had aligned on the codes, each of us reviewed all of the transcripts and applied the codes to snippets in Dovetail. From there, I led the thematic analysis activity in Dovetail; this involved grouping related snippets, often across codes, to create themes that captured the findings from the collection of qualitative data.
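For readers less familiar with the mechanics, the grouping step itself is conceptually simple, even if the judgment calls are not. Below is a minimal Python sketch, using invented snippet data rather than anything from the client, of how coded snippets roll up by code; in real thematic analysis, themes often cut across codes and are named by the researcher rather than derived automatically.

```python
from collections import defaultdict

# Hypothetical coded snippets as (code, excerpt) pairs, loosely
# resembling what a tool like Dovetail exports after coding.
snippets = [
    ("process_gaps", "We don't have a set way to run interviews."),
    ("siloed_teams", "I rarely hear what the other product team is doing."),
    ("process_gaps", "User feedback tends to get lost between releases."),
    ("user_focus", "We genuinely want to understand our customers better."),
]

# Group snippets by code as a starting point for theme-building.
# The researcher then merges and names groups across codes.
grouped = defaultdict(list)
for code, excerpt in snippets:
    grouped[code].append(excerpt)

for code, excerpts in grouped.items():
    print(f"{code} ({len(excerpts)} snippets)")
    for excerpt in excerpts:
        print(f"  - {excerpt}")
```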
The themes that arose from this activity all pointed toward an organization with fragmented UX research, siloed product teams, and inconsistent methods for integrating user feedback into the product. However, the product teams were also unified by a genuine desire to improve their products through a deeper understanding of their end users. Findings from the thematic analysis were consistent with the quantitative data that my team's principal designer collected in a survey, which aimed to gauge the client's overall UX maturity according to the scale defined by the Nielsen Norman Group. Results placed the client between the limited and emergent stages (levels 2-3).
After compiling data and establishing findings about our client's current practices, my team pivoted to diving deeper into the two products that we would provide UX research findings for. From Phase 1, we knew that we would conduct heuristic evaluations and competitive analysis for both products. I led the creation of a final heuristic evaluation report to hand off to the client, while the rest of the team focused on conducting the competitive analysis and generating a feature parity list and task flow analysis. Throughout the phase, I also wrote and delivered UX process documentation.
For both products' heuristic evaluations, my team and I collected screenshots of the target areas that we were asked to evaluate and placed them in a Figma file. From there, we reviewed the screenshots and left Figma comments pointing out the heuristic issues that we observed. For each issue, we recorded its severity, the level of effort to fix it, and its category within the ten usability heuristics defined by the Nielsen Norman Group. The Figma comments were then exported to a Google Sheets file for further analysis: identifying the heuristics where each product had the most issues, and aggregating the overall severity and level of effort of the issues we discovered.
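Google Sheets did this aggregation for us, but the underlying logic is simple enough to show. Here is a small Python sketch with made-up issue records; the heuristic names are genuine NN/g categories, while the severity and effort scales and values are illustrative stand-ins rather than our actual data.

```python
from collections import Counter
from statistics import mean

# Hypothetical issue records mirroring the columns we tracked:
# heuristic category, severity, and level of effort to fix
# (both rated on simple numeric scales; higher = worse/harder).
issues = [
    {"heuristic": "Consistency and standards", "severity": 3, "effort": 1},
    {"heuristic": "Visibility of system status", "severity": 2, "effort": 2},
    {"heuristic": "Consistency and standards", "severity": 4, "effort": 3},
    {"heuristic": "Error prevention", "severity": 3, "effort": 2},
]

# Count issues per heuristic to surface the top problem categories.
counts = Counter(issue["heuristic"] for issue in issues)
print("Issues per heuristic:", counts.most_common())

# Aggregate overall severity and effort across all findings.
print("Average severity:", mean(i["severity"] for i in issues))
print("Average effort to fix:", mean(i["effort"] for i in issues))
```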
Armed with both the qualitative evaluation data and the quantitative analytics data, I compiled our findings into the heuristic evaluation report. The report started with an overview of the product background, key persona, and user objectives that the evaluation was measured against. It provided overall insights into the product's heuristics results, then went into a deep dive of individual problems for each target area that my team evaluated. Through the report, I delivered my team's assessments and subsequent recommendations for improvement. These included suggestions to conduct a copy audit in order to develop consistent brand language, evaluate the products' information architecture to align it more closely with users' mental models, and create a cross-functional design library to achieve design consistency and reduce design debt over time.
In the last phase of the project, my team conducted user research for the two products that we had reviewed in the previous phase. Our specific research activities were user interviews and usability tests. My responsibilities in this phase included facilitating usability tests and delivering process documentation on user research activities.
Usability tests were conducted remotely over Zoom in a moderated format, with no more than two facilitators present (one to drive the test and one to take notes). Users were asked to complete a set of three tasks, chosen based on the target areas of the products that we were evaluating. The test comprised both qualitative and quantitative measures; after completing all of the tasks, users were asked to rate the difficulty of each activity and then to answer a few questions about their perception of the product and its ease of use. For each product, my team conducted two usability tests, for a total of four across both products.
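To make the quantitative side concrete, here is a brief Python sketch of how a single session's results could be tallied. The task names, rating scales, and values are hypothetical placeholders, not the client's actual tasks or data.

```python
from statistics import mean

# One moderated session: per-task completion and difficulty ratings
# (1 = very easy, 5 = very difficult), plus post-test perception
# questions on the same 1-5 scale. All values are illustrative.
session = {
    "tasks": {
        "create_report": {"completed": True, "difficulty": 2},
        "share_dashboard": {"completed": True, "difficulty": 4},
        "configure_alerts": {"completed": False, "difficulty": 5},
    },
    "perception": {"ease_of_use": 3, "would_recommend": 4},
}

tasks = list(session["tasks"].values())
completion_rate = sum(t["completed"] for t in tasks) / len(tasks)
print(f"Completion rate: {completion_rate:.0%}")
print("Mean task difficulty:", mean(t["difficulty"] for t in tasks))
print("Perception scores:", session["perception"])
```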
Results from the user interviews that my team conducted and the usability tests that I facilitated indicated that users had a positive impression of the products and felt that they met their needs better than competing products in the market. However, the products still had some limitations: rough onboarding, unintuitive information architecture and interactions, and features that were hard to find because of distracting interfaces and a lack of effective visual hierarchy.
With findings in hand from each focus area of the project (business, product, and users), my team switched gears to delivering our final reports and handing off any remaining deliverables. I owned the delivery of process documentation for the client's nascent UX research practice. In total, I wrote and reviewed over 40 articles covering topics including user interviews, thematic analysis, heuristic evaluations, and essentially every type of research activity that my team completed during this project.
I organized the documentation into four types: guides, examples, templates, and reports. Guides were step-by-step walkthroughs on how the client could adopt a UX research activity within their practice; in contrast, examples were the specific research processes and outputs that my team generated during our research into the client's business, users, and products. Templates were reusable artifacts that the client could use to streamline their research process, and finally, reports were the formalized findings and recommendations from the research that my team conducted.
My team and I held delivery presentations over three days: one for each of the client's two products, and one for UX research operations. Overall, the client received the information well, and praised my team especially for the well-documented process articles that I delivered to their Confluence. The client also appreciated gaining more insight into their users and usability issues; some of our research validated knowledge that they already had, while other findings were new to them.
The end of our delivery presentations marked the end of the project. To close out this case study, I'll cover a couple of things I learned, as well as improvements that, in hindsight, could have been made to this project.
Through working on this project, I gained experience in making process recommendations for organizations without a strong UX research process. Designers often have a heavy focus on “user research”, but the same techniques can be applied to improving internal processes of an organization. Through understanding the problems that stakeholders are facing and asking questions to gain clarity on pain points, I was able to collaborate with my team to surface targeted recommendations that the client could use to strengthen their UX research practice and their internal team collaboration.
Through conducting thematic analysis multiple times on this project, I also discovered how powerful a tool it can be for systematically uncovering insights from qualitative data. The process is a heavy lift and can be time-intensive, but the results yield a comprehensive story within the target area of research, one that can drive next steps and highlight problem areas (or even areas that are going well!).
In reading this case study, you may have noticed that my team typically engaged with two people per product for activities like user interviews and usability tests. The norm for these user research activities is to collect feedback from three to five people, in order to get a more representative sample without large outliers. Since our samples were so small, the data we collected may not reflect the majority of the products' users. The natural question, then, is: why didn't we do more user interviews?
One reason is that time was very limited relative to the project's large scope. My team had to adhere to strict deadlines to ensure that we completed all the requested deliverables by the project end date, since we were handling multiple products while also delivering feedback on the client's internal research operations. Scheduling interviews with users also required a long lead time: the client preferred to send inquiries to their customers themselves, and we were not allowed to reach out to the client's customers directly. The client was not able to provide a higher number of users within the necessary timeframe.
This would have been a larger problem if the project did not also include delivering a UX research process to the client; because it did, the reduced dataset was less of an issue. My team provided resources on how the client could conduct interviews and related research activities themselves, and we discussed the drawbacks of a small sample size with them. The client felt prepared to run more sessions on their own after we delivered our process recommendations, and was thereby enabled to build on the user interviews and usability tests that my team began.
On a related note, something else we identified in retrospect was the scope of work itself. Because my team was conducting research in parallel with consulting about research, we were unable to dive deep into either side in a way that might have yielded more effective results; we couldn't devote much time to any single area while rushing to complete all of them. Just as we were limited in the number of users that fit within our timeline, we were limited in how deeply we could engage with the client's business side. It would have been valuable to explore the client's operations more thoroughly by speaking with a greater number of stakeholders in varied positions throughout the organization. In similar future projects, I would recommend splitting business consulting work from product-related work, even when both address the same topic, which in this case was UX research.