Lucid Ratings Mobile Apps
The first blockchain-based authentication system for ratings and reviews. How I used UX Research and Strategy to bring truth and transparency to healthcare.
PROJECT BACKGROUND
Online doctor rating and review systems have existed since 1998, but they remain deeply flawed. These systems have no authentication or verification, so fake and fraudulent reviews are rampant. Less than 1% of patients leave reviews, so very little meaningful information about doctor quality exists. Further, no mechanism exists for handling unduly negative reviews, so clinicians can suffer undeserved reputation damage.
THE PROBLEM
OUR SOLUTION
We built a two-sided marketplace with patient and provider apps (iOS and Android) that incentivizes patients to leave reviews and provides an enhanced rating methodology, demonstrating the real quality of clinician care and adding transparency to the healthcare community.
PRIMARY DELIVERABLES
July '20 - June '21
UX Research
UI Design
Information Architecture
Front-end dev support
Product Strategy
Product Management
Ian Dorward: Founder
Stephen Foster: Chief Architect
Taehee Kim: UI Designer
Veronica Benduski: UX Researcher, Designer, Product Manager
Sketch
Figma, Figjam
Miro
Notion
Visual Studio
BACKGROUND RESEARCH & VALIDATION
Taking a step back
I joined the team with high-fidelity screens already in development. However, customers had not yet seen either app, so I brought the team back a step to conduct user research and validation.
COMPETITIVE ANALYSIS
Starting with a competitive analysis matrix, Subject Matter Expert (SME) interviews, and generative research with local patients and providers, I validated that Lucid Ratings indeed addresses a crucial gap in the marketplace.
USER RESEARCH
I used an affinity mapping exercise to synthesize all the insights gained from user research, generating key insights that informed a baseline understanding of our core customer needs and motivations. This then allowed me to more accurately craft usability test plans for product validation.
USABILITY TESTING
How insights informed iteration
I conducted remote moderated usability testing on our existing product, tracking quantitative task ratings based on perceived difficulty and synthesizing post-test qualitative impressions of both the patient and provider apps.
This testing revealed a few key insights, which guided our redefinition of the problem statement, a change in scope, and subsequent ideation.
PROBLEM REDEFINITION & STRATEGY
I facilitated a workshop where we turned key insights from usability testing into "How Might We" questions. This helped us to more clearly redefine core customer problems and prioritize four key areas for iteration with impacts on both Patient and Provider apps:
IDEATION
1. Provider Data
“What is Ribbon Health?”
When searching for a doctor or completing their profile, all test participants asked this question. We learned that we had to explain what this external database was and how we were using it to help providers quickly and accurately create their profiles.
We clarified the UX design and copy and gave providers the option to merge data or skip it altogether, ensuring that Lucid Ratings always had the most up-to-date provider information.
For patients, we removed any mention of Ribbon Health from the direct patient user flow to keep the focus on searching for a provider and remove any possibility of confusion. Patients who wanted to learn more about Ribbon Health could find information about how we sourced our data in the "About" drawer.
2. Question Set
Testing also revealed that the question set was too long for the majority of patients and providers. We used 6-8-5 sketching exercises to generate potential solutions.
Provider App: We added a summary at the top of the review confirmation section; providers found this particularly useful when quickly taking a look at patient reviews.
Patient App: We needed to make it quick and easy for patients to leave a review. So we slimmed down the question set, created categories with an overview screen to start the flow, added a tracker, and provided bonus rewards after completing each section. Patients told us this helped keep them engaged throughout the process and they were 93% more likely to complete the review following these updates.
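The sectioned flow described above can be sketched as a simple progress model; the section names and data shapes here are hypothetical illustrations, not the production implementation:

```python
def review_progress(sections):
    """Given {section_name: (answered, total)} question counts,
    return the overall completion fraction (for the tracker) and
    the sections that have earned their bonus reward."""
    answered = sum(a for a, _ in sections.values())
    total = sum(t for _, t in sections.values())
    completed = [name for name, (a, t) in sections.items() if a == t]
    return answered / total, completed

# Example: a slimmed-down, two-category question set
fraction, earned = review_progress({"Care Quality": (3, 3),
                                    "Office Experience": (1, 4)})
```

Surfacing the fraction in a tracker and rewarding each completed section gives patients visible momentum through the review, which is the engagement effect the testing measured.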
3. Negative Review Mediation
Negative Review Mediation is triggered when the app’s algorithm flags an excessively negative review left by a patient. Although these reviews are often valid, we want to give the patient extra time to reach out to their provider and/or revisit their responses. The goal is to help catch fake reviews.
Provider App: Providers told us they felt that patients need more time to take a beat and get a second opinion after a very negative healthcare experience. We increased the patient review time to 5 business days and included a countdown notification on the provider dashboard.
Patient App: Contrary to our assumptions, the majority of patients did not want to speak to their doctors after a very negative experience. However, they were open to office administrators reaching out to them after they left a negative review. So we used a brainwriting exercise to pivot. We generated an option that gave patients the ability to reach out to a staff member in the manner in which they felt most comfortable. We also collapsed the question set into sections with symbols flagging the areas still considered “very negative” by our algorithm, making it less overwhelming for patients to see the summary of their reviews.
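A minimal sketch of the mediation mechanics described above: flagging a review whose ratings fall below a threshold, and computing the five-business-day window shown in the provider countdown. The threshold value and rating scale are assumptions for illustration; the case study does not disclose the real algorithm.

```python
from datetime import date, timedelta

NEGATIVE_THRESHOLD = 2.0  # hypothetical cutoff on an assumed 1-5 scale

def is_excessively_negative(ratings):
    """Flag a review whose average rating falls below the threshold."""
    return sum(ratings) / len(ratings) < NEGATIVE_THRESHOLD

def mediation_deadline(submitted, business_days=5):
    """Return the date `business_days` business days after submission,
    skipping weekends -- the end of the patient's mediation window."""
    day = submitted
    remaining = business_days
    while remaining > 0:
        day += timedelta(days=1)
        if day.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return day
```

The provider dashboard countdown would simply render the time remaining until `mediation_deadline(submitted)`.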
4. Identity Verification
Lastly, while the business had already formed a partnership with an external vendor to verify identity, usability testing revealed that all participants failed the task of verifying their identity by downloading a separate app.
We decided to instead create an integrated in-app experience with an API. This made it simple for customers to securely verify their identities without ever leaving the app. Another round of usability testing returned a 100% success rate on the same task.
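The in-app integration can be sketched as a thin client around the vendor's session API. The vendor is unnamed in the case study, so the client, endpoint, and response fields below are all hypothetical stand-ins:

```python
import uuid

class StubVendorClient:
    """Stand-in for the real identity-verification vendor SDK
    (the actual vendor and API are not named in the case study)."""
    def create_session(self, user_id):
        return {"session_id": str(uuid.uuid4()),
                "capture_url": f"https://verify.example.com/{user_id}"}

def start_verification(user_id, client):
    """Begin a verification session so the app can present the
    vendor's document-capture UI inline, instead of asking the
    user to download a separate app."""
    session = client.create_session(user_id)
    return session["session_id"], session["capture_url"]
```

Keeping the capture step inside the app removes the context switch that caused every participant to fail the original task.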
INFORMATION ARCHITECTURE
IA Update
These learnings also meant reworking parts of our Information Architecture, especially when it came to the identity verification process.
I did a content audit of our current mobile app architecture in a spreadsheet and visually mapped the core pieces of our navigation to help lead our discussion, highlighting where additional complexity needed to be further detailed with a user flow.
This user flow made it much clearer where overlaps existed between the Patient and Provider apps, where we could reuse existing work, and how to prioritize additional development work.
Solution & Learnings
In this project, I learned the importance of not making assumptions. Customers always surprise us in the way they interact with our experiences. I learned how to connect customer-backed data with business outcomes, clearly communicating with and aligning the team on tangible next steps.
The four key iterations presented above led to a two-sided application that was accepted into the Capital Innovators accelerator program in its final phases of testing and launch. Without generative research into the problem space and continuous usability testing, we would not have built a product that so successfully addressed both customer and business needs.