MVA More Menu Tree Test
This study was one of the first I worked on as the Troubleshoot & Assist Portfolio lead, and it was one of my favorites due to the complexity and novelty of tree tests to me at the time. The aim of this study was to evaluate the structure and labels of the MyVerizon App 'More Menu' to determine how easily users can find support information and resources. We tested three different versions of the More Menu: the current version at the time and two versions the team hypothesized would perform well.
Objectives
Evaluate the performance of 3 different menu structures to understand how users naturally categorize and organize the support-related options in the MVA More Menu.
Key Questions
- How do users categorize the support-related options in the More menu?
- If a user designates a particular category, can they explain their reasoning?
- Were there any missing topics or categories they would like to add?
- If the users had to order and prioritize the categories, what would that look like?
- Were there any challenges in classifying the labels?
- For the categories defined, are there any options that users feel should be added to that category?
Methodology
Remote, quantitative tree test
Unmoderated individual sessions
Mobile device modality
Participants
- 300 participants total
- 3 test groups viewed 3 different menu formats
  - 100 users viewed Option A
  - 100 users viewed Option B
  - 100 users viewed Option C
- All current Verizon customers
- All were required to have used the MyVerizon App within a month of participating in the study.
- Mix of ages, incomes, ethnicities, and genders.
Evaluated Stimuli
Task List
Tools
UserZoom - UserZoom was used for recruitment, data collection, monitoring, and analysis.
Figma - The prototypes were created in Figma.
Google Office Suite - I utilized Google Office Suite to write test plans and to create findings presentations.
Miro - I utilized Miro for notetaking and synthesis.
Timeline
The Troubleshooting design team submitted a research brief in early January with an overall idea of what they wanted to accomplish with this study, but they weren't sure which methodology would best answer their questions. The research brief and prototypes needed some fine-tuning, but after that we were off and running, and within 6 weeks the team had answers to their questions.
Week 1
- Research brief submitted.
- Sent out invites for a research sync to walk through the brief and prototypes.
- Refined the brief and prototypes.
- Held project kickoff with the finalized brief and prototypes.
- After kickoff, began writing the test plan.
Week 2
- Finished writing the test plan.
- Sent the test plan to stakeholders for review, allotting 3 days for the team to review.
- Edited and finalized the test plan based on feedback.
- Programmed the test in UserZoom.
Week 3
- Submitted a request for a test run to ensure the programming was correct and the tasks were clear.
- Analyzed the test run results and adjusted task wording for clarity.
- Submitted the finalized test and began data collection.
Week 4
- Data collection finished over the weekend.
- Began notetaking and data synthesis.
Week 5
- Finished synthesis.
- Wrote the final presentation report.
- Presented findings to the team.
Key Takeaways
- For task 1 (battery drain) and task 2 (dropped calls), Versions A and B performed the best of the three versions.
  - The commonality between these versions is that 'Troubleshooting' is on the L1 menu.
- For the tasks involving 5G Home Internet (tasks 3-6), most users across all three versions clicked into the L1 options 'Home Support'/'Manage 5G Home'.
  - In tasks 4 (restart internet) and 6 (test internet speeds), all versions met the 65% minimum success threshold (the threshold check is sketched below this list).
  - In tasks 3 (internet connection issues) and 5 (improve internet speeds), no version met the 65% minimum success threshold.
    - Task 3's success option was 'Troubleshooting'; however, because the task focused on 5G internet, many users instead went into the L1 options 'Home Support'/'Manage 5G Home'.
    - On task 5, most users went into the L1 options 'Home Support'/'Manage 5G Home', but fewer than 65% of users in every version went on to choose 'Optimize 5G signal strength'.
- All three versions had >85% task success on task 7 (contact us).
- In task 8 (search for signal issues in area), Versions A and B had a success rate of 1% or less, while Version C had a 62% success rate.
  - The commonality between Versions A and B is that the success option is located within the L1 option 'Feedback'.
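For context on the pass/fail calls above: a task "passes" for a version when at least 65% of that version's 100 participants end their tree traversal on the designated success node. Below is a minimal sketch of that check, assuming a flat list of per-participant records rather than UserZoom's actual export format; the field names and sample data are illustrative only.

```python
from collections import defaultdict

# Hypothetical per-participant records: (version, task, reached_correct_node).
# Illustrative only; not UserZoom's export schema.
results = [
    ("A", "task_3", False),
    ("A", "task_4", True),
    ("C", "task_8", True),
    # ... one record per participant per task (100 participants per version)
]

THRESHOLD = 0.65  # the study's 65% minimum success threshold

totals = defaultdict(int)
successes = defaultdict(int)
for version, task, success in results:
    totals[(version, task)] += 1
    successes[(version, task)] += success  # True counts as 1, False as 0

for key in sorted(totals):
    rate = successes[key] / totals[key]
    status = "met threshold" if rate >= THRESHOLD else "below threshold"
    print(f"Version {key[0]}, {key[1]}: {rate:.0%} ({status})")
```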
Impact
- For tasks 3 (internet connection issues) and 5 (improve internet speeds), none of the versions met the 65% success threshold. Users frequently opted for 'Home Support' or 'Manage 5G Home', indicating that they may not have recognized 'Troubleshooting' as the appropriate option for internet-specific issues. This misalignment suggests that users need clearer guidance or labels distinguishing home internet troubleshooting from mobile troubleshooting.
  - This led to the decision to have one L1 option focused on all things related to mobile support ("Mobile Troubleshooting") and another for all things related to home support ("Home Internet Support").
- The significant difference in success rates for task 8 (search for signal issues in the area) indicates a critical issue. While Versions A and B had a success rate of 1% or less, Version C performed much better at 62%. This highlights the importance of option placement within the menu and suggests that placing the success option for task 8 under the L1 option "Feedback" was misaligned with user expectations.
  - This misalignment led to the Check Network Status tool getting an additional link within the new "Mobile Troubleshooting" and "Home Internet Support" L1 options. However, the Check Network Status tool still lives within "Feedback" as well.
Challenges
- Poor participant quality - The test results took a hit due to poor participant quality, which meant I had to go through every recorded session to weed out participants who didn't genuinely engage with the tasks and instructions. About 87 of the 300 participants exhibited behaviors indicative of random selection, likely motivated by the desire to receive their incentive as quickly as possible. This behavior skewed the data and made it hard to get a clear picture of each version's performance. Sifting through every session was tedious, but I was grateful that the majority of participants engaged with the tasks provided to them. I recruited 87 replacement participants to fill the gap, monitoring the quality of their sessions as they came in. The setback pushed the study timeline back only 2 days, and we were still able to fulfill the 300-participant count.
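Were I to run a study like this again, a lightweight pre-screen could cut down on that manual review. Below is a minimal sketch, assuming exported session records with per-task completion times; the field names, sample data, and median-time cutoff are hypothetical assumptions for illustration, not UserZoom's export schema or the criteria I actually applied (which came from manually reviewing each session).

```python
from statistics import median

# Hypothetical session records; field names, sample data, and the cutoff are
# illustrative assumptions, not UserZoom's export schema or the study's
# actual screening criteria (those came from manual session review).
sessions = [
    {"participant_id": "p001", "task_times_sec": [4, 3, 2, 3, 4, 2, 3, 2]},
    {"participant_id": "p002", "task_times_sec": [22, 35, 18, 40, 27, 31, 25, 29]},
]

MIN_MEDIAN_TASK_SECONDS = 8  # assumed floor for a considered tree-test choice

def looks_like_random_clicking(session: dict) -> bool:
    """Flag sessions whose median task time suggests rapid, random selection."""
    return median(session["task_times_sec"]) < MIN_MEDIAN_TASK_SECONDS

flagged = [s["participant_id"] for s in sessions if looks_like_random_clicking(s)]
print(f"Sessions to prioritize for manual review: {flagged}")
```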