Triaging Fuel Purchases for Fleets

Erin Bailie
6 min read · Apr 1, 2024


This blog covers a UX Research project performed at IntelliShift in early 2024. IntelliShift is an all-in-one fleet efficiency platform. Fleet Managers install IntelliShift devices in their vehicles and equipment, and the devices transmit GPS and engine data to the web, where IntelliShift’s web platform offers tools and dashboards for viewing data and finding trends and insights.

At the time, I was an individual contributor Product Manager spearheading the creation of UX Research operations within the company. In this blog, I discuss all the usual stuff: study goals, methods used, etc. I also touch on the process of developing an informed strategy and bringing it to reality.

If you prefer a visual medium, you can view the study details in FigJam.

Background

One of IntelliShift’s products is Fuel Manager, a tool designed to help fleet professionals monitor the fuel efficiency of their fleet, and identify potential instances of fuel theft. One of the biggest hindrances to accurate insights for a fleet is when fuel transaction data cannot be associated with a driver or a vehicle, breaking the chain of insight that connects a purchase to a responsible party.

The goal of this study was to explore UX enhancements for the Fuel Transactions page to facilitate manual assignment of transactions. Before the feature was implemented, less than half of fuel purchases in the platform were associated with a driver or vehicle entity. Our strategic goal was to increase that metric to 90% or higher, and we determined that creating a feature to allow manual assignment of transactions was our first iterative step. This research evaluated which UX designs to support that strategic goal.

Wireframe for Design A: focused assignment of transactions via modal.
Wireframe for Design B: in-line assignment of transactions.

During UX design ideation, two design patterns emerged as possible directions for the feature. My UX Design collaborator and I identified that we needed more information to downselect to a final design, so I made a plan to conduct UX Research on the designs.

  • I wanted to compare the design options head-to-head, ruling out attitudinal studies.
  • I had no engineering resources available for the design period, ruling out A/B testing.
  • Interviews, the modus operandi at IntelliShift, are not a scalable study method, and I wanted to set a precedent for more scalable methods.

Due to these constraints and priorities, I opted for an unmoderated usability test.

Study Setup

I selected Maze to run the study, opting for the free tier with the goal of using the results of the study to justify purchasing licenses for higher tiers in the future. In collaboration with the UX Designer on the team, I created the set of study questions — and then whittled them down to fit into the constraints of Maze’s free tier.

On the left are the original study questions, and on the right are the revised questions to fit within the study tool’s limitations.

The study consisted of 6 sections:

  1. Introduction
  2. Open Question, which displayed the current product page for fuel transactions, and asked the participant how they use the page today. The goal was to get the participant “warmed up” and in-context for the following tasks.
  3. Prototype Test with Prototype A (focused assignment via modal)
  4. Prototype Test with Prototype B (in-line assignment)
  5. Open Question, asking the participant which design they preferred. The goal of this question was to provide attitudinal insights to supplement the behavioral insights gathered in the two prototype tests.
  6. Open Question, asking for any other comments or feedback

The study was sent to a subset of users who had interacted with the Fuel Transactions page in the past 30 days. 7 users were invited to participate via email, and 3 responded to the invitation.

Email invitation for participation in the study.

Study Results and Insights

43% (3 of 7) of invitees participated in the study.

Looking at the Time to Success and Misclick data, a trend emerges: Prototype A took users more time and effort to complete the task successfully than Prototype B. Additionally, all 3 users surveyed expressed a preference for Prototype B.

Exploring heatmaps of clicks showed that selecting the touch target in the dropdown generated the highest rate of misclicks.

When deriving insights from the study results, there are two major caveats to consider:

  1. All users were presented the prototypes in the same order, so the results are subject to order effects (primacy and recency bias). Participants' familiarity and comfort with the Maze system grew as they completed the study: they became more adept with the tool, or more familiar with the UX components in the design, allowing them to succeed more quickly on the second prototype. Considered alongside the high misclick rate in the dropdown component, it's possible that it was the dropdown, not the modal, that caused such poor results for Prototype A.
    In follow-up studies, I plan to randomize tests whenever possible to remove the influence of familiarity.
  2. The number of study participants was small, and there is risk in extrapolating insights to a larger population. It's possible the 3 participants who responded don't reflect the broader needs of the customer base! That said, after considering the qualitative insights on user attitudes from the preference question, which corroborated that Prototype B was preferred, I felt comfortable moving forward with that design. Had the user comments been less cohesive, I would have pursued additional study.

Product Outcomes

In parallel with this research, I spearheaded a project with our backend engineering team to improve the goal metric: the percentage of transactions assigned to a vehicle or driver. The backend project had a surprising impact — once implemented, over 98% of transactions in the platform were assigned. That beat our goal metric of 90% by a long shot.

Because the business need had been met, the designs for transaction assignment were never implemented.

Reflections

Operationally, I learned the importance of randomization in unmoderated studies to head off recency and primacy bias. In studies since, I’ve been diligent about randomizing the order of testing questions, and being skeptical of the results of the first task when a subject is using an unfamiliar tool.
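As a concrete illustration of the randomization lesson above, counterbalancing can be done by shuffling the task order per participant. This is a minimal sketch, not part of the original study; the function and participant IDs are hypothetical, and in practice a tool like Maze may offer its own ordering controls.

```python
import random

def assign_task_order(participant_id, tasks):
    """Return a shuffled copy of `tasks` for one participant.

    Seeding the RNG with the participant ID makes each participant's
    order reproducible, so results can be re-analyzed by order later.
    """
    rng = random.Random(participant_id)
    order = list(tasks)
    rng.shuffle(order)
    return order

# Hypothetical usage: two prototypes, order varied across participants
tasks = ["Prototype A (modal)", "Prototype B (in-line)"]
for pid in ["p1", "p2", "p3"]:
    print(pid, assign_task_order(pid, tasks))
```

Recording which order each participant saw also lets you check afterward whether the first task was systematically slower, i.e., whether a learning effect is present.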

Organizationally, I used the example of this study to show our C-Suite the value of Maze as a UX Research tool, and IntelliShift now has a corporate license for the tool, allowing me to conduct additional studies and freeing me from the limitations of the free license tier.

The low participant response to this study also contributed to an organizational conversation about customer engagement and helped us prioritize initiatives to improve it. As a result, studies and interview requests in the months since have had larger participant pools and higher response rates.

Lastly, IntelliShift is a small company, and wearing many hats on the product team, all at the same time, gets tricky! On this project specifically, I felt a tension between the desire to gather rich insights and triangulate with multiple study methods, and the desire to reach certainty on the designs and move the project to its next phase.


Written by Erin Bailie

Bringing customer voice to product teams for 10+ years.
