Triaging Fuel Purchases for Fleets

Erin Bailie
Apr 1, 2024

This blog covers a UX Research project performed at IntelliShift in early 2024. IntelliShift is an all-in-one fleet efficiency platform: Fleet Managers install IntelliShift devices in their vehicles and equipment, and the devices transmit GPS and engine data to IntelliShift’s web platform, which offers tools and dashboards for viewing data and surfacing trends and insights.

At the time, I was simultaneously one half of the Product Management team and spinning up UX Research efforts within the company. In this blog, I discuss all the usual stuff: study goals, methods used, and so on. I also touch on the challenges of juggling UXR and PM responsibilities in a small company.

If you prefer a visual medium, you can view the study details in FigJam.

Background

One of IntelliShift’s product modules is Fuel Manager, a tool designed to help fleet professionals monitor the fuel efficiency of their fleet and identify potential instances of fuel theft. Fuel managers import their fuel purchases into the platform via front-end file upload or recurring API integrations. One of the biggest hindrances to accurate insights for a fleet is fuel transactions that cannot be tied back to a driver or a vehicle, breaking the chain that connects a purchase to a responsible party.

The goal of this study was to explore UX enhancements to the Fuel Transactions page to facilitate manual assignment of transactions. Before the feature was implemented, 14% of fuel transactions in the platform were missing an assignment to a driver or vehicle. The goal of the feature was to drive that percentage as low as possible, and the goal of the study was to find the UX design that best supported the user behavior of manually assigning transactions.
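To make the target metric concrete, here’s a minimal sketch of how that unassigned percentage could be computed. The record shape and field names are hypothetical, not IntelliShift’s actual schema.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical transaction record; field names are illustrative only.
@dataclass
class FuelTransaction:
    gallons: float
    amount_usd: float
    driver_id: Optional[str] = None   # None when no driver is assigned
    vehicle_id: Optional[str] = None  # None when no vehicle is assigned

def unassigned_rate(transactions: list[FuelTransaction]) -> float:
    """Share of transactions missing a driver or a vehicle assignment."""
    if not transactions:
        return 0.0
    unassigned = sum(
        1 for t in transactions
        if t.driver_id is None or t.vehicle_id is None
    )
    return unassigned / len(transactions)

# e.g. 14 unassigned out of every 100 imported transactions -> 0.14
```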

Wireframe for Design A: focused assignment of transactions via modal.
Wireframe for Design B: in-line assignment of transactions.

During UX design ideation, two design patterns emerged as possible directions for the feature. My UX Design collaborator and I identified that we needed more information to downselect to a final design, so I made a plan to conduct UX Research on the designs.

  • I wanted to compare the design options head-to-head, ruling out attitudinal studies.
  • I had no engineering resources available for the design period, ruling out A/B testing.
  • Interviews, the modus operandi at IntelliShift, are not a scalable study method, so I wanted to set a precedent for more scalable approaches.

Due to these constraints and priorities, I opted for an unmoderated usability test.

Study Setup

I selected Maze to run the study, opting for the free tier with the goal of using the results of the study to justify purchasing licenses for higher tiers in the future. In collaboration with the UX Designer on the team, I created the set of study questions — and then whittled them down to fit into the constraints of Maze’s free tier.

On the left are the original study questions, and on the right are the revised questions to fit within the study tool’s limitations.

The study consisted of 6 sections:

  1. Introduction
  2. Open Question, which displayed the current product page for fuel transactions, and asked the participant how they use the page today. The goal was to get the participant “warmed up” and in-context for the following tasks.
  3. Prototype Test with Prototype A (focused assignment via modal)
  4. Prototype Test with Prototype B (in-line assignment)
  5. Open Question, asking the participant which design they preferred. The goal of this question was to provide attitudinal insights to supplement the behavioral insights gathered in the two prototype tests.
  6. Open Question, asking for any other comments or feedback

The study was sent to a subset of users who had interacted with the Fuel Transactions page in the past 30 days. Seven users were invited to participate via email, and three responded to the invitation.

Email invitation for participation in the study.

Study Results and Insights

43% (3 of 7) of invitees participated in the study.

Looking at the Time to Success and misclick data, a trend emerges: Prototype A took users more time and effort to successfully complete the task than Prototype B did. Additionally, all 3 participants expressed a preference for Prototype B.

Exploring click heatmaps showed that the touch target in the dropdown generated the highest rate of misclicks.
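Maze reports these metrics out of the box, but to illustrate the comparison being made, here’s a sketch of how per-prototype Time to Success and misclicks could be summarized from an exported results file. The numbers are made up for illustration and are not the study’s actual data.

```python
from statistics import median

# Illustrative per-participant results; not the actual study data.
results = [
    {"prototype": "A", "seconds_to_success": 48.0, "misclicks": 4},
    {"prototype": "A", "seconds_to_success": 39.5, "misclicks": 3},
    {"prototype": "B", "seconds_to_success": 21.0, "misclicks": 1},
    {"prototype": "B", "seconds_to_success": 17.5, "misclicks": 0},
]

def summarize(proto: str) -> dict:
    """Median time to success and total misclicks for one prototype."""
    rows = [r for r in results if r["prototype"] == proto]
    return {
        "median_seconds": median(r["seconds_to_success"] for r in rows),
        "total_misclicks": sum(r["misclicks"] for r in rows),
    }

for proto in ("A", "B"):
    print(proto, summarize(proto))
```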

When deriving insights from the study results, there are two major caveats to consider:

  1. All users were presented the prototypes in the same order, so the results are subject to order effects. Participant familiarity and comfort within the Maze system grew as they completed the study: they became more adept with the tool, or more familiar with the UX components in the design, and could succeed more quickly. Considered alongside the high misclick rate in the dropdown component, it’s possible that it was the dropdown, not the modal, that caused such poor results for Prototype A.
    In follow-up studies, I plan to randomize task order whenever possible to remove the influence of familiarity (see the sketch after this list).
  2. The number of study participants was small, and there is risk in extrapolating insights to a larger population. It’s possible the 3 participants who responded don’t reflect the larger needs of the customer base! That said, after considering the qualitative insights on user attitudes via Question 4, which validated that Prototype B was preferred, I felt comfortable moving forward with that design. Had the user comments been less cohesive, I would have pursued additional study.
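For the follow-up randomization mentioned above, here’s a minimal sketch of one way to counterbalance prototype order across invitees before sending out study links. The participant IDs are hypothetical, and this sits outside whatever ordering features the study tool itself offers.

```python
import random

PROTOTYPES = ["A", "B"]

def assigned_order(participant_id: str, seed: int = 2024) -> list[str]:
    """Deterministically shuffle prototype order for one participant.

    Seeding on the participant ID keeps assignments reproducible
    if the invite list is regenerated.
    """
    rng = random.Random(f"{seed}:{participant_id}")
    order = PROTOTYPES.copy()
    rng.shuffle(order)
    return order

# Hypothetical invitees: roughly half should see Prototype B first.
for pid in ["user-01", "user-02", "user-03", "user-04"]:
    print(pid, "->", assigned_order(pid))
```

With only two prototypes, strict alternation (A-first for half the invitees, B-first for the rest) achieves the same counterbalancing with even less machinery.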

Reflections

I wish I had been able to conduct more research to rule out the influence of familiarity and sample size in the results. If time hadn’t been a constraint for this project, I would have conducted a follow-up study with the following parameters:

  • Invite additional IntelliShift users to participate. Since we had exhausted the pool of regular Fuel Transactions users, I would identify users who are adept in other areas of the platform.
  • Expand the touch target on the dropdown.
  • Randomize the order of the tasks.

One of my biggest personal struggles through this research effort was balancing my responsibilities as a UX Researcher with my responsibilities as a Product Manager. IntelliShift is small, and the product team is two employees plus a few hours a week from a UX Design contractor. Wearing many hats, all at the same time, gets tricky! Specifically for this project, I felt a tension between the desire to get rich insights by triangulating across multiple study methods and the desire to reach certainty on the designs and move the project to the next phase.

I’m aware that PM bias towards decisions can be a poison that sours research insights, and I do my best to prevent that. I try to build a mental firewall between the two functions: I use different web browsers for PM and UXR tasks, and try to schedule them on different days of the week. If any readers have feedback or insights on keeping discovery and decision-making separated, I’m all ears!


Erin Bailie

Former PM, looking to pivot into UX Research. This used to be a blog about bikes, and sometimes still is.