Expert Mileage Tracking

In which the actual problem wasn't what we thought it was. Like, at all.

$200,000

In annual employee hours saved

Data Gaps

Uncovered during user research

Team

I was the lead designer and researcher on this project, working on a small team with 5 engineers and a product manager. I collaborated with my pal Bodhi on research execution and ideation.

Problem

In Q1 of 2020, our Field Operations partners brought us a problem with Expert (employee) mileage tracking. Employees were adding a little more mileage to their expense reports each day than forecast — +6.7 miles per job over initial forecasting, at $0.56 per mile, across an average of 3.2 jobs per day.

Operations hypothesized that Experts were tracking their mileage during lunch breaks, trips to gas stations off-route, and other unsanctioned trips. Our team was asked to develop a feature that would keep employees from over-reporting their mileage.

Across a network of 1500 employees, this issue was costing the company $2.08M annually.*

Another issue was that Coaches (managers) were also manually validating the mileage on 20-25 expense reports each week, depending on the market. That takes a lot of time!

*Allegedly…we’ll come back to this.
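The projection above is simple enough to sketch in code. The overage rate, mileage rate, jobs per day, and headcount all come from the figures above; the number of working days behind the $2.08M figure was never shared, so `workdays` here is my own assumption:

```python
# Back-of-envelope version of the Ops projection. All constants are from
# the case study; the workdays parameter is an assumption of mine (the
# day count behind the original $2.08M figure wasn't shared).

MILES_OVER_FORECAST_PER_JOB = 6.7  # miles claimed above forecast, per job
RATE_PER_MILE = 0.56               # reimbursement rate, $/mile
JOBS_PER_DAY = 3.2                 # average jobs per Expert per day
EXPERTS = 1500                     # size of the field network

# Each Expert's daily overage: 6.7 mi * $0.56 * 3.2 jobs ≈ $12.01/day
overage_per_expert_day = MILES_OVER_FORECAST_PER_JOB * RATE_PER_MILE * JOBS_PER_DAY

def annual_overage(workdays: int) -> float:
    """Projected annual overage cost across the whole network."""
    return overage_per_expert_day * EXPERTS * workdays

print(f"${overage_per_expert_day:.2f} per Expert per day")
print(f"${annual_overage(115):,.0f} per year at 115 workdays")  # ≈ $2.07M
```

The 115-workday figure is only illustrative; the annual total scales linearly with whatever day count the original projection assumed.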

Discovery

Understanding what we pay for

To make sure we were clear on what our employees were supposed to get reimbursed for, I reached out to our ops team to get a copy of our Expert handbook.

What qualifies as reimbursable miles? The employee handbook outlined three use cases for reimbursable mileage:
  • En-route to a job (from warehouse or prior job)

  • Heading home for the day when you are more than 50 miles from the warehouse — Experts are paid for mile 51 and up.

  • Trips en-route back to the warehouse during the day (these are very infrequent)
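These rules are concrete enough to express as a tiny function. This is purely my own illustrative sketch — the trip-type names are made up, not from any real system — but the 50-mile threshold is straight from the handbook:

```python
# Illustrative sketch of the handbook's reimbursement rules.
# Trip-type names are hypothetical; the 50-mile rule is from the handbook.

def reimbursable_miles(trip_type: str, miles: float) -> float:
    """Return the miles an Expert can claim for a single trip.

    trip_type: 'to_job' | 'home' | 'to_warehouse'
    Drives home are only paid from mile 51 onward.
    """
    if trip_type in ("to_job", "to_warehouse"):
        return miles
    if trip_type == "home":
        return max(0.0, miles - 50)
    raise ValueError(f"unknown trip type: {trip_type!r}")

print(reimbursable_miles("home", 62))  # 12.0 -- only miles 51-62 are paid
print(reimbursable_miles("home", 30))  # 0.0 -- under the 50-mile threshold
```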

What matters to our users?

We had two primary users to consider here — Experts (the frontline employees) and Coaches (their managers).

After speaking to Experts and Coaches, it was clear to us that both groups were most concerned about their on-time arrival rate. Out of all the metrics by which they are measured, on-time arrival was most important.

"I will go a route that might be more miles to get there faster because I want to keep getting that, that high OTA score. That's what [my Coach] is always coaching us on — get there in the time window, and your [rating] is going to stay up" — Expert

Employee tools

We needed to understand how our Experts were using their mileage tracking system, and how they were keeping up with miles driven to enter them accurately into the system. Here's what I learned about how they viewed and handled mileage tracking in our round-table sessions:

  • Experts use Concur's UI to track their mileage.

  • Concur doesn't ask for mileage driven — the software asks for addresses. Once an Expert adds the addresses they need to visit, Concur uses the Google Maps API to determine the shortest route, and that's the distance Experts are compensated for.

    • This wasn't editable on mobile. So if an Expert drove a quicker route with less traffic but more miles, they had to settle for being paid for the shorter route, or log into their computer to edit the entry and contest the recommended mileage.

"We just say, you're gonna get — you're gonna get paid the mileage based on what the app tells you. So you put in your location A, and you put in location B, and if you go a different way, that's up to you, but like, that's basically — it's all calculated. And that's why we recommend they use Google Maps." — Coach

  • It is also worth calling out that once a job was completed, it vanished from the app. This meant Experts needed to enter their miles before driving them, or record customer addresses either in their phones or in a notebook.

As for people managers, operations projected that each Coach was spending about 173 hours each year ($258k annually when spread across 78 coaches) reviewing, validating, and approving mileage reports.

Hypothesis

After conducting interviews with three Coaches and 18 Experts, and gaining a better understanding of how our Expert mileage tracking system worked, I started to question the accuracy of our data, and formed a hypothesis:

The data set was incomplete — our projections weren't taking into account long drives home (>50 miles) or second trips to the FSL.

Turns out, I was right. And in fact, we weren't taking into account many of our edge cases in our projections. For instance:

"Let's say an Expert drives all the way to a job... and sometime while they were driving, it got cancelled. They really don't have a good way to prove they were at that location... Some days I'll have an Expert with 2 jobs in Power BI, and they claim mileage for 3... The Experts really have to stay on top of it to help with the success of it."
— Coach

💡 Edge cases like this weren't making it into our projection data!

Solution

When Product and Operations partners realized the projections were wrong, we shifted the focus of our solution from working to automate mileage to working to increase transparency.

Because we realized this wasn't as big of an opportunity as we initially projected, our team decided to focus on building a solution that would aim to reduce employee hours spent on expense reporting, and expose customer addresses to Experts after the job was completed (in hopes of improving the accuracy of expense reports on first submission).

The Job History Feature

We decided to build a Job History feature. This feature would expose 14 days of Expert jobs, and allow them to tap on the job to copy the address to their device. This would keep Experts from having to enter mileage before the job was completed, allow them access to jobs that were cancelled if they drove to the address, and free them from having to keep up manually with customer addresses (PII!).

User Feedback

"I love the feature. I go back and forth sometimes with Concur if I didn't get a chance to put the information before I start the jobs in the morning.... I love that you can go back the next day and copy and paste to Concur."

"Being able to pull up customer name from the day before is so nice."

"It's helpful that you can go back and get caught up if you don't remember to enter an address."

Impact

Well, we didn't save $2M+ with this feature.

Despite the opportunity projection being wrong, we did end up with some meaningful results.

  • Coaches were able to shave an average of 1.4 hrs weekly off of their expense report checking. That's just over $2k each week, and over $100k annually.

  • This prevented an estimated 156 hours ($93K annually) in retooled expense reports by Experts.

  • All in all, this feature is saving the company nearly $200k/year in employee hours.

  • More importantly, it helped us uncover gaps in our data so that we can make more accurate projections.

  • This work also exposed some ethical issues with how our mileage tracking system works. We're working towards figuring out how to fully automate our mileage tracking for Experts.
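As a quick sanity check on that "nearly $200k" figure, the two savings line items above roll up like so (the figures are from the bullets; everything else is just arithmetic):

```python
# Rolling up the stated annual savings (figures from the bullets above).
coach_review_savings = 100_000   # Coach expense-report review time, annual
expert_rework_savings = 93_000   # avoided rework on Expert reports, annual

total = coach_review_savings + expert_rework_savings
print(f"${total:,} per year")  # -> $193,000, i.e. "nearly $200k"
```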

Conclusion

I chose to write a case study on this project because it was equal parts deflating and encouraging. It felt from the start like some of our partners had made an assumption that there was foul play occurring at a large scale across our Expert population, and having spent a lot (I mean, like, a lot) of time with our users, I just didn't see it. When this problem was initially shared with our team, some of the folks seeing the numbers used phrases like "bad apples" and "gaming the system." Initially, I got some pushback about digging into the problem. One colleague asked why we couldn't "just build the feature and move on."

Being right doesn't always feel good.

My gut told me that we had it wrong, and it turned out my gut was right. That didn't feel great, honestly. It was disappointing that so many folks had assumed the answer was a lack of ethics among our frontline Experts rather than a bad data set.

Ultimately, I would have liked to have designed something more robust for our Experts. A feature that would give them the opportunity to log their mileage more accurately, and fix the issues with our instance of Concur. But, that wasn't something the business had time and resources to invest in, so we chipped away at what we could to make the system a little better for users.

A lesson in empathy.

All in all, I think the biggest win here was exposing our own mistake to the larger team. Digging into this through user interviews, and being able to share in users' own words what issues they were having, caused us not only to re-examine our own data sets, but also to dispel the notion that the mileage we were paying out over projection was all due to bad actors in the group.

©2023 Alex Fortney. All Rights Reserved, etc, etc, and so on. Just please don't steal my things.
