Overview
The Rate Accuracy Report helps you track your machine learning model’s performance. The accuracy of Rates predictions will always show some variation, so the goal is to keep within a certain margin of error. You can find this margin of error expressed as a percentage on the Rate Accuracy Report, so you can gauge the AI's accuracy for yourself.
The stats provided show different slices of MAPE and Confidence. Each time we train a model for you, we take a sample of your past two weeks of data (or more in certain cases) and withhold it from the model's training. This dataset is called the test set. The model then runs predictions on the test set, which we compare to the rates on successful bookings to see how well the model is performing.
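To make that process concrete, here is a minimal sketch of the holdout split described above; the record layout, dates, and two-week window are illustrative assumptions, not the product's actual schema.

```python
from datetime import date, timedelta

# Illustrative load records: (booking_date, booked_rate).
# The field layout is an assumption for this sketch only.
loads = [
    (date(2024, 7, 1), 1500.0),
    (date(2024, 7, 20), 1050.0),
    (date(2024, 7, 22), 950.0),
]

training_date = date(2024, 7, 23)
cutoff = training_date - timedelta(weeks=2)

# Loads from the past two weeks are withheld from training: the "test set".
training_set = [load for load in loads if load[0] < cutoff]
test_set = [load for load in loads if load[0] >= cutoff]

# The model trains on training_set only, then predicts rates for test_set;
# those predictions are compared against the booked rates to score accuracy.
```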
Current and History Tabs
The Current tab shows the stats for your current model so you can measure performance from the most recent AI model training.
The History tab includes stats from the past five model trainings for the sake of context. Model trainings usually happen weekly or biweekly, but may happen more often in some cases, as we're always looking for ways to increase prediction accuracy.
MAPE by Confidence
This chart shows you the Mean Absolute Percentage Error (MAPE) across all of your loads. It divides your loads by Confidence Level (Very High, High, Medium, and Low). We like to see the overall number below 10%, but this depends on how your company books freight and what kind of freight you book. If your pricing tends to have more variability, the MAPE will be higher:
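As a rough illustration of how a MAPE-by-Confidence breakdown can be computed, here is a sketch; the record layout and the sample values are assumptions for the example, not real report data.

```python
from collections import defaultdict

# Each test-set record: (confidence_level, predicted_rate, booked_rate).
# The values below are made up for illustration.
records = [
    ("Very High", 1000.0, 1020.0),
    ("High", 1500.0, 1400.0),
    ("Medium", 900.0, 980.0),
    ("Low", 700.0, 850.0),
]

def mape(pairs):
    """Mean Absolute Percentage Error, measured against the booked rate."""
    return 100 * sum(abs(p - b) / b for p, b in pairs) / len(pairs)

by_level = defaultdict(list)
for level, predicted, booked in records:
    by_level[level].append((predicted, booked))

print(f"Overall MAPE: {mape([(p, b) for _, p, b in records]):.1f}%")
for level, pairs in by_level.items():
    print(f"{level} MAPE: {mape(pairs):.1f}%")
```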
MAPE by Lane
This chart divides your loads into New Lanes and All Lanes, so you can see how the model is performing on new lanes compared to its performance overall. New lanes often have a higher error percentage, so showing them side-by-side gives you a little more context:
Confidence by Request
This chart breaks down the requests your users have run since the previous training by Confidence Level (Very High, High, Medium, or Low) so you can see the general trend of prediction Confidence:
Confidence by Load
In this chart, you can see the distribution of test set load predictions across Confidence Levels. The goal here is to have >50% of predictions in the Very High and High categories:
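If you want to check that rule of thumb against your own export of predictions, a quick sketch (the list of labels below is hypothetical) could look like this:

```python
# Hypothetical confidence labels for a batch of test-set predictions.
confidences = ["Very High", "High", "Medium", "High", "Low", "Very High"]

high_share = sum(c in ("Very High", "High") for c in confidences) / len(confidences)
print(f"Very High + High share: {high_share:.0%}")  # goal: above 50%
```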
Predicted vs. Booked and Live Model Performance Tabs
These two sets of metrics are the most effective way to analyze the performance of a prediction model in real time. They show you how our predictions stack up against actual booking rates. Paired with your outlier data and proprietary pricing, they give you a dependable reference point for navigating market pricing.
One quick way to put this data to work is to take the average percentage difference and apply the opposite percentage to your sell price, so that you make up the margin.
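For example, assuming the report shows an average difference of -3% (predictions running a little low against booked rates), you could offset a sell price like this; the figures are purely illustrative:

```python
avg_diff_pct = -3.0          # average % difference from the report (illustrative)
quoted_sell_price = 1200.0   # your current sell price (illustrative)

# Apply the opposite of the average difference to make up the margin.
adjusted_price = quoted_sell_price * (1 - avg_diff_pct / 100)
print(f"Adjusted sell price: ${adjusted_price:.2f}")  # $1236.00
```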
Predicted vs. Booked
This tab shows your predictions matched against loads booked through the product UI or API. It consolidates successful matches and provides aggregated data on prediction accuracy:
For example, if you receive a prediction of $1000 and actually pay $1050 for the load, the Predicted vs. Booked report will show -5%. This indicates that either the model has underpredicted, or that the booking rate was affected by other factors. Now, let's say you manage to book the same load for $950. The report will then show a 5% difference, meaning that either the model has overpredicted or you just got an amazing deal.
Generally speaking, anywhere between -5% and 5% is a good zone to be in. If we're predicting a difference of more than +/-10% on aggregate, our team will take a much deeper look into what's going on with the model and with your data.
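The calculation behind those numbers can be sketched in a few lines; this assumes the difference is expressed relative to the predicted rate, which matches the worked example above.

```python
def diff_pct(predicted, booked):
    """Signed % difference between a predicted and a booked rate.
    Assumes the difference is taken relative to the predicted rate,
    matching the worked example above."""
    return 100 * (predicted - booked) / predicted

print(diff_pct(1000, 1050))  # -5.0 -> underprediction, or the rate was pushed up
print(diff_pct(1000, 950))   #  5.0 -> overprediction, or you got a great deal
```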
Live Model Performance
This tab is very similar to Predicted vs. Booked, but includes a much wider data range. Instead of only making predictions on loads you request explicitly through the UI or API, our software runs predictions on every load you've uploaded since the model was last trained. This gives us a much larger sample, and a better picture of the model's performance than the Predicted vs. Booked reports:
At this time, we're following the same general rules as Predicted vs. Booked (<+/-5% is solid, >+/-10% is concerning), but we expect this methodology to reflect more reliable statistics, especially as we continue to calibrate towards higher accuracy across the entire network.
Diff % Average
This chart shows you the average difference between predicted and booked rates by percentage. For instance, if you had two rate predictions in your historical data, where one was 5% higher than the booked rate and the other was 10% lower, the average would be -2.5%, because the average of 5 and -10 is -2.5. This helps you keep track of the general margin of error in real time. In this case, it would tell you that the model was predicting a little low overall.
Clicking Use Absolute Values will show you the absolute value of the average difference. This number doesn't take into account whether the prediction is high or low, only how far it is from the booked rate. For instance, if you looked at the two rates above (5% high, 10% low) using absolute values, the difference would be the average of 10% and 5%, or 7.5%. This shows you how far the model is from booked rates overall:
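Both averages from that example can be reproduced with a couple of lines; the two values below are just the 5%-high and 10%-low predictions described above.

```python
# Signed % differences for the two example predictions:
# one 5% higher than the booked rate, one 10% lower.
diffs = [5.0, -10.0]

signed_avg = sum(diffs) / len(diffs)                     # -2.5 -> predicting a little low
absolute_avg = sum(abs(d) for d in diffs) / len(diffs)   #  7.5 -> typical distance from booked

print(signed_avg, absolute_avg)
```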
Diff $ Average
This chart is just like the Diff % Average chart, but it shows you the average difference between predicted and booked rates by dollar amount instead of percentage:
Diff % Distribution
This chart shows you how the AI's margin of error is distributed by percentage, giving you an even more precise view of the AI's accuracy. Loads are distributed along a vertical line according to the difference between the predicted rate and the rate at which they were booked:
You can hide extreme outliers by clicking Hide Extreme Outliers at the top right of the graph. (Generally, these represent loads where a lot of unusual conditions affected the price, so including them can be misleading.)
A bigger and darker green dot represents a larger number of loads. For instance, on the chart below, hovering over the dot at the 0% line at week 31 shows us that 507 loads were booked that week at the exact rate predicted:
Hovering over the dot at 10% shows us that the AI predicted a rate of 10% higher than the booked rate for 104 loads that week:
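If you want to reproduce that kind of distribution from your own data, a rough sketch of the binning (the bucket width and sample differences below are illustrative) could look like this:

```python
from collections import Counter

# Illustrative signed % differences for one week of loads.
diffs = [0.0, 0.0, 10.0, -5.0, 0.0, 10.0, 25.0]

# Bucket each load to the nearest 5% so the counts can be plotted as dots.
buckets = Counter(round(d / 5) * 5 for d in diffs)

for bucket, count in sorted(buckets.items()):
    print(f"{bucket:+.0f}%: {count} load(s)")
```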
Diff $ Distribution
This chart works just like the Diff % Distribution chart, but it shows you how the AI's margin of error is distributed by dollar amount rather than percentage:
Improvements to Come
We're exploring ways to break down metrics, such as by transport type or region, to provide you with more data to support your decision-making process. Stay tuned.