How do we measure "Ramped-Upedness"?

"To Onboard or Not To Onboard"
A sales leader once told me that "onboarding was a waste of time." He believed we needed to push sellers into their territories and customer conversations as soon as possible, and that the best "sales teacher" was "experience." As the person responsible for new hire training, I argued vehemently that without my bootcamp training on our solutions and value propositions, our new hires couldn't possibly know how to position our solutions effectively. This wasn't just your "standing-by-the-water-cooler" type of conversation. I felt my very professional existence was at stake.
So I set out to prove that I was right, and the only way I could think of to do that was to measure the premise with data. But when I really thought about it, I realized I didn't know what the right metric was.
{Since that time, and having learned what I have, I think it's important to tell you right up front that BOTH the manager and I were missing something important...}
How long does it take a new seller to be “ramped”?
If you've read any blogs on "accelerating new hire performance", you've seen lots of different metrics used to decide when a seller is "officially ramped". But which one (or ones) is best?
Some metrics focus on the training process itself. Metrics like: Training Attendance, Test Scores, or even Role Play Certifications. Unfortunately, while these can measure a person's product knowledge or presentation skill, closing real-world deals is significantly more difficult and complex than these training activities can represent. And I'm not sure that anyone has ever proven that high assessment scores are an accurate predictor of a seller's future sales potential.
Other metrics look at seller performance from a time perspective like: Time to First Deal, Time to First Quota Attainment (usually monthly or quarterly), or even Time to Annual Quota Attainment. While these are performance-based, they “stop” once the seller has achieved the benchmark. So, if we look at a class of 25 new sellers, and their Time to Average Quarterly Quota Attainment is dispersed across 3 or 4 or 6 quarters, which one is the “right one”? Or, is it an average across them?
And are any of these metrics actually useful in deriving a correlation with future sales success? In my opinion, they aren't. Or, at least, I haven't been able to calculate such a correlation.
Time to Average Performance (TTAP)
After thinking about this for a long time, I have settled on one metric: Time to Average Performance. In other words: how long it takes a new salesperson to achieve the same level of performance as everyone else.
Having worked with clients on generating this metric, however, we've found that it's not as easy as it sounds. This single metric is actually a combination of metrics, including average revenue across sellers and average productivity by tenure. And only after massaging and cleaning the data have we achieved a "true" picture of the development time of new salespeople.
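For the analytically inclined, here is a minimal sketch of that calculation, assuming a hypothetical dataset ("deals.csv") with one row of revenue per seller per quarter. The file name and column names are illustrative only, not our actual dashboard implementation:

```python
import pandas as pd

# Hypothetical input: one row per seller per quarter with the revenue they booked.
# Columns assumed: seller_id, hire_date, quarter, revenue
df = pd.read_csv("deals.csv", parse_dates=["hire_date", "quarter"])

# Tenure (in whole quarters) at the time each quarter's revenue was booked.
df["tenure_qtrs"] = (
    (df["quarter"].dt.year - df["hire_date"].dt.year) * 4
    + (df["quarter"].dt.quarter - df["hire_date"].dt.quarter)
)

# "Average productivity by tenure": mean quarterly revenue for each tenure bucket.
by_tenure = df.groupby("tenure_qtrs")["revenue"].mean().sort_index()

# "Average revenue across sellers": mean quarterly revenue, regardless of tenure.
overall_avg = df["revenue"].mean()

# Time to Average Performance: the first tenure bucket where the new-hire
# average meets or exceeds the overall average.
ramped = by_tenure[by_tenure >= overall_avg]
ttap_quarters = ramped.index.min() if not ramped.empty else None
print(f"TTAP: {ttap_quarters} quarters")

# And how far off each class is: each tenure bucket as a % of the overall average.
print((by_tenure / overall_avg * 100).round(0))
```

In practice, the "massaging and cleaning" is the hard part: territory changes, partial quarters, and leavers all have to be handled before a grouping like this tells the truth.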
In spite of the data-cleansing challenges, I believe there is tremendous value in this metric, because it peels away any illusion of "quota achievement" or other non-performance-related viewpoints on a seller's development. Sellers are either at the same level of revenue performance as everyone else or they aren't. And when they aren't, we can see how far off they are.
There are a number of reasons why I believe this is the best metric to determine how long it takes a new seller to “Ramp Up”:
“Class-based” – This analytic looks across “classes” of new hires versus individual sellers. This smooths out the data and limits the influence of extreme performances.
“Quota-less” – This metric takes quota out of the equation. By only looking at revenue performance, it is grounded in a more universal and factual foundation for analysis.
Consistent across currencies – Because the metric is really looking at the "relationship" between the average of all sellers and the performance of a class of sellers, the currency is actually irrelevant. This means that the development of new hires in Japan, Germany, Brazil, and the United States can all be compared with confidence.
"Universal Responsibility" – In general, in sales organizations with complex B-2-B sales cycles, the development of new sellers typically takes longer than a year and continues well past the point where onboarding stops. What this means is that this metric reveals the performance of the entire team responsible for a new seller's development (from training through management). And, in my experience, it typically highlights the "lack of structured development" available to new hires after their initial 90-day onboarding process is complete.
Lionboard Analytics
It should be no surprise that the metric of Time to Average Performance is one of the primary analyses available within our Lionboard Dashboard.
The below image is a snapshot of what this analysis can look like for a company with a complex sales process.

This chart shows the average performance of sellers based on how long they have been with the company. And with this image, we can clearly see that, on average, it takes 3.5 years before a new seller achieves average performance.
Before you scoff at this chart and think to yourself that this is WAY too long, let me tell you that this level of development is comparable across each of my clients. {To be clear, I believe this timeframe is reasonable for B-2-B sales organizations where the average sales cycle is more than 4 months, numerous decision makers are involved, political navigation is required, and established competitors need to be overcome.}
So I believe the question you should be asking isn't "Does it really take this long?" The question you should be asking is: "Are 37%, 66%, and 84% the right levels of performance for my newest sellers in their first three years?"
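To make those percentages concrete, here is a quick back-of-the-envelope calculation. Only the 37% / 66% / 84% ramp figures come from the chart above; the average annual revenue per fully ramped seller and the class size are hypothetical assumptions, purely for illustration:

```python
# Hypothetical assumptions, chosen only to make the math concrete.
avg_annual_revenue = 1_000_000   # assumed revenue for a fully ramped seller
class_size = 10                  # assumed new-hire class size
ramp_curve = [0.37, 0.66, 0.84]  # years 1-3 performance vs. average (from the chart)

# Revenue left on the table while the class ramps toward average performance.
unrealized = sum((1.0 - pct) * avg_annual_revenue * class_size for pct in ramp_curve)
print(f"Unrealized revenue over the first three years: ${unrealized:,.0f}")
# -> $11,300,000 for this hypothetical class of 10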
What is the right pace of performance for new sellers?
This is the critical question! Because accelerating new hire performance doesn't happen in the first 90 days or even 180 days. {And this is where the manager and I were both wrong…} It happens over years.
And your entire organization has to be honest with itself: on a scale of "perfect" to "poor", how well does it support the development of your new sellers AFTER the onboarding process is completed? Don't worry, I'm pretty sure we both know the answer to that already, and it rhymes with "sore".
This TTAP metric (and our dashboard) delivers a much better understanding of your new sellers' reality. And, with this insight, you should be able to do two things:
Develop a strategy that extends beyond onboarding that will have a much more meaningful impact on seller performance and their revenue.
Manage your new sellers based on a more realistic expectation of performance.
Conclusion
With our clients, we continue to uncover that sales enablement has a significantly larger influence on seller performance and company revenue than we could have imagined. But it's all "unrealized revenue" until we develop strategies and programs designed to specifically target and close enablement gaps.
The "real" development time that a new hire requires to reach "average" is one of those gaps, and without a strategy targeting that specific development effort, we are leaving millions of unrealized revenue in each of our new seller's territories.