Ratings Matter
Last Edited: Mar 27, 2024
Category: Community
One of the reasons pickleball is so popular is because it’s just so much fun. But one of the key ingredients that makes it so much fun is its competitiveness. Part of the charm and magic of pickleball is the inability to judge anyone before stepping on a court. It doesn’t matter if you are a six-foot-five professional athlete in your prime or a five-foot-two, seventy-seven-year-old lady with a hip replacement. There is no way to assess an opponent or their competency on the court just by giving them an eye test; you need to actually see them play. Being able to articulate what level you are, in a common language, is imperative to finding competitive games and ultimately to the success of the sport.
If you have ever played with friends whose skill levels are all over the place, you know these games aren’t competitive and the experience is not nearly as enjoyable. (I know, there is certainly a place for hitting the ball around with friends just for fun…) Pickleball’s origins are on the courts of parks and neighborhoods all over the country. Open play has propelled this game to heights no one thought possible.
As the game grows and evolves, identifying the players who are most compatible with your level gets even more challenging. Part of the problem is this game looks so freaking easy. Anyone who has been a successful athlete looks at the game and thinks, “I would be good at that.” I was guilty of it too: when I first started, I watched high-level pickleball on YouTube and honestly believed I could go out there and compete with them, before ever hitting my first dink. It’s pickleball’s own version of the Dunning-Kruger effect, and it is precisely why we need a rating system we can trust. Ratings are a form of shorthand, allowing us to understand who plays at a comparable level and facilitating the competitive games we love. The problem is, there are a few of these rating systems, and not all of them are created equal. As a public service, I am going to rate the rating systems.
We have done our research, speaking to all of the ratings companies, some of their data scientists, and their executive teams to get a deep understanding of the ratings business and what is required to make a quality rating. Before rating these systems, it’s important to clarify a few things we learned along the way. Creating an accurate and dynamic rating system is actually quite difficult. Just like we see in other aspects of the sport, there is no shortage of new entrants into the space trying to figure it out. However, in order to have a truly universal rating system, we must have a universal scale on which we are measuring these skills. I will attempt to break down which systems offer the most promise, why some are better than others, and what needs to happen for a robust ratings system to emerge that’s good for the sport. Let’s start with seven elements we have identified that are required to develop an accurate rating system.
Accuracy
A rating system is intended to measure a player's skill level. But painting with too broad a stroke will create an imbalance on the court, so the more granular a system can be, the more effective a service it will provide.
Speed
Being dynamic is crucial in pickleball because of how quickly players can improve, and because the limited data at the beginning of a player’s journey needs to be turned into a reliable rating quickly. Movement up or down must be reflected in as close to real time as possible. Significant lag in data entry, or in changes to one’s rating, is problematic.
Consistency
In order to have accuracy, ideally we would all be on the same system. If we have different systems, with different ways to measure skills and varying numbers representing the level of play, it will cause confusion amongst players. But in a free market, this isn’t a reality, and competition is a good thing. At the very least, if we have different companies running their own ratings, it would be incredibly beneficial to everyone if they agreed on a common language of levels. For instance, currently a 5.0-level player is recognized as an expert player, just under professional, and a 2.5-level player is at the beginning of the spectrum. Ratings should match up with the tournament structure (3.0, 3.5, 4.0, 4.5, 5.0, etc…) for this language and scale to work.
Algorithm
In order to measure our skills accurately, we must first devise a system for doing so. Data scientists must weigh the importance of each result and create formulas that determine whether a person’s rating goes up or down and by how much. Getting this algorithm right is incredibly difficult because pickleball is scored differently than tennis and is primarily a doubles sport. This means borrowing from tennis is ineffective, and accounting for different levels in partners and opponents is complex.
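To make that concrete, here is a minimal sketch of an Elo-style update, the general family of formulas referenced later in this piece. The logistic expected-score curve, the K-factor, and the doubles shortcut of averaging two partners into a single team rating are illustrative assumptions for this example, not any company’s actual algorithm.

```python
def expected_score(rating_a: float, rating_b: float, scale: float = 400) -> float:
    """Probability that side A beats side B under a standard Elo curve."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / scale))

def updated_rating(rating: float, expected: float, actual: float, k: float = 32) -> float:
    """Nudge a rating up or down by how much the result beat (or missed) expectations."""
    return rating + k * (actual - expected)

# Hypothetical doubles simplification: average the two partners into a team rating.
team_a = (1510 + 1490) / 2   # 1500
team_b = (1620 + 1580) / 2   # 1600

exp_a = expected_score(team_a, team_b)          # ~0.36, team A is the underdog
print(updated_rating(team_a, exp_a, actual=1))  # upset win: rating rises to ~1520
print(updated_rating(team_a, exp_a, actual=0))  # expected loss: small drop to ~1489
```

The harder questions, such as how to split a team’s gain or loss between two differently rated partners and how much weight a single rec game should carry, are exactly where the systems discussed below diverge.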
Data
The information fed into the algorithm consists of results and scores, which arrive in the form of data. The data must be accurate, structured properly, consistent, verified, and, most importantly, uploaded to the system in a timely manner. Whether it’s rec games at the park or serious tournament play, each win or loss represents valuable data to inform your rating. The algorithm determines the importance of these scores and weighs them differently to determine your rating.
Connectivity
Critical mass is necessary to provide accurate ratings. A 5.0 in Florida is currently not the same as a 5.0 in Vietnam. The more connectivity there is (players playing outside their bubbles), the smarter the algorithm gets and the more universal and trusted these ratings become. The system is then able to determine the correct rating and apply it evenly regardless of where in the world a player competes.
Reliability
Now that you have the makings of a good rating, how it applies to each individual differs. For example, when a player competes infrequently, or only against players well outside their skill level, it is hard to determine an accurate rating. So having a way to measure the reliability of the rating is just as important as the rating itself.
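One way to picture such a reliability measure: a score that rises with the number of rated matches on record and decays during long stretches of inactivity. The thresholds and half-life below are purely illustrative assumptions, not taken from any existing system.

```python
def reliability(matches_played: int, days_since_last_match: float,
                target_matches: int = 20, half_life_days: float = 90.0) -> float:
    """Hypothetical reliability score in [0, 1]: more rated matches raise it,
    inactivity lets it decay toward zero."""
    sample_size = min(matches_played / target_matches, 1.0)      # enough data yet?
    freshness = 0.5 ** (days_since_last_match / half_life_days)  # recency decay
    return sample_size * freshness

print(round(reliability(5, 10), 2))     # few matches, recent play   -> ~0.23
print(round(reliability(40, 10), 2))    # plenty of matches, recent  -> ~0.93
print(round(reliability(40, 365), 2))   # plenty of matches, stale   -> ~0.06
```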
Rating The Competition
Let’s start with the OG of rating systems, UTPR (USAP Tournament Player Ratings). UTPR is the official rating system of USAP, the governing body of the sport. UTPR might have been adequate for the sport that existed before pickleball’s explosive growth, but as the game evolved, UTPR did not. Its biggest weakness is that it only recognizes scores from USAP-sanctioned events. When UTPR was originally launched, scores from all tournaments were accounted for, but USAP decided to limit what was measured to just USAP-sanctioned tournaments. This might have been fine a decade ago, when there were a limited number of tournaments a year and a handful of players playing, but now there are thousands of tournaments and millions of players. To compound the problem, the vast majority of tournament players compete in tournaments not sanctioned by USAP. Factor in the reality that 95% of pickleball players have never participated in a tournament at all, and UTPR is rendered pretty much useless.
Because of these factors, UTPR scores tend to remain stagnant and are not effective at representing one’s improvement over time. This year’s USAP National Championships in Dallas is a perfect example. There was a ton of complaining about sandbagging, but in reality the rating system was more to blame than any actual sandbagging by players.
I spoke with USAP CEO Mike Nealy about this very matter back in October, and he admittedly answered my question with a non-answer. But one thing he mentioned was that in a perfect world, the governing body would own the rating system. As he conceded, though, ‘the toothpaste is out of the tube’ and the rating system will be competed for in the free market. I reached back out to discuss UTPR with USAP and they sent me the following statement.
“USA Pickleball understands that player ratings are becoming difficult for players to navigate as there are multiple ratings options offered within the sport. Until recently, USAP has been exclusively using the UTPR rating for sanctioned tournaments. UTPR was the first and only elo-based rating for the sport. Multiple ratings options can make it difficult for players to decipher their competitive position.
As the governing body, it is our goal to remain objective and to be as inclusive as possible when working with qualified ratings platforms that will be competing in the space. As a result, while USAP is currently utilizing the UTPR rating, we are working on effective ways to sanction tournaments that are using other ratings. USAP’s goal is to work towards identifying and supporting a globally accepted rating for the sport.”
From the sound of it, the days of UTPR are coming to an end. USAP will certainly have input into which direction it goes when they choose which system to back. But if there is one thing USAP could do to ensure we have the best system in place, it would be to formalize the rating scale. USAP tournaments have set the standard with 3.0, 3.5, 4.0, 4.5, 5.0, and professional levels. We would encourage USAP to recognize this scale as the universal scale, which would allow for healthy competition, with everyone speaking the same language.
UTPR Rating: 2.83
DUPR (Dynamic Universal Pickleball Ratings) is the brainchild of Steve Kuhn. It launched back in 2021 and quickly established itself as the tech-savvy platform that would tick all the boxes laid out above. One of DUPR’s points of difference was that it took each point scored (for and against) into account and calculated a rating all the way to the fourth decimal place. DUPR did an excellent job of marketing their product and there is huge demand in the marketplace. But as many tech startups have realized, software and algorithms are not always easy to implement and deploy.
The DUPR algorithm was a black box of sorts, and players’ ratings would occasionally fluctuate counterintuitively. For instance, a player could win a match against a lower-rated opponent, but because the game was closer than the algorithm had predicted, their rating would go down. This caused some frustration in the marketplace, and the team at DUPR then compounded the issue by overcompensating for the problem. They launched a new algorithm, this time using Elo (or a modified Elo, to be precise). Elo, which was mentioned in the USAP statement above, is used in chess and is a way to rate players in zero-sum games. This pivot meant the system no longer took each point into account, measuring only wins and losses. They also reduced the rating from four decimal places down to two. No one missed the extra decimal places, but the omission of points won and lost weakened the accuracy and removed a key differentiator that had set DUPR apart from the rest of the rating systems.
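For illustration, here is how a points-based update of this kind can produce that counterintuitive drop: if the “actual” result is the share of points won rather than a simple 1 or 0, a narrow win over a much lower-rated opponent can land well below the predicted share and pull the winner’s rating down. The formula and numbers are assumptions for the sake of the example, not DUPR’s original algorithm.

```python
def expected_point_share(rating_a: float, rating_b: float, scale: float = 400) -> float:
    """Predicted share of total points side A should win (same logistic curve as Elo)."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / scale))

def points_based_update(rating: float, expected_share: float, actual_share: float,
                        k: float = 64) -> float:
    """Rating moves by how the actual point share compared to the predicted share."""
    return rating + k * (actual_share - expected_share)

# A 1700-rated player beats a 1400-rated player 11-9.
expected = expected_point_share(1700, 1400)          # ~0.85 of points predicted
actual = 11 / (11 + 9)                               # 0.55 of points actually won
print(points_based_update(1700, expected, actual))   # ~1681: a win, yet the rating falls

# Under a pure win/loss Elo update the same result would nudge the rating up instead,
# which is the trade-off DUPR made when it dropped point margins from the formula.
```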
DUPR is officially the rating system of the PPA Tour, but Pickleball Brackets, the software that runs the majority of tournaments, is no longer working with DUPR to ingest scores automatically. At one point Pickleball Brackets made it impossible for tournament directors to even download scores and enter them into DUPR manually. Without these scores being uploaded, DUPR will have to rely on players to input their own scores, which will bring its own set of problems.
Changes have taken place at DUPR as they reorganize management. Kuhn has stepped aside, and just last week DUPR announced they had received $8M in funding from a strong group of investors including Andre Agassi, providing some much-needed capital. DUPR has been aggressive with a global push into South America, Australia, Asia, and Europe, which gives them a first-mover advantage as the game grows internationally and helps them with connectivity. We are bullish on DUPR’s prospects, but it will require Pickleball Brackets and the PPA to play nicely for DUPR to live up to its potential.
DUPR Rating: 4.53
PBR (Pickleball Brackets Rankings) is the newly created rating system from Pickleball Brackets. The creation of this rating might very well be the reason Pickleball Brackets has created friction around scores being uploaded to DUPR. Pickleball Brackets is part of Pickleball Inc, which consists of the PPA, Pickleball Central, and a few other pickleball-related brands.
If you are familiar with the Tour Wars, you understand that DUPR was a Steve Kuhn product and PBR is now a Tom Dundon product. You can draw your own conclusions, but it feels like the rating systems are being very much affected by the PPA and MLP merger delays.
PBR has all the tournament data from events that run through their software. There are competitors in the marketplace for tournament software, but Pickleball Brackets and Pickleball Tournaments were the two largest players and they were both acquired and rolled up into Tom Dundon’s Pickleball Inc group of companies.
One other thing to keep in mind is that Pickleball Tournaments had created World Pickleball Ratings (WPR), which has since been sunsetted. This was Pickleball Tournaments’ solution to UTPR’s limitations, and it took into account all tournament play, not just USAP-sanctioned events. When Dundon purchased Pickleball Tournaments, the technology and algorithm that powered WPR were part of the deal. But WPR still only addresses tournament play. The ratings for millions of recreational players need to be accounted for, and we would not be surprised if the team at Pickleball Inc is looking to acquire a software solution, if they haven’t done so already.
If PBR is something Pickleball Inc decides to pursue over the partnership with DUPR, this could give oxygen to the multitude of tournament software companies looking to make a name for themselves. The PPA might be the pinnacle of the professional sport, but even as the premier amateur tour, they are only serving a small percentage of the pickleball-playing public. With their deep pockets and huge upside, we believe PBR could be a solid solution moving forward, but there is work to be done.
PBR Rating: 3.17
UTR-P (Universal Tennis Ratings for Pickleball) comes from the leading rating system in tennis, which is now making its entry into pickleball. Having a robust system in place from tennis is certainly helpful, but the sports are too distinct to simply carry the algorithm over. Their relationships with tennis clubs and facilities, however, give them significant reach, and for tennis players migrating over to pickleball, a familiar brand to choose. Even without any real system in place just yet, they have managed to sign a partnership deal with the APP, making them the official rating of that tour. It is worth noting that APP events are all USAP-sanctioned, so seeing an alliance between USAP and UTR-P is not out of the question.
Right now UTR-P is a self-rated system that goes from P1 to P5. But it appears that when their software is ready, they will switch to a rating that runs from 1 to 10, with 1-5 for amateurs and 6-10 for professionals. Choosing to recreate the scale feels like a miss and will cause confusion in the marketplace. That being said, their credentials in tennis make them a strong competitor and we look forward to seeing their entrance into the space.
UTR-P Rating: 3.48
Next Steps
There are some clear lessons to take away. We need a common scale on which we rate pickleball players. Today, a scale of 2.5-8.0 seems to work for most systems and is compatible with the established tournament levels of 3.0-5.0 plus Pro. Keeping these scores consistent would be helpful for onboarding new players and allow for healthy competition amongst these different systems. We hope this is something that USAP, the APP, and the PPA can get behind and agree upon.
One other issue that emerged was the issue of identity. Much like we see on social media, people are able to manipulate their personas, and the same can be done with pickleball ratings. UTPR has the best protection against this because in order to play in a USAP-sanctioned event, you must be a member of USAP, which requires an annual payment. Without a payment mechanism, preventing players from creating multiple accounts or manipulating their data will be difficult, if not impossible.
So, while we did not hear any of these companies discuss plans to charge for their services, it would make sense for them to move towards this model. Even a nominal fee, where a credit card is placed down and one’s true identity confirmed, would prevent the majority of fraudulent activity and improve the accuracy and trust of these systems.
After speaking to so many really smart and capable data scientists, executives, and investors in the space, I am confident we are on our way to a competitive marketplace with robust and capable rating systems working hard to improve the way we measure our skills. I just hope these companies are invested enough in the sport itself to recognize that healthy competition makes both their businesses and the sport of pickleball better.