Measuring performance or value is always tricky. There are no statistics for leadership, or toughness, or being the ultimate teammate.
Yet we have this insatiable drive to rate things. We want to know who the best is, and we want there to be a definitive answer for it.
In baseball, you have WAR, which isn't the definitive answer for value, but comes close to telling you what a player was worth to his team during the season. You wouldn't necessarily vote for the MVP of the league based solely on WAR, but it would give you a good list of candidates to start thinking about.
We have nothing like that in basketball, at least nothing anyone is pleased with. There is PER, there is +/-, there are efficiency numbers, but no solid way of thinking about what a player is worth to his team versus that one guy on everyone's bench whose name doesn't really matter.
Thus began the journey.
I spent a good week thinking about this. I am not sure that what I have developed is truly unique, nor am I sure it is right. I am sure there are flaws, but it seems to work for me.

At its basic level (because the full explanation would bore you*), a player is rated on how many points he saves on the defensive end and how many he contributes to on the offensive side of the ball. That way, things like assists, steals, turnovers and blocks all count. That number is then transformed so that every player's stats are based on the average pace of play in the country. We then subtract what a "replacement" player would contribute (about 160 "points" per 500 minutes of play) and divide by 30. That last step is all about getting to a number equal to wins above a replacement player.
Getting the replacement player stats is part of the craziness involved here. Again, best served for another post with lots of math in it.
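For the math-inclined, the description above can be sketched in a few lines of Python. Fair warning: the only numbers taken from this post are the 160-points-per-500-minutes baseline and the 30-points-per-win divisor. The parameter names and the simple pace ratio are my own illustrative assumptions, not the exact formula.

```python
def hoopwar(points_created, points_saved, minutes, team_pace, national_pace):
    """Sketch of the HOOPWAR idea: offense created plus defense saved,
    pace-adjusted to the national average, minus a replacement-level
    baseline, expressed in wins."""
    raw = points_created + points_saved                # offensive + defensive "points"
    pace_adjusted = raw * (national_pace / team_pace)  # normalize to national pace (assumed simple ratio)
    replacement = (160.0 / 500.0) * minutes            # ~160 "points" per 500 minutes of play
    return (pace_adjusted - replacement) / 30.0        # ~30 "points" per win
```

With made-up inputs, a player who produces 460 total "points" in 500 minutes at the national pace comes out to (460 - 160) / 30 = 10 wins above replacement, which is roughly Davis territory.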
How does it work in practice?
Most people believe that Anthony Davis from Kentucky is the (INSERT AWARD) of the year for college basketball. It would stand to reason that he should have the highest rating (let's call it HOOPWAR) of anyone playing out there.
I ran Thomas Robinson, I ran Draymond Green, I ran Davis.
The winner: Davis. He ends up with 11 HOOPWAR, and it isn't even close. Robinson and Green were both worth about 8 HOOPWAR.
The rest of the country, at least the players on the teams I have put through the formula, are worth less and less. The unfortunate worst player among the 20 or so teams I have calculated is Miami's Will Sullivan, who cost his team three games across his 411 minutes this season. Perhaps this is why Charlie Coles is retiring.
Wednesday the MAC all-conference teams were announced, headlined by conference player of the year Mitchell Watt of Buffalo. The first team:
Mitchell Watt, Buffalo
Javon McCrea, Buffalo
Justin Greene, Kent State
Julian Mavunga, Miami
D.J. Cooper, Ohio
Do these players hold up under the slightly scratched lens of HOOPWAR?
It should be no surprise that the top player according to the statistics was Julian Mavunga, who led the MAC in scoring and rebounding. That it took him about 200 more minutes than everyone else to compile those numbers is something for another time.
As it was, he was worth about 8 wins above a replacement player in his spot. Without him, Miami would have been even more dreadful than they were this season. He was a clear choice for the first team.
But that is when things get a little tricky.
Here are the rankings of the other four players:
D.J. Cooper, Ohio: No. 5, 6.1 HOOPWAR
Mitchell Watt, Buffalo: No. 6, 6.0 HOOPWAR
Javon McCrea, Buffalo: No. 10, 4.8 HOOPWAR
Justin Greene, Kent State: No. 19, 2.7 HOOPWAR
Can I say that the all-conference voters did a bad job? Not necessarily. But the selection of Greene, who may not even have been the most valuable player on his own team, can be questioned.
Both Michel Porrini and Chris Evans scored higher for the Golden Flashes, with Porrini tallying 3.5 and Evans 3.2. I am not sure of the margin of error here, but the fact that two teammates finished above Greene should raise some alarm.
Maybe it was Greene's defense that stood out to the voters. He did finish with about 15 more points prevented than either of his teammates, but spread over a 30-game season that is only about half a point per game.
Is anyone really watching closely enough to recognize that half-point difference per game? Doubtful.
The three players who finished between Mavunga at No. 1 and Cooper and Watt at Nos. 5 and 6 were Central Michigan's Trey Zeigler (6.9 HOOPWAR), Toledo's Rian Pearson (6.9 HOOPWAR) and Ball State's Jarrod Jones (6.6 HOOPWAR).
All three helped their team avoid the fate of Northern Illinois -- the basement of the MAC (the Huskies were led by freshman Abdel Nader with 1.3 HOOPWAR). I would like to think that the performance of the team overall was what held these players back from the first team. Jones and Pearson were second-team selections. Zeigler was on the third team.
As I said at the start, HOOPWAR might not be perfect, but it is a pretty good first stab at understanding a player's value relative to others around the country. So far, I think it jibes with the eye test.
At worst, it is a great way to start the conversation about who should be considered the best player on a team, in a league, or in the country. And with that, I open the floor for questions.
*Link coming soon for the whole formula broken down. Don't worry math geeks, I wouldn't leave you hanging.