The PairWise Rankings (PWR) are college hockey's answer to the BCS. I know, I know, with an endorsement like that, what could possibly go wrong? While most of the sports world will focus their attention on the decisions of a select few men in a smoke-filled room in Indianapolis when it comes to selecting the final at-large spot in the NCAA basketball tournament, the NCAA hockey tournament committee takes all the surprise out of selection day by relying solely on a mathematical system to determine who will fill out the field of 16 that will compete for the NCAA hockey championship.
How Does It Work
We'll start with the very basics of how the system works, since that's probably what the unfamiliar are most interested in, and work our way down to bitching about it later on.
The first step in this process is to calculate the Ratings Percentage Index (RPI) of every team in college hockey. College basketball fans are probably familiar with the RPI. It takes a team's winning percentage, their opponents' winning percentage, and their opponents' opponents' winning percentage and weights them together into a single number. The only difference between the two sports is the weighting of each category. College basketball still uses 25%-50%-25%, while college hockey weights its RPI 25%-21%-54%. I'm not sure why. I guess they felt it gave a more accurate result, but the numbers seem to change every year. College hockey's RPI also factors out the occasional game against a really bad team where winning actually hurts a team's RPI.
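For the programmatically inclined, here's a rough sketch of what that weighted formula looks like. The weights are just the ones quoted above, the input percentages are made up for illustration, and this isn't the NCAA's actual implementation.

# A minimal sketch of a weighted RPI, using the hockey weights quoted above.
# The winning percentages fed in below are invented for the example.

def rpi(win_pct, opp_win_pct, opp_opp_win_pct, weights=(0.25, 0.21, 0.54)):
    """Combine the three winning percentages into a single RPI number."""
    w1, w2, w3 = weights
    return w1 * win_pct + w2 * opp_win_pct + w3 * opp_opp_win_pct

# Basketball-style weighting vs. hockey-style weighting, same hypothetical team:
print(rpi(0.700, 0.550, 0.520, weights=(0.25, 0.50, 0.25)))  # ~0.580
print(rpi(0.700, 0.550, 0.520))                              # ~0.571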
Once every team is ranked 1 through 58 based on RPI, the top 25 teams are separated out into what are known as Teams Under Consideration (or TUCs). Each TUC is compared against every other TUC on the basis of four categories. The team that wins the most comparisons against other TUCs is ranked number one, and so on down the line.
The four categories used in each individual comparison are RPI (worth one point), record against TUCs (worth one point), record against common opponents (worth one point), and head-to-head record (each win is worth one point). If there is a tie, the team with the higher RPI wins the comparison.
For example, let's look at the comparison between Boston University (currently #1 in the RPI) and Notre Dame (currently #2 in the RPI):
RPI: BU .5976, Notre Dame .5819 -- one point for BU
Record vs. TUCs: BU 14-3-3, Notre Dame 6-5-0 -- one point for BU
Record vs. Common Opponents: BU 7-1-1, Notre Dame 5-2-0 -- one point for BU
Head-to-Head: BU and Notre Dame didn't play each other, so nobody gets a point
BU wins the comparison over Notre Dame 3-0.
As it stands now, BU would win the comparison against all 24 other teams under consideration, and thus, they are ranked number one with 24 comparison wins. Notre Dame is second with 23 comparison wins. Michigan is third with 22, and so on down the line.
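To make the mechanics concrete, here's a minimal sketch of a single comparison. The BU and Notre Dame numbers come from the example above (with ties counted as half a win when converting the records to percentages); the Team structure and field names are just something I'm using for illustration, not anything official.

# A minimal sketch of one PairWise comparison using the four categories above.

from dataclasses import dataclass

@dataclass
class Team:
    name: str
    rpi: float
    tuc_win_pct: float          # winning percentage vs. Teams Under Consideration
    common_opp_win_pct: float   # winning percentage vs. common opponents
    h2h_wins: int = 0           # head-to-head wins against the other team

def compare(a: Team, b: Team) -> Team:
    """Return the team that wins the comparison (higher RPI breaks a tie)."""
    a_pts = b_pts = 0
    # One point each for RPI, TUC record, and common-opponent record.
    for field in ("rpi", "tuc_win_pct", "common_opp_win_pct"):
        if getattr(a, field) > getattr(b, field):
            a_pts += 1
        elif getattr(b, field) > getattr(a, field):
            b_pts += 1
    # Each head-to-head win is worth one point.
    a_pts += a.h2h_wins
    b_pts += b.h2h_wins
    if a_pts == b_pts:
        return a if a.rpi > b.rpi else b
    return a if a_pts > b_pts else b

bu = Team("Boston University", 0.5976, tuc_win_pct=0.775, common_opp_win_pct=0.833)
nd = Team("Notre Dame", 0.5819, tuc_win_pct=0.545, common_opp_win_pct=0.714)
print(compare(bu, nd).name)   # Boston University, winning the comparison 3-0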
Why This System?
A long time ago, college hockey used to go with the "smoke-filled room" style approach like the NCAA basketball tournament. And just like with the basketball tournament, seemingly every year there was the occasional controversial decision between two teams on the bubble. College hockey is a smaller community than college basketball, and those controversies had a more profound effect.
The tournament committee began going with a more mathematical approach in the '90s. They didn't use this exact system, but they used some sort of rough grid that compared teams at or near the bubble with other teams near the bubble. Eventually a group of people got together, analyzed the committee's decisions, and created the PWR system to mimic the tournament committee's selection process. (This is what happens when schools like Cornell and Harvard are major players in your sport.) The PWR system correctly predicted every at-large berth for a number of years, and fans loved being able to see exactly where things stood with the NCAA tournament without having to worry about any surprises on selection day.
Eventually, the PWR was codified in the rule book as the only thing the selection committee is allowed to look at when determining the field.
So Why Does It Suck?
And now for the bitching. It takes a whole lot of numbers to come up with the 10 at-large bids for the NCAA tournament. The layperson looks at the PWR and thinks, "Boy, that's a lot of numbers; it must be pretty accurate." The problem is that most people well-versed in math will tell you that it sucks and, in some cases, doesn't make a lot of sense. Personally, I have two main problems with the system.
1. The TUC Cliff
The PWR draws an arbitrary line at the top 25 teams in the RPI. Anyone on the right side of it is anointed a "good team," anyone on the other side a "bad team." The thing is, there are only 58 teams in college hockey, meaning the top 25 encompasses 43% of all teams.
In the TUC category of a comparison, a win against the #1 team counts the same as a win against the #25 team, even though there is a huge gap between the two.
This can create some wild fluctuations in the calculations on a week-to-week or even game-to-game basis. If a team sweeps a season series and goes 4-0-0 against a team near the TUC cliff, having those four games on their record could be enough to flip a lot of comparisons, depending on which side of the cliff that team ends up on. Proponents of the system argue there is no cliff because it is designed to be looked at only once, at the end of the season, and thus there are no fluctuations. But regardless, teams still gain a disproportionate benefit if a team they beat ends up 25th rather than 26th, as the quick sketch below shows.
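Here's a toy illustration of how one opponent slipping from 25th to 26th can gut another team's TUC record. All of the results below are invented.

# A toy illustration of the TUC cliff: the same results produce a very
# different TUC record depending on whether one opponent is ranked 25th or 26th.

def tuc_record(results, cutoff=25):
    """results: list of (opponent_rpi_rank, outcome) with outcome in {'W','L','T'}."""
    w = l = t = 0
    for rank, outcome in results:
        if rank <= cutoff:          # only games vs. Teams Under Consideration count
            w += outcome == "W"
            l += outcome == "L"
            t += outcome == "T"
    return w, l, t

# Four wins against a team sitting right at the cutoff...
season = [(10, "L"), (18, "W"), (25, "W"), (25, "W"), (25, "W"), (25, "W")]
print(tuc_record(season))                              # (5, 1, 0) if that team is 25th
season_bumped = [(r if r != 25 else 26, o) for r, o in season]
print(tuc_record(season_bumped))                       # (1, 1, 0) if it slips to 26th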
One of the most interesting TUC cliff examples came last season. Notre Dame was on the outside looking in at the NCAA tournament late in the season until Northern Michigan won a couple of games and moved into the top 25. Adding Notre Dame's games against Northern Michigan to the calculation was enough to push Notre Dame back onto the right side of the NCAA tournament bubble.
It just so happened that Notre Dame met Northern Michigan in the third-place game of the CCHA tournament. Northern Michigan needed to win the game in order to remain a TUC; a loss or tie would have knocked them out of the top 25. But a Notre Dame win would have been enough to get Notre Dame into the tournament regardless of Northern Michigan's status. So the situation was simple: either a win or a loss got Notre Dame into the tournament, but a tie would knock them out. Northern Michigan tied the game at one apiece in the third period, then took the lead two minutes later.
To their credit, Notre Dame gave an honest effort to tie and hopefully win the game, and a potential game-tying shot rang off the post late in the third period, but they ended up losing their way into the tournament, where they went on a nice run to the NCAA championship. It brought up interesting hypothetical questions, however, about whether it would be worth shoveling a puck into your own net in overtime if it meant securing an NCAA bid.
2. Outliers
The reason the NCAA doesn't use the RPI alone is that they don't really believe it's a perfect measure of a team's season. So they created a system that leans heavily on the RPI but also modifies it a bit in certain instances.
The problem is that sometimes those unweighted modifiers produce some goofy, or just plain illogical, results that completely override a season's worth of work measured by the RPI rather than simply modifying it a little bit.
Take this year, for example. Currently, Vermont has the 5th best RPI in the country at .5606, while Minnesota has the 16th best RPI in the country at .5308. That's a pretty substantial gap, and you'd be hard-pressed to find any person that would say Minnesota has been better than Vermont this season. But the computer says Minnesota wins the comparison against Vermont.
Vermont has the huge advantage in RPI, so they get that point. But Minnesota has a one-game advantage in TUC record -- Vermont is .500, Minnesota is one game over .500 -- so they get that point (although Minnesota has a ton of games played against teams right on the TUC cliff, so that could fluctuate a lot). And Minnesota has the advantage in common opponents thanks to their 2-0-1 record against the opponents they share with Vermont. (Vermont played six common-opponent games, and most people would tell you it's much tougher to go 4-0-2 against a set of teams than it is to go 2-0-1.) They never played head-to-head, so there's no point there. So basically, a huge advantage over the course of a 34-game season gets completely wiped out by about four games.
What's a Better Solution?
A lot of mathy people endorse the KRACH rating system. Some of you might know it better as the "Bradley-Terry" system, which gets brought out a lot when debating the merits of the BCS' selections.
The math may be better, but the results it spits out don't win over many fans. Because of college hockey's insular schedule and its high proportion of league games to non-conference games, the teams in the strongest conference seem to get a huge boost in the KRACH system. There have been instances where teams with a losing record have been in line for an at-large NCAA tournament bid under KRACH, which is something that wouldn't go over well. (The NCAA has banned teams with losing records from getting at-large tourney bids in hockey, though Wisconsin got in last year with a losing record before that rule took effect.)
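For the curious, KRACH boils down to a Bradley-Terry style fixed-point calculation. Here's a bare-bones sketch with an invented schedule; ties are ignored and the normalization to 100 is just a convenience, so don't take the details as KRACH's or the NCAA's exact implementation.

# A bare-bones sketch of a Bradley-Terry style fixed-point iteration, which is
# the idea KRACH is built on. The schedule below is made up, and ties are ignored.

import math
from collections import defaultdict

games = [
    # (winner, loser) -- a tiny invented schedule
    ("BU", "Vermont"), ("Vermont", "Northeastern"), ("Northeastern", "Michigan"),
    ("Michigan", "Notre Dame"), ("Notre Dame", "BU"),
    ("BU", "Michigan"), ("Vermont", "Notre Dame"),
]

teams = {t for g in games for t in g}
wins = defaultdict(float)
pair_games = defaultdict(float)     # games played between each pair of teams
for w, l in games:
    wins[w] += 1.0
    pair_games[frozenset((w, l))] += 1.0

ratings = {t: 1.0 for t in teams}
for _ in range(500):                # iterate the fixed point until it settles
    new = {}
    for t in teams:
        denom = sum(pair_games[frozenset((t, o))] / (ratings[t] + ratings[o])
                    for o in teams if o != t)
        new[t] = wins[t] / denom
    # the scale is arbitrary, so pin the geometric-mean rating at 100
    gm = math.exp(sum(math.log(r) for r in new.values()) / len(new))
    ratings = {t: 100.0 * r / gm for t, r in new.items()}

for t, r in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{t:14s} {r:8.2f}")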
Overall, I think they've got the right idea by starting with the RPI and trying to modify it. The issue is finding a more accurate way to modify the RPI numbers. The TUC and Common Opponent categories need to have some way to weight for number of games played and the strength of the opponents.
And if there has to be a cliff, it should be in a more logical spot. NCAA tournament fates shouldn't be decided on whether a team finishes in the 43rd or 44th percentile of college hockey. They should go back to the old way where the teams under consideration were actually teams under consideration for an at-large spot.
This all seems pretty complicated, perhaps overly so, but there's also no denying its importance. The NCAA hockey tournament isn't like the NCAA basketball tournament, where the odds are heavily stacked against a double-digit seed. With just 16 teams in a single-elimination format, every game is pretty much a coin flip determined by which team gets better goaltending and a lucky bounce or two. Each team in the tournament has a legitimate chance to win it all, so getting in is huge.
For further reference, College Hockey News does a nice job of explaining the PWR.