R-P-I. It has become an acronym as familiar as NCAA or ESPN to die-hard college basketball fans. Casual fans look at the RPI and wonder how a team like Cincinnati (16-9, unranked in the polls, #23 in the RPI) can be rated ahead of a team like George Washington (20-1, #7 AP and #8 Coaches' Poll, #35 in the RPI). With apologies to The Grinch and Dr. Seuss, one could puzzle and puzz 'til their puzzler was sore and never figure out how any rating could say that the Bearcats are a better team than the Colonials. And therein lies the error.
The RPI is not a ranking of teams based on who is better. In fact, it is not a ranking at all. It is a "rating." Webster's defines a 'rating' as "a relative estimate or evaluation." It defines 'ranking' as "determining the relative position of." This may seem like splitting hairs, but for our purposes, the difference is pretty clear: the polls are a ranking of the best teams as perceived by coaches or sports writers. The Rating Percentage Index is a statistical measurement of how a team has performed against its given schedule. The RPI is not a tool to place the best teams in order. If that were the case, no team would be skipped over when selecting teams for the NCAA Tournament. If a team has a difficult schedule, they have a better shot to be highly rated in the RPI even if they lose some games (Arizona, anyone?). If they have an easy schedule, it is more difficult to gain a high rating (ask GWU).
How does it work?
The formula consists of three components: a team's winning percentage, their opponents' winning percentage (OWP), and their opponents' opponents' winning percentage (OOWP). Here is where it gets a little confusing. A team's winning percentage is not simply wins divided by total games. As of last season, home and road wins are weighted differently. When a team wins a game at home, they get credit for 0.6 wins. If they win on the road, they get credit for 1.4 wins. Conversely, if they lose at home, 1.4 goes into the loss column, and if they lose on the road, 0.6 goes into the loss column. Games played at neutral sites are simply 1.0 both ways. This adjustment only applies to the team's winning percentage portion of their RPI. The OWP and OOWP are straight win/loss records. If you do not like numbers and formulas, skip on down to the next section. If you are a huge geek like me, read on.
For an example, let us use UNC-Wilmington. They have 11 home wins (11 x 0.6 = 6.6), two neutral wins (2), and seven road wins (7 x 1.4 = 9.8). Therefore, their total wins in the RPI read as 18.4 (6.6 + 2 + 9.8 = 18.4) even though they have won 20 games total. The Seahawks have lost seven games this season, but only one was at home (1.4 value) and six were on the road (6 x 0.6 = 3.6). So, for RPI purposes, their losses read as 5 (1.4 + 3.6 = 5), making their RPI record 18.4 and 5 for a winning percentage of .7863. The average of the Seahawks' opponents' winning percentages is .5330. The average of their opponents' opponents' winning percentages is .5129.
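The weighted bookkeeping above can be sketched in a few lines of Python. This is my own illustration of the weighting rules described in this article, not any official NCAA code; the function name and data layout are made up for the example.

```python
# Sketch of the RPI's weighted win/loss bookkeeping (my own illustration).
# Home wins count 0.6, road wins 1.4, neutral wins 1.0; the loss weights
# mirror them: home losses 1.4, road losses 0.6, neutral losses 1.0.

WIN_WEIGHT = {"home": 0.6, "road": 1.4, "neutral": 1.0}
LOSS_WEIGHT = {"home": 1.4, "road": 0.6, "neutral": 1.0}

def weighted_record(results):
    """results is a list of (site, won) tuples, e.g. ("home", True)."""
    wins = sum(WIN_WEIGHT[site] for site, won in results if won)
    losses = sum(LOSS_WEIGHT[site] for site, won in results if not won)
    return wins, losses

# UNC-Wilmington's season so far: 11 home wins, 2 neutral wins,
# 7 road wins, 1 home loss, 6 road losses.
uncw = ([("home", True)] * 11 + [("neutral", True)] * 2
        + [("road", True)] * 7
        + [("home", False)] * 1 + [("road", False)] * 6)

wins, losses = weighted_record(uncw)
win_pct = wins / (wins + losses)
print(round(wins, 1), round(losses, 1), round(win_pct, 4))
# -> 18.4 5.0 0.7863
```

Note how the 20-7 actual record comes out as 18.4 wins and 5 losses once the site weights are applied.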
Once the three percentages have been determined, the team winning percentage is worth 25% of the RPI, the OWP is worth 50% and the OOWP is worth 25%. So, UNCW's RPI is calculated this way.
Winning %: .7863 x .25 = .1966
OWP: .5330 x .50 = .2665
OOWP: .5129 x .25 = .1282
These three values add up to .5913 and that is UNCW's RPI.
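As a quick sanity check, the 25/50/25 combination is a one-liner in Python. The three inputs are the UNCW figures worked out above (.7863 being the weighted 18.4-and-5 record expressed as a percentage).

```python
# Combine the three RPI components with the 25/50/25 weighting,
# using UNC-Wilmington's numbers from the example above.
win_pct = 0.7863   # team's weighted winning percentage (18.4 / 23.4)
owp = 0.5330       # opponents' winning percentage
oowp = 0.5129      # opponents' opponents' winning percentage

rpi = 0.25 * win_pct + 0.50 * owp + 0.25 * oowp
print(round(rpi, 4))   # -> 0.5913
```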
Talking heads on television like to throw out the term "strength of schedule" (SOS) without explaining it. A team's SOS is simply the last two components of the RPI: the OWP and the OOWP. It is 2/3 OWP and 1/3 OOWP, the same ratio as in the RPI. That's it.
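Using UNCW's numbers once more, the SOS calculation is just as short. Note that the 2:1 split here mirrors the 50%/25% weighting of OWP and OOWP inside the RPI itself.

```python
# Strength of schedule: 2/3 opponents' winning percentage plus
# 1/3 opponents' opponents' winning percentage
# (UNC-Wilmington's numbers from the example above).
owp = 0.5330
oowp = 0.5129

sos = (2 * owp + oowp) / 3
print(round(sos, 4))   # -> 0.5263
```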
How can a team move up or down in the RPI without playing?
As stated above, only 25% of the RPI is actually controlled by the team. The other 75% is composed of what a team's opponents and their opponents do. So, although Western Kentucky was idle last night, six of their opponents played, going 4-2 in those games. That gain in WKU's raw RPI was enough to move them from 47 to 46 overnight. Had their opponents gone 1-5 or 0-6, they likely would have dropped a slot or two.
How can a team drop after winning a game?
This is similar to the question above. Sometimes, especially in home games against bad teams, the negative impact to the OWP is greater than the positive impact on a team's winning percentage. For example, Virginia's RPI dipped from .5663 to .5633 even though they hammered Longwood last night. Why? Longwood's 5-16 record hurt UVA's OWP, and the gain in UVA's winning percentage was not enough to make it up. That may seem unfair, but remember that the RPI is a measure of how a team performs against its schedule.
How important is the RPI to the selection committee?
Many, many factors that are not directly connected to the RPI are a part of the selection process, including but not limited to overall win/loss record, conference record, road/neutral record, record in the last ten games, and non-conference record. The regional team rankings determined by coaches are in the room also, although no one is entirely clear on how they are used.
But many of the factors are connected to the RPI, including the raw RPI; strength of schedule; record vs. the Top 25, Top 50, and Top 100 of the RPI; losses to sub-100 and sub-200 RPI teams; and conference RPI. The RPI is obviously an important tool for the selection committee, but no team gets in on RPI alone. Higher-RPI teams are left out for more worthy teams with lower RPIs each and every year. That said, no team rated better than #33 in the RPI has ever been left out, but teams that high in the RPI usually have lots of good things going for them.
Is the RPI skewed to favor the Big Six conferences?
In my opinion, it is easier to make the RPI work for you if you are in a Big Six conference, but that is mainly because the Big Six conferences generally have lots of teams with good records. However, this season we have seen the Missouri Valley crack the top five in conference RPI. Memphis and Gonzaga have maintained high RPIs by following this formula: play a killer non-conference schedule and win all of your conference games. That is tough to do, but some teams are doing fine with that formula.
Where it really hurts is in the skewed number of road games. Syracuse played their first fifteen games in the state of New York, going 13-2 in those games. Louisville played twelve of their first thirteen at home. The power-conference programs have a huge scheduling advantage over teams like Bucknell, who played six non-conference road games, or George Mason (five roadies). Imagine where George Mason would be without those road losses to Mississippi State and Wake Forest. GMU just cannot get tough teams to visit their gym, so they have to swap road games with other tough teams from CAA-equivalent conferences. The adjusted weighting of road games is supposed to help level the playing field, and I truly hope it does.
Earlier this season, Numbers Lord Ken Pomeroy had this to say about the Big Six teams' scheduling methods:
"We can only speculate on the reason for the lack of change in scheduling practices, but to me it's clear. Teams with the big budgets are not willing to trade two or three spots in the RPI for the money that home dates bring in. This may come back to haunt one or two teams each season, thereby costing them revenue they would get from the NCAA Tournament, but most schools are willing to take that risk."
To me, that is an acceptable trade-off. The big boys keep their home games, and the non-power teams get an extra slot or two in the Big Dance. I think (hope) we saw the beginnings of that last season with Northern Iowa's inclusion over Maryland and Notre Dame. We will have more evidence to ponder in just a few more weeks.