I spent most of Friday and Saturday at the MIT Sloan Sports Analytics Conference in Boston, checking out other research and discussing my work (with my buddy Chris) on the NBA draft and tanking. Peter Dizikes wrote a nice article for MIT News discussing our project and some of the other work by MIT affiliates. I was also interviewed by a fellow named David Staples from the Edmonton Journal about our project.
David mentions another project on tanking presented at the conference. Adam Gold, a PhD student at the University of Missouri, presented his “solution” to tanking. The proposal: total team wins after playoff elimination should determine draft order. My problem with this: teams that are eliminated sooner have more time to accumulate wins post-elimination, so, rather than race for the overall worst record, teams would race to be eliminated first. I think this would make the problem worse, since teams with low expectations might give up early in the season, even if those expectations turned out to be wrong.
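The incentive flaw is easy to see with a toy calculation. This is just a sketch with made-up numbers — an 82-game season and a fixed post-elimination win rate — not anything from Adam's actual model:

```python
# Toy illustration of the eliminated-then-win draft rule.
# Assumption (hypothetical): each team wins half its games played
# after it is mathematically eliminated from the playoffs.

SEASON_GAMES = 82

def post_elimination_wins(elimination_game: int, win_rate: float = 0.5) -> float:
    """Expected wins a team can pile up after elimination,
    assuming it wins `win_rate` of its remaining games."""
    remaining = SEASON_GAMES - elimination_game
    return remaining * win_rate

# A team eliminated after game 50 vs. one eliminated after game 70:
early = post_elimination_wins(elimination_game=50)
late = post_elimination_wins(elimination_game=70)
print(early, late)  # 16.0 6.0
```

At the same post-elimination win rate, the team eliminated twenty games earlier banks ten more wins toward draft position — so the race shifts from being the worst team to being the first team out.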
Adam’s response was that no team would tank early, since they all try to make the playoffs first and foremost. I wish that were true, but it’s not. Many teams each year have no plans to make the playoffs. By this point in the season, many more know they won’t make it, even though they are not mathematically eliminated. Fans — even from other teams — would push these teams toward elimination so that the exciting competition for the first pick could get up and running. If nothing else, my research with Chris shows that teams respond strongly to the tanking incentives currently in place. I am sure they would do the same under Adam’s plan.
Indeed, Adam showed that teams with worse overall records tend to pick higher in his system, affirming that he would continue to reward poor play. Any system that maintains the redistribution or fairness goals of the current draft rules must create incentives to tank. You can’t give bad teams the best players without creating an incentive to be bad, because everyone wants the best players. In Adam’s system, the incentive is a little bit hidden, but it’s there.
I find the whole issue of redistribution through the draft intriguing, given the general meritocratic viewpoint in the U.S. Our economic rules reward the successful much more than the rules in most European countries, yet we insist on redistribution through the draft and salary caps in our professional sports leagues. Meanwhile, in Europe, poorly performing clubs are relegated, and the only way to get players is to go out and buy them with your own cash. I prefer our system, because I think it’s more exciting and unpredictable, and tanking is just one unsavory component of it, nothing more.
We got nice feedback on our poster. Most people believed the results (note that we find much more tanking than Adam did with his simple comparison of winning percentage before and after playoff elimination). Lots of people asked how exactly teams tank. My guess is that it’s personnel, not effort. Teams probably rest stars when they need to lose and play weird rotations under the guise of experimentation. I don’t have proof of this yet, though.
Others asked about the NFL and NHL. I hope to repeat the analysis on those leagues soon (I expect less tanking, since it’s harder for one player to change a team in football and hockey).
The conference was great — there’s nothing else like it for sports. I was so grateful to be invited. I still have a few suggestions for them going forward, though. First, the panel discussions dominate the proceedings. They have lots of big names (Mark Cuban and Bill Simmons always attend, Bill James came this year), which brings in the average fans. However, it’s hard to share expertise in this format. The conference should keep doing the panels — there’s clearly demand for them — but there’s an opportunity to expand the research presentations and discussion. This year, the research side was artificially limited, since the conference offered the same presentations both days to reduce scheduling conflicts for attendees. Research attendance was much lower on day two, though, so I’m not sure how much overlapping interest exists between the research presentations and panels. In any case, the papers are available online in the event of scheduling issues.
My second suggestion would ensure that the presentations are worthwhile. Currently, the selection process pushes through some questionable research. Some papers are quite good (check out “CourtVision,” “Deconstructing the Rebound,” “Effort vs. Concentration,” and “Predicting the Next Pitch”). Others address interesting problems but suffer from standard statistical issues (such as reverse causality and omitted variables bias) that preclude a causal interpretation of their results. I don’t blame the authors, who are surely submitting the best research they can, but it’s no surprise that the four I list above were all written by PhDs. I imagine that the year-to-year selections discourage many other good analysts from submitting work.
Perhaps because of this (and because of the higher profile of the conference), the ratio of skilled analysts to interested fans seems to be dropping. I had very few conversations with individuals trained in statistical analysis this year, and I was impressed by just a few research papers. We were the only authors who regularly stood next to our poster to explain our findings, yet we had very little traffic. To improve credibility, the organizers could ask top-level analysts to review the research entries. I imagine many would be honored to judge for the conference.
Of course, this depends on the organizers’ goals. They could continue to develop the conference as a sports-celebrity meet-and-greet with some interesting general discussion about analytics. There’s probably plenty of money in that direction. To keep their prestige in the long run, however, I think they need the analysts themselves on their side.