Pelton mail: Do all the 3s actually make the Rockets more consistent?

By Kevin Pelton
Sunday, January 21, 2018

This week's mailbag features your questions on 3-point randomness, young All-Stars, better lineup data and getting rid of MVP voters.



You can tweet your questions using the hashtag #peltonmailbag or email them to peltonmailbag@gmail.com.



Ok!



With Chris Paul/James Harden together, the Rockets only have 1 game with an oRTG below league average. Is it possible that they've crossed a 3PA threshold that eliminates some downward variance due to eFG% of taking that many 3PA? #peltonmailbag



- Mike Zavagno (@MZavagno11) January 19, 2018



The default assumption is that as teams take more 3s, that means more randomness and less predictability in their performance. This is completely logical, and something I had taken as gospel. But last year on Nylon Calculus, Bo Schwartz Madsen found this theory wasn't actually backed up by the game-to-game variability in teams' offensive ratings: Over the previous five seasons, there was no significant correlation between the number of 3s teams attempted and the inconsistency of their offensive output.



At the time, Madsen did note that the Rockets were actually "among the offenses with a higher spread on their offensive output." That's no longer the case. As part of a Twitter thread Friday, Madsen pointed out that this season Houston has the lowest variability of any NBA team (as measured by median absolute deviation, which gives less weight to outliers than the more common standard deviation).
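Here's a minimal sketch of those two spread measures (an illustration using made-up per-game offensive ratings, not Madsen's actual code):

```python
import numpy as np

def spread_measures(ratings):
    """Return (sample standard deviation, median absolute deviation) of per-game ratings."""
    ratings = np.asarray(ratings, dtype=float)
    sd = ratings.std(ddof=1)                                # the familiar standard deviation
    mad = np.median(np.abs(ratings - np.median(ratings)))   # robust to outlier games
    return sd, mad

# Hypothetical offensive ratings for six games, including one outlier clunker.
games = [112.4, 118.0, 109.8, 121.5, 114.2, 96.0]
sd, mad = spread_measures(games)
print(f"SD = {sd:.1f}, MAD = {mad:.1f}")  # the outlier inflates SD far more than MAD
```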



Let's trace this specifically to 3-point attempts. Madsen also found in his original analysis that -- naturally -- more 3s meant less variability in a team's 3-point percentage from game to game. (This is basically an inevitable result of the law of large numbers.) At the same time, more attempts mean that slight variations in the Rockets' 3-point percentage have more impact. So are they more or less consistent, specifically on 3s?
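To see that trade-off in the abstract, here's a toy binomial calculation; the 36 percent make rate and the attempt counts are assumed round numbers, not actual team figures:

```python
import math

p = 0.36  # assumed 3-point make probability
for attempts in (25, 35, 45):
    sd_pct = math.sqrt(p * (1 - p) / attempts)    # game-to-game spread of 3P%
    sd_makes = math.sqrt(attempts * p * (1 - p))  # game-to-game spread of 3s made
    print(f"{attempts} attempts: SD of 3P% = {sd_pct:.1%}, SD of makes = {sd_makes:.1f}")
```

More attempts push the percentage spread down but the makes spread up, which is why the two effects can roughly cancel.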



So far this season, the standard deviation of Houston's game-to-game 3-point percentage is 6.6 percent. By contrast, the Golden State Warriors' 3-point percentage has a standard deviation of 8.7 percent, about a third higher. Factor in Houston's far higher volume of attempts, and the Rockets still end up with a slightly smaller standard deviation of 3s made per game (3.2) than the Warriors (3.3). That difference isn't enough to explain Houston being more consistent overall than Golden State, but it does back up Madsen's original finding that consistency isn't related to 3-point attempts.
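If you want to run that comparison yourself from game logs, the calculation looks something like this; the (made, attempted) pairs are invented for illustration, not real Rockets or Warriors box scores:

```python
import numpy as np

def three_point_spread(game_log):
    """SD of game-to-game 3P% and of 3s made, from (made, attempted) pairs."""
    made = np.array([m for m, _ in game_log], dtype=float)
    att = np.array([a for _, a in game_log], dtype=float)
    return (made / att).std(ddof=1), made.std(ddof=1)

# Invented game logs for a high-volume team and a lower-volume team.
high_volume = [(15, 42), (11, 40), (17, 45), (13, 44), (10, 39)]
lower_volume = [(12, 29), (9, 31), (14, 30), (8, 28), (13, 32)]
for name, log in (("high volume", high_volume), ("lower volume", lower_volume)):
    sd_pct, sd_makes = three_point_spread(log)
    print(f"{name}: SD of 3P% = {sd_pct:.1%}, SD of makes = {sd_makes:.1f}")
```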



@kpelton #peltonmailbag is there a growing length of time between the year a player is drafted vs when they make an allstar game? in the 2017 game the latest a player was drafted was 2013 (greekfreak)



- Chuck Spadina (@HpyCsPz) January 18, 2018



There is, yes. No player in his first three seasons has made either of the last two All-Star Games, which is pretty unusual historically. Here's how the distribution of All-Star selections by experience since 1978 compares with the last four years:



Of course, to some extent, this is an inevitable product of players entering the league at younger ages. And when you look at the distribution of All-Stars by age, the last four games look a lot more similar to the past four decades:



There have actually been slightly more players aged 20-22 in the All-Star Game over the last four years than in the longer historical sample, a trend that will likely continue this season. Between Nikola Jokic, Kristaps Porzingis and Karl-Anthony Towns, we'll probably see a couple of third-year All-Stars, and all of those contenders are 22 years old.



@kpelton Are you aware of any luck-adjusted net rating data for *lineups*? Something that reduces the impact of hot/cold shooting and hones in on the true quality of a lineup, perhaps using Expected EFg based on shot location and defender distance? #peltonmailbag



- Jeff Allen (@TYTEJEFF) January 14, 2018



No, I'm not. First, we'd need to do something similar for player on-court and off-court stats. Jacob Goldstein started this process recently on Nylon Calculus, though his method tends to overstate the magnitude of good or bad luck for players because it doesn't account for the fact that additional missed shots also produce some points via offensive rebounds.
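To illustrate the offensive-rebound point, here's a generic sketch of a shooting-luck adjustment (a simplified stand-in, not Goldstein's actual method); the expected 3-point percentage, rebound rate and putback value are all assumed parameters:

```python
def luck_adjusted_ortg(points, possessions, made3, att3, expected_3p_pct,
                       oreb_rate=0.25, pts_per_oreb=1.05):
    """Generic illustration: strip out 'extra' 3-point makes relative to expectation,
    crediting back a fraction of the added misses as second-chance points."""
    expected_makes = att3 * expected_3p_pct
    extra_makes = made3 - expected_makes             # positive = shooting hot
    # Each removed make becomes a miss, some of which get rebounded and scored.
    adjustment = extra_makes * (3 - oreb_rate * pts_per_oreb)
    return (points - adjustment) / possessions * 100

# Hypothetical on-court sample: 275 points on 240 possessions, 38-of-90 on 3s,
# with an assumed expected 3P% of 36 percent on those attempts.
print(round(luck_adjusted_ortg(275, 240, 38, 90, 0.36), 1))
```

Drop the offensive-rebound term (set oreb_rate to zero) and the adjustment gets bigger, which is exactly the overstatement described above.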



I'm a little more skeptical at the lineup level because the samples are so much smaller in most cases that looking just at shooting luck will ignore other possible sources of randomness. Over the hundreds or thousands of minutes players spend on or off the court, those other sources tend to even out, leaving fluky shooting as the primary source of randomness. But few lineups reach that many minutes; through Friday's games, according to NBA.com/Stats, just 18 five-man lineups had played together for 250 minutes, the minimum we'd typically ever consider for player stats.



Smaller lineup combinations (two- or three-man units) can build up larger sample sizes, but for the most part, I'd say even with an adjustment for shooting luck, that lineup data is too noisy to be given much weight.



"Why not get rid of the panel of voters who decide the MVP, and replace them by assigning a weighted score for basic stats (PPG, APG, RPG, BPG, etc.) and advanced metrics (PER, win shares, real plus-minus, etc.) and let the numbers speak for themselves? This would seem to get rid of various voter bias that inevitably occurs. If such a system could be created, who would you consider MVP in each of the last few seasons?"



-- Tanner Short



Ultimately, a lot of what we're discussing in these MVP debates is how to weigh various measures of value, statistical and otherwise. To your point, the answers to those questions sometimes change depending on which measure supports which player -- an example of what's known as "motivated reasoning" -- and adopting a standard could potentially create more consistency.
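For what it's worth, the kind of formula the question imagines would look something like the sketch below; the stat categories, weights and player lines are invented purely for illustration:

```python
import numpy as np

# Invented weights; nothing official about these categories or numbers.
WEIGHTS = {"ppg": 0.25, "rpg": 0.10, "apg": 0.15, "rpm": 0.25, "win_shares": 0.25}

def mvp_scores(candidates):
    """candidates: name -> {stat: value}. Returns name -> weighted z-score composite."""
    names = list(candidates)
    scores = dict.fromkeys(names, 0.0)
    for stat, weight in WEIGHTS.items():
        values = np.array([candidates[n][stat] for n in names], dtype=float)
        z = (values - values.mean()) / values.std()  # standardize across the field
        for name, zval in zip(names, z):
            scores[name] += weight * zval
    return scores

# Invented candidate lines, purely for illustration.
players = {
    "Player A": {"ppg": 31.0, "rpg": 8.0, "apg": 9.0, "rpm": 7.5, "win_shares": 13.0},
    "Player B": {"ppg": 27.0, "rpg": 5.5, "apg": 6.5, "rpm": 8.0, "win_shares": 15.0},
    "Player C": {"ppg": 26.0, "rpg": 9.5, "apg": 4.5, "rpm": 6.0, "win_shares": 12.0},
}
print(sorted(mvp_scores(players).items(), key=lambda kv: -kv[1]))
```

The hard part, of course, is agreeing on those weights in the first place.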



In practice, however, the example of NCAA football's Bowl Championship Series suggests to me that as soon as an outcome of an MVP formula didn't "feel" right, the formula would change, and it would probably keep changing, overcorrecting for whatever the most recent round of complaints was.



Moreover, I don't think a formula would have changed much in recent seasons. While there have been some difficult calls along the way (Russell Westbrook over James Harden et al. last season, Stephen Curry over Harden in 2014-15), I think the voters have ultimately agreed with the consensus based on advanced metrics every year since Derrick Rose won in 2010-11. While that selection hasn't aged well, I'm not sure one better pick over that span is enough of an improvement in the success rate to justify such a major (and risky) change.


