Z-Scores Are Lying to You (And Why Everyone Uses Them Anyway)
In fantasy basketball analysis, using z-scores to evaluate players isn't necessarily wrong. They just might be answering a different question than you think.
Anyone who's prepared seriously for a fantasy basketball draft has encountered z-scores. FantasyPros uses them. Basketball Monster uses them. That guy in your league who drafts off a spreadsheet? It's probably built on z-scores.
The appeal is obvious: convert everything to a common scale, compare apples to oranges, rank players scientifically. But there's a fundamental flaw worth understanding.
What Z-Scores Actually Measure
A z-score represents how many standard deviations from average a value sits. If players average 15 points with a standard deviation of 5, a player scoring 25 has a z-score of +2.0.
This solves a real problem. You can't directly compare 25 points per game to 8 assists per game because they're different units. Convert both to z-scores and suddenly they're commensurable. A player with +2.0 in points and +1.5 in assists provides +3.5 combined value.
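Here's the whole mechanic in a few lines of Python. The points numbers are the ones from the example above; the assist mean and standard deviation are made up, chosen only so the example lands on that +1.5.

```python
def z_score(value, mean, std_dev):
    """How many standard deviations a value sits above (or below) the average."""
    return (value - mean) / std_dev

# From the example above: a 25 PPG scorer in a pool averaging 15 PPG
# with a standard deviation of 5.
points_z = z_score(25, mean=15, std_dev=5)   # +2.0

# 8 assists per game, as in the text; the mean and standard deviation
# here are assumed, picked to produce the +1.5 in the example.
assists_z = z_score(8, mean=5, std_dev=2)    # +1.5

total_value = points_z + assists_z           # +3.5 combined
print(f"Points z: {points_z:+.1f}, Assists z: {assists_z:+.1f}, Total: {total_value:+.1f}")
```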
It's elegant and mathematically sound. For years, it seemed like the right framework.
The Core Problem, Backed by Real Math
Z-scores measure statistical unusualness, not contribution to winning. Consider two players with identical z-scores in different categories:
Player 1: Tyler Herro
- 26.5 points per game
- League average (top 150 players): 17.2 PPG
- PPG relative to average: +54%
- Z-score: 1.52
Player 2: Joel Embiid
- 1.7 blocks per game
- League average (top 150 players): 0.68 BPG
- BPG relative to average: +150%
- Z-score: 1.52
In a 12-team league where winning requires accumulating more of each category than opponents, which contribution matters more? Z-scores can't answer this. They only indicate that both contributions were equally unusual relative to the average player.
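To make the comparison concrete, here's a quick sketch in Python. The averages are the top-150 figures quoted above; the standard deviations aren't published alongside them, so the values below are assumptions, back-solved to be consistent with the stated 1.52 z-scores.

```python
def z_score(value, mean, std_dev):
    return (value - mean) / std_dev

def pct_above_average(value, mean):
    return (value - mean) / mean * 100

# Averages come from the article (top-150 player pool). The standard
# deviations are assumed, chosen so each z-score lands at ~1.52.
players = {
    "Tyler Herro (PPG)": {"value": 26.5, "mean": 17.2, "std_dev": 6.12},
    "Joel Embiid (BPG)": {"value": 1.7,  "mean": 0.68, "std_dev": 0.67},
}

for name, s in players.items():
    z = z_score(s["value"], s["mean"], s["std_dev"])
    pct = pct_above_average(s["value"], s["mean"])
    print(f"{name}: z = {z:.2f}, {pct:+.0f}% vs. average")

# Identical z-scores, wildly different relative contributions.
```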
The Systematic Bias
Counting stats like points, rebounds, and assists typically have lower standard deviations relative to their means than percentages or defensive stats.
The consequence: for the same proportional edge over the average, the low-variance category hands out the bigger z-score. The math works exactly as designed, but the result is that counting stats get overvalued relative to their contribution to winning categories, while efficiency and defensive contributions get undervalued. This creates a predictable market inefficiency.
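You can see the mechanism with a toy comparison: give a player the same +50% edge over the average in two categories and watch what the spread does to the z-score. The averages are the ones from the example above; the standard deviations are the same assumed values as before.

```python
# Same relative edge (+50% over the category average) run through two
# categories with different spreads. Averages are from the example above;
# standard deviations are the assumed values used earlier.
categories = {
    # lower spread relative to the mean, typical of counting stats
    "points": {"mean": 17.2, "std_dev": 6.12},
    # higher spread relative to the mean, typical of defensive stats
    "blocks": {"mean": 0.68, "std_dev": 0.67},
}

relative_edge = 0.50  # player produces 50% more than the category average

for cat, s in categories.items():
    value = s["mean"] * (1 + relative_edge)
    z = (value - s["mean"]) / s["std_dev"]
    cv = s["std_dev"] / s["mean"]
    print(f"{cat}: +50% over average -> z = {z:.2f} (std/mean = {cv:.2f})")

# points: +50% -> z = 1.41; blocks: +50% -> z = 0.51.
# The identical proportional contribution scores almost 3x higher in the
# low-variance category, which is exactly the bias described above.
```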
Why Z-Scores Persist as an Industry Standard
Z-scores have stuck around for good reasons:
- They're mathematically legitimate. Anyone with a statistics background recognizes them as theoretically appropriate for comparing distributions.
- They're straightforward to implement. One formula handles all categories.
- Network effects matter. Once major platforms adopted z-scores, they became the common language of fantasy basketball analysis.
Z-scores are a good solution to a genuinely difficult problem. Just not necessarily the optimal one.
The Practical Implication
If z-scores work for you, there's no urgent need to abandon them. They're effective tools that already put you ahead of most casual players.
But understanding their limitation, that they systematically overvalue counting stats while undervaluing efficiency and defense, reveals opportunities: players who fall further in drafts than they should, and categories left ripe for exploitation. That's a strategic edge your league hasn't recognized.
Next Week
The alternative framework is PoV. It's simpler to calculate than z-scores, more intuitive to interpret, and focused on what actually matters: each player's proportional contribution to winning each category.
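As a teaser, here's roughly what that calculation might look like, assuming "proportional contribution" means a player's share of a category's total output across the player pool. The precise definition comes next week, so treat this as a rough sketch with placeholder numbers.

```python
# A minimal sketch of a proportional-contribution metric, under the
# assumption described above. The block rates below are placeholders,
# not real league data.
def proportion_of_value(player_stat, pool_stats):
    """Player's share of a category's total output across the pool."""
    return player_stat / sum(pool_stats)

pool_blocks = [0.5, 1.7, 0.3, 0.9, 0.4]  # hypothetical per-game block rates
print(f"{proportion_of_value(1.7, pool_blocks):.1%} of the pool's blocks")
```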
Your pre-draft rankings might shift in meaningful ways.