Defense Never Slumps (Or Does It?): Estimating the Variability of Defensive Performance
One of the oldest adages in baseball is that speed and defense never go into slumps. The thinking goes that while hitting and pitching are subject to a lot of variability, fielding and speed remain relatively constant, unaffected by luck and other factors. Due to random chance, a hitter may have a fairly good or fairly poor season simply by luck, all while his true batting skill remains the same; a fielder's performance, by contrast, is thought to remain steady and be mostly unaffected by chance. I'm not convinced, however, that defense is as constant as the old adage states. In this article, I'll try to estimate the inherent variability of a fielder's performance over the course of a season.
When the ball is hit in play towards a fielder, that fielder has a chance to make a play on the ball. Sometimes he'll be able to make a play and record an out, and other times he won't. This uncertainty leads to the variability. For instance, suppose the batter hits a ball up the middle, and the shortstop dives to make a stop and throw to first for the out. The shortstop made a fine play, but if 100 of those exact same balls are hit to the same fielder, he probably does not make that play all 100 times. Maybe he only makes that play 50 times, while on the other 50 balls, he doesn't get quite as good of a jump on the ball, or mis-times his dive, or can't get enough mustard on the throw. Overall, based on his fielding skill, the location of the ball, speed of the batter, etc, he had about a 50% chance to make the play, and in this case he got a little bit lucky in being able to convert that 50% chance to record an out.
Just as with hitting, this luck doesn't necessarily even out over the course of a game, a week, or even a season, and hence, it's possible a player may have a good (or poor) fielding season simply due to luck. But is fielding subject to the same random fluctuations as hitting?
Of course, not every ball in play is a 50-50 proposition. Many balls are either sure hits or (nearly) sure outs, and there is not much room for chance on those. Obviously, if every ball were a sure hit or sure out, fielding wouldn't be subject to any variation at all. To determine the amount of variability associated with fielding, we'll need to know the distribution of out probabilities for a batted ball.
Distribution of Out Probabilities
Many defensive metrics, such as Ultimate Zone Rating (UZR), already estimate the probability of an out on each batted ball by dividing the field into small areas and measuring the proportion of plays made in each area. While these systems are good for measuring the skill of defenders over the course of a season, they aren't designed to produce an accurate probability for a single play. UZR may say that a ball has a 30% chance of becoming an out, but the actual probability may be as low as 0% or as high as nearly 100% once you factor in the exact location and trajectory of the hit, the position of the fielder, the skill of the fielder, the speed of the runners, and so on. Due to these limitations, data from UZR or other systems won't be of use here.
So, how can we get the distribution we are looking for? The best way I could think of was to put on my scouting hat and estimate it with my own eyes. Using MLB.TV's condensed game feature, I picked a random sample of games and looked at 200 balls in play. On each ball, I estimated the probability that an out would be recorded - in other words, if the same ball were hit to the exact same spot again, with the same fielder, runners, etc., how often would the fielder turn the play into an out? While my estimates surely won't be perfect, they should give us the rough distribution of probabilities we are looking for.
Overall, in 200 balls in play, 96 balls were nearly sure outs, in which I estimated the probability of an out to be 98% or higher. These were routine flies and grounders which we are all so accustomed to seeing. 35 balls were likely outs, in which I estimated the probability of an out to be between 80%-95%. 22 balls were toss-ups, with a probability of an out estimated between 25%-75%. 13 balls were likely hits, with a probability between 5%-20%. And 34 balls were sure hits, where I estimated an essentially 0% probability of an out being recorded.
A histogram showing this distribution is below:
As you can see from the histogram and the text above, the distribution of out probabilities is bimodal, in that most balls are either certain outs or certain hits, while there are relatively few balls in between. This finding probably matches your intuition.
This evaluation gives us a rough distribution of the probability that a batted ball will be turned into an out, and from it we can calculate the standard deviation of fielding performance. If the same players were to field the exact same 200 balls over again, then according to the probabilities I assigned, we would expect them to record 139.3 outs with a standard error of 3.3 outs. The standard deviation on one ball in play is therefore 3.3/SQRT(200) = 0.23 outs.
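As a sketch of the calculation: treating each ball in play as an independent Bernoulli trial with its estimated out probability, the expected outs are the sum of the probabilities and the variance is the sum of p*(1-p). The bucket midpoints below are my assumption (the article's buckets are ranges, and the actual calculation used the exact per-ball estimates), so the figures land close to, but not exactly on, the numbers above.

```python
import math

# Representative out probabilities for each bucket. The midpoints are my
# assumption; the author's calculation used per-ball estimates.
buckets = [
    (96, 0.99),   # near-sure outs (>= 98%)
    (35, 0.875),  # likely outs (80%-95%)
    (22, 0.50),   # toss-ups (25%-75%)
    (13, 0.125),  # likely hits (5%-20%)
    (34, 0.00),   # sure hits
]

# Each ball is a Bernoulli trial: mean p, variance p*(1-p).
expected_outs = sum(n * p for n, p in buckets)
variance = sum(n * p * (1 - p) for n, p in buckets)
se_outs = math.sqrt(variance)
sd_per_ball = se_outs / math.sqrt(200)

print(f"expected outs: {expected_outs:.1f}")      # ~138, vs. 139.3 above
print(f"SE over 200 balls: {se_outs:.1f}")        # ~3.4, vs. 3.3 above
print(f"SD per ball in play: {sd_per_ball:.2f}")  # ~0.24, vs. 0.23 above
```

Note that the toss-up balls dominate the variance: the 22 coin-flip balls contribute nearly half of it, while the 96 near-sure outs contribute almost nothing.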
What does this mean over the course of a season? Usually there are about 4000 balls hit into play against a team during a year. Using the standard deviation of 0.23 we see that we would expect that the number of outs recorded by the defense would have a standard error of 0.23*SQRT(4000) = 14.5 outs. If we assume that the run value of each hit is .55 (mostly singles, with some doubles) and the run value of each out is -.28, we find that the standard error of the number of runs allowed by the defense over the course of a season is 14.5*(.55+.28) = 12.0 runs.
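The season-level arithmetic can be sketched as follows. Since the balls in play are treated as independent, the standard error scales with the square root of their number, and the run conversion uses the hit and out run values assumed in the text (note that .55 + .28 is just the gap between a hit at +0.55 runs and an out at -0.28 runs):

```python
import math

sd_per_ball = 0.23      # SD of outs on a single ball in play (from above)
balls_in_play = 4000    # rough balls in play against a team per season
run_value_hit = 0.55    # assumed average run value of a hit allowed
run_value_out = -0.28   # assumed run value of an out

# Independent trials: the SE over n balls is the per-ball SD times sqrt(n).
se_outs = sd_per_ball * math.sqrt(balls_in_play)

# Converting a would-be hit into an out swings the run total by the
# difference between the two run values: 0.55 - (-0.28) = 0.83 runs.
se_runs = se_outs * (run_value_hit - run_value_out)

print(f"SE of outs over a season: {se_outs:.1f}")  # roughly 14.5
print(f"SE of runs over a season: {se_runs:.1f}")  # roughly 12
```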
So, after a lot of math, the bottom line is that the plays made by the defense will vary by give or take about 12 runs simply due to luck, even when the fielders' true skill remains the same throughout the year.
How does this compare to the variability of offense? The standard deviation of linear-weights batting runs in one plate appearance is about 0.43 runs, compared to an SD of 0.19 runs per ball in play for fielding. Over a season's worth of 6200 plate appearances, the standard error of the number of runs produced is 34.2 runs - much larger than the 12-run standard error surrounding a team's defensive efforts.
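The offense side of the comparison uses the same square-root scaling. Using the rounded 0.43 figure gives roughly 34 runs, consistent with the 34.2 quoted above (the small gap presumably comes from rounding the per-PA standard deviation):

```python
import math

# SD of linear-weights batting runs per plate appearance (from the text).
sd_bat_per_pa = 0.43
plate_appearances = 6200  # roughly a team's PAs over a season

# Independent plate appearances: SE scales with sqrt(n).
se_bat_season = sd_bat_per_pa * math.sqrt(plate_appearances)

print(f"SE of batting runs over a season: {se_bat_season:.1f}")  # ~34
```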
Over the course of a season, then, the old adage is indeed right in some respects - the amount of luck associated with fielding is much less than the amount associated with batting. Still, a standard error of 12 runs is nothing to sneeze at, and from these calculations we see that lucky fielding can give a team one or two extra wins over the course of a season (and of course the reverse is true for unlucky fielding).
From an individual player's standpoint, the average fielder has about 500 balls in play in his area over the course of the season (this varies by position, of course, and we can adjust accordingly). Using the numbers above, the average fielder has a standard error of about .23*SQRT(500) = 5.14 outs over a season. That is, he is prone to make about five more or five fewer plays in a season than his true talent would call for, which corresponds to a difference of about 4 runs. While this is fairly small, it shows that random variability can play a part in a fielder's performance just as it can for hitters.
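The individual-fielder version of the calculation is a direct rescaling of the team-level one, swapping 4000 balls in play for the 500 or so an average fielder sees:

```python
import math

sd_per_ball = 0.23    # SD of outs on one ball in play (from the article)
chances = 500         # rough balls in play in an average fielder's area
runs_per_play = 0.83  # run swing between a hit (+0.55) and an out (-0.28)

se_plays = sd_per_ball * math.sqrt(chances)
se_runs = se_plays * runs_per_play

print(f"SE in plays made over a season: {se_plays:.2f}")  # ~5.14
print(f"SE in runs over a season: {se_runs:.1f}")         # ~4.3
```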
Since not all balls are identical, the variability associated with fielding performance is not easily calculated like it is for offensive performance. By doing this calculation, this article hopefully sheds some light on the natural variability we can expect to be associated with fielding. More research can be done by checking to see how the probability distributions (and hence the variability) might differ by position.