Yesterday I looked at whether, and how, a hitter’s season-to-date stats can help inform an estimate of his rest-of-season performance, over and above a credible, up-to-date mid-season projection. Obviously the answer depends on the quality of the projection – specifically, how well it incorporates the season-to-date data into the projection model.

For players who were having dismal performances after the first, second, third, all the way through the fifth month of the season, the projection accurately predicted the last month’s performance, and the first 5 months of data added nothing to the equation. In fact, those players who were having dismal seasons so far, even into the last month of the season, performed fairly admirably the rest of the way – nowhere near the level of their season-to-date stats. I concluded that the answer to the question, “When should we worry about a player’s especially poor performance?” was, “Never. It is irrelevant other than how it influences our projection for that player, which is not much, apparently.” For example, full-time players who had a .277 wOBA after the first month of the season were still projected to be .342 hitters, and in fact they hit .343 for the remainder of the season. Even halfway through the season, players who hit .283 for 3 solid months were still projected at .334 and hit .335 from then on. So, ignore bad performances and simply look at a player’s projection if you want to estimate his likely performance tomorrow, tonight, next week, or for the rest of the season.

On the other hand, players who had been hitting well above their mid-season projections (crafted after, and including, the hot hitting) actually outhit their projections by anywhere from 4 to 16 points – still nowhere near the level of their “hotness,” however. This suggests that the projection algorithm is not handling recent “hot” hitting properly – at least my projection algorithm. Then again, when I looked at hitters who were projected at well above average 2 months into the season, around .353, the hot ones and the cold ones each hit almost exactly the same over the rest of the season, equivalent to their respective projections. In that case, how they performed over those 2 months gave us no useful information beyond the mid-season projection. In one group, the “cold” group, players hit .303 for the first 2 months of the season, and they were still projected at .352. Indeed, they hit .349 for the rest of the season. The “hot” batters hit .403 for the first 2 months; they were projected to hit .352 after that, and they did indeed hit exactly .352. So there would be no reason to treat these hot and cold above-average hitters any differently from one another in terms of playing time or slot in the batting order.

Today, I am going to look at pitchers. I think the perception is that because pitchers get injured more easily than position players, learn and experiment with new and different pitches, often lose velocity, can have their mechanics break down, and can have their performance affected by psychological and emotional factors more easily than hitters, early or mid-season “trends” are important in terms of future performance. Let’s see to what extent that might be true.

After one month, there were 256 pitchers, or around 1/3 of all qualified pitchers (at least 50 TBF), who pitched terribly, to the tune of a normalized ERA (NERA) of 5.80 (league average is defined as 4.00). I included all pitchers whose NERA was at least 1/2 run worse than their projection. What was their projection after that poor first month? 4.08. How did they pitch over the next 5 months? 4.10. They faced an average of 531 more batters over the last 5 months of the season.

What about the “hot” pitchers? They were projected after one month at 3.86 and they pitched at 2.56 for that first month. Their performance over the next 5 months was 3.85. So for the “hot” and “cold” pitchers after one month, their updated projection accurately told us what to expect for the remainder of the season and their performance to-date was irrelevant.

In fact, if we look at pitchers who had good projections after one month and divide them into two groups – one that pitched terribly for the first month and one that pitched brilliantly for the first month – here is what we get:

Good pitchers who were cold for 1 month

First month: 5.38
Projection after that month: 3.79
Performance over the last 5 months: 3.75

Good pitchers who were hot for 1 month

First month: 2.49
Projection after that month: 3.78
Performance over the last 5 months: 3.78

So, and this is critical, one month into the season, if you are projected to pitch above average – at, say, 3.78 – it makes no difference whether you have pitched great or terribly thus far. You are going to pitch at almost exactly your projection for the remainder of the season!

Yet the cold group faced an average of 587 more batters over those 5 months and the hot group 630. Managers, again, are putting too much emphasis on that first month’s stats.

What if you are projected after one month as a mediocre pitcher but you have pitched brilliantly or poorly over the first month?

Bad pitchers who were cold for 1 month

First month: 6.24
Projection after that month: 4.39
Performance over the last 5 months: 4.40

Bad pitchers who were hot for 1 month

First month: 3.06
Projection after that month: 4.39
Performance over the last 5 months: 4.47

Same thing. It makes no difference whether a poor or mediocre pitcher pitched well or poorly over the first month of the season. If you want to know how he is likely to pitch for the remainder of the season, simply look at his projection and ignore the first month. Those stats give you no more useful information. Again, the “hot” but mediocre pitchers got 44 more TBF over the final 5 months of the season, despite pitching essentially the same as the “cold” group over that 5-month period.

What about halfway into the season? Do pitchers with the same mid-season projection, where one group was “hot” over the first 3 months and the other group was “cold,” pitch the same for the remaining 3 months? Here, the projection algorithm does not handle the 3-month anomalous performances very well. Here are the numbers:

Good pitchers who were cold for 3 months

First 3 months: 4.60
Projection after 3 months: 3.67
Performance over the last 3 months: 3.84

Good pitchers who were hot for 3 months

First 3 months: 2.74
Projection after 3 months: 3.64
Performance over the last 3 months: 3.46

So for the hot pitchers the projection is undershooting them by around .18 runs per 9 IP, and for the cold ones it is overshooting them by .17 runs per 9. Then again, the actual performance is much closer to the projection than to the season-to-date performance. As you can see, pitcher stats halfway through the season are a terrible proxy for true talent/future performance. These “hot” and “cold” pitchers, whose first-half performance and rest-of-season projections were divergent by at least .5 runs per 9, performed in the second half around .75 runs per 9 better or worse than in the first half. You are much better off using the mid-season projection than the actual first-half performance.

For poorer pitchers who were “hot” and “cold” for 3 months, we get these numbers:

Poor pitchers who were cold for 3 months

First 3 months: 5.51
Projection after 3 months: 4.41
Performance over the last 3 months: 4.64

Poor pitchers who were hot for 3 months

First 3 months: 3.53
Projection after 3 months: 4.43
Performance over the last 3 months: 4.33

The projection model is still not giving enough weight to the recent performance, apparently. That is especially true of the “cold” pitchers: it overvalues them by .23 runs per 9. It is likely that these pitchers are suffering some kind of injury or velocity decline and the projection algorithm is not properly accounting for that. For the “hot” pitchers, the model only undervalues these mediocre pitchers by .1 runs per 9. Again, if you try to use the actual 3-month performance as a proxy for true talent or to project their future performance, you would be making a much bigger mistake – to the tune of around .8 runs per 9.

What about 5 months into the season? If the projection and the 5-month performance are divergent, which is better? Is using those 5-month stats a bad idea?

Yes, it still is. In fact, it is a terrible idea. For some reason, the projection does a lot better after 5 months than after 3 months. Perhaps some of those injured pitchers are selected out. Even though the projection slightly undervalues the hot pitchers and overvalues the cold ones, using their 5-month performance as a harbinger of the last month is a terrible idea. Look at these numbers:

Poor pitchers who were cold for 5 months

First 5 months: 5.45
Projection after 5 months: 4.41
Performance over the last month: 4.40

Poor pitchers who were hot for 5 months

First 5 months: 3.59
Projection after 5 months: 4.39
Performance over the last month: 4.31

For the mediocre pitchers, the projection almost nails both groups, despite being nowhere near the level of their performance over the first 5 months of the season. I cannot emphasize this enough: even 5 months into the season, using a pitcher’s season-to-date stats as a predictor of future performance or as a proxy for true talent (which is pretty much the same thing) is a terrible idea!

Look at the mistakes you would be making. You would be thinking that the hot group was made up of 3.59 pitchers when in fact they were 4.40 pitchers who performed as such. That is a difference of .71 runs per 9. For your cold pitchers, you would undervalue them by more than a run per 9! What do managers do after 5 months of “hot” and “cold” pitching, despite the fact that both groups pitched almost the same for the last month of the season? They gave the hot group an average of 13 more TBF per pitcher. That is around a 3-inning difference in one month.

Here are the good pitchers who were hot and cold over the first 5 months of the season:

Good pitchers who were cold for 5 months

First 5 months: 4.62
Projection after 5 months: 3.72
Performance over the last month: 3.54

Good pitchers who were hot for 5 months

First 5 months: 2.88
Projection after 5 months: 3.71
Performance over the last month: 3.72

Here the “hot,” good pitchers pitched exactly at their projection despite pitching .83 runs per 9 better than that over the first 5 months of the season. The “cold” group actually outperformed their projection by .18 runs and pitched better than the “hot” group! This is probably a sample-size blip, but the message is clear: even after 5 months, forget about how your favorite pitcher has been pitching, even for most of the season. The only thing that counts is his projection, which utilizes many years of performance plus a regression component, and not just 5 months’ worth of data. It would be a huge mistake to use those 5-month stats to predict these pitchers’ performances.

Managers can learn a huge lesson from this. The average number of batters faced in the last month of the season among the hot pitchers was 137, or around 32 IP. For the cold group, it was 108 TBF, or 25 IP. Again, the “hot” group pitched 7 more IP in only a month, yet they pitched worse than the “cold” group and both groups had the same projection!

The moral of the story here is that, for the most part, and especially at the beginning and end of the season, you should ignore actual pitching performance to date and use credible mid-season projections if you want to predict how your favorite or not-so-favorite pitcher is likely to pitch tonight or over the remainder of the season. If you don’t, and that actual performance is significantly different from the updated projection, you are making a sizable mistake.

 

 

Recently on Twitter I have been harping on the folly of using a player’s season-to-date stats, be it OPS, wOBA, RC+, or some other metric, for anything other than, well, describing how a player has done so far. From a week into the season until the last pitch is thrown in November, we are inundated with articles and TV and radio commentaries about how so-and-so should be getting more playing time because his OPS is .956, or how player X should be benched or at least dropped in the order because he is hitting .245 (in wOBA). Commentators, writers, analysts and fans wonder whether player Y’s unusually great or poor performance is “sustainable,” whether it is a “breakout” likely to continue, an age- or injury-related decline that portends the end of a career, or a temporary blip that will pass once said injury is healed.

With web sites such as Fangraphs.com allowing us to look up a player’s current, up-to-date projections which already account for season-to-date performance, the question that all these writers and fans must ask themselves is, “Do these current season stats offer any information over and above the projections that might be helpful in any future decisions, such as whom to play or where to slot a player in the lineup, or simply whom to be optimistic or pessimistic about on your favorite team?”

Sure, if you don’t have a projection for a player, and you know nothing about his history or pedigree, a player’s season-to-date performance tells you something about what he is likely to do in the future, but even then, it depends on the sample size of that performance – at the very least you must regress that performance towards the league mean, the amount of regression being a function of the number of opportunities (PA) underlying the seasonal stats.
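For readers who want the mechanics, here is a minimal sketch of that regression-to-the-mean adjustment. The league mean (.320) and the regression constant (1200 PA) below are placeholder round numbers for illustration only, not values used anywhere in this analysis:

```python
# A minimal sketch of regressing a season-to-date wOBA toward the league mean.
# The league mean (.320) and the regression constant (1200 PA) are placeholder
# round numbers for illustration; the "right" constant depends on the metric.
def regressed_woba(observed_woba, pa, league_woba=0.320, regression_pa=1200):
    """Shrink an observed wOBA toward the league mean based on sample size (PA)."""
    weight = pa / (pa + regression_pa)   # more PA -> trust the observation more
    return weight * observed_woba + (1 - weight) * league_woba

# Example: a .286 wOBA over 250 PA, with no other information about the player,
# is a much milder statement about his true talent than the raw number suggests.
print(round(regressed_woba(0.286, 250), 3))   # ~0.314
```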

However, since it is so easy for virtually anyone to look up a player’s projection on Fangraphs, Baseball Prospectus, The Hardball Times, or a host of other fantasy baseball web sites, why should we care about those current stats other than as a reflection of what a certain player has accomplished thus far in the season? Let’s face it: 2 or 3 months into the season, if a player who is projected at .359 (wOBA) is hitting .286, it is human nature to call for his benching, drop him in the batting order, or simply expect him to continue to hit in a putrid fashion. Virtually everyone thinks this way, even many astute analysts. It is an example of recency bias, which is one of the most pervasive human traits in all facets of life, including and especially in sports.

Who would you rather have in your lineup – Player A who has a Steamer wOBA projection of .350 but who is hitting .290 4 months into the season or Player B whom Steamer projects at .330, but is hitting .375 with 400 PA in July? If you said, “Player A,” I think you are either lying or you are in a very, very small minority.

Let’s start out by looking at some players whose current projection and season-to-date performance are divergent. I’ll use Steamer ROS (rest-of-season) wOBA projections from Fangraphs as compared to their actual 2014 wOBA. I’ll include anyone who has at least 200 PA and the absolute difference between their wOBA and wOBA projection is at least 40 points. The difference between a .320 and .360 hitter is the difference between an average player and a star player like Pujols or Cano, and the difference between a .280 and a .320 batter is like comparing a light-hitting backup catcher to a league average hitter.

Believe it or not, even though we are 40% into the season, around 20% of all qualified (by PA) players have a current wOBA projection that is more than 39 points greater or less than their season-to-date wOBA.

Players whose projection is higher than their actual

Name, PA, Projected wOBA, Actual wOBA

Cargo 212 .375 .328
Posey 233 .365 .322
Butler 258 .351 .278
Wright 295 .351 .307
Mauer 263 .350 .301
Craig 276 .349 .303
McCann 224 .340 .286
Hosmer 287 .339 .284
Swisher 218 .334 .288
Aoki 269 .330 .285
Brown 236 .329 .252
Alonso 223 .328 .260
Brad Miller 204 .312 .242
Schierholtz 219 .312 .265
Gyorko 221 .311 .215
De Aza 221 .311 .268
Segura 258 .308 .267
Bradley Jr. 214 .308 .263
Cozart 228 .290 .251

Players whose projection is lower than their actual

Name, PA, Projected wOBA, Actual wOBA

Tulo 259 .403 .472
Puig 267 .382 .431
V. Martinez 257 .353 .409
N. Cruz 269 .352 .421
LaRoche 201 .349 .405
Moss 255 .345 .392
Lucroy 258 .340 .398
Seth Smith 209 .337 .403
Carlos Gomez 268 .334 .405
Dunn 226 .331 .373
Morse 239 .329 .377
Frazier 260 .329 .369
Brantley 277 .327 .386
Dozier 300 .316 .357
Solarte 237 .308 .354
Alexi Ramirez 271 .306 .348
Suzuki 209 .302 .348

Now tell the truth: who would you rather have at the plate tonight or tomorrow – Billy Butler, with his .359 projection and .278 actual, or Carlos Gomez, projected at .334 but currently hitting .405? How about Hosmer (not to pick on the Royals) or Michael Morse? If you are like most people, you probably would choose Gomez over Butler, despite the fact that Gomez is projected as 25 points worse, and Morse over Hosmer, even though Hosmer is supposedly 10 points better than Morse. (I am ignoring park effects to simplify this part of the analysis.)

So how can we test whether your decision or blindly going with the Steamer projections would likely be the correct thing to do, emotions and recency bias aside? That’s relatively simple, if we are willing to get our hands dirty doing some lengthy and somewhat complicated historical mid-season projections. Luckily, I’ve already done that. I have a database of my own proprietary projections on a month-by-month basis for 2007-2013. So, for example, 2 months into the 2013 season, I have a season-to-date projection for all players. It incorporates their 2009-2012 performance, including AA and AAA, as well as their 2-month performance (again, including the minor leagues) so far in 2013. These projections are park and context neutral. We can then compare the projections with both their season-to-date performance (also context-neutral) and their rest-of-season performance in order to see whether, for example, a player who is projected at .350 even though he has hit .290 after 2 months will perform any differently in the last 4 months of the season than another player who is also projected at .350 but who has hit .410 after 2 months. We can do the same thing after one month (looking at the next 5 months of performance) or 5 months (looking at the final month performance). The results of this analysis should suggest to us whether we would be better off with Butler for the remainder of the season or with Gomez, or with Hosmer or Morse.
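The grouping-and-comparison procedure described here is simple enough to sketch. This is only an illustration, assuming a hypothetical table with one row per player-season and made-up column names, not the actual projection database:

```python
# A rough sketch of the comparison described above, assuming a hypothetical table
# with one row per player-season (column names are made up): a mid-season
# projection, the season-to-date wOBA, and the rest-of-season wOBA.
import pandas as pd

def compare_groups(df: pd.DataFrame, threshold: float = 0.040) -> dict:
    """Split players by how far season-to-date wOBA diverges from the projection,
    then compare each group's rest-of-season wOBA to its mean projection."""
    diff = df["woba_to_date"] - df["projection"]
    groups = {"hot": df[diff >= threshold], "cold": df[diff <= -threshold]}
    return {
        name: {
            "n": len(grp),
            "projection": grp["projection"].mean(),
            "to_date": grp["woba_to_date"].mean(),
            "rest_of_season": grp["woba_rest_of_season"].mean(),
        }
        for name, grp in groups.items()
    }
```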

I took all players in 2007-2013 whose projection was at least 40 points less than their actual wOBA after one month into the season. They had to have had at least 50 PA. There were 116 such players, or around 20% of all qualified players. Their collective projected wOBA was .341 and they were hitting .412 after one month with an average of 111 PA per player. For the remainder of the season, in a total of 12,922 PA, or 494 PA per player, they hit .346, or 5 points better than their projection, but 66 points worse than their season-to-date performance. Again, all numbers are context (park, opponent, etc.) neutral. One standard deviation in that many PA is 4 points, so a 5 point difference between projected and actual is not statistically significant. There is some suggestion, however, that the projection algorithm is slightly undervaluing the “hot” (as compared to their projection) hitter during the first month of the season, perhaps by giving too little weight to the current season.
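For a back-of-envelope check on the “one standard deviation is about 4 points” figure, using the stated total of 12,922 PA and assuming the per-PA spread of wOBA outcomes is roughly 0.5 (an assumed round number; the exact value depends on league event frequencies):

```python
# Back-of-envelope check of the "one standard deviation is about 4 points" figure,
# assuming the per-PA spread of wOBA outcomes is roughly 0.5 (an assumed round
# number; single-PA wOBA values run from 0 on an out to about 2 on a home run).
import math

per_pa_sd = 0.5       # assumed standard deviation of a single PA's wOBA value
total_pa = 12922      # the group's combined rest-of-season PA, from the text

sd_of_group_woba = per_pa_sd / math.sqrt(total_pa)
print(round(sd_of_group_woba, 4))   # ~0.0044, i.e., roughly 4 points of wOBA
```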

What about the players who were “cold” (relative to their projections) the first month of the season? There were 92 such players and they averaged 110 PA during the first month with a .277 wOBA. Their projection after 1 month was .342, slightly higher than the first group. Interestingly, they only averaged 464 PA for the remainder of the season, 30 PA less than the “hot” group, even though they were equivalently projected, suggesting that managers were benching more of the “cold” players or moving them down in the batting order. How did they hit for the remainder of the season? .343 or almost exactly equal to their projection. This suggests that managers are depriving these players of deserved playing time. By the way, after only one month, more than 40% of all qualified players are hitting 40 points better or worse than their projections. That’s a lot of fodder for internet articles and sports talk radio!

You might be thinking, “Well, sure, if a player is “hot” or “cold” after only a month, it probably doesn’t mean anything.” In fact, most commentaries you read or hear will give the standard SSS (small sample size) disclaimer only a month or even two months into the season. But what about halfway into the season? Surely, a player’s season-to-date stats will have stabilized by then and we will be able to identify those young players who have “broken out,” old, washed-up players, or players who have lost their swing or their mental or physical capabilities.

About halfway into the season, around 9% of all qualified (50 PA per month) players were hitting at least 40 points worse than their projections, in an average of 271 PA. Their collective projection was .334 and their actual performance after 3 months and 271 PA was .283. Basically, these guys, despite being supposed league-average full-time players, stunk for 3 solid months. Surely, they would stink, or at least not be up to “par,” for the rest of the season. After all, wOBA at least starts to “stabilize” after almost 300 PA, right? Well, these guys, just like the “cold” players after one month, hit .335 for the remainder of the season, 1 point better than their projection. So after 1 month or 3 months, season-to-date performance tells us nothing that our up-to-date projection doesn’t tell us. A player is expected to perform at his projected level regardless of his current season performance after 3 months, at least for the “cold” players. What about the “hot” ones, you know, the ones who may be having a breakout season?

There were also about 9% of all qualified players who were having a “hot” first half. Their collective projection was .339, and their average performance was .391 after 275 PA. How did they hit the remainder of the season? .346, 7 points better than their projection and 45 points worse than their actual performance. Again, there is some suggestion that the projection algorithm is undervaluing these guys for some reason. Again, the “hot” first-half players accumulated 54 more PA over the last 3 months of the season than the “cold” first-half players despite hitting only 11 points better. It seems that managers are over-reacting to that first-half performance, which should hardly be surprising.

Finally, let’s look at the last month of the season as compared to the first 5 months of performance. Do we have a right to ignore projections and simply focus on season-to-date stats when it comes to discussing the future – the last month of the season?

The 5-month “hot” players were hitting .391 in 461 PA. Their projection was .343, and they hit .359 over the last month. So we are still twice as close to the projection as we are to the actual season-to-date numbers, although there is a strong inference that the projection is not weighting the current season enough, or is doing something else wrong, at least for the “hot” players.

For the “cold” players, we see the same thing as we do at any point in the season. The season-to-date stats are worthless if you know the projection. 3% of all qualified players (at least 250 PA) hit at least 40 points worse than their projection after 5 months. They were projected at .338, hit .289 for the first 5 months in 413 PA, and then hit .339 in that last month. They only got an average of 70 PA over the last month of the season, as compared to 103 PA for the “hot” batters, despite proving that they were league-average players even though they stunk up the field for 5 straight months.

After 4 months, BTW, “cold” players actually hit 7 points better than their projection for the last 2 months of the season, even though their actual season-to-date performance was 49 points worse. The “hot” players hit only 10 points better than their projection despite hitting 52 points better over the first 4 months.

Let’s look at the numbers in another way. Let’s say that we are 2 months into the season, similar to the present time. How do .350-ish projected hitters fare for the rest of the season if we split them into two groups: one, those that have been “cold” so far, and two, those that have been “hot”? This is like our Butler-or-Gomez, Morse-or-Hosmer question.

I looked at all “hot” and “cold” players who were projected at greater than .330 after 2 months into the season. The “hot” ones, the Carlos Gomezes and Michael Morses, hit .403 for 2 months and were then projected at .352. How did they hit over the rest of the season? .352.

What about the “cold” hitters who were also projected at greater than .330? These are the Butlers and Hosmers. They hit a collective .303 for the first 2 months of the season, their projection was .352, the same as the “hot” hitters, and their wOBA for the last 4 months was .349! Wow. Both groups of good hitters (according to their projections) hit almost exactly the same. They were both projected at .352, and one group hit .352 and the other hit .349. Of course, the “hot” group got 56 more PA per player over the remainder of the season, despite being projected the same and performing essentially the same.

Let’s try those same hitters who are projected at better than .330, but who have been “hot” or “cold” for 5 months rather than only 2.

Cold

Projected: .350 Season-to-date: .311 ROS: .351

Hot

Projected: .354 Season-to-date: .393 ROS: .363

Again, after 5 months, the players projected well who have been hot are undervalued by the projection, but not nearly as much as the season-to-date performance might suggest. Good players who have been cold for 5 months hit exactly as projected and the “cold” 5 months has no predictive value, other than how it changes the up-to-date projection.

For players who are projected poorly, less than a .320 wOBA, the 5-month hot ones outperform their projections and the cold ones under-perform their projections, both by around 8 points. After 2 months, there is no difference – both “hot” and “cold” players perform at around their projected levels over the last 4 months of the season.

So what are our conclusions? Until we get into the last month or two of the season, season-to-date stats provide virtually no useful information once we have a credible projection for a player. For “hot” players, we might “bump” the projection by a few points in wOBA even 2 or 3 months into the season – apparently the projection is slightly under-valuing these players for some reason. However, it does not appear to be correct to prefer a “hot” player like Gomez versus a “cold” one like Butler when the “cold” player is projected at 25 points better, regardless of the time-frame. Later in the season, at around the 4th or 5th month, we might need to “bump” our projection, at least my projection, by 10 or 15 points to account for a torrid first 4 or 5 months. However, the 20 or 25 point better player, according to the projection, is still the better choice.

For “cold” players, season-to-date stats appear to provide no information whatsoever over and above a player’s projection, regardless of what point in the season we are at. So, when should we be worried about a hitter if he is performing far below his “expected” performance? Never. If you want a good estimate of his future performance, simply use his projection and ignore his putrid season-to-date stats.

In the next installment, I am going to look at the spread of performance for hot and cold players. You might hypothesize that while being hot or cold for 2 or 3 months has almost no effect on the next few months of performance, perhaps it does change the distribution of that performance among the group of  hot and cold players.

 

 

Note: These are rules of thumb which apply 90-99% of the time (or so). Some of them have a few or even many exceptions and nuances to consider. I do believe, however, that if every manager followed these religiously, even without employing any exceptions or considering any of the nuances, that he would be much better off than the status quo. There are also many other suggestions, commandments, and considerations that I would use, that are not included in this list.

1)      Thou shalt never use individual batter/pitcher matchups, recent batter or pitcher stats, or even seasonal batter or pitcher stats. Ever. The only things this organization uses are projections based on long-term performance. You will use those constantly.

2)      Thou shalt never, ever use batting average again. wOBA is your new BA. Learn how to construct it and learn what it means.

3)      Thou shalt be given and thou shalt use the following batter/pitcher matchups every game: Each batter’s projection versus each pitcher. They include platoon considerations. Those numbers will be used for all your personnel decisions. They are your new “index cards.”

4)      Thou shalt never issue another IBB again, other than obvious late and close-game situations.

5)      Thou shalt instruct your batters whether to sacrifice bunt or not, in all sacrifice situations, based on a “commit line.” If the defense plays in front of that line, thy batters will hit away. If they play behind the line, thy batters will bunt. If they are at the commit line, they may do as they please. Each batter will have his own commit line against each pitcher. Some batters will never bunt.

6)      Thou shalt never sacrifice with runners at first and third, even with a pitcher at bat. You may squeeze if you want. With 1 out and a runner on 1st only, your worst-hitting pitchers will bunt.

7)      Thou shalt keep thy starter in or remove him based on two things and two things only: One, his pitch count, and two, the number of times he has faced the order. Remember that ALL pitchers lose 1/3 of a run in ERA each time through the order, regardless of how they are pitching thus far.

8)      Thou shalt remove thy starter for a pinch hitter in a high leverage situation if he is facing the order for the 3rd time or more, regardless of how he is pitching.

9)      Speaking of leverage, thou shalt be given a leverage chart with score, inning, runners, and outs. Use it!

10)   Thou shalt, if at all possible, use thy best pitchers in high leverage situations and thy worst pitchers in low leverage situations, regardless of the score or inning. Remember that “best” and “worst” are based on your new “index cards” (batter v. pitcher projections) or your chart which contains each pitcher’s generic projection. It is never based on how they did yesterday, last week, or even the entire season. Thou sometimes may use “specialty” pitchers, such as when a GDP or a K is at a premium.

11)   Thou shalt be given a chart for every base runner and several of the most common inning, out, and score situations. There will be a number next to each player’s name for each situation. If the pitcher’s time home plus the catcher’s pop time are less than that number, thy runner will not steal. If it is greater, thy runner may steal. No runner shall steal second base with a lefty pitcher on the mound.

12)   Thou shalt not let thy heart be troubled by the outcome of your decisions. No one who works for this team will ever question your decision based on the outcome. Each decision you make is either right, wrong, or a toss-up, before we know, and regardless of, the outcome.

13)   Thou shalt be held responsible for your decisions, also regardless of the outcome. If your decisions are contrary to what we believe as an organization, we would like to hear your explanation and we will discuss it with you. However, you are expected to make the right decisions at all times, based on the beliefs and philosophies of the organization. We don’t care what the fans or the media think.  We will take care of that. We will all make sure that our players are on the same page as we are.

14)   Finally, thou shalt know that we respect and admire your leadership and motivational skills. That is one of the reasons we hired you. However, if you are not on board with our decision-making processes and willing to employ them at all times, please find yourself another team to manage.

Yesterday, I posted an article describing how I modeled, to some extent, whether and by how much pitchers may be able to pitch in such a way as to allow fewer or more runs than their components – including the more subtle ones, like balks, SB/CS, WP, catcher PB, GIDP, and ROE – suggest.

For various reasons, I suggest taking these numbers with a grain of salt. For one thing, I need to tweak my RA9 simulator to take into consideration a few more of these subtle components. For another, there may be some things that stick with a pitcher from year to year that have nothing to do with his “RA9 skill” but which serve to increase or decrease run scoring, given the same set of components. Two of these are a pitcher’s outfielder arms and the vagaries of his home park, both of which have an effect on base runner advances on hits and outs. Using a pitcher’s actual sac flies against will mitigate this, but the sim is also using league averages for base runner advances on hits, which, as I said, can vary from pitcher to pitcher, and tend to persist from year to year (if a pitcher stays on the same team) based on his outfielders and his home park. Like DIPS, it would be better to do these correlations only on pitchers who switch teams, but I fear that the sample would be too small to get any meaningful results.

Anyway, I have a database now of the last 10 years’ differences between a pitcher’s RA9 and his sim RA9 (the runs per 27 outs generated by my sim), for all pitchers who threw to at least 100 batters in a season.

First here are some interesting categorical observations:

Jared Cross, of Steamer projections, suggested to me that perhaps some pitchers, like lefties, might hold base runners on first base better than others, and therefore depress scoring a little as compared to the sim, which uses league-average base running advancement numbers. Well, lefties actually did a hair worse in my database: their RA9 was .02 greater than their sim RA. Righties were .01 better. That does not necessarily mean that RHP have some kind of RA skill that LHP do not have. It is more likely a bias in the sim that I am not correcting for.

How about the number of pitches in a pitcher’s repertoire? I hypothesized that pitchers with more pitches would be better able to tailor their approach to the situation. For example, with a base open, you want your pitcher to be able to throw lots of good off-speed pitches in order to induce a strikeout or weak contact, since you don’t mind if he walks the batter.

I was wrong. Pitchers with 3 or more pitches that they throw at least 10% of the time are .01 runs worse in RA9. Pitchers with only 2 or fewer pitches are .02 runs better. I have no idea why that is.

How about pitchers who are just flat-out good in their components, such that their sim RA is low, like under 4.00 runs? Their RA9 is .04 worse. Again, there might be some bias in the sim which is causing that. Or perhaps if you just go out there and “air it out,” trying to get as many outs and strikeouts as possible regardless of the situation, you are not pitching optimally.

Conversely, pitchers with a sim RA of 4.5 or greater shave .03 points off their RA9. If you are over 5 in your sim RA, your actual RA9 is .07 points better and if you are below 3.5, your RA9 is .07 runs higher. So, there probably is something about having extreme components that even the sim is not picking up. I’m not sure what that could be. Or, perhaps if you are simply not that good of a pitcher, you have to find ways to minimize run scoring above and beyond the hits and walks you allow overall.

For the NL pitchers, their RA9 is .05 runs better than their sim RA, and for the AL, they are .05 runs worse. So the sim is not doing a good job with respect to the leagues, likely because of pitchers batting. I’m not sure why, but I need to fix that. For now, I’ll adjust a pitcher’s sim RA according to his league.

You might think that younger pitchers would be “throwers” and older ones would be “pitchers” and thus their RA skill would reflect that. This time you would be right – to some extent.

Pitchers less than 26 years old were .01 runs worse in RA9. Pitchers older than 30 were .03 better. But that might just reflect the fact that pitchers older than 30 are just not very good – remember, we have a bias in terms of quality of the sim RA and the difference between that and regular RA9.

Actually, even when I control for the quality of the pitcher, the older pitchers had more RA skill than the younger ones by around .02 to .04 runs. As you can see, none of these effects, even if they are other than noise, is very large.

Finally, here are the lists of the 10 best and worst pitchers with respect to “RA skill,” with no commentary. I adjusted for the “quality of the sim RA” bias, as well as the league bias. Again, take these with a large grain of salt, considering the discussion above.

Best, 2004-2013:

Shawn Chacon -.18

Steve Trachsel -.18

Francisco Rodriguez -.18

Jose Mijares -.17

Scott Linebrink -.16

Roy Oswalt -.16

Dennys Reyes -.15

Dave Riske -.15

Ian Snell -.15

5 others tied for 10th.

Worst:

Derek Lowe .27

Luke Hochevar .20

Randy Johnson .19

Jeremy Bonderman .18

Blaine Boyer .18

Rich Hill .18

Jason Johnson .18

5 others tied for 8th place.

(None of these pitchers stand out to me one way or another. The “good” ones are not any you would expect, I don’t think.)

We showed in The Book that there is a small but palpable “pitching from the stretch” talent. That of course would affect a pitcher’s RA as compared to some kind of base runner- and “timing”-neutral measure like FIP or component ERA, or really any of the ERA estimators.

As well, a pitcher’s ability to tailor his approach to the situation, runners, outs, score, batter, etc., would also implicate some kind of “RA talent,” again, as compared to a “timing” neutral RA estimator.

A few months ago I looked to see if RE24 results for pitchers showed any kind of talent for pitching to the situation, by comparing that to the results of a straight linear weights analysis or even a BaseRuns measure. I found no year-to-year correlations for the difference between RE24 and regular linear weights. In other words, I was trying to see if some pitchers were able to change their approach to benefit them in certain bases/outs situations more than other pitchers. I was surprised that there was no discernible correlation, i.e., that it didn’t seem to be much of a skill if at all. You would think that some pitchers would either be smarter than others or have a certain skill set that would enable them, for example, to get more K with a runner on 3rd and less than 2 outs, more walks and fewer hits with a base open, or fewer home runs with runners on base or with 2 outs and no one on base. Obviously all pitchers, on the average, vary their approach a lot with respect to these things, but I found nothing much when doing these correlations. Essentially an “r” of zero.

To some extent the pitching from the stretch talent should show up in comparing RE24 to regular lwts, but it didn’t, so again, I was a little surprised at the results.

Anyway, I decided to try one more thing.

I used my “pitching sim” to compute a component ERA for each pitcher. I tried to include everything that would create or not create runs while he was pitching – WP/PB, SB/CS, GIDP, and ROE, in addition to singles, doubles, triples, HR, BB, and so on. I treated an IBB as half a BB in the sim, since I didn’t program the IBB into it.
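The sim itself is play-by-play based and proprietary, but to illustrate the general idea of turning a component line into an expected runs-per-27-outs figure, here is a much cruder stand-in: a basic BaseRuns-style estimator with one commonly used set of coefficients. It ignores the subtle components and all sequencing, which is precisely what the real sim does not ignore:

```python
# A much cruder stand-in for the sim: a basic BaseRuns-style component run
# estimator with one commonly used set of coefficients. It ignores SB/CS, WP/PB,
# GIDP, ROE, and all sequencing, so it only illustrates the general idea of
# converting a pitcher's components into expected runs per 27 outs.
def baseruns_ra9(h, bb, hr, tb, outs):
    a = h + bb - hr                                        # baserunners other than HR
    b = (1.4 * tb - 0.6 * h - 3 * hr + 0.1 * bb) * 1.02    # advancement factor
    d = hr                                                 # batters who score themselves
    runs = a * b / (b + outs) + d
    return 27.0 * runs / outs

# e.g., a 200-IP season: 600 outs, 190 H, 60 BB, 22 HR, 307 TB -> roughly 4.1
print(round(baseruns_ra9(h=190, bb=60, hr=22, tb=307, outs=600), 2))
```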

So now, for each year, I recorded the difference between a pitcher’s RA9 and his simulated component RA9, and then ran year-to-year correlations. This was again to see if I could find a “RA talent” wherever it may lie – clutch pitching, stretch talent, approach talent, etc.

I got a small year-to-year correlation which, as always, varied with the underlying sample size – TBF in each of the paired years. When I limited it to pitchers with at least 500 TBF in each year, I got an “r” of .142 with an average PA of 791 in each year. That comes out to a 50% regression at around 5000 PA, or 5 years for a full-time starter, similar to BABIP for pitchers. In other words, the “stabilization” point was around 5,000 TBF.

If that .142 is accurate (at 2 sigma the confidence interval is .072 to .211), I think that is pretty interesting. For example, notable “ERA whiz” Tom Glavine from 2001 to 2006 was an average of .246 in RA9 better than his sim RA9 (simulated component RA). If we regress that difference 50%, we get .133 runs per game, which is pretty sizable I think. That is over 1/3 of a win per season. Notable “ERA hack” Ricky Nolasco from 2008 to 2010 (I only looked at 2001-2010) was an average of .357 worse. Regress that 62.5%, and we get .134 runs per game worse, also 1/3 of a win per season.

So, for example, if you want to know how to reconcile fWAR (FG) and bWAR (B-R) for pitchers, take the difference and regress according to the number of TBF, using the formula 5000/(5000+TBF) for the amount of regression.
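In code, that reconciliation looks like this (a minimal sketch; the 5,000 constant is the one derived above, and the example numbers are hypothetical):

```python
# A minimal sketch of that reconciliation: shrink the gap between a pitcher's
# runs-allowed value and his component value by 5000 / (5000 + TBF).
def regressed_ra_skill(ra9_minus_component, tbf):
    """Estimate how much of the RA9-vs-components gap to treat as real."""
    regression = 5000.0 / (5000.0 + tbf)      # fraction of the gap treated as noise
    return ra9_minus_component * (1.0 - regression)

# Hypothetical example: a gap of -0.25 runs per 9 over 5000 TBF is cut in half.
print(round(regressed_ra_skill(-0.25, 5000), 3))   # -0.125
```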

Here are a couple more interesting ones, off the top of my head. I thought that Livan Hernandez seemed like a crafty pitcher, despite having inferior stuff late in his career. Sure enough, he out-pitched his components by around .164 runs per game over 9 seasons. After regressing, that’s .105 rpg.

The other name that popped into my head was Wakefield. I always wondered if a knuckler was able to pitch to the situation as well as other pitchers could. It does not seem like they can, with only one pitch with comparatively little control. His RA9 was .143 worse than his components suggest, despite his FIP being .3 runs per 9 worse than his ERA! After regressing, he is around .095 worse than his simulated component RA.

Of course, after looking at Wake, we have to check Dickey as well. He didn’t start throwing a knuckle ball until 2005, and then only half the time. His average difference between RA9 and simulated RA9 is .03 on the good side, but our sample size for him is small with a total of only 1600 TBF, implying a regression of 76%.

If you want the numbers on any of your favorite or not-so-favorite pitchers, let me know in the comments section.

If anyone is out there (hello? helloooo?), as promised, here are the AL team expected winning percentages and their actual winning percentages, conglomerated over the last 5 years. In case you were waiting with bated breath, as I have been.

Combined results for all five years (AL 2009-2013), in order of the “best” teams to the “worst:”

Team, My WP, Vegas WP, Actual WP, Diff, My Starters, Actual Starters, My Batting, Actual Batting

NYA .546 .566 .585 .039 98 99 .30 .45
TEX .538 .546 .558 .020 102 95 .14 .24
OAK .498 .490 .517 .019 104 101 -.08 .07
LAA .508 .526 .522 .014 103 106 .07 .17
TBA .556 .544 .562 .006 100 102 .24 .17
BAL .460 .452 .463 .003 110 115 -.03 -.27
DET .548 .547 .550 .002 97 91 .21 .31
BOS .546 .596 .546 .000 99 98 .26 .36
CHW .489 .450 .488 -.001 99 97 -.16 -.29
TOR .479 .482 .478 -.001 106 107 -.05 .12
MIN .468 .469 .464 -.004 108 109 -.07 -.07
SEA .462 .464 .446 -.016 106 106 -.26 -.36
KCR .474 .460 .444 -.030 108 106 -.22 -.28
CLE .492 .469 .462 -.030 108 109 .13 .01
HOU .420 .420 .386 -.034 106 109 -.46 -.61

I find this chart quite interesting. As with the NL, it looks to me like the top over-performing teams are managed by stable high-profile, peer and player respected guys – Torre, Washington, Maddon, Scioscia, Leyland, Showalter.

Also, as with the NL teams, much of the differences between my model and the actual results are due to over-regression on my part, especially on offense. Keep in mind that I do include defense and base running in my model, so there may be some similar biases there.

Even after accounting for too much regression, some of the teams completely surprised me with respect to my model. Look at Oakland’s batting. I had them projected as a -.08 runs per game team and somehow they managed to produce .07 rpg. That’s a huge miss over many players and many years. There has to be something going on there. Perhaps they know a lot more about their young hitters than we (I) do. That extra offense alone accounts for 16 points in WP, almost all of their 19-point over-performance. Even the A’s pitching outdid my projections.

Say what you will about the Yankees, but even though my undershooting their offense cost my model 16 points in WP, they still over-performed by a whopping 39 points, or 6.3 wins per season! I’m sure Rivera had a little to do with that even though my model includes him as closer. Then there’s the Yankee Mystique!

Again, even accounting for my too-aggressive regression, I completely missed the mark with the TOR, CLE, and BAL offense. Amazingly, while the Orioles pitched 5 points in FIP- worse than I projected and .24 runs per game worse on offense, they somehow managed to equal my projection.

Other notable anomalies are the Rangers’ and Tigers’ pitching. Those two starting staffs outdid me by seven and six points in FIP-, respectively, which is around 1/4 run in ERA – 18 points in WP. Texas did indeed win games at a 20 point clip better than I expected, but the Tigers, despite out-pitching my projections by 18 points in WP, AND outhitting me by another 11 points in WP, somehow managed to only win .3 games per season more than I expected. Must be that Leyland (anti-) magic!

Ok, enough of the bad Posnanski and Woody Allen rants and back to some interesting baseball analysis – sort of. I’m not exactly sure what to make of this, but I think you might find it interesting, especially if you are a fan of a particular team, which I’m pretty sure most of you are.

I went back five years and compared every team’s performance in each and every game to what would be expected based on their lineup that day, their starting pitcher, an estimate of their reliever and pinch hitter usage for that game, as well as the same for their opponent. Basically, I created a win/loss model for every game over the last five years. I didn’t simulate the game as I have done in the past. Instead, I used a theoretical model to estimate mean runs scored for each team, given a real-time projection for all of the relevant players, as well as the run-scoring environment, based on the year, league, and ambient conditions, like the weather and park (among other things).

When I say “real-time” projections, they are actually not up-to-the game projections. They are running projections for the year, updated once per month. So, for the first month of every season, I am using pre-season projections, then for the second month, I am using pre-season projections updated to include the first month’s performance, etc.

For a “sanity check” I am also keeping track of a consensus expectation for each game, as reflected by the Las Vegas line, the closing line at Pinnacle Sports Book, one of the largest and most respected online sports books in the internet betosphere.

The results I will present are the combined numbers for all five years, 2009 to 2013. Basically, you will see something like, “The Royals had an expected 5-year winning% of .487 and this is how they actually performed – .457.” I will present two expected WP actually – one from my models and one from the Vegas line. They should be very similar. What is interesting of course is the amount that the actual WP varies from the expected WP for each team. You can make of those variations what you want. They could be due to random chance, bad expectations for whatever reasons, or poor execution by the teams for whatever reasons.

Keep in mind that the composite expectations for the entire 5-year period are based on the expectation of each and every game. And because those expectations are updated every month by my model and presumably every day by the Vegas model, they reflect the changing expected talent of the team as the season progresses. By that, I mean this: rather than using a pre-season projection for every player and then applying that to the personnel used or presumed used (in the case of the relievers and pinch hitters) in every game that season, after the first 30 games, for example, those projections are updated and thus reflect, to some extent, actual performance that season. For example, last year, pre-season, Roy Halladay might have been expected to have a 3.20 ERA or something like that. After he pitched horribly for a few weeks or months, and it was well known that he was injured, his expected performance presumably changed in my model as well as in the Vegas model. Again, the Vegas model likely changes every day, whereas my model can only change after each month, or 5 times per season.
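To be concrete about what such a game-level model involves, here is a bare-bones stand-in, not my actual model: it simply converts two teams’ expected runs per game (after the lineup, park, and environment adjustments described above) into a single-game win probability using a Pythagenpat-style exponent, which is one common choice for that conversion:

```python
# Not the actual model used in the article - just a bare-bones stand-in showing
# how two teams' expected runs per game can be turned into a single-game win
# probability. The Pythagenpat-style exponent is one common choice.
def game_win_prob(exp_runs_team, exp_runs_opp):
    rpg = exp_runs_team + exp_runs_opp        # run environment for this game
    x = rpg ** 0.287                          # Pythagenpat-style exponent
    return exp_runs_team ** x / (exp_runs_team ** x + exp_runs_opp ** x)

# Example: expected to score 4.8 runs against an opponent expected to score 4.2.
print(round(game_win_prob(4.8, 4.2), 3))   # ~0.56
```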

Here are the combined results for all five years (NL 2009-2013):

Team, My Model, Vegas, Actual, My Exp. Starting Pitching (RA9-), Actual Starting Pitching (FIP-), My Exp. Batting (marginal rpg), Actual Batting (marginal rpg)

ARI .496 .495 .486 103 103 .00 -.08
ATL .530 .545 .564 100 97 .25 .21
CHC .488 .478 .446 103 102 -.09 -.17
CIN .522 .517 .536 104 108 .01 .12
COL .494 .500 .486 102 96 -.04 -.09
MIA .493 .472 .453 102 102 .01 -.05
LAD .524 .526 .542 96 99 .02 -.03
MLW .519 .509 .504 105 108 .13 .30
NYM .474 .470 .464 106 108 -.02 .01
PHI .516 .546 .554 96 98 -.01 .07
PIT .461 .454 .450 109 111 -.19 -.28
SDP .469 .463 .483 110 115 -.12 -.26
STL .532 .554 .558 100 98 .23 .40
SFG .506 .518 .515 98 102 -.19 -.30
WAS .497 .484 .486 103 103 .01 .07

If you are an American league fan, you’ll have to wait until Part II. This is a lot of work, guys!

By the way, if you think that the Vegas line is remarkably good, and much better than mine, it is at least partly an illusion. They get to “cheat,” and to some extent they do. I can do the same thing, but I don’t. I am not looking at the expected WP and result of each game and then doing some kind of RMS error to test the accuracy of my model and the Vegas “model” on a game-by-game basis. I am comparing the composite results of each model to the composite W/L results of each team, for the entire 5 years. That probably makes little sense, so here is an example which should explain what I mean by the oddsmakers being able to “cheat,” thus making their composite odds close to the actual odds for the entire 5-year period.

Let’s say that before the season starts Vegas thinks that the Nationals are a .430 team. And let’s say that after 3 months, they were a .550 team. Now, Vegas by all rights should have them as something like a .470 team for the rest of the season – numbers for illustration purposes only – and my model should too, assuming that I started off with .430 as well. And let’s say that the updated expected WP of .470 were perfect and that they went .470 for the second half. Vegas and I would have a composite expected WP of .450 for the season, .430 for the first half and .470 for the second half. The Nationals record would be .510 for the season, and both of our models would look pretty bad.

However, Vegas, to some extent uses a team’s W/L record to-date to set the lines, since that’s what the public does and since Vegas assumes that a team’s W/L record, even over a relatively short period of time, is somewhat indicative of their true talent, which it is of course. After the Nats go .550 for the first half, Vegas can set the second-half odds as .500 rather than .470, even if they think that .470 is truly the best estimate of their performance going forward.

Once they do that, their composite expected WP for the season will be (.430 + .500) / 2, or .465, rather than my .450. And even if the .470 were correct, and the Nationals go .470 for the second half, whose composite model is going to look better at the end of the season? Theirs will, of course.

If Vegas wanted to look even better for the season, they can set the second-half lines to .550, on the average. Even if that is completely wrong, and the team goes .470 over the second half, Vegas will look even better at the end of the season! They will be at .490 for the season, I will be at .450, and the Nats will have a final W/L percentage of .510. Vegas will look nearly perfect and I will look bad, even though we had the same “wrong” expectation for the first half of the season, and I was right on the money for the second half while they were completely and deliberately wrong. Quite the paradox, huh? So take those Vegas lines with a grain of salt as you compare them to my model and to the final composite records of the teams. I’m not saying that my model is necessarily better than the Vegas model, only that in order to fairly compare them, you would have to take them one game at a time, or always look at each team’s prospective results compared to the Vegas line or my model.
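If you did want to compare the two sets of game probabilities fairly, one simple way is a Brier score – the mean squared error of each predicted win probability against the actual 0/1 outcome. A minimal sketch (the variable names are made up):

```python
# One simple game-by-game comparison is a Brier score: the mean squared error of
# each predicted win probability against the actual 0/1 outcome. Lower is better,
# and it cannot be gamed by the composite-record trick described above.
def brier_score(games):
    """games: iterable of (predicted_win_prob, won) pairs, with won as 0 or 1."""
    games = list(games)
    return sum((p - won) ** 2 for p, won in games) / len(games)

# Hypothetical usage: compare brier_score(my_model_games) to brier_score(vegas_games).
```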

Here is the same table as above, ordered by the difference between my expected W/L percentage and each team’s actual W/L percentage. The fifth column is that difference. Call those differences whatever you want – luck, team “efficiency,” good or bad managing, player development, team chemistry, etc. I hope you find these numbers as interesting as I do!

Combined results for all five years (NL 2009-2013), in order of the “best” teams to the “worst:”

Team, My Model, Vegas, Actual, Difference, My Exp. Starting Pitching (RA9-), Actual Starting Pitching (FIP-), My Exp. Batting (marginal rpg), Actual Batting (marginal rpg)

PHI .516 .546 .554 .038 96 98 -.01 .07
ATL .530 .545 .564 .034 100 97 .25 .21
STL .532 .554 .558 .026 100 98 .23 .40
LAD .524 .526 .542 .018 96 99 .02 -.03
SDP .469 .463 .483 .014 110 115 -.12 -.26
CIN .522 .517 .536 .014 104 108 .01 .12
SFG .506 .518 .515 .009 98 102 -.19 -.30
COL .494 .500 .486 -.008 102 96 -.04 -.09
NYM .474 .470 .464 -.010 106 108 -.02 .01
PIT .461 .454 .450 -.010 109 111 -.19 -.28
ARI .496 .495 .486 -.010 103 103 .00 -.08
WAS .497 .484 .486 -.011 103 103 .01 .07
MLW .519 .509 .504 -.015 105 108 .13 .30
MIA .493 .472 .453 -.040 102 102 .01 -.05
CHC .488 .478 .446 -.042 103 102 -.09 -.17

As you can see from either chart, it appears as if my model over-regresses both batting and starting pitching, especially the former.

Also, a quick and random observation from the above chart – it may mean absolutely nothing. It seems as though those top teams, most of them at least, have had notable, long-term, “players’ managers,” like Manuel, LaRussa, Mattingly, Torre, Black, Bochy, and Baker, while you might not be able to even recall or name most of the managers of the teams at the bottom. It will be interesting to see if the American League teams evince a similar pattern.

Note: After you read the Woody Allen example, please read the note below it, which describes how I screwed up the analysis!

One of the most important concepts in science, and sometimes in life, involves something called Bayesian Probability or Bayes Theorem. Since you are reading a sabermetric blog, you are likely at least somewhat familiar with it. Simply put, it has to do with conditional probability. You have probably read or heard about Bayes with respect to the following AIDS testing hypothetical.

Let’s say that you are not in a high risk group for contracting HIV, the virus that causes AIDS, or, alternatively, you are randomly selected from the adult U.S. population at large. And let’s say that in that population, one in 500 persons is HIV positive. You take an initial ELISA test, and it turns out positive for HIV. What are the chances that you actually carry the disease?

The first thing you need to know is the false positive rate for that particular test. It is also around one in 500. We’ll ignore the fact that there are better, more accurate tests available, or that your blood specimen would be given another test if it had a positive ELISA. You might be tempted to think that your chance of carrying the virus is 99.8%, or one minus .002, where .002 is the one-in-500 false positive rate.

And you would be wrong. Enter Bayes. Since you only had a 1 in 500 chance of being HIV+ going in, there is a prior probability which must be added “to the equation.”

To understand how this works, and to avoid any semi-complex Bayesian formulas, we can frame the analysis like this:

In a population of 500,000 persons, there would be 1,000 carriers, since we specified that the HIV rate was one in 500. All of them would test positive, assuming a zero false-negative rate. Among the 499,000 non-carriers, there would be 998 false positives (a one in 500 chance).

So in our population of 500,000 persons, there are 1,998 positives and only 1,000 of these truly carry the virus. The other 998 positives are false. If you are selected from this population, and have a positive ELISA test, you naturally have a 1,000 in 1,998, or around a 50% chance of having the disease. That is a far cry from 99.8%, and should be somewhat comforting to anyone who fails an initial screening. That is basically how Bayes works, although it can get far more complex than that. It also applies to many, many other important things in life, including the guilt or innocence of a defendant in a criminal or civil prosecution, which I will address next.
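
If you prefer code to counting, here is a minimal sketch of the same calculation in Python, using the hypothetical 1-in-500 prevalence and 1-in-500 false positive rate from above (illustrative numbers, not real test characteristics):

```python
# Bayesian posterior for the hypothetical HIV screening example, by counting.
# Assumes a zero false-negative rate, as in the text.
prevalence = 1 / 500            # prior probability of carrying the virus
false_positive_rate = 1 / 500   # chance that a non-carrier tests positive

population = 500_000
true_positives = population * prevalence                               # 1,000 carriers, all test positive
false_positives = population * (1 - prevalence) * false_positive_rate  # ~998 false positives

posterior = true_positives / (true_positives + false_positives)
print(f"P(carrier | positive test) = {posterior:.1%}")                 # ~50%, not 99.8%
```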

Another famous, but less well-known, illustration of Bayes with respect to the criminal justice system, involves a young English woman named Sally Clark who was convicted of killing two of her children in 1999. In 1996, her first-born son died, presumably of Sudden Infant Death Syndrome (SIDS). In 1998, she gave birth to another boy, and he too died at home shortly after birth. She and her husband were soon arrested and charged with the murder of the two boys. The charges against her husband were eventually dropped.

Sally was convicted of both murders and sentenced to life in prison. By the way, she and her husband were affluent attorneys in England. At her trial, the following statistical “evidence” was presented by a pediatrician for the prosecution:

He testified that there was about a 1 in 8,500 chance that a baby in that situation would die of SIDS and therefore the chances that both of her children would perish from natural causes related to that syndrome was 1/8500 times 1/8500, or 1 in 73 million. Sally Clark was convicted largely on the “strength” of the statistical “evidence” that the chance of those babies both dying from SIDS, which was the defense’s assertion, was almost zero.

First of all, the 1 in 73 million might not be accurate. It is possible, in fact likely, according to the medical research, that those two probabilities are not independent. If you want to know the chances of two events occurring, multiplying the chance of one event by the other is only proper when the probabilities of the two events are independent – Stats 101. In this case, it was estimated by an expert witness for the defense in an appeal that if one infant in a family dies of SIDS, the chances that another one also dies similarly are 5 to 10 times higher than the initial probability.

So that reduces our probability to between one in 15 million and one in 7 million. In addition, the same expert witness, a Professor of Mathematics who studied the historical SIDS data, argued that the 1 in 8,500 was really closer to 1 in 1,300 due to the gender of the Clark babies and other genetic and environmental characteristics. If that number is accurate, that brings us down to 1 in 227,000 for the chances of her two boys both dying of SIDS. While a far cry from 1 in 73 million, that is still some pretty damning evidence, right?

Wrong! That 1 in 227,000 chance of both children dying of SIDS, or the inverse, a 99.9996% chance that the deaths were due to something other than SIDS, like murder, is like our erroneous 99.8% chance of having HIV when our initial AIDS test is positive. In order to calculate the true odds of Mrs. Clark being guilty of murder based solely on the statistical evidence, we need to know, as with the AIDS test, what the chances are, going in, before we know about the deaths, that a woman like Sally Clark would be a double murderer of her own children. That is exactly the same thing as needing to know the chances that we are an HIV carrier before we are tested, based upon the population we belong to. Remember, that was 1 in 500, which transformed our odds of having HIV from 99.8% to only 50%.

In this case, it is obviously difficult to estimate that a priori probability, the chances that a woman in Sally Clark’s shoes would murder her only two children back to back. The same mathematician estimated that the chances of Sally Clark being a double murderer, knowing nothing about what actually happened, was much rarer than the chances of both of her infants dying of natural causes. In fact, he claimed that it was 4 to 10 times rarer, which means that out of all young, affluent mothers with two new-born sons, maybe 1 in a million or 1 in 2 million would kill both of their children. That does not seem like an unreasonable estimate to me, although I have no way of knowing that off the top of my head.

So, as with the AIDS test, if there were a population of one million similar women with two newly born boys, around 4 of them (1 in 227,000) would suffer the tragedy of back-to-back deaths by SIDS, and only ½ to 1 would commit double infanticide. So the odds, based solely on these statistics, of Sally Clark being guilty as charged was around 10 to 20%, obviously not nearly enough to convict, and just a tad less than the 72,999,999 to 1 that the prosecution implied at her trial.
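
Framed the same way as the screening example, here is a quick sketch of that calculation, using the rough 1-in-227,000 double-SIDS figure and the 1-in-1-million to 1-in-2-million double-murder priors discussed above (all of these are, of course, rough estimates):

```python
# Rough Bayesian odds for the Sally Clark case, by counting in a hypothetical
# population of one million similar mothers of two newborn boys.
population = 1_000_000
p_double_sids = 1 / 227_000                       # chance both boys die of SIDS
double_sids = population * p_double_sids          # ~4.4 such families

for p_double_murder in (1 / 1_000_000, 1 / 2_000_000):
    murders = population * p_double_murder        # 1 or 0.5 such families
    p_guilt = murders / (murders + double_sids)
    print(f"prior {p_double_murder:.1e}: P(guilty | both deaths) ~ {p_guilt:.0%}")
# prints roughly 19% and 10%, the 10-20% range given in the text
```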

Anyway, after spending more than 3 years in prison, she won her case on appeal and was released. The successful appeal was based not only on the newly presented Bayesian evidence, but on the fact that the prosecution withheld evidence that her second baby had had an infection that may have contributed to his death from natural causes. Unfortunately, Sally Clark, unable to deal with the effects of her children’s deaths, the ensuing trial and incarceration, and public humiliation, died of self-inflicted alcohol poisoning 4 years later.

Which brings us to our final example of how Bayes can greatly affect an accused person’s chances of guilt or innocence, and perhaps more importantly, how it can cloud the judgment of the average person who is not statistically savvy, such as the judge and jurors, and the public, in the Clark case.

Unless you avoid the internet and the print tabloids like the plague, which is unlikely since you’re reading this blog, you no doubt know that Woody Allen was accused around 20 years ago of molesting his adopted 7-year old daughter, Dylan Farrow. The case was investigated back then, and no charges were ever filed. Recently, Dylan brought up the issue again in a NY Times article, and Allen issued a rebuttal and denial in his own NY Times op-ed. Dylan’s mother Mia, Woody Allen’s ex-partner, is firmly on the side of Dylan, and various family members are allied with one or the other. Dylan is unwavering in her memories and claims of abuse, and Mia is equally adamant about her belief that their daughter was indeed molested by Woody.

I am not going to get into any of the so-called evidence one way or another or comment on whether I think Woody is guilty or not. Clearly I am not in a position to do the latter. However, I do want to bring up how Bayes comes into play in this situation, much like with the AIDS and SIDS cases described above, and how, in fact, it comes into play in many “he-said, she-said” claims of sexual and physical abuse, whether the alleged victim is a child or an adult. If you have been following along so far, you probably know where I am going with this.

In cases like this, whether there is corroborating evidence or not, it is often alleged by the prosecution or the plaintiff in civil cases, that there is either no reason for the alleged victim to lie about what happened, or that given the emotional and graphic allegations or testimony of the victim, especially if it is a child, common sense tells us that the chances of the victim lying or being mistaken is extremely low. And that may well be the case. However, as you now know or already knew, according to Bayes, that is often not nearly enough to convict a defendant, even in a civil case where the burden on the plaintiff is based on a “preponderance of the evidence.”

Let’s use the Woody Allen case as an example. Again, we are going to ignore any incriminating or exculpatory evidence other than the allegations of Dylan Farrow, the alleged victim, and perhaps the corroborating testimony of her mother. Clearly, Dylan appears to believe that she was molested by Woody when she was seven, and clearly she seems to have been traumatically affected by her recollection of the experience. Please understand that I am not suggesting one way or another whether Dylan or anyone else is telling the truth or not. I have no idea.

Her mother, Mia, although she did not witness the alleged molestation, claims that, shortly after the incident, Dylan told her what happened and that she wholeheartedly believes her. Many people are predicating Allen’s likely guilt on the fact that Dylan seems to clearly remember what happened and that she is a credible person and has no reason to lie, especially at this point in her life and at this point in the timeline of the events. The statute of limitations precludes any criminal charges against Allen, and likely any civil action as well. I would assume however, that hypothetically, if this case were tried in court, the emotional testimony of Dylan would be quite damaging to Woody, as it often is in a sexual abuse case in which the alleged victim testifies.

Now let’s do the same Bayesian analysis that we did in the above two situations, the AIDS testing, and the murder case, and see if we can come up with any estimate as to the likely guilt or innocence of Woody Allen and perhaps other people accused of sexual abuse where the case hinges to a large extent on the credibility the alleged victim and his or her testimony. We’ll have to make some very rough assumptions, and again, we are assuming no other evidence, for or against.

First, we’ll assume that the chances of the victim and perhaps other people who were told of the alleged events by the victim, such as Dylan’s mother, Mia Farrow, lying or being delusional are very slim. So we are actually on the hypothetical prosecution or plaintiff’s side. ‘How is it possible that this victim and/or her mother would be lying about something as serious and traumatic as this?’

Now, even common sense tells us that it is possible, but not likely. I have no idea what the statistics or the assumptions in the field are, but surely there are many cases of fabrication by victims, false repressed memories in victims who are treated by so-called clinicians who specialize in repressed memories of physical or sexual abuse, memories that are “implanted” in children by unscrupulous parents, etc. There are many documented cases of all of the above and more. Again, I am not saying that this case fits into one of these profiles and that Dylan is lying or mistaken, although clearly that is possible.

Let’s put the number at 1 in 100 in a case similar to this. I’m not sure that any reasonable person could quarrel too much with that. I could easily make the case that it is higher than that. The population that we are talking about is this: First we have a 7-year-old child. The chances that the recollections of a young child might be faulty, including the chances that those recollections were planted or at least influenced by an adult, have to be greater than they would be for an adult. The fact that Woody and Mia were already having severe relationship problems and were in a bitter custody dispute also increases the odds that Dylan might have been “coached” or influenced in some manner by her mother. But I’ll leave the odds at 100-1 against. So, Allen is 99% guilty, right? You already know that the answer to that is, “No, not even close.”

So now we have to bring in Thomas Bayes as our expert witness. What are the chances that a random affluent and famous father like Woody Allen, again, not assuming anything else about the case or about Woody’s character or past or future behavior, would molest his 7-year-old daughter? Again, I have no idea what that number is, but we’ll also say that it’s 100-1 against. I think it is lower than that, but I could be wrong.

So now, in order to compute the chances that Allen, or anyone else in a similar situation, where the alleged victim is a very credible witness – like we believe that there is a 99% chance they are telling the truth – is guilty, we can simply take the ratio of the prior probability of guilt, assuming no accusations at all, to the chances of the victim lying or otherwise being mistaken. That gives us the odds that the accused is guilty. In this case, it is .01 divided by .01 or 1, which means that it is “even money” that Woody Allen is guilty as charged, again, not nearly enough to convict in a criminal court. Unfortunately, many, perhaps most, people, including jurors in an actual trial, would assume that if there were a 99% chance that the alleged victim was telling the truth, well, the accused is most likely guilty!

Edit: As James in the comments section, Tango on the Book blog, and probably others, have noted, I screwed up the Woody Allen analysis. The only way that Bayes would come into play as I describe would be if we assumed that 1 out of 100 random daughters in a similar situation would make a false accusation against a father like Woody. That seems like a rather implausible assumption, but maybe not – I don’t really know. In any case, if that were true, then while my Bayesian analysis would be correct and it would make Allen have around a 50% chance of being guilty, the chances that Dylan was not telling the truth would not be 1% as I indicated. It would be a little less than 50%.

So, really, the chances that she is telling the truth are equal to the chances of Allen being guilty, as you might expect. In this case, unlike in the other two examples I gave, the intuitive answer is correct, and Bayes is not really implicated. The only way that Bayes would be implicated in the manner I described would be if a prosecutor or plaintiff’s lawyer pointed out that 99% of all daughters do not make false accusations against a father like Woody, therefore there is a 99% chance that she is telling the truth. That would be wrong, but that was not the point I was making. So, mea culpa, I screwed up, and I thank those people who pointed that out to me, and I apologize to the readers.

I should add this:

The rate of false accusations is probably not closely tied to the rate of true accusations or the actual rate of abuse in any particular population. In other words, if the overall false accusation rate is 5-10% of all accusations, which is what the research suggests, that percentage will not be the same in a population where the actual incidence of abuse is 20% as in one where it is 5%. The ratio of false to true accusations is probably not constant. What is likely somewhat constant is the rate of false accusations relative to the number of potential accusations, although there are surely factors which would make false accusations more or less likely, such as the relationship between the mother and father.

What that means is that the extrinsic (outside of the accusation itself) chance that an accused person is guilty is related to the chances of a false accusation. If in one population the incidence of abuse is 20%, there is probably a much lower chance that a person who makes an accusation is lying, as compared to a population where the incidence of abuse is, say, 5%.

So, if an accused person is otherwise not likely to be guilty but for an accusation, a prosecutor would be misleading the jury if he reported that overall only 5% of all accusations were false therefore the chance that this accusation is false, is also 5%.

If that is hard to understand, imagine a population of persons where the chance of abuse is zero. There will still be some false accusations in that population, and since there will be no real ones, the chances that an accuser is telling the truth are zero. The percentage of false accusations is 100%. If the percentage of abuse in a population is very high, then the ratio of false to true accusations will be much lower than the overall 5-10% number.
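
A toy calculation illustrates the point. The numbers below are purely illustrative assumptions (a fixed 1% chance that any given potential case produces a false accusation, and every true case producing an accusation), not research figures:

```python
# Illustrative only: the share of accusations that are false depends on the
# underlying incidence of abuse, even if the propensity to make a false
# accusation is roughly constant across populations.
false_accusation_rate = 0.01   # assumed chance of a false accusation per potential case
report_rate = 1.0              # simplification: every true case produces an accusation

for abuse_incidence in (0.00, 0.05, 0.20):
    true_acc = abuse_incidence * report_rate
    false_acc = (1 - abuse_incidence) * false_accusation_rate
    share_false = false_acc / (true_acc + false_acc)
    print(f"abuse incidence {abuse_incidence:.0%}: {share_false:.0%} of accusations are false")
# 0% incidence -> 100% of accusations are false; 5% -> ~16%; 20% -> ~4%
```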

* And why I am getting tired of writers and analysts picking and choosing one or more of a bushel of statistics to make their (often weak) point.

Let’s first get something out of the way:

Let’s say that you know of this very good baseball player. He is well-respected and beloved on and off the field,  he played for only one, dynastic, team, he has several World Series rings, double digit All-Star appearances, dozens of awards, including 5 Gold Gloves, 5 Silver Sluggers, and a host of other commendations and accolades. Oh, and he dates super models and doesn’t use PEDs (we think).

Does it matter whether he is a 40, 50, 60, 80, or 120 win (WAR) player in terms of his HOF qualifications? I submit that the answer is an easy, “No, it doesn’t.” He is a slam dunk HOF’er whether he is indeed a very good, great, or all-time, inner-circle, great player. If you want to debate his goodness or greatness, fine. But it would be disingenuous to debate that in terms of his HOF qualifications. There are no serious groups of persons, including “stat-nerds,” whose consensus is that this player does not belong in the HOF.

Speaking of strawmen, before I lambaste Mr. Posnanski, which is the crux of this post, let me start by giving him some major props for pointing out that this article, by the “esteemed” and “venerable” writer Allen Barra, is tripe. That is Pos’ word – not mine. Indeed, the article is garbage, and Barra, at least when writing about anything remotely related to sabermetrics, is a hack. Unfortunately, Posnanski’s article is not much further behind in tripeness.

Pos’ thesis, I suppose, can be summarized by this, at the beginning of the article:

[Jeter] was a fantastic baseball player. But you know what? Alan Trammell was just about as good.

Here are Alan Trammell’s and Derek Jeter’s neutralized offensive numbers.

Trammell: .289/.357/.420
Jeter: .307/.375/.439

Jeter was a better hitter. But it was closer than you might think.

He points out several times in the article that, “Trammell was almost as good as Jeter, offensively.”

Let’s examine that proposition.

First though, let me comment on the awful argument, “Closer than you think.” Pos should be ashamed of himself for using that in an assertion or argument. It is a terrible way to couch an argument. First of all, how does he know, “What I think?” And who is he referring to when he says, “You?” The problem with that “argument,” if you want to even call it that, is that it is entirely predicated on what the purveyor decides “You are thinking.” Let’s say a player has a career OPS of .850. I can say, “I will prove that he is better than you think, assuming of course that you think that he is worse than .850, and it is up to me to determine what you think.” Or I can say the opposite. “This player is worse than you think, assuming of course, that you think that he better than an .850 player. And I am telling you that you are thinking that (or at least implying that)!”

Sometimes it is obvious what, “You think.” Often times it is not. And that’s even assuming that we know who, “You” is. In this case, is it obvious what, “You think of Jeter’s offense compared to Trammell?” I certainly don’t think so, and I know a thing or two about baseball. I am pretty sure that most knowledgeable baseball people think that both players were pretty good hitters overall and very good hitters for a SS. So, really, what is the point of, “It was closer than you think.” That is a throwaway comment and serves no purpose other than to make a strawman argument.

But that is only the beginning of what’s wrong with this premise and this article in general. He goes on to state or imply two things. One, that their “neutralized” career OPS’s are closer than their raw ones. I guess that is what he means by “closer than you think,” although he should have simply said, “Their neutralized offensive stats are closer than their non-neutralized ones,” rather than assuming what, “I think.”

Anyway, it is true that in non-neutralized OPS, they were 60 points apart, whereas once “neutralized,” at least according to the article, the gap is only 37 points, but:

Yeah, it is closer once “neutralized” (I don’t know where he gets his neutralized numbers from or how they were computed), but 37 points is a lot, man! I don’t think too many people would say that a 37 point difference, especially over 20-year careers, is “close.”

More importantly, a big part of that “neutralization” is due to the different offensive environments. Trammell played in a lower run scoring environment than did Jeter, presumably, at least partially, because of rampant PED use in the 90’s and aughts. Well, if that’s true, and Jeter did not use PED’s, then why should we adjust his offensive accomplishments downward just because many other players, the ones who were putting up artificially inflated and gaudy numbers, were using? Not to mention the fact that he had to face juiced-up pitchers and Trammell did not! In other words, you could easily make the argument, and probably should, that if (you were pretty sure that) a player was not using during the steroid era, his offensive stats should not be neutralized to account for the inflated offense during that era, assuming that that inflation was due to rampant PED use of course.

Finally, with regard to this, somewhat outlandish, proposition that Jeter and Trammell were similar in offensive value (of course, it depends on your definition of “similar” and “close” which is why using words like that creates “weaselly” arguments), let’s look at the (supposedly) context-neutral offensive runs or wins above replacement (or above average – it doesn’t matter what the baseline is when comparing players’ offensive value) from Fangraphs.

Jeter

369 runs batting, 43 runs base running

Trammell

124 runs batting, 23 runs base running

Whether you want to include base running in “offense” doesn’t matter. Look at the career batting runs. 369 runs to 124. Seriously, what was Posnanski drinking (aha, that’s it – Russian vodka! – he is in Sochi in case you didn’t know) when he wrote an entire article mostly about how similar Trammell and Jeter were, offensively, throughout their careers? And remember, these are linear weights batting runs, which are presented as “runs above or below average” compared to a league-average player. In other words, they are neutralized with respect to the run-scoring environment of the league. Again, with respect to PED use during Jeter’s era, we can make an argument that the gap between them is even larger than that.

So, Posnanski tries to make the argument that, “They are not so far apart offensively as some people might think (yeah, the people who look at their stats on Fangraphs!),” by presenting some “neutralized” OPS stats. (And again, he is claiming that a 37-point difference is “close,” which is eminently debatable.)

Before he even finishes, I can make the exact opposite claim – that they are worlds apart offensively, by presenting their career (similar length careers, by the way, although Jeter did play in 300 more games), league and park adjusted batting runs. They are 245 runs, or 24 wins, apart!

That, my friends, is why I am sick and tired of credible writers and even some analysts making their point by cherry picking one (or more than one) of scores of legitimate and semi-legitimate sabermetric and not-so-sabermetric statistics.

But, that’s not all! I did say that Posnanski’s article was hacktastic, and I didn’t just mean his sketchy use of one (not-so-great) statistic (“neutralized” OPS) to make an even sketchier point.

This:

By Baseball Reference’s defensive WAR Trammell was 22 wins better than a replacement shortstop. Jeter was nine runs worse.

By Fangraphs, Trammell was 76 runs better than a replacement shortstop. Jeter was 139 runs worse.

Is an abomination. First of all, when talking about defense, you should not use the term “replacement” (and you really shouldn’t use it for offense either). Replacement refers to the total package, not to one component of player value. Replacement shortstops could be average or above-average defenders and awful hitters, decent hitters and terrible defenders, or anything in between. In fact, for various reasons, most replacement players are average or so defenders and poor hitters.

And then he conflates wins and runs (don’t use both in the same paragraph – that is sure to confuse some readers), although I know that he knows the difference. In fact, I think he means “nine wins” worse in the first sentence, and not “nine runs worse.” But that mistake is on him for trying to use both wins and runs when talking about the same thing (Jeter and Trammell’s defense), for no good reason.

Pos then says:

You can buy those numbers or you can partially agree with them or you can throw them out entirely, but there’s no doubt in my mind that Trammell was a better defensive shortstop.

Yeah, yada, yada, yada. Yeah we know. No credible baseball person doesn’t think that Trammell was much the better defender. Unfortunately we are not very certain of how much better he was in terms of career runs/wins. Again, not that it matters in terms of Jeter’s qualifications for, or his eventually being voted into, the HOF. He will obviously be a first-ballot, near-unanimous selection, and rightfully so.

Yes, it is true that Trammell has not gotten his fair due from the HOF voters, for whatever reasons. But, comparing him to Jeter doesn’t help make his case, in my opinion. Jeter is not going into the HOF because he has X number of career WAR. He is going in because he was clearly a very good or great player, and because of the other dozen or more things he has going for him that the voters (and the fans) include, consciously or not, in terms of their consideration. Even if it could be proven that Jeter and Trammell had the exact same context-neutral statistical value over the course of their careers, Jeter could still be reasonably considered a slam dunk HOF’er and Trammell not worthy of induction (I am not saying that he isn’t worthy). It is still the Hall of Fame (which means many different things to many different people) and not the Hall of WAR or the Hall of Your Context-Neutral Statistical Value.

For the record, I love Posnanski’s work in general, but no one is perfect.

In The Book: Playing the Percentages in Baseball, we found that when a batter pinch hits against right-handed relief pitchers (so there are no familiarity or platoon issues), his wOBA is 34 points (10%) worse than when he starts and bats against relievers, after adjusting for the quality of the pitchers in each pool (PH or starter). We called this the pinch hitting penalty.

We postulated that the reason for this was that a player coming off the bench in the middle or towards the end of a game is not as physically or mentally prepared to hit as a starter who has been hitting and playing the field for two or three hours. In addition, some of these pinch hitters are not starting because they are tired or slightly injured.

We also found no evidence that there is a “pinch hitting skill.” In other words, there is no such thing as a “good pinch hitter.” If a hitter has had exceptionally good (or bad) pinch hitting stats, it is likely that that was due to chance alone, and thus it has no predictive value. The best predictor of a batter’s pinch-hitting performance is his regular projection with the appropriate penalty added.

We found a similar situation with designated hitters. However, their penalty was around half that of a pinch hitter, or 17 points (5%) of wOBA. Similar to the pinch hitter, the most likely explanation for this is that the DH is not as physically (and perhaps mentally) prepared for each PA as a player who is constantly engaged in the game. As well, the DH may be slightly injured or tired, especially if he is normally a position player. It makes sense that the DH penalty would be less than the PH penalty, as the DH is more involved in a game than a PH. Pinch hitting is often considered “the hardest job in baseball.” The numbers suggest that that is true. Interestingly, we found a small “DH skill” such that different players seem to have more or less of a true DH penalty.

Andy Dolphin (one of the authors of The Book) revisited the PH penalty issue in this Baseball Prospectus article from 2006. In it, he found a PH penalty of 21 points in wOBA, or 6%, significantly less than what was presented in The Book (34 points).

Tom Thress, on his web site, reports a PH penalty of .009 in “player won-loss records” (offensive performance translated into a “w/l record”), which he says is similar to that found in The Book (34 points). However, he finds an even larger DH penalty of .011 wins, which is more than twice that which we presented in The Book. I assume that .011 is slightly larger than 34 points in wOBA.

So, everyone seems to be in agreement that there is a significant PH and DH penalty, however, there is some disagreement as to the magnitude of each (with empirical data, we can never be sure anyway). I am going to revisit this issue by looking at data from 1998 to 2012. The method I am going to use is the “delta method,” which is common when doing this kind of “either/or” research with many player seasons in which the number of opportunities (in this case, PA) in each “bucket” can vary greatly for each player (for example, a player may have 300 PA in the “either” bucket and only 3 PA in the “or” bucket) and from player to player.

The “delta method” looks something like this: Let’s say that we have 4 players (or player seasons) in our sample, and each player has a certain wOBA and number of PA in bucket A and in bucket B, say, DH and non-DH – the number of PA are in parentheses.

             wOBA as DH     wOBA as non-DH
Player 1     .320 (150)     .330 (350)
Player 2     .350 (300)     .355 (20)
Player 3     .310 (350)     .325 (50)
Player 4     .335 (100)     .350 (150)

In order to compute the DH penalty (difference between when DH’ing and playing the field) using the “delta method,” we compute the difference for each player separately and take a weighted average of the differences, using the lesser of the two PA (or the harmonic mean) as the weight for each player. In the above example, we have:

((.330 – .320) * 150 + (.355 – .350) * 20 + (.325 – .310) * 50 + (.350 – .335) * 100) / (150 + 20 + 50 + 100) = 3.85 / 320, or a penalty of around 12 points of wOBA in this example.

If you didn’t follow that, that’s fine. You’ll just have to trust me that this is a good way to figure the “average difference” when you have a bunch of different player seasons, each with a different number of opportunities (e.g. PA) in each bucket.
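
For readers who do want to follow it, here is a minimal sketch of that delta method in code, using the four hypothetical player seasons from the table above (the data and variable names are mine, purely for illustration):

```python
# Minimal sketch of the "delta method": a weighted average of each player's
# per-bucket difference, weighted by the lesser of the two PA totals.
players = [
    # (wOBA as DH, PA as DH, wOBA as non-DH, PA as non-DH)
    (.320, 150, .330, 350),
    (.350, 300, .355,  20),
    (.310, 350, .325,  50),
    (.335, 100, .350, 150),
]

num = 0.0
den = 0.0
for woba_dh, pa_dh, woba_non, pa_non in players:
    weight = min(pa_dh, pa_non)        # the harmonic mean of the two PA also works
    num += (woba_non - woba_dh) * weight
    den += weight

dh_penalty = num / den
print(f"DH penalty: {dh_penalty * 1000:.1f} wOBA points")  # ~12 points for this toy sample
```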

In addition to figuring the PH and DH penalties (in various scenarios, as you will see), I am also going to look at some other interesting “penalty situations” like playing in a day game after a night game, or both games of a double header.

In my calculations, I adjust for the quality of the pitchers faced, the percentage of home and road PA, and the platoon advantage between the batter and pitcher. If I don’t do that, it is possible for one bucket to be inherently more hitter-friendly than the other bucket, either by chance alone or due to some selection bias, or both.

First let’s look at the DH penalty. Remember that in The Book, we found a roughly 17 point penalty, and Tom Thress found a penalty that was greater than that of a PH, presumably more than 34 points in wOBA.

Again, my data was from 1998 to 2012, and I excluded all inter-league games. I split the DH samples into two groups: One group had more DH PA than non-DH PA in each season (they were primarily DH’s), and vice versa in the other group (primarily position players).

The DH penalty was the same in both groups – 14 points in wOBA.

The total sample sizes were 10,222 PA for the primarily DH group and 32,797 for the mostly non-DH group. If we combine the two groups, we get a total of 43,019 PA. That number represents the total of the “lesser of the PA” for each player season. One standard deviation in wOBA for that many PA is around 2.5 wOBA points. For the difference between two groups of 43,000 each, it is 3.5 points (the square root of the sum of the variances). So we can say with 95% confidence that the true DH penalty is between 7 and 21 points with the most likely value being 14. This is very close to the 17 point value we presented in The Book.
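
As a rough sketch of that error calculation (assuming a standard deviation of around .5 for the wOBA of a single PA, which is approximately the figure that produces the numbers above):

```python
import math

# Back-of-the-envelope 95% interval for the observed 14-point DH penalty.
sd_per_pa = 0.5                 # assumed SD of wOBA for a single PA
n = 43_019                      # matched PA in each bucket (the "lesser of the PA" total)

sd_group = sd_per_pa / math.sqrt(n)    # ~2.4-2.5 wOBA points per bucket
sd_diff = math.sqrt(2) * sd_group      # ~3.4-3.5 points for the difference of the two buckets

penalty = 0.014
low, high = penalty - 2 * sd_diff, penalty + 2 * sd_diff
print(f"95% interval: {low * 1000:.0f} to {high * 1000:.0f} wOBA points")  # roughly 7 to 21
```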

I expected that the penalty would be greater for position players who occasionally DH’d rather than DH’s who occasionally played in the field. That turned out not to be the case, but given the relatively small sample sizes, the true values could very well be different.

Now let’s move on to pinch hitter penalties. I split those into two groups as well: One, against starting pitchers and the other versus relievers. We would expect the former to show a greater penalty since a “double whammy” would be in effect – first, the “first time through the order” penalty, and second, the “sitting on the bench” penalty. In the reliever group, we would only have the “coming in cold” penalty. I excluded all ninth innings or later.

Versus starting pitchers only, the PH penalty was 19.5 points in 8,523 PA. One SD is 7.9 points, so the 95% confidence interval is a 4 to 35 point penalty.

Versus relievers only, the PH penalty was 12.8 points in 17,634 PA. One SD is 5.5 points – the 95% confidence interval is a 2 to 24 point penalty.

As expected, the penalty versus relievers, where batters typically only face the pitcher for the first and only time in the game, whether they are in the starting lineup or are pinch hitting, is less than that versus the starting pitcher, by around 7 points. Again, keep in mind that the sample sizes are small enough such that the true difference between the starter PH penalty and reliever PH penalty could be the same or could even be reversed. Of course, our prior when applying a Bayesian scheme is that there is a strong likelihood that the true penalty is larger against starting pitchers for the reason explained above. So it is likely that the true difference is similar to the one observed (a 7-point greater penalty versus starters).

Notice that my numbers indicate penalties of a similar magnitude for pinch hitters and designated hitters. The PH penalty is a little higher than the DH penalty when pinch hitters face a starter, and a little lower than the DH penalty when they face a reliever. I expected the PH penalty to be greater than the DH penalty, as we found in The Book. Again, these numbers are based on relatively small sample sizes, so the true PH and DH penalties could be quite different.

Role                Penalty (wOBA)
DH                  14 points
PH vs. Starters     20 points
PH vs. Relievers    13 points

Now let’s look at some other potential “penalty” situations, such as the second game of a double-header and a day game following a night game.

In a day game following a night game, batters hit 6.2 wOBA points worse than in day games after day games or day games after not playing at all the previous day. The sample size was 95,789 PA. The 95% certainty interval is 1.5 to 11 points.

What about when a player plays both ends of a double-header (no PH or designated hitters)? Obviously many regulars sit out one or the other game – certainly the catchers.

Batters in the second game of a twin bill lose 8.1 points of wOBA compared to all other games. Unfortunately, the sample is only 9,055 PA, so the 2 SD interval is -7.5 to 23.5. If 8.1 wOBA points (or more) is indeed reflective of the true double-header penalty, it would be wise for teams to sit some of their regulars in one of the two games – which they do of course. It would also behoove teams to make sure that their two starters in a twin bill pitch with the same hand in order to discourage fortuitous platooning by the opposing team.

Finally, I looked at games in which a player and his team (in order to exclude times when the player sat because he wasn’t 100% healthy) did not play the previous day, versus games in which the player had played at least 8 days in a row. I am looking for a “consecutive-game fatigue” penalty and those are the two extremes. I excluded all games in April and all pinch-hitting appearances.

The “penalty” for playing at least 8 days in a row is 4.0 wOBA points in 92,287 PA. One SD is 2.4 so that is not a statistically significant difference. However, with a Bayesian prior such that we expect there to be a “consecutive-game fatigue” penalty, I think we can be fairly confident with the empirical results (although obviously there is not much certainty as to the magnitude).

To see whether the consecutive day result is a “penalty” or the day off result is a bonus, I compared them to all other games.

When a player and his team has had a day off the previous day, the player hits .1 points better than otherwise in 115,471 PA (-4.5 to +4.5). Without running the “consecutive days off” scenario, we can infer that there is an observed penalty when playing at least 8 days in a row, of around 4 points, compared to all other games (the same as compared to after an off-day).

So having a day off is not really a “bonus,” but playing too many days in row creates a penalty. It probably behooves all players to take an occasional day off. Players like Cal Ripken, Steve Garvey, and Miguel Tejada (and others) may have had substantially better careers had they been rested more, at least rate-wise.

I also looked at players who played in fewer days in a row (5, 6, and 7) and found penalties of less than 4 points, suggesting that the more days in a row a player plays, the more his offense is penalized. It would be interesting to see if a day off after several days in a row restores a player to his normal offensive levels.

There are many other situations where batters and pitchers may suffer penalties (or bonuses), such as game(s) after coming back from the DL, getaway (where the home team leaves for another venue) games, Sunday night games, etc.

Unfortunately, I don’t have the time to run all of these potentially interesting scenarios – and I have to leave something for aspiring saberists to do!

Addendum: Tango Tiger suggested I split the DH data into “versus relievers and starters.” I did not expect there to be a difference in penalties since, unlike a PH, a DH faces the starter the same number of times as when he isn’t DH’ing. However, I found a penalty difference of 8 points – the DH penalty versus starters was 16.3 and versus relievers, it was 8.3. Maybe the DH becomes “warmer” towards the end of the game, or maybe the difference is a random, statistical blip. I don’t know. We are often faced with these conundrums (what to conclude) when dealing with limited empirical data (relatively small sample sizes). Even if we are statistically confident that an effect exists (or doesn’t), we are usually quite uncertain as to the magnitude of that effect.

I also looked at getaway (where the home team goes on the road after this game) night games. It has long been postulated that the home team does not perform as well in these games. Indeed, the home team batter penalty in these games was 1.6 wOBA points, again, not a statistically significant difference, but consistent with the Bayesian prior. Interestingly, the road team batters performed .6 points better, suggesting that home team pitchers in getaway games might have a small penalty as well.