Recently on Twitter I have been harping on the folly of using a player’s season-to-date stats, be it OPS, wOBA, wRC+, or some other metric, for anything other than, well, a record of how he has done so far. From a week into the season until the last pitch is thrown in November, we are inundated with articles and TV and radio commentaries about how so-and-so should be getting more playing time because his OPS is .956, or how player X should be benched or at least dropped in the order because he is hitting .245 (in wOBA). Commentators, writers, analysts and fans wonder whether player Y’s unusually great or poor performance is “sustainable,” whether it is a “breakout” likely to continue, an age- or injury-related decline that portends the end of a career, or a temporary blip that will pass once said injury is healed.
With web sites such as Fangraphs.com allowing us to look up a player’s current, up-to-date projections which already account for season-to-date performance, the question that all these writers and fans must ask themselves is, “Do these current season stats offer any information over and above the projections that might be helpful in any future decisions, such as whom to play or where to slot a player in the lineup, or simply whom to be optimistic or pessimistic about on your favorite team?”
Sure, if you don’t have a projection for a player and you know nothing about his history or pedigree, his season-to-date performance tells you something about what he is likely to do in the future. Even then, though, it depends on the sample size of that performance – at the very least you must regress it toward the league mean, with the amount of regression a function of the number of opportunities (PA) underlying the seasonal stats.
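That regress-toward-the-mean step can be sketched in a few lines of Python. The .320 league mean and the 220-PA “ballast” constant below are my illustrative assumptions, not numbers from this article:

```python
def regressed_woba(observed_woba, pa, league_woba=0.320, ballast_pa=220):
    """Shrink a small-sample wOBA toward the league mean.

    The fewer real PA behind the observed line, the harder it is pulled
    toward league average. The 220-PA "ballast" and the .320 league mean
    are illustrative assumptions, not figures from the article.
    """
    return (observed_woba * pa + league_woba * ballast_pa) / (pa + ballast_pa)

# A .400 wOBA over 100 PA is mostly noise; over 600 PA it means more.
print(round(regressed_woba(0.400, 100), 3))  # -> 0.345
print(round(regressed_woba(0.400, 600), 3))  # -> 0.379
```

The point of the sketch is only that the estimate slides between the observed line and the league mean as PA grow; a real projection system does this with multiple seasons, aging, and better priors.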
However, since it is so easy for virtually anyone to look up a player’s projection on Fangraphs, Baseball Prospectus, The Hardball Times, or a host of other fantasy baseball web sites, why should we care about those current stats other than as a reflection of what a certain player has accomplished thus far in the season? Let’s face it: 2 or 3 months into the season, if a player who is projected at .359 (wOBA) is hitting .286, it is human nature to call for his benching, to drop him in the batting order, or simply to expect him to continue to hit in a putrid fashion. Virtually everyone thinks this way, even many astute analysts. It is an example of recency bias, one of the most pervasive human traits in all facets of life, including and especially sports.
Who would you rather have in your lineup – Player A who has a Steamer wOBA projection of .350 but who is hitting .290 4 months into the season or Player B whom Steamer projects at .330, but is hitting .375 with 400 PA in July? If you said, “Player A,” I think you are either lying or you are in a very, very small minority.
Let’s start out by looking at some players whose current projection and season-to-date performance are divergent. I’ll use Steamer ROS (rest-of-season) wOBA projections from Fangraphs as compared to their actual 2014 wOBA. I’ll include anyone who has at least 200 PA and the absolute difference between their wOBA and wOBA projection is at least 40 points. The difference between a .320 and .360 hitter is the difference between an average player and a star player like Pujols or Cano, and the difference between a .280 and a .320 batter is like comparing a light-hitting backup catcher to a league average hitter.
Believe it or not, even though we are 40% of the way into the season, around 20% of all qualified (by PA) players have a current wOBA projection that is at least 40 points above or below their season-to-date wOBA.
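The screen behind the lists below is easy to express in code. The sample rows are copied from the tables that follow; “Filler” is a hypothetical player added only to show the PA cutoff working:

```python
# Screen: at least 200 PA, and an absolute gap of at least 40 points
# (0.040) between projected and actual wOBA.
players = [
    # (name, pa, projected_woba, actual_woba)
    ("Butler", 258, 0.351, 0.278),
    ("Hosmer", 287, 0.339, 0.284),
    ("Tulo",   259, 0.403, 0.472),
    ("Gomez",  268, 0.334, 0.405),
    ("Filler", 150, 0.330, 0.280),  # hypothetical; fails the 200-PA cutoff
]

MIN_PA, MIN_GAP = 200, 0.040

divergent = [name for name, pa, proj, actual in players
             if pa >= MIN_PA and abs(actual - proj) >= MIN_GAP]
print(divergent)  # -> ['Butler', 'Hosmer', 'Tulo', 'Gomez']
```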
Players whose projection is higher than their actual
Name, PA, Projected wOBA, Actual wOBA
Cargo 212 .375 .328
Posey 233 .365 .322
Butler 258 .351 .278
Wright 295 .351 .307
Mauer 263 .350 .301
Craig 276 .349 .303
McCann 224 .340 .286
Hosmer 287 .339 .284
Swisher 218 .334 .288
Aoki 269 .330 .285
Brown 236 .329 .252
Alonso 223 .328 .260
Brad Miller 204 .312 .242
Schierholtz 219 .312 .265
Gyorko 221 .311 .215
De Aza 221 .311 .268
Segura 258 .308 .267
Bradley Jr. 214 .308 .263
Cozart 228 .290 .251
Players whose projection is lower than their actual
Name, PA, Projected wOBA, Actual wOBA
Tulo 259 .403 .472
Puig 267 .382 .431
V. Martinez 257 .353 .409
N. Cruz 269 .352 .421
LaRoche 201 .349 .405
Moss 255 .345 .392
Lucroy 258 .340 .398
Seth Smith 209 .337 .403
Carlos Gomez 268 .334 .405
Dunn 226 .331 .373
Morse 239 .329 .377
Frazier 260 .329 .369
Brantley 277 .327 .386
Dozier 300 .316 .357
Solarte 237 .308 .354
Alexi Ramirez 271 .306 .348
Suzuki 209 .302 .348
Now tell the truth: Who would you rather have at the plate tonight or tomorrow, Billy Butler, with his .351 projection and .278 actual, or Carlos Gomez, projected at .334 but currently hitting .405? How about Hosmer (not to pick on the Royals) or Michael Morse? If you are like most people, you probably would choose Gomez over Butler, despite the fact that he is projected 17 points worse, and Morse over Hosmer, even though Hosmer is supposedly 10 points better than Morse. (I am ignoring park effects to simplify this part of the analysis.)
So how can we test whether your gut decision or blindly going with the Steamer projections is more likely to be correct, emotions and recency bias aside? That’s relatively simple, if we are willing to get our hands dirty doing some lengthy and somewhat complicated historical mid-season projections. Luckily, I’ve already done that. I have a database of my own proprietary projections on a month-by-month basis for 2007-2013. So, for example, 2 months into the 2013 season, I have an up-to-date projection for every player. It incorporates his 2009-2012 performance, including AA and AAA, as well as his 2-month performance (again, including the minor leagues) so far in 2013. These projections are park- and context-neutral. We can then compare the projections with both season-to-date performance (also context-neutral) and rest-of-season performance in order to see whether, for example, a player who is projected at .350 even though he has hit .290 after 2 months will perform any differently over the last 4 months of the season than another player who is also projected at .350 but who has hit .410 after 2 months. We can do the same thing after one month (looking at the next 5 months of performance) or after 5 months (looking at the final month of performance). The results of this analysis should suggest whether we would be better off with Butler for the remainder of the season or with Gomez, and with Hosmer or Morse.
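The comparison described above can be sketched as follows. The tuple layout and field order are my assumptions for illustration, not the actual schema of the database behind this study:

```python
def hot_cold_summary(records, gap=0.040):
    """Classify hitters as "hot" or "cold" relative to their up-to-date
    projection, then compare each group's PA-weighted rest-of-season
    (ROS) wOBA with its PA-weighted projection.

    records: (proj_woba, todate_woba, ros_woba, ros_pa) tuples -- an
    illustrative layout, not the article's actual database schema.
    """
    groups = {"hot": [], "cold": []}
    for proj, todate, ros, ros_pa in records:
        if todate - proj >= gap:
            groups["hot"].append((proj, ros, ros_pa))
        elif proj - todate >= gap:
            groups["cold"].append((proj, ros, ros_pa))

    summary = {}
    for label, rows in groups.items():
        total_pa = sum(pa for _, _, pa in rows)
        if total_pa == 0:
            continue  # no players in this bucket
        summary[label] = {
            "proj": round(sum(p * pa for p, _, pa in rows) / total_pa, 3),
            "ros":  round(sum(r * pa for _, r, pa in rows) / total_pa, 3),
        }
    return summary

# Three made-up players: one hot, one cold, one within 40 points.
sample = [
    (0.340, 0.400, 0.345, 400),  # hot: 60 points over projection
    (0.340, 0.280, 0.342, 380),  # cold: 60 points under projection
    (0.340, 0.350, 0.340, 500),  # neither bucket
]
print(hot_cold_summary(sample))
```

If the article’s thesis holds, the `"proj"` and `"ros"` figures for each group should come out nearly equal, regardless of how divergent the season-to-date numbers were.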
I took all players in 2007-2013 whose projection was at least 40 points less than their actual wOBA after one month into the season. They had to have had at least 50 PA. There were 116 such players, or around 20% of all qualified players. Their collective projected wOBA was .341 and they were hitting .412 after one month with an average of 111 PA per player. For the remainder of the season, in a total of 12,922 PA, or 494 PA per player, they hit .346, or 5 points better than their projection, but 66 points worse than their season-to-date performance. Again, all numbers are context (park, opponent, etc.) neutral. One standard deviation in that many PA is 4 points, so a 5 point difference between projected and actual is not statistically significant. There is some suggestion, however, that the projection algorithm is slightly undervaluing the “hot” (as compared to their projection) hitter during the first month of the season, perhaps by giving too little weight to the current season.
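The “one standard deviation in that many PA is 4 points” figure checks out with a back-of-the-envelope calculation. The ~0.5 per-PA standard deviation of wOBA outcomes used below is a rough rule of thumb, not a number from the article:

```python
import math

def woba_sd(pa, per_pa_sd=0.5):
    """Rough standard deviation of an observed wOBA over `pa` plate
    appearances, assuming a ~0.5 per-PA standard deviation of wOBA
    outcomes (a common back-of-the-envelope figure, assumed here)."""
    return per_pa_sd / math.sqrt(pa)

print(round(woba_sd(12922), 4))  # -> 0.0044, i.e. roughly 4 points of wOBA
```

With noise of ~4 points on a sample that size, the observed 5-point gap between projection and rest-of-season performance is barely more than one standard deviation, which is the sense in which it is not statistically significant.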
What about the players who were “cold” (relative to their projections) for the first month of the season? There were 92 such players, and they averaged 110 PA during the first month with a .277 wOBA. Their projection after 1 month was .342, slightly higher than the first group’s. Interestingly, they averaged only 464 PA for the remainder of the season, 30 PA fewer than the “hot” group, even though the two groups were equivalently projected – suggesting that managers were benching more of the “cold” players or moving them down in the batting order. How did they hit for the remainder of the season? .343, almost exactly equal to their projection. This suggests that managers are depriving these players of deserved playing time. By the way, after only one month, more than 40% of all qualified players are hitting 40 points better or worse than their projections. That’s a lot of fodder for internet articles and sports talk radio!
You might be thinking, “Well, sure, if a player is ‘hot’ or ‘cold’ after only a month, it probably doesn’t mean anything.” In fact, most commentaries you read or hear will give the standard SSS (small sample size) disclaimer only a month or even two months into the season. But what about halfway into the season? Surely a player’s season-to-date stats will have stabilized by then, and we will be able to identify those young players who have “broken out,” old, washed-up players, or players who have lost their swing or their mental or physical capabilities.
About halfway into the season, around 9% of all qualified (50 PA per month) players were hitting at least 40 points below their projections, in an average of 271 PA. Their collective projection was .334 and their actual performance after 3 months and 271 PA was .283. Basically, these guys, despite supposedly being league-average full-time players, stunk for 3 solid months. Surely they would stink, or at least not be up to “par,” for the rest of the season. After all, wOBA at least starts to “stabilize” after almost 300 PA, right? Well, these guys, just like the “cold” players after one month, hit .335 for the remainder of the season, 1 point better than their projection. So after 1 month or 3 months, a player’s season-to-date performance tells us nothing that his up-to-date projection doesn’t tell us. A player is expected to perform at his projected level regardless of his current-season performance after 3 months, at least for the “cold” players. What about the “hot” ones, you know, the ones who may be having a breakout season?
There were also about 9% of all qualified players who were having a “hot” first half. Their collective projection was .339, and their average performance was .391 after 275 PA. How did they hit for the remainder of the season? .346, 7 points better than their projection and 45 points worse than their actual first-half performance. Again, there is some suggestion that the projection algorithm is undervaluing these guys for some reason. Meanwhile, the “hot” first-half players accumulated 54 more PA over the last 3 months of the season than the “cold” first-half players, despite hitting only 11 points better. It seems that managers are over-reacting to that first-half performance, which should hardly be surprising.
Finally, let’s look at the last month of the season as compared to the first 5 months of performance. Do we have a right to ignore projections and simply focus on season-to-date stats when it comes to discussing the future – the last month of the season?
The 5-month “hot” players were hitting .391 in 461 PA. Their projection was .343, and they hit .359 over the last month. So we are still twice as close to the projection as we are to the actual, although there is a strong inference that the projection is not weighting the current season enough, or is doing something else wrong, at least for the “hot” players.
For the “cold” players, we see the same thing as we do at any point in the season: the season-to-date stats are worthless if you know the projection. Only 3% of all qualified players (at least 250 PA) hit at least 40 points worse than their projection after 5 months. They were projected at .338, hit .289 for the first 5 months in 413 PA, and then hit .339 in that last month. They got an average of only 70 PA over the last month of the season, as compared to 103 PA for the “hot” batters, despite proving that they were league-average players even though they had stunk up the field for 5 straight months.
After 4 months, by the way, “cold” players actually hit 7 points better than their projection over the last 2 months of the season, even though their season-to-date performance was 49 points worse. The “hot” players hit only 10 points better than their projection despite hitting 52 points better over the first 4 months.
Let’s look at the numbers in another way. Say we are 2 months into the season, similar to the present time. How do hitters projected around .350 fare for the rest of the season if we split them into two groups: those that have been “cold” so far and those that have been “hot”? This is like our Butler-or-Gomez, Morse-or-Hosmer question.
I looked at all “hot” and “cold” players who were projected at greater than .330 after 2 months into the season. The “hot” ones, the Carlos Gomezes and Michael Morses, hit .403 for 2 months and were then projected at .352. How did they hit over the rest of the season? .352.
What about the “cold” hitters who were also projected at greater than .330? These are the Butlers and Hosmers. They hit a collective .303 for the first 2 months of the season, their projection was .352, the same as the “hot” hitters, and their wOBA for the last 4 months was .349! Wow. Both groups of good hitters (according to their projections) hit almost exactly the same: both were projected at .352, and one group hit .352 while the other hit .349. Of course, the “hot” group got 56 more PA per player over the remainder of the season, despite being projected the same and performing essentially the same.
Let’s try those same hitters who are projected at better than .330, but who have been “hot” or “cold” for 5 months rather than only 2.
Cold
Projected: .350 Season-to-date: .311 ROS: .351
Hot
Projected: .354 Season-to-date: .393 ROS: .363
Again, after 5 months, well-projected players who have been hot are undervalued by the projection, but not nearly as much as their season-to-date performance might suggest. Good players who have been cold for 5 months hit exactly as projected; the “cold” 5 months has no predictive value, other than how it changes the up-to-date projection.
For players who are projected poorly, less than a .320 wOBA, the 5-month hot ones outperform their projections and the cold ones under-perform their projections, both by around 8 points. After 2 months, there is no difference – both “hot” and “cold” players perform at around their projected levels over the last 4 months of the season.
So what are our conclusions? Until we get into the last month or two of the season, season-to-date stats provide virtually no useful information once we have a credible projection for a player. For “hot” players, we might “bump” the projection by a few points of wOBA even 2 or 3 months into the season – apparently the projection is slightly undervaluing these players for some reason. However, it does not appear to be correct to prefer a “hot” player like Gomez over a “cold” one like Butler when the “cold” player carries the meaningfully better projection, regardless of the time frame. Later in the season, at around the 4th or 5th month, we might need to “bump” our projection – at least my projection – by 10 or 15 points to account for a torrid first 4 or 5 months. However, the player with the 20- or 25-point better projection is still the better choice.
For “cold” players, season-to-date stats appear to provide no information whatsoever over and above a player’s projection, regardless of what point in the season we are at. So, when should we be worried about a hitter if he is performing far below his “expected” performance? Never. If you want a good estimate of his future performance, simply use his projection and ignore his putrid season-to-date stats.
In the next installment, I am going to look at the spread of performance for hot and cold players. You might hypothesize that while being hot or cold for 2 or 3 months has almost no effect on the next few months of performance, perhaps it does change the distribution of that performance among the group of hot and cold players.
What can a player’s season-to-date performance tell us beyond his up-to-date projection?
Posted: June 12, 2014