
25 October, 2023

For hitters, cold streaks run colder than hot streaks run hot

This blog post is just the product of a thought exercise: how much information do you get from a certain number of plate appearances? Suppose we observe n=25 plate appearances for a batter. If the batter gets on-base x=5 times, is that the same amount of information as if the batter gets on-base x=10 times? 

The answer is no. As it turns out, the batter reaching base fewer times is more informative about the batter being "bad" than the batter reaching base more times is about the batter being "good." How is this possible? Consider the very simple case of forming a standard 95% confidence interval for a binomial proportion. From any statistics textbook, this is just

 

$$\hat{p} \pm 1.96\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}$$

 

where $\hat{p} = x/n$ is the proportion of on-base events in $n$ plate appearances. Consider the second part, which I will refer to as the "margin of error" and which controls the width of the confidence interval. For $n=25$ plate appearances, $x=5$ gives $\hat{p} = 5/25 = 0.2$ and a margin of error of


$$1.96\sqrt{\frac{0.2(1-0.2)}{25}} = 0.1568$$

 

For $x=10$ on-base events, this gives $\hat{p} = 10/25 = 0.4$ and


$$1.96\sqrt{\frac{0.4(1-0.4)}{25}} = 0.19204$$

 

The margin of error is more than 20% larger for $\hat{p}=0.4$ than for $\hat{p}=0.2$! There is more uncertainty with the better result. The reason is the variance term $\hat{p}(1-\hat{p})$, which grows as $\hat{p}$ approaches 0.5 - and since on-base proportions sit well below 0.5, worse performance means a smaller estimated variance.
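
As a quick sanity check, here is a minimal R sketch of the two margins of error; it contains nothing beyond the formula above, and moe is just a name I made up for this post.

# 95% normal-approximation margin of error for a binomial proportion
moe <- function(x, n) {
  p.hat <- x / n
  1.96 * sqrt(p.hat * (1 - p.hat) / n)
}

moe(5, 25)   # 0.1568  (p-hat = 0.2)
moe(10, 25)  # 0.19204 (p-hat = 0.4)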

Going to a Bayesian framework does not fix this issue, with the possible exception of heavily weighted priors, which would not be justifiable in practice. Suppose that the number of on-base events $x$ in $n$ plate appearances once again follows the (overly simplistic) binomial distribution with parameter $p$, and $p$ is assumed to have a Beta(1,1) prior - the simple uniform case.


$$x \sim \text{Bin}(n, p)$$

$$p \sim \text{Beta}(1, 1)$$


For the case of $x=5$ on-base events in $n=25$ plate appearances, the posterior distribution, its standard deviation, and the 95% central credible interval are


$$p \,|\, x=5, n=25 \sim \text{Beta}(6, 21)$$

$$SD(p \,|\, x=5, n=25) = \sqrt{\frac{(6)(21)}{(6+21)^2(6+21+1)}} = 0.0786$$

95% Central CI: (0.0897,0.3935)


For the case of $x=10$ on-base events in $n=25$ plate appearances, the posterior distribution, its standard deviation, and the 95% central credible interval are


$$p \,|\, x=10, n=25 \sim \text{Beta}(11, 16)$$

$$SD(p \,|\, x=10, n=25) = \sqrt{\frac{(11)(16)}{(11+16)^2(11+16+1)}} = 0.0929$$

95% Central CI: (0.2335, 0.5942)


Once again, the scenario with worse performance ($x=5$ on-base events) has a smaller standard deviation, implying there is less posterior uncertainty about the outcome. In addition, the width of the 95% central credible interval is smaller for $x=5$ (0.3038) than for $x=10$ (0.3608).
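
For anyone who wants to verify these numbers, here is a short R sketch: the closed-form Beta standard deviation plus qbeta for the central interval. The helper post.sd is my own name, not from any package.

# posterior under a Beta(1,1) prior: p | x, n ~ Beta(x + 1, n - x + 1)
post.sd <- function(a, b) sqrt(a * b / ((a + b)^2 * (a + b + 1)))

post.sd(6, 21)                  # 0.0786
qbeta(c(0.025, 0.975), 6, 21)   # (0.0897, 0.3935)

post.sd(11, 16)                 # 0.0929
qbeta(c(0.025, 0.975), 11, 16)  # (0.2335, 0.5942)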

So how much information is in the $n$ trials? One way to define information, in a probabilistic sense, is with a concept called the observed information. The observed information is a statistic that estimates the Fisher information, which measures the amount of information a sample carries about an unknown parameter $\theta$. Unfortunately, calculating the Fisher information requires knowing the parameter in question, and so it is usually estimated. The log-likelihood of a set of observations $\tilde{x} = \{x_1, x_2, \ldots, x_n\}$ is defined as


$$\ell(\theta \,|\, \tilde{x}) = \sum_{i=1}^{n} \log\left[f(x_i \,|\, \theta)\right]$$

 

And the observed information is defined as the negative second derivative of the log-likelihood, taken with respect to $\theta$.

 

$$I(\tilde{x}) = -\frac{d^2}{d\theta^2}\, \ell(\theta \,|\, \tilde{x})$$

 

Note that Bayesians may replace the log-likelihood with the log-posterior distribution of $\theta$.

In general the observed information must be derived separately for each model, but it is known for many common ones. For the binomial distribution, the observed information is

 

$$I(\hat{p}) = \frac{n}{\hat{p}(1-\hat{p})}$$


where $\hat{p} = x/n$. Hence, for the case of $x=5$ on-base events in $n=25$ plate appearances, $I(0.2) = 156.25$. For the case of $x=10$ on-base events in $n=25$ plate appearances, $I(0.4) = 104.1667$. There is quite literally more information in the case where the batter performed worse.
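
In R, this is a two-line check, again just transcribing the formula above (obs.info.binom is a made-up name):

# observed information for a binomial proportion
obs.info.binom <- function(x, n) {
  p.hat <- x / n
  n / (p.hat * (1 - p.hat))
}

obs.info.binom(5, 25)   # 156.25
obs.info.binom(10, 25)  # 104.1667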

For pitchers, the opposite is the case. If we model the number of events in a fixed number of trials (such as runs allowed per 9 innings, or walks plus hits per inning pitched), the most appropriate simple distribution is the Poisson with parameter $\lambda$. For $n$ trials, the observed information is


$$I(\hat{\lambda}) = \frac{n}{\hat{\lambda}}$$


where $\hat{\lambda} = \bar{x}$, the sample mean of the observations.

Imagine two pitchers: one has allowed x=30 walks plus hits in n=25 innings pitched, while the other has allowed x=20 walks plus hits in n=25 innings pitched. For which pitcher do we have more information about their abilities?

For the first pitcher, the sample WHIP is $\hat{\lambda} = 30/25 = 1.2$ and the observed information is $I(1.2) = 25/1.2 = 20.8333$. For the second pitcher, the sample WHIP is $\hat{\lambda} = 20/25 = 0.8$ and the observed information is $I(0.8) = 25/0.8 = 31.25$. Hence, we have more information about the pitcher who has performed better.
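
The Poisson analogue in R, for the two hypothetical pitchers above (obs.info.pois is again my own name):

# observed information for a Poisson rate: n / lambda-hat
obs.info.pois <- function(total, n) n / (total / n)

obs.info.pois(30, 25)  # 20.8333 - the worse pitcher
obs.info.pois(20, 25)  # 31.25   - the better pitcher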

So the situation is reversed for batters and pitchers. For batters, we tend to have more information when they perform poorly. For pitchers, we tend to have more information when they perform well. This suggests certain managerial strategies in small samples: it is justifiable to pull a poorly performing batter, but perhaps also justifiable to give a poorly performing pitcher more innings. We simply have more information about the bad hitter than we do about the bad pitcher, thanks to information theory.

06 April, 2023

2022 and 2023 Stabilization Points

Hello everyone! It's been another couple of years, and I'm ready to update stabilization points again. These are my estimated stabilization points for the 2022 and 2023 MLB seasons, once again using the maximum likelihood method on the totals that I used for previous years. This method is explained in my articles  Estimating Theoretical Stabilization Points and WHIP Stabilization by the Gamma-Poisson Model. As usual, all data and code I used for this post can be found on my github. I make no claims about the stability, efficiency, or optimality of my code.

I've included standard error estimates for 2022 and 2023, but these should not be used to perform any kinds of tests or intervals to compare to the values from previous years, as those values are estimates themselves with their own standard errors, and approximately 5/6 of the data is common between the two estimates. The calculations I performed for 2015 can be found here for batting statistics and here for pitching statistics. The calculations for 2016 can be found here. The 2017 calculations can be found here. The 2018 calculations can be found here. The 2019 calculations can be found here. I didn't do calculations in 2020 because of the pandemic in general. The 2021 calculations can be found here.


The cutoff values I picked were the minimum number of events (PA, AB, TBF, BIP, etc. - the denominators in the formulas) in order to be considered for a year. These cutoff values, and the choice of 6 years worth of data (2016-2021 for the 2022 stabilization points and 2017 - 2022 for the 2023 stabilization points) were picked fairly arbitrarily. This is consistent with my previous work, though I do have concerns about including rates from, for example, the covid year and juiced ball era. However, including fewer years means less accurate estimates. A tradeoff must be made, and I tried to go with what was reasonable (based on seeing what others were doing and my own knowledge of baseball) and what seemed to work well in practice.

Offensive Statistics

2022 Statistics


| Stat | Formula | 2022 $\hat{M}$ | 2022 $SE(\hat{M})$ | 2022 $\hat{\mu}$ | Cutoff |
| --- | --- | --- | --- | --- | --- |
| OBP | (H + BB + HBP)/PA | 316.48 | 19.72 | 0.331 | 300 |
| BABIP | (H - HR)/(AB-SO-HR+SF) | 459.44 | 50.10 | 0.304 | 300 |
| BA | H/AB | 473.86 | 38.69 | 0.263 | 300 |
| SO Rate | SO/PA | 51.89 | 2.20 | 0.210 | 300 |
| BB Rate | (BB-IBB)/(PA-IBB) | 105.36 | 4.96 | 0.083 | 300 |
| 1B Rate | 1B/PA | 195.22 | 10.55 | 0.147 | 300 |
| 2B Rate | 2B/PA | 1197.83 | 153.68 | 0.047 | 300 |
| 3B Rate | 3B/PA | 561.01 | 51.43 | 0.004 | 300 |
| XBH Rate | (2B + 3B)/PA | 1002.35 | 114.03 | 0.051 | 300 |
| HR Rate | HR/PA | 155.39 | 8.29 | 0.035 | 300 |
| HBP Rate | HBP/PA | 248.10 | 15.53 | 0.011 | 300 |


2023 Statistics


| Stat | Formula | 2023 $\hat{M}$ | 2023 $SE(\hat{M})$ | 2023 $\hat{\mu}$ | Cutoff |
| --- | --- | --- | --- | --- | --- |
| OBP | (H + BB + HBP)/PA | 301.11 | 18.46 | 0.329 | 300 |
| BABIP | (H - HR)/(AB-SO-HR+SF) | 426.80 | 45.29 | 0.302 | 300 |
| BA | H/AB | 434.51 | 34.08 | 0.259 | 300 |
| SO Rate | SO/PA | 53.16 | 2.25 | 0.213 | 300 |
| BB Rate | (BB-IBB)/(PA-IBB) | 107.36 | 5.06 | 0.083 | 300 |
| 1B Rate | 1B/PA | 196.97 | 10.65 | 0.145 | 300 |
| 2B Rate | 2B/PA | 1189.22 | 151.90 | 0.047 | 300 |
| 3B Rate | 3B/PA | 634.14 | 62.08 | 0.005 | 300 |
| XBH Rate | (2B + 3B)/PA | 1035.82 | 120.93 | 0.051 | 300 |
| HR Rate | HR/PA | 156.77 | 8.34 | 0.035 | 300 |
| HBP Rate | HBP/PA | 256.24 | 16.04 | 0.011 | 300 |


In general, a larger stabilization point will be due to a decreased spread of talent levels - as talent levels get closer together, more extreme stats become less and less likely, and will be shrunk harder towards the mean. Consequently, it takes more observations to know that a player's high or low stats (relative to the rest of the league) are real and not just a fluke of randomness. Similarly, smaller stabilization points will point towards an increase in the spread of talent levels.
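
To make the "shrunk harder" idea concrete, here is a small R sketch using the shrinkage form $(x + \hat{\mu}\hat{M})/(n + \hat{M})$ that appears in the interval formulas later in this post; shrink is a hypothetical helper, not part of my posted code.

# shrunk rate estimate: x successes in n events,
# league mean mu, stabilization point M
shrink <- function(x, n, mu, M) (x + mu * M) / (n + M)

shrink(114, 300, 0.329, 301.11)  # 0.354 - the OBP example in the Usage section
shrink(114, 300, 0.329, 600)     # 0.346 - a larger M shrinks harder toward 0.329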

This is a good opportunity to compare the stabilization points I calculated for the 2016 season to the stabilization points for the 2023 season, as the 2023 season includes data from 2017-2022, so there is no crossover of information between them.


| Stat | Formula | 2023 $\hat{M}$ | 2016 $\hat{M}$ |
| --- | --- | --- | --- |
| OBP | (H + BB + HBP)/PA | 301.11 | 301.32 |
| BABIP | (H - HR)/(AB-SO-HR+SF) | 426.80 | 433.04 |
| BA | H/AB | 434.51 | 491.20 |
| SO Rate | SO/PA | 53.16 | 49.23 |
| BB Rate | (BB-IBB)/(PA-IBB) | 107.36 | 112.44 |
| 1B Rate | 1B/PA | 196.97 | 223.86 |
| 2B Rate | 2B/PA | 1189.22 | 1169.75 |
| 3B Rate | 3B/PA | 634.14 | 365.06 |
| XBH Rate | (2B + 3B)/PA | 1035.82 | 1075.41 |
| HR Rate | HR/PA | 156.77 | 126.35 |
| HBP Rate | HBP/PA | 256.24 | 300.97 |



What is most apparent is the stability of most statistics. The stabilization points for OBP, BABIP, SO rate, BB rate, 2B rate, and XBH rate are nearly identical, indicating that the spread of abilities for these statistics is roughly the same now as it was in 2016. Stabilization points for BA, 1B rate, HR rate, and HBP rate are fairly close, indicating not much change. The big outlier is the 3B rate, the rate of triples. Though the estimated probability of a triple per PA is approximately 0.005 in both seasons, the stabilization point has nearly doubled from 2016 to 2023. This indicates that the spread in the ability to hit triples has decreased: though the league average rate of triples has remained the same, there are fewer batters with a "true" triples-hitting ability much higher or lower than the league average.


Pitching Statistics 

2022 Statistics


| Stat | Formula | 2022 $\hat{M}$ | 2022 $SE(\hat{M})$ | 2022 $\hat{\mu}$ | Cutoff |
| --- | --- | --- | --- | --- | --- |
| BABIP | (H-HR)/(GB + FB + LD) | 929.31 | 165.62 | 0.284 | 300 |
| GB Rate | GB/(GB + FB + LD) | 66.56 | 4.38 | 0.439 | 300 |
| FB Rate | FB/(GB + FB + LD) | 61.79 | 4.03 | 0.351 | 300 |
| LD Rate | LD/(GB + FB + LD) | 1692.45 | 467.98 | 0.210 | 300 |
| HR/FB Rate | HR/FB | 715.15 | 226.83 | 0.135 | 100 |
| SO Rate | SO/TBF | 80.13 | 4.00 | 0.220 | 400 |
| HR Rate | HR/TBF | 1102.12 | 175.15 | 0.032 | 400 |
| BB Rate | (BB-IBB)/(TBF-IBB) | 256.28 | 20.37 | 0.074 | 400 |
| HBP Rate | HBP/TBF | 931.55 | 131.51 | 0.009 | 400 |
| Hit rate | H/TBF | 414.03 | 33.46 | 0.230 | 400 |
| OBP | (H + BB + HBP)/TBF | 395.83 | 35.75 | 0.312 | 400 |
| WHIP | (H + BB)/IP* | 58.49 | 4.40 | 1.28 | 80 |
| ER Rate | ER/IP* | 54.50 | 4.07 | 0.465 | 80 |
| Extra BF | (TBF - 3IP*)/IP* | 61.96 | 4.75 | 1.23 | 80 |



* When dividing by IP, I corrected the 0.1 and 0.2 representations to 0.33 and 0.67, respectively. 

2023 Statistics


| Stat | Formula | 2023 $\hat{M}$ | 2023 $SE(\hat{M})$ | 2023 $\hat{\mu}$ | Cutoff |
| --- | --- | --- | --- | --- | --- |
| BABIP | (H-HR)/(GB + FB + LD) | 809.54 | 134.29 | 0.282 | 300 |
| GB Rate | GB/(GB + FB + LD) | 66.24 | 4.40 | 0.434 | 300 |
| FB Rate | FB/(GB + FB + LD) | 59.40 | 3.90 | 0.357 | 300 |
| LD Rate | LD/(GB + FB + LD) | 1596.98 | 429.07 | 0.209 | 300 |
| HR/FB Rate | HR/FB | 386.34 | 77.01 | 0.133 | 100 |
| SO Rate | SO/TBF | 77.46 | 4.85 | 0.223 | 400 |
| HR Rate | HR/TBF | 942.58 | 134.48 | 0.032 | 400 |
| BB Rate | (BB-IBB)/(TBF-IBB) | 258.78 | 20.84 | 0.073 | 400 |
| HBP Rate | HBP/TBF | 766.40 | 98.75 | 0.009 | 400 |
| Hit rate | H/TBF | 391.55 | 30.96 | 0.227 | 400 |
| OBP | (H + BB + HBP)/TBF | 358.50 | 31.64 | 0.309 | 400 |
| WHIP | (H + BB)/IP* | 54.96 | 4.07 | 1.27 | 80 |
| ER Rate | ER/IP* | 50.33 | 3.68 | 0.459 | 80 |
| Extra BF | (TBF - 3IP*)/IP* | 58.00 | 4.37 | 1.22 | 80 |


* When dividing by IP, I corrected the 0.1 and 0.2 representations to 0.33 and 0.67, respectively. 

Once again, this is a good opportunity to compare the stabilization rates for 2016 to the stabilization rates for 2023.

| Stat | Formula | 2023 $\hat{M}$ | 2016 $\hat{M}$ |
| --- | --- | --- | --- |
| BABIP | (H-HR)/(GB + FB + LD) | 809.54 | 1408.72 |
| GB Rate | GB/(GB + FB + LD) | 66.24 | 65.52 |
| FB Rate | FB/(GB + FB + LD) | 59.40 | 61.96 |
| LD Rate | LD/(GB + FB + LD) | 1596.98 | 768.42 |
| HR/FB Rate | HR/FB | 386.34 | 505.11 |
| SO Rate | SO/TBF | 77.46 | 90.94 |
| HR Rate | HR/TBF | 942.58 | 931.59 |
| BB Rate | (BB-IBB)/(TBF-IBB) | 258.78 | 221.25 |
| HBP Rate | HBP/TBF | 766.40 | 989.30 |
| Hit rate | H/TBF | 391.55 | 623.35 |
| OBP | (H + BB + HBP)/TBF | 358.50 | 524.73 |
| WHIP | (H + BB)/IP* | 54.96 | 77.2 |
| ER Rate | ER/IP* | 50.33 | 59.55 |
| Extra BF | (TBF - 3IP*)/IP* | 58.00 | 75.79 |

Comparing 2023 to 2016, the outliers are obvious: the stabilization point for pitcher BABIP has nearly halved since then (and similarly for hit rate), while the stabilization point for line drive rate has roughly doubled. Given that the estimated mean pitcher BABIP and line drive rate are similar for the two years (0.282/0.209 for 2023 and 0.289/0.203 for 2016), this indicates a change in the spread of abilities: the spread of pitcher BABIP "true" abilities has widened, while the spread of line drive rate abilities has narrowed. Simply put, pitchers appear to have traded a wider spread in batting average on balls in play for a narrower spread in line drive rates.

 

Usage

 

Aside from the obvious use of knowing approximately when results are half due to luck and half due to skill, these stabilization points (along with league means) can be used to provide very basic confidence intervals and prediction intervals for estimates that have been shrunk towards the population mean, as demonstrated in my article From Stabilization to Interval Estimation.

For example, suppose that in the first half, a player has an on-base percentage of 0.380 in 300 plate appearances, corresponding to 114 on-base events. A 95% confidence interval using my empirical Bayesian techniques (based on a normal-normal model) is

$$\frac{114 + 0.329 \cdot 301.11}{300 + 301.11} \pm 1.96\sqrt{\frac{0.329(1 - 0.329)}{301.11 + 300}} = (0.317, 0.392)$$

That is, we believe the player's true on-base percentage to be between 0.317 and 0.392 with 95% confidence. I used a normal distribution for talent levels with a normal approximation to the binomial for the distribution of observed OBP, but that is not the only possible choice - it just resulted in the simplest formulas for the intervals.

Suppose that the player will get an additional $\tilde{n} = 250$ PA in the second half of the season. A 95% prediction interval for his OBP over those PA is given by

$$\frac{114 + 0.329 \cdot 301.11}{300 + 301.11} \pm 1.96\sqrt{\frac{0.329(1 - 0.329)}{301.11 + 300} + \frac{0.329(1 - 0.329)}{250}} = (0.285, 0.424)$$

That is, 95% of the time the player's OBP over the 250 PA in the second half of the season should be between 0.285 and 0.424. These intervals are overly optimistic and "dumb" in that they take only the league mean and variance and the player's own statistics into account, representing an advantage only over 95% "unshrunk" intervals, but when I tested them in my article "From Stabilization to Interval Estimation," they worked well for prediction.
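
Here is a small R helper that reproduces both intervals from the formulas above. The function name and layout are mine, not from my posted code, and it assumes the same normal-normal model.

# 95% shrunk confidence and prediction intervals (normal-normal model):
# x successes in n events, league mean mu, stabilization point M,
# n.new future events for the prediction interval
shrunk.intervals <- function(x, n, mu, M, n.new) {
  center <- (x + mu * M) / (n + M)
  v <- mu * (1 - mu) / (M + n)
  list(conf = center + c(-1, 1) * 1.96 * sqrt(v),
       pred = center + c(-1, 1) * 1.96 * sqrt(v + mu * (1 - mu) / n.new))
}

shrunk.intervals(114, 300, 0.329, 301.11, 250)
# $conf -> (0.317, 0.392), $pred -> (0.285, 0.424)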

As usual, all my data and code can be found on my github. I wrote a general function in R to calculate the stabilization point for any basic counting stat, or unweighted sums of counting stats like OBP (I am still working on weighted sums so I can apply this to things like wOBA). The function returns the estimated league mean of the statistic and estimated stabilization point, a standard error for the stabilization point, and what model was used (I only have two programmed in - 1 for the beta-binomial and 2 for the gamma-Poisson). It also gives a plot of the estimated stabilization at different numbers of events, with 95% confidence bounds.

> stabilize(h$H + h$BB + h$HBP, h$PA, cutoff = 300, 1)  
$Parameters
[1]   0.3287902 301.1076958

$Standard.Error
[1] 18.45775

$Model
[1] "Beta-Binomial"






The confidence bounds are created from the estimates $\hat{M}$ and $SE(\hat{M})$ above and the formula

$$\left(\frac{n}{n + \hat{M}}\right) \pm 1.96\left[\frac{n}{(n + \hat{M})^2}\right] SE(\hat{M})$$

which is obtained from applying the delta method to the function $p(\hat{M}) = n/(n + \hat{M})$. Note that the mean and prediction intervals I gave do not take $SE(\hat{M})$ into account (ignoring the uncertainty surrounding the correct shrinkage amount, which is indicated by the confidence bounds above), but this is not a huge problem - if you don't believe me, plug slightly different values of $M$ into the formulas yourself and see that the resulting intervals do not change much.
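
In R, those delta-method bounds look like the sketch below; the plot from my stabilize function is built the same way, though this particular helper is mine for illustration.

# delta-method 95% bounds for the stabilization curve p(n) = n / (n + M)
stab.bounds <- function(n, M.hat, se.M) {
  est  <- n / (n + M.hat)
  half <- 1.96 * (n / (n + M.hat)^2) * se.M
  cbind(n = n, lower = est - half, est = est, upper = est + half)
}

stab.bounds(c(100, 300, 600), 301.11, 18.46)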

As always, feel free to post any comments or suggestions.



02 April, 2021

2021 Stabilization Points

These are my estimated stabilization points for the 2021 MLB season, once again using the maximum likelihood method on the totals that I used for previous years. This method is explained in my articles  Estimating Theoretical Stabilization Points and WHIP Stabilization by the Gamma-Poisson Model.

However, good news! In the past two years, I've had some research on reliability for non-normal data corrected, expanded upon, and published in academic journals. I can now say definitively that, assuming the model is correct, my maximum likelihood estimator targets the same reliability as Cronbach's alpha or KR-20 and performs as well as or better than Cronbach's alpha - and while no model is correct, I believe this one is very accurate. The article can be found here (for the preprint, click here). I also published a paper with some KR-20 and KR-21 reliability estimators specifically for exponential family distributions such as the binomial, Poisson, etc. The article can be found here (for the preprint, click here). Those estimators are a little more efficient for small sample sizes, but for large sample sizes such as in this case the two should be nearly identical.

As usual, all data and code I used for this post can be found on my github. I make no claims about the stability, efficiency, or optimality of my code.

I've included standard error estimates for 2021, but these should not be used to perform any kinds of tests or intervals to compare to the values from previous years, as those values are estimates themselves with their own standard errors, and approximately 5/6 of the data is common between the two estimates. The calculations I performed for 2015 can be found here for batting statistics and here for pitching statistics. The calculations for 2016 can be found here. The 2017 calculations can be found here. The 2018 calculations can be found here. The 2019 calculations can be found here. I didn't do calculations in 2020 because of the pandemic in general.


The cutoff values I picked were the minimum number of events (PA, AB, TBF, BIP, etc. - the denominators in the formulas) in order to be considered for a year. These cutoff values, and the choice of 6 years worth of data (2015-2020), were picked fairly arbitrarily - I tried to go with what was reasonable (based on seeing what others were doing and my own knowledge of baseball) and what seemed to work well in practice.

Offensive Statistics


| Stat | Formula | $\hat{M}$ | $SE(\hat{M})$ | $\hat{\mu}$ | Cutoff | 2019 $\hat{M}$ |
| --- | --- | --- | --- | --- | --- | --- |
| OBP | (H + BB + HBP)/PA | 302.57 | 18.39 | 0.331 | 300 | 295.20 |
| BABIP | (H - HR)/(AB-SO-HR+SF) | 451.24 | 47.22 | 0.306 | 300 | 431.49 |
| BA | H/AB | 511.71 | 42.78 | 0.265 | 300 | 488.49 |
| SO Rate | SO/PA | 50.37 | 2.12 | 0.205 | 300 | 49.05 |
| BB Rate | (BB-IBB)/(PA-IBB) | 100.47 | 4.67 | 0.080 | 300 | 104.08 |
| 1B Rate | 1B/PA | 191.17 | 10.20 | 0.150 | 300 | 197.43 |
| 2B Rate | 2B/PA | 1242.67 | 162.27 | 0.047 | 300 | 1200.46 |
| 3B Rate | 3B/PA | 481.11 | 28.74 | 0.005 | 300 | 421.91 |
| XBH Rate | (2B + 3B)/PA | 1059.31 | 124.09 | 0.052 | 300 | 1070.09 |
| HR Rate | HR/PA | 146.00 | 7.68 | 0.034 | 300 | 141.80 |
| HBP Rate | HBP/PA | 261.13 | 16.56 | 0.010 | 300 | 266.92 |



In general, a larger stabilization point will be due to a decreased spread of talent levels - as talent levels get closer together, more extreme stats become less and less likely, and will be shrunk harder towards the mean. Consequently, it takes more observations to know that a player's high or low stats (relative to the rest of the league) are real and not just a fluke of randomness. Similarly, smaller stabilization points will point towards an increase in the spread of talent levels.

The stabilization point of the 3B rate increased dramatically by approximately two standard deviations, indicating that the talent level of hitting triples has clustered more closely around its mean. In general, however, most stabilization points are roughly the same as the previous year, taking into account that year-to-year and sample-to-sample variation in estimates is expected even if the true stabilization points are not changing.

Pitching Statistics 


| Stat | Formula | $\hat{M}$ | $SE(\hat{M})$ | $\hat{\mu}$ | Cutoff | 2019 $\hat{M}$ |
| --- | --- | --- | --- | --- | --- | --- |
| BABIP | (H-HR)/(GB + FB + LD) | 1061.43 | 197.34 | 0.286 | 300 | 1184.38 |
| GB Rate | GB/(GB + FB + LD) | 66.20 | 4.25 | 0.443 | 300 | 64.51 |
| FB Rate | FB/(GB + FB + LD) | 62.33 | 3.97 | 0.346 | 300 | 60.68 |
| LD Rate | LD/(GB + FB + LD) | 1773.66 | 486.12 | 0.211 | 300 | 2197.02 |
| HR/FB Rate | HR/FB | 529.40 | 129.10 | 0.130 | 100 | 351.53 |
| SO Rate | SO/TBF | 80.78 | 4.97 | 0.214 | 400 | 90.86 |
| HR Rate | HR/TBF | 959.57 | 133.07 | 0.031 | 400 | 764.48 |
| BB Rate | (BB-IBB)/(TBF-IBB) | 251.22 | 19.47 | 0.072 | 400 | 230.09 |
| HBP Rate | HBP/TBF | 1035.90 | 153.68 | 0.009 | 400 | 906.25 |
| Hit rate | H/TBF | 453.30 | 37.52 | 0.232 | 400 | 496.56 |
| OBP | (H + BB + HBP)/TBF | 407.36 | 36.33 | 0.313 | 400 | 443.60 |
| WHIP | (H + BB)/IP* | 63.38 | 4.79 | 1.29 | 80 | 67.84 |
| ER Rate | ER/IP* | 57.73 | 4.30 | 0.460 | 80 | 57.97 |
| Extra BF | (TBF - 3IP*)/IP* | 64.70 | 4.92 | 1.23 | 80 | 67.23 |



* When dividing by IP, I corrected the 0.1 and 0.2 representations to 0.33 and 0.67, respectively. 

Most are the same, but the HR/FB stabilization point has shifted up dramatically given its standard error, indicating a likely change in the distribution of true talent levels and not just sample-to-sample and year-to-year variation. This indicates that the distribution of HR/FB talent levels is clustering around its mean, possibly indicating a change in approach by pitchers or batters over the past two years. The mean has also shifted up over the previous calculation. Similarly, the HR rate stabilization point and mean have increased. Conversely, the strikeout rate stabilization point has decreased, indicating less clustering of talent levels around the mean, and its mean has also increased.

Usage

 

Aside from the obvious use of knowing approximately when results are half due to luck and half due to skill, these stabilization points (along with league means) can be used to provide very basic confidence intervals and prediction intervals for estimates that have been shrunk towards the population mean, as demonstrated in my article From Stabilization to Interval Estimation.

For example, suppose that in the first half, a player has an on-base percentage of 0.380 in 300 plate appearances, corresponding to 114 on-base events. A 95% confidence interval using my empirical Bayesian techniques (based on a normal-normal model) is

$$\frac{114 + 0.331 \cdot 302.57}{300 + 302.57} \pm 1.96\sqrt{\frac{0.331(1 - 0.331)}{302.57 + 300}} = (0.318, 0.392)$$

That is, we believe the player's true on-base percentage to be between 0.318 and 0.392 with 95% confidence. I used a normal distribution for talent levels with a normal approximation to the binomial for the distribution of observed OBP, but that is not the only possible choice - it just resulted in the simplest formulas for the intervals.

Suppose that the player will get an additional $\tilde{n} = 250$ PA in the second half of the season. A 95% prediction interval for his OBP over those PA is given by

$$\frac{114 + 0.331 \cdot 302.57}{300 + 302.57} \pm 1.96\sqrt{\frac{0.331(1 - 0.331)}{302.57 + 300} + \frac{0.331(1 - 0.331)}{250}} = (0.286, 0.425)$$

That is, 95% of the time the player's OBP over the 250 PA in the second half of the season should be between 0.286 and 0.425. These intervals are overly optimistic and "dumb" in that they take only the league mean and variance and the player's own statistics into account, representing an advantage only over 95% "unshrunk" intervals, but when I tested them in my article "From Stabilization to Interval Estimation," they worked well for prediction.

As usual, all my data and code can be found on my github. I wrote a general function in R to calculate the stabilization point for any basic counting stat, or unweighted sums of counting stats like OBP (I am still working on weighted sums so I can apply this to things like wOBA). The function returns the estimated league mean of the statistic and estimated stabilization point, a standard error for the stabilization point, and what model was used (I only have two programmed in - 1 for the beta-binomial and 2 for the gamma-Poisson). It also gives a plot of the estimated stabilization at different numbers of events, with 95% confidence bounds.

> stabilize(h$H + h$BB + h$HBP, h$PA, cutoff = 300, 1)  
$Parameters
[1]   0.3306363 302.5670532

$Standard.Error
[1] 18.38593

$Model
[1] "Beta-Binomial"





The confidence bounds are created from the estimates $\hat{M}$ and $SE(\hat{M})$ above and the formula

$$\left(\frac{n}{n + \hat{M}}\right) \pm 1.96\left[\frac{n}{(n + \hat{M})^2}\right] SE(\hat{M})$$

which is obtained from applying the delta method to the function $p(\hat{M}) = n/(n + \hat{M})$. Note that the mean and prediction intervals I gave do not take $SE(\hat{M})$ into account (ignoring the uncertainty surrounding the correct shrinkage amount, which is indicated by the confidence bounds above), but this is not a huge problem - if you don't believe me, plug slightly different values of $M$ into the formulas yourself and see that the resulting intervals do not change much.

As always, feel free to post any comments or suggestions.


21 April, 2019

2019 Stabilization Points

These are my estimated stabilization points for the 2019 MLB season, once again using the maximum likelihood method on the totals that I used for previous years. This method is explained in my articles Estimating Theoretical Stabilization Points and WHIP Stabilization by the Gamma-Poisson Model.

(As usual, all data and code I used can be found on my github. I make no claims about the stability, efficiency, or optimality of my code.) 

I've included standard error estimates for 2019, but these should not be used to perform any kinds of tests or intervals to compare to the values from previous years, as those values are estimates themselves with their own standard errors, and approximately 5/6 of the data is common between the two estimates. The calculations I performed for 2015 can be found here for batting statistics and here for pitching statistics. The calculations for 2016 can be found here. The 2017 calculations can be found here. The 2018 calculations can be found here.

The cutoff values I picked were the minimum number of events (PA, AB, TBF, BIP, etc. - the denominators in the formulas) in order to be considered for a year. These cutoff values, and the choice of 6 years worth of data (2013-2018), were picked fairly arbitrarily - I tried to go with what was reasonable (based on seeing what others were doing and my own knowledge of baseball) and what seemed to work well in practice.

Offensive Statistics


| Stat | Formula | $\hat{M}$ | $SE(\hat{M})$ | $\hat{\mu}$ | Cutoff | 2018 $\hat{M}$ |
| --- | --- | --- | --- | --- | --- | --- |
| OBP | (H + BB + HBP)/PA | 295.20 | 16.26 | 0.329 | 300 | 302.27 |
| BABIP | (H - HR)/(AB-SO-HR+SF) | 431.49 | 39.76 | 0.306 | 300 | 429.47 |
| BA | H/AB | 488.49 | 36.52 | 0.264 | 300 | 463.19 |
| SO Rate | SO/PA | 49.05 | 1.88 | 0.198 | 300 | 48.74 |
| BB Rate | (BB-IBB)/(PA-IBB) | 104.08 | 4.45 | 0.078 | 300 | 108.84 |
| 1B Rate | 1B/PA | 197.43 | 9.72 | 0.154 | 300 | 200.94 |
| 2B Rate | 2B/PA | 1200.46 | 140.37 | 0.047 | 300 | 1164.82 |
| 3B Rate | 3B/PA | 421.91 | 31.67 | 0.005 | 300 | 390.75 |
| XBH Rate | (2B + 3B)/PA | 1070.09 | 115.96 | 0.052 | 300 | 1064.01 |
| HR Rate | HR/PA | 141.80 | 6.78 | 0.030 | 300 | 132.52 |
| HBP Rate | HBP/PA | 266.92 | 15.74 | 0.009 | 300 | 280.00 |


In general, a larger stabilization point will be due to a decreased spread of talent levels - as talent levels get closer together, more extreme stats become less and less likely, and will be shrunk harder towards the mean. Consequently, it takes more observations to know that a player's high or low stats (relative to the rest of the league) are real and not just a fluke of randomness. Similarly, smaller stabilization points will point towards an increase in the spread of talent levels.

Noticeably, the stabilization point for the HR rate has increased over the past four years, indicating less variance in talent level of hitting home runs. Meanwhile, the stabilization point for HBP rate has decreased over the past four years, suggesting increased variance in """talent""" level of getting hit by pitches.

Pitching Statistics 


| Stat | Formula | $\hat{M}$ | $SE(\hat{M})$ | $\hat{\mu}$ | Cutoff | 2018 $\hat{M}$ |
| --- | --- | --- | --- | --- | --- | --- |
| BABIP | (H-HR)/(GB + FB + LD) | 1184.38 | 206.63 | 0.288 | 300 | 1322.70 |
| GB Rate | GB/(GB + FB + LD) | 64.51 | 3.66 | 0.446 | 300 | 63.12 |
| FB Rate | FB/(GB + FB + LD) | 60.68 | 3.41 | 0.344 | 300 | 59.80 |
| LD Rate | LD/(GB + FB + LD) | 2197.02 | 622.02 | 0.210 | 300 | 2157.15 |
| HR/FB Rate | HR/FB | 351.53 | 56.05 | 0.117 | 100 | 388.61 |
| SO Rate | SO/TBF | 90.86 | 5.07 | 0.204 | 400 | 93.52 |
| HR Rate | HR/TBF | 764.48 | 82.78 | 0.028 | 400 | 790.97 |
| BB Rate | (BB-IBB)/(TBF-IBB) | 230.09 | 15.46 | 0.071 | 400 | 238.70 |
| HBP Rate | HBP/TBF | 906.25 | 109.63 | 0.009 | 400 | 935.61 |
| Hit rate | H/TBF | 496.56 | 39.48 | 0.233 | 400 | 536.32 |
| OBP | (H + BB + HBP)/TBF | 443.60 | 36.42 | 0.312 | 400 | 472.09 |
| WHIP | (H + BB)/IP* | 67.84 | 4.69 | 1.28 | 80 | 71.10 |
| ER Rate | ER/IP* | 57.97 | 3.87 | 0.444 | 80 | 58.59 |
| Extra BF | (TBF - 3IP*)/IP* | 67.23 | 4.64 | 1.22 | 80 | 69.11 |


* When dividing by IP, I corrected the 0.1 and 0.2 representations to 0.33 and 0.67, respectively. 

Most statistics this year shifted not just in stabilization point, but also in mean, possibly indicating a shift in the pitching environment. The stabilization points which did shift tended to shift down, indicating an increased spread of variation around the mean talent levels.

Usage

 


Aside from the obvious use of knowing approximately when results are half due to luck and half due to skill, these stabilization points (along with league means) can be used to provide very basic confidence intervals and prediction intervals for estimates that have been shrunk towards the population mean, as demonstrated in my article From Stabilization to Interval Estimation.

For example, suppose that in the first half, a player has an on-base percentage of 0.380 in 300 plate appearances, corresponding to 114 on-base events. A 95% confidence interval using my empirical Bayesian techniques (based on a normal-normal model) is

$$\frac{114 + 0.329 \cdot 295.20}{300 + 295.20} \pm 1.96\sqrt{\frac{0.329(1 - 0.329)}{295.20 + 300}} = (0.317, 0.392)$$

That is, we believe the player's true on-base percentage to be between 0.317 and 0.392 with 95% confidence. I used a normal distribution for talent levels with a normal approximation to the binomial for the distribution of observed OBP, but that is not the only possible choice - it just resulted in the simplest formulas for the intervals.

Suppose that the player will get an additional $\tilde{n} = 250$ PA in the second half of the season. A 95% prediction interval for his OBP over those PA is given by

$$\frac{114 + 0.329 \cdot 295.20}{300 + 295.20} \pm 1.96\sqrt{\frac{0.329(1 - 0.329)}{295.20 + 300} + \frac{0.329(1 - 0.329)}{250}} = (0.285, 0.424)$$

That is, 95% of the time the player's OBP over the 250 PA in the second half of the season should be between 0.285 and 0.424. These intervals are overly optimistic and "dumb" in that they take only the league mean and variance and the player's own statistics into account, representing an advantage only over 95% "unshrunk" intervals, but when I tested them in my article "From Stabilization to Interval Estimation," they worked well for prediction.

As usual, all my data and code can be found on my github. I wrote a general function in R to calculate the stabilization point for any basic counting stat, or unweighted sums of counting stats like OBP (I am still working on weighted sums so I can apply this to things like wOBA). The function returns the estimated league mean of the statistic and estimated stabilization point, a standard error for the stabilization point, and what model was used (I only have two programmed in - 1 for the beta-binomial and 2 for the gamma-Poisson). It also gives a plot of the estimated stabilization at different numbers of events, with 95% confidence bounds.

> stabilize(h$H + h$BB + h$HBP, h$PA, cutoff = 300, 1)  
$Parameters
[1]   0.3285272 295.1970047

$Standard.Error
[1] 16.25874

$Model
[1] "Beta-Binomial"



The confidence bounds are created from the estimates $\hat{M}$ and $SE(\hat{M})$ above and the formula

$$\left(\frac{n}{n + \hat{M}}\right) \pm 1.96\left[\frac{n}{(n + \hat{M})^2}\right] SE(\hat{M})$$

which is obtained from applying the delta method to the function $p(\hat{M}) = n/(n + \hat{M})$. Note that the mean and prediction intervals I gave do not take $SE(\hat{M})$ into account (ignoring the uncertainty surrounding the correct shrinkage amount, which is indicated by the confidence bounds above), but this is not a huge problem - if you don't believe me, plug slightly different values of $M$ into the formulas yourself and see that the resulting intervals do not change much.

As always, feel free to post any comments or suggestions.

05 September, 2018

2018 Stabilization Points

So this post is waaaaay late in the 2018 season. I've been busy! But I'm doing this again since it's pretty easy to do, and I'm copying and pasting the text from the last two years' posts, because I can.

These are my estimated stabilization points for the 2018 MLB season, once again using the maximum likelihood method on the totals that I used for previous years. This method is explained in my articles Estimating Theoretical Stabilization Points and WHIP Stabilization by the Gamma-Poisson Model.

(As usual, all data and code I used can be found on my github. I make no claims about the stability, efficiency, or optimality of my code.) 

I've included standard error estimates for 2018, but these should not be used to perform any kinds of tests or intervals to compare to the values from previous years, as those values are estimates themselves with their own standard errors, and approximately 5/6 of the data is common between the two estimates. The calculations I performed for 2015 can be found here for batting statistics and here for pitching statistics. The calculations for 2016 can be found here. The 2017 calculations can be found here.

The cutoff values I picked were the minimum number of events (PA, AB, TBF, BIP, etc. - the denominators in the formulas) in order to be considered for a year. These cutoff values, and the choice of 6 years worth of data (2012-2017), were picked fairly arbitrarily - I tried to go with what was reasonable (based on seeing what others were doing and my own knowledge of baseball) and what seemed to work well in practice.

Offensive Statistics


| Stat | Formula | $\hat{M}$ | $SE(\hat{M})$ | $\hat{\mu}$ | Cutoff | 2017 $\hat{M}$ |
| --- | --- | --- | --- | --- | --- | --- |
| OBP | (H + BB + HBP)/PA | 302.27 | 16.88 | 0.329 | 300 | 303.77 |
| BABIP | (H - HR)/(AB-SO-HR+SF) | 429.47 | 39.30 | 0.306 | 300 | 442.62 |
| BA | H/AB | 463.19 | 33.94 | 0.266 | 300 | 466.09 |
| SO Rate | SO/PA | 48.74 | 1.88 | 0.194 | 300 | 49.02 |
| BB Rate | (BB-IBB)/(PA-IBB) | 108.84 | 4.72 | 0.077 | 300 | 113.64 |
| 1B Rate | 1B/PA | 200.94 | 9.99 | 0.156 | 300 | 215.29 |
| 2B Rate | 2B/PA | 1164.82 | 134.26 | 0.047 | 300 | 1230.96 |
| 3B Rate | 3B/PA | 390.75 | 28.72 | 0.005 | 300 | 358.92 |
| XBH Rate | (2B + 3B)/PA | 1064.01 | 115.55 | 0.052 | 300 | 1063.76 |
| HR Rate | HR/PA | 132.52 | 6.31 | 0.030 | 300 | 129.02 |
| HBP Rate | HBP/PA | 280.00 | 16.89 | 0.009 | 300 | 299.39 |


In general, a larger stabilization point will be due to a decreased spread of talent levels - as talent levels get closer together, more extreme stats become less and less likely, and will be shrunk  harder towards the mean. Consequently, it takes more observations to know that a player's high or low stats (relative to the rest of the league) are real and not just a fluke of randomness. Similarly, smaller stabilization points will point towards an increase in the spread of talent levels.

Pitching Statistics 


| Stat | Formula | $\hat{M}$ | $SE(\hat{M})$ | $\hat{\mu}$ | Cutoff | 2017 $\hat{M}$ |
| --- | --- | --- | --- | --- | --- | --- |
| BABIP | (H-HR)/(GB + FB + LD) | 1322.70 | 244.54 | 0.289 | 300 | 1356.06 |
| GB Rate | GB/(GB + FB + LD) | 63.12 | 3.55 | 0.450 | 300 | 63.12 |
| FB Rate | FB/(GB + FB + LD) | 59.86 | 3.34 | 0.341 | 300 | 59.80 |
| LD Rate | LD/(GB + FB + LD) | 2157.15 | 586.96 | 0.209 | 300 | 1497.65 |
| HR/FB Rate | HR/FB | 388.61 | 65.28 | 0.115 | 100 | 464.60 |
| SO Rate | SO/TBF | 93.52 | 5.25 | 0.199 | 400 | 94.62 |
| HR Rate | HR/TBF | 790.97 | 86.34 | 0.029 | 400 | 942.62 |
| BB Rate | (BB-IBB)/(TBF-IBB) | 238.70 | 16.10 | 0.070 | 400 | 237.53 |
| HBP Rate | HBP/TBF | 935.61 | 115.06 | 0.008 | 400 | 954.09 |
| Hit rate | H/TBF | 536.32 | 43.99 | 0.235 | 400 | 550.69 |
| OBP | (H + BB + HBP)/TBF | 472.09 | 39.51 | 0.313 | 400 | 496.39 |
| WHIP | (H + BB)/IP* | 71.10 | 4.96 | 1.29 | 80 | 74.68 |
| ER Rate | ER/IP* | 58.59 | 3.91 | 0.447 | 80 | 62.82 |
| Extra BF | (TBF - 3IP*)/IP* | 69.11 | 4.79 | 1.22 | 80 | 73.11 |


* When dividing by IP, I corrected the 0.1 and 0.2 representations to 0.33 and 0.67, respectively. 

Most statistics are roughly the same; however, the line drive stabilization point has increased quite a bit, after already having roughly doubled between the 2016 and 2017 calculations. This is not a mistake - it corresponds to a decrease in the variance of line drive rates. Noticeably, the HR rate variance increased, and so the HR rate stabilization point decreased. This indicates a shift in the MLB pitching environment in these particular areas, and points to a weakness in the method: if the underlying league distribution of talent levels for a statistic is changing rapidly, this method will fail to account for the change and may be inaccurate.

 

Usage

 


Aside from the obvious use of knowing approximately when results are half due to luck and half from skill, these stabilization points (along with league means) can be used to provide very basic confidence intervals and prediction intervals for estimates that have been shrunk towards the population mean, as demonstrated in my article From Stabilization to Interval Estimation. I believe the confidence intervals from my method should be similar to the intervals from Sean Dolinar's great fangraphs article A New Way to Look at Sample Size, though I have not personally tested this, and am not familiar with the Cronbach's alpha methodology he uses (or with reliability analysis in general).

For example, suppose that in the first half, a player has an on-base percentage of 0.380 in 300 plate appearances, corresponding to 114 on-base events. A 95% confidence interval using my empirical Bayesian techniques (based on a normal-normal model) is

$$\frac{114 + 0.329 \cdot 301.32}{300 + 301.32} \pm 1.96\sqrt{\frac{0.329(1 - 0.329)}{301.32 + 300}} = (0.317, 0.392)$$

That is, we believe the player's true on-base percentage to be between 0.317 and 0.392 with 95% confidence. I used a normal distribution for talent levels with a normal approximation to the binomial for the distribution of observed OBP, but that is not the only possible choice - it just resulted in the simplest formulas for the intervals.

Suppose that the player will get an additional $\tilde{n} = 250$ PA in the second half of the season. A 95% prediction interval for his OBP over those PA is given by

$$\frac{114 + 0.329 \cdot 301.32}{300 + 301.32} \pm 1.96\sqrt{\frac{0.329(1 - 0.329)}{301.32 + 300} + \frac{0.329(1 - 0.329)}{250}} = (0.285, 0.424)$$

That is, 95% of the time the player's OBP over the 250 PA in the second half of the season should be between 0.285 and 0.424. These intervals are overly optimistic and "dumb" in that they take only the league mean and variance and the player's own statistics into account, representing an advantage only over 95% unshrunk intervals, but when I tested them in my article "From Stabilization to Interval Estimation", they worked well for prediction.

As usual, all my data and code can be found on my github. I wrote a general function in R to calculate the stabilization point for any basic counting stat, or unweighted sums of counting stats like OBP (I am still working on weighted sums so I can apply this to things like wOBA). The function returns the estimated league mean of the statistic and estimated stabilization point, a standard error for the stabilization point, and what model was used (I only have two programmed in - 1 for the beta-binomial and 2 for the gamma-Poisson). It also gives a plot of the estimated stabilization at different numbers of events, with 95% confidence bounds.

> stabilize(h$H + h$BB + h$HBP, h$PA, cutoff = 300, 1)  
$Parameters
[1]   0.329098 301.317682

$Standard.Error
[1] 16.92138

$Model
[1] "Beta-Binomial"



The confidence bounds are created from the estimates $\hat{M}$ and $SE(\hat{M})$ above and the formula

$$\left(\frac{n}{n + \hat{M}}\right) \pm 1.96\left[\frac{n}{(n + \hat{M})^2}\right] SE(\hat{M})$$

which is obtained from applying the delta method to the function $p(\hat{M}) = n/(n + \hat{M})$. Note that the mean and prediction intervals I gave do not take $SE(\hat{M})$ into account (ignoring the uncertainty surrounding the correct shrinkage amount, which is indicated by the confidence bounds above), but this is not a huge problem - if you don't believe me, plug slightly different values of $M$ into the formulas yourself and see that the resulting intervals do not change much.

Maybe somebody else out there might find this useful. As always, feel free to post any comments or suggestions!

24 April, 2017

2017 Stabilization Points

Once again, I recalculated stabilization points for 2017 MLB data, once again using the maximum likelihood method on the totals that I used for 2015 and 2016. This method is explained in my articles Estimating Theoretical Stabilization Points and WHIP Stabilization by the Gamma-Poisson Model.

(As usual, all data and code I used can be found on my github. I make no claims about the stability, efficiency, or optimality of my code.) 

I've included standard error estimates for 2017, but these should not be used to perform any kinds of tests or intervals to compare to the values from previous years, as those values are estimates themselves with their own standard errors, and approximately 5/6 of the data is common between the two estimates. The calculations I performed for 2015 can be found here for batting statistics and here for pitching statistics. The calculations for 2016 can be found here.

The cutoff values I picked were the minimum number of events (PA, AB, TBF, BIP, etc. - the denominators in the formulas) in order to be considered for a year. These cutoff values, and the choice of 6 years worth of data, were picked fairly arbitrarily - I tried to go with what was reasonable (based on seeing what others were doing and my own knowledge of baseball) and what seemed to work well in practice.

Offensive Statistics


| Stat | Formula | $\hat{M}$ | $SE(\hat{M})$ | $\hat{\mu}$ | Cutoff | 2016 $\hat{M}$ |
| --- | --- | --- | --- | --- | --- | --- |
| OBP | (H + BB + HBP)/PA | 303.77 | 17.08 | 0.328 | 300 | 301.32 |
| BABIP | (H - HR)/(AB-SO-HR+SF) | 442.62 | 40.55 | 0.306 | 300 | 433.04 |
| BA | H/AB | 466.09 | 34.30 | 0.266 | 300 | 491.20 |
| SO Rate | SO/PA | 49.02 | 1.90 | 0.188 | 300 | 49.23 |
| BB Rate | (BB-IBB)/(PA-IBB) | 113.64 | 5.00 | 0.077 | 300 | 112.44 |
| 1B Rate | 1B/PA | 215.29 | 10.95 | 0.157 | 300 | 223.86 |
| 2B Rate | 2B/PA | 1230.96 | 148.48 | 0.047 | 300 | 1169.75 |
| 3B Rate | 3B/PA | 358.92 | 25.71 | 0.005 | 300 | 365.06 |
| XBH Rate | (2B + 3B)/PA | 1063.76 | 116.54 | 0.052 | 300 | 1075.41 |
| HR Rate | HR/PA | 129.02 | 6.18 | 0.028 | 300 | 126.35 |
| HBP Rate | HBP/PA | 299.39 | 18.60 | 0.009 | 300 | 300.97 |


In general, a larger stabilization point will be due to a decreased spread of talent levels - as talent levels get closer together, more extreme stats become less and less likely, and will be shrunk  harder towards the mean. Consequently, it takes more observations to know that a player's high or low stats (relative to the rest of the league) are real and not just a fluke of randomness. Similarly, smaller stabilization points will point towards an increase in the spread of talent levels.

Pitching Statistics 


| Stat | Formula | $\hat{M}$ | $SE(\hat{M})$ | $\hat{\mu}$ | Cutoff | 2016 $\hat{M}$ |
| --- | --- | --- | --- | --- | --- | --- |
| BABIP | (H-HR)/(GB + FB + LD) | 1356.06 | 247.48 | 0.289 | 300 | 1408.72 |
| GB Rate | GB/(GB + FB + LD) | 64.00 | 3.56 | 0.450 | 300 | 63.53 |
| FB Rate | FB/(GB + FB + LD) | 61.73 | 3.42 | 0.342 | 300 | 59.80 |
| LD Rate | LD/(GB + FB + LD) | 1497.65 | 296.21 | 0.208 | 300 | 731.02 |
| HR/FB Rate | HR/FB | 464.60 | 85.51 | 0.108 | 100 | 488.53 |
| SO Rate | SO/TBF | 94.62 | 5.29 | 0.194 | 400 | 93.15 |
| HR Rate | HR/TBF | 942.62 | 110.66 | 0.026 | 400 | 949.02 |
| BB Rate | (BB-IBB)/(TBF-IBB) | 237.53 | 15.84 | 0.069 | 400 | 236.87 |
| HBP Rate | HBP/TBF | 954.09 | 115.60 | 0.008 | 400 | 939.00 |
| Hit rate | H/TBF | 550.69 | 45.63 | 0.235 | 400 | 559.18 |
| OBP | (H + BB + HBP)/TBF | 496.39 | 41.81 | 0.312 | 400 | 526.77 |
| WHIP | (H + BB)/IP* | 74.68 | 5.25 | 1.29 | 80 | 78.97 |
| ER Rate | ER/IP* | 62.82 | 4.24 | 0.440 | 80 | 63.08 |
| Extra BF | (TBF - 3IP*)/IP* | 73.11 | 5.11 | 1.22 | 80 | 75.79 |


* When dividing by IP, I corrected the 0.1 and 0.2 representations to 0.33 and 0.67, respectively. 

Most statistics are roughly the same; however, the line drive stabilization point has roughly doubled. I checked my calculations for both years and this is not a mistake. It corresponds to a decrease in the variance of line drive rates. It should also be noted that the average line drive rate increased from 0.203 to 0.208 - there are perhaps remnants of an odd 2010 that is no longer included in the data set.

 

Usage

 

Note: This section is largely unchanged from the previous year's version. The formulas given here work for "counting" offensive stats (OBP, BA, etc.).

Aside from the obvious use of knowing approximately when results are half due to luck and half from skill, these stabilization points (along with league means) can be used to provide very basic confidence intervals and prediction intervals for estimates that have been shrunk towards the population mean, as demonstrated in my article From Stabilization to Interval Estimation. I believe the confidence intervals from my method should be similar to the intervals from Sean Dolinar's great fangraphs article A New Way to Look at Sample Size, though I have not personally tested this, and am not familiar with the Cronbach's alpha methodology he uses (or with reliability analysis in general).

For example, suppose that in the first half, a player has an on-base percentage of 0.380 in 300 plate appearances, corresponding to 114 on-base events. A 95% confidence interval using my empirical Bayesian techniques (based on a normal-normal model) is

$$\frac{114 + 0.329 \cdot 301.32}{300 + 301.32} \pm 1.96\sqrt{\frac{0.329(1 - 0.329)}{301.32 + 300}} = (0.317, 0.392)$$

That is, we believe the player's true on-base percentage to be between 0.317 and 0.392 with 95% confidence. I used a normal distribution for talent levels with a normal approximation to the binomial for the distribution of observed OBP, but that is not the only possible choice - it just resulted in the simplest formulas for the intervals.

Suppose that the player will get an additional $\tilde{n} = 250$ PA in the second half of the season. A 95% prediction interval for his OBP over those PA is given by

$$\frac{114 + 0.329 \cdot 301.32}{300 + 301.32} \pm 1.96\sqrt{\frac{0.329(1 - 0.329)}{301.32 + 300} + \frac{0.329(1 - 0.329)}{250}} = (0.285, 0.424)$$

That is, 95% of the time the player's OBP over the 250 PA in the second half of the season should be between 0.285 and 0.424. These intervals are overly optimistic and "dumb" in that they take only the league mean and variance and the player's own statistics into account, representing an advantage only over 95% unshrunk intervals, but when I tested them in my article "From Stabilization to Interval Estimation", they worked well for prediction.

As usual, all my data and code can be found on my github. I wrote a general function in R to calculate the stabilization point for any basic counting stat, or unweighted sums of counting stats like OBP (I am still working on weighted sums so I can apply this to things like wOBA ). The function returns the estimated league mean of the statistic and estimated stabilization point, a standard error for the stabilization point, and what model was used (I only have two programmed in - 1 for the beta-binomial and 2 for the gamma-Poisson). It also gives a plot of the estimated stabilization at different numbers of events, with 95% confidence bounds.

> stabilize(h$H + h$BB + h$HBP, h$PA, cutoff = 300, 1)  
$Parameters
[1]   0.329098 301.317682

$Standard.Error
[1] 16.92138

$Model
[1] "Beta-Binomial"



The confidence bounds are created from the estimates $\hat{M}$ and $SE(\hat{M})$ above and the formula

$$\left(\frac{n}{n + \hat{M}}\right) \pm 1.96\left[\frac{n}{(n + \hat{M})^2}\right] SE(\hat{M})$$

which is obtained from applying the delta method to the function $p(\hat{M}) = n/(n + \hat{M})$. Note that the mean and prediction intervals I gave do not take $SE(\hat{M})$ into account (ignoring the uncertainty surrounding the correct shrinkage amount, which is indicated by the confidence bounds above), but this is not a huge problem - if you don't believe me, plug slightly different values of $M$ into the formulas yourself and see that the resulting intervals do not change much.

Maybe somebody else out there might find this useful. As always, feel free to post any comments or suggestions!

03 September, 2016

2016 Win Total Predictions (Through August 31)


These predictions are based on my own silly estimator, which I know can be improved with some effort on my part. There's some work related to this estimator that I'm trying to get published academically, so I won't talk about the technical details yet (not that they're particularly mind-blowing anyway). These predictions include all games played through August 31.

As a side note, I noticed that my projections are very similar to the Fangraphs projections on the same day. I'm sure we're calculating the projections with completely different methods, but it's reassuring that others have arrived at basically the same conclusions. Theirs also have playoff projections, though mine have intervals attached.

I set the nominal coverage at 95% (meaning that, the way I calculated them, the intervals should get it right 95% of the time), but based on tests of earlier seasons at this point in the year, the actual coverage is around 94%, with intervals usually being one game off if and when they miss.

Intervals are inclusive. All win totals assume a 162 game schedule.

| Team | Lower | Mean | Upper | True Win Total | Current Wins/Games |
| --- | --- | --- | --- | --- | --- |
| ARI | 63 | 68.82 | 74 | 71.61 | 56/133 |
| ATL | 57 | 62.25 | 68 | 68.41 | 50/133 |
| BAL | 81 | 86.57 | 92 | 81.42 | 72/133 |
| BOS | 85 | 90.41 | 96 | 91.7 | 74/133 |
| CHC | 98 | 103.63 | 109 | 100.59 | 85/132 |
| CHW | 71 | 76.85 | 83 | 77.61 | 62/131 |
| CIN | 62 | 67.9 | 74 | 69.67 | 55/132 |
| CLE | 87 | 92.68 | 98 | 90.03 | 76/132 |
| COL | 73 | 78.63 | 84 | 81.72 | 64/133 |
| DET | 81 | 86.95 | 92 | 83.51 | 72/133 |
| HOU | 81 | 86.24 | 92 | 85.14 | 71/133 |
| KCR | 78 | 83.09 | 89 | 78.69 | 69/133 |
| LAA | 67 | 72.93 | 78 | 77.8 | 59/133 |
| LAD | 84 | 89.44 | 95 | 86.26 | 74/133 |
| MIA | 76 | 81.66 | 87 | 81.91 | 67/133 |
| MIL | 65 | 70.13 | 76 | 73.34 | 57/133 |
| MIN | 56 | 61.98 | 68 | 70.1 | 49/132 |
| NYM | 78 | 83.61 | 89 | 81.58 | 69/133 |
| NYY | 78 | 83.86 | 90 | 80.28 | 69/132 |
| OAK | 64 | 69.88 | 75 | 71.92 | 57/133 |
| PHI | 67 | 72.42 | 78 | 69.38 | 60/133 |
| PIT | 77 | 82.37 | 88 | 80.33 | 67/131 |
| SDP | 63 | 68.73 | 74 | 74.14 | 55/132 |
| SEA | 77 | 82.55 | 88 | 81.3 | 68/133 |
| SFG | 82 | 88 | 94 | 86.39 | 72/132 |
| STL | 81 | 86.25 | 92 | 87.69 | 70/132 |
| TBR | 65 | 70.72 | 76 | 79.47 | 56/132 |
| TEX | 89 | 94.45 | 100 | 83.61 | 80/134 |
| TOR | 86 | 91.93 | 97 | 89 | 76/133 |
| WSN | 89 | 94.82 | 100 | 94.01 | 78/133 |


These quantiles are based off of a distribution - I've uploaded a picture of each team's distribution to imgur. The bars in red are the win total values covered by the 95% interval. The blue line represents my estimate of the team's "True Win Total" based on its performance - so if the blue line is to the left of the peak, the team is predicted to finish "lucky" (more wins than would be expected based on their talent level), and if the blue line is to the right of the peak, the team is predicted to finish "unlucky" (fewer wins than would be expected based on their talent level).

It's still difficult to predict final win totals even at the beginning of September - intervals have a width of approximately 11-12 games. The Texas Rangers have been lucky this season, with a projected win total over 10 games larger than their estimated true talent level! Conversely, the Tampa Bay Rays have been unlucky, with a projected win total nearly 9 games lower than their true talent level.

The Chicago Cubs have a good chance at winning 105+ games. My system believes they are a "true" 101-win team. Conversely, the system believes that the worst team is the Atlanta Braves, who are a "true" 68-win team (though the Minnesota Twins are projected to have the worst record at 62 wins).



Terminology




To explain the difference between "Mean" and "True Win Total": imagine flipping a fair coin 10 times. The number of heads you expect is 5 - this is what I have called "True Win Total," representing my best guess at the true ability of the team over 162 games. However, if you pause halfway through and note that in the first 5 flips there were 4 heads, the predicted total number of heads becomes 4 + 0.5(5) = 6.5 - this is what I have called "Mean," representing the expected number of wins based on true ability over the remaining schedule, added to the current number of wins (from the beginning of the season through August 31).
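
In code form, the "Mean" column is just this one-line update (a sketch of the idea, not my actual projection code):

# expected final wins = current wins + true winning rate * remaining games
true.rate <- 0.5     # coin example: a true .500 team
wins.so.far <- 4     # "wins" in the first 5 flips
remaining <- 5
wins.so.far + true.rate * remaining  # 6.5 - the "Mean"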