Performance Perspectives Blog

Should there be a minimum number of months to calculate standard deviation?

Jan 2, 2013

You’re probably familiar with the expression “just because you can do something, doesn’t mean you should.” There have been several times when this saying has come in handy, and this post deals with one more case: standard deviation.

In performance measurement we can calculate standard deviation across accounts within a single time period, to measure dispersion. For example, when reporting your 2012 composite return in your GIPS(R) (Global Investment Performance Standards) composite presentations, if you have six or more accounts that were present for the full year, you’re required to report a measure of dispersion, and standard deviation is often the measure of choice. In this role, standard deviation serves as a measure of dispersion.
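To make the mechanics concrete, here is a minimal sketch in Python using hypothetical account returns. GIPS does not dictate every detail of the calculation; the equal-weighted population form shown here is just one common choice:

```python
import statistics

# Hypothetical 2012 annual returns for the six accounts in a composite
account_returns = [0.082, 0.075, 0.091, 0.078, 0.088, 0.069]

# Equal-weighted standard deviation of the account returns serves as the
# composite's internal dispersion measure (population form shown).
dispersion = statistics.pstdev(account_returns)
print(f"Composite dispersion: {dispersion:.4%}")
```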

Standard deviation can also be used to measure volatility (or variability, if you prefer) across a time period. GIPS now requires compliant firms to report, on an annual basis, an ex post (i.e., backward-looking), annualized, three-year standard deviation for the composite and its benchmark, measured over the prior 36 months. If the firm does not have returns for the prior 36 months, it is exempt from reporting it. In this form, standard deviation serves as a measure of volatility, variability, or risk.
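A common convention, sketched below with placeholder returns, is to take the standard deviation of the 36 monthly returns and annualize it by multiplying by the square root of 12. The return series and the choice of the sample form are assumptions for illustration:

```python
import statistics

# 36 monthly composite returns, as decimals (placeholder values)
monthly_returns = [0.012, -0.004, 0.008] * 12

assert len(monthly_returns) == 36  # GIPS calls for the trailing 36 months

# Sample standard deviation of the monthly returns, annualized by sqrt(12)
monthly_sd = statistics.stdev(monthly_returns)
annualized_sd = monthly_sd * 12 ** 0.5
print(f"Three-year annualized ex post standard deviation: {annualized_sd:.4%}")
```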

I recently read Michael J. Mauboussin’s new book, The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing. He writes about the problems associated with relying on a small sample size, citing examples that other authors have referenced as well. For example, small counties in the United States often exhibit the lowest incidence rates of certain forms of cancer, which may prompt some to think that moving to a smaller county would be good for their health. That is, until they also learn that the highest incidence rates of these same cancers are found in small counties, too. The point: outliers tend to show up in small samples.

This caused me to think about standard deviation. I am sometimes asked whether it is appropriate for a firm that doesn’t yet have 36 months of returns to show standard deviation for the period it does have returns for; for example, 12 or 24 months. I usually say that I think it’s fine to do this. But now I’m wondering: is it?

Interpreting standard deviation in the usual way rests on an assumption of a roughly normal distribution, and in statistics the common rule of thumb is to have at least 30 observations in the calculation. The problem with a smaller sample size (12 or 24 months, for example) is that the estimate itself is unstable, so the results may be misleading.
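A quick simulation illustrates the instability. The sketch below, with assumed parameters, draws monthly returns from a normal distribution with a known volatility and shows how much more widely the estimated standard deviation scatters at 12 observations than at 36:

```python
import random
import statistics

random.seed(1)

TRUE_MONTHLY_SD = 0.03  # assumed "true" monthly volatility
TRIALS = 10_000

# For each sample size, repeatedly draw that many monthly returns and
# estimate the standard deviation; smaller samples produce a much wider
# spread of estimates around the true value.
for n in (12, 24, 36):
    estimates = [
        statistics.stdev([random.gauss(0.0, TRUE_MONTHLY_SD) for _ in range(n)])
        for _ in range(TRIALS)
    ]
    print(f"n={n:2d}: min={min(estimates):.4f}, max={max(estimates):.4f}, "
          f"spread of estimates={statistics.pstdev(estimates):.4f}")
```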

In our industry, the “general rule” is not to annualize returns for periods of less than a year: this isn’t because of small sample size (although perhaps that might actually be a valid reason, too), but because annualizing implicitly uses past performance to predict future results. Should there be a similar rule not to report standard deviation in cases where the firm doesn’t have at least 30 months of returns?
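To see why annualizing amounts to a prediction, consider a hypothetical example:

```python
# A hypothetical 5% return earned over six months, annualized geometrically
six_month_return = 0.05
annualized = (1 + six_month_return) ** (12 / 6) - 1
print(f"{annualized:.2%}")  # 10.25% -- implicitly assumes the next six months repeat
```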
