In the March 27, 2012 issue of the WSJ, Dan Fitzpatrick and Victoria McGrane discussed how large banks that released “stress test” results are being questioned by the Federal Reserve on the approaches they took to derive those results (“Banks Stress Over Fed Test Methods”). It seems there can be multiple ways to calculate the results, and the results can be materially different.
In a recent software verification assignment, I discovered that our client employed an unusual way to calculate some of their statistics, including the Sharpe and information ratios. In both cases, they annualized the numerator and the denominator separately, and only then performed the division.
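For illustration, the two approaches might be sketched as follows. The figures are hypothetical (not the client’s actual data), and I am assuming, for the sake of the example, that “annualizing the numerator” means geometrically compounding the monthly returns, while the denominator is annualized by the usual square-root-of-twelve rule:

```python
import math
import statistics

# Hypothetical monthly portfolio returns and a flat monthly
# risk-free rate (illustrative figures only).
monthly_returns = [0.021, -0.013, 0.008, 0.015, -0.004, 0.011,
                   0.019, -0.007, 0.012, 0.003, 0.016, -0.009]
monthly_rf = 0.002

excess = [r - monthly_rf for r in monthly_returns]
monthly_stdev = statistics.stdev(monthly_returns)

# "Traditional" way: compute the monthly Sharpe ratio first,
# then annualize the ratio itself by multiplying by sqrt(12).
traditional = (statistics.mean(excess) / monthly_stdev) * math.sqrt(12)

# Client's way (as I understand it): annualize the numerator by
# geometrically compounding the returns, annualize the denominator
# by multiplying the monthly standard deviation by sqrt(12), and
# only then divide.
ann_return = math.prod(1 + r for r in monthly_returns) - 1
ann_rf = (1 + monthly_rf) ** 12 - 1
ann_stdev = monthly_stdev * math.sqrt(12)
client = (ann_return - ann_rf) / ann_stdev

print(f"traditional: {traditional:.4f}")
print(f"client:      {client:.4f}")
```

Because geometric compounding of the numerator is not the same as scaling the monthly ratio by √12, the two numbers come out different even on identical inputs.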
As you might expect, we get different results. And while the differences are often not material, who’s to say that they won’t be?
Question: is there anything wrong with the way our client does their math?
Answer: No! It’s “non-traditional,” but there is no prohibition against a firm calculating a risk statistic differently, provided it discloses its method.
Even what I show as the “traditional” way isn’t what Nobel Laureate Bill Sharpe advocates today. In a 1994 Journal of Portfolio Management article (aptly named “The Sharpe Ratio”), Sharpe altered his earlier formula, so that the denominator isn’t the standard deviation of the portfolio returns, but rather the standard deviation of the excess returns (the quantity we find in the numerator; i.e., the differences between the portfolio return and the risk-free rate). In spite of this revision, it appears that most firms still use the formula that Sharpe introduced in his 1966 Journal of Business article (“Mutual Fund Performance”).
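The distinction between the two versions only bites when the risk-free rate varies over the period; with a constant rate, the two denominators coincide. A minimal sketch, using hypothetical monthly figures:

```python
import statistics

# Hypothetical monthly portfolio returns and a *varying* monthly
# risk-free rate (illustrative figures only). When the risk-free
# rate is not constant, the two denominators differ.
portfolio = [0.025, -0.010, 0.014, 0.006, 0.018, -0.003]
risk_free = [0.001, 0.002, 0.003, 0.002, 0.004, 0.003]

excess = [p - f for p, f in zip(portfolio, risk_free)]

# 1966 version: mean excess return over the standard deviation
# of the *portfolio* returns.
sharpe_1966 = statistics.mean(excess) / statistics.stdev(portfolio)

# 1994 revision: mean excess return over the standard deviation
# of the *excess* returns themselves.
sharpe_1994 = statistics.mean(excess) / statistics.stdev(excess)

print(f"1966: {sharpe_1966:.4f}")
print(f"1994: {sharpe_1994:.4f}")
```

Same numerator in both cases; only the volatility being divided by changes.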
Certain formulas are sacrosanct, and shouldn’t be altered (though there may be some inherent options available within them, which need to be specified), such as standard deviation, Modified Dietz, and the IRR. But many of the risk measures have been implemented in different fashions, which can make cross-comparisons a challenge. Thus the need to document how you do the math.
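To take one of the “sacrosanct” formulas mentioned above: Modified Dietz weights each external cash flow by the fraction of the period it was invested. A minimal sketch, with hypothetical figures:

```python
def modified_dietz(bmv, emv, flows, period_days):
    """Modified Dietz return.

    bmv, emv     -- beginning and ending market values
    flows        -- list of (day_of_flow, amount) tuples,
                    where day 0 is the start of the period
    period_days  -- total number of days in the period
    """
    net_flow = sum(amount for _, amount in flows)
    # Each flow is weighted by the fraction of the period remaining
    # after it occurs.
    weighted = sum(amount * (period_days - day) / period_days
                   for day, amount in flows)
    return (emv - bmv - net_flow) / (bmv + weighted)

# Hypothetical example: $100,000 start, a $50,000 contribution on
# day 10 of a 30-day month, $156,000 ending value.
r = modified_dietz(100_000, 156_000, [(10, 50_000)], 30)
print(f"{r:.4%}")  # → 4.5000%
```

The “inherent options” the paragraph alludes to show up even here: for instance, whether a flow is weighted as of its start of day or end of day must be specified, since it changes the weight each flow receives.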