For example, while conducting a recent GIPS verification for Reams Asset Management, a division of Scout Investments, I found the following shown for their Unconstrained Fixed Income Composite:

What can we tell from this? Not much.

Okay, the composite significantly outperformed the index (by more than 200 bps); but look at that standard deviation: it appears a lot of risk was taken! If one truly believes in the value of standard deviation, might it be a good idea to move to the next step? That is, to require a risk-adjusted measure, such as the Sharpe ratio (which seems the logical choice in this case, given that the risk measure is standard deviation)?

But also observe that we are showing a one-year return alongside a three-year standard deviation, meaning the match-up isn't perfect (and is arguably misleading). So let's report what isn't required (but perhaps should be): the three-year annualized returns!
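The three-year annualized return is just the geometrically linked annual returns, re-expressed at an annual rate. A minimal sketch (the annual returns below are hypothetical, not from the composite shown):

```python
# Sketch: annualizing a three-year cumulative return so it can sit
# next to the three-year annualized standard deviation.
annual_returns = [0.124, -0.031, 0.087]  # hypothetical years 1-3

# Link (geometrically compound) the annual returns...
cumulative = 1.0
for r in annual_returns:
    cumulative *= 1.0 + r

# ...then annualize over the three-year span.
annualized = cumulative ** (1.0 / len(annual_returns)) - 1.0

print(round(annualized, 4))
```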

A lot more insightful, right?

In this particular case, the benchmark is an absolute index, so the differences are a bit more pronounced than they might otherwise be. But the point is, I believe, still valid: to compare one-year returns with three-year risk statistics is, as we like to say, *mixing apples and oranges*. And, showing returns and a risk measure doesn’t quite *do the job.*

And so, I encourage the GIPS Executive Committee to:

- Require, in addition to the 3-year annualized standard deviations, the corresponding 3-year annualized returns
- Require the Sharpe ratio.
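For the second item, a minimal sketch of the calculation, assuming monthly excess returns over the risk-free rate and the conventional arithmetic annualization (all numbers hypothetical):

```python
import statistics

# Hypothetical monthly excess returns (composite minus risk-free), 36 months.
excess = [0.004, -0.002, 0.007, 0.001, 0.005, -0.003] * 6

mean_excess = statistics.mean(excess)
stdev_excess = statistics.stdev(excess)  # sample standard deviation

# Annualize: scale the mean by 12, the standard deviation by sqrt(12).
sharpe = (mean_excess * 12) / (stdev_excess * 12 ** 0.5)
print(round(sharpe, 2))
```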

What do you think?

David,

I definitely agree that risk measures need to be compared with return measures that fully match. However, matching means many things. While this includes that both are for the same period and that both are either absolute or both are relative, it also includes that both are geometric or both are arithmetic in compounding and in how differences are defined, etc. So one has to question on many levels any report that provides a (daily?) compounded one-year return alongside an annualized arithmetic standard deviation for three years.
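Andre's compounding point can be made concrete: the geometrically linked (compounded) one-year return and the arithmetic annual figure implied by the monthly mean are not the same number, so pairing one with a statistic built on the other already mixes conventions. A sketch with hypothetical monthly returns:

```python
import statistics

# Hypothetical monthly returns for one year.
monthly = [0.02, -0.01, 0.015, 0.005, -0.02, 0.01,
           0.025, -0.005, 0.0, 0.01, -0.015, 0.02]

# Geometrically compounded (linked) one-year return...
compounded = 1.0
for r in monthly:
    compounded *= 1.0 + r
compounded -= 1.0

# ...versus the arithmetic annual figure implied by the monthly mean.
arithmetic = statistics.mean(monthly) * 12

print(round(compounded, 4), round(arithmetic, 4))  # the two differ
```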

A very similar argument needs to be applied to attribution: an explanation of returns without a matching explanation of risk and/or risk-adjusted returns makes performance attribution dangerous, according to the fundamental tenet of modern portfolio theory. Knowing the degree to which a fund's sector allocation decision contributed to its active return over the past three years can be very misleading if one does not also know the degree to which the same investment decision contributed to, for example, the tracking error or information ratio over that same period, according to the same methodology.

Andre

Andre, I appreciate your point. And while we could derive standard deviation using a geometric method, the convention is arithmetic. As for attribution, I am of the opinion that arithmetic should continue to dominate, and money-weighting be employed, since the manager controls the allocation and selection decisions (which are cash flows at the subportfolio level). Yes, matching makes sense, but (a) sometimes it may actually be inappropriate or (b) based on convention, it is simply not employed. This doesn’t mean that convention is correct, as I often oppose conventional methods. And so, perhaps more needs to be said for geometric standard deviation, as well as geometric approaches to all risk measures: perhaps there’s an article somewhere in here for you!
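To make the "geometric standard deviation" idea concrete, one possible analogue (an illustration only, not an established convention) takes the standard deviation of the log growth rates and expresses it back as a return. With hypothetical monthly returns:

```python
import math
import statistics

# Hypothetical monthly returns.
monthly = [0.02, -0.01, 0.015, -0.005, 0.01, 0.0]

# Conventional (arithmetic) standard deviation of the returns themselves.
arith_sd = statistics.stdev(monthly)

# One possible geometric analogue: the standard deviation of the
# log growth rates, converted back to a return.
log_growth = [math.log(1.0 + r) for r in monthly]
geo_sd = math.exp(statistics.stdev(log_growth)) - 1.0

print(round(arith_sd, 5), round(geo_sd, 5))
```

For small returns the two are close; the gap widens as returns get larger or more volatile, which is exactly when the choice of convention starts to matter.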

We need to think about investing and performance analysis the way we think of getting dressed. When we say that our clothes “match” we do not mean that they are identical in color, fabric, pattern, etc. Rather, we mean that they are well coordinated and help to create a cohesive and appropriate ensemble. So, we should understand that investment and performance terms can “match” without being identical in their calculation, so long as they are not “mismatched” by making different assumptions, or otherwise contradicting each other. With that in mind, let’s look at the points already raised here.

First, everyone should agree that performance measures across different (in this case “contradictory”) time periods make no sense and serve no purpose. If we want to examine the results of a one-year period, then BOTH risk and return must be evaluated over that time period. That’s easy.

When we get to the question regarding a geometric return and an arithmetically-derived standard deviation, we DO NOT have statistics that “don’t match.” There is no inconsistency here. Why? Because each answers the question best served by its calculation method. The return question is really “What return was actually earned over this period?” or “What was the rate of growth of an initial investment in this manager over this period?” That question can only be answered by a geometric return. As to risk, we are really answering the question “What degree of return dispersion should we expect for this manager in a given year?” This question is answered correctly using an arithmetic average return as the center of the distribution of returns.
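Steve's two-questions distinction shows up numerically in even the simplest case. With hypothetical annual returns of +50% and then -50%, the arithmetic mean is zero, yet an investor clearly lost money:

```python
import statistics

# Hypothetical annual returns: +50% then -50%.
returns = [0.50, -0.50]

# Geometric (growth-of-investment) answer: what did a dollar become?
wealth = 1.0
for r in returns:
    wealth *= 1.0 + r
geometric_mean = wealth ** (1 / len(returns)) - 1  # a negative rate

# Arithmetic answer: the center of the one-year return distribution.
arithmetic_mean = statistics.mean(returns)  # zero

print(round(geometric_mean, 4), arithmetic_mean)
```

Neither number is wrong; they simply answer different questions, which is Steve's point.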

No inconsistency here. Different questions/different calculation methods. We need to be reminded to make certain that we understand the question being asked before we rush off to perform calculations – or make rules for best practices.

Thanks Steve; excellent points. The current requirement (annual returns and a 3-year standard deviation) creates a mismatch, which has the potential to confuse and mislead. Thus my recommendation for a new requirement: a corresponding 3-year annualized return, to sit right next to the standard deviation, to provide appropriate and consistent information.

But, will anyone listen? I guess we’ll have to wait and see.