
What do bond rating agencies and mutual fund peer group providers have in common?

Jun 12, 2017

When I learned of the recent Barron’s article touting the success of an Eaton Vance municipal bond fund, based on the ratings given by two mutual fund peer group providers, a thought immediately occurred to me: it sounded exactly like the problem that resulted from the bond rating agencies granting AAA ratings to mortgage-backed securities that were filled with highly risky subprime mortgages.

Why the link between mutual fund peer group providers and bond rating agencies?

Whenever I create a blog post, I try to find an image that represents its message. In thinking about this one, I pondered what description might fit both mutual fund peer group providers and bond rating agencies. One thought in particular resonated: watchdog. Aren’t these parties, in a sense, watchdogs for investors?

Bond rating agencies are supposed to be independent analysts who scrutinize bond issuers and assign ratings that best represent each bond issue’s riskiness. Because they provide this service, bond purchasers don’t have to. If an investor wants high quality, low risk bonds, they will have to pay more and accept a lower yield, befitting such creditworthiness. On the other hand, if the investor is willing to take on more risk, they can pay less and earn a higher yield.

In a similar way, mutual fund peer group providers, such as Morningstar and Lipper, scrutinize mutual funds. Based on their analysis, they (a) assign the funds to the appropriate peer group (categorization) and (b) assign a rating (ranking) to represent the success of the fund over various periods, from both a return and risk perspective.
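
To make the ranking side of this concrete, here is a minimal Python sketch of a percentile-based star rating within a single peer group. The breakpoints follow Morningstar’s published 10/22.5/35/22.5/10 distribution, but the function, its inputs, and the example scores are my own illustrative assumptions, not either provider’s actual methodology (which, among other things, is built on risk-adjusted returns).

```python
# Illustrative sketch only: percentile-based "star" rating within one
# peer group. Breakpoints mirror Morningstar's published distribution
# (top 10% = 5 stars, next 22.5% = 4, middle 35% = 3, next 22.5% = 2,
# bottom 10% = 1); the score inputs are hypothetical.

def star_rating(fund_score, peer_scores):
    """Map a fund's percentile rank within its peer group to 1-5 stars."""
    all_scores = sorted(peer_scores + [fund_score])
    # Fraction of the group the fund outranks (0.0 = worst, 1.0 = best).
    pct = all_scores.index(fund_score) / (len(all_scores) - 1)
    if pct >= 0.900:
        return 5
    if pct >= 0.675:
        return 4
    if pct >= 0.325:
        return 3
    if pct >= 0.100:
        return 2
    return 1

# Hypothetical risk-adjusted scores for a ten-fund category:
peers = [0.021, 0.035, 0.048, 0.052, 0.060, 0.071, 0.084, 0.090, 0.102]
print(star_rating(0.095, peers))  # 9th of 10 -> pct ~0.89 -> 4 stars
```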

What happens when these agencies fail us?

We saw what happened in 2007/2008 with the “sub-prime mortgage crisis.” And while we cannot lay the entire blame for what occurred on the rating agencies (there was plenty to go around), they deserved their share of it.

I addressed what occurred in the Barron’s piece in a recent blog post, so there’s no need to rehash it here.

I will say that I believe the rules Morningstar and Lipper apparently use need to be revisited.

Someone from Thomson Reuters (which now owns Lipper) was kind enough to comment on that post. As he explained it, to be included in the three-year ranking, a fund must have been in that category for at least one year. In addition, they review the fund relative to where it was and where it is now, to determine whether there’s been a significant difference. They apparently concluded that the fund was worthy of being placed in the intermediate category.
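
As a hypothetical reading of that rule (the names, the 365-day threshold, and the dates below are my assumptions for illustration, not Lipper’s actual implementation), the eligibility check might look something like this:

```python
from datetime import date

def eligible_for_three_year_ranking(category_entry_date, as_of, min_years=1):
    """True if the fund has been in its current category for at least
    min_years, and so may appear in that category's three-year ranking."""
    return (as_of - category_entry_date).days >= min_years * 365

# A fund reclassified into "intermediate" in mid-2016, evaluated today:
print(eligible_for_three_year_ranking(date(2016, 6, 1), date(2017, 6, 12)))  # True
```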

And, despite not hearing from anyone at Morningstar, we’re forced to conclude that, the changes in the fund’s strategy notwithstanding, it could be included in the category not just for the three-year period, but also for the five-year and “overall” periods.

But, there is clearly disagreement.

A few of our municipal bond manager clients voiced their concerns about the article, which appeared to rely primarily on Morningstar’s ranking. The question is an obvious one:

Is it fair or appropriate to include a fund that had been “long-term” in an “intermediate” category, when it followed that strategy for only a fraction of the period being measured?

I don’t have access to the data. But it would be an interesting statistical exercise to compare the fund’s returns prior to the date of the change with those of funds in the intermediate category. Standard statistical tests can determine whether its inclusion for that period was appropriate.
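
For anyone who does have the data, a minimal sketch of such a comparison in Python might look like the following. The return series here are fabricated placeholders; the tests themselves (Welch’s t-test and the two-sample Kolmogorov–Smirnov test) are standard choices for asking whether two samples plausibly come from the same distribution.

```python
import numpy as np
from scipy import stats

# Fabricated monthly returns, standing in for the real data:
rng = np.random.default_rng(0)
fund_pre_change = rng.normal(0.0045, 0.012, 36)      # fund's 36 months before the change
intermediate_peers = rng.normal(0.0030, 0.007, 200)  # pooled intermediate-category returns

# Welch's t-test compares mean returns without assuming equal variances.
t_stat, t_pvalue = stats.ttest_ind(fund_pre_change, intermediate_peers, equal_var=False)

# The two-sample KS test compares the full distributions, so it also
# picks up differences in volatility, not just in the mean.
ks_stat, ks_pvalue = stats.ks_2samp(fund_pre_change, intermediate_peers)

print(f"Welch t-test p-value: {t_pvalue:.3f}")
print(f"KS test p-value:      {ks_pvalue:.3f}")
# Small p-values would suggest the fund's pre-change behavior did not
# resemble the intermediate peer group's.
```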

And so, in fairness to both Morningstar and Thomson Reuters Lipper, I cannot objectively state that this inclusion was wrong.

What I can say, however, is that many managers question it.

Furthermore, the Barron’s article failed to make this shift clear. Were readers of the article therefore misled or ill informed? That’s hard to say, too. But, “in the spirit of full disclosure,” the addition of that little detail would have been welcome.

Another thought: one might also wonder why there are two categories (long-term and intermediate) at all if, in reality, they’re similar enough that a fund previously in the long-term category can be moved into the intermediate one.

Mutual fund peer group providers as watchdogs

There is obviously no reason to suspect that a crisis like ’07/’08 will result from any inappropriate categorizing or ranking of mutual funds.

But, investors rely on the rankings to make investment decisions. I have heard, through the grapevine, that Eaton Vance is touting the five-star ratings. And one might reasonably ask, “can we blame them?”

But if you look at some of the other comments that have come in, you’ll see that there is some disappointment with what’s transpired.

What’s next?

I think I’m done with this matter. I was, and still am, disturbed by it.

Last week, I sent an email to the author of the Barron’s article; he didn’t respond. I sent a note to someone at Morningstar as well; again, no response.

I had hoped to see a follow-up to the article in this weekend’s Barron’s, perhaps some amplification of the piece. But, there wasn’t one.

There was a related article (“Credit Risk Looms in Muni Land,” by Amey Stone) that gave the “top pick” to yet another Eaton Vance fund. I sent a note to Amey, with a link to my earlier blog post.

I did find a bit of irony, however, on one of the issue’s last pages (30). There, we find a cartoon with the following caption: “Can you take these inaccuracies and untruths and get some science behind them?” Need I say more?
