Saturday, August 27, 2016

How the Class of 2020’s Retirement Plans Will Be Different

Each year the good folks at Beloit College produce a “Mindset List” providing a look at the cultural touchstones that shape the lives of students about to enter college. And in what ways will their retirement plans differ from those of their parents?

In the most recent list (they’ve been doing it since 1998), the Beloit Mindset List notes that for the class of 2020 (among other things):
  • There has always been a digital swap meet called eBay.
  • They never heard Harry Caray try to sing during the seventh inning at Wrigley Field.
  • Vladimir Putin has always been calling the shots at the Kremlin.
  • Elian Gonzalez, who would like to visit the U.S. again someday, has always been back in Cuba.
  • The Ali/Frazier boxing match for their generation was between the daughters of Muhammad and Joe.
  • NFL coaches have always had the opportunity to throw a red flag and question the ref.
  • Snowboarding has always been an Olympic sport.
  • John Elway and Wayne Gretzky have always been retired.
So, what about their retirement plans? Well, for the Class of 2020:
  • There have always been 401(k)s.
  • They’ve always had a Roth option available to them (401(k) or IRA).
  • They’ve always worried that Social Security wouldn’t be available to pay benefits (in that, they’re much like their parents at their age).
  • They’ve always had a call center to reach out to with questions about their retirement plan.
  • They’ve never had to wait to be eligible to start saving in their 401(k) (their parents generally had to wait a year).
  • They’ve never had to sign up for their 401(k) plan (their 401(k) automatically enrolls new hires).
  • They’ve never had to make an investment choice in their 401(k) plan (their 401(k) has long had a QDIA default option).
  • They’ve always had fee information available to them on their 401(k) statement (it remains to be seen if they’ll understand it any better than their parents).
  • They’ve always known what their 401(k) balance would equal in monthly installment payments.
  • They’ve always had an advisor available to answer their questions.
Most importantly, they’ll have the advantage of time, a full career to save and build, to save at higher rates, and to invest more efficiently and effectively.

- Nevin E. Adams, JD

Saturday, August 20, 2016

Boiling Points

One could hardly read the headlines this past week without experiencing a certain sense of déjà vu.

After all, it’s been not quite 10 years since the then relatively obscure St. Louis-based law firm of Schlichter, Bogard & Denton launched about a dozen of what have come to be referred to as “excessive fee” lawsuits.

Not that the recent batch of suits targeting multi-billion dollar university plans is a mere recounting of the charges leveled against their private sector counterparts. No, in the years since, the Schlichter law firm has sharpened its pencils, and its criticisms. Those early suits focused on practices that, in comparison with the most recent waves, seem almost quaintly simplistic: allegedly undisclosed revenue-sharing practices, the use of non-institutional class shares by large 401(k) plans, the apparent lack of participant disclosure of hard-dollar fees (which even then were disclosed to regulators), and even the presentation of ostensibly passive funds as actively managed.

In the lawsuits that have been filed in recent months, it’s no longer enough to offer institutional class shares — one must now consider the (potentially) even less expensive alternatives of separately managed accounts and collective trusts. Actively managed fund options are routinely disparaged, while the only reasonable fee structure for recordkeeping fees is declared to be a per participant charge. The use of proprietary fund options, rather than being viewed as a testament to the organization’s confidence in its investment management acumen, is portrayed as a de facto fiduciary violation since the investment management fees associated with those options inure to the benefit of the firm sponsoring the plan.

The most recent suits targeting university plans add to the standard charges leveled against 401(k) plans some that are peculiar to that universe — notably the provision of a “dizzying” array of fund options (which plaintiffs claim results not only in participant paralysis, but also in the obfuscation of fees) and the decision to employ multiple recordkeepers. The proof statement that these practices are inappropriate? Comparisons with the standards and averages of 401(k) plans.

Overlooked in the burst of headlines and allegations is that we know very little about these plans other than what the plaintiffs allege. We aren’t told anything about the employer match, for instance, not to mention participation rates, nor is the subject of outcomes mentioned. We know nothing of the services rendered for these fees, only that, of the investment funds on the menu, cheaper and ostensibly comparable alternatives were available. The plaintiffs’ argument seems to be that if there were cheaper alternatives available, the ones chosen were, by definition, unreasonable, regardless of the services provided.

One other thing overlooked in the burst of lawsuits is that precious few of these cases have actually made their way to trial, and that among those that have, on the issues of fund choices and fund pricing, the courts seemed inclined to give the plan fiduciaries the benefit of the doubt. The Schlichter firm’s own press releases now not only tout the 20 such complaints the firm had filed as of early August, but that in 2009 they “won the only full trial of a 401(k) excessive fee case” — a case in which the attorney fees turned out to be more than triple the recovery won by the plaintiffs. But if the record at trial is more checkered than many appreciate, plan fiduciaries can’t ignore the fact that in the lawsuits it has brought, the Schlichter firm has succeeded in securing nine settlements.

A popular aphorism holds that if you put a frog in a pot of boiling water, it will immediately hop out, recognizing the peril that the water represents. But if you put the same frog in a pot of cold water and then slowly bring it to a boil, the frog will stay put, since the danger creeps up on it in a less noticeable fashion.

Perhaps that explains why, 10 years after the first claims were filed, so many multi-billion dollar retirement plans still remain vulnerable.

Though by now, that fiduciary pot is surely boiling — and has been for some time.

- Nevin E. Adams, JD

Saturday, August 06, 2016

5 Ways Industry Surveys Can Be Misleading

As human beings, we’re drawn to perspectives, including surveys and studies that validate our sense of the world. This “confirmation bias,” as it’s called, is the tendency to search for, interpret, favor, and recall information in a way that confirms our preexisting beliefs or hypotheses. It also tends to make us discount findings that run afoul of our existing beliefs.

In its simplest terms then, when you see a headline that confirms your sense of the world, you’ll be naturally inclined to embrace and remember it as a validation of what you already perceive reality to be. Even if the grounds supporting that premise are shaky, sketchy, or (shudder) downright scurrilous.

Here are some things to look for – likely in the fine print – as you evaluate those findings.

There can be a difference between what people say they will (or might) do and what they actually do.

No matter how well targeted they are, surveys (and studies that incorporate the outcome of surveys) must rely on what individuals tell us they will do in specific circumstances, particularly in circumstances where the decision is hypothetical. When you’re dealing with something that hasn’t actually occurred, there’s not much help for that, but there’s plenty of evidence to suggest that, once given an opportunity to act on the actual choice(s), people act differently than their response to a survey might suggest.

For example, people tend to be less prone to action in reality than they indicate they will be – inertia being one of the most powerful forces in human nature, apparently. Also, sometimes survey respondents indicate a preference for what they think is the “right” answer, or what they think the individual conducting the survey expects, rather than what they actually think. That, of course, is why the positioning and framing of the question can be so important (as a side note, whenever possible, it helps to see the actual questions asked, and the responses available).

The bottom line: when people tell you what they will do, or even what kind of product they would like to buy, and you later find that they don’t, just remember that there may be more powerful forces at work.

There can be a difference between what people think they have and reality.

Since, particularly with retirement plans, there are so few good sources of data at the participant level, much of what gets picked up in academic research is based on information that is “self-reported,” which is to say, it’s what people tell the people taking the survey. The most prevalent is, perhaps, the Survey of Consumer Finances (SCF), conducted by the Federal Reserve every three years.

The source is certainly credible, but the basis is phone interviews with individuals about a variety of aspects of their financial status, including a few questions on their retirement savings, expectations about pensions, etc. In that sense, it tells you what the individuals surveyed have (or perhaps wish they had), but not necessarily what they actually have.

Perhaps more significantly, the SCF surveys different people every three years, so be wary of the trendlines that are drawn from its findings – such as increases or decreases in retirement savings. Those who draw them are comparing apples and oranges – more precisely, the savings of one group of individuals to those of a completely different group of people… three years later.

The survey sample size and composition matter.

Especially when people position their findings as representative of a particular group, you want to make sure that that group is, in fact, adequately represented. Perhaps needless to say, the smaller the sample size – or the larger the statistical error – the less reliable the results.

Case in point: Several months ago, I stumbled across a survey that purported to capture a big shift in advisors’ response to the Labor Department’s fiduciary regulation. Except that between the two points in time when they assessed the shift in sentiment, they wound up talking to two completely different types of advisors. So, while the surveying firm – and the instrument – were ostensibly the same, the conclusions drawn as a shift in sentiment could have been nothing more than a difference in perspective between two completely different groups of people.
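The relationship between sample size and statistical error can be made concrete. For a simple yes/no survey question, the margin of error at 95% confidence shrinks only with the square root of the number of respondents – so quadrupling the sample merely halves the error. A quick sketch (the numbers here are purely illustrative, not drawn from any survey discussed above):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a surveyed proportion.

    p: the observed proportion (0.5 is the worst case)
    n: the number of respondents
    z: the z-score for the desired confidence level (1.96 ~ 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# The error shrinks with the square root of n, not with n itself:
for n in (100, 400, 1600):
    print(f"n = {n:>5}: ±{margin_of_error(0.5, n):.1%}")
```

A 100-person survey of a 50/50 question carries a margin of error of nearly ±10 percentage points – more than enough to manufacture a headline-worthy “shift” out of pure sampling noise.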

Consider the source.

Human beings have certain biases – and so do the organizations that conduct, and pay for, surveys and studies. Not that sponsored research can’t provide valuable insights. But approach with caution the conclusions drawn by those that tell you that everybody wants to buy the type of product offered by the firm(s) that have underwritten the survey.

When you ask may matter as much as what you ask.

Objective surveys can be complicated instruments to create, and identifying and garnering responses from the “right” audiences can be an even more challenging undertaking. That said, people’s perspectives on certain issues are often influenced by events around them – and a question asked in January can generate an entirely different response even a month later, much less a year after the fact.

For example, a 2015 survey of plan sponsor sentiment on a topic like 401(k) fee litigation is unlikely to produce identical results to one conducted in the past 30 days, nor would an advisor survey about the impact of the fiduciary regulation conducted before the publication of the final rule match one conducted afterward. Down in those footnotes about sample size/composition, you’ll likely find an indication as to when the survey was conducted. There’s nothing wrong with recycling survey results, properly disclosed. But things do change, and you need to be careful about any conclusions drawn from old data.

Not to mention the conclusions you might be otherwise inclined to draw from conclusions about old data.

- Nevin E. Adams, JD