The Expert Problem

by Mawer Investment Management, via The Art of Boring Blog

A better understanding of how and when to utilize experts helps prevent wasting time and money. 

Captain Chesley “Sully” Sullenberger knew immediately that something was wrong. Birds filled the entire windscreen, pelting the aircraft like heavy hail, and a burning smell from the jet engines filled the cockpit. The plane lost thrust and began to slow. Sullenberger knew the situation was critical—the plane had lost power in both engines at low speed and low altitude over New York City, one of the most densely populated areas on the planet. Putting his hand on the side stick and invoking the protocol for transferring control, he coolly stated, “My aircraft.” First Officer Jeffrey Skiles immediately replied, “Your aircraft,” and the captain took over. After radioing air traffic control, Sullenberger realized that they would never make it to LaGuardia or Teterboro. He contacted air traffic control again: “We’re going into the Hudson.”

Flight 1549 and its crew are often held up as examples of highly trained professionals acting calmly under pressure. The plane had been in the air for only a minute and a half when it hit a flock of geese, and it was only one minute later that Sullenberger made the call to land in the Hudson River—a decision that saved all 155 people on board and likely many others below. It was a remarkable display of expertise: in less time than it takes to brush your teeth, Sullenberger determined that the plane could not reach an airport, identified a workable solution, glided the plane towards the river, and landed it on water at over 200 mph. The crew then got everyone out of the aircraft safely.

Sullenberger’s successful emergency water landing illustrates the value of experts. Experts exist because we need to rely on others for knowledge and skills we don’t have, whether for haircuts, brain surgery, or maneuvering a failing plane. They provide comfort and certainty in a world that is often chaotic and some of them routinely save lives. However, in some cases, society might exalt experts too much. Not only do we frequently look to experts in situations where they are not necessarily warranted, we also often trust in individuals who are undeserving of this title.

Understanding how to think about expertise is important to making good decisions. With a better understanding of how and when to utilize experts, we may avoid wasting our time, money and other resources. It is therefore useful to ask: When are experts required? How can we ensure we select someone legitimate? Moreover, how does this apply to investment management?

Two Kinds of Problems

The crisis faced by Sullenberger is a prime example of a situation where an expert is necessary. When your plane is plummeting towards the earth at over 200 mph, you really hope your captain has the skill to land it without incident. Similarly, you hope your brain surgeon has a deft and steady hand. These are “expert problems”—problems that are best solved by the mind of an experienced and skilled individual. The challenge is that many situations do not fall into this category.

While “expert” problems are defined as those for which there is a clear best practice or technical solution, like flying a plane or performing surgery, “wisdom of crowds” problems tend to involve uncertainty and there often isn’t just one best solution. In these cases, experts tend to perform worse on average than a group. Forecasting currency prices and determining the strategy of a major corporation are more likely to be “wisdom of crowds” problems.

In The Wisdom of Crowds, James Surowiecki illustrates the distinction between problems that require experts and those that are better solved by crowds with a simple exercise: asking participants to guess the number of jellybeans in a jar.1 In this situation, expertise is not required; rather, we are better served by gathering many independent opinions and taking their average. When the exercise is run repeatedly, the group average is consistently closer to the true number of jellybeans than almost any individual guess.
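A minimal simulation makes the point. The guess distribution below is an assumption chosen for illustration (nothing here comes from Surowiecki's data), but under any reasonably unbiased noise, the average beats the typical guesser:

```python
import random

TRUE_COUNT = 850  # hypothetical number of jellybeans in the jar

def run_trial(n_guessers: int = 100) -> tuple[float, float]:
    """Return (error of the crowd's average guess, error of the median guesser)."""
    # Assumption: each guess is the true count distorted by up to +/-50% noise.
    guesses = [TRUE_COUNT * random.uniform(0.5, 1.5) for _ in range(n_guessers)]
    crowd_error = abs(sum(guesses) / len(guesses) - TRUE_COUNT)
    typical_error = sorted(abs(g - TRUE_COUNT) for g in guesses)[n_guessers // 2]
    return crowd_error, typical_error

random.seed(42)
trials = [run_trial() for _ in range(1_000)]
wins = sum(crowd < typical for crowd, typical in trials)
print(f"Crowd average beat the typical guesser in {wins / len(trials):.0%} of trials")
```

The average is not guaranteed to beat every participant, but it reliably beats most of them, which is all the jellybean result claims: individual errors cancel out when opinions are independent.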

Understanding this distinction is critical. If the problem we face is truly an expert problem, clearly the best approach is to look for expertise. But if it isn’t a problem suitable for an expert, applying one to the situation may yield worse results than if we tapped into the wisdom of the masses.

The Making of an Expert

Let’s assume an expert is required. How do we go about finding one? While this may seem like a straightforward task, it can be more challenging than we might think, as many “experts” fall woefully short of the title. A long list of studies shows that individuals with decades of experience often do not outperform their peers. One study conducted by K. Anders Ericsson and Neil Charness showed that psychotherapists with advanced degrees and many years of practice were no better than novice therapists at helping their patients.2 Another study revealed that sommeliers, after hundreds of training hours, could not reliably distinguish between French and Californian wines.3

What these studies demonstrate is that experience alone does not make an expert. In the Harvard Business Review article The Making of an Expert, authors Ericsson, Prietula and Cokely argue that real expertise must (1) lead to performance that is consistently superior to that of the expert’s peers, and (2) produce concrete results, i.e., heart surgeons can’t just have surgical skills; they must have successful patient outcomes.4 To this list, we add a third: (3) any positive performance cannot be mostly attributed to luck.

Consistently superior performance, concrete results, and skill: an expert must meet all three conditions. Unfortunately, many of those who claim the title fail on one or more of these measures. Adding to the confusion is the fact that expertise can manifest as a knowledge base, a skill, or both. One can be an expert on the history of British football (knowledge) without the ability to play like Cristiano Ronaldo (skill). Moreover, even when we understand what makes an expert and what kind of expertise we are looking for, identifying one can still be difficult.

There are four major factors we examine when determining an individual’s expertise, which we refer to as MASC: Measurability, Authority Bias, Skill versus Luck, and Cost.

Measurability

When selecting an expert, it is important to determine how to measure his or her expertise. To select the best, we must know who has outperformed. Expertise is easiest to measure when the feedback loop is short and direct and the measure of success is objective. The 100-metre sprint in the Olympics is a good example: there is a clear measure of success (who crosses the line first), the feedback loop is short (under 10 seconds), and the athletes’ inputs clearly and directly determine the outputs.

Identifying expertise gets considerably more challenging when the feedback loop is longer and causality cannot be easily established. For example, it can be very hard to measure the performance of a manager who follows a long-term buy-and-hold strategy. Not only can it take years to gather enough data to evaluate performance, it is usually difficult to know whether that performance was a matter of skill (his or her inputs) or luck (external forces).

Identifying expertise also becomes more difficult as the subjectivity of success increases. While it is straightforward to know who wins in competitive sports, many areas where expertise is required are considerably less objective to measure. How do we determine the best art? What makes an excellent haircut? Subjectivity makes evaluation—and thus expert identification—difficult.

Sometimes the challenge is an absence of performance information. It’s one thing to observe confusing performance metrics; it’s another to have no data at all. This is a challenge we can run into when searching for a good lawyer. Not only is the legal world full of technical jargon that confuses the outsider, it is also a bit of a black box. Where do we go to get reliable data on a lawyer’s performance? How can we compare the performance of one lawyer versus another? This lack of transparency makes it difficult to identify a good lawyer.

Beware the Authority Bias

On Tuesday, June 17, 2014, America’s “most trusted doctor,” Dr. Mehmet Oz (“Dr. Oz”), gave testimony at a Senate hearing on the marketing of nutritional supplements. The issue at hand was the doctor’s frequent use of words such as “magical” to describe untested alternative medicines and weight loss supplements, often without scientific evidence to back up his recommendations. It was a very public embarrassment for a doctor who has built a brand empire that includes a nationally syndicated TV show, a magazine, and many bestselling books. And it was not the first time he had received such criticism: Popular Science and The New Yorker had both published articles criticizing Oz for giving “non-scientific” advice.

Dr. Oz is a cautionary tale of Authority Bias—our tendency to place greater trust in individuals we perceive to have authority. Not only is Dr. Oz a cardiothoracic surgeon with degrees from two Ivy League schools, Harvard and the University of Pennsylvania, he was endorsed by Oprah. One might assume that these credentials would translate into excellent health recommendations. Unfortunately, simply because someone is perceived as an authority figure does not mean he or she has actual expertise, just as a PhD in finance from Harvard does not make you Warren Buffett. In the case of Dr. Oz, researchers found that only a third of his strongest recommendations were backed by science!5

Authority Bias is a major hurdle when determining expertise. Many people mistakenly assume that so-called experts or “authorities” have superior skill or knowledge. This causes them to dismiss their own judgment and trust the authority figure, often without looking for sufficient evidence of results.

Skill vs. Luck

Roy Sullivan was thirty years old when he found himself trapped in a fire lookout tower in the middle of a thunderstorm. The young park ranger began to panic as lightning repeatedly struck the tower and fire started to spread inside. Sullivan ran outside, where he was struck by lightning only a few feet away; the bolt entered his leg and exited through his big toe. It was a bad and unlikely strike—but it would not be his last. Before Sullivan died in 1983, he would allegedly be struck by lightning six more times. He eventually started to run away from clouds that were “following him.”

For perspective, the odds of getting struck by lightning once in your lifetime are roughly three thousand to one. Assuming each strike is an independent event, the odds of getting struck on seven different occasions are approximately two septillion to one, or 2,200,000,000,000,000,000,000,000 to 1.6
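The arithmetic behind that figure is a straight multiplication of independent probabilities (independence is a simplification; as footnote 6 notes, Sullivan's job raised his exposure):

$$\left(\frac{1}{3{,}000}\right)^{7} \;=\; \frac{1}{3{,}000^{\,7}} \;\approx\; \frac{1}{2.2 \times 10^{24}}$$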

It is easy to forget the staggering role that randomness plays in our lives. While we are not likely to be as unlucky as Sullivan, we are all subject to strings of good and bad luck, including experts. Therefore, we must take the role of randomness into consideration when evaluating an expert’s performance.

Some domains are dominated by skill (chess), others by luck (playing the lottery) and the rest by some combination of the two (soccer, poker). The challenge is to understand the degree to which luck is influencing an outcome and to find the skilled players within the domains where luck plays a big role. Ultimately, many people who claim expertise may have just been lucky. This is especially true in the world of investing.

As an example, consider the stock recommendations of sell-side analysts.7 We would wager that most sell-side analysts believe they can pick stocks that will go up. But evidence for this ability seems weak. Our colleague, Justin Anderson, recently studied this exact supposition. He was curious to know how many sell-side analysts seemed to consistently exhibit skill in selecting stocks that outperform over time. He studied a universe of more than 2,000 U.S. sell-side analysts and evaluated their history of stock recommendations.

His initial results have been very interesting. Within his universe of approximately 2,000 U.S. sell-side analysts, only about 4% (roughly 80 analysts) seem to demonstrate non-random skill in picking stocks. What’s more, that skill seems concentrated around the top 1%, suggesting the gap between the best analysts and everyone else is very large.

In reality, these findings are less shocking than they may seem. The competition within investing is deep and the influence of randomness significant; we would therefore expect very few analysts to consistently outperform their peers. The results suggest that any claims of expertise by sell-side analysts should be taken with a hefty grain of salt. Our findings and experience suggest that while many sell-side analysts have useful knowledge of certain industries, very few possess the ability to pick stocks well consistently.
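To see why so few pass the bar, consider the kind of statistical test involved. The sketch below is generic (it is not Anderson's methodology, and the hit rates are hypothetical): it asks how likely an analyst's record would be if he or she were merely guessing.

```python
from math import comb

def p_value_at_least(hits: int, calls: int, p_chance: float = 0.5) -> float:
    """One-sided binomial test: the probability of at least `hits` successful
    calls out of `calls` if the analyst were merely guessing."""
    return sum(comb(calls, k) * p_chance**k * (1 - p_chance)**(calls - k)
               for k in range(hits, calls + 1))

# A hypothetical analyst with a 60% hit rate over 100 recommendations:
print(f"{p_value_at_least(60, 100):.3f}")  # ~0.028: hard to attribute to luck alone

# The same 60% hit rate over only 20 recommendations (12 hits):
print(f"{p_value_at_least(12, 20):.3f}")   # ~0.252: indistinguishable from guessing
```

Note, too, that with roughly 2,000 analysts tested at once, a 5% significance threshold would pass about a hundred lucky coin-flippers by chance alone, which is why a long history and consistency of process matter as much as the raw hit rate.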

Hence, when determining expertise, we must appreciate the role that luck can play in influencing outcomes. In situations where luck heavily influences outcomes, talent and good processes will generally produce positive results over the long term and with enough trials. But in the short term, this isn’t always the case: a talented individual can be unlucky, while a woefully incompetent individual can enjoy a string of good fortune. This implies that the more luck influences outcomes (as opposed to skill alone), (1) the more trials the evaluation requires, and (2) the more important it is to evaluate the expert’s process, as illustrated below.
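A toy simulation shows how slowly luck washes out. The skill levels below (a 45% and a 55% success rate per decision) are assumptions chosen for illustration:

```python
import random

def upset_rate(skill_a: float, skill_b: float, n_decisions: int,
               trials: int = 10_000) -> float:
    """Fraction of simulated track records in which the *less* skilled
    player A still finishes with more wins than the more skilled player B."""
    upsets = 0
    for _ in range(trials):
        wins_a = sum(random.random() < skill_a for _ in range(n_decisions))
        wins_b = sum(random.random() < skill_b for _ in range(n_decisions))
        upsets += wins_a > wins_b
    return upsets / trials

random.seed(1)
for n in (4, 50, 1_000):
    print(f"{n:>5} decisions: upset rate {upset_rate(0.45, 0.55, n):.1%}")
# Roughly 39%, 16%, and ~0%: only a long record separates skill from luck.
```

Over a handful of decisions, the weaker player wins a track-record comparison nearly two times in five; only over many trials does the skill difference reliably show up.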

Process is important because, ultimately, it is something we can control. While short-term results may be skewed by randomness, over a long enough window, a good process tends to result in good outcomes. We need to investigate not only how good one's process is, but also how well one sticks to it and improves it over time.

Cost

The final factor in identifying an expert is one of cost. It is often possible to overcome the challenges of measurement and skill versus luck by applying more resources to the problem. But this can involve time and money that we do not have. Who wants to invest 15 hours researching the best hair stylist in the city?

Cost is often increased by greater information asymmetry, unclear metrics, and decentralized sources of information. Unfortunately, the greater the cost and effort of the search, the more likely we are to get lazy or stressed and resort to weak processes. We start picking mutual funds because our neighbour told us to. We trust the doctor on TV because he is likeable and appeared on Oprah. We pick the expert who tells a good story. In short, we rely too heavily on anecdotal evidence and un-vetted recommendations. Of course, this doesn’t matter much when the consequences are low, like when selecting a masseuse; sometimes the cost of a bad choice is worth the time it saves. But weak processes can be problematic when the expertise we require has a big impact on our lives.

Avoiding Fakes

It’s much easier to avoid the “experts” who are not deserving of the title if we have strategies to evaluate their performance. Below are three that can assist in this effort:


1. Focus on tangible results

Just as heart surgeons should not be judged on their surgical skills alone, experts must demonstrate the ability to produce positive outcomes. Hair stylists must be able to give us the haircuts we want and pilots must be able to get us safely to our destination. It is not enough to have great hair or knowledge of airplanes; proof of concrete results matters.

This is important because many people confuse knowledge with skill, and many supposed experts mask their lack of skill with the appearance of knowledge. Since there is, by definition, a knowledge gap between us and an expert, the perception of knowledge can be easy to fake. Therefore, we must demand objective facts as proof points.

Of course, there are times when the expertise we seek is knowledge/advice (like from Dr. Oz). In these situations, it can be tricky to find objective measures of success. However, in these moments we can still try to assess (1) the likelihood that they have the necessary facts, and (2) whether reasonable conclusions are being drawn. To test their knowledge of the facts we can research their education, look for peer reviews, listen for whether they mention specific numbers and sources, and pay attention to whether they present multiple perspectives. To evaluate whether they are drawing reasonable conclusions, we can ask them to explain their process, where their logic could fail, and cross-reference these points with others.


2. Differentiate between skill and luck

In our search for expertise, we must understand how much randomness can influence outcomes. We don’t want to mistakenly attribute someone’s performance to skill when it was luck! There are several ways this can be approached as Michael Mauboussin illustrates in The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing.8

One factor to look at is the breadth of competition. The deeper the field of competition, the more closely matched the competitors’ skill levels become, and the more luck drives relative outcomes. This is obvious in any elite Ironman event: the athletes’ skill levels are often so close that rankings can be greatly influenced by randomness. When racers are crossing the finish line within minutes of one another, even the smallest unlucky event is significant. One flat tire or a misplaced shoe at transition can mean a dramatic difference in the final result. We see a similar pattern in investing: there is a deep pool of global competition, and managers’ relative performance is often significantly influenced by luck.

Another factor to consider is transitivity. Transitivity exists when A>B and B>C implies that A>C. If the Red Sox beat the Blue Jays, and the Blue Jays beat the Yankees, a transitive relationship would imply that the Red Sox should therefore beat the Yankees. When transitivity is present, it is more likely that skill dominates. When transitivity breaks down, we see more influence from luck.
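A quick way to make this concrete is to collect head-to-head results and look for cycles. The results below are hypothetical:

```python
from itertools import permutations

# Hypothetical head-to-head results, stored as (winner, loser) pairs.
results = {("Red Sox", "Blue Jays"), ("Blue Jays", "Yankees"), ("Yankees", "Red Sox")}
teams = {team for pair in results for team in pair}

# A cycle (A beats B, B beats C, C beats A) is a transitivity violation.
cycles = [(a, b, c) for a, b, c in permutations(teams, 3)
          if a == min(a, b, c)  # count each cycle once, not per rotation
          and (a, b) in results and (b, c) in results and (c, a) in results]
print("Intransitive cycles:", cycles or "none; skill likely dominates")
```

The more such cycles appear in a domain's results, the weaker the skill hierarchy and the larger the role of luck.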

Another signal to look for is mean reversion: the tendency for outcomes to revert towards the average over time. Generally speaking, the more an activity is driven by randomness, the stronger the force of mean reversion on outcomes. As an example, the performance of mutual fund managers tends to exhibit strong mean reversion: a manager can be top quartile in one quarter and fourth quartile the next. This signals that the ratio of luck to skill in investing is high.
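One way to calibrate this signal is to simulate the pure-luck baseline: if quarterly rankings were entirely random, about a quarter of top-quartile managers would repeat anyway. The sketch below is a null model (not real fund data); persistence meaningfully above its baseline is the footprint of skill.

```python
import random

random.seed(7)
N_MANAGERS = 1_000

def quartiles(scores: list[float]) -> list[int]:
    """Assign each manager a quartile by rank: 1 = top, 4 = bottom."""
    order = sorted(range(len(scores)), key=lambda j: -scores[j])
    q = [0] * len(scores)
    for rank, j in enumerate(order):
        q[j] = rank * 4 // len(scores) + 1
    return q

# Null model: each quarter's result is an independent random draw (zero skill).
quarter1 = quartiles([random.random() for _ in range(N_MANAGERS)])
quarter2 = quartiles([random.random() for _ in range(N_MANAGERS)])

top_now = [i for i in range(N_MANAGERS) if quarter1[i] == 1]
repeat_rate = sum(quarter2[i] == 1 for i in top_now) / len(top_now)
print(f"Top-quartile repeat rate under pure luck: {repeat_rate:.0%}")  # ~25%
```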

Ultimately, the role of luck shapes our evaluation process. The more luck is a factor in success, the wider the testing window needs to be to compensate for randomness, and the more we need to look for an individual or group sticking consistently to their process. Moreover, if a domain’s outcomes are truly determined exclusively by luck, we shouldn’t seek expertise in that field in the first place. Imagine an expert panel forecasting the next lottery numbers! As silly as it sounds, it’s not a far cry from consulting “experts” like psychics or currency forecasters for their predictions.


3. Question authority

One of the best ways to protect against so-called experts is to question the expertise of authority figures. Just because someone has credentials behind their name or appears on TV does not mean he or she has superior ability. While a degree from a good university or a professional designation signals that a knowledge base should be there, it does not guarantee that it is.

On a related note, it is important to fight the fear of asking questions. Many of us experience anxiety when we want to ask an expert questions…so we don’t. Whether this is due to a fear of looking stupid or to politeness, we don’t know, but we do ourselves a disservice when we avoid asking questions that would improve our understanding of a situation. Some of the most important and powerful information we’ve received from management teams has come from asking painfully simple questions. Never be afraid of asking the simple question; fear the expert who refuses to answer it.

Expertise in Investment Management

So how does this relate to investment management? Is investing an “expert problem” or is the industry deluding itself in thinking that expertise can exist? To answer this question, a few observations on investing are necessary.

First, asset markets, such as stock and bond markets, are complex, adaptive systems in which the future is virtually never known with certainty. Attempting to make predictions in these systems is arguably futile. Even if an investor believes strongly that a company is “quality,” he or she cannot know for certain how a stock will perform in a stated timeframe or whether it will outperform its peers.

This inherent uncertainty implies that there is no one “best answer” in investing. Unlike flying a plane or performing heart surgery, we cannot know for sure what the right action is in a given situation. Yes, we can have theories on what will work and what will not—but having theories is different from knowing what must be done. Expert problems are characterized by the ability of an individual with deep expertise to know which solution is best in the situation. This is not the case in investing.

Second, investing is a notoriously difficult industry to measure. It can take decades to evaluate whether or not a particular strategy was successful, and even then, it is difficult to know whether performance was due to the investor’s skill, luck or both. This suggests that investing is not an expert problem. In the case of doctors and pilots, we can point to specific outcomes that demonstrate expertise (more people live). We cannot do this in investing because we really can’t know for sure how much randomness influenced the outcomes.

Third, markets are intricately complex systems involving massive amounts of information, both hidden and visible. The idea that any one person can hold enough knowledge to be the “expert” on the market is absurd. Even if it were possible for an individual to hold all publicly available information in his or her mind, that person could still never know exactly what every other investor is thinking—a key input into knowing how one should act.

Thus, in our opinion—admittedly a controversial view—investment management is simply not an expert problem. It is not a problem best solved by one individual’s expertise, and there is little evidence to suggest that most professional money managers meet the three required criteria (consistently superior performance, concrete results, and skill).

Therefore, investing is a “jellybean-type” problem, more appropriately tackled by capturing the wisdom of crowds. It calls for a very different approach: instead of relying on individual fund managers to be experts, it means building teams that debate ideas and work together. According to Surowiecki, three conditions must hold in order to capture the collective wisdom of crowds: (1) genuine independence of opinion (and thus diversity of opinions); (2) incentives for individuals to provide their opinions; and (3) an aggregation mechanism.

What does this mean in practice? If we assume that investing is not an expert problem, we need to build a system that can meet the three conditions above to capture the wisdom of crowds. It means that investment management firms should focus less on hiring “experts” and instead put their energy towards building teams and a system that enables clients’ portfolios to be resilient over time.

And while investing may not be an “expert” problem by the truest definition, professional money managers can have expertise in two areas that are extremely important to clients: (1) they can have expertise in building resilient client portfolios; and (2) they can have expertise in building systems that can tap the wisdom of crowds. Expertise in both of these areas may deliver significant value for clients.

Final Thoughts

At Mawer we rely on expertise every day—expertise in building financial models and in identifying management teams that are good operators. And we regularly tap the knowledge of experts to improve our understanding of industries and business models. Yet we are careful not to delude ourselves into thinking that we, or any others, are “experts” on investing as a whole. Not only does investing not seem like an expert problem, we see no real evidence to suggest that “investing experts” exist. Socrates was called the wisest man in Athens precisely because he was the one who admitted he “didn’t know.”

When it comes to investment management, our focus is to ensure our team has expertise in building resilient portfolios for our clients that are positioned to withstand whatever unfolds. This is feasible. It ultimately means adhering to a disciplined process.

It pays to be skeptical of the expert label. This extends far beyond investing. While expertise like Sullenberger’s exists in the world, it manifests far less frequently than we would believe. So when the search for an expert really matters—when our financial security or our very lives depend on it—it makes sense to have a process that can help us identify it.

True expertise is rare.


1 Surowiecki, James. The Wisdom of Crowds. Random House of Canada Books, Toronto: 2004.
2 Ericsson, K. Anders and Neil Charness. Expert Performance: Its Structure and Acquisition. “American Psychologist.” August 1994.
3 Trillin, Calvin, “The Red and the White: Is it possible that wine connoisseurs can’t tell them apart?” The New Yorker. August 19, 2002 issue.
4 Ericsson, K.A., Prietula, M.J., Cokely, E.T. The Making of an Expert. “Harvard Business Review,” 2007 Jul-Aug; 85(7-8): 114-21, 193.
5 Korownyk, Christina, James McCormack, Vanessa Lam et al., Televised medical talk shows—what they recommend and the evidence to support their recommendations: a prospective observational study. “British Medical Journal” 2014; 349.
6 National Geographic News.com, “Flash Facts About Lightning.” June 24, 2005. Web: June 2, 2015. National Geographic estimates the odds of getting struck by lightning in your lifetime (assuming an 80-year lifespan) at 1 in 3,000, while Wikipedia puts the figure closer to 1 in 12,000. It is important to mention that Sullivan’s occupation as a park ranger could have increased his chances of getting hit—but even then, the odds against getting hit 7 times would remain astronomically long.
7 Those who work at investment banks and publish their recommendations publicly.
8 Mauboussin, Michael. The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing. Harvard Business Review Press, 2012.

This post was originally published at Mawer Investment Management
