Why Quants Don't Pick Stocks

by Corey Hoffstein, Newfound Research



  • Quant is a broad word with many job descriptions in finance. In asset management, a quant is someone who applies mathematical (usually statistical) techniques to analyzing the securities market, usually with an eye towards identifying investment opportunities.
  • Quants rely on factors: systematic investment approaches that capture and explain the return difference between different cohorts of securities.
  • Factors are all about buying baskets of things; quants like to avoid idiosyncratic risk because we do not believe investors are compensated for bearing the additional risk.
  • For a market to function, someone has to perform information discovery on individual stocks. Can this role be filled by quants or does the market require stock pickers to function?

“Quant” is a word with many connotations.  For some, quants are the numerical wizards conjuring new sources of alpha.  For others, they are the out-of-touch wonks who caused the financial crisis.

At the broadest, most sweeping level of generalization, quants specialize in applying mathematical techniques and methods – mostly statistical – to financial markets.

In finance, there are all sorts of quants.  Some work deriving new pricing theories for derivatives; others work managing and modeling risk; and still others – like us – work to identify empirical pricing relationships for building portfolios.

We’ll dedicate this commentary to our parents and significant others, who still have no idea what we do for a living.

More Risk, More Reward

In the 1960s, the Capital Asset Pricing Model (“CAPM”) was introduced.  CAPM provides a model for pricing an individual security.  Known as a “single factor” model, CAPM modeled a stock’s excess return as being one-part overall market return and one-part idiosyncratic return.
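In regression form, CAPM models a stock's excess return as:

```latex
r_i - r_f = \beta_i \, (r_m - r_f) + \varepsilon_i
```

where r_i is the return of stock i, r_f is the risk-free rate, and r_m is the return of the overall market.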

Here the Greek letter β – “beta” – estimates the stock’s sensitivity to market returns, while the Greek letter ε (“epsilon”) at the end of the equation is the idiosyncratic component.

We’ve all heard the saying, “no risk, no reward.”  In the case of financial markets, the question is: “which risks earn reward?”

As the theory goes, not all of them do.  CAPM states that investors should be compensated only for bearing market risk, not for bearing idiosyncratic risk.  The argument is that market risk is undiversifiable: build a portfolio of stocks and the beta component remains.  The idiosyncratic risk, on the other hand, can be diversified away.

For example, a portfolio of 30-40 stocks is typically sufficient to reduce idiosyncratic exposure enough that only the shared market risk remains.  If we did earn compensation for bearing the risk, we could create an arbitrage.  We would build a portfolio of 30-40 stocks to diversify away our idiosyncratic risk – but not the reward! – and then short the broad market to eliminate our market risk.  Hence, it holds that we should not be compensated for bearing idiosyncratic risk.
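This diversification effect is easy to verify numerically.  The sketch below – our own illustration, not part of the original argument – gives every stock the same unit beta plus an independent idiosyncratic shock, then measures the volatility of equal-weight portfolios of increasing size:

```python
import numpy as np

rng = np.random.default_rng(0)

n_periods = 5_000
market = rng.normal(0.0, 0.04, n_periods)   # one shared "market" factor, 4% vol

def portfolio_vol(n_stocks, idio_vol=0.08):
    """Volatility of an equal-weight portfolio of n_stocks unit-beta stocks."""
    idio = rng.normal(0.0, idio_vol, (n_periods, n_stocks))
    stocks = market[:, None] + idio         # beta of 1 plus idiosyncratic noise
    return stocks.mean(axis=1).std()

for n in (1, 10, 40, 200):
    print(f"{n:>3} stocks: vol = {portfolio_vol(n):.4f}")
```

With 40 names the portfolio's volatility is already close to the 4% market volatility; the remaining idiosyncratic contribution shrinks roughly as one over the square root of the number of holdings.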

Here we’ll point out that CAPM stands in direct opposition to the practice of stock picking.  No amount of understanding of idiosyncratic, company-specific risk should lead to an expectation of higher returns.

Being Compensated for Non-Market Risk

When a quant says, “risk factor,” what they mean is a characteristic that helps explain why one stock did one thing and another did something else.  In CAPM, the risk factor was the market.  If the market goes up, stocks with more market exposure should go up more while stocks with a lower exposure will go up less.

In 1974, Barr Rosenberg identified that stocks co-varied with a number of other non-market risk factors.  He found that firm-specific characteristics like balance sheet data and industry membership, as well as security characteristics like historical price behavior, were significant in explaining differences between stocks.  In quant terms, we call this “explaining cross-sectional returns.”
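A one-period cross-sectional regression can be sketched as follows.  The characteristic names and the assumed “true” premia are hypothetical, invented purely to illustrate the mechanics:

```python
import numpy as np

rng = np.random.default_rng(7)

n_stocks = 500
# hypothetical standardized characteristics for a single month
value_score = rng.normal(size=n_stocks)   # e.g. standardized book-to-price
size_score = rng.normal(size=n_stocks)    # e.g. standardized smallness

# simulated returns: assumed "true" premia of 40bp and 20bp plus noise
returns = 0.004 * value_score + 0.002 * size_score + rng.normal(0, 0.05, n_stocks)

# cross-sectional OLS: the slopes are that month's estimated factor returns
X = np.column_stack([np.ones(n_stocks), value_score, size_score])
coefs, *_ = np.linalg.lstsq(X, returns, rcond=None)
print("estimated factor returns:", np.round(coefs[1:], 4))
```

The estimated slopes are noisy for a single month; in practice the regression is run period by period and the significance of the average slope is what identifies a factor.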

In 1977, Sanjoy Basu identified what would become known as the value factor: whether a stock was expensive or cheap (based on book-to-price) was a statistically significant explanatory variable in the cross section of security returns.  Later, Banz (1981) and Reinganum (1981) would discover the size factor, where a company’s capitalization held significant power in explaining cross-sectional returns.

This culminated in the early 1990s with what would become known as the Fama-French 3-Factor (“FF3”) model.  FF3 extends CAPM to introduce the value (“HML” for “high-minus-low”) and size (“SMB” for “small-minus-big”) factors as significant explanatory variables.
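In the same regression form as CAPM, FF3 reads:

```latex
r_i - r_f = \beta_i \, (r_m - r_f) + s_i \cdot SMB + h_i \cdot HML + \varepsilon_i
```

where SMB and HML are the returns of the size and value long/short portfolios, and s_i and h_i are the stock's sensitivities to them.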

Here we need to take a step back and explain how a factor gets an associated “return.”  In CAPM, the answer was obvious: the market return is simply the return of the capitalization-weighted equity market.  But what is the return of the value factor?

Identifying important risk factors is all about identifying variables that explain differences.  For example, with the value factor, Basu found that companies with a high book-to-price value behaved differently than those with a low book-to-price.  To capture this effect, quants build a “long/short” portfolio to capture the return difference in the categories.  In the case of value, the portfolio shorts stocks with low book-to-price values and uses the proceeds to buy stocks with high book-to-price values.  With the expectation that the two legs will behave differently, the long/short portfolio captures this difference.
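Mechanically, a minimal sketch of that construction – equal-weighted, single-period, and ignoring rebalancing, weighting schemes, and costs – looks like:

```python
import numpy as np

def value_factor_return(book_to_price, next_returns, quantile=0.3):
    """Equal-weight long/short factor return: long cheap, short expensive."""
    book_to_price = np.asarray(book_to_price, dtype=float)
    next_returns = np.asarray(next_returns, dtype=float)

    k = int(len(book_to_price) * quantile)
    order = np.argsort(book_to_price)   # ascending book-to-price
    short_leg = order[:k]               # lowest B/P: expensive stocks
    long_leg = order[-k:]               # highest B/P: cheap stocks
    return next_returns[long_leg].mean() - next_returns[short_leg].mean()

spread = value_factor_return(
    book_to_price=[2.0, 0.2, 1.5, 0.5, 1.0, 0.8, 1.2, 0.3, 0.9, 1.8],
    next_returns=[0.02, -0.01, 0.02, -0.01, 0.0, 0.0, 0.0, -0.01, 0.0, 0.02],
)
print(spread)   # cheap names beat expensive names by about 3% this period
```

The factor's "return" is simply this spread between the two legs, period after period.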

Risk Factor vs. Risk Premium vs. Anomaly

There is an important distinction in the world of quants between a risk factor, a risk premium, and an anomaly.

A risk factor is something that helps explain the cross-sectional differences in security returns.

A risk premium is the excess return we expect to earn for exposure to a certain risk factor.

For example, “market” exposure is the risk factor, while the “equity risk premium” is the associated premium that compensates us for bearing that risk.

Not all risk factors necessarily have associated risk premia, however.  For example, it is well established that stocks tend to behave similarly to other stocks within the same industry group.  Industry classification, then, may be a risk factor that helps us explain why two stocks behave differently.  If I know that a stock is an energy company, and I know how the energy sector did on a given day, I likely have some meaningful information about how that stock did.

We do not expect to earn a premium, however, for investing in a stock in one industry versus another.  Why?  Again, because this risk can be completely diversified away.  Industry classification only helps us decompose the idiosyncratic side of the equation; as we said before, we should not profit from idiosyncratic risk.

So why does FF3 highlight value and size instead of sweeping them into the idiosyncratic component?  At the time of publishing, value and size were both found to have statistically significant premia associated with them.  Historically, investing in small companies had earned you a premium over investing in large ones while investing in cheap companies had earned a premium over expensive ones.

These are called risk premia because the excess compensation can be tied to excess risk being taken.  For example, researchers posit that the value premium comes from being willing to bear the higher distress risk of the companies owned while the size premium might arise from illiquidity risk.[1]

An anomaly is slightly different: it is an excess return that cannot be explained by exposure to risk.  The momentum factor, for example, is the excess return that has been historically generated by short-selling recent underperformers and buying recent outperformers.  This strategy has historically generated significant excess premium, but no convincing argument has been made as to what risk the premium is compensation for.  Often these anomalies are attributed to behavioral biases exhibited by investors (i.e. irrational behavior) or market structure.

Not all investors can profit from these risk premia and anomalies simultaneously.  Rather, they arise because some investors are giving up the return that others capture.  For example, investors unwilling to bear distress risk may sell their stocks at a discount to intrinsic value to entice other investors to buy them.  Hence, value investors get access to these higher risk stocks at a discount.  If they diversify, they should earn a premium.  In a sense, they are acting as “insurer” against the distress risk for investors unwilling to bear it.

Depending on whom you ask, there are between 300 and 600 published “factors” (either risk premia or anomalies).  As we all know, past performance is not a guarantee of future results.  Unfortunately, past performance is really all we have to work with as quants.  And as a million analysts pore over the same data, it is likely that a number of spurious factors will be discovered.  From the list of several hundred, there are just a handful that quants broadly accept as being significant: value, momentum, illiquidity, low-volatility, and quality.

That is, by no means, a comprehensive list.  We know at least a few quants who would disagree with our inclusion of low-volatility and quality.  But that’s what makes the market go ‘round.
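The multiple-testing problem behind those spurious discoveries is easy to demonstrate: test enough pure-noise return series at the usual 5% level and a handful will look “significant” by chance.  The factor count and return parameters below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

n_factors, n_months = 300, 600   # 50 years of monthly data
# pure-noise "factor" return series: no real premium anywhere
returns = rng.normal(0.0, 0.03, (n_factors, n_months))

# t-statistic of each candidate factor's mean return
t_stats = returns.mean(axis=1) / (returns.std(axis=1, ddof=1) / np.sqrt(n_months))

discoveries = int((np.abs(t_stats) > 1.96).sum())
print(f"{discoveries} of {n_factors} pure-noise factors clear |t| > 1.96")
```

On average about 5% of the candidates – roughly fifteen here – will clear the conventional significance bar despite having no premium at all, which is why quants demand economic rationale on top of statistical evidence.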

Is Stock Picking Fundamentally Flawed?

While stock pickers tend to focus on what makes a company unique or a situation special – the highly idiosyncratic – theory tells us that we should not be compensated for idiosyncratic risk.

Hence, quants don’t pick stocks.  Rather, we buy big baskets of things in hopes of capturing the common traits and characteristics (i.e. “factors,” which we think we’ll make money on) and eliminating the unique, idiosyncratic components.  Quants view good stock pickers as closet factor investors (whether they know it or not).  Warren Buffett?  He’s a value, quality, anti-beta guy with some leverage.[2]

That’s why we view the smart-beta revolution as so important.  Nothing against stock pickers, but if you’re just providing factor exposure, we might as well commoditize you with rules-based, low-cost indices.

Of course, theory and reality have to meet somewhere.  Consider the following situation: the market goes 100% passive and index-based.  What happens?  Since everyone is buying the same stocks in the same relative proportion, there can be no relative price changes.  The market completely breaks: there can be no market with only price-takers.

Active investors are, therefore, necessary for functional markets.  This is captured by the Grossman-Stiglitz Paradox.  The paradox states:

  1. For markets to be efficient (i.e. nobody can make excess profit), investors must participate in price discovery;
  2. Because price discovery is expensive, investors must expect compensation for performing it;
  3. Hence for markets to be efficient, someone must be expecting excess profit.

Someone has to perform price discovery.  Someone has to figure out why Coca-Cola is worth something different than Pepsi.  Is that not idiosyncratic information?  If stock pickers are providing this service, they should expect compensation for it: they can’t be so irrational as to do it for free (or worse, at a net loss after costs).

Does it have to be stock pickers who perform this service, though?

To explore this question, we’ve built a market simulation.[3]  We’re going to assume that there are two sides to the market: passive investors and systematic value investors.

We’ll assume our market has 10 stocks in it.  At the beginning, the passive investors hold all the shares: 100,000 of each stock, to be exact.  Each share is priced at exactly $100 and has constant earnings per share.  For each stock, we randomly select an earnings per share level of between $0 and $5.  We assume these earnings are paid out, 100%, as a dividend to shareholders.

Now, if the passive investors never trade, nothing happens.  Fortunately, passive investors often trade for cash-flow reasons (not to mention share issuances, repurchases, index reconstitutions, et cetera – but we’ll ignore these).  In our simulation, we’ll assume that at each step the passive investors, collectively, make an aggregate trade: trying to purchase or sell some shares.  They will always look to transact in proportion to the current market capitalization.

Taking the other side of that trade will be our systematic value investors.  They will look at the dividend yield from each stock and look to buy the cheapest 30% and sell short the most expensive 30% (the middle 40% simply go untraded).  They start with $0, but can freely borrow shares to sell short.[4]

For simplicity, we assume $0.01 bid/ask spreads, meaning that the value investor always buys a penny above price and sells a penny below.[5]

At the end of each period, the stocks pay their dividend to shareholders.[6]

What happens to “valuations” (as measured, here, by dividend yield)?
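A heavily simplified re-implementation suggests the answer.  Note that the price-impact rule (each period's value trade moves a stock's price by 1% in the direction of the trade) and the fixed, evenly spaced dividend levels are our own assumptions for reproducibility – the original simulation drew dividends randomly and its trade mechanics are richer:

```python
import numpy as np

n_stocks = 10
prices = np.full(n_stocks, 100.0)             # every stock starts at $100
dividends = np.linspace(0.5, 5.0, n_stocks)   # constant per-share dividends

def yield_spread(p):
    """Dispersion in 'valuation': max minus min dividend yield."""
    y = dividends / p
    return y.max() - y.min()

start = yield_spread(prices)
for _ in range(2000):
    order = np.argsort(dividends / prices)    # rank stocks by dividend yield
    expensive, cheap = order[:3], order[-3:]  # 30% of 10 names per leg
    prices[cheap] *= 1.01                     # value buying lifts cheap names
    prices[expensive] *= 0.99                 # value shorting drags rich names
end = yield_spread(prices)

print(f"yield spread: {start:.4f} -> {end:.4f}")
```

Under these assumptions the spread in dividend yields collapses by more than an order of magnitude: the systematic rule, using only public price and dividend information, pushes valuations together.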

Valuations converge.  With the publicly available information of price and dividends, our systematic value investors were able to converge prices to a place where investors are largely indifferent between the stocks they hold.

For this service, the value investor earns a premium.  Let’s look at what happens to the portfolio of the value investors:

The value investors made a significant profit during the repricing period; but once valuations became more-or-less constant relative to one another, the profit disappeared.  They arbitraged their way out of a job.

It is important to note here that this would not work if we simply introduced momentum investors.  Without an anchor to move towards (like “valuation”), if we seeded the system with some initial price velocity, prices would spiral out of control.  That said, we can see how a momentum trade could help a value trader converge prices in a faster manner.  In a sense, one might argue that momentum traders are leveraging information provided by value traders.

What we see, however, is that stock pickers are not necessary: a systematic value approach helps the market find equilibrium.  Whether that equilibrium is right or wrong, of course, depends on the value metric used.  We could argue, though, that valuation is always in the eye of the beholder.  So long as there are a number of different approaches being employed by the market, systematic value investors can play the role of stock pickers.

That is not to say that stock pickers cannot be successful.  Rather, there is just no expectation that they should earn a premium for bearing high levels of idiosyncratic risk.  We are not saying, however, that stock pickers cannot profit off of idiosyncratic information.

For example, in our simulation, a stock picker may be able to profit in a scenario where they have insight that a company will cut or grow its dividend in the future (making the value proxy used by value investors invalid).  Ex-post, the value investors would pick up on that information, but information realized before the rest of the market is valuable.

By its very nature, this sort of information is idiosyncratic.  Each situation is unique, special, and must be analyzed individually.  The statement past performance is not a guarantee of future results really applies here.  What is being harvested is not a risk premium or an anomaly: it is pure alpha.

For some, alpha is the purest form of compensation.  It is profit earned for discovering information and providing it to the market.  It is also inconsistent: harvesting it requires identifying unique opportunities, one at a time.  After all, if a manager claims to have a disciplined process for identifying alpha opportunities, then that process can be systematized and factorized.

Which means that, in the eyes of quants, the approach is unscientific.  It does not give us a way to test and re-test our investment hypotheses.  While we all know past performance is not a guarantee of future results, past performance is really all we have to draw conclusions from when performing quant research.  Without a completely disciplined approach, there is no way to draw statistical conclusions.

For managers who focus on the idiosyncratic, this holds especially true.  If each future investment opportunity is unique, even the best track record is a statistically meaningless basis for confidence going forward.

We’d rather focus on areas where we have the expectation of earning a consistent reward.  For us, these are the theoretically and empirically proven risk premia and anomalies like value and momentum.  That’s why quants don’t pick stocks.

[1] It’s worth pointing out that since publishing, evidence for the size premium has waned.  Over the next several years, it may give up its title as a risk premium and be swept back into the category of a risk factor.

[2] See http://docs.lhpedersen.com/BuffettsAlpha.pdf

[3] Despite our usual disdain for simulations, we seem to be using them a lot recently.

[4] The lack of lending cost and margin requirement is obviously not realistic, but we don’t think it meaningfully detracts from the point here.

[5] We could argue that for providing liquidity, the active investors should be compensated and the passive investors should cross the bid/ask spread (see https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2849071).  If this is the case, however, our simulation would break: prices would diverge further from fair value.

[6] We assume, for convenience, that “earnings” and “dividends” are accrued and paid instantaneously, so there is no change in share price.

Corey is co-founder and Chief Investment Officer of Newfound Research, a quantitative asset manager offering a suite of separately managed accounts and mutual funds. At Newfound, Corey is responsible for portfolio management, investment research, strategy development, and communication of the firm's views to clients.

Prior to offering asset management services, Newfound licensed research from the quantitative investment models developed by Corey. At peak, this research helped steer the tactical allocation decisions for upwards of $10bn.

Corey is a frequent speaker on industry panels and contributes to ETF.com, ETF Trends, and Forbes.com’s Great Speculations blog. He was named a 2014 ETF All Star by ETF.com.

Corey holds a Master of Science in Computational Finance from Carnegie Mellon University and a Bachelor of Science in Computer Science, cum laude, from Cornell University.

You can connect with Corey on LinkedIn or Twitter.
