by Jason Hsu, Research Affiliates LLC


Every year we invite some of the investment industry’s most creative thinkers to speak about their work at the Research Affiliates’ Advisory Panel conference. Along with Nobel laureates Vernon Smith and Harry Markowitz, the speakers at our 14th annual meeting included Campbell Harvey, Richard Roll, Andrew Karolyi, Bradford Cornell, Andrew Ang, Charles Gave, Tim Jenkinson, and our very own Rob Arnott.1 The richness of the speakers’ presentations beggars any attempt to summarize them; I’ll limit myself to the points I found most intriguing and illuminating. I also acknowledge that this account may reflect my own capacity for misinterpretation as much as the genius of the speakers’ actual research.


Factors Everywhere

Cam Harvey of Duke University’s Fuqua School of Business and the Man Group, who recently completed a six-year stint as editor of the Journal of Finance, spoke about revising the traditional t-statistic standard to counter the industry’s collective data-snooping for new factors. Dick Roll presented a protocol for factor identification that helps classify a factor as either behavioral or risk-based in nature. These two topics are at the center of our research agenda (Hsu and Kalesnik, 2014; Hsu, Kalesnik, and Viswanathan, 2015).

Cam has written about the factor proliferation that has resulted from extensive data-mining in academia and the investment industry (Harvey, Liu, and Zhu, 2015; Harvey and Liu, 2015). As of year-end 2014, he and his colleagues had turned up 316 purported factors reported in top journals and selected working papers, with an accelerating pace of new discoveries (roughly 40 per year). Cam’s approach to adjusting the traditional t-stat is mathematically sophisticated but conceptually intuitive. When one runs a backtest to assess a signal that is, in fact, uncorrelated with future returns, the probability of observing a t-stat greater than 2 is 2.5%. However, when thousands upon thousands of such backtests are conducted, the probability of seeing at least one t-stat greater than 2 approaches 100%.
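To make the arithmetic concrete, the short Python sketch below shows how quickly the chance of a spurious “discovery” compounds as the number of backtests grows. It assumes the backtests are independent, which real backtests are not, so it is an illustration of the effect rather than a precise estimate.

```python
import numpy as np
from scipy import stats

# Probability that a single backtest of a useless (zero-alpha) signal
# produces a t-stat above 2 purely by chance (one-sided normal tail).
p_single = 1 - stats.norm.cdf(2.0)   # ~0.023, roughly the 2.5% quoted above

# Probability of at least one t-stat > 2 across N independent backtests.
for n_tests in (1, 100, 1_000, 10_000):
    p_any = 1 - (1 - p_single) ** n_tests
    print(f"{n_tests:>6} backtests -> P(at least one t > 2) = {p_any:.3f}")
```

With 100 independent backtests the chance of at least one false positive is already about 90%; with 1,000 it is effectively certain.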

To establish a sensible criterion for hypothesis testing in the age of dirt-cheap computing power, we need to adjust the t-stat for the aggregate number of backtests that might be performed in any given year by researchers collectively. Recognizing that there are a lot more professors and quantitative analysts running a lot more backtests today than 20 years ago, Cam argued that a t-stat threshold of 3 is certainly warranted now. Applying this standard of significance, Cam also concluded that, outside of the market factor, the factors that seem pervasive and believable are the old classics: the value, low beta, and momentum effects. The newer anomalies are most likely the result of data mining.
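As a rough illustration of why the threshold must rise with the number of tests, the sketch below applies a simple Bonferroni-style correction. This is only a back-of-the-envelope version, not the exact set of multiple-testing adjustments in Harvey, Liu, and Zhu (2015), but it points in the same direction: the more factors the profession tests, the higher the t-stat a genuine discovery must clear.

```python
from scipy import stats

# Bonferroni-style adjustment: to keep the family-wise error rate near 5%
# (two-sided) across M tests, each test needs a p-value below 0.05 / M,
# which translates into a higher required t-stat.
for m_tests in (1, 50, 316, 1_000):
    t_threshold = stats.norm.ppf(1 - 0.025 / m_tests)
    print(f"M = {m_tests:>5} tests -> required |t| ~ {t_threshold:.2f}")
```

Even at a few hundred tests, the naive adjustment pushes the required t-stat well above the traditional threshold of 2.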

I am happy to note that at Research Affiliates we adopt an even more draconian approach to research. For example, Dr. Feifei Li requires a t-stat greater than 4 from our more overzealous junior researchers. Indeed, as we add to our research team and thus to the number of backtests we perform in aggregate, we recognize that our “false discovery” rate also increases meaningfully. We have therefore developed procedures, as we must, for establishing robustness beyond the simple t-stat.

Richard Roll, who was recently appointed Linde Institute Professor of Finance at Caltech, reminded us that there are essentially three types of factor strategies:

  1. Those that do not appear to be correlated with macro risk exposures yet generate excess returns
  2. Those that are correlated with macro risks and thus produce excess returns
  3. Those that seem to be correlated with sources of volatility but don’t give rise to excess returns

Dick proposed an identification scheme that first extracts the macro risk factors through a principal component approach and then determines whether known factor strategies belong to the first, second, or third group. The principal components should be derived from a large universe of tradable portfolios representing diverse asset classes and equity markets as well as proven systematic strategies. Think of the extracted principal components as the primary sources of systematic volatility in the economy. A modified Fama–MacBeth cross-sectional regression approach, which uses only “real” assets to span the cross-section, should then be applied to determine which principal components command a premium and which do not. We then examine the “canonical” correlation between the principal components and the various factor strategies of interest. This helps identify which factor strategies deliver greater returns than their exposure to systematic volatility would warrant, and which, in contrast, deliver less return than their exposure would suggest. For instance, Dick concluded that momentum is almost certainly a free lunch: it creates excess returns without exhibiting any meaningful covariance with true underlying risks (Pukthuanthong and Roll, 2014).
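For readers who want to see the shape of this protocol, the sketch below strings the three steps together on simulated data: extract principal components from a panel of tradable portfolios, run Fama–MacBeth cross-sectional regressions to see which components are priced, and then correlate a candidate factor strategy with those components. It is a minimal illustration under simplifying assumptions (simulated returns, no intercepts, a single strategy series so canonical correlation collapses to ordinary correlation), not Pukthuanthong and Roll’s actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N_ASSETS, N_PCS = 600, 100, 5                 # months, tradable portfolios, components

asset_returns = rng.normal(0.005, 0.04, size=(T, N_ASSETS))   # placeholder panel of returns
factor_strategy = rng.normal(0.003, 0.02, size=T)             # e.g., a momentum return series

# Step 1: extract principal components -- the main sources of systematic volatility.
demeaned = asset_returns - asset_returns.mean(axis=0)
cov = np.cov(demeaned, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
weights = eigvecs[:, ::-1][:, :N_PCS]            # top components by explained variance
pc_returns = demeaned @ weights                  # time series of PC "portfolios"

# Step 2: Fama-MacBeth style check of which components command a premium.
betas = np.linalg.lstsq(pc_returns, demeaned, rcond=None)[0].T   # N_ASSETS x N_PCS loadings
premia = []
for t in range(T):                               # cross-sectional regression each period
    lam, *_ = np.linalg.lstsq(betas, asset_returns[t], rcond=None)
    premia.append(lam)
premia = np.array(premia)
t_stats = premia.mean(axis=0) / (premia.std(axis=0, ddof=1) / np.sqrt(T))

# Step 3: correlation between the factor strategy and the extracted components.
exposures = [np.corrcoef(factor_strategy, pc_returns[:, k])[0, 1] for k in range(N_PCS)]

print("premium t-stats per component:", np.round(t_stats, 2))
print("strategy correlation with components:", np.round(exposures, 2))
```

A strategy whose average return is high while its correlations with the priced components are negligible would fall into the first group above; one whose return is fully explained by those correlations would fall into the second.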

The factor emphasis of the meeting continued with Andrew Ang, the Ann F. Kaplan Professor of Business at Columbia. Andrew presented a framework for factor investing that encourages investors to think more about factors and less about asset classes (Ang, 2014). Andrew argues that factors are to asset classes as nutrients are to meals. Ultimately, what we care about are the vitamins, amino acids, proteins, carbohydrates, and other nutrients we get from meals.
