Little Big Details

by Corey Hoffstein, Newfound Research

This blog post is available as a PDF download here.

  • Limited attention drives us to focus on the big details of investment strategies.
  • Small details can have an outsized impact on performance, especially if they can compound upon one another.
  • To quote Aaron Brown, Head of Risk at AQR: “It takes a lot of compounding to turn a mistake into a disaster. There will never be any shortage of mistakes […].  So it’s the compounding you have to prevent, not the mistakes.”
  • We believe details like how frequently to rebalance and when to rebalance are too often overlooked and can have a dramatic impact on investor results.

In the world of asset management, attention goes to the big details.  Details like: “what are we investing in?”  Or, “what is the process?”  And without a doubt these are very important things.

Yet our limited time and attention often lead us to gloss over the small details.  Our expectation is that the big things should have a big impact and the small things should have a small impact.

That is not always the case.

In this commentary, we are going to discuss two small things – the frequency of rebalancing and when rebalancing occurs – to show that small things, done wrong, can have an outsized impact on long-term results.

How Frequently Should We Rebalance?

The standard protocol for most systematic strategies is to rank the investment universe based upon some sort of signal or score, with the assumption that a stronger signal forecasts a higher future return.  At each rebalance, the portfolio tilts towards those securities with the strongest signals and away from those with the weakest (or even sells them short).

Assuming our signals correlate strongly with forecasted returns, to maximize our expected return we would want to rebalance as frequently as possible so that we are always holding the securities with the strongest signals.

Solely maximizing expected return, however, with no consideration of risk, may not be prudent.  Consider that the optimal choice to maximize expected return would be to continuously rebalance into the single security with the strongest signal, subjecting the portfolio to a tremendous amount of idiosyncratic risk.

Furthermore, maximizing expected returns may not maximize realized returns, as the returns we expect and the returns we realize can be quite different.  When our investment decisions compound upon one another, variance can play a dramatic role in realized returns.

Rather, investors will likely prefer to maximize expected return subject to some risk level or tracking error threshold.  This added wrinkle means that diversification will play an important role in how frequently we need to rebalance, particularly in the face of transaction costs, taxes, operational costs, and whipsaw costs (we’ll shorthand these as “turnover costs”).

To get an idea for this, let’s pretend for a moment that we are managing a value portfolio.  (For simplicity’s sake, we’re going to assume that stocks within our portfolio are held in an equal-weight fashion.)

In the face of turnover costs, how frequently we will want to rebalance our portfolio will depend upon a number of factors:

1. How large is our portfolio?

If there are N stocks in our portfolio, then the average stock will have to fall N/2 positions before it needs to be removed.  Furthermore, the larger N is, the smaller the change is to the portfolio when a security is removed.  Turnover costs for small changes may exceed the increase in marginal risk-adjusted return, and thus we would only want to rebalance at a frequency when there are enough changes to warrant the costs.

All else held equal, larger portfolios will need to be rebalanced less frequently.

2. How fast do signals decay?

For some strategies, signals many months in the future correlate highly with signals today.  For example, value tends to be a strategy where signals decay slowly: having a high score today implies a high probability of having a high score in six to twelve months.  On the other hand, momentum scores decay quickly: a high momentum score today implies little about momentum scores a year from now.

All else held equal, strategies with faster decaying scores will need to be rebalanced more frequently.
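One rough way to quantify decay is to rank-correlate today's cross-sectional scores with scores several months ahead.  Below is a minimal Python sketch under assumed inputs: a hypothetical DataFrame `scores` whose rows are month-end dates and whose columns are stocks.

```python
import pandas as pd

def signal_decay(scores: pd.DataFrame, max_lag: int = 12) -> pd.Series:
    """Average cross-sectional rank correlation between scores today
    and scores k months ahead, for k = 1 through max_lag."""
    return pd.Series({
        k: scores.shift(-k).corrwith(scores, axis=1, method="spearman").mean()
        for k in range(1, max_lag + 1)
    })
```

A slowly decaying signal (e.g. value) should keep this correlation high out to six or twelve months, while a fast-decaying signal (e.g. momentum) should see it fall toward zero within a few months.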

3. How much diversification is available?

Let’s assume that it only makes sense to rebalance if M of our N stocks need to be replaced in the portfolio.  As the correlation between stocks in our portfolio increases, the odds of M stocks falling out between two periods will converge towards the probability of just a single stock falling out.  On the other hand, if the stocks are highly diversified from one another, the probability of M falling out will be the probability of a single stock falling out raised to the Mth power.

All else held equal, more internal diversification within the portfolio decreases the frequency with which we need to rebalance.
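As a toy numerical illustration (the numbers here are assumptions chosen for exposition): suppose any one holding has a 10% chance of falling out of the portfolio between two rebalance dates, and we only trade once M = 3 names need replacing.

```python
# Toy illustration (assumed numbers) of the two correlation extremes.
p_single = 0.10   # chance any one stock falls out between rebalances
M = 3             # number of replacements needed to justify trading

p_diversified = p_single ** M   # independent names: 0.1% chance we trade
p_correlated = p_single         # perfectly correlated names: 10% chance
print(p_diversified, p_correlated)
```

With highly correlated holdings, we would expect to trade on roughly one rebalance date in ten; with well-diversified holdings, closer to one in a thousand.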

4. How noisy are our signals? 

If our score estimates contain an element of noise, more frequent rebalancing can needlessly increase turnover costs.  That our signals include an element of noise is not an unrealistic assumption.  Consider that most value strategies rely upon some sort of reported fundamental metric (e.g. earnings or book value).  Since these figures are reported on a quarterly basis, when we rebalance will determine how up-to-date some figures are relative to others.  In addition, accounting values are imperfect estimates of fundamental value, and accounting treatment may vary across firms.

The danger here goes beyond just normal transaction costs, however.  Increasing the frequency of turnover can compound mistakes.  In some cases, the loss of sitting on the wrong signal for an extended period of time could actually be lower than the whipsaw costs incurred by rebalancing more frequently and compounding mistakes.

Consider the following example.  Let’s assume the market has a constant annualized expected return of 6% per year with a volatility of 14%.  We’ll assume we are running a market timing strategy, where we make long or short calls on the market.  To see how accuracy and rebalance frequency affect our risk profile, we can simulate what such a strategy might look like and examine the expected maximum drawdown.

Calculations by Newfound Research.  All results are hypothetical.

We can see that increased rebalance frequency can increase the number of errors we make.  Even though each individual error may be small at higher rebalance frequencies, the errors can compound quickly.
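For readers who want to experiment, here is a minimal Monte Carlo sketch of the kind of simulation described above.  It is our own illustration under the stated assumptions (6% expected return, 14% volatility, calls correct with a fixed probability); the exact simulation behind the chart may differ.

```python
import numpy as np

def expected_max_drawdown(accuracy, rebalances_per_year, years=10,
                          n_sims=2_000, mu=0.06, sigma=0.14, seed=0):
    """Average maximum drawdown of a simulated long/short timing strategy."""
    rng = np.random.default_rng(seed)
    n = rebalances_per_year * years
    dt = 1.0 / rebalances_per_year
    worst = np.empty(n_sims)
    for i in range(n_sims):
        # Market log-returns over each holding period.
        r = rng.normal((mu - 0.5 * sigma ** 2) * dt, sigma * np.sqrt(dt), n)
        # A correct call earns |r|; an incorrect call earns -|r|.
        correct = rng.random(n) < accuracy
        strat = np.where(correct, np.abs(r), -np.abs(r))
        equity = np.exp(np.cumsum(strat))
        worst[i] = (equity / np.maximum.accumulate(equity) - 1.0).min()
    return worst.mean()

for freq in (1, 4, 12, 52):
    print(freq, round(expected_max_drawdown(0.6, freq), 3))
```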

To quote Aaron Brown (Head of Risk at AQR) from his book Red Blooded Risk: “It takes a lot of compounding to turn a mistake into a disaster.  There will never be any shortage of mistakes […].  So it’s the compounding you have to prevent, not the mistakes.”

In investing, it is prudent to hope for the best and prepare for the worst.  Model accuracy is less of a random coin flip and more of a streaky process.  While long-run accuracy for a model may be 60%, that average may be composed of sub-periods with 100% accuracy and sub-periods with 0%.  Limiting rebalance frequency can help limit the impact of inaccuracy.
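A toy persistence model makes the distinction concrete (the parameters below are assumptions for illustration, not estimates): each call repeats the prior outcome with high probability and is otherwise drawn fresh with 60% accuracy, producing the same long-run accuracy as a coin-flip model but in streaks.

```python
import numpy as np

rng = np.random.default_rng(0)
stay, accuracy, n = 0.9, 0.6, 10_000   # assumed persistence and accuracy
calls = np.empty(n, dtype=bool)
calls[0] = rng.random() < accuracy
for t in range(1, n):
    # Repeat the last outcome with probability `stay`; otherwise redraw.
    calls[t] = calls[t - 1] if rng.random() < stay else rng.random() < accuracy
print(calls.mean())   # still close to 0.6, despite long 0%/100% streaks
```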

All else held equal, noisier signals imply a greater risk of compounding whipsaw as rebalance frequency increases.

Putting it Together

While these many considerations create a multi-dimensional problem for portfolio rebalancing, we can generally say:

  1. More concentrated portfolios will need to be rebalanced more frequently.
  2. Portfolios driven by signals that decay more quickly will need to be rebalanced more frequently.
  3. Portfolios with greater internal diversification will need to be rebalanced less frequently.
  4. If signals are noisier, the portfolio should be rebalanced less frequently to avoid compounding errors.

With tactical strategies, we often hear investors say that they want to rebalance more frequently.  Why wait a month between rebalances when we can wait a week?  Why wait a week when we can rebalance every day?  Yet what often goes unconsidered is that rebalancing frequency is a double-edged sword.  When the calls are correct, the benefits will compound more quickly.  However, when the calls are incorrect, so will the costs.  The frequency of rebalancing must be chosen so as to strike a balance between the portfolio objective and the risks of compounding mistakes.

When Should We Rebalance?

Once we’ve settled upon how frequently we want to rebalance, we next must consider when to rebalance.

For example, if we chose to rebalance a value strategy annually, we still need to choose whether this rebalance occurs at the end of the year, mid-year, or some other time entirely.  It seems reasonable to assume that the question of when should be largely irrelevant, unless we believe there is some sort of edge to exploit (e.g. turn-of-month effects).

However, we will see that the when can have a profound impact upon long-term returns.  Not because there are necessarily times of the year that are inherently better or worse, but because it can have a dramatic effect on the investment opportunity set.

To shed some light on this effect, we will explore two examples.

First, let’s consider a simple value-based sector-rotation strategy.  In this strategy, we will allocate to the nine primary GICS sectors of the S&P 500 based on a valuation score, tilting towards undervalued sectors and away from overvalued sectors.  The strategy will be rebalanced annually.[1]

To demonstrate the impact of timing luck, we create four different strategies, each rebalancing at the end of a different quarter.  For example, the first strategy will rebalance each December 31st, while the second strategy will rebalance every March 31st, et cetera.  Below, we plot the growth of a dollar in each of the strategies.

Data Source: CSI Analytics.  Results are purely hypothetical and gross of all fees.  Returns assume the reinvestment of all dividends.

While the difference may not seem like much, the annualized return spread between the worst performing variant (June) and the best performing variant (December) is 0.54%.  Over the full testing period, this balloons into a 13.5 percentage point difference: enough to get one manager fired and another hired.  In this case, the December manager got lucky while the June manager got unlucky.  There is nothing to say, however, that the results could not have been reversed.  Hence, we call this effect “timing luck.”

This effect can be magnified when the tracking error between the different strategy versions grows.  For our second example, let’s consider a market timing approach popularized by Faber (2006)[2], in which the author applies a simple 10-month moving average system to time market exposure.  Specifically, at the end of each month, the price of a security is compared to the 10-month moving average of its price.  If price is above its average, the security is held long; if price is below its average, the security is sold and cash is held.

While the strategy in the paper is applied at the end of each month, there is no reason the same strategy could not be applied mid-month, or even three-fourths of the way through each month.  To generalize the approach, we will assume there are 21 trading days in each month and use a 210-day moving average, allowing us to create 21 possible strategies.  We will test the approach on the SPDR S&P 500 ETF (“SPY”).
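A minimal Python sketch of this generalization follows.  Here `prices` stands in for a daily close series for SPY (data loading omitted), and `offset` in 0 through 20 selects which of the 21 possible rebalance schedules the strategy follows.

```python
import numpy as np
import pandas as pd

def timing_returns(prices: pd.Series, offset: int,
                   window: int = 210, step: int = 21) -> pd.Series:
    """Daily returns of the 210-day moving-average rule at one offset."""
    ma = prices.rolling(window).mean()
    signal = (prices > ma).astype(float)      # 1 = long, 0 = in cash
    # Refresh the position only every `step` trading days.
    idx = np.arange(offset, len(prices), step)
    position = signal.iloc[idx].reindex(prices.index).ffill().fillna(0.0)
    # A position set at today's close earns tomorrow's return.
    return prices.pct_change() * position.shift(1)

# Compare terminal wealth across all 21 variants:
# wealth = {k: (1 + timing_returns(prices, k)).prod() for k in range(21)}
```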

Data Source: CSI Analytics.  Results are purely hypothetical and gross of all fees.  Returns assume the reinvestment of all dividends.

The above graph shows the growth of $1 in each of the 21 possible strategies.  The spread between the best and worst performing strategies is 250bp per year, resulting in a cumulative return difference of 369 percentage points.  Just as importantly, since the strategy is focused on drawdown control, the best maximum drawdown of the group was -17.96% while the worst was -28.76%.

Forget hired versus fired: this is the difference between staying in business and going out of business.  Nor is this a totally unrealistic example: there are a number of tactical strategies available today that rebalance on a monthly basis and dramatically shift the amount of equity market beta in the portfolio.

To deal with this timing luck, we introduce the concept of “overlapping portfolios” (also sometimes referred to as “tranching”).

The idea behind overlapping portfolios is simple: instead of investing in a single portfolio that rebalances at a discrete frequency, we invest in several identically managed portfolios that rebalance at the same frequency, but at different times.

Consider the value-based sector rotation example above.  Let’s assume that each of the four return streams belongs to a different manager.  A fifth manager comes along who decides to run a “fund-of-funds,” allocating to each of these managers equally.  Investing in this fund-of-funds creates the overlapping portfolio approach.

The table below demonstrates the idea.  Each portfolio is formed through an identical process and held for four periods, but each portfolio’s formation is offset by one period from the prior’s.

Period    Portfolio A    Portfolio B    Portfolio C    Portfolio D
1         Rebalance      –              –              –
2         Hold           Rebalance      –              –
3         Hold           Hold           Rebalance      –
4         Hold           Hold           Hold           Rebalance
5         Rebalance      Hold           Hold           Hold
6         Hold           Rebalance      Hold           Hold
By spreading our capital out across the four portfolios, we diversify our exposure to the results of timing luck.  Positive returns in one portfolio due to good luck will (eventually) be offset by negative returns due to bad luck in another.

In fact, under some simplifying assumptions, we can prove this is actually the optimal portfolio design choice to minimize timing luck.  We provide this proof in the Appendix.

Fortunately, the actual implementation of the overlapping portfolio technique can be much simpler than simulating the management of a number of underlying portfolios.  Instead, we can simply average our target weights over time.

Consider our four-manager example.  To implement this, at the end of each quarter we would:

  • Run the portfolio process, identifying the most up-to-date target portfolio using current signals.
  • Set our current portfolio to be the average of the portfolios calculated over the prior four quarters.

By setting our weights to the average of weights generated over the prior rolling four-quarter period, each quarterly generated portfolio is held for a year and given an equal amount of capital.  Hence, we get the overlapping portfolio effect with far less work![3]
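In code, the whole technique reduces to a rolling average of target weights.  A minimal sketch, assuming a hypothetical DataFrame `quarterly_targets` whose rows are rebalance dates and whose columns are the asset weights produced by the strategy on each date:

```python
import pandas as pd

def overlapping_weights(quarterly_targets: pd.DataFrame,
                        n_tranches: int = 4) -> pd.DataFrame:
    """Average the prior n_tranches target portfolios, so each quarterly
    portfolio is held for a year with 1/n_tranches of capital."""
    return quarterly_targets.rolling(n_tranches).mean()

# Hypothetical targets for a two-asset strategy:
targets = pd.DataFrame({"Asset 1": [1.0, 1.0, 0.0, 1.0],
                        "Asset 2": [0.0, 0.0, 1.0, 0.0]})
print(overlapping_weights(targets).iloc[-1])   # Asset 1: 0.75, Asset 2: 0.25
```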

The result of this effort is a portfolio that ultimately looks like the average of all the underlying portfolios.  The graph below shows this approach applied to the S&P 500 timing system discussed before.  We can see that timing luck has been eliminated from the equation.

Data Source: CSI Analytics.  Results are purely hypothetical and gross of all fees.  Returns assume the reinvestment of all dividends.

We want to take a moment to address one approach to this problem that we have seen in the past and believe is incorrect: signal smoothing.  If overlapping portfolios can be thought of as running multiple portfolios and averaging the output, signal smoothing is all about averaging the input and running a single portfolio.

In some cases, there is no difference between signal smoothing and overlapping portfolios.  Consider our prior S&P 500 market timing example.  An overlapping portfolio approach would average the target portfolio weights of the prior 21 days.  A signal smoothing approach would average the signals over the prior 21 days.  However, since our signals are our target weights (“in” or “out”), we end up in the same place.

This is not always the case.  In fact, it is rarely the case.  For it to be true, the following equation must hold:

\[ \frac{1}{n}\sum_{t=1}^{n} f(x_t) = f\left(\frac{1}{n}\sum_{t=1}^{n} x_t\right) \]

Where $x_t$ is the vector of inputs at time $t$ and $f$ is the transformation function that takes inputs and returns portfolio weights.  This is a textbook case of a mathematical property known as Jensen’s inequality, which relates the expected value of a function (the left-hand side of our equation) to the function of expected values (the right-hand side).

The figure below shows Jensen’s inequality in action.  The function applied to the average of x and y will fall on the bold black line, while the average of the function applied to each of x and y will fall on the lighter line.  We can see that which side of the equation is larger will depend on the nature of the function.  Only in the case that the function is a linear transformation will the two cases be equivalent.

Source: www.probabilitycourse.com

Let’s consider a simple two-asset example to highlight how the difference between signal smoothing and overlapping portfolios can play out.  In this example, we will assume each asset is given a score based on some quantitative model and the asset with the highest score is held by the portfolio.  We’ll assume the portfolio is rebalanced annually.

With an overlapping portfolio approach, we might re-evaluate scores quarterly and create a portfolio that averages weights from the prior four quarters.  A signal smoothing approach, on the other hand, would average signals over the prior four quarters and create a portfolio based upon those signals.

You may have already caught the problem.  In the overlapping portfolio case, portfolio weights can range between 0 and 100% for both assets.  In the signal smoothing case, weights must be either 0% or 100%.  Consider the hypothetical scores and the resulting weights in the tables below.

Hypothetical Signals

                 Asset #1    Asset #2
Q1               1.00        0.00
Q2               1.00        0.50
Q3               0.50        1.00
Q4               0.75        0.50
Signal Average   0.8125      0.50

Resulting Weights

                         Asset #1    Asset #2
Q1                       100%        0%
Q2                       100%        0%
Q3                       0%          100%
Q4                       100%        0%
Overlapping Portfolios   75%         25%
Signal Smoothing         100%        0%

While the overlapping portfolio approach ended up with 75% allocated to Asset #1 and 25% to Asset #2, the signal smoothing approach ended up with 100% in Asset #1 and 0% in Asset #2.  These are very different results.
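The difference is easy to verify in a few lines of Python.  The scores below are the hypothetical signals from the table above, and the weight function simply allocates all capital to the higher-scoring asset.

```python
scores_1 = [1.0, 1.0, 0.5, 0.75]   # Asset #1, Q1-Q4
scores_2 = [0.0, 0.5, 1.0, 0.5]    # Asset #2, Q1-Q4

def weights(s1, s2):
    """All capital to the asset with the higher score."""
    return (1.0, 0.0) if s1 > s2 else (0.0, 1.0)

# Overlapping portfolios: average of the weights (average of the outputs).
quarterly = [weights(a, b) for a, b in zip(scores_1, scores_2)]
overlap = tuple(sum(w) / len(quarterly) for w in zip(*quarterly))

# Signal smoothing: weights of the averages (average of the inputs).
smoothed = weights(sum(scores_1) / 4, sum(scores_2) / 4)

print(overlap)    # (0.75, 0.25)
print(smoothed)   # (1.0, 0.0)
```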

Conclusion

The little details in investing often go overlooked.  Yet the small things can have a big impact if not accounted for correctly.  While we often expect the big things to drive results, small things done wrong can have a dramatic compounding effect that can lead to poor performance.


Appendix

For the optimality proof of overlapping portfolios and the reduction of timing luck, please see the appendix in this PDF.


[1] The exact details of how the strategy works are not important for highlighting the impact of timing luck, and hence we have omitted them to avoid introducing unnecessary details.

[2] Faber, Meb, “A Quantitative Approach to Tactical Asset Allocation,” The Journal of Wealth Management, Spring 2007. Available at SSRN: https://ssrn.com/abstract=962461

[3] It is worth noting that this approach does not account for drift that may occur in a portfolio over time.  For higher volatility assets and longer holding periods, accounting for this drift in weights can be important.

Corey is co-founder and Chief Investment Officer of Newfound Research, a quantitative asset manager offering a suite of separately managed accounts and mutual funds. At Newfound, Corey is responsible for portfolio management, investment research, strategy development, and communication of the firm's views to clients.

Prior to offering asset management services, Newfound licensed research from the quantitative investment models developed by Corey. At peak, this research helped steer the tactical allocation decisions for upwards of $10bn.

Corey is a frequent speaker on industry panels and contributes to ETF.com, ETF Trends, and Forbes.com’s Great Speculations blog. He was named a 2014 ETF All Star by ETF.com.

Corey holds a Master of Science in Computational Finance from Carnegie Mellon University and a Bachelor of Science in Computer Science, cum laude, from Cornell University.

 

Copyright © Newfound Research
