Using Random Portfolios To Test Asset Allocation Strategies
by James Picerno, Capital Spectator
Last month I tested random rebalancing strategies based on dates and found that searching for optimal points through time to reset asset allocation may not be terribly productive after all. Let's continue to probe this line of analysis by reviewing the results of randomly changing asset weights for testing rebalancing strategies.
I'll use the same 11-fund portfolio that's globally diversified across key asset classes with a starting date of Dec. 31, 2003. The benchmark strategy is simply rebalancing the portfolio at the end of each year back to the initial weights, as defined in the table below.
Let's assume that the benchmark strategy is someone's best effort at portfolio design. Our fictional investor (let's call him Ronald) has thought long and hard about asset allocation and decided that the portfolio mix above is the way to go. Ronald has back-tested the strategy with actual data for the funds listed above and thinks that the results are encouraging. A $100 investment at the start date (Dec. 31, 2003) increased to just over $200 as of Oct. 2, 2015. Ronald concludes that the portfolio's roughly 100% cumulative gain over the sample period is pretty good. He pats himself on the back and goes out for a celebratory dinner, confident that he's built a winning portfolio. In other words, Ronald's convinced that he possesses a fair amount of skill in the art/science of building and managing portfolios through time.
While Ronald's away at the Overconfidence Café, let's analyze his portfolio design by comparing it with random portfolios. We'll use the same funds in the table above but randomly vary the weights for each of the portfolio's assets. To ensure a fair test, we'll keep the random weights within the same range as in the table above: a minimum of 5% up to a maximum of 25%. Using R (you can find the code here), we'll create 1,000 portfolios, each with a randomly selected mix of weights for the 11 funds. To match Ronald's portfolio, the strategies 1) are rebalanced back to the randomly selected target weights at the end of each year; 2) always hold some position in each fund; and 3) use no shorting or leverage. In sum, the random portfolios are identical to Ronald's strategy with one exception: the asset weights are allowed to wander within a 5%-to-25% range.
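For readers who want to replicate the setup, here is a minimal sketch of one way to draw the constrained random weights and grow $100 with annual rebalancing in R. It leans on the PerformanceAnalytics package and assumes an xts object of periodic fund returns called fund_returns; the rejection-sampling step and the function names are illustrative choices, not necessarily how the code linked above does it.

```r
library(PerformanceAnalytics)  # for Return.portfolio()

# Draw one set of 11 weights, each between 5% and 25%, summing to 100%.
# Simple rejection sampling -- an illustrative method, not the article's exact one.
random_weights <- function(n_assets = 11, min_w = 0.05, max_w = 0.25) {
  repeat {
    w <- runif(n_assets, min_w, max_w)
    w <- w / sum(w)                              # rescale to sum to 1
    if (all(w >= min_w & w <= max_w)) return(w)  # keep only mixes still inside the bounds
  }
}

# Grow $100 in a portfolio rebalanced to target weights `w` at each year-end.
# `fund_returns` is assumed to be an xts matrix of periodic returns for the 11 funds.
sim_portfolio <- function(fund_returns, w) {
  port_ret <- Return.portfolio(fund_returns, weights = w, rebalance_on = "years")
  100 * cumprod(1 + port_ret)                    # wealth index starting at $100
}

# 1,000 strategies, each with its own randomly drawn target weights
set.seed(1)
random_paths <- replicate(1000, as.numeric(sim_portfolio(fund_returns, random_weights())))
```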
The chart below compares Ronald's strategy (red line) with the 1,000 portfolios that randomly vary the asset weights (gray area). As you can see, Ronald's portfolio results are more or less average relative to the randomly generated results. The implication: Ronald's skill may be a figment of his imagination, courtesy of looking at one return sample, namely, the portfolio he designed and back-tested.
For another perspective, let's compare the ending value (204) of Ronald's benchmark portfolio for the sample period with the distribution of ending values for the 1,000 randomly generated strategies (black line). I've also added the median outcome for the random portfolios (blue line).
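Given the simulated paths from the earlier sketch, this comparison is mostly bookkeeping; the 204 figure is Ronald's result from the text above, and the plotting choices below are just one way to draw the distribution and its median.

```r
# Ending value of each random-weight strategy (last row of the wealth paths)
ending_values <- random_paths[nrow(random_paths), ]

plot(density(ending_values),
     main = "Ending values of 1,000 random-weight portfolios",
     xlab = "Value of $100 invested on Dec. 31, 2003")
abline(v = median(ending_values), col = "blue")  # median random outcome
abline(v = 204, col = "red")                     # Ronald's benchmark strategy
```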
The good news for Ronald's ego is that his portfolio earned a slightly above-median return. Is that evidence of skill? Maybe, but we'd have to run additional tests for a deeper level of confidence before crowning Ronald as an asset allocation wizard.
For instance, we could review the drawdown history of Ronald's portfolio and compare it with the randomly generated strategies. Analyzing the portfolio's Sharpe ratio, Sortino ratio and other risk metrics in context with the random results is a productive line of inquiry as well. The question is whether Ronald's portfolio takes on substantially more risk than necessary to earn what appears to be middling performance.
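PerformanceAnalytics also covers those risk metrics, so extending the comparison takes only a few lines. The sketch below assumes the periodic return series have been kept around under the hypothetical names ronald_returns (an xts series for the benchmark mix) and random_returns (a list of the 1,000 random-strategy return series); neither object comes from the article's code.

```r
# Summarize a few risk metrics for one portfolio return series
risk_summary <- function(r) {
  c(max_drawdown = as.numeric(maxDrawdown(r)),
    sharpe       = as.numeric(SharpeRatio.annualized(r, Rf = 0)),
    sortino      = as.numeric(SortinoRatio(r, MAR = 0)))
}

ronald_stats <- risk_summary(ronald_returns)
random_stats <- sapply(random_returns, risk_summary)  # 3 x 1000 matrix of metrics

# Share of random portfolios with a deeper maximum drawdown than Ronald's
mean(random_stats["max_drawdown", ] > ronald_stats["max_drawdown"])
```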
The larger point is that by creating 1,000 random portfolios we have a robust data set for analyzing risk and skill. The toy example above can easily be expanded to generate a more nuanced set of random portfolios, offering a more realistic array of investment possibilities. What if we expanded the range of weights to, say, zero up to 50% for any one fund? How would the results change if we added a momentum filter for choosing weights for each fund? We could also impose a volatility limit. The possibilities are endless. Ideally, the testing would be customized to match the investor's risk tolerance, goals, and other factors that are unique to a given strategy under the microscope.
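To give a flavor of how such extensions might look, the fragment below widens the weight bounds and screens candidate mixes with a volatility cap. The 12% cap and the helper names are arbitrary illustrations layered on the earlier sketch, not anything specified in the article.

```r
# Wider bounds: any single fund can range from 0% up to 50%
wide_weights <- function() random_weights(n_assets = 11, min_w = 0, max_w = 0.50)

# Keep only weight mixes whose back-tested annualized volatility stays under a cap
vol_ok <- function(fund_returns, w, vol_cap = 0.12) {
  port_ret <- Return.portfolio(fund_returns, weights = w, rebalance_on = "years")
  as.numeric(StdDev.annualized(port_ret)) <= vol_cap
}
```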
The key lesson is that using a set of random portfolios to evaluate a given strategy is a powerful tool. By comparison, the standard approach of comparing a portfolio to one idealized benchmark or a peer group in the same strategy neighborhood is inferior as a methodology for analysis. Why? Because cherry-picking a handful of funds for a benchmark isn't representative of what could happen. If we're testing a portfolio strategy, it's essential to review the full range of possible outcomes, even if those outcomes never occurred in the real world.
Granted, random portfolios aren't a crystal ball; the future is still uncertain. But if you're analyzing investment strategies without the benefit of random portfolios, you're overlooking a lot of valuable insight that may spare you grief later on. Maybe someone should tell Ronald.


