This is the longest bull market in US equity history dating back to 1945. The six-year run-up in the S&P 500 since March 2009 has been truly remarkable, exceeded in price-performance terms only by the 1929 and 1999 bull markets. It is now also more than 800 days since we experienced even a 10% correction. While such strength and resilience is typically applauded, such exceptional markets rarely end well. Andrew Lapthorne and the SocGen Quant Strategy team try to forecast its demise...

So we're going to try to predict the next US equity bear market, a foolish exercise that we admit is more likely to lay us open to ridicule (just ask the ECRI!). We'll most probably be wrong, but the outcome is not the whole point of the exercise. Rather, we hope to build a framework into which we can plug a whole range of macro and fundamental variables to work out their bear-market relevance. We hope ultimately to use such a model to rank the importance of the ongoing data series and eventually to plug the resulting probability into some of our asset allocation models.

Based on the six probit models developed in this report, we calculate the average probability of a bear market in Q2 2015 at about 18.4% (median = 17.5%). The key data series we should pay special attention to are: the term-structure spread, senior loan officer surveys, the S&P 500 profit margin, and various style returns such as the return based on Piotroski scores.

Forecasting bear markets

What is the chance of the S&P 500 entering a bear market in 2015? Forecasting, as we all know, is something to be avoided; the future is always unknown. But providing a forecast is not the same as assessing potential risks, which is something we should be doing on an ongoing basis, and knowing where we stand today is a hard enough and worthwhile task in itself.
We have already spent a great deal of time trying to assign a probability to the market going up or down in the near future using price data in the form of returns and volatility, but here we are attempting to provide a directional probability of a more extreme market move based on some commonly used fundamental and macro data.

Each day we are bombarded by new information. We get hundreds of new economic data points every week and endless amounts of corporate news flow, which may require some kind of narrative to inform our view. For the most part, many of us employ a heuristic approach to the art of prediction: we plot charts, we run regressions, and we apply rules of thumb. But here we are going to attempt something a little more systematic and scientific.

The idea behind this note is not just to predict bear markets in the US, though we will certainly try, but to create a framework into which we can plug some of these endless streams of data in order to demonstrate their relevance in helping us to predict the coming of a bear market. Can we, we ask, develop a more formulaic interpretation of the data?

Now, we are under no illusions about the difficulty of this task. Indeed, we have ourselves seen and chuckled at numerous economic models over the years that have predicted either something that never happened or the blindingly obvious. To see the dangers of misforecasting something nasty like a recession, you only need witness the lambasting the Economic Cycle Research Institute (ECRI) took when it predicted a 2012 recession that failed to materialise. We sympathise, but we also emphasise that our purpose is not simply to assess the likely probability of a bear market, but also to fit the data we pore over every day into its rightful slot in a predictive framework.
Dating the bear/bull market cycle

Assuming the current bull market in the S&P 500 started in March 2009, this bull market is now six years old, making it the longest bull-market period in post-war history. Over this period the index has risen by over 200%, making it the third strongest six-year run since 1900. Famously, the two others (1929 and 1999) did not end well. This has also been one of the longest periods without even a 10% correction; the current run has now extended beyond 800 trading days, longer even than in 1987, but still some way off the 1,200 days of 2003-2007 – and the amazing 1,800 days that led up to the 2000 crash. The point here is that such strong periods of performance rarely end well, as the lack of pullbacks tends to lead to excess risk-taking and leverage. Avoiding the end of such frothy periods in equities then becomes paramount.

A model for predicting bear markets

Before we can analyze bear market cycles in detail, we need to measure and date them in a systematic way. To do this we can use either of two common approaches. The first is based on a dating algorithm developed by Bry and Boschan (1971), and the second uses a statistical modelling technique known as 'regime switching'. We provide an overview of each of these in the appendix at the back of this publication. For the purposes of this report, we have adopted the first approach, which gives us our time series of bull and bear market cycles, shown below. These results also helped to produce the first chart above.

We are going to create six probit models. A probit model is a statistical technique that generates the probability of a bear market given a set of observable indicators (a.k.a. covariates). Essentially, it looks for variables that have historically tended to correlate with the occurrence of a bear market. This type of model serves two purposes. First, it provides a means of calculating the probability of a bear market.
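The probit mechanics described above can be sketched in a few lines. This is a minimal illustration on simulated data, assuming two hypothetical covariates (a term-structure spread and a profit margin) as stand-ins for the report's actual series; the coefficients, sample and "today's readings" are all invented for the example, not SocGen's:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 500
# Hypothetical covariates (illustrative stand-ins for the report's series)
term_spread = rng.normal(1.5, 1.0, n)    # yield-curve slope, %
profit_margin = rng.normal(8.0, 1.5, n)  # S&P 500 profit margin, %
X = np.column_stack([np.ones(n), term_spread, profit_margin])

# Simulated regime label: flat curves and thin margins raise bear-market odds
latent = 0.3 - 0.8 * term_spread - 0.3 * (profit_margin - 8.0)
bear = (latent + rng.normal(0.0, 1.0, n) > 0).astype(int)

def neg_loglik(beta):
    # Probit likelihood: P(bear) = Phi(X @ beta), Phi = standard normal CDF
    p = np.clip(norm.cdf(X @ beta), 1e-12, 1 - 1e-12)
    return -np.sum(bear * np.log(p) + (1 - bear) * np.log(1 - p))

beta_hat = minimize(neg_loglik, np.zeros(3)).x

# Probability of a bear market at today's (made-up) readings
today = np.array([1.0, 0.5, 9.0])  # constant, term spread, profit margin
prob = norm.cdf(today @ beta_hat)
print(f"estimated bear-market probability: {prob:.1%}")
```

The fitted coefficient on the term spread comes out negative, matching the intuition that a flattening yield curve raises the probability of a bear market; real models would of course be estimated on the dated bear-market series rather than simulated labels.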
Second, it focuses attention on those indicators that can predict the coming of a bear market and disregards those that cannot.

Six models – three macro, three equity focused

Three of our six models are based on macro indicators, while the other three are based on fundamental equity data. Two of the macro models have a forecasting horizon of 12 months, while the third has a shorter horizon of just 3 months. The three fundamental models provide a more contemporaneous probability of a bear market and are largely to be used as confirmation signals, i.e. does the data we see today indicate that we are actually in a bear market?

* * *

Conclusion

Adding all this together, our models put the average probability of a bear market in Q2 2015 at about 18.4% (median = 17.5%). Although the suggested probability is not very high, we still need to remain cautious. Market valuations are stretched, and the run-up in equity markets since Q2 2009 has been one of the most rapid. US macro surprises have also been very negative so far this year, and of course the next bear market may be triggered by a shock outside the scope of the variables used in building the models.

This leads to another key message of this report. Besides generating a bear-market probability forecast, the models have helped us to identify a set of macro and fundamental variables that we should monitor with a view to anticipating the coming of a bear market. Some of the key signals worth paying special attention to are:

- Flattening of the yield curve
- Increasing oil prices
- Decreasing industrial production
- Widening of junk spreads
- Strengthening of the U.S. dollar
- Pickup in inflation
- Shrinking S&P 500 profit margin
- Tightening of loan standards to small firms
- Decrease in loan demand from large/middle-market firms
- Increasing S&P 500 leverage
- Increase in various style returns

By systematically monitoring these signals, we can reduce the chance that the next bear market catches us by surprise, and possibly incorporate these probabilities into a systematic market forecasting strategy. These signals should not surprise anybody, since all of them are popular Wall Street measures; our contribution is to build a framework into which we can assign weights and importance to all this noise. The work is ongoing...

Source: SocGen
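The headline number quoted in the conclusion is just the mean and median across the six model outputs, which can be sketched as follows. The six probabilities below are illustrative placeholders, not the report's actual model outputs:

```python
import statistics

# Hypothetical Q2 2015 bear-market probabilities from six models (invented)
model_probs = [0.12, 0.15, 0.17, 0.18, 0.22, 0.26]

mean_prob = statistics.mean(model_probs)
median_prob = statistics.median(model_probs)
print(f"mean = {mean_prob:.1%}, median = {median_prob:.1%}")
```

Reporting both statistics is a cheap robustness check: a large gap between mean and median would flag that one or two models are dominating the headline probability.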