Low interest rate projections

John Hussman's Strategic Growth Fund is getting absolutely killed in the current bull market, but he remains one of the clearest economic thinkers I follow. His research is top drawer. Here are some interesting insights:

“it’s tempting to believe that low interest rates “justify” elevated equity valuations. But as one can show with any straightforward discounting method, even another 5 years of zero short-term interest rates (compared with a more typical 4% short-term yield) would only justify valuations about 20% above historical norms – essentially 5 years x 4%. Instead, current U.S. equity valuations are about 112% above historical norms on reliable measures. To justify current equity market valuations, interest rates would need to be held at zero for the next quarter century. Understand that while suppressing short-term interest rates may encourage yield-seeking speculation that results in rich stock valuations, those rich valuations are still followed by dismal subsequent returns. Emphatically, low interest rates do not raise the future return on stocks – quite the contrary.”
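The discounting arithmetic in that passage can be checked in a couple of lines. A minimal sketch in R, using only the figures quoted above (the ~4% typical short-term yield and the 112% valuation premium):

```r
typical_yield <- 0.04                  # typical short-term yield, per the quote

# each year of zero rates "justifies" roughly 4% of extra valuation
premium_5yr <- 5 * typical_yield       # 0.20, i.e. about 20% above norms

# years of zero rates needed to justify a 112% valuation premium
years_needed <- 1.12 / typical_yield   # 28 years -- roughly a quarter century

premium_5yr
years_needed
```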

Here are some further insights Hussman brings from Daniel Kahneman.

So why do policy makers so wildly overestimate the real economic effects of monetary policy (while vastly underestimating its effects in distorting financial markets)? In his book, Thinking, Fast and Slow, psychologist and Nobel laureate Daniel Kahneman describes the biases and rules-of-thumb that people often use to estimate the impact of one piece of information in explaining another. When presented with some piece of evidence, some judgements rely on precise calculations and historical estimates. Others, Kahneman writes, “arise from the operation of heuristics that often substitute an easy question for the harder one that was asked… As a result, intuitive predictions are almost completely insensitive to the actual predictive quality of the evidence.”

Kahneman describes the way that these intuitions give rise to inaccurate predictions. First, some piece of evidence – the stance of monetary policy – is provided. The associative memory quickly constructs a story that links the evidence to whatever is to be predicted – the most likely story being that easy monetary policy will boost the economy, while tight monetary policy will slow it. The next step, says Kahneman, is “intensity matching.” The flimsy evidence is ranked in intensity, and that same intensity is used to produce the forecast for the variable to be predicted. So regardless of whether monetary policy is actually correlated with the economy or not, we naturally assume that extreme monetary policy should have similarly extreme effects on the economy, and in the expected direction. As Kahneman writes, “Intensity matching yields predictions that are as extreme as the evidence on which they are based, leading people to give the same answer to two quite different questions.” In this case, one question is “how easy is monetary policy?”, while the other is “where is the economy headed?”

The problem here is that the quality of the evidence – the strength of the correlation – is not being considered. Kahneman offers a way to improve on these intuitive predictions. In the present context, that method would go something like this: 1) Start with an estimate of economic growth in the absence of any monetary intervention; 2) Estimate the rate of economic growth that best seems to match the intensity of monetary policy; 3) Estimate the actual correlation between monetary policy and economic growth (hint: about 0.15); 4) If the correlation is 0.15, move 15% of the distance from the baseline GDP growth to the GDP growth matching monetary policy.
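Kahneman's four-step correction reduces to a single linear blend. A minimal sketch with illustrative numbers (the 3% baseline and 5% intensity-matched forecast are made up for illustration; the 0.15 correlation is from the text):

```r
# Kahneman's correction: move from the baseline toward the intensity-matched
# forecast by a fraction equal to the correlation between evidence and outcome
corrected_forecast <- function(baseline, matched, correlation) {
  baseline + correlation * (matched - baseline)
}

# e.g. baseline growth 3%, intensity matching says 5%, correlation 0.15:
corrected_forecast(0.03, 0.05, 0.15)   # 0.033 -- only 15% of the way to 5%
```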

[Geek’s note: You can show statistically that if Zy and Zx are standard normal variables (where, for example, Zy is just GDP growth minus its mean, divided by the standard deviation of GDP growth), Kahneman’s formula gives the best linear estimate of Y given X, since the beta in a regression of Zy on Zx is just the correlation between the two. To illustrate, the mean of quarterly real GDP growth is 3.2% at an annual rate, with a standard deviation of 3.9%. The historical mean of the federal funds rate is about 4.9%, with a standard deviation of 3.9%. So holding the fed funds rate at zero is a Z statistic of -1.25. With a correlation of -0.15 between fed funds and subsequent GDP growth, at best, this translates to a Z statistic for GDP of 0.19, and multiplying by the standard deviation of GDP suggests that holding fed funds at zero would be expected to provide a bump to real GDP growth no greater than about 0.7% annually. That figure strikes us as about right, though in practice, GDP growth in recent years has fallen short of even the baseline that one would have projected in the absence of monetary intervention].

How much impact should we expect a 0.25% increase in the fed funds rate to have on economic growth? 0.25% is only an increase of 0.06 standard deviations in the fed funds rate, which would reasonably be associated with -0.15 x 0.06 = -0.009 standard deviations in GDP growth. So based on the historical relationship between the fed funds rate and subsequent GDP growth, the impact of a quarter-point hike in the fed funds rate would be expected to be a reduction in GDP growth of just four one-hundredths of one percent below what would otherwise be expected in the absence of that change.
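Both of these figures fall out of the same z-score arithmetic from the Geek's note. A sketch using the means and standard deviations given there:

```r
gdp_sd  <- 0.039   # std. dev. of quarterly real GDP growth (annual rate)
ff_mean <- 0.049   # historical mean of the federal funds rate
ff_sd   <- 0.039   # std. dev. of the federal funds rate
rho     <- -0.15   # correlation of fed funds with subsequent GDP growth

# holding fed funds at zero: about a 0.7% annual bump to GDP growth
z_zero <- (0 - ff_mean) / ff_sd   # roughly -1.26 standard deviations
bump   <- rho * z_zero * gdp_sd   # 0.00735

# a quarter-point hike: about -0.04% on GDP growth
z_hike <- 0.0025 / ff_sd          # roughly 0.06 standard deviations
drag   <- rho * z_hike * gdp_sd   # -0.000375
```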

Monetary base and likely interest rate

There is almost no need for explanation: nearly a century of history shows that, in order to mop up the expanded monetary base, interest rates will need to go up. If not, the consequences of a dramatically expanded monetary base will need to be dealt with down the line. You would have to categorize the current base as a “fat tail” or “disequilibrium”; we cannot remain in this zone without consequences.

Ending with a quote from Hussman’s weekly letter:

The effect of quantitative easing is to extend and defer the consequences of reckless speculation, provided that low-risk liquidity is viewed as an inferior asset. Quantitative easing doesn’t eliminate the consequences of speculation and overvaluation, and in our judgment only promises to make the fallout more severe. But we should generally expect the worst consequences to emerge at those points when speculative, overvalued, overbought, overbullish conditions are joined by increased risk-aversion, as evidenced by widening credit spreads or subtle deterioration in the uniformity of market internals. Those shifts are clearly evident here, and our immediate concerns could hardly be more acute.

Market Extremes

I wish to highlight two important points.

The first is to stand behind one of the great market analysts of our times, Dr John Hussman. Yes, it’s true his reputation has been smashed by his poor performance over the last few years. On this I am not able to defend him as much as I would like, as I think the processes he adds to his portfolio construction on top of his macro analysis leave something to be desired. I actually don’t even want to go there; rather, I want to stand by his rigorous market-climate and valuation approach.

These are the key points, and like John I am prepared to fall on my sword and face the ridicule.

“Meanwhile, the S&P 500 is more than double its historical valuation norms on reliable measures (with about 90% correlation with actual subsequent 10-year market returns), sentiment is lopsided, and we observe dispersion across market internals, along with widening credit spreads. These and similar considerations present a coherent pattern that has been informative in market cycles across a century of history – including the period since 2009. None of those considerations inform us that the U.S. stock market currently presents a desirable opportunity to accept risk.”

Where he refers to a 90% correlation with actual subsequent returns, he means his valuation model’s forecasts. If you go through the math you will see how the model works, but you are safe in the assumption that, over the long term, this model is pretty darn accurate. See below for an example of how it looks.

I end my first point with Hussman’s words highlighting how overvalued we currently are, “The equity market is now more overvalued than at any point in history outside of the 2000 peak, and on the measures that we find best correlated with actual subsequent total returns, is 115% above reliable historical norms and only 15% below the 2000 extreme. Unless QE will persist forever, even 3-4 more years of zero short-term interest rates don’t “justify” more than a 12-16% elevation above historical norms.”

My second point is simply to highlight an extreme in market momentum we haven’t seen in its history. I am not sure what to draw from it right now, but I wanted to document it as I believe it will be significant when we look back over the fullness of time. For 29 consecutive days the S&P 500 closed above its 5-day moving average; the previous record of 27 days was set in 1928.
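The streak itself is easy to reproduce. A self-contained sketch on toy data (in practice you would pull SPY closes with quantmod’s getSymbols and use Cl(SPY)); rle() finds the longest run of closes above the trailing 5-day average:

```r
# toy upward-drifting price series standing in for S&P 500 closes
set.seed(1)
cl <- cumsum(c(100, rnorm(499, mean = 0.05)))

# trailing 5-day simple moving average (base R; TTR::SMA is equivalent)
sma5 <- stats::filter(cl, rep(1 / 5, 5), sides = 1)

# longest consecutive run of closes above the 5-day moving average
above   <- !is.na(sma5) & cl > sma5
runs    <- rle(as.vector(above))
longest <- max(runs$lengths[runs$values])
longest
```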

Drop and Gain Clustering

I decided to build on John Hussman’s clustering-of-large-moves research. As demonstrated in a previous post, large drops (-3%) seemed to cluster. This time I superimposed the gains to see if there was a similar pattern in the gain behaviour and, as you can clearly see, there is. My takeaway is that volatility begets volatility, but where is the start and the finish? (Subject for another time.)

---
title: "Drop and Gain Clustering"
author: "Michael Berman"
date: "Thursday, October 30, 2014"
output: html_document
---
 
require(quantmod)
require(PerformanceAnalytics)
 
#get the S&P 500 data (note: SPY history only begins in 1993)
getSymbols('SPY', from='1990-01-01')
 
#let's look at it from 1990 to 2015
spy <- SPY['1990/2015']
 
#our baseline, unfiltered results: daily log returns of the close
ret <- ROC(Cl(spy)) 
 
#our comparison, filtered results: flag the day after each 2% drop or gain,
#then sum the flags over the 100 prior days (align = "right" makes each
#window look backwards rather than being centred on the observation)
filter.d <- Lag(ifelse(ret < -0.02, 1, 0))
filter.d[is.na(filter.d)] <- 0
drops <- rollapply(filter.d == 1, 100, sum, align = "right")
filter.g <- Lag(ifelse(ret > 0.02, 1, 0))
filter.g[is.na(filter.g)] <- 0
gain <- rollapply(filter.g == 1, 100, sum, align = "right")
 
#two versions of plots - A
plot(gain, main = "Drop and Gain Clustering", sub = "sum of 2% movements over 100 prior days")
par(new=T)
plot(drops, main = "Drop and Gain Clustering", labels = FALSE, col = "red")

# plots - B
plot(drops, main = "Drop and Gain Clustering", sub = "sum of 2% movements over 100 prior days", ylab ="drops")
par(new=T)
plot(gain, main = "Drop and Gain Clustering", labels = FALSE, col = "red")
axis(side =4)
mtext("gains", side = 4)


I am actually not sure which of the two is the better way to look at it.