DOW SIGNALS

There are a number of sites that market their signals as great indicators for outperforming the market, so let me highlight how careful one needs to be when looking at backtests.

In the strategy I am about to show you, we create a ratio between the S&P 500 and the Dow and trigger a signal to buy the market when the current ratio is above its rolling mean. I will include the code at the bottom for those who want to understand the details, but the table and chart will illustrate my point. A further point that I must emphasize, and will continue to highlight, is that I use risk-adjusted returns as my proxy for outperformance.

So let's begin:

[Chart and table: SPX/DJI Timer vs Buy & Hold performance summary, 24-year backtest]

Yes sir, you beauty: we have here a 24-year backtest where our model handily outperforms a buy and hold, with a Sharpe Ratio of 0.32 vs 0.22. So should we bet the farm on this baby? Not so fast, I say. Let's look at this strategy over some more data; in the table below we look at performance from 1950, a lengthy 64 years.

[Table: SPX/DJI Timer vs Buy & Hold performance from 1950]

What we see here is underperformance, so it is very important when considering a model to ensure that the starting date isn't cherry-picked (see the robustness sketch after the code below). In this illustration there are very few parameters, and we only tweaked the date. Many people pushing automated models love to "curve-fit" parameters to satisfy the backtest with no basis in reality.

Here is the R code for those who are interested:

require(quantmod)
require(PerformanceAnalytics)
 
getSymbols("^GSPC", from= "1900-01-01")
sp500.weekly <- GSPC[endpoints(GSPC, "weeks"),6]
sp500rets<- ROC(sp500.weekly, type = "discrete", n = 1)
 
DJ<- read.csv('http://www.quandl.com/api/v1/datasets/BCB/UDJIAD1.csv?&auth_token=kvYEqCqKCTyL4anWz5Zv&trim_start=1896-07-14&trim_end=2015-05-12&sort_order=desc', colClasses=c('Date'='Date'))
date<- DJ$Date
values<- DJ[,2]
DJ_xts<- as.xts(values, order.by = date)  # dates already parsed via colClasses above
dj.weekly <- DJ_xts[endpoints(DJ_xts, "weeks"),1]
djrets<- ROC(dj.weekly, type = "discrete", n = 1)
 
# align the two weekly series and build the SPX/DJI ratio
data<- merge(sp500.weekly,dj.weekly)
data.sub = data['1950-02-05::']   # the start date: the only parameter we tweak
ratio<- data.sub[,1]/data.sub[,2]
ave.ratio<- rollapply(ratio,20,mean)   # 20-week rolling mean of the ratio
lead.lag<- ifelse(ratio >= ave.ratio, "Lead", "Lag")
 
# filtered results investing in S&P500 with the signal
ma_sig <- Lag(ifelse(lead.lag=="Lead", 1, 0))
ma_ret <- sp500rets * ma_sig
 
dowtimer<- cbind(ma_ret,sp500rets)
 
colnames(dowtimer) = c('SPX/DJI-Timer','Buy&Hold')
 
maxDrawdown(dowtimer)
table.AnnualizedReturns(dowtimer, Rf= 0.04/52)
charts.PerformanceSummary(dowtimer, Rf = 0.04/52, main="SPX/DJI Timer",geometric=FALSE)
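
To make the cherry-picking point concrete, here is a small robustness check of my own (not part of the strategy above) that re-runs the comparison from a few candidate start dates, reusing the dowtimer object built above; if the edge only shows up for certain start dates, that is a warning sign:

# compare annualised Sharpe Ratios of timer vs buy & hold across start dates
starts <- c("1950-02-05", "1970-01-01", "1990-01-01", "2000-01-01")
for (s in starts) {
  sub <- na.omit(dowtimer[paste0(s, "::")])
  cat("\nFrom", s, ":\n")
  print(SharpeRatio.annualized(sub, Rf = 0.04/52))
}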


Bond Timer for S&P500 (Part 2)

As I said the other day when posting Part 1, I was tired. I also don’t wish to delete older posts that may be of little value, rather I want to keep my workings and progress. Maybe a symbolic way of dealing with my “Shadow”. For those readers not into geekish financial maths, give this post a miss.

Let us have another look at the results, this time with some tweaks to the approach, and then we will go into more depth to establish whether this method truly adds value over the Buy and Hold approach.

We start by downloading S&P 500 and US 10yr Treasury data from Quandl going back to 1962, so we have 53 years of data. I would actually like to use a lot more data, as I fear that the secular bond bull cycle may be distorting our results. We create a ratio by dividing the S&P 500 by the bond yield (I think working with the nominal index and not using log returns may be an issue worth looking into). Once we have the ratio we smooth it with 50-day and 200-day moving averages. The signal trigger in our model is then to buy the market whenever the 50-day moving average of the ratio is greater than the 200-day. It is quite hard to see on this chart, but you can at least see the shape of the ratio relative to the S&P 500 (black):

[Chart: S&P 500 (black) with the S&P 500/10yr-yield ratio and its 50-day and 200-day moving averages]

So let's look at the results:

[Table: annualized returns and Sharpe Ratios, Ratio Cross vs Buy & Hold, from 1962]

According to the results from 1962, the Sharpe Ratio of the Ratio Cross is better than Buy & Hold, 0.50 vs 0.32. You can also see a superior (smaller) maximum drawdown. We do not focus on absolute return, where buy and hold outperforms, as our stated objective is always risk-adjusted returns. I include tail(sig_ratio) so that you can see that the system is invested at the moment.

[Chart: Bond Timer performance summary]

 

We now try a different study and look at the Sharpe Ratio on a yearly basis, then take the mean over the 53 years. Here I get a confusing result which I don't know how to explain: it seems to say that the Ratio Cross has a Sharpe Ratio of 0.60 vs 0.70, hmmm??? To take this one step further I tested the market timing ability of my Bond Timer with the Treynor-Mazuy quadratic regression approach, and you can see below that the system produces alpha, which seems to suggest that the Bond Timer is providing market timing value add. By the way, the MarketTiming() function isn't in the PerformanceAnalytics package on CRAN, so I have included the function in my code below.

[Table: Treynor-Mazuy market timing regression output]
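
For reference, the Treynor-Mazuy test regresses the strategy's excess return on the market's excess return and its square; alpha is the intercept, beta the linear term, and a positive gamma on the squared term is the market timing signature. Here is a minimal sketch of that regression fitted directly on the return series with lm, assuming Rf = 0 (note that in the code below I feed the yearly Sharpe series into MarketTiming() instead):

# Treynor-Mazuy: xRp = alpha + beta*xRm + gamma*xRm^2 + error
# btimer is the two-column return object built in the code below
xRp <- as.numeric(btimer[, 1])    # Ratio Cross (timer) returns
xRm <- as.numeric(btimer[, 2])    # Buy & Hold (market) returns
tm  <- lm(xRp ~ xRm + I(xRm^2))
coef(tm)                          # (Intercept) = alpha, xRm = beta, I(xRm^2) = gamma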

Conclusion:

This is still very much a work in progress, but I believe the ratio timing method adds value to a buy and hold approach; however, I need to explain why I am getting a conflicting result when looking at Sharpe Ratios one year at a time.

require(quantmod)
require(PerformanceAnalytics)
require(Quandl)
 
start<- "1962-01-02"
#get the data (S&P500 and US 10yr Treasury)
spy<- Quandl("YAHOO/INDEX_GSPC", authcode="kvYEqCqKCTyL4anWz5Zv", type="xts")
bond<- Quandl("FRED/DGS10", authcode="kvYEqCqKCTyL4anWz5Zv", type="xts")
 
data<- merge(spy[,6],bond[,1])
data<- data[paste0(start, "::")]   # from 1962, matching the start date above
 
# This is the magic signal or is it? It is the Nominal Price of the S&P500 / bond yield. 
# The fact that it is nominal worries me. 
ratio<- data[,1]/data[,2]
ratio<- na.omit(ratio)
# moving averages of the ratio
ratio_short<- rollmean(ratio, k=50, align= "right")
ratio_long<- rollmean(ratio, k=200, align= "right")
 
# I like visual references; ignore the red errors in the output console, I am too lazy to fix up the axes.
plot(data[,1], main="Mike's Bond Timer")
par(new=TRUE)
plot(ratio_long, col ='red')
axis(4)
par(new=TRUE)
plot(ratio_short, col ='green')
par(new=TRUE)
plot(ratio, col ='blue')
 
#our baseline, unfiltered results
ret <- ROC(data[,1])
 
#our comparison, filtered result. The idea is to hold the S&P500 while the short ratio is above the long ratio.
sig_ratio <- Lag(ifelse(ratio_short > ratio_long, 1, 0))
sig_ret <- ret * sig_ratio
 
btimer<- cbind(sig_ret, ret)
colnames(btimer) = c('Ratio Cross', 'Buy&Hold')
 
table.AnnualizedReturns(btimer, Rf= 0)
charts.PerformanceSummary(btimer, Rf = 0, main="Bond Timer",geometric=FALSE)
maxDrawdown(btimer)
tail(sig_ratio)
 
# This looks at the Sharpe Ratio on an annual basis as opposed to the whole period.
years <- apply.yearly(btimer, SharpeRatio.annualized)
 
# To get rid of NaNs, this helper function (found on Stack Overflow) converts them to zero.
is.nan.data.frame <- function(x)
  do.call(cbind, lapply(x, is.nan))
years[is.nan(years)] <- 0
years
sapply(years, mean)
 
# Market Timing Function which is not part of the CRAN release.
MarketTiming <- function (Ra, Rb, Rf = 0, method = c("TM", "HM"))
{ # @author Andrii Babii, Peter Carl
 
  # FUNCTION
 
  Ra = checkData(Ra)
  Rb = checkData(Rb)
  if (!is.null(dim(Rf))) 
    Rf = checkData(Rf)
  Ra.ncols = NCOL(Ra)
  Rb.ncols = NCOL(Rb)
  pairs = expand.grid(1:Ra.ncols, 1)
  method = method[1]
  xRa = Return.excess(Ra, Rf)
  xRb = Return.excess(Rb, Rf)
 
  mt <- function (xRa, xRb)
  {
    switch(method,
           "HM" = { S = xRb > 0 },
           "TM" = { S = xRb }
    )
    R = merge(xRa, xRb, xRb*S)
    R.df = as.data.frame(R)
    model = lm(R.df[, 1] ~ 1 + ., data = R.df[, -1])
    return(coef(model))
  }
 
  result = apply(pairs, 1, FUN = function(n, xRa, xRb) 
    mt(xRa[, n[1]], xRb[, 1]), xRa = xRa, xRb = xRb)
  result = t(result)
 
  if (ncol(Rb) > 1){
    for (i in 2:ncol(xRb)){
      res = apply(pairs, 1, FUN = function(n, xRa, xRb) 
        mt(xRa[, n[1]], xRb[, i]), xRa = xRa, xRb = xRb)
      res = t(res)
      result = rbind(result, res)
    }
  }
 
  rownames(result) = paste(rep(colnames(Ra), ncol(Rb)), "to",  rep(colnames(Rb), each = ncol(Ra)))
  colnames(result) = c("Alpha", "Beta", "Gamma")
  return(result)
}
print(MarketTiming(years[,1],years[,2],Rf=0))


Correlation ≠ Causation

I just had to put this point down on paper. I was thinking about it earlier and wanted to highlight the point with a strong visual image. I will go into more depth in one of my Sunday letters next year and show how prevalent this line of thinking is in trading.

How silly we seem, seeing spooks in the dark. Actually, there is something called a "confounding variable", which is one of the factors that steers us off course. More about it in the new year.
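
A toy simulation (my own illustration, not market data) makes the confounder point concrete: two series that never influence each other can still show a strong correlation when both are driven by a common third factor:

# x and y are both driven by a hidden confounder z, not by each other
set.seed(42)
z <- cumsum(rnorm(1000))        # the confounding variable
x <- 0.8 * z + rnorm(1000)      # x depends only on z
y <- 0.8 * z + rnorm(1000)      # y depends only on z
cor(x, y)                       # high correlation despite no causal link between x and y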

Let's keep this a little light into the new year. 😉

 

Global Stock Index Watchlist

I am really pretty chuffed with my effort; I spent many hours today working on this R code. I haven't commented it like I should because it is not yet a complete product. For those who don't know, R is an open-source statistical programming language that has taken the data analytics world by storm. I am a novice at writing scripts but enjoy learning; it is my latest hobby.

Here is the output and code, which any of you can run on your own PC; the data is from Yahoo. I notice Russia is missing, so I must make an effort to include it in my next post.

[Table: Global Stock Index Watchlist % change]

require(quantmod)
require(PerformanceAnalytics)
require(gridExtra)
 
 
# select ticker symbols and time frame from Yahoo World Indexes
index <- c("^AORD", "^SSEC", "^HSI", "^BSESN", "^JKSE", "^KLSE", "^N225", "^NZ50", "^STI", "^KS11", "^TWII", "^GSPTSE", "^GSPC", "^ATX", "^BFX", "^FCHI", "^GDAXI", "^SSMI", "^FTSE", "GD.AT")
date_begin <- as.Date("2009-03-01")
date_end <- as.Date("2014-12-31")
 
tickers <- getSymbols(index, from=date_begin, to=date_end, auto.assign=TRUE)
 
dataset <- Ad(get(tickers[1]))
for (i in 2:length(tickers)) { dataset <- merge(dataset, Ad(get(tickers[i]))) }
 
# handle NA values 
data <- na.locf(dataset) # last observation carried forward to get rid of the NA's
data <- na.omit(data) # omit values with NA values
 
# daily returns
daily <- dailyReturn(data[,1])
for (i in 2:ncol(data)) 
    { daily <- merge(daily, dailyReturn(data[,i]))  
    } 
colnames(daily) <- index
day<- tail(daily,n=1)
 
# weekly returns
weekly <- weeklyReturn(data[,1])
for (i in 2:ncol(data)) 
    { weekly <- merge(weekly, weeklyReturn(data[,i]))  
    } 
colnames(weekly) <- index
week<- tail(weekly,n=1)
 
# monthly returns
monthly <- monthlyReturn(data[,1])
for (i in 2:ncol(data)) 
    { monthly <- merge(monthly, monthlyReturn(data[,i]))  
    } 
colnames(monthly) <- index
month<- tail(monthly,n=1)
 
# quarterly returns
quarterly <- quarterlyReturn(data[,1])
for (i in 2:ncol(data)) 
    { quarterly <- merge(quarterly, quarterlyReturn(data[,i]))  
    } 
colnames(quarterly) <- index
quarter<- tail(quarterly,n=1)
 
# Annual returns
annual <- annualReturn(data[,1])
for (i in 2:ncol(data)) 
    { annual <- merge(annual, annualReturn(data[,i]))  
    } 
colnames(annual) <- index
year<- tail(annual,n=1)
 
summary<- rbind(day,week,month,quarter,year)
colnames(summary) <- c("Aussie", "China", "Hong Kong", "India", "Indonesia", "Malaysia", "Japan", "New Zealand", "Singapore", "Korea", "Taiwan", "Canada", "S&P500", "Austria", "Belgium", "France", "Germany", "Swiss", "UK", "Greece")
 
 
# transpose the data
global<- t(summary)
colnames(global)<- c('Daily','Weekly','Monthly','Quarterly','Annual')
global<- as.data.frame(global) 
 
is.num<- sapply(global, is.numeric)
global[is.num]<- lapply(global[is.num], round,4)
global<- global*100
 
print(global)
grid.newpage(recording = FALSE)
grid.table(global)


Explain your Relevance

I am writing this note to myself to help align my thinking on why I believe this blog has relevance.

  1. I want to increase the number of posts relating to the macro view of the economic landscape; by this I mean posting statistical checkpoints. One always needs to see the world in context. Over time the consistency and clarity of these posts will improve.
  2. One of my greatest insights, gained through the financial and emotional pain of losing money, is that we are mostly fooled by randomness. Remember, correlation does not equal causation; causation is a far deeper dimension than correlation and requires a better understanding of the universal laws, and by that I include G-d.
  3. There are axiomatic approaches to economics, the markets, life, ….. which represent asymptotic absolute truths. The time-space of this inner wisdom will not provide readers of this blog with precise forecasts; rather I hope to provide a probable risk framework that will enable readers to better navigate the macro environment.
  4. The markets are driven by people. Ludwig von Mises called the study of this Praxeology, "the deductive study of human action based on the fact that humans engage in purposeful behavior". We want to go further than studying the behaviour and formulating its consequences. Human action is driven by the way people think, consciously and unconsciously. We want to go deep into the psyche of the market participants; we want to look at the psyche of the market itself. We want to see if the system is "sick" and, if so, we want to be warned. While we may not be able to cure the sickness, we can take the necessary precautionary measures to avoid suffering from the contagious effects of the ailment.
  5. This blog is an open dialectic with a multifaceted personality trying to make sense of a complex world. The thoughts may be one-sided at times, and my goal, as is everyone's, is to achieve Wholeness, so hopefully being privy to the raw stream of one person's psychic material will provide readers with some insights about me, the markets, life and yourself. I am also a reader of this blog and draw tremendous benefit from reading my thoughts as they were written at a particular moment in time. Many times I don't even remember writing them; go figure that one.
  6. Hopefully this blog will also provide some sort of entertainment value. While I am at times tremendously serious and contemplative, other times I am silly and childlike and a lot of fun to be around.

Market Extremes

I wish to highlight two important points.

The first one is to stand behind one of the great market analysts of our times, Dr John Hussman. Yes, it's true his reputation has been smashed by his poor performance over the last few years. On this I am not able to defend him as much as I would like, as I think the processes he adds to his portfolio construction on top of his macro analysis leave much to be desired. I actually don't even want to go there; rather, I want to stand by his rigorous market climate and valuation approach.

These are the key points and like John I am prepared to fall on my sword and face the ridicule.

"Meanwhile, the S&P 500 is more than double its historical valuation norms on reliable measures (with about 90% correlation with actual subsequent 10-year market returns), sentiment is lopsided, and we observe dispersion across market internals, along with widening credit spreads. These and similar considerations present a coherent pattern that has been informative in market cycles across a century of history – including the period since 2009. None of those considerations inform us that the U.S. stock market currently presents a desirable opportunity to accept risk."

Where he refers to a 90% correlation with actual subsequent returns, this refers to his valuation-based return forecasting model. If you go through the math you will see how the model works, but you are safe in the assumption that over the long term this model is pretty darn accurate. See below for an example of how it looks.
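
To give a feel for the arithmetic, here is a stylized sketch of my own (not Hussman's exact calibration): if valuations mean-revert to their historical norm over roughly a decade while fundamentals grow at some nominal rate, the expected annual total return falls out of simple compounding:

# stylized 10-year return estimate; all inputs are illustrative assumptions
g       <- 0.063   # assumed nominal growth rate of fundamentals
norm    <- 1.0     # historical valuation norm (normalised to 1)
current <- 2.0     # "more than double its historical norms"
div_yld <- 0.02    # assumed dividend yield
(1 + g) * (norm / current)^(1/10) - 1 + div_yld   # roughly 1% per annum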

I end my first point with Hussman’s words highlighting how overvalued we currently are, “The equity market is now more overvalued than at any point in history outside of the 2000 peak, and on the measures that we find best correlated with actual subsequent total returns, is 115% above reliable historical norms and only 15% below the 2000 extreme. Unless QE will persist forever, even 3-4 more years of zero short-term interest rates don’t “justify” more than a 12-16% elevation above historical norms.”

My second point is just to highlight an extreme in market momentum that we haven't seen in the index's history. I am not sure what to draw from it right now, but I wanted to document it as I believe it will be significant when we look back over the fullness of time. For 29 days the S&P 500 closed above its 5-day moving average; the previous record of 27 days took place in 1928.
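
For anyone who wants to verify the streak themselves, a quick sketch along these lines should do it (assuming daily GSPC data is already loaded via getSymbols, as in the Dow Signals code above):

# longest run of consecutive daily closes above the 5-day moving average
ma5   <- SMA(Cl(GSPC), n = 5)
above <- as.integer(Cl(GSPC) > ma5)
above[is.na(above)] <- 0                      # zero out the warm-up NAs
runs  <- rle(above)
max(runs$lengths[runs$values == 1])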