RSS Feed : QuantStrat Trader

RSS Feed from QuantStrat Trader :

  • Ehlers’s Autocorrelation Periodogram

    This post will introduce John Ehlers’s Autocorrelation Periodogram, a mechanism designed to dynamically find a lookback period, which happens to be the most commonly optimized parameter in backtests.

    Before beginning this post, I must give credit where it’s due, to one Mr. Fabrizio Maccallini, the head of structured derivatives at Nordea Markets in London. You can find the rest of the repository he did for Dr. John Ehlers’s Cycle Analytics for Traders on his github. I am grateful and honored that such intelligent and experienced individuals are helping to bring some of Dr. Ehlers’s methods into R.

    The point of the Ehlers Autocorrelation Periodogram is to dynamically set a period between a minimum and a maximum period length. While I leave the exact explanation of the mechanics to Dr. Ehlers’s book, for all practical intents and purposes, in my opinion, the punchline of this method is that it attempts to remove a massive source of overfitting from trading system creation–namely, specifying a lookback period.

    SMA of 50 days? 100 days? 200 days? Well, this algorithm takes that possibility of overfitting out of your hands. Simply, specify an upper and lower bound for your lookback, and it does the rest. How well it does it is a topic of discussion for those well-versed in the methodologies of electrical engineering (I’m not), so feel free to leave comments that discuss how well the algorithm does its job, and feel free to blog about it as well.

    In any case, here’s the original algorithm code, courtesy of Mr. Maccallini:

    AGC <- function(loCutoff = 10, hiCutoff = 48, slope = 1.5) {
      accSlope = -slope # acceptableSlope = 1.5 dB
      ratio = 10 ^ (accSlope / 20)
      if ((hiCutoff - loCutoff) > 0)
        factor <-  ratio ^ (2 / (hiCutoff - loCutoff));
      return (factor)
    }
    
    autocorrPeriodogram <- function(x, period1 = 10, period2 = 48, avgLength = 3) {
      # high pass filter
      alpha1 <- (cos(sqrt(2) * pi / period2) + sin(sqrt(2) * pi / period2) - 1) / cos(sqrt(2) * pi / period2)
      hp <- (1 - alpha1 / 2) ^ 2 * (x - 2 * lag(x) + lag(x, 2))
      hp <- hp[-c(1, 2)]
      hp <- filter(hp, (1 - alpha1), method = "recursive")
      hp <- c(NA, NA, hp)
      hp <- xts(hp, order.by = index(x))
      # super smoother
      a1 <- exp(-sqrt(2) * pi / period1)
      b1 <- 2 * a1 * cos(sqrt(2) * pi / period1)
      c2 <- b1
      c3 <- -a1 * a1
      c1 <- 1 - c2 - c3
      filt <- c1 * (hp + lag(hp)) / 2
      leadNAs <- sum(is.na(filt))
      filt <- filt[-c(1: leadNAs)]
      filt <- filter(filt, c(c2, c3), method = "recursive")
      filt <- c(rep(NA, leadNAs), filt)
      filt <- xts(filt, order.by = index(x))
      # Pearson correlation for each value of lag
      autocorr <- matrix(0, period2, length(filt))
      for (lag in 2: period2) {
        # Set the average length as M
        if (avgLength == 0) M <- lag
        else M <- avgLength
        autocorr[lag, ] <- runCor(filt, lag(filt, lag), M)
      }
      autocorr[is.na(autocorr)] <- 0
      # Discrete Fourier transform
      # Correlate autocorrelation values with the cosine and sine of each period of interest
      # The sum of the squares of each value represents relative power at each period
      cosinePart <- sinePart <- sqSum <- R <- Pwr <- matrix(0, period2, length(filt))
      for (period in period1: period2) {
        for (N in 2: period2) {
          cosinePart[period, ] = cosinePart[period, ] + autocorr[N, ] * cos(2 * N * pi / period)
          sinePart[period, ] = sinePart[period, ] + autocorr[N, ] * sin(2 * N * pi / period)
        }
        sqSum[period, ] = cosinePart[period, ] ^ 2 + sinePart[period, ] ^ 2
        R[period, ] <- EMA(sqSum[period, ] ^ 2, ratio = 0.2)
      }
      R[is.na(R)] <- 0
      # Normalising Power
      K <- AGC(period1, period2, 1.5)
      maxPwr <- rep(0, length(filt))
      for(period in period1: period2) {
        for (i in 1: length(filt)) {
          if (R[period, i] >= maxPwr[i]) maxPwr[i] <- R[period, i]
          else maxPwr[i] <- K * maxPwr[i]
        }
      }
      for(period in 2: period2) {
        Pwr[period, ] <- R[period, ] / maxPwr
      }
      # Compute the dominant cycle using the Center of Gravity of the spectrum
      Spx <- Sp <- rep(0, length(filt))
      for(period in period1: period2) {
        Spx <- Spx + period * Pwr[period, ] * (Pwr[period, ] >= 0.5)
        Sp <- Sp + Pwr[period, ] * (Pwr[period, ] >= 0.5)
      }
      dominantCycle <- Spx / Sp
      dominantCycle[is.nan(dominantCycle)] <- 0
      dominantCycle <- xts(dominantCycle, order.by=index(x))
      dominantCycle <- dominantCycle[dominantCycle > 0]
      return(dominantCycle)
      #heatmap(Pwr, Rowv = NA, Colv = NA, na.rm = TRUE, labCol = "", add.expr = lines(dominantCycle, col = 'blue'))
    }
    

    One thing I do notice is that this code contains a loop of the form for(i in 1:length(filt)), that is, a loop over every data point, which I view as the plague in R. While I’ve used Rcpp before, it’s been for only the most basic of loops, so this is definitely a place where the algorithm could be sped up with Rcpp, given R’s notoriously slow looping (though, as the sketch below shows, this particular loop can also be vectorized in plain R).
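
    Here is a minimal sketch of that vectorization (my own rewrite, assuming R, K, filt, period1, and period2 as defined inside the function above):

    maxPwr <- rep(0, length(filt))
    for (period in period1:period2) {
      # same comparison as the original i-loop, applied to every column at once:
      # keep the new power reading if it exceeds the running maximum, otherwise decay it by K
      maxPwr <- ifelse(R[period, ] >= maxPwr, R[period, ], K * maxPwr)
    }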

    Those interested in the exact logic of the algorithm will, once again, find it in John Ehlers’s Cycle Analytics For Traders book (see link earlier in the post).

    Of course, the first thing to do is to test how well the algorithm does what it purports to do, which is to dictate the lookback period of an algorithm.

    Let’s run it on some data.

    getSymbols('SPY', from = '1990-01-01')
    
    t1 <- Sys.time()
    out <- autocorrPeriodogram(Ad(SPY), period1 = 120, period2 = 252, avgLength = 3)
    t2 <- Sys.time()
    print(t2-t1)

    And the result:

    > t1 <- Sys.time()
    > out <- autocorrPeriodogram(Ad(SPY), period1 = 120, period2 = 252, avgLength = 3)
    > t2 <- Sys.time()
    > print(t2-t1)
    Time difference of 33.25429 secs
    

    Now, what does the algorithm-set lookback period look like?

    plot(out)
    

    Let’s zoom in on 2001 through 2003, when the markets went through some upheaval.

    plot(out['2001::2003'])
    

    In this zoomed-in image, we can see that the algorithm’s estimates seem fairly jumpy.

    Here’s some code that feeds the algorithm’s estimates of n into an indicator function, computing an indicator with a dynamic lookback period as set by Ehlers’s autocorrelation periodogram.

    acpIndicator <- function(x, minPeriod, maxPeriod, indicatorFun = EMA, ...) {
      acpOut <- autocorrPeriodogram(x = x, period1 = minPeriod, period2 = maxPeriod)
      roundedAcpNs <- round(acpOut, 0) # round to the nearest integer
      uniqueVals <- unique(roundedAcpNs) # unique integer values
      out <- xts(rep(NA, length(roundedAcpNs)), order.by=index(roundedAcpNs))
    
      for(i in 1:length(uniqueVals)) { # loop through unique values, compute indicator
        tmp <- indicatorFun(x, n = uniqueVals[i], ...)
        out[roundedAcpNs==uniqueVals[i]] <- tmp[roundedAcpNs==uniqueVals[i]]
      }
      return(out)
    }
    

    And here is the function applied with an SMA, to tune between 120 and 252 days.

    ehlersSMA <- acpIndicator(Ad(SPY), 120, 252, indicatorFun = SMA)
    
    plot(Ad(SPY)['2008::2010'])
    lines(ehlersSMA['2008::2010'], col = 'red')
    

    And the result:

    As seen, this algorithm is less consistent than I would like, at least when it comes to using a simple moving average.

    For now, I’m going to leave this code here, and let people experiment with it. I hope that someone will find that this indicator is helpful to them.

    Thanks for reading.

    NOTES: I am always interested in networking/meet-ups in the northeast (Philadelphia/NYC). Furthermore, if you believe your firm will benefit from my skills, please do not hesitate to reach out to me. My linkedin profile can be found here.

    Lastly, I am volunteering to curate the R section for books on Quantocracy. If you have a book about R that can apply to finance, be sure to let me know about it, so that I can review it and possibly recommend it. Thank you.


    Read more »
  • A Review of Alpha Architect’s (Wes Gray/Jack Vogel) Quantitative Momentum book

    This post will be an in-depth review of Alpha Architect’s Quantitative Momentum book. Overall, in my opinion, the book is terrific for those who are practitioners in fund management in the individual equity space, and it still contains ideas worth thinking about outside of that space. However, the system detailed in the book relies on nested ranking (rank along axis X, take the top decile, rank along axis Y within the top decile in X, and take the top decile along axis Y, essentially restricting selection to 1% of the universe). Furthermore, the book does not do much to touch upon volatility controls, which may have greatly enhanced the system outlined.

    Before I get into the brunt of this post, I’d like to let my readers know that I formalized my nuts and bolts of quantstrat series of posts as a formal datacamp course. Datacamp is a very cheap way to learn a bunch of R, and financial applications are among those topics. My course covers the basics of quantstrat, and if those who complete the course like it, I may very well create more advanced quantstrat modules on datacamp. I’m hoping that the finance courses are well-received, since there are financial topics in R I’d like to learn myself that a 45 minute lecture doesn’t really suffice for (such as Dr. David Matteson’s change points magic, PortfolioAnalytics, and so on). In any case, here’s the link.

    So, let’s start with a summary of the book:

    Part 1 consists of several chapters that form the giant exposé of why momentum works (or at least, has worked for the 20-plus years since 1993)–namely that human biases and irrational behaviors act in certain ways to make the anomaly work. Then there’s also the career risk (AKA it’s a risk factor, and so, if your benchmark is SPY and you run across a 3+ year period of underperformance, you have severe career risk), and essentially, a whole litany of why a professional asset manager would get fired but if you just stick with the anomaly over many many years and ride out multi-year stretches of relative underperformance, you’ll come out ahead in the very long run.

    Generally, I feel like there’s work to be done if this is the best that can be done, but okay, I’ll accept it.

    Essentially, part 1 is for the uninitiated. For those that have been around the momentum block a couple of times, they can skip right past this. Unfortunately, it’s half the book, so that leaves a little bit of a sour taste in the mouth.

    Next, part two is where, in my opinion, the real meat and potatoes of the book lie–the “how”.

    Essentially, the algorithm can be boiled down into the following:

    Taking the universe of large and mid-cap stocks, do the following:

    1) Sort the stocks into deciles by 2-12 momentum–that is, at the end of every month, calculate momentum as last month’s closing price minus the closing price 12 months ago. Essentially, research states that there’s a reversion effect on the 1-month momentum, which is why the most recent month is skipped. However, this effect doesn’t carry over into the ETF universe in my experience.

    2) Here’s the interesting part which makes the book worth picking up on its own (in my opinion): after sorting into deciles, rank the top decile by the following metric: multiply the sign of the 2-12 momentum by the quantity (% negative returns – % positive returns); a minimal sketch of this metric follows after this list. Essentially, the idea here is to determine the smoothness of the momentum. That is, in the most extreme situation, imagine a stock that did absolutely nothing for 230 days and then had one massive day that gave it its entire price appreciation (think Google when it had a 10% jump off of better-than-expected numbers), and in the other extreme, a stock that simply had each and every single day be a small positive price appreciation. Obviously, you’d want the second type of stock. That’s the idea behind this metric. Again, sort into deciles, and take the top decile. Therefore, taking the top decile of the top decile leaves you with 1% of the universe. Essentially, this makes the idea very difficult to replicate–since you’d need to track down a massive universe of stocks. That stated, I think the expression is actually a pretty good idea as a stand-in for volatility. That is, regardless of how volatile an asset is–whether it’s as volatile as a commodity like DBC, or as non-volatile as a fixed-income product like SHY–this expression is an interesting way of distinguishing “this path is choppy” from “this path is smooth”. I might investigate this expression further on my blog in the future.

    3) Lastly, if the portfolio is turning over quarterly instead of monthly, the best months to turn it over are the months preceding end-of-quarter month (that is, February, May, August, November) because a bunch of amateur asset managers like to “window dress” their portfolios. That is, they had a crummy quarter, so at the last month before they have to send out quarterly statements, they load up on some recent winners so that their clients don’t think they’re as amateur as they really let on, and there’s a bump for this. Similarly, January has some selling anomalies due to tax-loss harvesting. As far as practical implementations go, I think this is a very nice touch. Conceding the fact that turning over every month may be a bit too expensive, I like that Wes and Jack say “sure, you want to turn it over once every three months, but on *which* months?”. It’s a very good question to ask if it means you get an additional percentage point or 150 bps a year from that, as it just might cover the transaction costs and then some.
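
    To make step 2 concrete, here is a minimal sketch of that smoothness metric (my own illustration, not Alpha Architect’s code; fipScore and dailyRets are hypothetical names):

    # dailyRets: a vector of daily returns covering the 2-12 momentum window
    # (the most recent month skipped)
    fipScore <- function(dailyRets) {
      mom <- prod(1 + dailyRets) - 1    # cumulative momentum over the window
      pctNeg <- mean(dailyRets < 0)     # share of down days
      pctPos <- mean(dailyRets > 0)     # share of up days
      sign(mom) * (pctNeg - pctPos)     # more negative = smoother path for a winner
    }

    The idea would be to compute this for each name in the top momentum decile and favor the smoothest (most negative) scores.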

    All in all, it’s a fairly simple strategy to understand. However, the part that somewhat gates off a perfect replication of the book is the difficulty of obtaining the CRSP data.

    However, I do commend Alpha Architect for disclosing the entire algorithm from start to finish.

    Furthermore, if the basic 2-12 momentum is not enough, there’s an appendix detailing other types of momentum ideas (earnings momentum, ranking by distance to 52-week highs, absolute historical momentum, and so on). None of these strategies is really that much better than the basic price momentum strategy, so they’re there for those interested, but there’s nothing really ground-breaking. That is, if you’re trading once a month, there are only so many ways of saying “hey, I think this thing is going up!”

    I also like that Wes and Jack touched on the fact that trend-following, while it doesn’t improve overall CAGR or Sharpe, does a massive amount to improve max drawdown. That is, faced with the prospect of losing 70-80% of everything versus losing only 30%, that’s an easy choice to make. Trend-following is good, even a simplistic version.

    All in all, I think the book accomplishes what it sets out to do, which is to present a well-researched algorithm. Ultimately, the punchline is on Alpha Architect’s site (I believe they have some sort of monthly stock filter). Furthermore, the book states that there are better risk-adjusted returns when combined with the algorithm outlined in the “quantitative value” book. In my experience, I’ve never had value algorithms impress me in the backtests I’ve done, but I can chalk that up to me being inexperienced with all the various valuation metrics.

    My criticism of the book, however, is this:

    The momentum algorithm in the book misses what I feel is one key component: a volatility targeting control. Simply, the paper “Momentum Has Its Moments” (which I covered in my hypothesis-driven development series of posts) essentially states that the usual Fama-French momentum strategy does far better from a risk-reward standpoint by deleveraging during times of excessive volatility, thereby avoiding momentum crashes. I’m not sure why Wes and Jack didn’t touch upon this paper, since the implementation is very simple (target volatility / realized volatility = leverage factor). Ideally, I’d love it if Wes or Jack could send me the stream of returns for this strategy (preferably daily, but monthly also works).
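
    To make that concrete, here is a minimal sketch of such an overlay (my own illustration with my own parameter choices, not the paper’s exact specification), assuming a monthly xts return series:

    require(xts)
    require(TTR)

    volTargetOverlay <- function(monthlyRets, targetVol = 0.12, lookback = 6, maxLev = 2) {
      realizedVol <- runSD(monthlyRets, n = lookback) * sqrt(12) # annualized realized volatility
      leverage <- pmin(targetVol / realizedVol, maxLev)          # target/realized = leverage factor, capped
      na.omit(lag(leverage) * monthlyRets)                       # apply last month's leverage to this month
    }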

    Essentially, I think this book is very comprehensive. However, I think it also has a somewhat “don’t try this at home” feel to it due to the data requirement to replicate it. Certainly, if your broker charges you $8 a transaction, it’s not a feasible strategy to drop several thousand bucks a year on transaction costs that’ll just give your returns to your broker. However, I do wonder if the QMOM ETF (from Alpha Architect, of course) is, in fact, a better version of this strategy, outside of the management fee.

    In any case, my final opinion is this: while this book leaves a little bit of knowledge on the table, on a whole, it accomplishes what it sets out to do, is clear with its procedures, and provides several worthwhile ideas. For the price of a non-technical textbook (aka those $60+ books on amazon), this book is a steal.

    4.5/5 stars.

    Thanks for reading.

    NOTE: While I am currently employed in a successful analytics capacity, I am interested in hearing about full-time positions more closely related to the topics on this blog. If you have a full-time position which can benefit from my current skills, please let me know. My Linkedin can be found here.


    Read more »
  • The Problem With Depmix For Online Regime Prediction

    This post will be about attempting to use the Depmix package for online state prediction. While the depmix package performs admirably when it comes to describing the states of the past, when used for one-step-ahead prediction, under the assumption that tomorrow’s state will be identical to today’s, the hidden markov model process found within the package does not perform to expectations.

    So, to start off, this post was motivated by Michael Halls-Moore, who recently posted some R code using the depmixS4 library for hidden markov models. Generally, I am loath to create posts on topics I don’t feel I have an absolutely front-to-back understanding of, but I’m doing this in the hope of learning from others on how to appropriately do online state-space prediction, or “regime switching” detection, as it may be called in more financial parlance.

    Here’s Dr. Halls-Moore’s post.

    While I’ve seen the usual theory of hidden markov models (that is, it can rain or it can be sunny, but you can only infer the weather judging by the clothes you see people wearing outside your window when you wake up), and have worked with toy examples in MOOCs (Udacity’s self-driving car course deals with them, if I recall correctly–or maybe it was the AI course), at the end of the day, theory is only as good as how well an implementation can work on real data.

    For this experiment, I decided to take SPY data since inception, and do a full in-sample “backtest” on the data. That is, given that the HMM algorithm from depmix sees the whole history of returns, with this “god’s eye” view of the data, does the algorithm correctly classify the regimes, if the backtest results are any indication?

    Here’s the code to do so, inspired by Dr. Halls-Moore’s.

    require(depmixS4)
    require(quantmod)
    require(PerformanceAnalytics) # for Return.calculate and the performance charts/tables below
    getSymbols('SPY', from = '1990-01-01', src='yahoo', adjust = TRUE)
    spyRets <- na.omit(Return.calculate(Ad(SPY)))
    
    set.seed(123)
    
    hmm <- depmix(SPY.Adjusted ~ 1, family = gaussian(), nstates = 3, data=spyRets)
    hmmfit <- fit(hmm, verbose = FALSE)
    post_probs <- posterior(hmmfit)
    post_probs <- xts(post_probs, order.by=index(spyRets))
    plot(post_probs$state)
    summaryMat <- data.frame(summary(hmmfit))
    colnames(summaryMat) <- c("Intercept", "SD")
    bullState <- which(summaryMat$Intercept > 0)
    bearState <- which(summaryMat$Intercept < 0)
    
    hmmRets <- spyRets * lag(post_probs$state == bullState) - spyRets * lag(post_probs$state == bearState)
    charts.PerformanceSummary(hmmRets)
    table.AnnualizedReturns(hmmRets)
    

    Essentially, while I did select three states, I noted that anything with an intercept above zero is a bull state, and anything below zero is a bear state, so it effectively reduces to two states.

    With the result:

    table.AnnualizedReturns(hmmRets)
                              SPY.Adjusted
    Annualized Return               0.1355
    Annualized Std Dev              0.1434
    Annualized Sharpe (Rf=0%)       0.9448
    

    So, not particularly terrible. The algorithm works, kind of, sort of, right?

    Well, let’s try online prediction now.

    require(doMC)
    
    dailyHMM <- function(data, nPoints) {
      subRets <- data[1:nPoints,]
      hmm <- depmix(SPY.Adjusted ~ 1, family = gaussian(), nstates = 3, data = subRets)
      hmmfit <- fit(hmm, verbose = FALSE)
      post_probs <- posterior(hmmfit)
      summaryMat <- data.frame(summary(hmmfit))
      colnames(summaryMat) <- c("Intercept", "SD")
      bullState <- which(summaryMat$Intercept > 0)
      bearState <- which(summaryMat$Intercept < 0)
      if(last(post_probs$state) %in% bullState) {
        state <- xts(1, order.by=last(index(subRets)))
      } else if (last(post_probs$state) %in% bearState) {
        state <- xts(-1, order.by=last(index(subRets)))
      } else {
        state <- xts(0, order.by=last(index(subRets)))
      }
      colnames(state) <- "State"
      return(state)
    }
    
    # took 3 hours in parallel
    t1 <- Sys.time()
    set.seed(123)
    registerDoMC((detectCores() - 1))
    states <- foreach(i = 500:nrow(spyRets), .combine=rbind) %dopar% {
      dailyHMM(data = spyRets, nPoints = i)
    }
    t2 <- Sys.time()
    print(t2-t1)
    

    So what I did here was take an expanding window, starting from 500 days after SPY’s inception, and kept increasing it by one day at a time. My prediction was, trivially enough, the most recent day’s state, using a 1 for a bull state and a -1 for a bear state. I ran this process in parallel (on a Linux cluster, because Windows’s doParallel library seems to not even know that certain packages are loaded, and it’s messier), and the first big issue is that this process took about three hours on seven cores for about 23 years of data. Not exactly encouraging, but computing time isn’t expensive these days.

    So let’s see if this process actually works.

    First, let’s test if the algorithm does what it’s actually supposed to do and use one day of look-ahead bias (that is, the algorithm tells us the state at the end of the day–how correct is it even for that day?).

    
    onlineRets <- spyRets * states 
    charts.PerformanceSummary(onlineRets)
    table.AnnualizedReturns(onlineRets)
    
    

    With the result:

    > table.AnnualizedReturns(onlineRets)
                              SPY.Adjusted
    Annualized Return               0.2216
    Annualized Std Dev              0.1934
    Annualized Sharpe (Rf=0%)       1.1456
    

    So, allegedly, the algorithm seems to do what it was designed to do, which is to classify a state for a given data set. Now, the most pertinent question: how well do these predictions do even one day ahead? You’d think that state-space predictions would be stable from day to day, given the long history, correct?

    
    onlineRets <- spyRets * lag(states)
    charts.PerformanceSummary(onlineRets)
    table.AnnualizedReturns(onlineRets)
    
    

    With the result:

    > table.AnnualizedReturns(onlineRets)
                              SPY.Adjusted
    Annualized Return               0.0172
    Annualized Std Dev              0.1939
    Annualized Sharpe (Rf=0%)       0.0888
    

    That is, without the lookahead bias, the state space prediction algorithm is atrocious. Why is that?

    Well, here’s the plot of the states:

    In short, the online HMM algorithm in the depmix package seems to change its mind very easily, with obvious (negative) implications for actual trading strategies.
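
    One quick way to quantify that fickleness (my own addition, not part of the original workflow) is to measure how often the one-step-ahead state call flips from one day to the next:

    stateFlips <- abs(diff(states)) > 0
    mean(stateFlips, na.rm = TRUE) # proportion of days on which the predicted state changes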

    So, that wraps it up for this post. Essentially, the main message here is this: there’s a vast difference between doing descriptive analysis (AKA “where have you been, why did things happen”) and predictive analysis (that is, “if I correctly predict the future, I get a positive payoff”). In my opinion, while descriptive statistics have their purpose in terms of explaining why a strategy may have performed how it did, ultimately, we’re always looking for better prediction tools. In this case, depmix, at least in this “out-of-the-box” demonstration, does not seem to be the tool for that.

    If anyone has had success using depmix (or another regime-switching algorithm in R) for prediction, I would love to see work that details the procedure taken, as it’s an area I’m looking to expand my toolbox into, but I don’t have any particularly good leads. Essentially, I’d like to think of this post as me describing my own experiences with the package.

    Thanks for reading.

    NOTE: On Oct. 5th, I will be in New York City. On Oct. 6th, I will be presenting at The Trading Show on the Programming Wars panel.

    NOTE: My current analytics contract is up for review at the end of the year, so I am officially looking for other offers as well. If you have a full-time role which may benefit from the skills you see on my blog, please get in touch with me. My linkedin profile can be found here.


    Read more »
  • An Introduction to Portfolio Component Conditional Value At Risk

    This post will introduce the component conditional value at risk mechanics found in PerformanceAnalytics, from a paper written by Brian Peterson, Kris Boudt, and Peter Carl. It is an easy-to-call mechanism for computing component expected shortfall in asset returns as they apply to a portfolio. While the exact mechanics are fairly complex, the upside is that the running time is nearly instantaneous, and this method is a solid tool for inclusion in asset allocation analysis.

    For those interested in an in-depth analysis of the intuition of component conditional value at risk, I refer them to the paper written by Brian Peterson, Peter Carl, and Kris Boudt.

    Essentially, here’s the idea: all assets in a given portfolio have a marginal contribution to its total conditional value at risk (also known as expected shortfall)–that is, the expected loss when the loss surpasses a certain threshold. For instance, if you want to know your 5% expected shortfall, then it’s the average of the worst 5 returns per 100 days, and so on. While for returns at a daily resolution it may sound as though there will never be enough data in a sufficiently short time frame (one year or less), the expected shortfall formula in PerformanceAnalytics defaults to an approximation using a Cornish-Fisher expansion, which delivers very good results so long as the p-value isn’t too extreme (that is, it works for relatively sane p-values such as the 1%-10% range).
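
    As a quick illustration of that point (my own example, not from the original write-up), the ES function in PerformanceAnalytics can compute the 5% expected shortfall of a single series both empirically and with the Cornish-Fisher ("modified") approximation:

    require(PerformanceAnalytics)
    data(edhec)

    # 5% expected shortfall of the first edhec series (convertible arbitrage)
    ES(edhec[, 1], p = 0.95, method = "historical") # empirical average of the worst 5% of months
    ES(edhec[, 1], p = 0.95, method = "modified")   # Cornish-Fisher approximation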

    Component Conditional Value at Risk has two uses: first off, given no input weights, it uses an equal-weight default, which allows it to provide a risk estimate for each individual asset without burdening the researcher with creating his or her own correlation/covariance heuristics. Secondly, when provided with a set of weights, the output changes to reflect the contribution of various assets in proportion to those weights. This means that this methodology works very nicely with strategies that exclude assets based on momentum but need a weighting scheme for the remaining assets. Furthermore, using this methodology also allows an ex-post analysis of risk contribution to see which instrument contributed what to risk.

    First, a demonstration of how the mechanism works using the edhec data set. There is no strategy here, just a demonstration of syntax.

    require(quantmod)
    require(PerformanceAnalytics)
    
    data(edhec)
    
    tmp <- CVaR(edhec, portfolio_method = "component")
    

    This will assume an equal-weight contribution from all of the funds in the edhec data set.

    So tmp is the contribution to expected shortfall from each of the various edhec managers over the entire time period. Here’s the output:

    $MES
               [,1]
    [1,] 0.03241585
    
    $contribution
     Convertible Arbitrage             CTA Global  Distressed Securities       Emerging Markets  Equity Market Neutral
              0.0074750513          -0.0028125166           0.0039422674           0.0069376579           0.0008077760
              Event Driven Fixed Income Arbitrage           Global Macro      Long/Short Equity       Merger Arbitrage
              0.0037114666           0.0043125937           0.0007173036           0.0036152960           0.0013693293
            Relative Value          Short Selling         Funds of Funds
              0.0037650911          -0.0048178690           0.0033924063 
    
    $pct_contrib_MES
     Convertible Arbitrage             CTA Global  Distressed Securities       Emerging Markets  Equity Market Neutral
                0.23059863            -0.08676361             0.12161541             0.21402052             0.02491917
              Event Driven Fixed Income Arbitrage           Global Macro      Long/Short Equity       Merger Arbitrage
                0.11449542             0.13303965             0.02212817             0.11152864             0.04224258
            Relative Value          Short Selling         Funds of Funds
                0.11614968            -0.14862694             0.10465269
    

    The salient part of this is the percent contribution (the last output). Notice that it can be negative, meaning that certain funds gain when others lose. At least, this was the case over the current data set. These assets diversify a portfolio and actually lower expected shortfall.

    > tmp2 <- CVaR(edhec, portfolio_method = "component", weights = c(rep(.1, 10), rep(0,3)))
    > tmp2
    $MES
               [,1]
    [1,] 0.04017453
    
    $contribution
     Convertible Arbitrage             CTA Global  Distressed Securities       Emerging Markets  Equity Market Neutral
              0.0086198045          -0.0046696862           0.0058778855           0.0109152240           0.0009596620
              Event Driven Fixed Income Arbitrage           Global Macro      Long/Short Equity       Merger Arbitrage
              0.0054824325           0.0050398011           0.0009638502           0.0044568333           0.0025287234
            Relative Value          Short Selling         Funds of Funds
              0.0000000000           0.0000000000           0.0000000000 
    
    $pct_contrib_MES
     Convertible Arbitrage             CTA Global  Distressed Securities       Emerging Markets  Equity Market Neutral
                0.21455894            -0.11623499             0.14630875             0.27169512             0.02388732
              Event Driven Fixed Income Arbitrage           Global Macro      Long/Short Equity       Merger Arbitrage
                0.13646538             0.12544767             0.02399157             0.11093679             0.06294345
            Relative Value          Short Selling         Funds of Funds
                0.00000000             0.00000000             0.00000000
    

    In this case, I equally weighted the first ten managers in the edhec data set, and put zero weight in the last three. Furthermore, we can see what happens when the weights are not equal.

    > tmp3 <- CVaR(edhec, portfolio_method = "component", weights = c(.2, rep(.1, 9), rep(0,3)))
    > tmp3
    $MES
               [,1]
    [1,] 0.04920372
    
    $contribution
     Convertible Arbitrage             CTA Global  Distressed Securities       Emerging Markets  Equity Market Neutral
              0.0187406982          -0.0044391078           0.0057235762           0.0102706768           0.0007710434
              Event Driven Fixed Income Arbitrage           Global Macro      Long/Short Equity       Merger Arbitrage
              0.0051541429           0.0055944367           0.0008028457           0.0044085104           0.0021768951
            Relative Value          Short Selling         Funds of Funds
              0.0000000000           0.0000000000           0.0000000000 
    
    $pct_contrib_MES
     Convertible Arbitrage             CTA Global  Distressed Securities       Emerging Markets  Equity Market Neutral
                0.38087972            -0.09021895             0.11632406             0.20873782             0.01567043
              Event Driven Fixed Income Arbitrage           Global Macro      Long/Short Equity       Merger Arbitrage
                0.10475109             0.11369947             0.01631677             0.08959710             0.04424249
            Relative Value          Short Selling         Funds of Funds
                0.00000000             0.00000000             0.00000000
    

    This time, notice that as the weight increased in the convertible arbitrage manager, so too did its contribution to the portfolio’s expected shortfall.

    For a future backtest, I would like to make some data requests. I would like to use the universe found in Faber’s Global Asset Allocation book. That said, the simulations in that book go back to 1972, and I was wondering if anyone out there has daily returns for those assets/indices. While some ETFs go back into the early 2000s, there are some that start rather late such as DBC (commodities, early 2006), GLD (gold, early 2004), BWX (foreign bonds, late 2007), and FTY (NAREIT, early 2007). As an eight-year backtest would be a bit short, I was wondering if anyone had data with more history.

    One other thing: I will be in New York for the trading show, speaking on the “programming wars” panel on October 6th.

    Thanks for reading.

    NOTE: While I am currently contracting, I am also looking for a permanent position which can benefit from my skills for when my current contract ends. If you have or are aware of such an opening, I will be happy to speak with you.


    Read more »
  • A Return.Portfolio Wrapper to Automate Harry Long Seeking Alpha Backtests

    This post will cover a function to simplify creating Harry Long type rebalancing strategies from Seeking Alpha for interested readers. As Harry Long has stated, most, if not all, of his strategies are more for demonstrative purposes than actual recommended investments.

    So, since Harry Long has been posting some more articles on Seeking Alpha, I’ve had a reader or two ask me to analyze his strategies (again). Instead of doing that, however, I’ll simply put this tool here, which is a wrapper that automates the acquisition of data and simulates portfolio rebalancing with one line of code.

    Here’s the tool.

    require(quantmod)
    require(PerformanceAnalytics)
    require(downloader)
    
    LongSeeker <- function(symbols, weights, rebalance_on = "years", 
                           displayStats = TRUE, outputReturns = FALSE) {
      getSymbols(symbols, src='yahoo', from = '1990-01-01')
      prices <- list()
      for(i in 1:length(symbols)) {
        if(symbols[i] == "ZIV") {
          download("https://www.dropbox.com/s/jk3ortdyru4sg4n/ZIVlong.TXT", destfile="ziv.txt")
          ziv <- xts(read.zoo("ziv.txt", header=TRUE, sep=",", format="%Y-%m-%d"))
          prices[[i]] <- Cl(ziv)
        } else if (symbols[i] == "VXX") {
          download("https://dl.dropboxusercontent.com/s/950x55x7jtm9x2q/VXXlong.TXT", 
                   destfile="vxx.txt")
          vxx <- xts(read.zoo("vxx.txt", header=TRUE, sep=",", format="%Y-%m-%d"))
          prices[[i]] <- Cl(vxx)
        }
        else {
          prices[[i]] <- Ad(get(symbols[i]))
        }
      }
      prices <- do.call(cbind, prices)
      prices <- na.locf(prices)
      returns <- na.omit(Return.calculate(prices))
      
      returns$zeroes <- 0
      weights <- c(weights, 1-sum(weights))
      stratReturns <- Return.portfolio(R = returns, weights = weights, rebalance_on = rebalance_on)
      
      if(displayStats) {
        stats <- rbind(table.AnnualizedReturns(stratReturns), maxDrawdown(stratReturns), CalmarRatio(stratReturns))
        rownames(stats)[4] <- "Max Drawdown"
        print(stats)
        charts.PerformanceSummary(stratReturns)
      }
      
      if(outputReturns) {
        return(stratReturns)
      }
    } 
    

    It fetches the data for you (usually from Yahoo, but a big thank you to Mr. Helmuth Vollmeier in the case of ZIV and VXX), and has the option of either simply displaying an equity curve and some statistics (CAGR, annualized standard deviation, Sharpe, max drawdown, Calmar), or giving you the return stream as an output if you wish to do more analysis in R.

    Here’s an example of simply getting the statistics, with an 80% XLP/SPLV (they’re more or less interchangeable) and 20% TMF (aka 60% TLT, so an 80/60 portfolio), from one of Harry Long’s articles.

    LongSeeker(c("XLP", "TLT"), c(.8, .6))
    

    Statistics:

    
                              portfolio.returns
    Annualized Return                 0.1321000
    Annualized Std Dev                0.1122000
    Annualized Sharpe (Rf=0%)         1.1782000
    Max Drawdown                      0.2330366
    Calmar Ratio                      0.5670285
    

    Equity curve:

    Nothing out of the ordinary compared with what we might expect from a balanced equity/bond portfolio. It generally does well, has its largest drawdown in the financial crisis along with some other bumps in the road, but overall, I’d consider it a fairly vanilla “set it and forget it” sort of thing.

    And here would be the way to get the stream of individual daily returns, assuming you wanted to rebalance these two instruments weekly, instead of yearly (as is the default).

    tmp <- LongSeeker(c("XLP", "TLT"), c(.8, .6), rebalance_on="weeks",
                        displayStats = FALSE, outputReturns = TRUE)
    

    And now let’s get some statistics.

    table.AnnualizedReturns(tmp)
    maxDrawdown(tmp)
    CalmarRatio(tmp)
    

    Which give:

    > table.AnnualizedReturns(tmp)
                              portfolio.returns
    Annualized Return                    0.1328
    Annualized Std Dev                   0.1137
    Annualized Sharpe (Rf=0%)            1.1681
    > maxDrawdown(tmp)
    [1] 0.2216417
    > CalmarRatio(tmp)
                 portfolio.returns
    Calmar Ratio         0.5990087
    

    Turns out, moving the rebalancing from annually to weekly didn’t have much of an effect here (besides give a bunch of money to your broker, if you factored in transaction costs, which this doesn’t).

    So, that’s how this tool works. The results, of course, begin from the latest instrument’s inception. The trick, in my opinion, is to try and find proxy substitutes with longer histories for newer ETFs that are simply leveraged ETFs, such as using a 60% weight in TLT with an 80% weight in XLP instead of a 20% weight in TMF with 80% allocation in SPLV.

    For instance, here are some proxies:

    SPLV = XLP
    SPXL/UPRO = SPY * 3
    TMF = TLT * 3
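
    As a usage sketch of that substitution (my own example; the two runs will differ because of inception dates and fund expenses), both versions can be pulled through the wrapper and compared:

    # longer-history proxy: 80% XLP / 60% TLT
    proxyRets <- LongSeeker(c("XLP", "TLT"), c(.8, .6), displayStats = FALSE, outputReturns = TRUE)
    # shorter-history original: 80% SPLV / 20% TMF
    originalRets <- LongSeeker(c("SPLV", "TMF"), c(.8, .2), displayStats = FALSE, outputReturns = TRUE)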

    That said, I’ve worked with Harry Long before, and he develops more sophisticated strategies behind the scenes, so I’d recommend that SeekingAlpha readers take his publicly released strategies as concept demonstrations, as opposed to fully-fledged investment ideas, and contact Mr. Long himself about more customized, private solutions for investment institutions if you are so interested.

    Thanks for reading.

    NOTE: I am currently in the northeast. While I am currently contracting, I am interested in networking with individuals or firms with regards to potential collaboration opportunities.


    Read more »
  • How To Compute Turnover With Return.Portfolio in R

    This post will demonstrate how to take turnover into account when dealing with returns-based data, using PerformanceAnalytics and the Return.Portfolio function in R. It will demonstrate this with a basic strategy on the nine sector SPDRs.

    So, first off, this is in response to a question posed by one Robert Wages on the R-SIG-Finance mailing list. While there are many individuals out there with a plethora of questions (many of which have already been demonstrated on this blog), occasionally there will be an industry veteran, a PhD statistics student from Stanford, or another very intelligent individual who asks a question about a topic I haven’t yet touched on this blog, which will prompt a post to demonstrate another technical aspect found in R. This is one of those times.

    So, this demonstration will be about computing turnover in returns space using the PerformanceAnalytics package. Simply, outside of the PortfolioAnalytics package, PerformanceAnalytics with its Return.Portfolio function is the go-to R package for portfolio management simulations, as it can take a set of weights, a set of returns, and generate a set of portfolio returns for analysis with the rest of PerformanceAnalytics’s functions.

    Again, the strategy is this: take the 9 three-letter sector SPDRs (since there are four-letter ETFs now), and at the end of every month, if the adjusted price is above its 200-day moving average, invest into it. Normalize across all invested sectors (that is, 1/9th if invested into all 9, 100% into 1 if only 1 invested into, 100% cash, denoted with a zero return vector, if no sectors are invested into). It’s a simple, toy strategy, as the strategy isn’t the point of the demonstration.

    Here’s the basic setup code:

    require(TTR)
    require(PerformanceAnalytics)
    require(quantmod)
    
    symbols <- c("XLF", "XLK", "XLU", "XLE", "XLP", "XLI", "XLB", "XLV", "XLY")
    getSymbols(symbols, src='yahoo', from = '1990-01-01')
    prices <- list()
    for(i in 1:length(symbols)) {
      tmp <- Ad(get(symbols[[i]]))
      prices[[i]] <- tmp
    }
    prices <- do.call(cbind, prices)
    
    # Our signal is a simple adjusted price over 200 day SMA
    signal <- prices > xts(apply(prices, 2, SMA, n = 200), order.by=index(prices))
    
    # equal weight all assets with price above SMA200
    returns <- Return.calculate(prices)
    weights <- signal/(rowSums(signal)+1e-16)
    
    # With Return.portfolio, need all weights to sum to 1
    weights$zeroes <- 1 - rowSums(weights)
    returns$zeroes <- 0
    
    monthlyWeights <- na.omit(weights[endpoints(weights, on = 'months'),])
    weights <- na.omit(weights)
    returns <- na.omit(returns)
    

    So, get the SPDRs, put them together, compute their returns, generate the signal, and create the zero vector, since Return.portfolio treats weight vectors summing to less than 1 as a partial withdrawal of capital, and sums above 1 as the addition of more capital (big FYI here).

    Now, here’s how to compute turnover:

    out <- Return.portfolio(R = returns, weights = monthlyWeights, verbose = TRUE)
    beginWeights <- out$BOP.Weight
    endWeights <- out$EOP.Weight
    txns <- beginWeights - lag(endWeights)
    monthlyTO <- xts(rowSums(abs(txns[,1:9])), order.by=index(txns))
    plot(monthlyTO)
    

    So, the trick is this: when you call Return.portfolio, use the verbose = TRUE option. This creates several objects, among them returns, BOP.Weight, and EOP.Weight. These stand for Beginning Of Period Weight, and End Of Period Weight.

    Turnover is computed as the difference between where the portfolio ended the previous period, after that day’s drift/returns, and where it actually stands at the beginning of the next period, after rebalancing. That is, the end-of-period weight is the beginning-of-period weight after taking into account that day’s drift/return for each asset, and the new beginning-of-period weight is the previous end-of-period weight plus any transacting that would have been done. Thus, in order to find the actual transactions (or turnover), one subtracts the previous end-of-period weight from the beginning-of-period weight.
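
    As a toy illustration of that subtraction (hypothetical numbers for a single asset):

    eopPrevious <- 0.12      # yesterday's end-of-period weight, after the day's drift
    bopCurrent <- 0.10       # today's beginning-of-period weight, after rebalancing
    bopCurrent - eopPrevious # the transaction for this asset: -0.02, i.e. a 2% trim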

    This is what such transactions look like for this strategy.

    Something we can do with such data is take a one-year rolling turnover, accomplished with the following code:

    yearlyTO <- runSum(monthlyTO, 252)
    plot(yearlyTO, main = "running one year turnover")
    

    It looks like this:

    This essentially means that one year’s worth of two-way turnover (that is, if selling an entirely invested portfolio is 100% turnover, and buying an entirely new set of assets is another 100%, then two-way turnover is 200%) is around 800% at maximum. That may be pretty high for some people.

    Now, here’s the application when you penalize transaction costs at 20 basis points of the notional traded (that is, it costs 20 cents to transact $100).

    txnCosts <- monthlyTO * -.0020
    retsWithTxnCosts <- out$returns + txnCosts
    compare <- na.omit(cbind(out$returns, retsWithTxnCosts))
    colnames(compare) <- c("NoTxnCosts", "TxnCosts20BPs")
    charts.PerformanceSummary(compare)
    table.AnnualizedReturns(compare)
    

    And the result:

    
                              NoTxnCosts TxnCosts20BPs
    Annualized Return             0.0587        0.0489
    Annualized Std Dev            0.1554        0.1553
    Annualized Sharpe (Rf=0%)     0.3781        0.3149
    

    So, at 20 basis points on transaction costs, that takes about one percent in returns per year out of this (admittedly, terrible) strategy. This is far from negligible.

    So, that is how you actually compute turnover and transaction costs. In this case, the transaction cost model was very simple. However, given that Return.portfolio returns transactions at the individual asset level, one could get as complex as they would like with modeling the transaction costs.

    Thanks for reading.

    NOTE: I will be giving a lightning talk at R/Finance, so for those attending, you’ll be able to find me there.


    Read more »
  • Create Amazing Looking Backtests With This One Wrong–I Mean Weird–Trick! (And Some Troubling Logical Invest Results)

    This post will outline an easy-to-make mistake in writing vectorized backtests–namely in using a signal obtained at the end of a period to enter (or exit) a position in that same period. The difference in results one obtains is massive.

    Today, I saw two separate posts from Alpha Architect and Mike Harris, both referencing a paper by Valeriy Zakamulin showing that some previous trend-following research by Glabadanidis produced shoddy results, and that Glabadanidis’s results were only reproducible through instituting lookahead bias.

    The following code shows how to reproduce this lookahead bias.

    First, the setup of a basic moving average strategy on the S&P 500 index from as far back as Yahoo data will provide.

    require(quantmod)
    require(xts)
    require(TTR)
    require(PerformanceAnalytics)
    
    getSymbols('^GSPC', src='yahoo', from = '1900-01-01')
    monthlyGSPC <- Ad(GSPC)[endpoints(GSPC, on = 'months')]
    
    # change this line for signal lookback
    movAvg <- SMA(monthlyGSPC, 10)
    
    signal <- monthlyGSPC > movAvg
    gspcRets <- Return.calculate(monthlyGSPC)
    

    And here is how to institute the lookahead bias.

    lookahead <- signal * gspcRets
    correct <- lag(signal) * gspcRets
    

    These are the “results”:

    compare <- na.omit(cbind(gspcRets, lookahead, correct))
    colnames(compare) <- c("S&P 500", "Lookahead", "Correct")
    charts.PerformanceSummary(compare)
    rbind(table.AnnualizedReturns(compare), maxDrawdown(compare), CalmarRatio(compare))
    logRets <- log(cumprod(1+compare))
    chart.TimeSeries(logRets, legend.loc='topleft')
    

    Of course, this equity curve is of no use, so here’s one in log scale.

    As can be seen, lookahead bias makes a massive difference.

    Here are the numerical results:

                                S&P 500  Lookahead   Correct
    Annualized Return         0.0740000 0.15550000 0.0695000
    Annualized Std Dev        0.1441000 0.09800000 0.1050000
    Annualized Sharpe (Rf=0%) 0.5133000 1.58670000 0.6623000
    Worst Drawdown            0.5255586 0.08729914 0.2699789
    Calmar Ratio              0.1407286 1.78119192 0.2575219
    

    Again, absolutely ridiculous.

    Note that when using Return.portfolio (the function in PerformanceAnalytics), that function will automatically apply your weights to the next period’s returns, instead of the current period’s. However, for those writing “simple” backtests that can be quickly done using vectorized operations, an off-by-one error can make all the difference between a backtest in the realm of reasonable, and pure nonsense. However, should one wish to test for said nonsense when faced with impossible-to-replicate results, the mechanics demonstrated above are the way to do it.

    Now, onto other news: I’d like to thank Gerald M for staying on top of one of the Logical Invest strategies–namely, their simple global market rotation strategy outlined in an article from an earlier blog post.

    Up until March 2015 (the date of the blog post), the strategy had performed well. However, after said date?

    It has been a complete disaster, which, in hindsight, was evident when I passed it through the hypothesis-driven development framework process I wrote about earlier.

    So, while there has been a great deal written about not simply throwing away a strategy because of short-term underperformance, and that anomalies such as momentum and value exist because of career risk due to said short-term underperformance, it’s never a good thing when a strategy creates historically large losses, particularly after being published in such a humble corner of the quantitative financial world.

    In any case, this was a post demonstrating some mechanics, and an update on a strategy I blogged about not too long ago.

    Thanks for reading.

    NOTE: I am always interested in hearing about new opportunities which may benefit from my expertise, and am always happy to network. You can find my LinkedIn profile here.


    Read more »
  • Are R^2s Useful In Finance? Hypothesis-Driven Development In Reverse

    This post will shed light on the values of R^2s behind two rather simplistic strategies — the simple 10 month SMA, and its relative, the 10 month momentum (which is simply a difference of SMAs, as Alpha Architect showed in their book DIY Financial Advisor).

    Not too long ago, a friend of mine named Josh asked me a question regarding R^2s in finance. He’s finishing up his PhD in statistics at Stanford, so when people like that ask me questions, I’d like to answer them. His assertion is that in some instances, models that have less than perfect predictive power (an R^2 of .4, for instance) can still deliver very promising predictions, and that if someone were to have a financial model that was able to explain 40% of the variance of returns, they could happily retire with that model making them very wealthy. Indeed, .4 is a very optimistic outlook (to put it lightly), as this post will show.

    In order to illustrate this example, I took two “staple” strategies — buy SPY when its closing monthly price is above its ten month simple moving average, and when its ten month momentum (basically the difference of a ten month moving average and its lag) is positive. While these models are simplistic, they are ubiquitously talked about, and many momentum strategies are an improvement upon these baseline, “out-of-the-box” strategies.

    Here’s the code to do that:

    require(xts)
    require(quantmod)
    require(PerformanceAnalytics)
    require(TTR)
    
    getSymbols('SPY', from = '1990-01-01', src = 'yahoo')
    adjustedPrices <- Ad(SPY)
    monthlyAdj <- to.monthly(adjustedPrices, OHLC=TRUE)
    
    spySMA <- SMA(Cl(monthlyAdj), 10)
    spyROC <- ROC(Cl(monthlyAdj), 10)
    spyRets <- Return.calculate(Cl(monthlyAdj))
    
    smaRatio <- Cl(monthlyAdj)/spySMA - 1
    smaSig <- smaRatio > 0
    rocSig <- spyROC > 0
    
    smaRets <- lag(smaSig) * spyRets
    rocRets <- lag(rocSig) * spyRets
    

    And here are the results:

    strats <- na.omit(cbind(smaRets, rocRets, spyRets))
    colnames(strats) <- c("SMA10", "MOM10", "BuyHold")
    charts.PerformanceSummary(strats, main = "strategies")
    rbind(table.AnnualizedReturns(strats), maxDrawdown(strats), CalmarRatio(strats))
    

                                  SMA10     MOM10   BuyHold
    Annualized Return         0.0975000 0.1039000 0.0893000
    Annualized Std Dev        0.1043000 0.1080000 0.1479000
    Annualized Sharpe (Rf=0%) 0.9346000 0.9616000 0.6035000
    Worst Drawdown            0.1663487 0.1656176 0.5078482
    Calmar Ratio              0.5860332 0.6270657 0.1757849
    

    In short, the SMA10 and the 10-month momentum (aka ROC 10 aka MOM10) both handily outperform the buy and hold, not only in absolute returns, but especially in risk-adjusted returns (Sharpe and Calmar ratios). Again, simplistic analysis, and many models get much more sophisticated than this, but once again, simple, illustrative example using two strategies that outperform a benchmark (over the long term, anyway).

    Now, the question is, what was the R^2 of these models? To answer this, I took a rolling five-year window that essentially asked: how well did these quantities (the ratio between the closing price and the moving average – 1, or the ten month momentum) predict the next month’s returns? That is, what proportion of the variance is explained through the monthly returns regressed against the previous month’s signals in numerical form (perhaps not the best framing, as the signal is binary as opposed to continuous which is what is being regressed, but let’s set that aside, again, for the sake of illustration).

    Here’s the code to generate the answer.

    predictorsAndPredicted <- na.omit(cbind(lag(smaRatio), lag(spyROC), spyRets))
    R2s <- list()
    for(i in 1:(nrow(predictorsAndPredicted)-59))  { #rolling five-year regression
      subset <- predictorsAndPredicted[i:(i+59),]
      smaLM <- lm(subset[,3]~subset[,1])
      smaR2 <- summary(smaLM)$r.squared
      rocLM <- lm(subset[,3]~subset[,2])
      rocR2 <- summary(rocLM)$r.squared
      R2row <- xts(cbind(smaR2, rocR2), order.by=last(index(subset)))
      R2s[[i]] <- R2row
    }
    R2s <- do.call(rbind, R2s)
    par(mfrow=c(1,1))
    colnames(R2s) <- c("SMA", "Momentum")
    chart.TimeSeries(R2s, main = "R2s", legend.loc = 'topleft')
    

    And the answer, in pictorial form:

    In short, even in the best-case scenarios–namely, the crises that give momentum/trend-following (call it what you will) its raison d’être, that is, its risk-management appeal–the proportion of variance explained by the actual signal quantities was very small. In the best of times, around 20%. But then again, think about what the R^2 value actually is–it’s the percentage of variance explained by a predictor. If a small set of signals (let alone one) was able to explain the majority of the change in the returns of the S&P 500, or even a not-insignificant portion, the person holding them would stand to become very wealthy. More to the point, given that two strategies that handily outperform the market have R^2s that are exceptionally low for extended periods of time, it goes to show that holding the R^2 up as some form of statistical holy grail is certainly incorrect in the general sense, and anyone who does so is either painting with too broad a brush, creating disingenuous arguments, or should simply attempt to understand another field which may not work the way their intuition tells them.

    Thanks for reading.


    Read more »
  • A Book Review of ReSolve Asset Management’s Adaptive Asset Allocation

    This post will review “Adaptive Asset Allocation: Dynamic Global Portfolios to Profit in Good Times – and Bad” by the people at ReSolve Asset Management. Overall, this book is a definite must-read for those who have never been exposed to the ideas within it. However, when it comes to a solution that can be fully replicated, this book is lacking.

    Okay, it’s been a while since I reviewed my last book, DIY Financial Advisor, from the awesome people at Alpha Architect. This book, in my opinion, is set up in a similar sort of format.

    This is the structure of the book, and my reviews along with it:

    Part 1: Why in the heck you actually need to have a diversified portfolio, and why a diversified portfolio is a good thing. In a world in which there is so much emphasis put on single-security performance, this is certainly something that absolutely must be stated for those not familiar with portfolio theory. It highlights the example of two people–one from Abbott Labs, and one from Enron, who had so much of their savings concentrated in their company’s stock. Mr. Abbott got hit hard and changed his outlook on how to save for retirement, and Mr. Enron was never heard from again. Long story short: a diversified portfolio is good, and a properly diversified portfolio can offset one asset’s zigs with another asset’s zags. This is the key to establishing a stream of returns that will help meet financial goals. Basically, this is your common sense story (humans love being told stories) so as to motivate you to read the rest of the book. It does its job, though for someone like me, it’s more akin to a big “wait for it, wait for it…and there’s the reason why we should read on, as expected”.

    Part 2: Something not often brought up in many corners of the quant world (because it’s real-life, boring stuff) is the importance not only of average returns, but of *when* those returns are achieved. Namely, imagine your everyday saver. At the beginning of their career, they’re taking home less salary and have less money in their retirement portfolio (or speculation portfolio, but the book uses a retirement portfolio). As they get into middle age and closer to retirement, they have a lot more money in said retirement portfolio. Thus, strong returns are most vital when there is more cash available *to* the portfolio, and the difference between mediocre returns at the beginning and strong returns at the end of one’s working life, as opposed to vice versa, is *astronomical* and cannot be overstated. Furthermore, once *in* retirement, strong returns in the early years matter far more than returns in the later years once money has been withdrawn from the portfolio (though I’d hope that a portfolio’s returns can be so strong that one can simply “live off the interest”). Or, put more intuitively: when you have $10,000 in your portfolio, a 20% drawdown doesn’t exactly hurt, because you can make more money and put more into your retirement account. But when you’re 62 and have $500,000 and suddenly lose 30% of everything, well, that’s massive. How much an investor wants to avoid such a scenario cannot be overstated. Warren Buffett once said that if you can’t bear to lose 50% of everything, you shouldn’t be in stocks. I really like this part of the book because it shows just how dangerous the ideas of “a 50% drawdown is unavoidable” and other “stay invested for the long haul” refrains are. Essentially, this part of the book makes a resounding statement that any financial adviser keeping his or her clients invested in equities when they’re near retirement age is doing something not very advisable, to put it lightly. In my opinion, those who advise pension funds should especially keep this section of the book in mind, since for some people, the long term may be coming to an end, and what matters is not only steady returns, but making sure the strategy doesn’t fall off a cliff and destroy decades of hard-earned savings.
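
    As a quick illustration of that sequencing point (my own toy sketch, not from the book), the code below applies the exact same set of hypothetical annual returns to a saver contributing $10,000 per year, once in the original order and once reversed. The average return is identical in both cases; the ending wealth is not.

    # My own toy illustration, not from the book: sequence-of-returns risk.
    set.seed(1)
    annualRets <- rnorm(30, mean = 0.07, sd = 0.15)   # hypothetical annual returns
    growPortfolio <- function(returns, contribution = 1e4) {
      value <- 0
      for (r in returns) value <- (value + contribution) * (1 + r)   # contribute, then compound
      value
    }
    growPortfolio(annualRets)        # strong years land wherever they happen to fall
    growPortfolio(rev(annualRets))   # identical returns, opposite order, different ending value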

    Part 3: This part is also one that is a very important read. First off, it lays out in clear terms that long-term forward-looking return expectations for equities are at rock bottom. That is, the expected forward 15-year returns are very low, using approximately 75 years of evidence. Currently, according to the book, equity valuations imply a *negative* 15-year forward return. However, one thing I *will* take issue with is that while forward-looking long-term returns for equities may be very low, if one believed this chart and only invested in the stock market when forecast 15-year returns were above the long-term average, one would have missed out on both the 2003-2007 bull run *and* the one since 2009 that’s just about over. So, while the book makes a strong case for caution, readers should also take the chart with a grain of salt, in my opinion. Separately, another aspect of portfolio construction that this book covers is how to construct a robust (assets for any economic environment) and coherent (asset classes balanced in number) universe for implementation with any asset allocation algorithm. I think this bears repeating: universe selection is an extremely important topic in the discussion of asset allocation, yet there is very little discussion about it. Most research simply takes some “conventional universe”, such as “all stocks on the NYSE”, or “all the stocks in the S&P 500”, or “the entire set of the 50-60 most liquid futures”, without consideration for robustness and coherence. This book is the first source I’ve seen that actually puts this topic under a magnifying glass rather than picking with a finger in the air.

    Part 4: And here’s where I level my main criticism at this book. For those that have read “Adaptive Asset Allocation: A Primer”, this section of the book is basically one giant copy and paste. It’s all one large buildup to “momentum rank + min-variance optimization”. All well and good, except that there’s very little detail beyond the basics as to how the minimum variance portfolio was constructed. Namely, what exactly is the minimum variance algorithm in use? Is it one of the poor variants susceptible to the numerical instability inherent in inverting sample covariance matrices? Or is it a heuristic like David Varadi’s minimum variance and minimum correlation algorithms? The one feeling I absolutely could not shake was that this book had a perfect opportunity to present a robust approach to minimum variance, and instead, it’s long on concept, short on details. While the theory of “maximize return for unit risk” is all well and good, the actual algorithm to put that theory into practice is not trivial, with the solutions taught to undergrads and master’s students having some well-known weaknesses. On top of this, one thing that got hammered into my head in the past was that ranking *also* has a weakness at the inclusion/exclusion point. E.G. if, out of ten assets, the fifth asset had a momentum of, say, 10.9%, and the sixth asset had a momentum of 10.8%, how are we so sure the fifth is so much better? And while I realize that this book was ultimately meant to be a primer, in my opinion, it would have been a no-objections five-star if there were an appendix that actually went into some detail on how to go from the simple concepts to a concrete implementation, and included a small numerical example of some algorithms that may address the well-known weaknesses. This doesn’t mean Greek/mathematical jargon. Just an appendix that acknowledged that not every reader is someone only picking up his first or second book about systematic investing, and that some of us are familiar with the “whys” and are more interested in the “hows”. Furthermore, I’d really love to know where the authors of this book got their data to back-date some of these ETFs into the 90s.
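
    To make that criticism concrete, here is a minimal sketch (my own illustration, not the book’s algorithm) of the textbook fully invested minimum variance solution: invert the sample covariance matrix, multiply by a vector of ones, and rescale the weights to sum to one. That matrix inversion is precisely the numerically fragile step mentioned above, and nothing here constrains the weights to be long-only, which is part of why heuristics like minimum correlation exist in the first place.

    # My own illustration, not the book's method: textbook unconstrained
    # minimum variance weights, proportional to inverse(covariance) %*% 1.
    minVarWeights <- function(returns) {
      sigma <- cov(returns)                      # sample covariance matrix of asset returns
      invSigma <- solve(sigma)                   # the potentially ill-conditioned inversion
      w <- invSigma %*% rep(1, ncol(returns))
      as.numeric(w / sum(w))                     # rescale so weights sum to 1 (shorts allowed)
    }
    
    # Example with random, hypothetical return data:
    minVarWeights(matrix(rnorm(500 * 4, sd = 0.01), ncol = 4))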

    Part 5: Some more formal research on topics already covered in the rest of the book–namely a section about how many independent bets one can take as the number of assets grows, if I remember it correctly. Long story short? You *easily* get the most bang for your buck among disparate asset classes, such as treasuries of various durations, commodities, developed vs. emerging equities, and so on, as opposed to trying to pick among stocks in the same asset class (though there’s some potential for alpha there…just…a lot less than you imagine). So in case the idea of asset class selection, not stock selection, wasn’t beaten into the reader’s head before this point, this part should do the trick. The other research paper is something I briefly skimmed over which went into more depth about volatility and retirement portfolios, though I felt that the book covered this topic earlier on to a sufficient degree by building up the intuition using very understandable scenarios.

    So that’s the review of the book. Overall, it’s a very solid piece of writing, and as far as establishing the *why*, it does an absolutely superb job. For those that aren’t familiar with the concepts in this book, this is definitely a must-read, and ASAP.

    However, for those familiar with most of the concepts and looking for a detailed “how” procedure, this book does not deliver as much as I would have liked. And while I realize that it’s a bad idea to publish the secret sauce, I bought this book in the hope of being exposed to a new algorithm presented in the same understandable and intuitive language the rest of the book was written in, and I was left wanting.

    Still, that by no means diminishes the impact of the rest of the book. For those who are more likely to be its target audience, it’s a 5/5. For those that wanted some specifics, it still has its gem on universe construction.

    Overall, I rate it a 4/5.

    Thanks for reading.


    Read more »
  • On The Relationship Between the SMA and Momentum

    Happy new year. This post will be a quick one covering the relationship between the simple moving average and time series momentum. The implication is that one can potentially derive better time series momentum indicators than the classical one applied in so many papers.

    Okay, so the main idea for this post is quite simple:

    I’m sure we’re all familiar with classical momentum. That is, the price now compared to the price however long ago (3 months, 10 months, 12 months, etc.). E.G. P(now) – P(10)
    And I’m sure everyone is familiar with the simple moving average indicator, as well. E.G. SMA(10).

    Well, as it turns out, these two quantities are actually related.

    It turns out that if, instead of expressing momentum as the difference of two prices, it is expressed as the sum of month-over-month price changes, it can be written (for a 10-month momentum) as:

    MOM_10 = this month’s price change + last month’s price change + the price change from 2 months ago + … + the price change from 9 months ago, for a total of 10 months in our little example.

    This can be written as MOM_10 = (P(0) – P(1)) + (P(1) – P(2)) + … + (P(9) – P(10)). (Each difference within parentheses is one month’s worth of price change.)

    Which can then be rewritten, by regrouping terms, as: (P(0) + P(1) + … + P(9)) – (P(1) + P(2) + … + P(10)).

    In other words, momentum (aka the difference between two prices) can be rewritten as the difference between two rolling sums of prices. And what is a simple moving average? Simply a rolling sum of prices divided by however many prices were summed. So a 10-month momentum is just 10 times the month-over-month change in a 10-month SMA, which is why the code below divides the momentum signal by 10.
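
    As a quick sanity check with made-up numbers (my own toy example): take three monthly prices, P(2) = 100, P(1) = 103, P(0) = 105. The 2-month momentum is 105 – 100 = 5, while the difference of consecutive 2-month SMAs is (105 + 103)/2 – (103 + 100)/2 = 104 – 101.5 = 2.5, which is exactly the momentum divided by the lookback of 2.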

    Here’s some R code to demonstrate.

    require(quantmod)
    require(TTR)
    require(PerformanceAnalytics)
    
    getSymbols('SPY', from = '1990-01-01')
    monthlySPY <- Ad(SPY)[endpoints(SPY, on = 'months')]
    monthlySPYrets <- Return.calculate(monthlySPY)
    #dividing by 10 since that's the moving average period for comparison
    signalTSMOM <- (monthlySPY - lag(monthlySPY, 10))/10 
    signalDiffMA <- diff(SMA(monthlySPY, 10))
    
    # rounding to avoid spurious floating-point differences in the comparison
    sum(round(signalTSMOM, 3)==round(signalDiffMA, 3), na.rm=TRUE)
    

    With the resulting number of times these two signals are equal:

    [1] 267
    

    In short, every time.

    Now, what exactly is the punchline of this little example? Here’s the punchline:

    The simple moving average is…fairly simplistic as far as filters go. It works as a pedagogical example, but it has some well known weaknesses regarding lag, windowing effects, and so on.

    Here’s a toy example of how one can get a different momentum signal by changing the filter.

    # toy strategy: long SPY when the classical (SMA-derived) momentum signal is positive
    toyStrat <- monthlySPYrets * lag(signalTSMOM > 0)
    
    # same construction, but with an EMA instead of an SMA as the underlying filter
    emaSignal <- diff(EMA(monthlySPY, 10))
    emaStrat <- monthlySPYrets * lag(emaSignal > 0)
    
    comparison <- cbind(toyStrat, emaStrat)
    colnames(comparison) <- c("DiffSMA10", "DiffEMA10")
    charts.PerformanceSummary(comparison)
    table.AnnualizedReturns(comparison)
    

    With the following results:

                              DiffSMA10 DiffEMA10
    Annualized Return            0.1051    0.0937
    Annualized Std Dev           0.1086    0.1076
    Annualized Sharpe (Rf=0%)    0.9680    0.8706
    

    While the difference-of-EMA10 strategy didn’t do better than the difference-of-SMA10 strategy (aka standard 10-month momentum), that’s not the point. The point is that the classical momentum signal is derived from a simple moving average filter, and that by swapping in a different filter, one can still run a momentum-type strategy.

    Or, put differently, the general takeaway here is that momentum is the slope of a filter: one can compute momentum in any number of ways depending on the filter used, and come up with a myriad of different momentum strategies.
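
    For instance, here is a minimal sketch (my own toy addition, not from the original post) of the exact same construction with yet another filter from TTR, the weighted moving average (WMA), reusing the objects defined above:

    # Same momentum-as-filter-slope idea, with a WMA swapped in as the filter.
    wmaSignal <- diff(WMA(monthlySPY, 10))
    wmaStrat <- monthlySPYrets * lag(wmaSignal > 0)
    
    filterComparison <- cbind(toyStrat, emaStrat, wmaStrat)
    colnames(filterComparison) <- c("DiffSMA10", "DiffEMA10", "DiffWMA10")
    charts.PerformanceSummary(filterComparison)
    table.AnnualizedReturns(filterComparison)

    Any filter with a sensible notion of a slope (DEMA, ZLEMA, a Hull MA, and so on) can be dropped in the same way.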

    Thanks for reading.

    NOTE: I am currently contracting in Chicago, and am always open to networking. Contact me at my email at ilya.kipnis@gmail.com or find me on my LinkedIn here.


    Read more »
