1 Introduction and outline
In this introductory chapter, we review and comment on academic studies on technical analysis.
1.1 Technical analysis before the 1970s
Despite its popularity among practitioners, technical analysis was almost unanimously judged by the academic literature before the 1970s to be of no economic value in practice.
Cowles (1933) started by analyzing the weekly forecasting results of well-known professional agencies, such as financial services and fire insurance companies, in the period January 1928 through June 1932, and found no statistically significant forecasting performance. Furthermore, Cowles (1933) considered the 26-year forecasting record of William Peter Hamilton in the period December 1903 until his death in December 1929. During this period Hamilton wrote 255 editorials in the Wall Street Journal and presented forecasts for the stock market based on the Dow Theory. Cowles (1933) found that Hamilton failed to beat a simple continuous investment in the DJIA, after correcting for brokerage charges and cash dividends. On 90 occasions Hamilton announced changes in the outlook for the market. Cowles (1933) found that only half of these announcements were successful, no better than pure guessing. Cowles (1944) repeated the analysis for 11 forecasting companies over the longer period January 1928 through July 1943, and still found no significant evidence of forecasting power among analysts.
While Cowles (1933, 1944) focused on testing analysts’ advice, other academics shifted their attention to the behavior of time series of speculative prices. Working (1934), Kendall (1953) and Roberts (1959) found for series of speculative prices, such as American commodity prices of wheat and cotton, British indices of industrial share prices and the DJIA, that successive price changes, as measured by autocorrelation, were linearly independent, and that these series behaved much like random walks.
Since the dependence in price changes can be highly nonlinear and too complicated to be captured by standard linear statistical tools such as autocorrelations, Alexander (1961) began defining filters to reveal possible trends in stock prices which may be masked by the jiggling of the market. A filter strategy buys when price increases by x percent from a recent low and sells when price declines by x percent from a recent high. Thus filters can be used to identify local peaks and troughs according to the filter size. After applying several filters to the DJIA in the period 1897-1929 and the S&P Industrials in the period 1929-1959, Alexander (1961) concluded that in speculative markets a price move, once initiated, tended to persist. However, he also noticed that commissions could reduce the profitability. Mandelbrot (1963, p.418) noted that there was a flaw in the computations of Alexander (1961), since he assumed that the trader could buy exactly at the low plus x percent and could sell exactly at the high minus x percent. However, in real trading this would probably not be the case. In Alexander (1964) the computing mistake was corrected and allowance was made for transaction costs. The filter rules still reported considerable excess profits over the buy-and-hold strategy, but transaction costs wiped out all the profits.
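As a sketch under stated assumptions, the x-percent filter rule can be implemented as follows. This is an illustrative reconstruction, not Alexander's original procedure; the function name and the toy price series are made up, and the trader is assumed to be always in the market (long or short), as in Alexander's setup.

```python
# Illustrative x% filter rule: go long after price rises x% above the most
# recent trough, go short after it falls x% below the most recent peak.

def filter_rule_positions(prices, x=0.05):
    """Return a position (+1 long, -1 short) for each price in the series."""
    positions = []
    position = -1          # assumed starting position: short
    extreme = prices[0]    # running trough (when short) or peak (when long)
    for p in prices:
        if position < 0:
            extreme = min(extreme, p)      # track the recent low
            if p >= extreme * (1 + x):     # price up x% from the low: buy
                position, extreme = 1, p
        else:
            extreme = max(extreme, p)      # track the recent high
            if p <= extreme * (1 - x):     # price down x% from the high: sell
                position, extreme = -1, p
        positions.append(position)
    return positions

prices = [100, 99, 98, 103, 105, 104, 99, 98, 103]
print(filter_rule_positions(prices, x=0.05))
```

With a 5% filter, the sketch switches long once price rebounds 5% above the trough at 98, and back to short after the 5% decline from the peak at 105.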
Fama (1965) tried to show with various tests that price changes were independent and that historical stock prices could not be used to make valuable predictions about the future. He applied serial correlation tests, runs tests and Alexander’s filter technique to daily data of 30 individual stocks quoted in the DJIA in the period January 1956 through September 1962. The serial correlation tests indicated that the dependence in successive price changes was either extremely small or non-existent. The runs tests likewise did not show a large degree of dependence. Profits of the filter techniques were calculated by trading blocks of 100 shares and were corrected for dividends and transaction costs. The results showed no profitability. Moreover, even if there was some dependence, Fama (1965) argued that this dependence was too small to be profitably exploited because of transaction costs. Fama and Blume (1966) further applied Alexander’s filter approach to the same data set as in Fama (1965). They found that the buy-and-hold strategy could not consistently be outperformed.
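The runs test that Fama applied can be illustrated with a small sketch: count the runs of same-sign price changes and compare the count with its expectation under independence (the Wald-Wolfowitz statistic). The sign sequence below is artificial; too few runs suggests trending, too many suggests reversal.

```python
import math

def runs_test_z(signs):
    """Wald-Wolfowitz runs test z-statistic for a sequence of +1/-1 signs."""
    n_pos = signs.count(1)
    n_neg = signs.count(-1)
    n = n_pos + n_neg
    # A run ends wherever the sign flips.
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    expected = 2 * n_pos * n_neg / n + 1
    variance = (2 * n_pos * n_neg * (2 * n_pos * n_neg - n)) / (n ** 2 * (n - 1))
    return (runs - expected) / math.sqrt(variance)

# A perfectly alternating sequence has the maximum possible number of runs,
# giving a large positive z-statistic (strong negative dependence).
z = runs_test_z([1, -1] * 10)
print(round(z, 2))
```

An independent series would yield a z-statistic near zero, which is essentially what Fama reported for daily DJIA stocks.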
Levy (1971) first examined 32 possible forms of five point chart patterns, i.e. a pattern with two highs and three lows or two lows and three highs, which were claimed to represent channels, wedges, diamonds, symmetrical triangles, (reverse) head-and-shoulders, triple tops, and triple bottoms. Local extrema were determined with the help of Alexander’s (1961) filter techniques. After trading costs were taken into account it was concluded that none of the 32 patterns gave any evidence of profitable forecasting ability in either bullish or bearish direction when applied to 548 NYSE securities in the period July 1964 through July 1969.
In summary, early academic empirical studies concluded that successive price changes were independent and that trading strategies based on technical analysis were unprofitable. These empirical findings, combined with the theory of Paul Samuelson (1965), published in his influential paper ‘Proof that Properly Anticipated Prices Fluctuate Randomly’, led to the efficient markets hypothesis (EMH). Eugene Fama (1970) reviewed the theoretical and empirical literature on the EMH and concluded that the evidence in support of the EMH was very extensive and that contradictory evidence was sparse. Since then the EMH has been the central paradigm in financial economics. According to this hypothesis it is not possible to exploit any information set to predict future price changes. Therefore, trading systems based on past information should not generate profits in excess of equilibrium expected profits or returns. It became commonly accepted in academia that the study of past price trends and patterns is of no use in predicting future price movements.
1.2 Technical analysis during the 1990s–2000s
Little work on technical analysis appeared during the 1970s and 1980s due to the dominance of the efficient markets hypothesis in financial economics.
A critical problem concerning technical analysis is data snooping. Data snooping is the generic term for the danger that the best forecasting model found in a given data set by a certain specification search is just the result of chance instead of truly superior forecasting power. Jensen and Benington (1969, p.470) argued: “Likewise given enough computer time, we are sure that we can find a mechanical trading rule which works on a table of random numbers - provided of course that we are allowed to test the same rule on the same table of numbers which we used to discover the rule.”
Brock et al. (1992) claimed that they mitigated the problem of data snooping by (1) reporting the results of all tested trading strategies, (2) utilizing a very long data set, and (3) emphasizing the robustness of the results across various non-overlapping subperiods for statistical inference. They tested the forecastability of a set of 26 simple technical trading rules by applying them to the closing prices of the DJIA in the period January 1897 through December 1986, nearly 90 years of data. The set of trading rules consists of moving-average strategies and support-and-resistance rules, very popular among technical trading practitioners. Brock et al. (1992) found that all trading rules reported significant profits above the buy-and-hold benchmark in all periods by using simple t-ratios as test statistics. They also found that the patterns uncovered by their technical trading rules could not be explained by first-order autocorrelation or by changing expected returns caused by changes in volatility. Therefore Brock et al. (1992) concluded that the conclusion reached in earlier studies that technical analysis was useless might have been premature.
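One member of the Brock et al. rule family, the variable-length moving-average rule, can be sketched as follows: be long when the short moving average lies above the long one, short otherwise. The window lengths and price series here are illustrative toys, not those of the original study (which used windows such as 50 and 200 days).

```python
# A minimal variable-length moving-average crossover sketch.

def moving_average(prices, window):
    """Simple moving average; one value per fully covered window."""
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

def vma_signals(prices, short=2, long=4):
    """+1 (long) when the short MA is at or above the long MA, else -1.

    Signals are aligned to the dates where the long MA is defined.
    """
    short_ma = moving_average(prices, short)[long - short:]
    long_ma = moving_average(prices, long)
    return [1 if s >= l else -1 for s, l in zip(short_ma, long_ma)]

prices = [10, 11, 12, 13, 12, 11, 10, 9]
print(vma_signals(prices, short=2, long=4))
```

On this rising-then-falling series the rule flips from long to short once the 2-period average drops below the 4-period average.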
The strong results of Brock et al. (1992) renewed academic interest in testing the predictability of technical trading rules in the 1990s. The following mainly presents the major results concerning the predictability of technical analysis.
Levich and Thomas (1993) were the first to apply the bootstrap methodology, as introduced by Brock et al. (1992), to exchange rate data. Six filters and three moving averages were applied to the U.S. Dollar closing settlement prices of the BP, CD, DEM, JPY and SF futures contracts traded at the International Monetary Market of the Chicago Mercantile Exchange over the period 1973.01-1990.12. Consistent with Brock et al. (1992), they found that the simple technical trading rules generated unusual profits (no corrections were made for transaction costs) and that a random walk model could not explain these profits.
Lee and Mathur (1995) remarked that studies in favor of technical trading in exchange rate data were mostly conducted on U.S. Dollar denominated currencies, and conjectured that the positive results were likely due to central bank intervention. Therefore they tested the market efficiency of European foreign exchange markets by applying 45 different crossover moving-average trading strategies to six European spot cross-rates (JPY/BP, DEM/BP, JPY/DEM, SF/DEM and JPY/SF) over 1988.05-1993.12. After a correction for 0.1% transaction costs per trade, they found that moving-average trading rules were marginally profitable only for the JPY/DEM and JPY/SF cross rates. Further, it was found that in periods during which central bank intervention was believed to have taken place, the trading rules were not profitable in the European cross rates. Finally, Lee and Mathur (1995) applied a recursively optimizing test procedure with a rolling window for the purpose of testing out-of-sample forecasting power: every year the best trading rule of the previous half-year was applied. This out-of-sample test procedure also rejected the null hypothesis that moving averages had forecasting power.
Bessembinder and Chan (1995) tested whether the trading rule set of Brock et al. (1992) had forecasting power when applied to the stock market indices of Japan, Hong Kong, South Korea, Malaysia, Thailand and Taiwan over 1975.01-1989.12. They found that the rules were most successful in the markets of Malaysia, Thailand and Taiwan if the break-even round-trip transaction costs were set to be 1.57% on average. They concluded that excess profits over the buy-and-hold could be made, but emphasized the fact that the relative riskiness of the technical trading strategies was not controlled.
For the UK stock market, Hudson et al. (1996) tested the trading rule set of Brock et al. (1992) on daily data of the Financial Times Industrial Ordinary index over 1935.07-1994.01. They found that the trading rules on average generated an excess return of 0.8% per transaction over the buy-and-hold, but that the costs of implementing the strategy were at least 1% per transaction. Further results show that over the subperiod 1981-1994, the trading rules seemed to lose their forecasting power. Hence Hudson et al. (1996) concluded that although the technical trading rules examined did have predictive ability, their use would not allow investors to make excess returns in the presence of costly trading. Additionally, Mills (1997), using the bootstrap technique introduced by Brock et al. (1992) and assuming zero transaction costs, simultaneously found that the good results for the period 1935-1980 could not be explained by an AR-ARCH model for the daily returns. Again, for the period after 1980 it was found that the trading rules did not generate statistically significant results.
Bessembinder and Chan (1998) replicated the calculations of Brock et al. (1992) for the period 1926-1991 to assess the economic significance of the Brock et al. (1992) findings. Corrections were made for transaction costs and dividends and for non-synchronous trading. One-month treasury bills were used as proxy for the risk-free interest rate if no trading position was held in the market. It was computed that one-way break-even transaction costs were approximately 0.39% for the full sample. Although Bessembinder and Chan (1998) confirmed the results of Brock et al. (1992), they concluded that there was little reason to view the evidence of Brock et al. (1992) as indicative of market inefficiency.
Fernandez-Rodriguez et al. (2001) replicated the testing procedures of Brock et al. (1992) for daily data of the General Index of the Madrid Stock Exchange (IGBM) in the period January 1966 through October 1997. They found that, if transaction costs were not taken into consideration, technical trading rules had forecasting power in the Madrid Stock Exchange. Furthermore, the bootstrap results indicated that the forecasting power of the technical trading rules could not be explained by several null models for stock returns such as the AR(1), GARCH and GARCH-in-Mean models.
Ratner and Leal (1999) applied ten moving-average trading rules to daily local index inflation corrected closing levels for Argentina (Bolsa Indices General), Brazil (Indices BOVESPA), Chile (Indices General de Precios), India (Bombay Sensitive), Korea (Seoul Composite Index), Malaysia (Kuala Lumpur Composite Index), Mexico (Indice de Precios y Cotaciones), the Philippines (Manila Composite Index), Taiwan (Taipei Weighted Price Index) and Thailand (Bangkok S.E.T.) over 1982.01-1995.04. After correcting for transaction costs, the rules appeared to be significantly profitable only in Taiwan, Thailand and Mexico.
Isakov and Hollistein (1999) tested simple technical trading rules on the Swiss Bank Corporation (SBC) General Index and on some of its individual stocks UBS, ABB, Nestle, Ciba-Geigy and Zurich in the period 1969-1997. They were the first to augment moving-average trading strategies with momentum indicators or oscillators, so-called relative strength or stochastics. These oscillators were expected to indicate when an asset was overbought or oversold and were supposed to give appropriate signals when to step in or out of the market. Isakov and Hollistein (1999) found that the use of oscillators did not add to the performance of the moving averages. For the basic moving average strategies they found an average yearly excess return of 18% on the SBC index. However, it was concluded that in the presence of trading costs the rules were only profitable if the costs were not higher than 0.3-0.7% per transaction.
LeBaron (2000a) reviewed the paper of Brock et al. (1992) and tested whether the results found for the DJIA in the period 1897-1986 also held for the period after 1986. Two technical trading rules were applied to the data set, namely the 150-day single crossover moving average rule, because the research of Brock et al. (1992) pointed out that this rule performed consistently well over a number of subperiods, and a 150-day momentum strategy. LeBaron (2000a) found that the results of Brock et al. (1992) changed dramatically in the period 1988-1999. The trading rules seemed to have lost their predictive ability. For the period 1897-1986 the results could not be explained by a random walk model for stock returns, but for the period 1988-1999, in contrast, it was concluded that the null of a random walk could not be rejected. LeBaron (2000b) tested a 30-week single crossover moving-average trading strategy on weekly data at the close of London markets on Wednesdays of the U.S. Dollar against the BP, DEM and JPY in the period June 1973 through May 1998. It was found that the strategy performed very well on all three exchange rates in the subperiod 1973-1989, yielding significant positive excess returns of 8, 6.8 and 10.2% yearly for the BP, DEM and JPY respectively. However, for the subperiod 1990-1998 the results were no longer significant.
Coutts and Cheung (2000) applied the technical trading rule set of Brock et al. (1992) to daily data of the Hang Seng Index quoted at the Hong Kong Stock Exchange (HKSE) over 1985.10-1997.07. They found that although the trading range break-out rules had better results than the moving averages, they could not profitably be exploited after correcting for transaction costs. In contrast, Ming et al. (2000) found significant forecasting power for the strategies of Brock et al. (1992) when applied to the Kuala Lumpur Composite Index (KLCI) even after correction for transaction costs.
Detry and Gregoire (2001) tested 10 moving-average trading rules of Brock et al. (1992) on the indices of all 15 countries in the European Union. They found that their results strongly supported the conclusion of Brock et al. (1992) on the predictive ability of moving average rules.
Neftci (1991) showed that technical patterns could be fully characterized by using appropriate sequences of local minima and maxima. Hence it was concluded that any pattern can potentially be formalized. Osler and Chang (1995) were the first to evaluate the predictive power of head-and-shoulders patterns using a computer-implemented algorithm in foreign exchange rates. The features of the head-and-shoulders pattern were defined in terms of local minima and maxima that were found by applying Alexander’s (1961) filter techniques. The pattern recognition algorithm was applied to six currencies (JPY, DEM, CD, SF, FF and BP against the USD) in the period March 1973 to June 1994. Significance was tested with the bootstrap methodology described by Brock et al. (1992) under the null of a random walk and a GARCH model. It was found that the head-and-shoulders pattern had significant predictive power for the DEM and the JPY, also after correcting for transaction costs and interest rate differentials.
Lo et al. (2000) developed a pattern recognition algorithm based on non-parametric kernel regression to detect (inverse) head-and-shoulders, broadening tops and bottoms, triangle tops and bottoms, rectangle tops and bottoms, and double tops and bottoms patterns that were the most difficult to quantify analytically. The pattern recognition algorithm was applied to hundreds of NYSE and NASDAQ quoted stocks in the period 1962-1996. It was found that technical patterns did provide incremental information, especially for NASDAQ stocks. Further it was found that the most common patterns were double tops and bottoms, and (inverted) head-and-shoulders.
In summary, stimulated by the findings of Brock et al. (1992), a voluminous literature investigated the forecasting power of technical analysis in the 1990s and 2000s; however, conclusions on the economic value of technical analysis remained controversial.
1.3 Recent advances in technical analysis
Goyal and Welch (2008) showed that a long list of macroeconomic and financial predictors from the literature failed to deliver consistently superior out-of-sample forecasts of the U.S. equity premium relative to a simple forecast based on the historical average (the constant expected equity premium model). Recent work on technical analysis mainly concerns whether technical indicators are informative predictors, as measured by the out-of-sample R-square (Campbell and Thompson, 2008). The following presents the main results in academia.
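The out-of-sample R-square of Campbell and Thompson (2008) compares a model's squared forecast errors with those of the historical-average benchmark; the forecast adds value when the statistic is positive. A minimal sketch, where the return series and forecasts are made up purely for illustration:

```python
# Campbell-Thompson out-of-sample R^2: 1 - SSE(model) / SSE(benchmark).

def oos_r2(actual, forecast, benchmark):
    """Positive when the model forecast beats the benchmark forecast."""
    sse_model = sum((a - f) ** 2 for a, f in zip(actual, forecast))
    sse_bench = sum((a - b) ** 2 for a, b in zip(actual, benchmark))
    return 1 - sse_model / sse_bench

actual = [0.02, -0.01, 0.03, 0.00]          # realized equity premia (toy data)
forecast = [0.015, -0.005, 0.025, 0.005]    # hypothetical indicator-based forecasts
benchmark = [0.01, 0.01, 0.01, 0.01]        # historical-average forecasts
print(round(oos_r2(actual, forecast, benchmark), 3))
```

In the literature even small positive values (well under 1%) are considered economically meaningful at the monthly horizon; the large value produced by this toy data is not representative.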
Neely et al. (2014) analyzed monthly out-of-sample forecasts of the U.S. equity risk premium based on popular technical indicators (moving average rule, momentum rule and on-balance volume) in comparison to that of a set of well-known macroeconomic variables, and found that technical indicators had statistically and economically significant out-of-sample forecasting power and frequently outperformed the macroeconomic variables. Furthermore, they found out-of-sample predictability was closely connected to the business cycle for both technical indicators and macroeconomic variables, although in a complementary manner: technical indicators detected the typical decline in the equity risk premium near cyclical peaks, while macroeconomic variables more readily picked up the typical rise near cyclical troughs. It was concluded that utilizing information from both technical indicators and macroeconomic variables substantially increased the out-of-sample gains relative to using either macroeconomic variables or technical indicators alone.
Huang et al. (2015) extended the traditional predictive regression model to a state-dependent one, in which a state variable was used to indicate an up- or down-market. They found that U.S. stock returns for one to 12 months could be predicted negatively in the up-market and positively in the down-market by a mean reversion indicator that was defined as the past year cumulative return of the market portfolio minus its long-term mean and standardized by its annualized volatility, and this predictive pattern was found to be robust to cross-sectional portfolios sorted by size, book-to-market ratio, industry, momentum, and long- and short-term reversals.
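The mean reversion indicator as described can be sketched roughly as follows; the monthly return series and the exact annualization conventions here are illustrative assumptions, not the authors' precise construction.

```python
import statistics

def mean_reversion_indicator(monthly_returns):
    """Trailing 12-month return minus its long-term mean, scaled by
    annualized volatility (all estimated from the sample given)."""
    past_year = sum(monthly_returns[-12:])                   # past-year cumulative return
    long_term_mean = 12 * statistics.mean(monthly_returns)   # annualized mean return
    ann_vol = statistics.stdev(monthly_returns) * 12 ** 0.5  # annualized volatility
    return (past_year - long_term_mean) / ann_vol

# Toy monthly return series ending with a strong year:
returns = [0.01, -0.02, 0.03, 0.00, 0.01, 0.02,
           -0.01, 0.02, 0.01, 0.00, 0.03, 0.02,
           0.04, 0.05, 0.03, 0.04]
indicator = mean_reversion_indicator(returns)
print(indicator)
```

A positive value signals that the market has run above its long-term mean, which, per the state-dependent result described above, predicts returns negatively in an up-market.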
Goh et al. (2013) studied the predictability of technical indicators (moving average rule and on-balance volume) for U.S. government bond risk premia. They found that technical indicators had economically and statistically significant forecasting power both in- and out-of-sample, and for both short- and long-term government bonds, and that technical indicators were more useful than economic variables. Furthermore, they found that a forecasting model that combines information in technical indicators together with economic variables substantially outperformed forecasts based on models using economic variables only.
Using intraday data on the S&P 500 ETF over 1993.02.01-2013.12.31, Gao et al. (2015) documented an intraday momentum pattern whereby the first half-hour return on the market predicted the last half-hour return. The predictability was both statistically and economically significant, and was stronger on more volatile days, higher-volume days, recession days and some macroeconomic news release days. Moreover, they found that the intraday momentum was also strong for the ten other most actively traded ETFs.
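The intraday momentum check amounts to a predictive regression of the last half-hour return on the first half-hour return. A toy sketch with fabricated return data (a positive slope mimics the reported pattern; real tests use two decades of half-hour returns):

```python
# Ordinary least squares slope of y on x (with an intercept).

def ols_slope(x, y):
    """Slope coefficient of the regression y = a + b * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

first_half_hour = [0.004, -0.003, 0.002, -0.001, 0.005]  # fabricated returns
last_half_hour = [0.003, -0.002, 0.001, 0.000, 0.004]    # fabricated returns
print(ols_slope(first_half_hour, last_half_hour))
```

A significantly positive slope is what "the first half-hour return predicts the last half-hour return" means in regression terms.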
In summary, regarding out-of-sample predictability, recent work seems to confirm that technical analysis is informative for predicting future price changes.
In financial practice technical analysis is not free from being criticized because of its highly subjective nature: the geometric shapes in historical price charts are often in the eyes of the beholder. It is said that there are probably as many methods of combining and interpreting the various techniques as there are chartists themselves.
The attitude of many academics towards technical analysis is described by Malkiel (1996, p. 139): “Obviously, I’m biased against the chartist. This is not only a personal predilection but a professional one as well. Technical analysis is anathema to the academic world. We love to pick on it. Our bullying tactics are prompted by two considerations: (1) after paying transaction costs, the method does not do better than a buy-and-hold strategy for investors, and (2) it’s easy to pick on. And while it may seem a bit unfair to pick on such a sorry target, just remember: It’s your money we are trying to save.”
Due to its lack of theoretical underpinnings, technical analysis is known in some circles as “voodoo finance”, and chart reading is believed to share a pedestal with alchemy.
1.4 Outline of this book
This book is structured in five parts. Part I has two chapters: Chapter 1 presents a comprehensive review of technical analysis, and Chapter 2 presents the outline of this book. Part II introduces the basic concepts and statistical properties of candlesticks. There are two chapters in Part II: Chapter 3 gives the basic concepts of the candlestick, and Chapter 4 presents the basic statistical properties of candlestick charts. These properties establish the foundations of candlestick forecasting. Part III presents the statistical models. There are two chapters in this part. Chapter 5 proposes a decomposition-based vector autoregressive (DVAR) model for predicting returns. Compared with the traditional return-based time series modeling technique, the DVAR model employs the high, low and closing prices, which makes it more efficient in its use of information. Chapter 6 shows, using both theoretical explanation and empirical evidence, that the upper shadow and lower shadow are informative for predicting asset returns in the DVAR model. Parts II and III are the core of this book. Part IV presents the empirical applications of candlestick forecasting. Based on the statistical properties in Parts II and III, Part IV shows empirically how these statistical properties can be used in practice. There are five chapters in this part. Chapter 7 shows with an empirical example that the statistical properties of candlestick charts can be used in market volatility timing. Chapter 8 presents an empirical example showing how the statistical properties of the candlestick can be used to improve range forecasting. Chapter 9 demonstrates with an empirical example that the statistical properties of candlestick charts can be used to investigate information spillover effects across financial markets. Chapters 10-11 show empirically that the statistical properties of candlestick charts can be used to improve return forecasting. Part V concludes and proposes directions for future studies.