Chapter 1 Introduction

The volatility of the stock market, noted in several studies, has spurred attempts to determine its sources and its effects, particularly for prediction and for future trading in the stock market (1990; 1988; 1990; 1991).[1] However, because of the inherent uncertainty (1965)[2] in stock market trading, scholars have advocated several tests and methods for stock market forecasting. Models were thus established by notable scholars, the most popular being Fama's (1981, 1970; 1988; 1977),[3] which advocated the efficient market hypothesis (EMH).

In testing the weak form of market efficiency under the EMH (2003; 2003; 1965; 1962; 1962),[4] the random walk model has been utilized, since it is generally agreed that price changes are random and that past changes are not useful in forecasting future price changes, particularly once transaction costs are taken into account. While there have been contentions on this issue (1988; 1988),[5] the findings were inconclusive. (1970)[6] made a distinction between three forms of EMH: the weak form, the semi-strong form, and the strong form. The weak form of the hypothesis has been regarded as more consistent with the evidence than the other forms (2003).[7]


The random walk model has also been used to predict future prices from past data (2003; 1999; 1999),[8] an approach that (1991)[9] expanded to the prediction of future returns using accounting and macroeconomic variables. This model is usually applied in developed capital markets such as the United States and the United Kingdom.


Despite the strong following of the EMH, and of its weak form in particular, several scholars have challenged the model, arguing that stock prices adjust slowly to information (1976; 1981).[10]


A 1999 work acknowledged that the EMH is not a bulletproof description of price formation. However, no specific model of price formation has yet emerged that can replace the EMH. Following this caution, (2003)[11] proposed that the EMH paradigm be refined.


This study attempts to contribute to the literature by showing that Fama's weak-form EMH continues to be relevant in explaining and predicting stock market trading and prices in developed markets. Specifically, the UK market is used to test the model, through the FTSE 100 index.


 


  Chapter II Literature Review

 


This part of the study discusses the literature relevant to testing the weak-form hypothesis on the FTSE 100. It reviews the work that has been published on the topic by accredited scholars and researchers, allowing readers to map the field and to position this research within that context. Moreover, this part of the study justifies the need for the research: by demonstrating what is already known in the field, it becomes possible to identify the gap the research could fill. At the same time, it allows the researcher to establish the theoretical framework and methodological focus.

 


Stock Prices

To indicate the significance of this study for the existing literature, particularly in the field of trading, this part of the chapter discusses studies on the unpredictability of stock prices. Much of the literature on program trading considers its effect on stock price volatility. (1990) examine the consequences of program trading occurring on "triple-witching days," that is, dates when multiple derivative contracts on stocks simultaneously expire. As heavy program trading frequently occurs on these expiration dates, Stoll and Whaley's evidence of higher volatility suggests that program trading can be linked to increased volatility.


(1988)[12] studies the impact of stock-index futures and finds that volatility does not increase after the introduction of multiple derivative contracts. Since these contracts are frequently involved in program trading strategies, an increase in stock price volatility would have been consistent with a program trading effect. (1989)[13] note that this result depends on the sample period. (1989)[14] finds only a slight increase in volatility during the 1980s, suggesting that the increased program trading activity facilitated by futures trading had, at most, a very modest effect on volatility. (1989, 1991)[15] find that the volatility of stocks included in the Major Market Index (MMI) rose after the MMI futures contract was introduced. Their risk decomposition indicates that the systematic risk of these stocks rose. Since the MMI futures contract is frequently involved in program trading, this finding suggests that program trading led to higher volatility.


Moreover, (1991)[16] investigate returns on the Standard and Poor's 500 since the 1930s. They find that changes in volatility are conditional on the length of the holding period. There is strong evidence of an increase in return volatility during the 1980s for 15-minute holding periods. When longer holding periods are examined, it is much less evident that volatility has changed. (1990)[17] suggests a conceptual distinction between the volatility of price changes and price-change velocity. While statistical tests frequently demonstrate no change in volatility levels, the speed of price adjustments does appear to have increased during the 1980s. Froot and Perold (1990)[18] decompose price changes into bid-ask bounce, nontrading effects, and noncontemporaneous cross-stock correlations. They demonstrate that price adjustments occurred more rapidly during the 1980s. Concurrently, direct investigation of the effects of program trading finds temporary increases in volatility that are most prominent in index arbitrage activities. (1990)[19] review much of this evidence. (1988) regresses various measures of daily price volatility on program trading intensity, finding no significant effect. A Securities and Exchange Commission study (1989) finds a positive association between the daily volatility of changes in the Dow Jones Index and levels of program trading activity. (1989)[20] finds a significant relationship between price volatility and program trading activity in the three days prior to the October 19, 1987, market break. (1990) and (1991)[21] investigate intraday program trading, finding that responses to program trades are similar to those found for block trades. Using GARCH estimation procedures, (1994) finds a modest increase in the volatility of returns for one-day holding periods associated with sell program activity. Thus, the evidence is inconclusive.


  Trading in Uncertainty

Trading in the stock market is often shrouded in uncertainty. The best-known articulation of the nature of uncertainty comes out of the Keynesian tradition. Keynes took issue with his predecessors and contemporaries, for whom, at any given time, facts and expectations were assumed to be given in a definite and calculable form, and risks were supposed to be capable of exact actuarial computation. According to Keynes, concerning future economic events there is no scientific basis on which to form any calculable probability whatever; one simply does not know (1937). Partly underlying this non-probabilistic notion of uncertainty is the fact that, unlike throws of dice, each economic event is unique in its interrelationship with an ever-changing environment (1952). The future's vagueness is also due to the effects that today's actions, taken under the shadow of uncertainty, have on tomorrow's outcomes: future states of the world are not given, but are themselves partly functions of the guesses about them currently being made. Because economic choices are not repeatable, and expectations both reflect and affect the future in unpredictable ways, economic agents cannot converge to an understanding of some "true" model of the world. The implications for financial pricing are stark. In his (1936)[22] discussion of the stock market, whether investors are well or poorly informed, acting for the long term or to outguess the crowd, pricing is intrinsically speculative. But this speculation is neither "rational" (unbiased) nor "irrational" (wrong), as these terms are used in neoclassical parlance; true fundamental asset values in that sense simply cannot exist, because of the nature of knowledge and knowing. Under these circumstances, investors rely on "convention" to order their decision making (1936)[23]: "The essence of this convention lies in assuming that the existing state of affairs will continue indefinitely, except in so far as we have specific reasons to expect a change." At any time, convention provides a lens through which information is filtered.


Nevertheless, one need not take this to mean that the financial markets are always and everywhere just a wild casino. Embedded in this process at any point in time is a view of the “fundamentals” underlying prospects for individual assets and the economy. Indeed, investor activity aimed at ferreting out information, and anticipating its effect on prices, is constant and intense. But what is seen as the fundamentals changes in ways that are not uniquely related to given economic conditions. The fundamentals are a social construct. They evolve through the interaction of convention, information-gathering, market activity, investor psychology, and economic events. In “normal times” (1936),[24] the convention that has governed behavior in the recent past seems unproblematic and can actually lend a degree of stability to markets. At other times, existing convention may be shattered by events, or opposing conventional views of the fundamentals may be contending. Keynes used the term “confidence” to describe the broadly prevailing degree of belief that decision makers attach to their expectations about the future. Uncertainty means that investor confidence may fall away precipitously in periods of economic turbulence. Thus, a practical theory of the future based on convention has certain marked characteristics. In particular, being based on so flimsy a foundation, it is subject to sudden and violent changes (1937)[25]. It is this quality of Keynes’s framework that suits it admirably for analyzing a period of tumultuous financial change. Financial markets may be unstable, and their participants’ actions and shifting standards may exacerbate instability. With “conventional decision making” under true uncertainty, expectations tend to be “endogenous” (1993)[26]–they feed back upon themselves through their effect on unfolding market events. To illustrate,  (1977)[27] has used this kind of framework to describe the progressively rosier outlook and riskier behavior of financial market participants during a boom period, leading ultimately to unsupportable commitments and a bust. Nonetheless, Minsky’s theory of the evolution of financial fragility is indicative of questions left unanswered, even when uncertainty and endogenous expectations have been accounted for. The historical record of past financial disasters is freely available to market participants.


Concomitantly, competition under uncertainty can create instability. For Marx, at bottom, this is because competition is intrinsically “anarchic.” Goaded by the search for profits and the pressure of competitors, and linked in effect but not in action, firms make choices whose aggregate result may be incoherence. The most complete anarchy reigns among the capitalists themselves, and within this anarchy the social interconnection of production prevails over individual caprice only as an overwhelming natural law (1894). In this framework, there is no auctioneered equilibrium price vector to pre-coordinate economic activity.


 


Fama’s Model

Research into volatility itself has stimulated research into momentum and financial herding. (1995) and Wermers (1999)[28] concluded that a large part of herding behavior occurs when investors "momentum-follow," and Nofsinger and Sias (1999)[29] found evidence implicating the use of momentum strategies by growth-oriented funds as an important source of herding. What is worthy of note is that momentum and herding have a notable impact on market price that is not related to economic or financial fundamentals. Market price seldom corresponds to intrinsic value, and this disequilibrium can continue for extended periods of time. Moreover, whereas economic and financial fundamentals affect value, they are not the main movers of stock prices. In this regard, (1981)[30] found that a substantial fraction of return variation could not be explained by macroeconomic news. (1984)[31] found that news about weather conditions, the principal source of variation in the price of orange juice, explains only about 10% of the movement in orange juice futures prices. (1988)[32] further found that it is difficult to account for more than one-third of the monthly variation in individual stock returns on the basis of systematic economic influences. When investigating which factors moved share prices, (1989)[33] found that macroeconomic news explains only about one-fifth of the movement in stock prices, and they state: "The view that movement in stock prices reflect something other than news about fundamental values is consistent with evidence on the correlates of ex post returns" (1989). (1991)[34] established that the main driver of stock returns was changes in volatility, and that fundamental economic and financial factors were not the main drivers of changes in volatility. In fact, they found that as few as one-quarter of volatility shifts are associated with the release of significant (financial and economic) information.


It is this uncertainty of market behavior that brought about the emergence of frameworks such as the efficient market hypothesis (EMH). Nevertheless, debate continues over the validity of the EMH, which holds that security prices fully reflect all available information at any given time. (1970)[35] categorized market efficiency into three levels, in which the definition of information varies: the weak form, which deals with the information contained in previous prices or price trends; the semi-strong form, which broadens the definition to include all publicly available information; and the strong form, which broadens the definition to include even privately held information. Exhaustive tests have been conducted to determine the level of efficiency of large financial markets. If the EMH is true, no systematic trading strategy will result in significantly abnormal returns; at the weak level, this refers to strategies based on previous prices or price trends.


Basically, the efficient market hypothesis (EMH) states that it is impossible to consistently outperform the market on a risk-adjusted basis after transaction costs and taxes. It forms the basic benchmark of analysis in financial economics. The EMH has been extensively tested since the late 1960s, so much so that (1978)[36] said it may be the most extensively tested proposition in all the social sciences. In short, the EMH suggests that stock markets are 'rationally' priced. (1988),[37] among others, find that stock returns are somewhat predictable when analyzed over longer horizons than the daily or weekly intervals often used to test the propositions of the EMH.


A related problem is the sensitivity of results to the choice of variables included in the empirical model. To illustrate,  (1982)[38] report that their result supporting EMH based on the monthly New York Stock Exchange stock price index and M1 is overturned when their estimating equation includes the federal funds rate. Unfortunately, similar misspecification problems are found in many past studies. One pertinent issue examined in many previous studies is whether all available information is incorporated into current stock prices. In other words, do lags of economic variables have an important influence on current and future stock price movements? The proponents of EMH argue that stock prices respond only to unanticipated changes in macroeconomic variables ( 1982;  1983)[39].  (1981)[40] finds a significant expected inflation-stock return relationship, however, when the previous year’s growth rate in the monetary base is included in the regression.


Similarly, there are controversies regarding the effects of inflation on stock prices. Contrary to traditional belief, (1977) and (1983)[41] find that unexpected inflation and stock prices are negatively related. (1983)[42] claim that such a relationship is spurious (1986)[43] and provide evidence instead for a relationship between expected inflation and stock prices.


It is in Fama's model that the modern EMH, as illustrated in this portion of the chapter, has been among the primary frameworks for describing market behavior. Fama's study examines the association between daily price changes for stocks in the Dow Jones Industrial Average (DJIA) over the period January 1958 through September 1962. He examines interdependency by conducting runs tests and tests for serial correlation, using various lags, on log daily price relatives. For the one-day lag, a positive correlation is found for 22 of the 30 securities in the sample. The correlation coefficients, however, are generally small (the average correlation coefficient is only .0262). Furthermore, although 11 of the securities have correlation coefficients that are significantly different from zero at the .05 level, two of these 11 securities have negative correlation coefficients. Results from the runs tests find too few runs for 26 of the 30 securities in the sample, indicating a tendency for daily return signs to repeat. For eight of the 30 securities, the deviation from expectation is significant at the .05 level. From these results Fama stresses that "the absolute size of the serial correlation coefficients is always quite small" and that "the percentage differences between the actual and expected number of runs are quite small." He concludes that there is no evidence of important dependence from either an investment or a statistical point of view. A number of other studies during this same period reached similar conclusions when examining price dependency for daily security returns in other national markets, or for different time intervals or different commodities in the United States market. Fama's article remains the authoritative source on daily price dependency for security returns.
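The two checks Fama applies can be reproduced in a few lines. The following is a minimal sketch, not Fama's original procedure in every detail: it computes the lag-one serial correlation of log daily price relatives and a Wald-Wolfowitz runs test on their signs; the simulated price series is a placeholder for the DJIA data.

```python
import numpy as np
from scipy import stats

def lag1_autocorrelation(returns: np.ndarray) -> float:
    """Serial correlation between r(t) and r(t-1)."""
    return np.corrcoef(returns[:-1], returns[1:])[0, 1]

def runs_test(returns: np.ndarray) -> tuple[float, float]:
    """Runs test on return signs; returns (z statistic, two-sided p-value).
    Too few runs (z < 0) means return signs tend to repeat."""
    signs = np.sign(returns)
    signs = signs[signs != 0]                   # drop zero returns
    n_pos, n_neg = np.sum(signs > 0), np.sum(signs < 0)
    n = n_pos + n_neg
    runs = 1 + np.sum(signs[1:] != signs[:-1])  # a run ends at each sign change
    expected = 2 * n_pos * n_neg / n + 1
    variance = (expected - 1) * (expected - 2) / (n - 1)
    z = (runs - expected) / np.sqrt(variance)
    return z, 2 * (1 - stats.norm.cdf(abs(z)))

# Placeholder data: ~5 years of simulated daily prices.
prices = 100 * np.cumprod(1 + np.random.default_rng(0).normal(0, 0.01, 1250))
log_returns = np.diff(np.log(prices))
print("lag-1 serial correlation:", lag1_autocorrelation(log_returns))
print("runs test (z, p):", runs_test(log_returns))
```

A small serial correlation coefficient and a statistically unremarkable runs statistic, in Fama's reading, indicate no dependence worth exploiting.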


 


Weak Form of Market Efficiency: Related Studies

Few studies have tested the efficient market hypothesis (EMH) in emerging markets, compared to the volume of studies published on developed markets. It is generally assumed that emerging markets are less efficient than developed markets. Definitions of an emerging market highlight its growth potential as well as the rapid growth in the size of the market. However, it is not unlikely that market participants are less well informed and behave less rationally than in well-organized markets. The lack of financial development, especially in capital markets, is attributed to certain market imperfections such as transaction costs, lack of timely information, the cost of acquiring new information, and possibly greater uncertainty about the future (1971; 1972; 1973).[44] Different researchers define the emerging market in different ways. (1981)[45] characterizes the emerging market in terms of information availability: prices cannot be assumed to fully reflect all available information, and it cannot be assumed that investors will correctly interpret the information that is released. The corporation has greater potential to influence its own stock market price, and there is a greater possibility that its price will move in a manner not justified by the available information.


With this open market policy, speculation is common in emerging markets; large investors can easily move the market. In a less organized market, without market makers and timely information, there always remains a possibility for large investors and insiders to profit. The ability to predict stock price changes based on a given set of information lies behind the notion of stock market efficiency: the lower the market efficiency, the greater the predictability of stock price changes.


Weak-form tests measure whether past series of share prices or returns can be used to successfully predict future share prices or returns. The main empirical approach measures the statistical dependence between price changes. If no dependence is found, i.e., price changes are random, this provides evidence in support of the weak-form EMH (WFEMH) and implies that no profitable trading strategy can be derived from past prices. If, on the other hand, dependence is found, for example if price increases are generally followed by price increases in the next period and vice versa, this can form the basis of a profitable trading rule and violates the weak-form assumption. However, whether any trading rule is profitable depends largely on operating costs such as brokerage fees, interest costs, and trading settlement procedures, and on whether transactions can be made at the exact prices quoted in the market.
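As a concrete illustration of such a dependence test, the sketch below applies a Ljung-Box test, which jointly tests whether return autocorrelations up to a given lag are zero. The file name and column name are hypothetical placeholders for an actual FTSE 100 price history.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.diagnostic import acorr_ljungbox

# Hypothetical input: daily FTSE 100 closes in a CSV with a "Close" column.
prices = pd.read_csv("ftse100.csv")["Close"]
returns = np.log(prices).diff().dropna()

# Under the weak-form null, autocorrelations at all lags are zero;
# small p-values indicate dependence (gross of transaction costs).
print(acorr_ljungbox(returns, lags=[5, 10, 20], return_df=True))
```

Even a statistically significant rejection here does not by itself establish a profitable trading rule, for the cost reasons just noted.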


In general, previous research finds that the markets of developed economies are generally weak-form efficient; that is, successive returns are independent and follow a random walk (1965, 1970).[46] On the other hand, the findings for the markets of developing and less developed countries are controversial. Some researchers find evidence of weak-form efficiency and cannot reject the random walk hypothesis in emerging markets (1986; 1994; 1995),[47] whereas others find evidence of non-random stock price behavior and reject weak-form efficiency in developing and emerging markets (1996, 1998).[48]


The early studies testing weak-form efficiency began with developed markets and generally support weak-form efficiency, given the low degree of serial correlation and transaction costs (1962; 1962; 1965).[49] All of these studies support the proposition that price changes are random and that past changes are not useful in forecasting future price changes, particularly once transaction costs are taken into account. However, some studies found predictability of share price changes in developed markets (1988; 1988),[50] though they did not reach a conclusion about profitable trading rules.


(1988)[51] suggest that noise trading, i.e., trading by investors whose demand for shares is determined by factors other than expected returns, provides a plausible explanation for the transitory component in stock prices. They suggest that constructing and testing theories of noise trading, as well as theories of changing risk factors, could account for the characteristics of the stock return autocorrelogram they found. (1988)[52] conclude that autocorrelations may reflect market inefficiency or time-varying equilibrium expected returns generated by rational investor behavior; neither view suggests, however, that the patterns of autocorrelation should be stable over a long sample period. (1994)[53] found that technical trading rules have predictive power, but not sufficient to earn excess returns in the UK market. Similarly, (1997)[54] concludes that past returns have predictive power in the Australian market, but the degree of predictability is not high.


Overall, the empirical studies on developed markets show no profitability from using past price records, supporting the weak form of the EMH in general. On the other hand, the findings on weak-form efficiency in the markets of developing and less developed countries are controversial. Most less developed markets suffer from thin trading; in addition, in smaller markets it is easier for large traders to manipulate prices. Though it is generally believed that emerging markets are less efficient, the empirical evidence does not always support this belief. There are two groups of findings. The first group finds weak-form efficiency in developing and less developed markets despite the problems of thin trading: (1986) on the Kuala Lumpur Stock Exchange; (1992) in major Asian markets; (1994) on the Nairobi Stock Exchange; and Ojah and Karemera (1999) on four Latin American markets.[55]


On the other hand, the latter group finds that the markets of developing and less developed countries are not efficient in the weak sense: (1993)[56] on the stock markets of Korea and Taiwan; a World Bank study by (1995),[57] which reports significant serial correlation in equity returns from 19 emerging markets and suggests that stock prices in emerging markets violate the weak-form EMH; and similar findings reported by (1994) for most emerging markets. (1998)[58] examined the behavior of stock prices in the Saudi financial market, seeking evidence of weak-form efficiency, and found that the market is not weak-form efficient. He explained that the inefficiency might be due to delays in operations, high transaction costs, thinness of trading, and illiquidity in the market. (1978) and (1996)[59] find evidence of non-random stock price behavior and market inefficiency (i.e., not weak-form efficient) on the Johannesburg Stock Exchange and in the Indian market.


 


Testing Weak Form Efficiency


The efficient market hypothesis (EMH) in its weak form declares that all past information is included in the price (1999).[60] If the market is weak-form efficient, there cannot be any opportunity to predict future price movements using past prices (1991).[61] Nevertheless, there are techniques that attempt to work out price movements from patterns in past data, such as technical analysis; if these techniques provided opportunities to beat the market, that would prove the market inefficient in the weak form (2003).[62] However, Higgins (1992)[63] states that technical analysis is of no use. In addition, (2003) suggest that the inconsistent performance of technical analysts shows that technical analysis is indeed useless in beating the market.


The weak form applied to stock markets postulates that share prices fully reflect all available information (1991),[64] and therefore that security prices follow a martingale. This is a straightforward implication of the assumption that investors fully exploit all available relevant information in trading and therefore arbitrage away all predictable profit opportunities. This argument abstracts from transaction costs, in the presence of which some relevant information might be ignored and not incorporated in the price. In this sense the EMH can be restated in terms of the (un)predictability of excess (abnormal) returns (1997).[65]


Studies by Mills on the FTSE found evidence that it is possible to predict future prices using past data (2003),[66] so there is some evidence that the FTSE is inefficient. Nevertheless, most evidence indicates that the market is weak-form efficient most of the time (Samuels et al., 1999).[67] FTSE is an independent company that creates and manages indices and associated data services; FTSE itself has no capital markets involvement.


Within the UK financial system, the equity market has played a rather insignificant role as a source of capital; it is nevertheless interesting to analyze a market that has the characteristics of a developed one. Weak-form efficiency tests are applied to the FTSE 100 because all the required data are currently available. Using tests such as the runs test, variance ratio tests, AR models, and the market model, it is concluded that the UK equity market is weak-form efficient; a variance ratio sketch follows below. Similar research on capital markets in some other European economies shows that they also exhibit this property (1999).[68] Capital market efficiency is usually studied in the countries with the most developed capital markets, such as the USA and the UK.
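Of the tests named above, the variance ratio test is the most distinctive: under a random walk, the variance of q-period returns is q times the variance of one-period returns, so VR(q) should be close to 1. The sketch below is a simplified Lo-MacKinlay-style version under an iid null, with simulated data standing in for FTSE 100 log price levels.

```python
import numpy as np

def variance_ratio(log_prices: np.ndarray, q: int) -> tuple[float, float]:
    """Return VR(q) and a z statistic under the iid random walk null."""
    r = np.diff(log_prices)                    # one-period log returns
    n = len(r)
    mu = r.mean()
    var_1 = np.sum((r - mu) ** 2) / n
    rq = log_prices[q:] - log_prices[:-q]      # overlapping q-period returns
    var_q = np.sum((rq - q * mu) ** 2) / (n * q)
    vr = var_q / var_1
    se = np.sqrt(2 * (2 * q - 1) * (q - 1) / (3 * q * n))  # asymptotic s.e.
    return vr, (vr - 1) / se

# Placeholder for FTSE 100 log price levels.
log_p = np.cumsum(np.random.default_rng(1).normal(0.0003, 0.01, 2000))
for q in (2, 4, 8, 16):
    vr, z = variance_ratio(log_p, q)
    print(f"q={q:2d}  VR={vr:.3f}  z={z:+.2f}")
```

Variance ratios well above or below 1 at some horizon would signal dependence; values near 1 are consistent with weak-form efficiency.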


 


THE EFFICIENT MARKET HYPOTHESIS AND


THE RANDOM WALK METHOD


 


Considerable research in economics and finance has been devoted to investigating the efficient markets hypothesis, and in some markets simple trading strategies have been identified that result in profitable returns, suggesting a degree of inefficiency (2001). The EMH remains the subject of intense debate among academics and financial professionals (1992).[69] The fundamental question is whether prices fully reflect available information. If not, it would be possible for an investor to devise a strategy in financial markets that earns above-average returns (2001).[70]


 


The EMH evolved in the 1960s from Fama's Ph.D. dissertation. According to (1965),[71] an efficient market is defined as a market where there are large numbers of rational profit-maximizers actively competing, each trying to predict future market values of individual securities, and where important current information is almost freely available to all participants. (1965)[72] adds that in an efficient market, competition among the many intelligent participants leads to a situation where, at any point in time, actual prices of individual securities already reflect the effects of information based both on events that have already occurred and on events which, as of now, the market expects to take place in the future. In other words, in an efficient market, at any point in time the actual price of a security will be a good estimate of its intrinsic value.


Moreover, (1965)[73] and (2003)[74] state that security prices correctly and almost immediately reflect all available information and expectations. The EMH says that one cannot consistently outperform the stock market, owing to the random nature in which information arrives and the fact that prices react and adjust almost immediately to reflect the latest information (2003).[75] Therefore, it assumes that at any given time the market correctly prices all securities, so that securities cannot be overpriced or underpriced for long enough to profit therefrom (2003).[76]


Most individuals buy and sell under the assumption that the securities they are buying are worth more than the price that they are paying, while securities that they are selling are worth less than the selling price (1965).[77] But if markets are efficient and current prices fully reflect all information, then buying and selling securities in an attempt to outperform the market will effectively be a game of chance rather than skill (1992).[78]


The random walk theory asserts that price movements will not follow any patterns or trends and that past price movements cannot be used to predict future price movements (1992).[79] This theory is primarily based on "The Theory of Speculation" (1900), in which Bachelier concludes that the mathematical expectation of the speculator is zero; he describes this condition as a "fair game." Moreover, economic theory teaches that in a perfectly efficient stock market, prices should follow a random walk. Under a random walk, historical data on prices and volume have no value in predicting future stock prices. In other words, statistical analysis and "technical analysis" are useless, and trying to time the market is a fool's errand (2003).[80]
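A small simulation, not from the source, makes the "fair game" property concrete: the expected gain from holding a pure random walk for one period is approximately zero, and its increments carry no information about the next step.

```python
import numpy as np

rng = np.random.default_rng(42)
steps = rng.choice([-1.0, 1.0], size=100_000)  # fair-game increments
walk = np.cumsum(steps)                        # the random walk itself

# Mean increment ~ 0 (zero expectation for the speculator) and
# lag-1 correlation of increments ~ 0 (no exploitable pattern).
print("mean increment:", steps.mean())
print("lag-1 corr of increments:", np.corrcoef(steps[:-1], steps[1:])[0, 1])
```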


 


(1970)[81] made a distinction between three forms of EMH: the weak form, the semi-strong form, and the strong form. The weak form of the hypothesis suggests that past prices or returns cannot be used to predict future prices or returns (2003).[82] It also asserts that all past market prices and data are fully reflected in securities prices; therefore, technical analysis is of no use (1992).[83]


The inconsistent performance of technical analysts suggests that this form holds (2003).[84] However,  (1991)[85] expands the concept of the weak form to include predicting future returns with the use of accounting or macroeconomic variables.  (2003)[86] state that the evidence of predictability of returns provides an argument against the weak form.


On the other hand,  (1991)[87] states that the semi-strong form of EMH asserts that security prices reflect all publicly available information. ( 1992) says that this form asserts that all publicly available information is fully reflected in securities prices so fundamental analysis is of no use. In addition, according to  (2003),[88] there are no undervalued or overvalued securities and thus, trading rules are incapable of producing superior returns. When new information is released, it is fully incorporated into the price rather speedily. The availability of intraday data enabled tests which offer evidence of public information impacting stock prices within minutes ( 1984;  1996).[89] The semi-strong form has been the basis for most empirical research on the tests of market efficiency; however, recent research is including the weak form on the test ( 2003).[90]


The strong form suggests that securities prices reflect all available information, even private information ( 1991).[91] According to  (1992), this form asserts that all information is fully reflected in securities prices; as a result, even insider information is of no use.  (1998)[92] provides sufficient evidence that insiders profit from trading on information not already incorporated into prices. Hence the strong form does not hold in a world with an uneven playing field ( 2003).[93]


There are many empirical studies that attempt to contradict the efficient market hypothesis (Higgins, 1992).[94] Researchers have documented some technical anomalies and stock market anomalies that may offer some hope for traders. According to  (1992),[95] the search for anomalies is effectively the search for systems or patterns that can be used to outperform passive strategies. The EMH became more controversial after the detection of these anomalies (2003).[96]


These phenomena have been rightly referred to as anomalies because they cannot be explained within the existing paradigm of EMH ( 2003).[97] It clearly suggests that information alone is not moving the prices (1984).[98] These anomalies have led researchers to question the EMH and to investigate alternate modes of theorizing market behavior. Some of the more popular anomalies are discussed below.


(1976)[99] documented the so-called "January Effect," in which there is evidence of higher mean returns in January than in other months. Using NYSE stocks (1904-1974), they found that the average return for the month of January was 3.48 percent, compared with only .42 percent for the other months. The evidence is supported by later studies by (1992)[100] and (1993).[101] The finding also applies to other countries (1983)[102] and to bonds (1986).[103]


In the Monday Effect, (1980)[104] discovered, through an analysis of daily stock returns (1953-1977), a tendency for returns to be negative on Mondays, whereas they are positive on the other days of the week. He notes that these negative returns are caused only by a weekend effect and not by a general closed-market effect (2003).[105] In addition, (1994)[106] find significantly negative Monday returns in nine countries. However, (2001)[107] finds that the weekend effect in the UK disappeared in the 1990s.


Another documented anomaly is the Small Firm Effect. (1981),[108] analyzing the 1936-1975 period, reveals that excess returns would have been earned by holding stocks of low-capitalization companies.


(2003)[109] state that there is substantial documented evidence of both over- and under-reaction to earnings announcements. (1985)[110] presents strong evidence consistent with stock prices overreacting to current changes in earnings. (1993)[111] provides evidence consistent with the initial reaction being too small and being completed over a period of at least six months. In addition, Ou and (1989)[112] argue that the market underutilizes financial statement information. Thus, the evidence suggests that information is not impounded in prices instantaneously, as the EMH would predict (2003).[113]


Researchers have also documented a "weather" anomaly. (1993)[114] shows that New York Stock Exchange index returns tend to be negative when it is cloudy. In addition, recent studies find that stock market returns are positively correlated with sunshine, and that rain and snow have no predictive power (2001).[115]


Behavioral finance, which claims that one can beat the market, has been challenging the foundations of the efficient market hypothesis (1997).[116] Under this new school, behavioral finance theorists argue that by carefully studying investor behavior, active money managers can identify profitable clues about what stocks to buy and when, and they believe that investors make predictable and systematic mistakes when processing information about the stock market (1997).[117] Behavioral finance, which is supported by evidence from cognitive psychology, also holds that the factors causing people to make errors in judgment (overconfidence, greed, fear) can be observed, recorded, and exploited (1997).[118]


 


(1985)[119] analyzed long-term return anomalies as far back as 1933 and showed that when stocks were ranked on three- to five-year past returns, past winners tended to be future losers and vice versa. The theory asserts that stocks that have performed poorly will actually do very well on average over the next few years, so it is a good strategy to buy these undervalued stocks (1999).


 


(1985)[120] attribute these long-term return reversals to investor overreaction: the notion that investors overreact to information about companies and stocks, sending prices to unnaturally high or low levels (1999). However, in subsequent years, investors realize they were overreacting to the news, and stock prices drift to their correct levels. In the lag time, investors who are aware of the overreaction can make money on the correcting price drift (1985).[121]


Moreover, according to (1985),[122] investors may also make the mistake of underreacting to financial news. For example, after a company announces good news, such as higher-than-expected returns, investors may initially underreact, not pushing the stock price high enough and only gradually incorporating its full import into the stock price. Supporters of behavioral finance say that evidence of investor overreaction and underreaction signals an inefficient market; however, they have not made a case strong enough to replace the efficient market theory (1999).[123]


 (1997)[124] considers that there is indeed a developing literature that challenges the EMH, and argues that stock prices adjust slowly to information. With this, it is suggested that one must examine returns over long horizons to get a full view of market inefficiency ( 1997).[125]


According to (1992),[126] the paradox of efficient markets is that if every investor believed a market were efficient, then the market would not be efficient because no one would analyze securities. In effect, efficient markets depend on market participants who believe the market is inefficient and trade securities in an attempt to outperform the market. The studies on EMH have made an invaluable contribution to the understanding of the securities market ( 2003).[127] However, there seems to be growing discontentment with the theory. While it is true that the market responds to new information, it is now clear that information is not the only variable affecting security valuation.


(1999)[128] acknowledges that the efficient market hypothesis is not a bullet-proof description of price formation, but that following the standard scientific rule, market efficiency can only be replaced by a better specific model of price formation, itself potentially rejectable by empirical tests. Moreover,  (1999)[129] argues that the behavioral finance camp has not come up with a good alternative to market efficiency.


With this,  (2003)[130] propose that the EMH paradigm be refined to embody the psychological and speculative aspects of the stock market.  (1976)[131] remarks that, “We should always be open to the idea of developing new paradigms incorporating some aspect of decision making that has heretofore been neglected.” (205)  (2003)[132] also suggest that the critics of EMH must present unambiguous, consistent, and direct empirical evidence on the irrational aspect of stock market behavior.


 


Efficient market theory

Efficient markets theory is a field of economics which seeks to explain the workings of capital markets such as the stock market. According to the theory, in an efficient market the prices of stocks reflect a rational assessment of their true underlying worth; prices will have fully and accurately discounted (taken account of) all available information (news). The theory makes several assumptions, including perfect information, instantaneous receipt of news, and a marketplace with many small participants (rather than one or more large ones with the power to influence prices). Since the theory assumes that news arises randomly in the future (otherwise the non-randomness would be analysed, forecast, and incorporated within prices already), it predicts that stock prices will approximate a Brownian motion pattern of price movement, and that technical analysis (and statistical forecasting) are likely to be fruitless.


This efficient process of price determination can be contrasted with an inefficient market in which, according to the theory, the pre-conditions for efficient pricing (perfect information, many small market participants) have not been met and prices may be determined by factors such as insider trading, institutional buying power, mis-information, panic and stock market bubbles. A central part of this theory is the Efficient market hypothesis.


 


Efficient Market Hypothesis

The efficient market hypothesis (EMH) asserts that stock prices are determined by a discounting process such that they equal the discounted value (present value) of expected future cash flows. It further states that stock prices already reflect all known information and are therefore accurate, and that the future flow of news (that will determine future stock prices) is random and unknowable (in the present). The EMH is the central part of Efficient Markets Theory (EMT).


The efficient market hypothesis implies that it is not generally possible to make above-average returns in the stock market by trading (including market timing), except through luck or by obtaining and trading on inside information. There are three forms in which the hypothesis is commonly stated – weak-form efficiency, semi-strong-form efficiency, and strong-form efficiency – each of which has different implications for how markets work.


  Weak-form efficiency

No excess returns can be earned by using investment strategies based on historical share prices or other financial data. Weak-form efficiency implies that technical analysis will not be able to produce excess returns. To test for weak-form efficiency, it is sufficient to use statistical investigations on time series data of prices. In a weak-form efficient market, current share prices are the best, unbiased estimate of the value of the security. The only factor that affects these prices is the introduction of previously unknown news. News is generally assumed to occur randomly, so share price changes must also be random.


Notice that the first difference of the series looks stationary and quite random: a pattern that we previously fitted with the mean model. Hence, the forecasting model suggested by this plot is

Ŷ(t) − Y(t−1) = α

where α is the mean of the first difference, i.e., the average change from one period to the next. If we rearrange this equation to put Y(t) by itself on the left, we get:

Ŷ(t) = Y(t−1) + α
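This random-walk-with-drift forecast is two lines of arithmetic. The sketch below is a minimal illustration, with a simulated series standing in for the observed data: it computes α as the mean one-period change and produces both the in-sample one-step forecasts and the next-period forecast.

```python
import numpy as np

rng = np.random.default_rng(7)
y = 100 + np.cumsum(rng.normal(0.1, 1.0, 500))  # stand-in for the observed series

alpha = np.mean(np.diff(y))     # average change from one period to the next
one_step = y[:-1] + alpha       # in-sample forecasts: last value plus drift
forecast_next = y[-1] + alpha   # forecast for the next, unobserved period

mae = np.mean(np.abs(y[1:] - one_step))
print(f"alpha={alpha:.4f}  next-period forecast={forecast_next:.2f}  MAE={mae:.3f}")
```

Note that the one-step forecasts simply "shadow" the data one period behind, which is exactly the behavior described later for this model.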



 


 


Chapter III


METHODOLOGY


This chapter discusses the research methods available for the study and those applicable to it. Likewise, the chapter presents how the research was implemented and how pertinent findings were reached. This study tests the weak form of the efficient market hypothesis as applied to the FTSE 100. Many observers dispute the assumption that market participants are rational, or that markets behave consistently with the efficient market hypothesis, especially in its stronger forms. Many economists, mathematicians, and market practitioners cannot believe that man-made markets are strong-form efficient when there are prima facie reasons for inefficiency, including the slow diffusion of information, the relatively great power of some market participants (e.g., financial institutions), and the existence of apparently sophisticated professional investors.


The efficient market hypothesis was introduced in the late 1960s; the prevailing view prior to that time was that markets were inefficient. Inefficiency was commonly believed to exist in, for example, the United States and United Kingdom stock markets. However, earlier work by Kendall suggested that changes in UK stock market prices were random. Later work by Brealey and Dryden, and also by Cunningham, found no significant dependencies in price changes, suggesting that the UK stock market was weak-form efficient.


 


As defined in Chapter II, an 'efficient' market is one in which large numbers of rational, profit-maximizing participants actively compete to predict future market values of individual securities, and in which important current information is almost freely available to all participants. In such a market, actual prices at any point in time already reflect the effects both of events that have occurred and of events the market expects to take place in the future, so the actual price of a security is a good estimate of its intrinsic value.


The income model is one example of how econometrics is used and how it is useful for determining trends and relationships between variables. Other uses include forecasting prices, inflation rates, or interest rates. Econometrics provides economists with the methodology to make quantitative predictions using statistical data.


Chapter IV PRESENTATION, INTERPRETATION AND ANALYSIS OF DATA

 


This section presents the data gathered from the United Kingdom stock market. Both quantitative and qualitative approaches were used. The study tests the weak form of the market efficiency hypothesis as applied to the FTSE 100. Economic theory teaches that in a perfectly efficient stock market, prices should follow a random walk. Under a random walk, historical data on prices and volume have no value in predicting future stock prices; statistical analysis and "technical analysis" are useless, and trying to time the market is a fool's errand. Many in the academic finance community now hold that stock prices do have some degree of predictability. Using the most rigorous and credible methods, the academic finance community now generally recognizes that stock returns can deviate from a random walk and that there is value in technical analysis. Nonetheless, the efficient market hypothesis and random walk hypothesis should not simply be rejected; rather, they should be treated as the base case against which alternatives are compared.


 


ECONOMETRIC ANALYSIS OF UK STOCK MARKET FTSE 100 INDEX


 


This section examines the UK stock market using the FTSE 100 index. This issue has generated substantial debate in recent years and has significant implications for stock market efficiency. The study employs the regression-based tests of (1988)[133] that focus on the autocorrelations of stock returns for increasing holding periods.


 


Previous studies have found predictability in return data, but most have relied on short-horizon data and found only a small degree of systematic change in prices (1988; 1986; 1987; 1988; 1988).[134] But stock prices may mean revert very slowly, as Summers (1986)[135] argues; hence, predictability may be evident only at longer return horizons. The tests of (1988)[136] explicitly capture Summers's (1986)[137] insight. Their evidence, based on U.S. data for industry and decile price indexes, implies considerable predictability for the 1926 to 1985 period, especially for smaller firms. They find much less predictability, however, in the 1940 to 1985 subperiod. Because the present study covers the 1970 to 1989 period, the results are broadly consistent with those of (1988).[138]


 


This section works with three assumptions. Assumption 1 strengthens the specification by imposing a rate of convergence; this smoothness condition is typically imposed in spectral analysis (1999).[139] This extension was recommended by a referee, who suggested the case of a stationary long-memory series X_t (d_X < 1/2) observed with an added short-memory noise component, for which β ≤ 2d_X < 1. This case has empirical relevance because it arises in some long-memory stochastic volatility models. In addition, in certain bivariate models of volume and volatility, the short-memory component could affect the long-memory components of both volume and volatility (leverage effect), implying that β < 1.


 


Assumption 2 establishes that the process is linear with finite fourth moment. The assumption of linear fourth-order stationary processes has also been employed in the parametric literature (1990; 1997).[140] Notice that the restriction of constant conditional innovation variances could be relaxed by assuming boundedness of the eighth moment, as shown by (1999).[141] Assumption 3 is a regularity condition similar to the ones imposed in the parametric case.


 


Assumption 3 [which is assumption A4' of Robinson (1995, 1999)[142] for the p = 1 case] imposes an upper bound on the rate of increase of m with n, necessary to control the bias from high frequencies. Notice that this upper bound is especially restrictive when β is small, suggesting that when the long-memory series is observed with added noise, or when the leverage effect is important, the chosen m should be smaller. It also imposes a mild lower rate for m when tapering is applied.


 


(1988)[143] develop their test from a simple model of stock price behavior. The log of a stock’s price, p(t), is the sum of a random walk component, q(t), and a slowly decaying stationary component, z(t) = az(t-1) + e(t), where e(t) is white noise and a is close to, but less than, unity.
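As a small illustration, not taken from the source, this price model can be simulated directly; the AR coefficient a = 0.98 and the innovation scales are arbitrary choices. The mean-reverting component z induces the weak negative dependence in returns that the test is designed to detect.

```python
import numpy as np

rng = np.random.default_rng(3)
n, a = 5000, 0.98                        # a close to, but below, unity

q = np.cumsum(rng.normal(0, 0.01, n))    # random walk component
z = np.zeros(n)
e = rng.normal(0, 0.01, n)               # white noise
for t in range(1, n):
    z[t] = a * z[t - 1] + e[t]           # slowly decaying stationary component

p = q + z                                # log price
r = np.diff(p)                           # one-period returns
print("lag-1 return autocorrelation:", np.corrcoef(r[:-1], r[1:])[0, 1])
```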


 


A test for the presence of stationary components in p(t) can be based on the estimated pattern of the β(T)s for increasingly large T. If a stationary component exists, one expects to see negative values of β(T) that become significantly different from zero at some horizon. The estimates of β(T) also provide insight into the economic (as opposed to merely statistical) importance of the stationary component. From equations (2) and (3), the ratio of the variance of [z(t+T) − z(t)] to the variance of r(t, t+T) equals −2β(T) when T is large. That is, the estimated β(T)s can be used to measure the percentage of total return variance due to the stationary component, z(t).


 


The empirical analysis relies on the UK FTSE 100 index. Included stocks comprise a representative sample of large, medium, and small capitalization companies from each local market and are meant to replicate the industry composition of each market. The index attempts to include 60 percent of the total capitalization of each market. Stocks of nondomiciled companies and investment funds are excluded from the indexes, and companies with restricted float due to dominant shareholders or cross-ownership are also avoided. The indexes are calculated using Laspeyres' concept of a weighted arithmetic average together with the concept of chain linking. The weights equal the number of shares outstanding. The data are monthly and cover the period 1969:12 to 1989:12. All of the indexes share an identical base, December 1969 = 100.


 


Returns are computed as the log difference of a country’s price index for the appropriate return horizon. The returns are valued in local currencies and are annualized. Both nominal and real returns are studied. Consumer price data needed for the calculation of real returns are from the International Monetary Fund’s International Financial Statistics (1985=100).
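The return construction described above amounts to a shifted log difference. Below is a minimal sketch with hypothetical series names, assuming a monthly price index stored as a pandas Series; the annualization factor 12/T for a T-month horizon is an assumption consistent with monthly data.

```python
import numpy as np
import pandas as pd

def annualized_log_returns(index: pd.Series, horizon_months: int) -> pd.Series:
    """T-month log returns from a monthly price index, annualized."""
    log_p = np.log(index)
    return (log_p - log_p.shift(horizon_months)) * (12 / horizon_months)

def real_index(nominal_index: pd.Series, cpi: pd.Series) -> pd.Series:
    """Deflate a nominal index by a CPI series with base 100 (e.g., 1985 = 100)."""
    return nominal_index / (cpi / 100.0)
```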


 


β(T) is estimated for the UK stock market using ordinary least squares, for return horizons ranging from one month to four years.(3) For horizons greater than one month, returns are constructed using overlapping price data. Thus, the standard errors of the estimated β(T)s are adjusted for the sample autocorrelation of overlapping residuals using the method of Hansen and Hodrick (1980).
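The regression itself can be sketched as follows: regress r(t, t+T) on r(t−T, t) over overlapping monthly observations. This illustration uses simulated log prices standing in for the FTSE 100, and Newey-West HAC standard errors as a stand-in for the Hansen-Hodrick adjustment described above.

```python
import numpy as np
import statsmodels.api as sm

def beta_T(log_index: np.ndarray, T: int):
    """Slope from regressing the T-period return on the prior T-period return."""
    r = log_index[T:] - log_index[:-T]   # overlapping T-period returns
    y, x = r[T:], r[:-T]                 # pairs (r(t, t+T), r(t-T, t))
    fit = sm.OLS(y, sm.add_constant(x)).fit(
        cov_type="HAC", cov_kwds={"maxlags": T - 1})
    return fit.params[1], fit.bse[1]

# Placeholder: 20 years of simulated monthly log index levels.
log_p = np.cumsum(np.random.default_rng(5).normal(0.005, 0.04, 240))
for T in (12, 24, 36, 48):
    b, se = beta_T(log_p, T)
    print(f"T={T:3d} months  beta={b:+.3f}  s.e.={se:.3f}")
```

Persistently negative and significant slopes at long horizons would be the mean-reversion signature discussed below.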


When faced with a time series that shows irregular growth, such as the series analyzed earlier, the best strategy may not be to predict the level of the series at each period (i.e., the quantity Y(t)) directly. Instead, it may be better to predict the change that occurs from one period to the next (i.e., the quantity Y(t) − Y(t−1)). In other words, it may be helpful to look at the first difference of the series, to see if a predictable pattern can be discerned there. For practical purposes, it is just as good to predict the next change as to predict the next level of the series, since the predicted change can always be added to the current level to yield a predicted level. Here is a plot of the first difference of the irregular growth series from the UK FTSE 100 index analysis:


 


In other words, I predict that this period’s value will equal last period’s value plus a constant representing the average change between periods. This is the so-called “random walk” model: it assumes that, from one period to the next, the original time series merely takes a random “step” away from its last recorded position. (Think of an inebriated person who steps randomly to the left or right at the same time as he steps forward: the path he traces will be a random walk.)


 


Notice that (a) the one-step forecasts within the sample merely “shadow” the observed data, lagging exactly one period behind, and (b) the long-term forecasts outside the sample follow a horizontal straight line anchored on the last observed value. The error measures and residual randomness tests for this model are very much superior to those of the linear trend model, as will be seen below. However, the horizontal appearance of the long-term forecasts is somewhat unsatisfactory if we believe that the upward trend observed in the past is genuine.


The results show that most of the price indexes studied exhibited behavior during the past two decades inconsistent with mean reversion. With regard to nominal returns, only eight of 18 indexes have predominantly negative β(T)s across return horizons. Recall that mean reversion in prices implies a negative β(T), as temporarily low returns are followed by offsetting higher ones. (1988),[144] by contrast, find no significant stationary component for either the equal-weighted or value-weighted NYSE market portfolio from 1941 to 1985. (Their findings for market portfolios thus contrast with their results for industry and decile price indexes.)


 


A growing literature has revealed that stock prices are predictable to some extent. This study examined whether UK stock price indexes mean revert, using the regression-based tests developed by (1988).[145] The results indicate that a majority of the indexes are not stationary. For the indexes that do mean revert, stationary components account for an economically significant part of the total return variance. The mean reversion that does occur arises from both common and country-specific factors.


 


1 The text provides only an abbreviated description of the Fama/French model. Readers are directed to the original paper for a full explanation.


 


2 Because interest lies in examining the time-series properties of prices, the dividend yield is excluded from the return calculation.


 


3 Prior to estimation, Phillips-Perron unit root tests are performed on the log level and first difference of the log level of each index. The results show that log prices are integrated of order one, while first differences, which here constitute returns, are stationary.
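A minimal sketch of such a unit-root check, using the PhillipsPerron class from the third-party arch package; the simulated random-walk series below is a placeholder for the actual log index levels.

```python
import numpy as np
from arch.unitroot import PhillipsPerron  # pip install arch

rng = np.random.default_rng(0)
# Placeholder random-walk series standing in for log FTSE 100 levels.
log_prices = np.cumsum(0.0005 + 0.01 * rng.standard_normal(2000))

print(PhillipsPerron(log_prices).summary())           # log level: expect I(1)
print(PhillipsPerron(np.diff(log_prices)).summary())  # returns: expect stationary
```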


 


Because stock-market trading volume is nonstationary, several detrending procedures for the volume data have been considered in the empirical finance literature. For instance,  (1992)[146] fitted a quadratic polynomial trend,  (1996)[147] estimated nonparametrically the trend of the logarithm of the volume series using both equally and unequally weighted moving averages, and  (1999)[148] fitted a linear trend. There is a lack of rigorous statistical theory, however, on the effects of detrending for the inference on the long-memory parameters of nonstationary long-memory processes. Hence, the determination of a detrending mechanism that would allow for inference on the long-memory parameter of stock-market volume is still an unsolved problem.
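Two of these detrending schemes can be sketched as follows: a quadratic polynomial trend and an equally weighted moving-average trend. The input array and window length are illustrative assumptions, not choices from the cited papers.

```python
import numpy as np

def detrend_quadratic(log_volume: np.ndarray) -> np.ndarray:
    """Remove a fitted quadratic polynomial trend from log volume."""
    t = np.arange(len(log_volume))
    coeffs = np.polyfit(t, log_volume, deg=2)
    return log_volume - np.polyval(coeffs, t)

def detrend_moving_average(log_volume: np.ndarray, window: int = 63) -> np.ndarray:
    """Remove an equally weighted (centered) moving-average trend."""
    kernel = np.ones(window) / window
    return log_volume - np.convolve(log_volume, kernel, mode='same')
```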


 


Moreover, because it has been repeatedly shown that a main feature of return volatility (for exchange-rate returns and for stock-market returns) is the presence of long memory ( 1993; 1996; 1996; 1998),[149] it is of interest to test whether stock-market volume exhibits long memory and, if this is the case, to test also whether return volatility and volume share the same long-memory parameter d. These two implications were derived by (1999)[150] (see their equations (5) and (6)), but their testing procedure has two drawbacks. First, when applied to data that have been linearly (or nonlinearly) detrended, the estimators of the long-memory parameters have unknown properties. Second, it is not known a priori that the long-memory parameter of volume lies in the stationary region (d < 0.5), and in fact they reported some cases for which d > 0.5. In this section, we use a tapering procedure to overcome both difficulties.
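For reference, a log-periodogram (Geweke and Porter-Hudak) estimator of d can be sketched in a few lines of numpy. Note that this sketch omits the tapering step discussed above, and the bandwidth m = sqrt(n) is a common convention rather than the procedure used in the paper.

```python
import numpy as np

def gph_estimate(x, m=None):
    """Log-periodogram (GPH) estimate of the long-memory parameter d."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    if m is None:
        m = int(np.sqrt(n))                     # common bandwidth choice
    lam = 2 * np.pi * np.arange(1, m + 1) / n   # Fourier frequencies
    periodogram = np.abs(np.fft.fft(x)[1:m + 1]) ** 2 / (2 * np.pi * n)
    # Regress log I(lambda_j) on log(4 sin^2(lambda_j / 2)); the slope is -d.
    X = np.column_stack([np.ones(m), np.log(4 * np.sin(lam / 2) ** 2)])
    beta, *_ = np.linalg.lstsq(X, np.log(periodogram), rcond=None)
    return -beta[1]
```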


 


Modeling (conditional) variances has been one of the most important topics in the stochastic analysis of financial time series ( 2001).[151] Models of this kind, such as GARCH specifications, are readily interpreted as autoregressive moving average (ARMA)- and autoregressive integrated moving average (ARIMA)-type models of the (conditional) variance ( 1996);[152] that is, the dependence of the conditional variance on the past decays exponentially with increasing lag. Moreover, long-range dependence in the (conditional) variance of financial time series, in particular stock-market indexes, has recently attracted considerable attention in the literature (1993; 1994; 1996; 1996).[153]
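A minimal GARCH(1,1) sketch using the third-party arch package illustrates this ARMA-type structure of the conditional variance; the simulated returns are a placeholder for the actual index returns.

```python
import numpy as np
from arch import arch_model  # pip install arch

rng = np.random.default_rng(0)
returns = rng.standard_normal(2000)  # placeholder for daily % returns

# In a GARCH(1,1), today's conditional variance is a weighted sum of
# yesterday's squared shock and yesterday's variance, so dependence on
# the past decays exponentially with increasing lag.
model = arch_model(returns, mean='Constant', vol='GARCH', p=1, q=1)
print(model.fit(disp='off').summary())
```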


 


In particular,  (1993)[154] found substantially high correlation between absolute returns and power-transformed absolute returns of some stock-market indexes at long lags. Independently,  (1996)[155] arrived at similar results, namely long memory in volatility series. Both studies appear to argue against short-range dependent specifications of the (conditional) variance based on squared return series. In practical applications it is often very difficult to find the “right” model and, in particular, to decide whether a series is stationary, has a deterministic or stochastic trend, or exhibits long-range correlations. (In fact, a combination of these may often be present.)
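The diagnostic behind both studies can be sketched by computing autocorrelations of power-transformed absolute returns at long lags (the simulated series is again a placeholder for the actual data):

```python
import numpy as np
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(1)
returns = rng.standard_normal(5000)  # placeholder for index returns

for power in (0.5, 1.0, 2.0):
    rho = acf(np.abs(returns) ** power, nlags=250, fft=True)
    # Under long memory, rho decays hyperbolically and remains visibly
    # positive even at lags of several hundred observations.
    print(f"power={power}: acf at lags 50/100/250 =",
          np.round(rho[[50, 100, 250]], 3))
```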


 


The author of New Rules for the New Economy (1998)[156] predicted the following:


 


* Increasing returns. As the number of connections between people and things adds up, the consequences of those connections multiply even faster, so that initial successes are not self-limiting but self-feeding.


 


* Plentitude, not scarcity. As manufacturing techniques perfect the art of making copies plentiful, value is created by abundance rather than scarcity, inverting traditional business propositions.


 


* Follow the free. As resource scarcity gives way to abundance, generosity begets wealth. Following the free rehearses the inevitable fall of prices, and takes advantage of the only true scarcity: human attention.


 


* Opportunities before efficiencies. As fortunes are made by training machines to be ever more efficient, there is yet far greater wealth to be had by unleashing the inefficient discovery and creation of new opportunities.


 


One of the most remarkable features of the new economy has been the stock market capitalisations attached to recently established businesses (2001).[157] In 2000 Cisco Systems, a leading producer of networking software and hardware, founded in 1986, attained the highest market value of any company in the world, overtaking long-established businesses such as General Electric and Exxon Mobil.


 


In February 2000, Lastminute.com, a business established in April 1998 whose cumulative revenues were less than £1 million, was floated on the London Stock Exchange at a price which implied a market value of around £350 million. Fees to advisers and underwriters totalled £7.7 million and the company raised £70 million in cash. The purposes for which this money was to be used were largely unspecified, and little of it appears to have been spent. Autonomy and Bookham Technologies, two companies which have never made a profit and whose 1999 revenues were £16.5 million and £3.5 million respectively, displaced the likes of Scottish and Newcastle Breweries (£3,300 million turnover, £400 million profit) and Thames Water (£1,400 million turnover, £380 million profit) from the FTSE index of the 100 leading UK companies.


 


Such valuations cannot be supported by conventional rules based on historic earnings, and this is an area in which the need for new approaches has been emphasized.


 


“In forecasting the performance of high-growth companies like Amazon, we must not be constrained by current performance. Instead of starting from the present – the usual practice of DCF valuations – we should start by thinking about what the industry and the company could look like when they evolve from today’s very high growth, unstable condition to a sustainable, moderate-growth state in the future; and then extrapolate back to current performance. The future growth state should be defined by metrics such as the ultimate penetration rate, average revenue per customer, and sustainable gross margins. Just as important as the characteristics of the industry and company in this future state is the point when it actually begins” (Desmet et al., 2000).
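The quoted approach can be caricatured in a few lines: define a hypothetical sustainable future state, compute earnings in that state, and discount back to the present. Every input below is an assumption for illustration, not a figure from Desmet et al. (2000).

```python
# Assumed characteristics of the mature, moderate-growth state.
penetration = 0.15          # ultimate market penetration (assumed)
customers = 200e6           # addressable customers (assumed)
revenue_per_customer = 120  # average revenue per customer, $/year (assumed)
margin = 0.10               # sustainable net margin (assumed)
multiple = 15               # earnings multiple in the mature state (assumed)
years = 10                  # years until the mature state begins (assumed)
discount_rate = 0.12        # includes a premium for execution risk (assumed)

future_earnings = penetration * customers * revenue_per_customer * margin
present_value = future_earnings * multiple / (1 + discount_rate) ** years
print(f"implied present value: ${present_value / 1e9:.1f} billion")
```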


 


Chapter V


CONCLUSION


 


The relevance of these valuation models depends, therefore, on the assertion that the business landscape has fundamentally changed: that traditional rules of business success have been modified by technological developments in two key ways. First, barriers to entry are much more substantial than in old-economy businesses. Second, the beneficiaries of these barriers to entry will be existing but recently established firms, even though their scale of operations is very small relative to the potential size of the market; first-mover advantages are far more significant than in old-economy businesses. These are different but related claims, and both propositions are considered in the following section on business strategy. For the moment, however, I note that doubts had set in by 2001, and Amazon's market value had fallen to .7 billion by 31 March 2001.


 


The valuation of stock markets as a whole is discussed in an earlier review article ( 1999).[158] In 1996, when Mr Greenspan famously talked of “irrational exuberance”, the US stock market was priced, relative to corporate earnings, at levels on a par with 1929 and ahead of the two other peaks of valuation in the twentieth century, 1901 and 1966 (2000).[159] Thereafter, the headline Dow Jones index rose from 6,437.1 (5 December 1996) to an all-time high of 11,722.98 (14 January 2000).


 


The argument here also is that traditional valuation criteria, such as price-earnings ratios, have been made obsolete by the new economy. Alternative views of appropriate market levels centre round applications of the dividend growth model.


 


It begins from the risk-free bond rate, adding a premium for the additional risk associated with investment in equities (1999).[160] The anticipated growth of dividends has three components. In the long run, we might expect the growth of corporate earnings, and of the dividend stream from them, to follow the growth of national income.
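The resulting valuation rule is the standard dividend growth (Gordon) model; a minimal sketch with purely hypothetical inputs:

```python
# Gordon growth model: value = next-period dividend / (required return - growth).
risk_free = 0.045        # risk-free bond rate (assumed)
equity_premium = 0.030   # premium for equity risk (assumed)
dividend_growth = 0.050  # long-run dividend growth tracking national income (assumed)
next_dividend = 100.0    # aggregate dividends on the index next year (assumed)

required_return = risk_free + equity_premium
fair_value = next_dividend / (required_return - dividend_growth)
print(f"implied fair value of the index: {fair_value:.0f}")
```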


 


A related study examined the intraday and interday dynamics of both the level of and changes in the FTSE (Financial Times-Stock Exchange) 100 index. Like numerous previous studies, it found significant evidence of mean reversion, and hence predictability, in price changes measured at both high (minute-by-minute) and low (daily) frequencies. For the low-frequency data, however, this predictability is driven neither by arbitrage activity nor by microstructure effects; rather, it is a statistical illusion that results from overdifferencing a trend-stationary series.
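The overdifferencing effect is easy to reproduce: differencing a trend-stationary series turns i.i.d. noise into an MA(1) process with lag-one autocorrelation near -0.5, which can masquerade as mean reversion. A minimal simulation:

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(5000)
y = 0.01 * t + rng.standard_normal(t.size)  # deterministic trend + i.i.d. noise

dy = np.diff(y)  # overdifferenced series
lag1 = np.corrcoef(dy[:-1], dy[1:])[0, 1]
print(f"lag-1 autocorrelation of the differences: {lag1:.3f}")  # about -0.5
```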


 


 



