This section examines the UK stock market using the FTSE 100 index. The question of whether stock prices mean revert has generated substantial debate in recent years and has significant implications for stock market efficiency. The study employs the regression-based tests of Fama and French (1988),[1] which focus on the autocorrelations of stock returns over increasing holding periods.


 


Previous studies have found predictability in return data, but most have relied on short-horizon data and found only a small degree of systematic change in prices (Lo and MacKinlay, 1988; French and Roll, 1986; O’Brien, 1987; Poterba and Summers, 1988; and Cochrane and Sbordone, 1988).[2] But stock prices may mean revert very slowly, as Summers (1986)[3] argues; hence, predictability may be evident only at longer return horizons. The tests of Fama and French (1988)[4] explicitly capture Summers’ (1986)[5] insight. Their evidence, based on U.S. data for industry and decile price indexes, implies considerable predictability for the 1926 to 1985 period, especially for smaller firms. They find much less predictability, however, in the 1940 to 1985 subperiod. Because the present study covers the 1970 to 1989 period, which falls largely within this later subperiod, the results are broadly consistent with those of Fama and French (1988).[6]


 


This section works with three assumptions. Assumption 1 strengthens the specification by imposing a rate of convergence; this smoothness condition is typically imposed in spectral analysis (Lobato 1999).[7] The extension was recommended by a referee, who suggested the case of a stationary long-memory series X_t (d_X < 1/2) observed with an added short-memory noise component, for which β ≤ 2d_X < 1. This case has empirical relevance because it arises in some long-memory stochastic volatility models. In addition, in certain bivariate models of volume and volatility, the short-memory component can affect the long-memory components of both volume and volatility (the leverage effect), implying that β < 1.
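As a minimal sketch of this data-generating process, one can simulate a stationary long-memory series by fractional differencing of white noise and add an independent short-memory (here white-noise) component; all parameter values below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def fractional_noise(n, d, rng):
    """Simulate (1 - L)^{-d} eps_t via the truncated MA representation:
    psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k."""
    eps = rng.standard_normal(n)
    psi = np.empty(n)
    psi[0] = 1.0
    for k in range(1, n):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    # X_t = sum_{k<=t} psi_k * eps_{t-k}; keep the first n terms of the convolution
    return np.convolve(psi, eps)[:n]

rng = np.random.default_rng(0)
n, d_x = 2000, 0.4                      # d_X < 1/2: stationary long memory
x = fractional_noise(n, d_x, rng)       # long-memory signal
y = x + 0.5 * rng.standard_normal(n)    # observed series: signal plus short-memory noise
```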


 


Assumption 2 establishes that the process is linear with finite fourth moment. Linear fourth-order stationary processes have also been employed in the parametric literature (Giraitis and Surgailis 1990; Hosoya 1997).[8] Note that the restriction of constant conditional innovation variances could be relaxed by assuming boundedness of the eighth moment, as shown by Robinson and Henry (1999).[9] Assumption 3 is a regularity condition similar to those imposed in the parametric case.


 


Assumption 3 [assumption A4' of Robinson (1995, cited in Robinson and Henry, 1999)[10] for the p = 1 case] imposes an upper bound on the rate of increase of m with n, necessary to control the bias from high frequencies. Note that this upper bound is especially restrictive when β is small, suggesting that when the long-memory series is observed with added noise, or when the leverage effect is important, the chosen m should be smaller. The assumption also imposes a mild lower bound on the rate of m when tapering is applied.


 


Fama and French (1988)[11] develop their test from a simple model of stock price behavior. The log of a stock’s price, p(t), is the sum of a random walk component, q(t), and a slowly decaying stationary component, z(t) = az(t-1) + e(t), where e(t) is white noise and a is close to, but less than, unity.
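A minimal simulation of this price process may help fix ideas; the parameter values (a = 0.98 and the innovation scales) are illustrative assumptions, not estimates from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, a = 1200, 0.98                              # a close to, but below, unity (monthly data)

q = np.cumsum(0.01 * rng.standard_normal(n))   # random walk component q(t)
e = 0.02 * rng.standard_normal(n)
z = np.empty(n)                                # slowly decaying AR(1): z(t) = a*z(t-1) + e(t)
z[0] = e[0]
for t in range(1, n):
    z[t] = a * z[t - 1] + e[t]

p = q + z                                      # log price p(t) = q(t) + z(t)
r = np.diff(p)                                 # one-period log returns
```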


 


A test for the presence of stationary components in p(t) can be based on the estimated pattern of the β(T)s for increasingly large T. If a stationary component exists, one expects to see negative values of β(T) that become significantly different from zero at some horizon. The estimates of β(T) also provide insight into the economic (as opposed to merely statistical) importance of the stationary component. From equations (2) and (3), the ratio of the variance of z(t+T) − z(t) to the variance of r(t, t+T) equals −2β(T) when T is large. That is, the estimated β(T)s can be used to measure the percentage of total return variance due to the stationary component, z(t).


 


The empirical analysis relies on the UK FTSE 100 index. Included stocks comprise a representative sample of large, medium, and small capitalization companies from each local market and are meant to replicate the industry composition of each market. The index attempts to include 60 percent of the total capitalization of each market. Stocks of nondomiciled companies and investment funds are excluded, and companies with restricted float due to dominant shareholders or cross-ownership are also avoided. The indexes are calculated using Laspeyres’ concept of a weighted arithmetic average together with the concept of chain linking, with weights equal to the number of shares outstanding. The data are monthly and cover the period 1969:12 to 1989:12. All of the indexes share an identical base, December 1969 = 100.
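As a toy illustration of this weighting scheme, a chain-linked Laspeyres index with shares outstanding as weights can be computed as follows; the prices and share counts are invented:

```python
import numpy as np

# prices[t, i]: price of stock i at period t; shares[t, i]: shares outstanding
prices = np.array([[10.0, 50.0], [11.0, 49.0], [12.1, 51.0]])
shares = np.array([[1000, 200], [1000, 200], [1100, 200]])

index = [100.0]                                  # base period = 100
for t in range(1, len(prices)):
    # Laspeyres link: last period's share basket valued at new vs. old prices
    link = (prices[t] * shares[t - 1]).sum() / (prices[t - 1] * shares[t - 1]).sum()
    index.append(index[-1] * link)               # chain-link onto the running index
```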


 


Returns are computed as the log difference of a country’s price index for the appropriate return horizon. The returns are valued in local currencies and are annualized. Both nominal and real returns are studied. Consumer price data needed for the calculation of real returns are from the International Monetary Fund’s International Financial Statistics (1985=100).
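A minimal sketch of this return construction (the function name and defaults are assumptions for illustration):

```python
import numpy as np

def annualized_log_returns(price_index, cpi=None, horizon=12, periods_per_year=12):
    """Log-difference returns over `horizon` periods, annualized.
    If a CPI series is supplied, real returns are obtained by deflating first."""
    p = np.log(np.asarray(price_index, dtype=float))
    if cpi is not None:
        p = p - np.log(np.asarray(cpi, dtype=float))  # real (CPI-deflated) log index
    r = p[horizon:] - p[:-horizon]                    # T-period log returns
    return r * (periods_per_year / horizon)           # annualize
```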


 


β(T) is estimated for the UK stock market using ordinary least squares for return horizons ranging from one month to four years.(3) For horizons greater than one month, returns are constructed using overlapping price data; thus, the standard errors of the estimated β(T)s are adjusted for the sample autocorrelation of overlapping residuals using the method of Hansen and Hodrick (1980).
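The estimator and the overlapping-data correction can be sketched in a few lines of numpy; this is a minimal implementation under the stated setup (monthly log prices, equal-weight truncated kernel with T−1 lags), not the authors' original code:

```python
import numpy as np

def beta_T(log_p, T):
    """OLS slope from regressing r(t, t+T) on r(t-T, t) with overlapping data,
    plus a Hansen-Hodrick (1980) standard error using equal weights on the
    first T-1 residual autocovariances."""
    r = log_p[T:] - log_p[:-T]                 # overlapping T-period returns
    y, x = r[T:], r[:-T]                       # r(t, t+T) regressed on r(t-T, t)
    X = np.column_stack([np.ones_like(x), x])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    u = y - X @ b
    XtX_inv = np.linalg.inv(X.T @ X)
    # truncated-kernel HAC "meat": autocovariances of x_t * u_t up to lag T-1
    # (the truncated kernel can occasionally yield a negative variance in small samples)
    g = X * u[:, None]
    S = g.T @ g
    for lag in range(1, T):
        G = g[lag:].T @ g[:-lag]
        S += G + G.T
    se = np.sqrt(np.diag(XtX_inv @ S @ XtX_inv))
    return b[1], se[1]

# e.g., with the simulated log price p from the earlier sketch:
# slope, se = beta_T(p, T=36)   # variance share of z(t) is about -2 * slope
```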


When faced with a time series that shows irregular growth, such as Series #2 analyzed earlier, the best strategy may not be to try to predict the level of the series at each period directly (i.e., the quantity Y(t)). Instead, it may be better to try to predict the change that occurs from one period to the next (i.e., the quantity Y(t)-Y(t-1)). In other words, it may be helpful to look at the first difference of the series, to see if a predictable pattern can be discerned there. For practical purposes, it is just as good to predict the next change as to predict the next level of the series, since the predicted change can always be added to the current level to yield a predicted level. The first difference of the irregular growth series from the UK FTSE 100 index analysis can be computed and plotted as sketched below.
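A minimal sketch of the differencing step, using a simulated stand-in for the irregular-growth series (the actual FTSE data are not reproduced here; drift and noise scales are illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
# Stand-in irregular-growth series: random walk with drift (illustrative only)
y = 100 + np.cumsum(0.5 + rng.standard_normal(240))

dy = np.diff(y)                 # first difference: Y(t) - Y(t-1)

plt.plot(dy)
plt.title("First difference of the series")
plt.show()
```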



In other words, I predict that this period’s value will equal last period’s value plus a constant representing the average change between periods. This is the so-called “random walk” model: it assumes that, from one period to the next, the original time series merely takes a random “step” away from its last recorded position. (Think of an inebriated person who steps randomly to the left or right at the same time as he steps forward: the path he traces will be a random walk.)
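As a sketch, the random walk forecast described above reduces to a couple of lines; the function below is a hypothetical helper, not code from the original analysis:

```python
import numpy as np

def random_walk_forecast(y, steps, drift=True):
    """h-step-ahead forecasts from the random walk model: the last observed
    value plus (optionally) the average historical change per period."""
    c = np.mean(np.diff(y)) if drift else 0.0
    return y[-1] + c * np.arange(1, steps + 1)

# With drift=False the long-term forecasts form the horizontal line described
# below; with drift=True they follow a sloped line instead.
```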



Notice that (a) the one-step forecasts within the sample merely “shadow” the observed data, lagging exactly one period behind, and (b) the long-term forecasts outside the sample follow a straight line anchored on the last observed value, which is horizontal when the drift constant is omitted. The error measures and residual randomness tests for this model are very much superior to those of the linear trend model, as will be seen below. However, the horizontal appearance of the long-term forecasts is somewhat unsatisfactory if we believe that the upward trend observed in the past is genuine.


The results show that most of the price indexes studied exhibited behavior during the past two decades that is inconsistent with mean reversion. With regard to nominal returns, only eight of 18 indexes have predominantly negative β(T)s across return horizons. Recall that mean reversion in prices implies a negative β(T), as temporarily low returns are eventually offset by higher ones. Fama and French (1988),[12] by contrast, find no significant stationary component for either the equal-weighted or value-weighted NYSE market portfolio from 1941 to 1985. (Their findings for market portfolios, as opposed to their results for industry and decile portfolios, are the more relevant benchmark here.)


 


A growing literature has revealed that stock prices are predictable to some extent. This paper examined whether the UK’s stock price indexes mean revert, using the regression-based tests developed by Fama and French (1988).[13] The results indicate that the indexes in a majority of the countries are not stationary. For the indexes that do mean revert, stationary components account for an economically significant part of the total return variance. The mean reversion that does occur arises from both common and country-specific factors.


 


1 The text provides only an abbreviated description of the Fama/French model. Readers are directed to the original paper for a full explanation.


 


2 Because interest lies in examining the time-series properties of prices, the dividend yield is excluded from the return calculation.


 


3 Prior to estimation, Phillips-Perron unit root tests are performed on the log level and first difference of the log level of each index. The results show that log prices are integrated of order one, while first differences, which here constitute returns, are stationary.
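As a sketch, these unit root checks can be reproduced with the third-party `arch` package; the series below is a simulated stand-in for the log price index:

```python
import numpy as np
from arch.unitroot import PhillipsPerron   # third-party 'arch' package

rng = np.random.default_rng(3)
log_p = np.cumsum(0.01 * rng.standard_normal(240))  # stand-in log price index

print(PhillipsPerron(log_p, trend="c").summary())           # levels: expect a unit root
print(PhillipsPerron(np.diff(log_p), trend="c").summary())  # returns: expect stationarity
```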


 


Because stock-market trading volume is nonstationary, several detrending procedures for the volume data have been considered in the empirical finance literature. For instance, Gallant et al. (1992)[14] fitted a quadratic polynomial trend, Andersen (1996)[15] nonparametrically estimated the trend of the logarithm of the volume series using both equally and unequally weighted moving averages, and Bollerslev and Jubinski (1999)[16] fitted a linear trend. There is, however, a lack of rigorous statistical theory on the effects of detrending on inference about the long-memory parameters of nonstationary long-memory processes. Hence, the determination of a detrending mechanism that would allow for inference on the long-memory parameter of stock-market volume remains an unsolved problem.
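A minimal sketch of the two polynomial detrending schemes mentioned (a linear trend in the style of Bollerslev and Jubinski, a quadratic trend in the style of Gallant et al.), using ordinary least squares via `numpy.polyfit`; this is an illustration, not the cited authors' code:

```python
import numpy as np

def detrend_log_volume(volume, degree):
    """Remove a polynomial time trend from log volume by OLS:
    degree=1 for a linear trend, degree=2 for a quadratic trend."""
    t = np.arange(len(volume))
    log_v = np.log(np.asarray(volume, dtype=float))
    coefs = np.polyfit(t, log_v, degree)
    return log_v - np.polyval(coefs, t)   # detrended series (OLS residuals)
```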


 


Moreover, because it has been repeatedly shown that a main feature of return volatility (for exchange-rate returns and for stock-market returns) is the presence of long memory (Ding, Granger, and Engle 1993; Bollerslev and Mikkelsen 1996; Baillie, Bollerslev, and Mikkelsen 1996; Lobato and Savin 1998),[17] it is of interest to test whether stock-market volume exhibits long memory and, if so, whether return volatility and volume share the same long-memory parameter d. These two implications were derived by Bollerslev and Jubinski (1999)[18] (see their equations (5) and (6)), but their testing procedure has two drawbacks. First, when applied to linearly (or nonlinearly) detrended data, the estimators of the long-memory parameters have unknown properties. Second, it is not known a priori that the long-memory parameter of volume lies in the stationary region (d < .5), and in fact they reported some cases for which d > .5. In this section, we use a tapering procedure to overcome both difficulties.
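The paper's own tapering procedure is not reproduced here, but a generic tapered log-periodogram (GPH-type) estimate of d conveys the idea: a cosine-bell taper is applied before the periodogram is computed, and d is estimated by regressing the log periodogram over the first m Fourier frequencies. The function below is an illustrative sketch under these assumptions:

```python
import numpy as np

def gph_d(x, m, taper=True):
    """Log-periodogram (GPH-type) estimate of the memory parameter d from the
    first m Fourier frequencies (m < n/2); an optional full cosine-bell taper
    is one simple way to robustify the estimate when d may exceed .5."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    if taper:
        x = x * 0.5 * (1 - np.cos(2 * np.pi * np.arange(n) / n))  # cosine bell
    lam = 2 * np.pi * np.arange(1, m + 1) / n                     # Fourier frequencies
    # periodogram; the normalization constant does not affect the slope
    I = np.abs(np.fft.fft(x)[1:m + 1]) ** 2 / (2 * np.pi * n)
    X = np.column_stack([np.ones(m), -2 * np.log(2 * np.sin(lam / 2))])
    return np.linalg.lstsq(X, np.log(I), rcond=None)[0][1]        # slope = d-hat
```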


 


Modeling (conditional) variances has been one of the most important topics in the stochastic analysis of financial time series (Beran and Ocker, 2001).[19] Standard ARCH and GARCH specifications are readily interpreted as autoregressive moving average (ARMA)- and autoregressive integrated moving average (ARIMA)-type models of the (conditional) variance (Bollerslev and Mikkelsen 1996).[20] That is, the dependence of the conditional variance on the past decays exponentially with increasing lag. Moreover, long-range dependence in the (conditional) variance of financial time series, in particular stock-market indexes, has recently attracted considerable attention in the literature (Ding, Granger, and Engle 1993; Crato and de Lima 1994; Bollerslev and Mikkelsen 1996; Ding and Granger 1996).[21]
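For concreteness, a GARCH(1,1) fit (an ARMA-type conditional variance model with exponentially decaying dependence) can be sketched with the third-party `arch` package; the return series here is simulated:

```python
import numpy as np
from arch import arch_model   # third-party 'arch' package

rng = np.random.default_rng(4)
returns = 0.01 * rng.standard_normal(1000)   # stand-in daily return series

# Rescale to percent returns, which is the scale arch_model expects in practice
res = arch_model(100 * returns, vol="GARCH", p=1, q=1).fit(disp="off")
print(res.summary())
```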


 


In particular, Ding et al. (1993)[22] found substantial correlation between absolute returns and power-transformed absolute returns of some stock-market indexes at long lags. Independently, Baillie, Bollerslev, and Mikkelsen (1996)[23] arrived at similar results, namely long memory in volatility series. Both studies appear to argue against short-range dependent specifications of the (conditional) variance based on squared return series. In practical applications, it is often very difficult to find the “right” model and, in particular, to decide whether a series is stationary, has a deterministic or stochastic trend, or exhibits long-range correlations. (In fact, a combination of these may often be present.)
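A minimal sketch of the Ding et al.-style diagnostic: compute sample autocorrelations of absolute returns at long lags. The return series below is an i.i.d. placeholder, for which the autocorrelations should be roughly zero; with actual index returns, the absolute-return autocorrelations decay much more slowly:

```python
import numpy as np

def acf(x, lags):
    """Sample autocorrelations of x at the given lags (lags >= 1)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([np.dot(x[k:], x[:-k]) / denom for k in lags])

rng = np.random.default_rng(5)
returns = rng.standard_normal(2000)     # placeholder; use actual index returns

lags = [1, 10, 50, 100, 200]
print(acf(np.abs(returns), lags))       # volatility proxy at long lags
print(acf(returns, lags))               # raw returns, for comparison
```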


 


Kevin Kelly, the author of New Rules for the New Economy (Kelly, 1998, p. 161),[24] predicted the following:


 


* Increasing returns. As the number of connections between people and things add up, the consequences of those connections multiply out even faster; so that the initial successes are not self-limiting, but self-feeding.


 


* Plentitude, not scarcity. As manufacturing techniques perfect the art of making copies plentiful, value is created by abundance, rather than scarcity, inverting traditional business propositions.


 


* Follow the free. As resource scarcity gives way to abundance, generosity begets wealth. Following the free rehearses the inevitable fall of prices, and takes advantage of the only true scarcity: human attention.


 


* Opportunities before efficiencies. As fortunes are made by training machines to be ever more efficient, there is yet far greater wealth to be had by unleashing the inefficient discovery and creation of new opportunities.


 


One of the most remarkable features of the new economy has been the stock market capitalisations attached to recently established businesses (Kay, 2001).[25] In 2000 Cisco Systems, a leading producer of networking software and hardware, founded in 1986, acquired the highest market value of any company in the world, overtaking long-established businesses such as General Electric and Exxon Mobil.


 


In February 2000, Lastminute.com, a business established in April 1998, whose cumulative revenues were less than £1 million, was floated on the London Stock Exchange at a price which implied a market value of around £350 million. Fees to advisers and underwriters totalled £7.7 million and the company raised £70 million in cash. The purposes for which this money was to be used were largely unspecified, and little of it appears to have been spent. Autonomy and Bookham Technologies, two companies which have never made a profit and whose 1999 revenues were £16.5 million and £3.5 million respectively, displaced the likes of Scottish and Newcastle Breweries (£3,300 million turnover, £400 million profit) and Thames Water (£1,400 million turnover, £380 million profit) from the FTSE index of the 100 leading UK companies.


 


Such valuations cannot be supported by conventional rules based on historic earnings, and this is an area in which the need for new approaches has been emphasized.


 


“In forecasting the performance of high-growth companies like Amazon, we must not be constrained by current performance. Instead of starting from the present – the usual practice of DCF valuations – we should start by thinking about what the industry and the company could look like when they evolve from today’s very high growth, unstable condition to a sustainable, moderate-growth state in the future; and then extrapolate back to current performance. The future growth state should be defined by metrics such as the ultimate penetration rate, average revenue per customer, and sustainable gross margins. Just as important as the characteristics of the industry and company in this future state is the point when it actually begins” (Desmet et al., 2000).
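As a toy numerical sketch of this “work back from the steady state” logic, one can value a high-growth firm from an assumed mature state and discount back; every number below is an invented assumption, not a figure from the article:

```python
# Assumed future steady state (all values hypothetical)
market_size = 50e9            # ultimate addressable market, in currency units
penetration = 0.10            # ultimate penetration rate
gross_margin = 0.15           # sustainable margin
years_to_steady_state = 10    # when the moderate-growth state begins
g, r = 0.04, 0.10             # steady-state growth and discount rate

steady_profit = market_size * penetration * gross_margin
terminal_value = steady_profit / (r - g)                   # growing perpetuity at maturity
value_today = terminal_value / (1 + r) ** years_to_steady_state
print(f"Implied value today: {value_today / 1e9:.1f}bn")
```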



 


[1] Supra Fama and French (1988)


[2] Supra Lo and MacKinlay; French, K.R., and Roll, R. (1986) Stock Return Variances: The Arrival of Information and the Reaction of Traders. Journal of Financial Economics, 17, pp. 5-26; O’Brien, J.M. (1987) Testing For Transitory Elements in Stock Prices. Mimeographed, Board of Governors of the Federal Reserve System; Supra Poterba and Summers; Cochrane, J.H. and Sbordone, A. (1988) Multivariate Estimates of the Permanent Components of GNP and Stock Prices. Journal of Economic Dynamics and Control, 12, pp. 255-296.


 


[3] Supra Summers


[4] Supra Fama and French


[5] Supra Summers


[6] Supra Fama and French


[7] Lobato, I. N. (1999) A Semiparametric Two Step Estimator in a Multivariate Long Memory Model. Journal of Econometrics, 90, 129-153.


 


[8] Giraitis, L., and Surgailis, D. (1990) A Central Limit Theorem for Quadratic Forms in Strongly Dependent Linear Variables and its Application to Asymptotic Normality of Whittle’s Estimate. Probability Theory and Related Fields, 86, 87-104; Hosoya, Y. (1997) A Limit Theory for Long-Range Dependence and Statistical Inference on Related Models. The Annals of Statistics, 25, 105-137.


 


[9] Robinson, P. M., and Henry, M. (1999) Long and Short Memory Conditional Heteroskedasticity in Estimating the Memory Parameter of Levels. Econometric Theory, 15, 299-336.


 


[10] Ibid


[11] Supra Fama and French


[12] Ibid


[13] Ibid


[14] Gallant, A. R., Rossi, P. E., and Tauchen, G. E. (1992), “Stock Prices and Volume,” Review of Financial Studies, 5, 199-242.


 


[15] Andersen, T. G. (1996), “Return Volatility and Trading Volume: An Information Flow Interpretation of Stochastic Volatility,” Journal of Finance, 51, 169-204.


 


[16] Bollerslev, T., and Jubinski, D. (1999), “Equity Trading Volume and Volatility: Latent Information Arrivals and Common Long-Run Dependencies,” Journal of Business & Economic Statistics, 17, 9-21.


[17] Ding, Z., Granger, C. W. J., and Engle, R. F. (1993), “A Long Memory Property of Stock Market Returns and a New Model,” Journal of Empirical Finance, 1, 83-106; Bollerslev, T., and Mikkelsen, H. O. (1996), “Modeling and Pricing Long Memory in Stock Market Volatility,” Journal of Econometrics, 73, 151-184; Baillie, R. T., Bollerslev, T., and Mikkelsen, H. O. (1996), “Fractionally Integrated Generalized Autoregressive Conditional Heteroskedasticity,” Journal of Econometrics, 74, 3-30; Lobato, I. N., and Savin, N. E. (1998), “Real and Spurious Long-Memory Properties of Stock-Market Data,” Journal of Business & Economic Statistics, 16, 261-268.


 


[18] Supra Bollerslev and Jubinski


[19] Beran, J. and Ocker, D. (2001) Volatility of Stock-Market Indexes–An Analysis Based on SEMIFAR Models. Journal of Business & Economic Statistics, Vol. 19.


 


[20] Supra Bollerslev and Mikkelsen


 


[21] Supra Ding, Granger, and Engle; Ding, Z., and Granger, C. W. J. (1996), “Modeling Volatility Persistence of Speculative Returns: A New Approach,” Journal of Econometrics, 73, 185-215; Supra Crato and de Lima; Supra Bollerslev and Mikkelsen.


 


[22] Supra Ding, Granger, and Engle


[23] Supra Baillie, Bollerslev, and Mikkelsen


 


[24] Kelly, K. (1998), New Rules for the New Economy: 10 Ways the Network Economy is Changing Everything, London, Fourth Estate.


 


[25] Kay, J. (2001). What became of the new economy? National Institute Economic Review.


 



