Random Walk, Ergodicity versus Predictability – The Case of the Budapest Stock Exchange

In financial markets, the term ‘random walk’ is frequently used in relation to price movement over a period of time. This highly expressive term simply means that prices do not follow a predictable trend, and so previous movements are unsuitable as a basis for speculation regarding future price changes. There exists, however, another model, which is based on the ergodic theorem, and this says that the probability distributions of the past and the present define the probability distribution which will dictate future market prices. Clearly, the ‘random walk’ hypothesis and the ergodic theorem are polar opposites, and, whilst the concept of uncertainty is closely linked to the former, the latter suggests that forecasting is, in fact, possible. This paper examines how the theory of efficient markets and the efficiency of the market itself provide the means to resolve this contradiction. We provide empirical evidence concerning the ‘random walk’ theory for both the recession and post-recession periods, using the stock index of the Hungarian stock market.


Introduction
In the present study we pursue two objectives: the first is a critical presentation and confrontation of the scientific literature regarding the random walk and efficient market hypotheses, as well as the ergodic principle. Secondly, we carry out an empirical investigation to help us take sides in the debate regarding the random walk character versus the ergodic character of stock index prices, as illustrated by an emerging market index, the BUX index of the Hungarian Stock Exchange. The paper is organised as follows: section 1 discusses the relevant literature on the random walk/market efficiency theory, section 2 discusses the ergodic principle, section 3 presents and discusses the empirical results, and finally, section 4 concludes. The contribution/novelty of this paper is two-fold: firstly, it uses very recent daily data on the stock index to examine its random walk character, and secondly, it is (to our knowledge) the first article in which a connection is made between the random walk character of a market index and the non-ergodicity of the index.

Market Efficiency and Random Walk as a Basis for the Evolution of Stock Market Prices
The notion of an efficient market is tightly linked to the name of Chicago professor Eugene Fama. However, its content can already be encountered in George Gibson (1889), and the works of Bachelier (1900), Cowles (1933), Holbrook Working (1949), Kendall (1953), Cootner (1964) and Samuelson (1965a) all emphasize the non-forecastability of stock market prices, although they did not yet employ the term "efficient market". Muth (1961) constructed a notion very similar to the efficient market hypothesis (EMH in the following), namely the theory of "rational expectations". This, in contrast to the so-called "adaptive expectations", states that decision-makers take into account in their forecasting decisions not only past information, but all available certain and uncertain information (conditional expected value model). The earliest application of this theory was on foreign exchange markets, where it was assumed that rational expectations lead to stabilizing speculation through negative feedback, thereby lowering market volatility. In today's terminology "rational expectations" and "efficient markets" are synonyms; the first is used more often by macro-economists, the latter by financial economists.
The earliest empirical studies revealed that prices on the financial markets exhibit an evolution path that can best be described by a random walk (Bachelier, 1900). This means, very briefly, that price changes are random and independent of each other. This was demonstrated in greater depth by Kendall (1953) for a large spectrum of raw material and stock markets. However, the first truly mathematically rigorous application of the random walk hypothesis to the modern science of economics can be linked to the works of Samuelson (1965a): his contribution is best summarized by the title of his article, namely the proof that properly anticipated prices fluctuate randomly. The prices on an informationally efficient market must be unpredictable provided that they are properly anticipated, i.e. they fully incorporate the expectations and information of the entirety of market participants. Fama (1970) constructed on this basis the whole edifice of the efficient market theory, convincingly arguing that on an active market which incorporates many informed and intelligent investors, stocks are correctly evaluated and their price reflects all the available information.
The random walk hypothesis, the theory of efficient markets and the non-predictability of stock market prices were closely interwoven from the very beginning. Samuelson first gave a verbal proof of the inevitability of the random walk as a stock price evolution trajectory, and second, he provided a formal proof of the futility of forecasting. He also showed that under certain conditions the forward prices (in the case of raw materials) might exhibit the characteristics of random walks. Samuelson's formalized proof is based on a fundamental feature of conditional expectations: if we make successive forecasts of prices, then these forecasts are conditional expectations, therefore the prior expectation of the next forecast is equivalent to today's forecast. In other words, the best solution today for tomorrow's forecast is simply today's forecast. Samuelson's proof relies on the fact that forward prices fluctuate randomly. Its foundational hypothesis is that the market equates the forward prices with the conditional expectations of the spot price on the settlement day of the forward deal. Samuelson argues that the expected profit from holding the forward position will be zero. Samuelson, by defining forward prices as the conditional expectations of future spot prices, made progress in strengthening the theory of efficient financial markets. Fama (1965, 1970, 1976) did a great deal of work to promote these notions by combining Samuelson's theoretical basis with empirical proof.
The main reason for the existence of efficient markets is the intense competition among investors to benefit from new information. The ability to identify over- and under-priced shares is an invaluable skill, as it enables investors to buy some below their "real" value, or to sell others for more than they are worth. Many investors, therefore, devote a good deal of time and energy to searching for mispriced shares but, inevitably, as more analysts compete with each other and try to gain an edge in the area of over- and undervalued shares, the less likely they are to find and exploit these profitably. On balance, only a relatively small number of analysts will actually profit from locating such securities. For the overwhelming majority of investors, the profit to be derived from information analysis cannot be expected to exceed transaction costs (for more on this topic see Clarke, Jandik and Mandelker, 2001). In conclusion, the random walk means that price movements do not follow any trend or trajectory, and past prices are useless for drawing conclusions about future price developments.
Following Fama's definition, in the subsequent decades there was an intensive debate in the literature about the nature of market efficiency, and many so-called "market anomalies" were documented. One particularly important aspect is that the very notion of efficiency underwent some refinements, e.g. the notion of "imperfectly efficient" markets (Grossman-Stiglitz, 1980): on an imperfectly efficient market there may well be opportunities for abnormal investment performance, but these always imply extra costs with respect to superior information gathering capacity and superior analyst coverage. In the same article the authors define the so-called "paradox of passive investment", a.k.a. the "Grossman-Stiglitz paradox": according to this, the perfectly efficient market is not even theoretically possible, because if there is no possibility of obtaining extra return from the processing of new information, then no one will bother analysing the relevant information, hence this information cannot be incorporated into market prices. Concisely: in order for the market to be eventually or generally efficient, it must be inefficient in the short term. Efficiency implies inefficiency and vice versa.
For several decades now, starting with the 1970s, the theory of efficient markets and the random walk of security prices have held a central position in financial theory as well as in market practice. Nevertheless, there have been attempts at questioning the time independence of price changes. Lo and MacKinlay assert that the serial correlation of share prices is significantly different from zero. Therefore, there is a possibility for short-term returns on share prices when investors realize that share prices move consistently in the same direction (herding effect). Shiller believes that this herding effect led to the irrational bubble named the "dot-com boom" of the 1990s. Fama, in his famous response to the behavioural school of thought, argues that while at first investors may indeed over- or underreact to information, in time their reaction becomes complete.
In the efficient market theory, the prices of liquid assets traded on the market fluctuate around the actuarial price, this fluctuation (the difference between the actuarial price and the actual price) being called "noise". The noise is, plainly put, the weighted average of the probabilities of each share price deviating in the forthcoming period.
Black's (1986) study about noise convincingly demonstrates the importance of noise in general, and of noise trading in particular, in security prices and in the formation of random walks. The fundamental significance of this work is that, unlike Samuelson's implicit proof, it gives an explicit argumentation for the central role of the random walk. Noise is one of the components of security prices, which, in its nature, is akin to return. Black gives a surprising answer to the question: what is the actual motivation for noise trading? According to him, noise traders trade on something that they believe to be information but which, in reality, is only noise. More generally, at any given moment there are investors who trade for reasons other than information (e.g. those who trade to cover unexpected liquidity needs), and these investors are willing to pay an extra fee for the privilege of being able to carry out their transactions instantaneously. It is important to realize that noise is the opposite of information: market participants usually trade based on information, and while noise can facilitate the functioning of financial markets, it is also a source of disorder in this functioning.
According to Black (1986), in a particularly important observation, both price and value can be viewed as geometric random walk processes with an average that differs from zero. The unpredictability of price fluctuation is due to the fact that the average percentage changes both in price and in value will themselves change in the future. The average of the value process will change due to advances and novelties in taste and style, in technology and the economy. This particular average may decline substantially when value rises and may increase if value decreases. Black also emphasises that the short-term volatility of prices will be greater than that of value. Noise, in this particular context, is independent of information, and so, when the variance of the price movement percentage caused by noise equals the variance of the price movement percentage caused by information, the variance of the price movement percentage measured daily, as Black puts it, will be almost double the variance of the change in value. Over the longer term, however, these values will converge: since price reflects value, price variance over a longer period of time will be much less than double that of value.
One of the most prominent advocates of the efficient market theory besides Fama is Burton Malkiel, whose many-edition book "A Random Walk Down Wall Street" (Malkiel, 1992) reinforces the EMH of the American stock market with several empirical studies. Fama (1965) analyses the returns of 30 stocks between 1957-1962 with the help of parametric autocorrelation tests and non-parametric runs-tests. The autocorrelation coefficients that he obtains are very small and the differences between the runs are insignificant; the market seems to be efficient at least in the weak form. Lo and MacKinlay (1988) apply variance ratio tests and, contrary to Malkiel, advocate non-randomness. Their book, which is a collection of several econometric topics, is titled "A Non-Random Walk Down Wall Street".
The authors analysed weekly returns of stocks from the NYSE (New York Stock Exchange) and AMEX (American Stock Exchange) between 1962-1985, establishing that the EMH can be rejected. Contrary to portfolio-level studies, in the case of individual stocks, and especially of small-capitalization stocks, there is a negative autocorrelation, which, in the authors' view, is partly due to the effect of infrequent trading.
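Lo and MacKinlay's variance ratio logic can be sketched briefly: under a random walk, the variance of q-period log returns is q times the variance of one-period returns, so their ratio should be close to one. The Python snippet below is our own illustrative sketch of this simple (non-robust) ratio, not the authors' heteroskedasticity-consistent test statistic; all names and parameter choices are ours:

```python
import numpy as np

def variance_ratio(prices, q):
    """Simple Lo-MacKinlay-style variance ratio: VR(q) should be
    close to 1 for a random walk, above 1 for persistent returns
    and below 1 for mean-reverting returns."""
    log_p = np.log(np.asarray(prices, dtype=float))
    r1 = np.diff(log_p)            # one-period log returns
    rq = log_p[q:] - log_p[:-q]    # overlapping q-period log returns
    return rq.var(ddof=1) / (q * r1.var(ddof=1))

# A simulated geometric random walk should give a ratio near 1.
rng = np.random.default_rng(0)
walk = 100 * np.exp(np.cumsum(rng.normal(0.0, 0.01, 2000)))
print(round(variance_ratio(walk, 5), 2))
```

A full test would additionally compute the asymptotic standard error of VR(q) in order to judge the significance of its deviation from one.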
According to Poterba and Summers (1988), there is a mean-reverting component of stock market prices which becomes significant only on longer intervals.
During the 1990s there was a growing body of literature documenting forecasting patterns on the U.S. markets. Fama and French (1992) find that the autocorrelation of returns becomes negative on a 2-year horizon, reaches its minimum at 3-5 years, after which, on longer intervals, it again approaches zero (U-shaped autocorrelation functions). Campbell (1991) applies a variance decomposition from which it emerges that the greater part of the variance of market prices carries information about the expected future return and not about the expected future dividend.
On the European markets, Galesne (1974) applies the Alexander filter to the stocks of the Paris Stock Market between 1957 and 1971, but arrives at the conclusion that this strategy could not have been exploited profitably. Brock et al. (1992) test moving average and support-resistance rules on daily index data between 1897 and 1986. They apply a more modern "bootstrapping" methodology and conclude that there is significant nonlinear dependence in the returns, which can be exploited economically.
An often-cited source regarding the chaotic behaviour of developed stock markets is Edgar Peters (1994), who performed calculations on the American market with high-frequency (3-minute resolution) data for the S&P 500 index between 1989-1992. His main finding is an estimate of the so-called Hurst exponent, for which he obtained the value of 0.63 (this implies a persistent time series with strong autocorrelation).
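Peters's Hurst exponent can be estimated with the classical rescaled-range (R/S) method. The sketch below is our own simplified implementation (the window sizes and names are illustrative, and the classical estimator is known to be biased slightly upward in short samples): it averages R/S over non-overlapping windows of several sizes and fits the slope of log(R/S) against log(n):

```python
import numpy as np

def hurst_rs(series, window_sizes=(16, 32, 64, 128, 256)):
    """Classical rescaled-range estimate of the Hurst exponent:
    fit log(R/S) ~ H * log(n) over several window sizes n."""
    series = np.asarray(series, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(series) - n + 1, n):
            w = series[start:start + n]
            dev = np.cumsum(w - w.mean())   # cumulative deviations from the mean
            r = dev.max() - dev.min()       # range of the cumulative deviations
            s = w.std(ddof=1)               # sample standard deviation
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    H, _ = np.polyfit(log_n, log_rs, 1)     # slope of the log-log regression
    return H

rng = np.random.default_rng(42)
returns = rng.normal(0.0, 1.0, 4096)        # i.i.d. noise: H should be near 0.5
print(round(hurst_rs(returns), 2))
```

A value of H near 0.5 indicates an uncorrelated series, H > 0.5 a persistent one (such as Peters's 0.63), and H < 0.5 an anti-persistent one.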
Regarding the dynamics of the empirical studies, it can be said that the earlier, simpler tests (e.g. the filter tests) generally reinforced the weak-form efficiency of the markets, whereas the later, more complex techniques (bootstrap, nonlinear modelling, neural networks, fuzzy methodology) provided grounds for technical analysis as well.
Ball and Brown (1968) set the stage for the case studies examining the semi-strong form efficiency of the market. The authors examine the effect of the announcements of 261 American public companies, distinguishing between "good" and "bad" announcements. Their results show that the market correctly anticipated the announcements even a month in advance, efficiently integrating the information into stock prices.
Another very often cited work is Fama et al. (1969) (referenced many times as Fama-Fisher-Jensen-Roll, FFJR), in which the authors apply the CAPM model for the first time for determining abnormal returns. The authors analyse the effect of 940 stock splits on the NYSE between 1927-1959. Their working hypothesis is that, contrary to a perfectly efficient market, where a stock split does not have any influence whatsoever on prices and returns, on a real market the stock split signals positive expectations on the side of investors regarding future dividends. They use symmetric, 20-month event windows and successfully demonstrate that the information regarding the stock split is efficiently incorporated into prices. Despite many criticisms of the model (too long event windows, the asynchronicity of monthly returns and announcements), this work has laid the foundation for many further studies.
A significant part of the research aims at examining market efficiency with respect to dividend announcements.
Proponents of the signalling theory (Bhattacharya, 1979; Rodriguez, 1992) think that dividend announcements and corporate dividend policy can be used as value transmitters, in the sense that growing dividends transmit signals of more optimistic perspectives. The use of the dividend as a means of signalling shows that the different signalling tools are not perfect substitutes (Asquith and Mullins, 1986). Patell and Wolfson (1984), instead of dividend announcements, examine earnings announcements on intraday data. According to their results, it is only profitable to trade on the announced information during the first half hour after an announcement, which reinforces the semi-strong efficiency of the market.
Another group of researchers measures the effect of changes in capital structure on returns. Keown and Pinkerton (1981) analyse announcements of planned corporate acquisitions as relevant events, using 150-day asymmetric event windows. Their results reinforce semi-strong form efficiency.
Myers and Majluf's (1984) work is also of fundamental importance: they document on the American stock market that capital raises cause a drastic drop in stock prices. Asquith and Mullins (1986) sustain these findings; according to them, the announcement of a capital raise causes on average a 3% drop in stock prices. Hamon and Jacquillat (1992) report similar findings on the French stock market.
To summarise, we can conclude that share prices follow random walks, and this, typically, is closely linked to the theory of efficient markets. The more efficient a market is, the more random the sequence of price movements will be. The random walk, therefore, can be defined by price variations that are independent of each other in time.

The Ergodic Principle and Future Security Price Variations
For market players it has always been a challenge to predict future price movements, since those who succeed can profit both substantially and rapidly. The efficient market theory and the random walk process provide an agnostic answer to questions concerning inexplicable fluctuations in price and the pointlessness of forecasting. In financial theory (in the narrow sense) and in economics (in a much broader sense) there exists another model, which is based on the ergodic principle. This claims that the probability distribution of the past and the present defines the probability distribution that will govern future market price outcomes. Consequently, the future is never uncertain, since it is only risky in terms of probability, which, in fact, cannot be grasped by present human knowledge. Referring to Keynes's (1936) well-known concept of uncertainty, the post-Keynesian Davidson (1982-1983) has argued against the ergodic principle (Davidson, 2007:30-35; 102-102; 110-112).
The ergodic theorem describes the behaviour of dynamic systems operating on a long-term basis. This fundamental result states that, under certain conditions, the time average of a function exists along the whole trajectory and is related to the space average. Birkhoff (1931, 1942), Neumann (1932) and Kolmogorov (1934) start from the observation that, in general, time and space averages may differ. However, if a transformation is ergodic and the measure does not vary, then the time average is equal to the space average almost everywhere. If we take an integrable function in space starting from an initial point (in accordance with the ergodic distribution) and measure the average of the function for the universe (for the whole population), then we will arrive at the time average. When time approaches infinity, the time average also approaches a limit; this limit equals the weighted average of the function values located at all points of space (with weights described by the same probability distribution), which is termed the spatial average. Samuelson (1969) writes that, if economics is to be transferred "from the realm of history into the realm of science," then economists must impose the "ergodic hypothesis" on their theory (Samuelson, 1969:12). In other words, he made the ergodic principle the sine qua non of the scientific method of economics. Lucas and Sargent (1981) also demand that the ergodic principle should underlie the sole scientific method of economics. As they claim, "science demands rigour in character, consistency and mathematical verifications. Thus, if economics aims to belong to the field of science, it must bear these characteristics." Samuelson, in addition, insists that economists must accept the ergodic principles in their models when dealing with economics as a science, equal to physics, astronomy and chemistry.
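For reference, the time-average/space-average statement sketched above is usually written in the form of Birkhoff's pointwise ergodic theorem (a standard textbook formulation, not quoted from the works cited here):

```latex
% If T : X \to X preserves the probability measure \mu and is ergodic,
% then for every integrable function f the time average along the
% trajectory of \mu-almost every starting point x equals the space average:
\lim_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1} f\!\left(T^{k} x\right)
  \;=\; \int_{X} f \, d\mu .
```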
All the above leaves little doubt that elevating the ergodic principle to the rank of a model must have been a theoretical declaration of science rather than the outcome of observation relying on empirical evidence, which, at the same time, meant a U-turn against Keynes's uncertainty principle. Back in the days when Keynes (1936) composed his "General Theory," he could not have been acquainted with the ergodic theorem of stochastic processes. Nevertheless, Keynes (1939), when criticising Tinbergen's econometric method, notes that Tinbergen's method is not valid for every single prediction made for the economy, since economic data are "not homogeneous" over time. The lack of homogeneity is the minimum condition for non-ergodic processes. Therefore, using the vocabulary of the post-Keynesian era, Keynes's uncertainty principle regarding the economic future rests on the claim that the economic system is governed by non-ergodic stochastic processes.
The ergodic principle assumes that the economic future is predetermined, since the economy is conducted by an existing ergodic stochastic process. A technical explanation of the difference between ergodic and non-ergodic stochastic processes is provided by Davidson (2009). The ergodic principle predicts that the future is predetermined by existing parameters (market fundamentals). Therefore, the future can be realistically forecast by analysing present and past data, so that a probability distribution governing future outcomes can be obtained. To put it another way, provided that future events are generated by an ergodic stochastic process (applying the vocabulary of mathematical statistics), the future can be predetermined as well as detected today by analysing the statistical probability of past and present data regarding market fundamentals. If the system is non-ergodic, then the probability distributions of the past and present do not provide a statistically reliable estimate of the probability of future events.
If one understands the economy as being stochastic, then future outcomes will be determined by a probability distribution. Davidson (2012), in his particular argument, denies the future accessibility of data. Logically speaking, in order for income-producing experts in finance to prepare statistically credible forecasts for future parameters, decision makers are to take samples from the future and then to analyse the findings. Since this is not possible, the premise is as follows: if the economy is regarded as a stochastic process, then it enables the analyser to assert that the samples taken from past and present are equivalent to the samples taken from the future. This presumption is known as the ergodic principle -which basically claims that the future is regarded simply as the statistical shadow of the past. If we were to accept the ergodic principle as the sole universal truth, then it would be viable to calculate probability distribution on the basis of historical market data -which is statistically equivalent to samples taken from the future and then analysed. It is exclusively the concept of the ergodic principle that enables the past, present and future to be rolled into one.
This approach markedly contradicts Keynes who believed that economic systems advance in time from an irreversible past towards an uncertain, statistically unpredictable future, and where the decision-making practices of individuals on spending outflows are executed by admitting that they do not have access to future outcomes. Had Keynes, in his own time, been acquainted with the classical ergodic principle, he would have rejected this concept since this approach specifically claims that all future outcomes are actuarially certain -that is, the future is describable or predictable on the basis of existing market data. Efficient market practices presume that the essential information, both past and present, is available to all decision-makers. The neoclassical theory assumes that participants in the market have "rational expectations" with respect to any decisions made today having probable outcomes tomorrow. Lucas's (1978) theory of rational expectation claims that although individuals presumably make decisions on the basis of their own subjective probability distribution, nevertheless, should their expectations be rational, these subjective distributions must be identical to the objective probability distribution which would decide outcomes at any given future moment. In other words, contemporary participants in the market should somehow possess statistically reliable information on the probability distribution of future events in the world that could happen at some specific future moment.
The post-Keynesian Davidson (1982-1983, 2007), by re-directing the ergodic principle to Keynes's analysis, assumes that the financial system is determined by non-ergodic stochastic processes. In a non-ergodic world, the probability functions of today or of the past are not reliable in calculating the probability distribution of any future outcomes, and if these cannot be predicted reliably on the basis of past and present data, then there is no actuarial base for insurance companies to provide security for holders of these assets against unfavourable outcomes. Consequently, it was no surprise that insurance companies offering protection against possible unfavourable future outcomes resulting from assets being traded in these failing securitised markets realised that they would lose billions of dollars more than they had expected (Morgenson, 2008). In a non-ergodic world, it is impossible to estimate actuarially future insurance pay-outs. Keynes and the post-Keynesians reject the presumption that an individual may have some control over the economic future, as it is not predetermined. As they say, the future is uncertain, and its wildcat nature is not based on probability. The next three citations serve to raise more questions regarding the recognition of future instances as precisely as possible. Taylor and Shipley's article (2009), written during the financial crisis of 2008-2010, reads as follows in terms of predictability: "Probability and Statistics just don't feel right for many problems... They give the impression of allowing fairly for the eventualities... and then something unexpected happens... Those of a more pragmatic nature would want some measure of credibility such as the extent of applicability to a theory or a problem. In complex systems, the predictability that is so successful in the controlled worlds of the lab and engineering has not worked and yet theories claiming predictability have misled policy makers and continue to do so."
Hicks (1977:vii) notes the following: "We must suppose that people somehow cannot see in any models what will happen, and that they are aware of the fact that they just do not have the means to know what will eventually happen." Hicks, who accepted Keynes's framework, reckons that, as well as the actual uncertainty conditions, people often recognise that they do not -indeed, cannot -possess all the attributes of rational behaviour. Davidson (2008) explains the relationship between future uncertainty and the drastic changes in the market: as long as the future is uncertain and not merely risky in terms of probabilistic measures, the price at which liquid assets are sold at a given future moment in a free market could vary dramatically in an instant. According to the worst scenario, liquid financial assets could become unsellable at any price as the market collapses, creating toxic assets in a chaotic manner. This is exactly what happened in the mortgage-backed securities market, especially when sub-prime mortgage derivatives were formulated.
From this, it can be deduced that the random walk hypothesis and the ergodic principle, in theory, are polar opposites. While the former is strongly related to the concept of uncertainty, the latter is, in contrast, associated with justifying the possibility of forecasting. There is empirical proof of the random walk hypothesis, but the ergodic principle can be regarded as an attempt to resolve uncertainty. We shall now examine certain empirical aspects about the testing of the random walk hypothesis and its implications on the theory of efficient markets.

The Random Walk Model
The early approaches, as previously stated, analysed the random walk of stock market prices, and afterwards they identified randomness as the most important econometric feature of market efficiency.
The mathematical model of the random walk. In the case of a random walk, the stock price at a given time (S_t) is the sum of the previous price (S_{t-1}) and a normally distributed random variable \varepsilon_t with zero mean and constant variance:

S_t = S_{t-1} + \varepsilon_t, \qquad \varepsilon_t \sim N(0, \sigma^2).

If we take this deeper into the past:

S_t = S_0 + \varepsilon_1 + \varepsilon_2 + \dots + \varepsilon_t = S_0 + \sum_{i=1}^{t} \varepsilon_i,

that is, the current price is the starting price plus the accumulated random shocks.
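The recursion above is straightforward to simulate. The short Python sketch below (illustrative only; the starting price and shock variance are arbitrary) builds the walk as the starting price plus the cumulated sum of Gaussian shocks, and checks that first differencing recovers the shock series exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
eps = rng.normal(0.0, 1.0, 1000)   # i.i.d. N(0, 1) shocks
s0 = 100.0
prices = s0 + np.cumsum(eps)       # S_t = S_0 + sum of shocks up to t

# The price level is non-stationary (its variance grows with t),
# but the first differences are just the stationary shocks again.
print(np.allclose(np.diff(prices), eps[1:]))   # True
```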

Empirical Testing of the Random Walk
Coming back to the specification of the random walk presented previously, the stochastic process S_t = S_{t-1} + \varepsilon_t (in the present case the price process) can be called a random walk if, in the language of time series econometrics, the process contains exactly one unit root. This means that if we replace the process by its first differences, we arrive at a stationary process (a process with constant mean and variance):

(1 - L) S_t = S_t - S_{t-1} = \varepsilon_t.

In this equation, L is called the "lag operator": it simply lags the value of the variable by one period (L S_t = S_{t-1}). Because we can use first (and not higher) order differences to arrive at stationarity, we say that the random walk is a "first order integrated", I(1), process.
One can use so-called "unit root" tests and "stationarity" tests for the econometric testing of a random walk.
Unit root tests. The unit root tests that are most often used are the Augmented Dickey-Fuller (1979) and Phillips-Perron (PP) (1988) tests.
The Dickey-Fuller (DF) test is based on the following first order autoregressive (AR(1)) process:

Y_t = \mu + \varphi Y_{t-1} + \varepsilon_t,

where \mu and \varphi are parameters and \varepsilon_t is white noise. Y is a random walk (first order integrated) exactly when \varphi = 1. If 0 < \varphi < 1, the process is stationary, and if \varphi > 1, the process exhibits explosive behaviour. According to the null hypothesis of the DF test, the process has exactly one unit root: H_0: \varphi = 1.
The "Augmented Dickey-Fuller" (ADF) test also takes into consideration the higher order differences:

ΔY_t = μ + γY_{t-1} + δ_1·ΔY_{t-1} + ... + δ_p·ΔY_{t-p} + ε_t

where γ = φ - 1, so that γ = 0 corresponds to φ = 1. The ADF test is very easy to use: the joint null hypothesis is μ = 0 and γ = 0. If this hypothesis cannot be rejected, then the process contains exactly one unit root; in other words, the price evolution is a random walk.
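As an illustration of the mechanics, the DF regression and the t-statistic of γ can be computed by ordinary least squares. The following Python sketch (a simplified illustration using only the basic DF specification, without the augmentation lags and without the tabulated Dickey-Fuller critical values) contrasts a simulated random walk with a stationary AR(1) process:

```python
import math
import random

def dickey_fuller_stat(y):
    """t-statistic of gamma in the OLS regression dY_t = mu + gamma*Y_{t-1} + e_t.
    Under H0 (unit root, gamma = 0) it follows the Dickey-Fuller distribution;
    large negative values reject the unit root."""
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    ylag = y[:-1]
    n = len(dy)
    mx = sum(ylag) / n
    md = sum(dy) / n
    sxx = sum((x - mx) ** 2 for x in ylag)
    sxy = sum((x - mx) * (d - md) for x, d in zip(ylag, dy))
    gamma = sxy / sxx
    mu = md - gamma * mx
    resid = [d - mu - gamma * x for d, x in zip(dy, ylag)]
    s2 = sum(e * e for e in resid) / (n - 2)   # residual variance
    return gamma / math.sqrt(s2 / sxx)         # gamma / se(gamma)

rng = random.Random(0)
walk = [0.0]
for _ in range(500):
    walk.append(walk[-1] + rng.gauss(0.0, 1.0))    # unit root: phi = 1
ar = [0.0]
for _ in range(500):
    ar.append(0.5 * ar[-1] + rng.gauss(0.0, 1.0))  # stationary: phi = 0.5

stat_rw = dickey_fuller_stat(walk)
stat_ar = dickey_fuller_stat(ar)
print(round(stat_rw, 2), round(stat_ar, 2))
```

For the stationary series the statistic is strongly negative (the unit root is rejected), while for the random walk it stays close to zero, so the null of one unit root cannot be rejected.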
In this study, we will use a slightly improved version of the test, the so-called ADF-GLS (Augmented Dickey-Fuller, Generalized Least Squares) test (or DF-GLS test), which was developed by Elliott, Rothenberg and Stock (1996) (ERS) as a modification of the augmented Dickey-Fuller (ADF) test. For series featuring deterministic components in the form of a constant or a linear trend, ERS developed an asymptotically point optimal test to detect a unit root. This testing procedure dominates other existing unit root tests in terms of power. It locally de-trends (de-means) the data series to efficiently estimate the deterministic parameters of the series, and uses the transformed data to perform a usual ADF unit root test. This procedure helps to remove the means and linear trends of series that are not far from the non-stationary region.
The null hypothesis of the Phillips-Perron test is the same as that of the ADF test, except that the test statistic contains a correction term eliminating the autocorrelation of the residuals.
Stationarity tests. The stationarity tests differ from the unit root tests in that they possess an inverted hypothesis system: the null hypothesis is stationarity rather than first order integration. A very often-used stationarity test is the so-called KPSS test (Kwiatkowski-Phillips-Schmidt-Shin 1992), with the following test statistic:

η = (1/T²) Σ_{t=1..T} S_t² / σ_T²(l)

where T is the length of the time series, S_t is the partial sum of the residuals up to time t, and σ_T²(l) is the estimated long-run variance of the residuals using l lags. The KPSS test should be performed on the logarithmic returns, and if their process is stationary (the null cannot be rejected), then the prices follow a random walk.
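The KPSS statistic above can be sketched in a few lines. The following Python example (a minimal illustration with a Bartlett-kernel long-run variance, not the implementation used in the empirical section) shows the statistic staying small for stationary noise and becoming large for a random walk:

```python
import random

def kpss_stat(y, lags=4):
    """KPSS level-stationarity statistic:
    eta = T^-2 * sum_t S_t^2 / s2(l), with S_t the partial sums of the
    demeaned series and s2(l) a Newey-West (Bartlett kernel) long-run variance."""
    T = len(y)
    mean = sum(y) / T
    e = [v - mean for v in y]            # residuals from a constant
    partial, cum = [], 0.0
    for v in e:
        cum += v
        partial.append(cum)              # S_t = e_1 + ... + e_t
    s2 = sum(v * v for v in e) / T
    for j in range(1, lags + 1):
        w = 1.0 - j / (lags + 1.0)       # Bartlett weight
        s2 += 2.0 * w * sum(e[t] * e[t - j] for t in range(j, T)) / T
    return sum(p * p for p in partial) / (T * T * s2)

rng = random.Random(1)
noise = [rng.gauss(0.0, 1.0) for _ in range(500)]  # stationary by construction
walk, acc = [], 0.0
for v in noise:
    acc += v
    walk.append(acc)                                # cumulated noise: a random walk

eta_noise = kpss_stat(noise)
eta_walk = kpss_stat(walk)
print(round(eta_noise, 3), round(eta_walk, 3))
```

Comparing the statistic with the tabulated KPSS critical values (e.g. 0.463 at the 5% level for the level-stationary case) then decides whether the null of stationarity is rejected.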
In practice, it can happen that the unit root tests (ADF, PP) and the stationarity tests (KPSS) lead to contradictory results. In such cases, one can suspect the presence of structural breaks (e.g. regime change) or so-called fractional integration (the long-term memory of prices).
Independence tests. One of the relatively recent and quite popular general independence tests is the so-called "BDS test" (Brock-Dechert-Scheinkman-LeBaron 1996). The null hypothesis of the test is that the examined time series values come from an independent, identically distributed (iid) random variable. The strength of the BDS test comes precisely from the fact that it does not assume the normality of the data; instead, it can handle any kind of distribution (using the so-called "bootstrap" technique), and it is sensitive to several types of dependence (linear, non-linear, chaotic). If the null hypothesis cannot be rejected, that is evidence of almost perfect randomness. If it is rejected, however, we still cannot determine the character of the dependence relationship; the only certainty is that our data are not perfectly random, which can raise suspicion against the weak-form efficiency of the stock market.
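The BDS test builds on so-called correlation integrals C_m(ε): under the iid null, C_m(ε) is approximately C_1(ε)^m. The Python sketch below illustrates only this building block (not the full BDS statistic with its asymptotic variance), comparing iid noise with an autocorrelated AR(1) series, where the autocorrelation parameter and sample size are our own illustrative choices:

```python
import random

def correlation_integral(x, m, eps):
    """C_m(eps): share of pairs of m-histories (x_i, ..., x_{i+m-1}) lying
    within eps of each other in the sup-norm."""
    n = len(x) - m + 1
    close, pairs = 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            pairs += 1
            if max(abs(x[i + k] - x[j + k]) for k in range(m)) < eps:
                close += 1
    return close / pairs

rng = random.Random(3)
iid = [rng.gauss(0.0, 1.0) for _ in range(400)]
ar = [0.0]
for _ in range(400):
    ar.append(0.9 * ar[-1] + rng.gauss(0.0, 0.436))  # AR(1) with marginal sd ~ 1
ar = ar[1:]

# Under iid, C_2(eps) should be close to C_1(eps)^2
dev_iid = correlation_integral(iid, 2, 1.0) - correlation_integral(iid, 1, 1.0) ** 2
dev_ar = correlation_integral(ar, 2, 1.0) - correlation_integral(ar, 1, 1.0) ** 2
print(round(dev_iid, 3), round(dev_ar, 3))
```

For the iid series the deviation C_2 - C_1² is close to zero, while for the dependent series it is clearly non-zero; the BDS statistic standardizes exactly this kind of deviation across several embedding dimensions.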

Previous Results Concerning the Budapest Stock Exchange (BSE)
Rappai's (1995) cointegration analysis states that if there are certain stocks which are not cointegrated with the other stocks, then the market must be inefficient in its semi-strong and strong form, because the uncorrelated ("independent") stocks do not react to new information. However, examining 13 stock prices, the conclusion is that cointegration is present, and for all securities the random walk and weak-form efficiency are validated.
Grubits (1995a, b) examines the Pick Szeged shares using event study methodology between September 1993 and February 1994. According to him, the stock prices reflect new information on the very day of the announcement. However, there were still some abnormal returns on the days following an announcement; price changes only returned to the expected levels on the second day after the announcement.
Palágyi (1999) analyses the MOL shares. Testing daily returns over a one-and-a-half-year period, he points out that the distribution of returns is far from normal; rather, it can be approximated by a Lévy-type stable distribution.
Andor-Ormos-Szabó (1999) test the independence of daily and monthly time series of certain Hungarian stocks over a ten-year sample. The authors use the runs test proposed by Fama. The autocorrelation coefficients calculated for periods of different lengths show that the stock prices satisfy the requirements of randomness; the market is at least weak-form efficient. Analysing the period from April 1994 until June 1999, they find that while the Hungarian market fulfilled the weak efficiency criterion the whole time, the Polish and Czech markets were not efficient at first, but converged towards efficiency later.
Alács and Jánosi (2001) present a stochastic differential equation for the BUX index whose stationary solution follows a Lévy distribution, the Lévy distribution being more fat-tailed than the normal distribution. A special characteristic of the BUX time series is the frequent presence of silent periods without index changes. The proposed equation can also model this feature by interpreting one of the noise terms as an intermittent Wiener process, the Wiener process being the continuous-time analogue of the random walk.
Marton's (2001) analysis covers many phenomena. The author performs autocorrelation tests on the BUX between 1991 and 2000, also examining the seasonal features of the index. Although the short term autocorrelation coefficients are bigger for the BUX than for New York's "Dow Jones Industrial Average" (DJIA), these autocorrelations are not significant from an economic perspective. The inflation-corrected BUX showed negative autocorrelation in the longer run (especially for one year), in accordance with the international results of Poterba-Summers (1988), but contradicting Fama-French's (1988) U-shaped autocorrelation patterns, according to which the generally negative autocorrelation reaches its minimum over a 3-5 year period. Testing the "day of the week effect", Marton reaches different results than Andor et al. (1999), stating that although the Wednesday returns were significantly greater than those of the other weekdays, this provided no basis for profitable transactions because of the high transaction costs and the volatility of returns. Marton's overall conclusion is that the weak form efficiency of the Hungarian market is fulfilled.
Palágyi (2002), in his PhD thesis, examines data between 1996 and 1998. This is a high frequency analysis, since it estimates the distribution of intra-day and per-trade returns. The author analyses the price and returns of four shares (MOL, OTP, Matáv, TVK) on different time scales: trade-wise, per price change and in physical time. The first step of modelling the share price is testing the independence of the share price time series using trade-by-trade autocorrelation tests. The first order, negative autocorrelation converged to zero, but more slowly than on the American market, which signifies a stronger presence of long-term memory in the case of the Budapest Stock Exchange.
Lukács (2003) calculates the correlations between daily closing prices and market capitalization for 21 shares. The conclusion is that the variance of returns decreased as capitalization increased, which can be a manifestation of the small firm (size premium) effect. In addition, a normality test rejects the normal distribution of returns. Gilmore and McManus (2003) examine the existence of weak-form efficiency in the stock markets of the Czech Republic, Hungary and Poland for the period July 1995 to September 2000. They employ the variance ratio (VR) test of Lo and MacKinlay (1999), yielding mixed results concerning the random-walk properties of the indexes. However, they also compare forecasts from a naive model with ARIMA (Autoregressive Integrated Moving Average) and GARCH (Generalized Autoregressive Conditional Heteroskedasticity) alternatives, with results that unanimously reject the random-walk hypothesis for the three Central European equity markets.
Vajda (2003) tests the strong form of efficiency by examining insider trading in 14 stocks between 1997 and 2002. According to him, more than three quarters of the announced insider trades on the BSE were selling orders. The reason for this, in the author's view, is the investors' increased need for liquidity and diversification. The examination of abnormal returns shows that investors did not perceive the insider trades as signals, and they did not induce further trading.
Smith and Ryoo (2003) test the hypothesis that stock market price indices follow a random walk for five European emerging markets (Greece, Hungary, Poland, Portugal and Turkey) using the multiple variance ratio test. With the exception of Turkey, the random walk hypothesis is rejected for all four stock markets. Factors which can influence this are examined, and liquidity is found to be the most important: the Turkish market is much more liquid than the other markets, therefore the price discovery process is more intense there, leading to more random, efficient behaviour.
Vosvrda and Zikes's (2004) findings reveal that the Czech and Hungarian stock market indices are predictable, whereas that of Poland is not. The authors apply the variance ratio (VR) test of Lo and MacKinlay (1999), rejecting the null hypothesis of a random walk for the BUX and the PX-50 indices, and accepting it for the WIG index. The returns on all three indices are conditionally heteroskedastic (therefore the authors apply a GARCH-t specification) and non-normal. For comparison, they also perform the analysis on the DAX index of the Deutsche Boerse, finding that the null hypothesis of a random walk cannot be rejected for the DAX. Finally, they apply the BDS test on the standardized residuals to determine whether the GARCH model removes all non-linearities from the time series of returns: the results of the BDS test give them confidence that the ARIMA-GARCH models were appropriately specified. It is important to note, however, that in the conclusions the authors emphasize that the predictability of stock returns need not be a symptom of market inefficiency.
Molnár (2006) carries out an analysis regarding the distribution and autocorrelations of daily returns on the BUX over a six-year period, between 1996 and 2002. Comparing ten indexes from ten different countries, he finds that the BUX showed the most leptokurtic distribution of returns; similarly high peaks and fat tails were exhibited by the Brazilian, Hong Kong and Warsaw stock indexes. This is in accordance with Marton's (2001, pg. 79) conclusion that in crisis periods the swings of the BUX index were even greater than those of the similarly categorized (from a riskiness point of view) WIG index.
When it comes to functional efficiency, Pálosi (2006) reveals that, analysing the stock markets of seven countries (Hungary, Poland, Slovenia, the Czech Republic, Lithuania, Estonia and Latvia) between 1995 and 2006, the so-called synchronicity index significantly decreased, which is a sign of improving functional efficiency.

The Authors' Results
In the following, we will concentrate on the examination of one developing stock market index: the BUX index of the Budapest Stock Exchange. There is a plethora of methods to analyse the random versus non-random behaviour of a stock index, but we shall only perform the calculations pertaining to the methods presented earlier. We will perform our analysis separately for two distinct periods: one during the Great Recession of 2008-2012 (from Q1 2007 until Q4 2012) and one afterwards (from Q1 2013 until Q4 2014). The calculations were done using the Gretl open source econometric software, the tests being applied to daily closing prices.
One of the first aspects to notice is the non-normality of the BUX index in both periods, a situation often encountered in the case of stock indexes (Annex 2 and 4). The graphs illustrate the left-sided asymmetry of index prices (negative skewness values). Furthermore, the kurtosis is quite high in the post-recession period (positive excess kurtosis); the distribution is fat-tailed, "leptokurtic". The fat tails (which imply extremely high or low prices occurring more frequently than under the normal distribution) can be best described by a stable, infinite variance distribution (such as the Lévy or Cauchy distribution), in accordance with the results of Palágyi (1999). Somewhat counter-intuitively, the excess kurtosis for the recession period is negative, which means that extreme events are less frequent than under the normal distribution. Apparently, the recession period is not dominated by extreme volatility. These skewness and kurtosis values (Annex 1 and 3) are combined in normality tests such as the Jarque-Bera test, whose low p-values demonstrate the non-normality of the index prices. In itself, non-normality does not mean that the index prices are non-random or predictable, but it provides sufficient grounds for suspicion, which warrants further independence tests.
The tests conducted for the random walk character of the BUX are the ADF-GLS (unit root) and KPSS (stationarity) tests, with the following results:
1. Based on the ADF-GLS test (including a trend in the testing), we cannot reject the null hypothesis of one unit root (the test statistic values are smaller than any of the critical values), therefore we conclude that the BUX index prices follow a random walk both in the recession and in the post-recession period (fig. 2). This is in accordance with Syriopoulos (2003), who uses ADF and PP unit root tests.
2. The results of the KPSS stationarity test indicate that we should reject the null hypothesis of stationarity in favour of the alternative hypothesis of a random walk (the p-values are very low for both periods, see fig. 3). Therefore, this test reinforces the previous finding based on the ADF-GLS test that the BUX index prices follow a random walk.
Despite this considerable amount of evidence in favour of the random character of the index prices, there is one more test that we performed in order to control for any type of non-linear dependency of the index prices: the BDS test, already presented in the previous subsection.
To this end, we performed our own calculations using the Eviews 7.0 econometric software (the BDS test was not available in Gretl), running the BDS test on the BUX daily closing prices for both periods: BUX_Recession (2007 Q1 - 2012 Q4) and BUX_Normal (2013 Q1 - 2014 Q4). The results (fig. 4) are significant at each correlation dimension, i.e. the null hypothesis of iid (independent, identically distributed) values can be rejected; the index values do not stem from an independent, identical probability distribution, in accordance with other studies from the literature, in particular Rebedia (2014). However, we must be careful in interpreting this result: although it implies a certain degree of dependence in the time series of index prices, it does not constitute evidence against the efficiency of the stock market. It only means that there is a certain non-linear (chaotic) dependency in the prices, so the random walk we used to describe them is not perfectly random. This does not in itself mean that certain investors can systematically outperform the market based on this non-linear dependency, which would be a sine qua non condition for disproving market efficiency.

In conclusion, the results of the ADF-GLS and KPSS unit root and stationarity tests convincingly demonstrated the random walk character of the BUX index. This result is robust in itself; however, it comes with a grain of salt, in that this randomness does not mean perfect independence: it does not preclude non-linear dependencies, as illustrated by the BDS test. Overall, the market efficiency of the Budapest Stock Exchange cannot be questioned based on this body of evidence.
The question immediately arises whether this conclusion is specific to the analysed period, or whether it is more general and the BUX index has always been a random walk.
To answer this question, we performed the aforementioned tests for the remaining period of the BUX index, namely between January 1st 1991 and November 15th 2007. In this period, similarly to other stock indices, the BUX does not exhibit a normal distribution. The distribution is left-skewed (the skewness indicator is negative); moreover, the kurtosis value is quite high (16), so the distribution is leptokurtic. This fat-tailed, highly peaked distribution implies excessively high or low returns occurring more frequently than under the normal distribution, and it can rather be described by a distribution with infinite variance (such as the Lévy or Cauchy distribution), in accordance with Palágyi (1999). The skewness and kurtosis figures are combined in the Jarque-Bera test of normality, which in our case rejects the null hypothesis of normality. The fact that normality is rejected does not mean that independence and randomness are rejected, but it provides sufficient grounds for further testing.
The tests conducted for the absolute values of the BUX index are the ADF and KPSS tests, together with the Geweke-Porter-Hudak test for examining fractional integration (Geweke-Porter-Hudak 1983). The tests, once again performed with the Gretl econometric software, gave the following results:
1. Based on the ADF test, we cannot reject the null hypothesis of one unit root, therefore the BUX data follow a random walk (in accordance with Syriopoulos (2003), who uses ADF and PP unit root tests on daily index returns of several emerging markets between January 1997 and September 2003).
2. According to the results of the KPSS test, the null hypothesis of stationarity can be rejected; the BUX process is a random walk, in accordance with the result of the ADF test.
3. Since the unit root and stationarity tests are not contradictory, one does not suspect fractional integration. This is further confirmed by the Geweke-Porter-Hudak test, which gives an estimated degree of integration of 0.99; therefore, we cannot speak of long-term memory in the case of the BUX.
We conducted the following tests on the daily log-returns of the BUX: the BDS independence test in Eviews 7.0, and the estimation of the Hurst exponent (Hurst 1951) in Gretl.
The BDS test results are significant at every correlation dimension, i.e. the null hypothesis can be rejected: the BUX time series does not come from an independent, identically distributed process. This alone, however, does not imply the rejection of market efficiency. Furthermore, we examined to what degree the BUX returns can be regarded as white noise, as predicted by the random walk model. The estimated Hurst exponent turned out to be H = 0.59, which is fairly close to the perfect white noise value of H = 0.5.
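The Hurst exponent can be estimated with the classical rescaled-range (R/S) method. The sketch below (a simplified illustration, not Gretl's estimator; window sizes and sample length are our own choices) applies it to simulated white noise, where a value near H = 0.5 is expected, keeping in mind that small-sample R/S estimates are known to be biased slightly upward:

```python
import math
import random

def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
    """Rescaled-range (R/S) estimate of the Hurst exponent:
    H is the OLS slope of log(R/S) versus log(window size).
    H ~ 0.5: white noise; H > 0.5: persistent (long-memory) series."""
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            block = x[start:start + n]
            mean = sum(block) / n
            cum, lo, hi = 0.0, 0.0, 0.0
            for v in block:
                cum += v - mean          # cumulative deviation from the block mean
                lo, hi = min(lo, cum), max(hi, cum)
            r = hi - lo                  # range of cumulative deviations
            s = math.sqrt(sum((v - mean) ** 2 for v in block) / n)
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(math.log(n))
        log_rs.append(math.log(sum(rs_vals) / len(rs_vals)))
    k = len(log_n)
    ma, mb = sum(log_n) / k, sum(log_rs) / k
    return (sum((a - ma) * (b - mb) for a, b in zip(log_n, log_rs))
            / sum((a - ma) ** 2 for a in log_n))

rng = random.Random(4)
returns = [rng.gauss(0.0, 1.0) for _ in range(2048)]
h = hurst_rs(returns)
print(round(h, 2))
```

An estimate moderately above 0.5 for pure noise, as seen here, also illustrates why a value such as H = 0.59 for the BUX returns should be read as close to white noise rather than as strong evidence of persistence.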
Considering all these test results, corroborated by the test results for the recession and post-recession periods presented previously, we can conclude with some confidence that the BUX index has indeed shown the characteristics of a random walk ever since its introduction in 1991.
Given this conclusion, the natural question arises: is the random walk an isolated characteristic of the Budapest Stock Exchange, or is it shared by other stock exchanges in the region as well? To answer this question, we performed a further analysis of the relevant literature, with the following conclusions. Luhan et al. (2011) apply efficient market tests to the Prague Stock Exchange in the period 2007-2010. In particular, they employ market anomaly tests, e.g. testing the January effect (stock returns in January are significantly higher than those of other months), rejecting it, which in their view implies the efficiency of the Czech market.
Kristoufek and Vosvrda (2012) introduce a new measure of capital market efficiency. The measure takes into consideration the correlation structure of the returns (long-term and short-term memory) and local herding behaviour (fractal dimension). Their analysis covers the period 2000-2011. The efficiency measure is taken as a distance from an ideal efficient market situation. In their analysis, the Japanese market turned out to be the most efficient (Nikkei index), but notably the Hungarian market came third (BUX index), and the Czech market (PX index) ninth, ahead of many developed markets. The Austrian market (ATX index) was placed 21st, and the Polish market (WIG index) 27th, in the middle of the ranking.
The data set used by Dritsaki (2011) consists of stock market indices for the Visegrad countries (Poland, the Czech Republic, Hungary and Slovakia). The data are monthly, covering the period from April 1997 until February 2010. The author applies autocorrelation analysis and unit root tests. Both methods imply that the Visegrad countries' stock market indices have a unit root and follow a random walk process. In the author's view, this confirms the weak-form efficiency of the Visegrad countries.
According to Gajdošová et al.'s (2011) results, anomalies such as the day-of-the-week effect on the examined stock markets (Hungary, Poland, the Czech Republic, Slovakia and Turkey) appear only during the period of the financial crisis (they subdivided their examined time interval into crisis and pre-crisis periods). The selected stock markets seem to be efficient except during the global financial crisis, so it is the crisis that brings inefficiency into the behaviour of stock markets.
In one of the earlier but most cited studies concerning the Vienna Stock Exchange, Huber (1997) uses the multiple variance ratio test procedure to test for a random walk of stock returns. At first he finds that, with daily data, the test rejects the random walk hypothesis for each share and for both indices (Wiener Boerse Kammer index, ATX index). Testing the hypothesis on a subsample running from 1990 to 1992 suggests that, as the market becomes institutionally more mature and more liquid, returns approach a random walk. Moreover, individual shares seem to follow a random walk when weekly returns are considered.
Millionis and Papanagiotou (2011) show on three stock exchanges, namely the New York Stock Exchange (NYSE), the Athens Stock Exchange (ASE) and the Vienna Stock Exchange (VSE), that technical analysis can be used to generate abnormal returns that outperform the buy-and-hold strategy, so they question the random walk character of the Austrian market prices.
So overall, there is overwhelming evidence that stock market efficiency and the random walk model pertain not just to the Hungarian stock market but are a characteristic of the entire region.

Conclusions
Our line of thought relies on human characteristics, examining the extent to which people are able to predict financial market outcomes. Prominent supporters of the efficient market theory claim that all publicly available information that induces any change in the prices of securities is efficiently incorporated into current prices. Therefore, unless investors have access to special (or insider) information which is not fully available to the public, they are unable to anticipate price variations.
In Keynes's theory, as opposed to the classical efficient market theory, people recognise the uncertainty of the future. As Keynes says, if the participants in the market are inclined to think that the future is more unstable today than it was yesterday, then they will reduce their cash-flow obligations in order to strengthen their liquidity position. Keynes writes about the pervasive nature of uncertainty: "To assume that the future is predictable may lead to the misinterpretation of behavioural principles" (Keynes, 1937:122). Thus, the more time elapses between a choice and its consequences, the more certain it is that individuals must make their decisions under ambiguous circumstances.
It should be emphasized that the empirical findings, partly our own results and partly those referenced from the literature, are circumstantial when it comes to the question of market efficiency. Instead of directly testing informational efficiency, which is a very daunting task, we presented different specifications for the randomness and independence of stock index prices. These are, properly speaking, time series models that in most cases are able to capture some sort of deviation, in a certain direction and with a certain magnitude, from perfect randomness. However, these deviations can hardly and seldom be harnessed to obtain extra returns. In this respect, we found no significant difference between the stock index behaviour in the recession and post-recession periods.
In recent decades, many studies have been undertaken, typically examining one or another aspect of the efficiency of the Hungarian stock market. These studies have mostly reinforced the previous findings from the literature, but sometimes contradicted them. From this tableau of the literature, there is a very important conclusion to be drawn, namely that even if one can document departures from perfect efficiency, these cannot be systematically and robustly exploited to obtain extra profits.
Summarizing, we can affirm that there are many explanations for short-term oscillations, but all signs point to the conclusion that even short-term predictability and forecasting are impossible. Moreover, all we can say about the expected value of short-term price evolution is what can be drawn from the explanations provided by the model of market efficiency. This is in favour of the Grossman-Stiglitz type of efficiency (cf. page 2), which can be best formulated as follows: "there is no perfect randomness, but there is efficiency".