Michel Verlaine

A Critical Analysis of Financial Market Efficiency


The recent subprime crisis has led to a questioning of the so-called efficient market hypothesis (EMH). In fact, the EMH has been one of the main finance paradigms since Eugene Fama’s seminal work.[1] The central tenet is that information is rapidly integrated into prices. The argument goes as follows: as information is rapidly integrated into prices, you cannot beat the market, except by chance, unless you are systematically faster in getting information. And if you are systematically faster in getting information, you are highly likely to have insider information. Thus, on average, no investor, unless he has insider information, should be able to beat the market. The concept is thus implicitly behind the idea that active asset management will not outperform passive asset management, at least not net of fees. In this respect, an interesting fact documented by researchers is that fees seem to be inversely related to abnormal performance, at least for mutual funds.[2] In the case of hedge funds, incentive mechanisms such as high-water mark clauses and hurdle rates might incentivize managers to outperform, provided the market is inefficient. Vikas Agarwal et al. document a relationship between contractual incentive features and performance, as well as between managerial discretion and performance.[3] For an analysis of the contractual features of hedge funds, see the essay by William N. Goetzmann et al.[4]


This is all well and good, but in order to make the concept of EMH operational, one needs to define information. The theory distinguishes three degrees of efficiency depending on which information set is used. The weak form postulates that past price series cannot be used to generate outperformance. The semi-strong form supposes that publicly available information is immediately integrated into prices. Finally, the strong form postulates that even private information is rapidly integrated into prices. One major issue is that in order for information to be integrated into prices, we need to define the value of information and how it relates to prices. In order to functionally relate information to prices and actually determine which information set is relevant, the EMH has recourse to another fundamental economics paradigm, the rational expectations hypothesis (REH), and its companion concept, the rational expectations equilibrium (REE). The notion of rational expectations postulates that economic agents correctly anticipate what a theoretical model would predict. The REE is supposed to emerge from the interaction of rational economic decision makers. As we highlight below, this notion of rational expectations actually conceals a very strong assumption of objectivity.


In order to better understand the strong assumptions imposed on finance models, it is worth discussing the theoretical foundations behind the standard finance paradigm. Let’s consider the value of an asset. For an economist, the value of an asset is given by the discounted future marginal utilities of the respective cash flows. In general the future is uncertain, and the decision maker has to evaluate the probability distribution over future cash flows. The standard approach supposes that the decision maker constructs such a probability distribution[5] and evaluates his expected utility with respect to it, forgetting, in the process, that it might be only one of many possible probability distributions. As this theoretical approach is not very operational, Harry M. Markowitz suggests approximating the expected utility function with a Taylor expansion, which maps the function onto statistical moments.[6] If the utility function is quadratic, or if returns are normally distributed and utility is of the standard concave form, expected utility depends exclusively on the mean return and the volatility of the portfolio.
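To see the mapping onto moments, expand the utility function around the mean return \( \mu = E[R] \); this is the standard second-order derivation, not specific to this text:

\[
E[u(R)] \;\approx\; u(\mu) + u'(\mu)\,E[R-\mu] + \tfrac{1}{2}\,u''(\mu)\,E\bigl[(R-\mu)^2\bigr] \;=\; u(\mu) + \tfrac{1}{2}\,u''(\mu)\,\sigma^2,
\]

since \( E[R-\mu] = 0 \). With a concave utility function, \( u'' < 0 \), so expected utility increases in the mean \( \mu \) and decreases in the variance \( \sigma^2 \): this is precisely the mean-variance criterion.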


Now, every decision maker will maximize the expected return for a given variance. If investors have the same beliefs concerning the probability distribution of future returns, everybody’s portfolio will depend on the same expected returns and variances. Of course, some investors will be more risk-averse, but as they can combine the relevant risky portfolio with a position in cash, every investor should invest in the same risky portfolio, which incidentally happens to maximize the Sharpe ratio. As everybody is supposed to hold the same mean-variance risky portfolio, an asset is added to the respective portfolios if its marginal contribution to the risk-adjusted return exceeds the current risk-adjusted return of the portfolio. That marginal contribution depends on the asset’s return as well as its covariance with the current portfolio. In equilibrium, and at the margin, every asset should thus have the same risk-return characteristics as the market portfolio held by all investors. Using this property and rewriting the relationship, we get the standard capital asset pricing model (CAPM), where asset returns depend on their correlation with market movements as measured by the famous beta. Typically, efficiency was taken to imply that returns are more or less in line with the predictions of the CAPM, new information being rapidly integrated into prices.
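In its standard form, this equilibrium relationship reads:

\[
E[R_i] - R_f \;=\; \beta_i \bigl( E[R_m] - R_f \bigr),
\qquad
\beta_i \;=\; \frac{\operatorname{Cov}(R_i, R_m)}{\operatorname{Var}(R_m)},
\]

where \( R_f \) is the risk-free rate, \( R_m \) the return of the market portfolio, and \( \beta_i \) the beta of asset \( i \); the Sharpe ratio being maximized is \( \bigl(E[R_p] - R_f\bigr)/\sigma_p \).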


Still, some so-called anomalies were observed, as small companies and high book-to-market-value companies had returns that were not in line with the predictions. Thus a somewhat broader model has been suggested, adding a small-minus-big size factor as well as a book-to-market factor. The literature is rather agnostic on whether these factors can be considered true risk factors in an arbitrage pricing theory (APT) model or whether they are just good at capturing some empirical regularity. The interpretation might actually depend on ideological positions; George M. Frankfurter and Elton G. McGoun discuss the relationship between finance and ideology.[7] These issues matter a lot, as the so-called fundamental value of an asset depends on the respective model. In fact, the risk adjustment alluded to when we discussed cash flow discounting with expected utility models is implemented by adding to the discount factor a risk premium that depends on the respective betas. We thus went from a subjective value, the expected utility of an investor, to an objective value, the future discounted cash flows, where the discount factor is given by a specific model. The aim here is to highlight the somewhat hidden assumptions behind efficiency.
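The broader specification alluded to is what the literature knows as the Fama-French three-factor model:

\[
E[R_i] - R_f \;=\; b_i \bigl( E[R_m] - R_f \bigr) + s_i\, E[\mathit{SMB}] + h_i\, E[\mathit{HML}],
\]

where \( \mathit{SMB} \) (small minus big) is the return spread between small and large stocks, \( \mathit{HML} \) (high minus low) the spread between high and low book-to-market stocks, and the loadings \( b_i, s_i, h_i \) are estimated by time-series regression.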


Remember that expectations are a very important element of theoretical developments. Typically, rational expectations are supposed to be in line with the theoretical model that leads to an REE. This notion, however, has a bearing on two fundamental concepts: model predictions and rationality. Implicitly, there is the idea that model predictions are somehow objective. This can be questioned from an econometric as well as a decision-theoretic viewpoint. In an issue of the Journal of Economic Theory on model uncertainty and robustness, economists develop so-called robust approaches, where the decision maker faces a set of models and has to decide on the best decision rule.[8] This means that the decision maker faces model uncertainty as well as estimation uncertainty, and this creates what decision theorists call ambiguity.[9] Actually, the standard finance approach presumes that the investor faces risk, which means that the statistical distribution of returns is known. Ambiguity presumes that there is some uncertainty concerning the statistical distribution itself. The decision-theoretic literature provides two approaches to addressing ambiguity. Itzhak Gilboa and David Schmeidler suggest a maxmin expected utility (MEU) approach with multiple priors,[10] whereas Schmeidler, on his own, suggests an approach based on the expectation of utility with respect to a non-additive set measure called a capacity.[11] The latter is the so-called Choquet expected utility model.
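To make the maxmin criterion concrete, here is a minimal numerical sketch in Python. The payoff grid, the three candidate priors, and the CRRA utility are illustrative assumptions, not taken from the cited papers: the MEU investor evaluates each portfolio under its worst-case prior and then picks the best such portfolio.

    import numpy as np

    # Illustrative sketch of maxmin expected utility (MEU) with multiple priors.
    # All numbers below are assumptions chosen for the example.

    payoffs = np.array([0.8, 0.9, 1.0, 1.1, 1.2])  # gross returns of the risky asset per state

    # Each row is one candidate probability distribution over the five states;
    # having several rows is exactly the "multiple priors" of Gilboa-Schmeidler.
    priors = np.array([
        [0.05, 0.15, 0.35, 0.30, 0.15],  # baseline model
        [0.10, 0.20, 0.35, 0.25, 0.10],  # pessimistic model
        [0.05, 0.10, 0.25, 0.35, 0.25],  # optimistic model
    ])

    def utility(wealth, gamma=2.0):
        # CRRA utility: concave, hence risk-averse.
        return wealth ** (1.0 - gamma) / (1.0 - gamma)

    def worst_case_eu(weight):
        # Wealth from putting `weight` in the risky asset, the rest in cash (gross return 1).
        wealth = weight * payoffs + (1.0 - weight) * 1.0
        # Expected utility under each prior; MEU evaluates the minimum over priors.
        return (priors @ utility(wealth)).min()

    # The MEU investor maximizes worst-case expected utility over a grid of weights.
    grid = np.linspace(0.0, 1.0, 101)
    best_weight = max(grid, key=worst_case_eu)
    print(f"MEU-optimal risky weight: {best_weight:.2f}")

In this construction the worst-case prior carries only a thin risk premium, so the maxmin investor ends up with a more conservative risky weight than an investor who trusts the baseline model alone.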


It turns out that those models imply a relaxation of fundamental axioms of rationality. Indeed, the standard paradigm in economics presumes that individuals act in accordance with some fundamental axioms. One of those axioms, namely independence, is questioned by decision theorists whenever ambiguity is present. This is a fundamental issue: the standard separation between expectations and utility functions is no longer guaranteed when a weaker version of the independence axiom, called certainty independence, fails to hold. If utility functions are not independent of expectations, the notion of fundamental value no longer makes sense. Moreover, as we do not know how to define information and the estimator depends on preferences, portfolio weights depend on the chosen estimators, which in turn depend on the chosen statistical loss function, the latter being essentially the mirror image of the utility function. The decision maker thus has a single criterion that determines both the portfolio choice and the estimator. I suggest an approach to solving this issue by distinguishing ambiguity from risk.[12] The utility function refers to risky situations, and the criterion relevant for selecting the parameters is given by a maximum entropy likelihood function. This is consistent with the empirical evidence on S-shaped probability transformations.[13]
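The S-shaped probability transformation referred to is commonly parameterized by Prelec’s weighting function:

\[
w(p) \;=\; \exp\bigl( -(-\ln p)^{\alpha} \bigr), \qquad 0 < \alpha < 1,
\]

which overweights small probabilities and underweights moderate to large ones, crossing the diagonal \( w(p) = p \) at \( p = 1/e \approx 0.37 \).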


Those more recent developments in theoretical economics might lead to some reconsideration of fundamental finance and economic policy issues. Firstly, as a unique fundamental value no longer exists, but rather a range of values, the notions of asset bubble and of efficiency become somewhat empty. Secondly, the independence of institutions such as central banks and competition authorities can be questioned on the grounds that the theories and knowledge they use are not objective. In this respect, it is interesting to note that a recent IMF staff position note critically reconsiders macroeconomic policy as it was implemented.[14] In any case, given that those theoretical approaches have not yet fed into economic policy making, we can expect very interesting debates in the future.




[1] Eugene Fama: “Efficient Capital Markets: A Review of Theory and Empirical Work,” in: The Journal of Finance, vol. 25, no. 2 (1970), pp. 383–417.

[2] See Javier Gil-Bazo, Pablo Ruiz-Verdú: “The Relation between Price and Performance in the Mutual Fund Industry,” in: The Journal of Finance, vol. 64, no. 5 (October 2009), pp. 2153–2183.

[3] See Vikas Agarwal, Naveen D. Daniel, Narayan Y. Naik: “Role of Managerial Incentives and Discretion in Hedge Fund Performance,” in: The Journal of Finance, vol. 64, no. 5 (October 2009), pp. 2221–2256; Vikas Agarwal et al.: “Risk and Return in Convertible Arbitrage: Evidence from the Convertible Bond Market” (working paper, Georgia State University and London Business School).

[4] William N. Goetzmann, Jonathan E. Ingersoll, Jr., Stephen A. Ross: “High-water Marks and Hedge Fund Management Contracts,” in: The Journal of Finance, vol. 58, no. 4 (August 2003), pp. 1685–1717.

[5] See Leonard J. Savage: The Foundations of Statistics. New York 1954; Francis Anscombe, Robert Aumann: “A Definition of Subjective Probability,” in: The Annals of Mathematical Statistics, vol. 34 (1963), pp. 199–205.

[6] See Harry M. Markowitz: “Portfolio Selection,” in: The Journal of Finance, vol. 7, no. 1 (1952), pp. 77–91.

[7] See George M. Frankfurter, Elton G. McGoun: “Ideology and the Theory of Financial Economics,” in: Journal of Economic Behavior & Organization, vol. 39, no. 2 (1999), pp. 159–177.

[8] See Pascal J. Maenhout: “Robust Portfolio Rules and Detection-error Probabilities for a Mean-reverting Risk Premium,” in: Journal of Economic Theory, vol. 128, no. 1 (2006), pp. 136–163; Lars Peter Hansen et al.: “Robust Control and Model Misspecification,” in: ibid., pp. 45–90; Fabio Maccheroni, Massimo Marinacci, Aldo Rustichini: “Dynamic Variational Preferences,” in: ibid., pp. 4–44.

[9] See Paolo Ghirardato, Fabio Maccheroni, Massimo Marinacci: “Differentiating Ambiguity and Ambiguity Attitude,” in: Journal of Economic Theory, vol. 118, no. 2 (2004), pp. 133–173; idem: “Certainty Independence and the Separation of Utility and Beliefs,” in: Journal of Economic Theory, vol. 120, no. 1 (2005), pp. 129–136.

[10] See Itzhak Gilboa, David Schmeidler: “Maxmin Expected Utility with a Non-unique Prior,” in: Journal of Mathematical Economics, vol. 18 (1989), pp. 141–153.

[11] See David Schmeidler: “Subjective Probability and Expected Utility without Additivity,” in: Econometrica, vol. 57, no. 3 (May 1989), pp. 571–587.

[12] See Michel Verlaine: “Robust Asset Allocation with Generalized Preferences” (ICN Working Paper, 2008).

[13] See Drazen Prelec: “The Probability Weighting Function,” in: Econometrica, vol. 66, no. 3 (May 1998), pp. 497–527.

[14] See Olivier Blanchard et al.: “Rethinking Macroeconomic Policy” (IMF Staff Position Note, 2010).



