Wednesday, April 09, 2008

Chaos and markets II

For those who teach finance, a number seems better than no number — even if it’s wrong.

- Mandelbrot and Taleb

It's the Tyranny of the Cookbook, to which we can reply: No number is better than some number - especially when it's wrong.

Much financial advice is common-sensical, but in recent decades it has incorporated misguided notions of implicitly Gaussian or bell-curve statistics in the analysis of price movements, as well as false concepts of "efficient markets." You often hear the jargon of means, variances, and betas. (A beta measures how strongly an asset's price moves with the overall market; it's built from second moments - the asset's covariance with the market divided by the market's variance. The square root of a variance is the standard deviation.) We've already seen a truckload of examples of where and why such concepts break down and why methods based on such assumptions are wrong. The fact that this approach to finance has a Nobel prize is irrelevant.*

The methods and concepts have spread from academic finance and economics departments to the desktops and minds of investment specialists over the last 30 years and have done significant damage: the Long Term Capital Management crisis of 1998 and the mortgage crisis of 2007-08 were both made possible, in part, by such "professional consensus" malpractice. Here we have legendary cases of Platonified false expertise and the "empty suit" syndrome. Price change distributions are fractal-driven power laws, not bell curves, a fact first presented to the economics world almost 50 years ago by Mandelbrot - and then rejected because it didn't fit convenient, if unempirical, Mediocristan assumptions. The missing practical key is the widely unrecognized enhanced risk of large fluctuations, especially downward moves. Individuals and institutions adopting the wrong rules expose themselves to much larger risks than they realize.

If we drop the assumptions of bell-curve price fluctuations and efficient markets, where do we stand?

The first principle is basic math and science: get your units straight. People who practice finance usually get this right, but it's amazing how much ignorance of it you see even in the business pages. Economics, like mechanics, has three basic types of units: money (a universal store of value and medium of exchange), things or activities (count them distinctly and don't commit the Fallacy of Aggregation by lumping bananas and pork bellies together, say), and time. The essential point is that wealth is an accumulation of flows. The flows are prices (measured in money) times things or activities (quantified somehow) divided by increments of time. Interest rates are money divided by money per unit of time - dimensionally just 1/time. Wages are money per unit of labor (an activity) per time. And so on.
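
As a back-of-the-envelope illustration (all numbers made up), here is how keeping the units straight looks in a few lines of Python: prices carry money-per-thing, quantities carry things, flows carry money-per-time, and wealth accumulates as flow times time.

```python
import numpy as np

# Wealth as an accumulation of flows.  All numbers are illustrative.
dt = 1.0                                        # one time step, say a month
prices = np.array([10.0, 10.5, 9.8, 11.2])      # money per unit of the good
quantities = np.array([3.0, 2.0, 4.0, 1.0])     # units traded in each step

flow = prices * quantities / dt                 # money per unit time in each step
wealth = np.cumsum(flow * dt)                   # accumulate flow x time = money
print(wealth)                                   # running stock of money

# An interest rate is dimensionally 1/time: money earned per money held, per step.
rate = 0.004                                    # 0.4% per step
print(100.0 * (1.0 + rate * dt))                # money after one step on 100 units of money
```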

The principle of diversification remains, but its rationale changes. It's not "everything will even out" (it doesn't always), but "we don't know very well how individual investments and investment classes will perform - sample all of them." Diversification, not only within investment classes, but especially across classes, is even more important in Extremistan than in Mediocristan.
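
A crude Monte Carlo sketch makes the point, using synthetic, independent, heavy-tailed returns (a simplifying assumption, not real data): the worst single-period hit to an equally weighted portfolio is far milder than the worst hit to a concentrated one.

```python
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_periods, n_trials = 20, 250, 500

# Synthetic heavy-tailed returns (Student-t, df=3) as a stand-in for Extremistan.
# Assets are independent here - a simplifying assumption, not a claim about markets.
r = 0.01 * rng.standard_t(df=3, size=(n_trials, n_periods, n_assets))

concentrated = r[:, :, 0]        # everything in a single asset
diversified = r.mean(axis=2)     # equal weights across all assets

print("worst single-period loss, concentrated:", concentrated.min())
print("worst single-period loss, diversified: ", diversified.min())
# The diversified portfolio's worst period is far milder; the concentrated one
# occasionally eats an enormous single-period hit.
```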

Even more basic to the uncorrelated, Gaussian picture of price movements is the efficient market hypothesis, which has failed in a number of crucial respects. Market timing matters, especially if you're making large moves (investing or liquidating). The market analysis based on this insight is called "technical analysis" or "charting," and its advocates are called "chartists." They stare at price chart patterns. In the "uncorrelated random walk" picture, these patterns mean nothing. But in fact they do mean something: market moves are correlated across time. Only after three to five years do they start to lose their memory, and it's not clear that they ever entirely do.
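
If you want to probe the memory claim yourself, the standard first step is the sample autocorrelation of returns and of absolute returns (the latter picks up volatility clustering). The sketch below runs on a synthetic placeholder series; with real price data, the interesting question is whether those numbers stay visibly away from zero out to long lags.

```python
import numpy as np

def autocorr(x, lag):
    """Sample autocorrelation of a 1-D series at the given lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

# Placeholder price series (an i.i.d. heavy-tailed log-price walk); substitute
# any real daily price history here.
rng = np.random.default_rng(1)
log_prices = np.cumsum(0.01 * rng.standard_t(df=3, size=2000))
prices = 100.0 * np.exp(log_prices)
returns = np.diff(np.log(prices))

for lag in (1, 5, 20, 250):
    print(lag, autocorr(returns, lag), autocorr(np.abs(returns), lag))

# In the uncorrelated-random-walk picture both columns hover near zero (as they
# will for this i.i.d. placeholder).  Memory - especially in |returns|, i.e.
# volatility clustering - shows up in real data as values that stay visibly
# away from zero out to long lags.
```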

Furthermore, there are investment classes that consistently under- and overperform the whole-market average. The best-known underperformer is the class of "growth stocks": they're hyped by the media and analysts to the point where buyers demand them strongly, so they're consistently overpriced relative to their long-term performance. On the other hand, there are underpriced investments: so-called "value" stocks, for example. Warren Buffett and others have made fortunes hunting for undervalued but worthy investments. It all boils down to not paying more for an investment than it's worth.

Finally, the "fat tail" phenomenon should make everyone suspicious of probability distribution moments (means and variances). Misanalyzed under Gaussian assumptions, fat-tailed distributions appear to be non-stationary: if you keep sampling such a distribution to estimate its moments, your results will not, in general, converge as you add more data points. The estimated moments just keep growing; with tails fat enough, they diverge as the sample grows without bound. Means and variances are measures of performance, but they're not good ones.
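
Here's a minimal demonstration of that non-convergence, using Pareto draws with tail exponent 1.5, so the mean exists but the variance doesn't (an illustrative choice, not a fitted one):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = 1.5                                    # Pareto tail exponent: mean exists, variance doesn't
x = 1.0 + rng.pareto(alpha, size=1_000_000)    # heavy-tailed draws >= 1

for n in (10**3, 10**4, 10**5, 10**6):
    sample = x[:n]
    print(n, sample.mean(), sample.var())

# The running mean settles down (alpha > 1), but the "estimated variance" keeps
# lurching upward as rare huge draws arrive - it is not converging to anything.
```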

The devil's staircase. A better approach than looking at daily movements is to look at cumulative totals (running integrals) and at absolute linear ranges (price highs minus price lows). The cumulative total is more stable than the daily changes in value, and sudden jumps in the total value of an asset or flow of goods and services show up clearly. (The fact that such sudden jumps often dominate the total or cumulative history of an asset or market also stands out clearly.) The absolute linear range grows with time, but it gives you some sense of the best and worst the market can do. These are the rules of the road in Extremistan. "Mild" variables change by a large number of small increments. "Wild" variables change by a small number of large increments, and "really wild" variables change mainly by a handful of very large increments.
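
A small sketch of that bookkeeping on synthetic heavy-tailed daily moves (Student-t steps as a stand-in for real data): the cumulative total, the running range, and the share of all movement contributed by the handful of biggest days.

```python
import numpy as np

rng = np.random.default_rng(3)
# Heavy-tailed daily log-returns as a crude Extremistan stand-in (Student-t, df=3).
daily = 0.01 * rng.standard_t(df=3, size=2500)

cumulative = np.cumsum(daily)                  # the "devil's staircase" of total return
running_range = (np.maximum.accumulate(cumulative)
                 - np.minimum.accumulate(cumulative))

# How much of the total absolute movement comes from the 10 biggest days?
total_abs = np.abs(daily).sum()
biggest10 = np.sort(np.abs(daily))[-10:].sum()
print("share of total movement from 10 largest days:", biggest10 / total_abs)
print("final range (best-to-worst spread):", running_range[-1])
```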

Markets with an incomplete cookbook. The investment community at large still has not fully absorbed Mandelbrot's message about fractals and the uselessness of Gaussian, bell-curve statistics for understanding and prospering in markets. The normal and the Lévy-type distributions look similar when you compare them for small deviations from the mean.** It's the large deviations that constitute the acid test, and it is here that investment professionals often start waving their hands.† In a Gaussian world, such large changes should almost never occur, and the history of Gaussian markets would be dominated by many, many small changes. But real markets are strongly shaped by a limited set of rare, large, and consequential events. A new investment science to replace the rigorous, Platonified irrelevancies of contemporary financial theory is badly needed.
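
The size of the gap is easy to quantify: compare the Gaussian probability of a k-sigma move with a power-law tail (the exponent of 3 below is purely illustrative).

```python
import math

def gaussian_tail(k):
    """P(Z > k) for a standard normal."""
    return 0.5 * math.erfc(k / math.sqrt(2))

def power_law_tail(k, alpha=3.0, k0=1.0):
    """P(X > k) for a Pareto-style tail ~ (k0/k)**alpha; alpha=3 is illustrative."""
    return (k0 / k) ** alpha

for k in (3, 5, 10):
    print(k, gaussian_tail(k), power_law_tail(k))

# At 10 "sigmas" the Gaussian says roughly 1 chance in 10**23 - effectively never -
# while a cubic power-law tail still gives about 1 chance in 1000.
```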

POSTSCRIPT: Here's a short note on market risk by Mandelbrot and Taleb from a few years ago.

References

= B. Malkiel, A Random Walk Down Wall Street, rev. ed. Classic presentation of efficient-market, Gaussian random walk theory to the masses. Much of the technical side is wrong as a picture of markets, but the basic investment advice (the trade-off between active and passive investment, diversification) is sound.

This posting is a sketch of what's needed to replace the bell-curve price movement framework. Just noted today: the embarrassing underperformance of stock index funds since the 2000 market peak, compared with even lowly bonds, not to speak of value stocks.

= R. Haugen, The Inefficient Stock Market. Nice short, if technical, study of systematic inefficiencies (over- and underpricings) in markets.
---
* Scholes and Merton won it in 1997 for the Black-Scholes option-pricing framework (Black had died in 1995), and Taleb and others have railed against this as a perfect example of rewarding Platonified bullshit with its origins in academic circles, built on highly restrictive assumptions and applied to real life where those assumptions don't hold. The LTCM crisis occurred less than a year after the award - again suggesting a just G-d, or perhaps one with a refined sense of humor.

A larger objection can be made against the economics Nobel prize altogether, and Taleb and others make that argument as well. It's actually a prize funded by Sveriges Riksbank (Sweden's central bank) and administered by the Nobel Foundation, not one specified in Nobel's will. Although some great and deserving economists have won it (Hayek and Friedman among them), it's difficult to deny that economics has often been subject to both fads and conveniently cookbook pseudoknowledge. The standards for the Nobel prizes in the natural sciences are much stricter, and I hope they remain so, so that at least those prizes mean something.

** Actually, the log-normal. The Gaussian bell curve is applied not to prices but to the logarithms of prices. Small changes in prices are then translated into small percentage changes. (For price P, the differential dP is replaced by dP/P.) For small ΔP's, the log-normal and Lévy-type distributions look almost identical; it is in extrapolating that small-deviation similarity to large ΔP's that the theorists of the Gaussian random walk go astray.

The "random walk" idea can be taken beyond the Gaussian or normal type and recast into a more general form of Lévy flights, dropping the requirement of finite distribution moments. To handle correlations over time between events, it can also be generalized in another way, to have memory: fractal random walks. Such erratic "random" or "drunkard's walks" are an important tool for applying statistical methods to dynamics under conditions of limited knowledge. The random walk is also central to analyzing diffusion (both standard Gaussian and "anomalous" fractal types). In chemistry and biology, the random walk is sometimes called Brownian motion.
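
For a qualitative feel, here is a Gaussian walk next to a crude heavy-tailed stand-in for a Lévy flight - symmetric Pareto-tailed steps with infinite variance, not a true alpha-stable generator:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000

# Ordinary Gaussian random walk.
gauss_steps = rng.standard_normal(n)
gauss_walk = np.cumsum(gauss_steps)

# Crude Levy-flight stand-in: symmetric steps with a Pareto tail (exponent 1.5,
# so the step variance is infinite).  Just enough to show the qualitative difference.
signs = rng.choice([-1.0, 1.0], size=n)
heavy_steps = signs * (1.0 + rng.pareto(1.5, size=n))
heavy_walk = np.cumsum(heavy_steps)

print("largest single Gaussian step:    ", np.abs(gauss_steps).max())
print("largest single heavy-tailed step:", np.abs(heavy_steps).max())
print("total Gaussian displacement:     ", gauss_walk[-1])
print("total heavy-tailed displacement: ", heavy_walk[-1])
# The Gaussian path wanders by many comparable small steps; the heavy-tailed
# path's final position is typically dominated by a handful of huge jumps.
```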

† In the last generation, improvisations have grown up around the failure of Gaussian methods, but this series of ad hoc patches and fixes doesn't get to the root of the problem. Some analysts still just take out large deviations ("outliers") by hand, a kind of data denial. Others appeal to the notion of "exogenous" (outside-the-system) shocks, which destroys the method's predictive (if not its retrospective) powers.

The most sophisticated patch is to make the Gaussian parameters depend on time, the common version being GARCH (generalized autoregressive conditional heteroskedasticity). This is the best you can do within the misguided Gaussian framework; in that wrong framework, the actual (and probably stationary) distribution of price movements looks non-stationary. The time-dependent parameters are supposed to mimic this, but at the cost of largely destroying the method's predictive power.
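
For concreteness, the GARCH(1,1) variance recursion looks like this (the parameter values below are typical-looking placeholders, not estimates fitted to anything):

```python
import numpy as np

def garch11(returns, omega=1e-6, alpha=0.08, beta=0.90):
    """Conditional-variance path from the GARCH(1,1) recursion
       sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1].
       Parameter values are placeholders, not estimates."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()                     # crude starting value
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# Usage on synthetic returns: the time-varying sigma2 is the "patch" that lets a
# Gaussian-at-each-instant model imitate clustering and fat tails after the fact,
# without changing the underlying (mis)specification.
r = 0.01 * np.random.default_rng(5).standard_t(df=3, size=1000)
print(garch11(r)[:5])
```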
