Saturday, May 31, 2008

A portfolio for Extremistan

So friends ask me, what do you do for your portfolio? Are there rules of the road in Extremistan?

It's more of an art than a science, but I have some intuitions that I follow and that have served me well. One is seek good returns; another is don't get cocky. Don't worry about betas, volatilities, and correlations. That stuff is highly questionable at best and outright BS at worst. Everything known for sure about price behavior strongly hints that price distributions are "fat-tailed," subject to "large fluctuations." Moments like the variance are probably not even defined, being either infinite or at least so large and fuzzy that they might as well be infinite.

Average return over some fixed time period is a stand-in for cumulative return, which is the right thing to look at to see how an asset has performed.* The problem is that cumulative return isn't enough. You're not going to buy an asset ten years ago, or whenever the cumulative return period started. You're going to buy it now. So is it worth it now? All you can do is figure out if the asset is now overvalued. "Value" investing amounts to no more than paying a good price or less for something. To check valuation, people use all sorts of numbers, such as the price-to-earnings ratio. My favorite is the price-to-book ratio. Ratios below two are very favorable. From two to four is okay. Above four or five, you're getting into overpriced territory.**
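These rules of thumb are simple enough to sketch in a few lines of Python. The function names and the exact bucket boundaries are my own framing of the thresholds above, not anything canonical:

```python
def inferred_annual_return(present_value, initial_value, years):
    """Back out the average annual return implied by a cumulative return:
    present/initial = (1 + r)**years, so r = (present/initial)**(1/years) - 1."""
    return (present_value / initial_value) ** (1.0 / years) - 1.0

def price_to_book_verdict(ratio):
    """The rough buckets above: below 2 very favorable, 2-4 okay, above 4 overpriced."""
    if ratio < 2:
        return "very favorable"
    if ratio <= 4:
        return "okay"
    return "overpriced"

# An asset that doubled over ten years implies roughly 7.2% a year.
r = inferred_annual_return(200.0, 100.0, 10)
```

The point of the first function is that cumulative return only tells you about the past; the second is the "is it worth it now" check.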

The final principle is that diversification may be hairy, but it's worth it. But you have to think more diversely than many brokers traditionally have, beyond the old trinity of stocks, bonds, and money market. Think of real estate (yes, it's still a good investment, if you pick the right type), and commodities and raw materials (yes, they're volatile, but diversify within the class).

And that's it. I'm with Schwab and settled on three of their funds with the best relative rankings by these criteria: a small- to mid-cap value fund, real estate (with holdings more in commercial than residential real estate, and a lot of international coverage), and an international fund (valuation a little questionable, but needed for diversification's sake).

POSTSCRIPT: Back to another recent excursion in Extremistan for a minute, the water crisis. A friend pointed out that stationarity (an underlying probability distribution that doesn't change in time) is itself an assumption, just like the Gaussianity (bell-curve) assumption. And that's true. If there were definitive evidence of non-stationarity, by all means let's drop it.

But stationarity is a simpler and more primitive assumption than Gaussianity. We have very good reasons, based on everything known about "open" systems with "flow-through" (rather than "closed" systems with fixed totals), to drop the Gaussianity assumption. It's a more specialized assumption than stationarity and thus more likely to be wrong: so says Occam.

Even if stationarity happens to be wrong in the end, there's no reason to assume that the time variation of the statistical distribution is due to human activity. There are all sorts of more likely possibilities. There's a pernicious assumption that "open" systems should be stationary, and, if they're not, it's humans' fault, dammit. In the case of climate, we already know of decadal- to millennial-timescale changes due to solar variability; on longer timescales, changes due to the Ice Ages, continental movements, and shifts in the Earth's orbit and orientation in space. No need to invoke human involvement unless there's some other "smoking gun" that can't be explained better any other way.
---
* Cumulative return = (present value)/(initial value) = (1 + inferred average annual return)^years.

** There are more sophisticated approaches, a particular favorite of mine being the flow ratio. But such detailed analysis of an asset is worth it only if you're investing in a single stock or other asset at a time.


Friday, May 30, 2008

The black hole of climate parameterizations

The theoretical band-aids used to "fix up" climate models after the butchering of the full theory produce a final mishmash: chunks of climate theory rounded out and connected by uncontrolled simplifications of physics too hard to solve, with the details filled in from selected past behavior.

The patching up of conservation law violations and filling in for unsolvable turbulence and water dynamics are accomplished by "climate parameterizations," the rather embarrassing ad hoc-ery inherent in all present-day state-of-the-art climate modeling. Climate parameterizations are an unavoidable corollary of the GCM method. Each climate model "cell" produces a wrong answer, with no estimate of error. Somehow the sum total of these is supposed to produce a globally "right" answer. Regional climate models have even larger problems, because the climate specification on the boundaries of each "cell" is undefined to start with. If anyone comes up with the right answer using this procedure, it's strictly by luck.

These parameterizations "force closure" of the climate dynamics within each cell. "Subgrid" processes are not tracked. Instead climate parameterizations replace the missing dynamics on scales smaller than the grid resolution. They substitute for the dynamics on all but the largest spatial scales (hundreds of miles, too large to resolve even a hurricane), hiding essentially all of the chaotic behavior and most of the full hydrologic cycle (GCMs include simplified evaporation, but not condensation or precipitation dynamics). Some of these parameterizations are based on "reasonable" theoretical conjectures. But most are based on observed climate data - that is, on past climate behavior.
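In caricature, a data-based parameterization looks like the following sketch. This is a deliberately toy Python example with synthetic data, not an actual GCM scheme: the variable names, the "true" relationship, and the noise standing in for unresolved dynamics are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "past climate data": a resolved grid-cell quantity (say, a
# temperature gradient) and the subgrid heat flux it actually produced.
# The random noise stands in for all the unresolved chaotic dynamics.
gradient = rng.uniform(-1.0, 1.0, 200)
flux = 3.0 * gradient + 0.5 * rng.standard_normal(200)

# The "parameterization": one static algebraic fit replaces the dynamics.
slope, intercept = np.polyfit(gradient, flux, 1)

def subgrid_flux(g):
    # All subgrid cause-and-effect collapsed into a line fitted to the past.
    return slope * g + intercept
```

Whatever the unresolved processes were doing in space and time, the model now sees only this fitted line, trained on a limited slice of past behavior.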

Briefly, there are at least three things wrong with these parameterizations.

1. Causality is eliminated within the grid cells. Cause-and-effect relationships unfolding in space and time are replaced by static, algebraic relationships. Subgrid dynamics disappears: chaotic turbulence, much of convection, condensation, cloud formation, and precipitation. Most of the self-organizing phenomena characteristic of our atmosphere are not dynamically simulated.

2. Past performance is no guarantee of future results. The climate system changes on time scales longer than modern climate data can capture. And it's also chaotic, shot through with unique, one-off events that never repeat. By using past climate data, climate parameterizations take an uncontrolled slice through the space of all possible weathers, essentially assuming that all possible weathers are represented by the time- and space-limited pool of available measurements. But this pool is restricted in time, in the spatial and temporal resolution and comprehensiveness of available data and, by its nature, cannot capture climate chaos.

Such an approach amounts to Fourier analyzing the complete climate evolution in space and time, then chopping out all but a limited range of time and spatial scales. The rest is missing, and that pesky chaos at zero frequency has been excised away. But climate processes at different spatiotemporal scales interact with one another, transferring energy, momentum, air, and water from larger scales to smaller and back, as weather features self-organize and dissipate.

3. Circularity of reasoning. To make predictions for a dynamical system, one ideally starts with a complete, defined theory, adds initial and boundary conditions, then solves for the answer. The results can be "cleanly" compared with measurements to see if the theory and any approximations made in solving it were right.

By using past climate data in defining the theory itself, we're "contaminating" the predictions of theory with the "already known answer" - cheating, in effect, although the cheater has copied a probably wrong answer. It's not a "clean" test by any means. In practice, global and regional climate models are continually adjusted to match observed climate. The resulting model looks "right," but that's an illusion. It's actually a massive case of what statisticians call "overfitting." The model has been adjusted to retroactively reproduce past behavior. There's no way to know if it can predict future behavior. More likely, the model will have to be readjusted again, the day after tomorrow, to "retrodict" tomorrow's weather.

The illusion of an answer. You might wonder why climate modelers ever got into what looks like a dead end. The answer is that there aren't, at present, good alternatives to this program of climate modeling approximations. Basic questions would need to be revisited and re-examined from scratch. This is a great open and urgent question in climate theory. The resources that such questions should get are instead used up in chasing illusory improvements in ever-larger and dubious GCMs. More computer power and memory can't solve this problem. It's the modeling procedure itself that's wrong. Better computers will just produce meaningless results more quickly.

While there are climate modelers and scientists guilty of overselling and misrepresenting the reliability and completeness of the GCM program, that sin pales in comparison to the main force behind this drive round and round the climate modeling cul-de-sac: it's political, not scientific. Certain political figures (not just elected politicians, but science policy and bureaucratic types as well, and the eco-fanatics) have a strong (but probably wrong) preconception of what's going on with climate. They want "correct" answers. In a larger sense, the general demand for definitive climate predictions of any kind is the more basic culprit.

The modern GCM approach to climate modeling began in the early 80s and has never left its infancy. By the early 90s, it was very prematurely "drafted" into providing pseudo-definitive climate answers. But in their current form, GCMs can never produce the answers sought or falsely claimed.

POSTSCRIPT: Essex and McKitrick discuss the full range of climate modeling fallacies in considerable, but not overly technical, detail. Leroux and Comby discuss the topic even more extensively, at greater technical depth.


Thursday, May 29, 2008

Climate models: What went wrong

You shall not curse a deaf man, nor place a stumbling block before the blind ....
- Leviticus 19

Climate models of any kind, including GCMs, involve multiple layers of approximation. They are not the full theory. The full theory is unsolvable and cannot even be properly stated in its full complexity, which is why climate models are used in its place. From a mathematical point of view, the key question is the nature of those approximations.

On the grid. The most obvious approximation is the discrete spacetime "grid" that replaces the spacetime continuum. A continuum and any field on that continuum (pressure, temperature, etc.) contain an infinite amount of information and cannot be represented by a finite list of numbers and thus on a computer. The first step is to replace the continuum with a grid of discrete points in space and instants in time.

A set of thermohydrodynamic variables (pressure, temperature, wind, water) is solved for within a grid cell. In place of variables that are functions of continuously varying time and space, we get discrete points and instants. The continuous integrodifferential equations of physics are replaced in the model approximation by discrete difference and sum equations. This approximation gives rise to discretization error, which is bounded and can be controlled by making the space and time cell size smaller. Hence the quest for ever more computer memory, to handle larger and larger numbers of smaller and smaller climate "cells."
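What "bounded and controllable" discretization error means can be seen in a minimal example, far simpler than any climate equation: integrating dy/dt = -y on a discrete time grid with forward Euler (the choice of equation and method here is mine, purely for illustration):

```python
import math

def euler_decay(y0, rate, t_end, n_steps):
    """Integrate dy/dt = -rate * y on a discrete time grid (forward Euler)."""
    dt = t_end / n_steps
    y = y0
    for _ in range(n_steps):
        y += dt * (-rate * y)
    return y

exact = math.exp(-1.0)  # true y(1) for y0 = 1, rate = 1
err_coarse = abs(euler_decay(1.0, 1.0, 1.0, 100) - exact)
err_fine = abs(euler_decay(1.0, 1.0, 1.0, 1000) - exact)
# Ten times more (smaller) time cells shrinks the error about tenfold.
```

For this kind of error, more computer power really does buy accuracy. The trouble, as the next paragraphs explain, is that discretization error is far from the only error in a climate model.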

However, this approximation is far from the only one inherent in climate modeling. The mistaken assumption that it is feeds the illusion that bigger computers are all that's needed to reduce the uncertainties. Given the immense complexity and chaotic nature of climate, it's also not the case that bigger computers are a practical solution even for just this modeling error. Attempts to forecast weather over weeks and months have consistently led to the conclusion that computers much larger than any built, calculating for times longer than the lifetime of the universe, would be needed to cope with weather chaos, which in climate manifests itself as atmospheric turbulence.

Poorly defined statistical averages. In place of dealing with chaos directly, climate models sample sets or ensembles of initial conditions, then average over the samples. The climate modeling fallacy arises from this averaging over undefined model spaces. No one understands the full climate theory well enough to enumerate possible climates and assign them probabilities. (Mathematically, there's no "measure on the space of models.") How do you average? How do you know you've got a representative sample of the space of possible "weathers"? No one knows. The workaround today is more ad hoc handwaving, making convenient simplified assumptions that there's no way to check and that further butcher the theory.*
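A toy chaotic system makes the sampling problem concrete. Here I use the fully chaotic logistic map as a stand-in for "weather" dynamics (my choice, not anything from climate modeling) and evolve one perfectly plausible ensemble of initial conditions:

```python
import numpy as np

def logistic(x):
    # Fully chaotic logistic map, a toy stand-in for "weather" dynamics.
    return 4.0 * x * (1.0 - x)

# One plausible ensemble: 101 initial states, all within 10% of each other.
ensemble = np.linspace(0.10, 0.20, 101)

for _ in range(4):
    ensemble = logistic(ensemble)

spread = ensemble.max() - ensemble.min()
# After only four steps the members scatter across most of [0, 1]: the
# ensemble mean now depends heavily on which states you happened to
# sample, and nothing tells you which sampling is "representative."
```

A different, equally plausible starting ensemble would give a different average, and without a measure on the space of possible states there's no principled way to choose between them.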

Forcing closure on the equations. Discretization of continuous spacetime gives rise to other, more technical modeling errors as well. These additional approximations fall into two broad classes, although these classes of errors interact with each other.

1. Inherent in the complete theory of climate are continuous symmetries of the laws of physics. These laws are independent of translation in space, rotational orientation, and what time it is. Each gives rise to a conserved flow: densities of momentum, angular momentum, and energy. When the theory is discretized to form the numerical approximation, these symmetries are broken and the conservation laws violated. These violated conservation laws (momentum, angular momentum, and energy appearing from and disappearing into nothing) have to be "fixed up" in some way, so as to not produce nonsensical results. These "fixing up" methods, which we'll meet in the next post, themselves introduce ad hoc and uncontrolled approximations.

2. The unchanging identity of a parcel of dry air and a parcel of water gives rise to further conservation laws, relating the flows and densities of water and air. The full theory of climate includes within it the hardest equation of physics, first discovered in the 19th century, the Navier-Stokes equation. It describes the dynamics of fluids (air and water, both in gaseous and liquid states - physicists use "fluid" for both). These equations cannot, even on their own (without the effect of radiation and of the phase transitions of water from ice to liquid to vapor), be solved or even be stated in complete form. Instead, fluid dynamicists in physics and engineering introduce simplified approximations ("forced closure") to convert the fluid dynamics into something that can at least be stated as a complete, self-consistent mathematical problem. Introducing the phase transitions of water and the radiation passing through, being reflected, absorbed, and re-radiated, makes the problem even more intractable. So further approximations (more "forced closures") are introduced.

From a mathematical point of view, these "forced closures" are not controlled approximations. There's no way to bound or estimate the error made in introducing them. In laboratory or engineering applications, we have an "out," namely, controlled experiments that provide an alternative source of insight into the behavior of fluids. We have no controlled experiments for the atmosphere, with its mix of air, discontinuously changing water, and radiation.

Known unknowns and unknown unknowns. Reliable knowledge in the sciences arises from controlled contexts: deduction from explicit assumptions, laboratory experiments, mathematical approximations with bounded errors. In such situations, even if we can't arrive at an exact answer, we know the right questions to ask and get a range of the numerical values we seek. In climate modeling, we are lost. Not much has been attempted in the way of rigorous deduction from the full climate theory, partly because it's so complex. We have no controlled laboratory experiments. And the leap from the full, unsolvable theory to any known model (including the GCMs) is made with uncontrolled approximations. We might know the right question to ask, but have only a vague idea of the numerical range we're aiming for - perhaps on the order of five or so degrees C. It's quantitatively too fuzzy to serve for the kinds of precise conclusions that people seek, temperature changes on the order of tenths of a degree C, or even a full degree.

What is climate anyway? And there's a more basic problem: we don't know what "climate" means, unless it means the exact state of the whole atmosphere and oceans at one instant. That's far too vast to comprehend or measure, and it might not even be necessary to know all of it. What's lacking is a reduction of "climate state" that can serve as a simplified abstraction to track. Such a state would need to track something about the state and flow of the air, the heat, and the water. There's no "temperature of the Earth," in spite of the meaningless numbers bandied about. All such intensive thermodynamic measurements are local and vary in space and time. We need something that captures the spatially spread-out nature of climate and the fact that it's controlled by flows, not static reservoirs, of heat, air, and water.

Just as there are few controlled approximations in modeling climate dynamics, there's no controlled and well-defined "state" of climate even to talk about. These are open scientific questions. Unfortunately, they're almost always taken as somehow already answered or are never even asked. But they need to be asked, and we need to face the fact that, at present, there are no good answers.

POSTSCRIPT: Chapter 3 of Lorenz's chaos lectures discusses the origins of GCMs from the point of view of someone who was there.
---
* Readers of this blog might remember this problem from over a year ago in a very different context, the failure of the "multiverse" or "landscape" picture of string theory. There was no way in that case to specify a list of universes and assign their probabilities either.


Wednesday, May 28, 2008

Climate models: Their origin and nature

An infinite number of approximation schemes are available to turn the full climate theory into something tractable. In practice, approximation methods have fallen into a few distinct classes.

Textbook cases. There are very simple approximate models that can be solved exactly. These typically exclude convection and include evaporation, condensation, and clouds only in a very simplified form, if at all. Such models are frequently used in climate textbooks. A slightly more complex class of models requires computers to achieve an approximate (but typically quite accurate) solution. (That is, the theory is replaced by an approximate model, which itself is then subject to further approximations in order to solve it.) The earliest versions of both classes of models date from the end of the 19th and the early decades of the 20th centuries.* Early versions of numerical approximations, which replace continuous time and space with a discretized "grid" of time instants and spatial points, were developed during these periods. The calculations to implement these approximation methods are tedious and had to be done by hand. (In the 19th century, a "calculator" was a person who carried out this arithmetic drudgery!) In the 1920s, 30s, and 40s, electromechanical calculators, forerunners of today's electronic handheld calculators, were pressed into service to carry out such work. It is worth stressing: the computers just implement a numerical approximation scheme; they do no physics and don't "know" the approximation method except to the extent that they are programmed by humans.

The rise of large-scale computer models. Around that time, the British physicist Lewis Fry Richardson, following up the suggestion of the Norwegian Vilhelm Bjerknes, collected the pieces of the full climate theory as we know it today. The fluid dynamics and thermodynamics of air and water vapor were discovered during the 19th century, including the famous Navier-Stokes equation, which describes the motion of turbulent fluids. At the end of that century and the first decade of the 20th, the nature of radiation and radiative heat transport came to be understood for the first time. With all these necessary pieces, Richardson wrote down a simplified version of the complete climate dynamics. He postulated that hundreds or thousands of human "calculators" could be set to doing the necessary arithmetic to implement a numerical "grid-ified" approximation scheme for the atmosphere.

From the start, Richardson's first attempts to predict weather ran into just the problems that would subsequently occupy climate and atmospheric scientists for the rest of the century. He could never piece together enough initial condition information to properly start the integration forward in time. He ran into a version of chaos, although he failed to understand the full nature of what he had stumbled into. The human-implemented arithmetic calculations needed to carry out the method were so slow that weather prediction could not be done in real time. Starting on day one, he got to making a prediction for the following day's weather only after six weeks - and it was wrong. Richardson had posed the full climate problem, for the first time, as a problem in mathematical physics, and it quickly came to be perceived as unsolvable.

In the early 1940s, the invention of the electronic computer (first built with vacuum tubes, later with transistors and transistors on "chips") promised to transform the entire problem by making possible a large number of fast, accurate calculations. Better numerical approximation schemes (many of them rooted in the work needed to design and test the first nuclear weapons) became available. By the 1950s, people were seriously talking about making accurate weather predictions, not just for tomorrow or next week, but long-term, months or even years. Weather was one of the first non-military applications of these computers. Fantasies about controlling weather were floated as well, since accurate prediction and control are closely related.

Modern climate models, called atmosphere-ocean general circulation models (AOGCMs, or GCMs for short), have their roots in the postwar decades, the 1950s and 60s. They were put into their contemporary form in the 1980s and continue to serve as the main basis for the most complex long-term climate predictions.

And then chaos happened. Readers of this blog know what also happened during that period: the discovery of chaos by Edward Lorenz at MIT. By the early 70s, it was clear that long-term weather forecasting was doomed. A chasm opened up between hope and reality and between "weather forecasting" in the popular sense (limited to a week or two ahead) and "climate prediction" for the long term. Modelers retreated to a fuzzy distinction between "weather" and "climate," a distinction that has never been properly defined. Climate had to be defined statistically, as a set (or ensemble) of possible weathers, with some attached probabilities. Long-term predictions of climate would have to sample this ensemble, then average the results weighted by their respective probabilities. Because the full climate theory equations could not be solved accurately, heavy use of repetitive past weather situations to make future predictions came into play: in ordinary weather forecasting, known as synoptic meteorology; in "climate" prediction, as "climate parameterizations."**
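Lorenz's discovery is easy to reproduce. Below is a crude sketch (forward Euler, standard textbook parameters) of his 1963 three-variable system, started from two states differing by one part in a million; everything about the numerical setup is my choice for illustration:

```python
def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Lorenz's 1963 system, advanced with a crude forward Euler step.
    x, y, z = s
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-6)   # one-part-in-a-million difference
max_sep = 0.0
for _ in range(2500):         # 25 model time units
    a, b = lorenz_step(a), lorenz_step(b)
    sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    max_sep = max(max_sep, sep)
# max_sep grows to the size of the attractor itself: the microscopic
# difference has become two entirely different "weathers."
```

This exponential divergence of nearby states is exactly why forecasts beyond a week or two are doomed, and why "climate" had to be recast statistically.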

Climate models: Successes and failures. The accuracy and control embodied in these GCMs are very uneven if we disaggregate the models into the various pieces that come from the fundamental theory. Before the next couple of postings explain what's wrong with the models, it's a good idea to step back and point out what's right about them.

Mechanical equilibrium (pressure gradient balancing the pull of gravity downward) is the best-respected part of the whole standard climate picture. This piece gives us the pressure and density profiles as functions of altitude, as well as the atmospheric motions we know as winds. It's the thermal and chemical parts (heat transport and water phase transformations, respectively) where things get much hairier, because there is intermediate-scale structure smaller than the whole Earth but bigger than little parcels of air that are close to thermodynamic equilibrium: clouds, storms, cyclones, anti-cyclones, fronts.

Radiation and evaporation are the best controlled approximations in that sector of the models. Convection is under much poorer control, and turbulence essentially not at all. Neither are condensation and precipitation. "Global warming" due to infrared (IR)-opaque gases arises from the first two pieces (radiation, and the major enhancement of clear-air water vapor due to the much smaller effect of increased CO2 and CH4 concentrations). Not surprisingly, conventional climate models currently get these parts pretty well.

But the other parts, not under good control, are just as important. Convection is a significant heat transport mechanism in its own right and plays an essential role in getting water vapor above the bottom-most layer of the atmosphere to higher altitudes where it condenses into clouds. Turbulence embodies the chaotic, unpredictable evolution of climate. Condensation and precipitation complete the hydrologic cycle, form a major part of heat transport, and encompass the formation and dispersal of clouds. These in turn have crucial effects back on the radiation. Climate models don't get these parts well or at all. Not surprisingly, therefore, standard climate models overstate the degree of "global warming" due to IR-opaque gases: they get the warming parts, but do poorly with the anti-warming compensatory mechanisms, clouds above all.

The final two postings on climate models will drill further into these problems.
---
* It was during this period that the Swedish physicist Arrhenius first noticed the effect of IR-opaque gases such as carbon dioxide (CO2) on the temperature lapse rate.

** In weather forecasting, if limited to no more than about two weeks ahead, synoptic techniques have a limited but real justification. The time horizon of prediction is short enough that chaos does not come into full play.


Tuesday, May 27, 2008

Escape from the greenhouse

What is a greenhouse? Is the Earth's climate a greenhouse? Why are infrared (IR)-opaque gases misnamed "greenhouse" gases?

What is a greenhouse? A greenhouse is an environment artificially controlled to maintain equable conditions of temperature and humidity suitable for growing plants in colder and highly variable climates. It's a controlled "climate box."

There is one source of heat in the Earth's lower atmosphere (visible and ultraviolet radiation incoming from the Sun) and three means by which the radiation, converted to infrared, can escape upward. In one case, it remains IR radiation and escapes as such. In the other two cases, convection and evaporation, the IR radiation is converted to a form of heat in matter (air and water vapor) before it moves upward. The first mechanism is, by itself, easy to understand and straightforward to control. The other two are much harder to either control or understand.
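The first, radiative mechanism really is easy: pure radiative balance supports a textbook back-of-envelope calculation. The sketch below uses standard values (solar constant about 1361 W/m^2, planetary albedo about 0.3) to get the Earth's effective radiating temperature; it says nothing about convection or evaporation:

```python
SOLAR_CONSTANT = 1361.0  # W/m^2 at the top of the atmosphere (standard value)
ALBEDO = 0.3             # commonly quoted planetary average
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/m^2/K^4

# Absorbed sunlight, spread over the whole sphere (hence the factor of 4),
# balanced against blackbody emission: SIGMA * T**4 = absorbed.
absorbed = SOLAR_CONSTANT * (1.0 - ALBEDO) / 4.0
t_eff = (absorbed / SIGMA) ** 0.25   # about 255 K
```

That one-line balance is about as far as "easy" goes. The other two heat flows admit no such back-of-envelope treatment.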

What a greenhouse does is to put a lid on the escape of heat through convection and evaporation. These two upward heat flows are trapped and turned back downward. OTOH, a greenhouse allows radiation in and radiation out unimpeded or only mildly controlled. Because radiation flow is easy to control, conditions in the greenhouse - temperature and humidity - can be regulated with a fair degree of accuracy. That's the point of a greenhouse. The walls of a greenhouse also put the kibosh on winds, shutting off another source of climate variability.

Is the Earth's climate a greenhouse? No. A greenhouse is close to a "pure radiative heat transport" situation. The Earth's climate is strongly influenced by the other two heat flow mechanisms and can't be considered a greenhouse, even as an approximation. Greenhouses are built because the Earth's climate conditions are not equable, especially in temperate regions that experience large daily and seasonal swings of temperature and humidity. The closest natural situation on Earth to a greenhouse is the tropics, and even there conditions vary a lot over the year. Upward heat convection and evaporation are, if anything, stronger in the tropics than elsewhere.*

But there's a deeper point. As concentrations of IR-opaque gases rise, they put a larger obstacle in the way of heat escaping from the surface as radiation, but they do nothing directly to affect the other heat flows (convection and evaporation). IR-opaque gases don't make the Earth's climate more "greenhouse-y" in fact. To do that would require strong limits on the other forms of heat transport, just as a real greenhouse does. But the IR-opaque gases modify the radiative heat flow - the opposite of a greenhouse.

Why the "greenhouse" effect and "greenhouse" gases? A posting last year discussed the origins of this misguided metaphor in both popular and scientific misunderstandings about heat transport from a century or more ago. In 1909, English scientist R. W. Wood proved that greenhouses don't "trap" radiation - quite the contrary.

Unfortunately, the bad metaphor stuck in decades of popular books and scientific texts on climate. Climate and weather books often flag the faulty double metaphor (greenhouses don't "trap" radiation, and the Earth's climate isn't a greenhouse anyway). But most scientists have given up on trying to fix it. Some books use other metaphors as catchphrases and mnemonics, like "atmosphere effect," for what is in fact a complex series of heat flow constrictions and diversions. Last year, I used the fairly exact analogy of a constricted garden hose.

The "greenhouse" and "greenhouse gas" language is fallacious through and through. Now that they have contributed to the rise of the "global warming" hysteria, these runaway bad metaphors have done far more damage than anyone could have imagined 50 or 100 years ago. The related bad metaphor of "heat trapping," rarely stated in explicit form, also lurks in the background and adds to the confusion.

A lesson from greenhouses about control and predictability. Armed with a correct understanding of greenhouses and why Earth's climate isn't one, we can see that greenhouses exemplify a very important point about control and prediction of climate.

Greenhouses have a steady climate inside because they're "radiation boxes." Radiation transport is the simplest part of the climate problem and, by itself, the easiest to predict. That's why greenhouses work: they rely on "radiation in-radiation out" only. Part of the trick of greenhouses is that they also shut off (or strictly confine) the other, "wilder" parts of climate, convection-turbulence and evaporation-condensation. If these forms of heat flow were allowed to roam wild and free, the temperature and humidity in the greenhouse could not be controlled or predicted. That would destroy its purpose and make the greenhouse no different from the general lack of predictability and control in the atmosphere - the real weather we face every day.

To paraphrase Foster Morrison again, the degree of isolation controls the degree of predictability. That's especially the case when climate has two parts wildness (chaotic-turbulent convection and evaporation-condensation) to one part easy (radiation). A greenhouse isolates a small piece of the atmosphere from the larger wildness outside and allows that piece to be heated and cooled by a steady and thoroughly nonchaotic flow of radiation.

The Earth's climate as a radiation box. It might be objected that viewed from the outside, the Earth's atmosphere is a radiation box. After all, there's no air or water vapor in outer space, so radiation is the whole game. Radiation flows in, and only radiation flows out. That's correct, but it doesn't make the Earth's atmosphere a greenhouse.

The ultimate reason is one of relative scales. In the Earth's atmosphere, the scale of convective heat transport is tens or hundreds of meters; the scale of evaporation and condensation (as clouds), a kilometer or so. The latter is six to 12 times smaller than the height of the lower atmosphere, the former 20 to 100 times smaller. There is no sense in which the Earth's atmosphere as a whole can be viewed as a single greenhouse - it's too big. It can fit many, many greenhouse-sized boxes. But none of these imaginary boxes would be closed; they would have to be open to air and water flows and thus not greenhouses. While they would have the right size, they would not function as greenhouses, which work because they isolate a small piece of atmosphere from the rest.

Without being closed to air and water flows, such imaginary would-be greenhouses couldn't act as greenhouses, with all their steadiness and predictability. And, because of its size, neither can the atmosphere as a whole.

POSTSCRIPT: Freeman Dyson, one of the last representatives still alive from the heroic mid-century era of physics, writes about "global warming," carbon dioxide, and plants in the New York Review of Books.
---
* Thus the troposphere-upper atmosphere boundary is highest in the tropics, because of that strong upward "push." Upward convection and evaporation are weakest in the polar regions, and that same boundary is low over the poles, sometimes (during the polar winter) almost touching the surface ("sky falling to the ground").


Monday, May 26, 2008

Climate theory, models, and metaphors*

This and the next few postings cover a final technical examination of climate, a look at climate models, as promised earlier. A larger context is helpful too. Much of the lopsided and misguided debate on climate change is couched in terms of metaphors, necessarily fuzzy and usually linked to faulty analogies or models. Models in turn are frequently confused with climate theory: the unique scientific truth about climate, stated as integrodifferential and algebraic equations, but also unsolvable.

The full theory of climate contains:
  • The mechanical dynamics of density, pressure, and wind;
  • The nonequilibrium thermodynamics of heat transport in the forms of radiation, the hydrologic cycle (evaporation, condensation, and precipitation), and convection (often turbulent and thus chaotic);
  • The nonequilibrium thermodynamics of water phase transformations; and
  • In a more complete statement, other forms of chemical transport and transformation and the thermohydrodynamics of the oceans.
To be solved, the theory must be supplemented with initial conditions at some start time and spatial boundary conditions. The dynamical part of the theory alone needs a bunch of pages of graduate-level mathematics to state. The supplemental conditions require a detailed knowledge of atmosphere and oceans impossible to obtain, making the theory impossible even to state fully in practice.

Even if it could be fully stated, the dynamics itself cannot be solved. Suitably butchered, with the "hard parts" removed, parts of the theory can be solved, a fact that often misleads students (and not only students) into thinking that the full theory can be. Two properties of the heat and air transport and phase transformations render the problem intractable:
  • Chaos, discussed extensively in a recent series of postings: exponential sensitivity to initial conditions, or, equivalently, essentially nonperiodic behavior.

  • Discontinuity of water phase transformations, taking place in an infinitely complex pattern over the whole atmosphere and the atmosphere-land-ocean boundaries. These transformations affect the state of the matter (air and water mixture), but also affect the heat transport, being critical steps in the hydrologic cycle.
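The first of these, exponential sensitivity, is easy to see in a toy system. A minimal sketch - nothing to do with the actual climate equations, just the logistic map in its chaotic regime - follows two trajectories that start a trillionth apart and disagree completely within a few dozen steps:

```python
# Logistic map x -> r*x*(1 - x) in its chaotic regime (r = 4.0).
# Two trajectories starting a hair apart illustrate exponential
# sensitivity to initial conditions: their separation roughly
# doubles each step until it saturates at the size of the attractor.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-12)  # perturbed by one part in 10^12

for n in (0, 10, 20, 30, 40):
    print(n, abs(a[n] - b[n]))  # the gap grows until it's order one
```

Weather forecasting faces the same arithmetic: an unavoidable error in the initial state, however tiny, is amplified until the forecast is no better than a guess.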
Approximations as rigorous method versus approximations as acts of desperation. When faced with such theories, the reaction of mathematical physicists and other quantitatively-oriented scientists is to substitute approximations of the full theory for the full theory itself, pick such approximations as are solvable, and attempt to justify the approximation.

All approximation methods aim at producing a tractable substitute for an unsolvable problem; their method is ranking different pieces of the problem in some order of "more important" (numerically bigger) and "less important" (numerically smaller). A starting approximation works with the most important pieces first; it can be refined and made more accurate by successively adding back in the less important pieces that were initially neglected. For this approach to lead to reliable results, there has to be a rigorous and controlled method for identifying, isolating, and ranking these pieces of the theory. As Foster Morrison puts it in his perceptive and useful Art of Modeling Dynamic Systems, the degree of precision is the degree of isolation: isolate one cause from another, one effect from another, one mathematical deduction from another.**

In mind-bogglingly complex problems like climate, there is no such method. Theorists make the leap anyway, just so they can get to something tractable. But in so doing, they are making only guesses of what's bigger and what's smaller. In some cases, partial justification can be found by appealing to observed climate behavior, which can (in favorable circumstances) hint that some things are more important than others. In other cases, the guesses are simply leaps in the dark, adopted for convenience, or suggested by historical precedent. And these considerations haven't even gotten us past the chaos problem. Only an infinitely detailed specification of climate at one instant of time, followed by an exact solution of the dynamics, can overcome this difficulty. We lack both, and so chaos limits, for example, weather forecasting to no more than two weeks ahead. Attempts to forecast for longer periods amount to guesses no better than random. We have to fall back on the notion of climate as a rough range, or a chaotic strange attractor. That attractor of behavior is a starting point for thinking about "climate" as something other than just "the infinitely complex instantaneous state of the atmosphere and oceans."

Theory replaced by models, and models reduced to often misleading metaphors. From a scientific point of view, accurate but unsolvable climate theory is, in practice, always replaced by solvable but uncontrolled climate models - models with limited usefulness, at best. From the point of view of the man in the street and the incessant chatter of environmentalists and the media, climate theory is a nonstarter. As a rule, in that context, we rarely rise even to the level of the simplest models and, if we think about it at all, casually assume that such models are the last word on the subject - instead of a first and very preliminary word. More often, we're stuck swimming in an ocean of manufactured ignorance, pelted by a downpour of misleading metaphors.

A series of postings last year laid out these runaway bad metaphors and the climate model fallacies often implicit in them.

Fallacy #1. Radiative heat transport is the whole game, controlled by the concentration of infrared (IR)-opaque gases, such as water vapor, carbon dioxide (CO2), and methane (CH4).

But convection (including turbulence) and the cycle of evaporation, condensation, and precipitation also play a large role in Earth's climate. Radiation is not the whole game, and the heat transport is a complex three-way interplay of the water cycle, convection, and radiation, all acting alone and reacting off of one another. They all affect one another in a nonlinear and nonlocal way (nonlocal because radiation moves almost instantaneously through the clear air, in contrast to air and water). As we'll see in the next few postings, the IPCC's predictions are based on enhanced CO2 concentrations (a small effect by itself), greatly amplified by the feedback of enhanced evaporation and clear-air water vapor. These typical and conventional climate models have a much harder time capturing convection, turbulence, condensation, and precipitation.

Once water evaporation is enhanced, no one knows how it will get divided between clear-air vapor and clouds. And clouds, as we know, have profound effects on climate, all cooling (lowering temperature).

Fallacy #2. The Earth's climate is a greenhouse. We'll look at this fallacy more closely in the next posting.

Fallacy #3. The obsession with temperature. Temperature, like pressure and humidity, is a local thermodynamic measurement. There is no "temperature of the Earth" - it's a whole temperature field distributed in space and changing in time. Confusions of this sort are shocking, not when committed by someone not educated in physics, but precisely when committed by scientists and scientifically-educated nonscientists. Without the political hysteria, fallacies like this would be correctly viewed as laughable. Furthermore, even locally, temperature is not enough to specify the state of the atmosphere. You also need at least humidity and wind variables.

Fallacy #4. The confusion of temperature and heat. Temperature is not heat. They're even measured with different units, and they represent different physical phenomena. Heat is disorganized energy, disorganization itself measured by entropy. It's a "bulk" or extensive quantity: It can be localized, flow in space, and summed over volumes. Temperature is a local or intensive quantity. It measures how an increment of energy in a vanishingly small volume is related to an increment of disorganization or randomness (entropy) in that same volume. It's localized by its definition and doesn't flow in space or sum over volumes.

But even though they're not the same, there is an intimate relationship of heat and temperature. For a homogeneous system that does not suffer any discontinuous phase transformations (like melting or boiling), the heat capacity relates a small increment of heat absorbed to the small increment of temperature it produces. If different parts of the system have different temperatures (like the climate), differences in temperature are closely related to flows of heat - the Second Law in action.

A system that does suffer discontinuous phase transformations - ice to liquid water, liquid to water vapor, and back - is altogether more complicated. A certain amount of heat, independent of changes in temperature, is needed to change ice to liquid water or liquid water to vapor. The same amount of heat is released by the opposite transformations. These heats of transformation, or latent heats, break the connection between increments of heat and increments of temperature. These heats, instead of raising temperatures, go into "loosening" the phase of the water - say, breaking up a tightly bound crystal of water molecules (ice) into a smooth fluid of water molecules that touch but slide past one another (liquid water). Our climate is a nonhomogeneous, nonequilibrium collection of flows suffering from just such discontinuous changes in water state.

Fallacy #5. It's heat that determines temperature. Actually, it should be clear by now, it's heat flow that determines temperature. The Earth's climate, from a thermal point of view, is an open system. Visible and ultraviolet radiation from the Sun flows in and is transformed into heat radiation, then flows back into space. Related fallacies include the "heat trapping" metaphor, as if the heat is locked in a closet and can't get out. IR-opaque gases don't trap heat; they change how it flows out.

Trapped in the greenhouse. The "greenhouse" metaphor (fallacy #2) itself is worth a closer look, not only because it's widely misused, but because a proper understanding of how a greenhouse works leads to a different, unexpected, and more accurate picture of climate and the relationship between controllability and predictability. We'll take a short and final detour through the greenhouse next.

MENTION MUST BE MADE of the passing of Edward Lorenz, the modern (re)discoverer of chaos, so tantalizingly anticipated by Poincaré. Twentieth-century science will be remembered for a handful of discoveries - the genetic code, the expansion of the universe - and for a few theories: relativity, quantum mechanics - and chaos. His original 1963 paper here (PDF).

Read more about Lorenz here, and consider his wonderful popular lectures, The Essence of Chaos (1993).
---
* This posting to an extent parallels Essex and McKitrick's chapter by the same name. (Their book is now available on the US Amazon.) I also make exceptionally heavy use of postings from last year.

** Morrison's book is a splendid introduction to dynamics for the mathematically-minded non-specialist. He starts without even calculus, managing a kind of "dynamics for the masses" by looking at compound interest, clocks, and thermostats.


Friday, May 23, 2008

It's Memorial Day weekend!

Kavanna on vacation - back in a few days.

Thursday, May 22, 2008

Sigh...

... and here's your drug war at work.
... [Atlanta] narcotics officers were required to serve nine warrants and make two arrests per month, or they'd risk losing their jobs. This led to routine lying on warrants and bullying and intimidation of informants. What we don't know is how many people were wrongly raided, arrested, and jailed because of all of this.
It's time to reconsider this nightmare. We got rid of Prohibition after 12 years or so - why is the drug war still going on after more than thirty years?


Wednesday, May 21, 2008

You read it here first

... sorry, can't help but gloat:
Global warming isn't to blame for the recent jump in hurricanes in the Atlantic, concludes a study by a prominent federal scientist whose position has shifted on the subject.

Not only that, warmer temperatures will actually reduce the number of hurricanes in the Atlantic and those making landfall, research meteorologist Tom Knutson [of NOAA] reported in a study released Sunday.

[... snip ...]

Another group of experts, those who study hurricanes and who are more often skeptical about global warming, say there is no link. They attribute the recent increase to a natural multi-decade cycle.
Both groups are right, albeit in different ways. But you read it here first. Kudos to Instapundit for staying on top of this.

And the NOAA website puts it this way:
NOAA Model Projects Fewer, But More Intense Hurricanes Late This Century
(Emphasis added.) You read that here first too.


Tuesday, May 20, 2008

Fat tails and outliers: The water crisis

Recently, the occasional severe droughts that afflict the drier parts of the US have given rise to a predictable whine of "correct" journalism that wrongly attributes the "water crisis" to human-caused changes in water supplies, instead of correctly to a combination of rising demand and misguided scientific and policy assumptions. Like the fallacies of extreme weather and quantitative finance, the water panic is ultimately rooted in an essentially wrong picture of statistical fluctuations in open systems. Amplified by "correct" herdthink, it has turned into a perfect storm of bad science and bad journalism.

Statistics and stationarity. A statistical distribution is a substitute for full knowledge of a collection (possibly infinite in size) of particular instances. It's a way of stating limited knowledge. Stationarity means that, while individual instances can differ and limited samples might show time dependence as the distribution is sampled, the statistical distribution itself doesn't change. It's not always right, but for most cases, it is a good starting assumption in the absence of compelling evidence to the contrary.

The problem with stationarity is that it is often combined with another, much more questionable assumption, Gaussianity of the distribution: the distribution is assumed to be a bell curve, arising by the classical Central Limit Theorem from more primitive proto-distributions with finite moments. For open systems - systems that exchange flows with the outside world - it's well-established that Gaussianity is usually wrong. Better to assume the more general case, a Lévy distribution with some of its moments infinite. Recall that such distributions have "fat tails" and support large deviations from the mean ("black swans").

Wrong conclusions prompted by wrong assumptions. For example, a stationary but non-Gaussian distribution, if it is sampled periodically, will produce moments that look as if they're increasing in time, perhaps implying a time-dependent distribution. In fact, this is a well-known fallacy in statistical reasoning. The moments are simply diverging. The distribution is stationary, just not bell-curve. There are other, better ways of analyzing samples from statistical distributions in this case.*
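This divergence is easy to demonstrate. A minimal sketch: draw samples from a standard Cauchy distribution (the simplest fat-tailed, Lévy-stable case, with no defined mean or variance) and watch the running sample variance refuse to settle, while the same statistic for Gaussian draws converges:

```python
import math
import random

random.seed(1)

def cauchy():
    # Standard Cauchy deviate: tan(pi*(U - 1/2)) for uniform U.
    # Its mean and variance are undefined - the fat-tailed case.
    return math.tan(math.pi * (random.random() - 0.5))

def running_sample_variance(xs):
    out, s, s2 = [], 0.0, 0.0
    for n, x in enumerate(xs, 1):
        s += x
        s2 += x * x
        out.append(s2 / n - (s / n) ** 2)
    return out

n = 100_000
var_gauss = running_sample_variance(random.gauss(0, 1) for _ in range(n))
var_cauchy = running_sample_variance(cauchy() for _ in range(n))

print(var_gauss[-1])   # settles near the true variance, 1
print(var_cauchy[-1])  # jumps upward with each new extreme draw - no limit
```

Sampled periodically, the Cauchy "variance" looks like a quantity growing in time. Nothing about the distribution is changing; the moment simply doesn't exist, and the estimator chases it forever.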

It's been known for over a century that rainfall and other water-related flows in our environment do not follow Gaussian random-walk behavior. Their fluctuations exhibit violations of this assumption, of broadly two types: memory or time correlations (so that each step in the walk is not independent of previous ones); and divergent moments (infinite variance or standard deviation, say). Either one of these should be enough to signal that Gaussian assumptions should be loosened. Unfortunately, a combination of theoretical prejudice and convenience has led policy-makers to stick to wrong assumptions anyway, just as financial traders and regulators (until recently) have been under the spell of the classical Gaussian random walk to explain price changes.
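The first kind of violation, memory, can also be sketched in a toy model. Here a random walk whose steps repeat the previous step with some probability - a crude, hypothetical stand-in for the persistence found in river and rainfall records, not a hydrologic model - spreads much faster than the square-root-of-n law that independent steps would give:

```python
import random

random.seed(2)

def walk(n, persistence=0.0):
    # Random walk of +/-1 steps. With persistence p, each step repeats
    # the previous one with probability p; with p = 0 the steps are
    # independent, which is the classical Gaussian random-walk assumption.
    pos, step, path = 0, 1, []
    for _ in range(n):
        if random.random() >= persistence:
            step = random.choice((1, -1))  # otherwise repeat the last step
        pos += step
        path.append(pos)
    return path

def rms_displacement(trials, n, p):
    # Root-mean-square final displacement over many independent walks.
    total = 0.0
    for _ in range(trials):
        total += walk(n, p)[-1] ** 2
    return (total / trials) ** 0.5

n = 1000
print(rms_displacement(200, n, 0.0))  # ~ sqrt(n), about 32, for independent steps
print(rms_displacement(200, n, 0.9))  # several times larger once steps have memory
```

Reservoir engineering built on the independent-steps assumption systematically underestimates how far, and for how long, cumulative flows can wander from their mean - which is exactly how "surprise" droughts and floods are manufactured on paper.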

And now to bad journalism. There has been a recent water crisis in the American West. Actually, it happens roughly once a decade - the current one is in the midst of disappearing after an exceptionally snowy winter. Such crises are increasingly attributed to humans changing the available water supply and destroying "stationarity." In fact, it's the accompanying Gaussian or bell-curve assumption that's wrong.** The "throughput" of annual water flow available for human use changes year to year. But the underlying statistical distribution doesn't have to change.

And then to bad policies. Policies built on wrong Gaussian assumptions will lead to the same characteristic mistake over and over: the conclusion that fluctuations should be frequent but small deviations from a well-defined mean. In reality, the distribution has much larger fluctuations, which hit regulators, policy-makers, and ordinary folks again and again as "surprises" or "crises." But there's no crisis, just wrong assumptions. Gaussian stationarity was never a reality, only an assumption, and a bogus one at that.

In the case of water flow, increasing demand raises the chance that, in any given year, the water system will be flowing below the threshold needed to meet that demand. There are three solutions.

Lower demand. Much water use in the American West is heavily subsidized by state and federal governments. Much of it goes into marginal and inefficient agriculture (for example, growing alfalfa and rice - both monsoon crops - in the California Central Valley). Reduce those subsidies, and you'll reduce demand.

Boost reserves. Just as banks and other financial institutions should be required to hold on to larger reserves to meet "black swan" crises, so a hedge against a large downward fluctuation in water flow is to build more and larger reservoirs, and to manage them more conservatively.

Change expectations. If scientists, policy makers, politicians, and voters carry around in their heads a false concept that water flow is basically steady and predictable, they'll treat the inevitably different reality as a "crisis." If everyone involved understands that water flow is subject to large changes year to year and not predictable beyond outlining a rough range, the policies will be different and more oriented around hedging better against drought by storing water during good years.

The Earth's water flow is deterministic, like all aspects of climate. But it's also chaotic. Fat-tailed statistical distributions are characteristic residues of chaos, as is the typical pattern ("intermittency") of clusters of good years and bad. Caveat emptor.
---
* Use the absolute linear range, for instance.

** More technically, it's the assumption that the distribution can be estimated by estimating moments from past water flow data. If some or all of the distribution moments are infinite, this technique doesn't work and never worked. Scientists, engineers, and policy-makers who thought otherwise were simply fooling themselves.


Monday, May 19, 2008

Your Congress at work...

... with your hard-earned tax dollars: the revolting farm bill already passed in the House and now making its way through the Senate, appropriating $300 billion for, for example, those ridiculous ethanol subsidies - you remember those from yesterday, no? But it's also for outright corruption, like paying farm price supports to people who happen to own arable land, don't farm it, and live somewhere else. And remember the evils of corn syrup? This bill adds to domestic sugar subsidies: so watch your food prices go up.

Although Bush's motives are suspect - he failed to veto a single spending bill when the Republicans controlled Congress - he will apparently do the right thing and veto this monstrosity, even though many congressional Republicans are supporting it. It will probably pass with a veto-proof majority in the Senate, as it did in the House. Perhaps Congress can be shamed into not overriding Bush's veto, but don't count on it.

The DC Examiner excoriates the bill here, and the Washington Post casts its skeptical eye. And don't miss these charming little items.

It makes McCain's upcoming campaign easier, though. As we pointed out earlier this year, he merely needs to run against Congress. Oh yeah, McCain needs to vote against this thing too, and he did. Clinton and Obama did not.


Sunday, May 18, 2008

The wages of enviro-politico-media hysteria

Shocking, no? But it's true, at least truer than all that "eat locally, fart globally" crap. The ethanol craze did this. Because it uses food and not by-product or waste, it raised food prices, markedly. Here, it was enough for us to notice. There - in poor countries, where the margin of life is much thinner - they did more than notice. They're going hungry, occasionally rioting, and even dying. Mark Steyn explains.

Important media outlets have made their point about the ethanol idiocy recently. But did they make the point that it was the eco-fanatics ... the media ... and their enslaved politicians who pushed these policies on us, say, five or 10 years ago? All to deal with the fake crisis of "global warming"?

This sort of thing illustrates perfectly the stupidity inherent in the conventional news media: in the end, even when they're wrong, they're always right, but the government is always wrong, and everything can be made well if they can pressure the government into some stupid policy or other ... until the next fake crisis, which will be "solved" by manufacturing a real one ....

POSTSCRIPT: I'm so mad about this, I might need one of these.


Friday, May 16, 2008

The New Deal reconsidered: Reforming the welfare state

A few postings ago, I alluded to the approaching crisis of the welfare state. If it is to survive in some form and not bankrupt the federal government and create the largest economic and political crisis since the Great Depression, we need to start now to negotiate the critical choices. The Republicans had their chance to get the ball rolling in the 1990s, but (welfare reform apart) threw away their opportunity, then headed off in a very different direction after 1998. The result was a strange parody of liberalism, the Republicans' attempt to create their own version of vote-buying on a national scale with two new middle-class entitlements, in education and health care. A latter-day version of Nixonism, it worked for a few years, but has now lost credibility and heightened the federal government's burden.

It's not as if the problem is new. A spate of books published in the 90s (books by Jonathan Rauch, David Frum, Robert Samuelson, and Alice Rivlin) and, more recently, histories (like those of Goldberg and Shlaes) and policy briefs (like Bruce Bartlett, George Shultz, Charles Murray, and Cass Sunstein's) have mapped out the problem from different points of view. Previous crisis points, in the late 30s and late 70s, have periodically reminded Americans of the question: what to do about this behemoth born in the 1930s and periodically threatening to devour us with its ravenous demands for money and authority?

But the political context is different now. The imperial presidency is a greatly shrunken institution. Keynesian theories of inadequate demand, the business cycle, and "fine-tuning" have been discredited and replaced by newer versions of classical, neoclassical, and monetarist theories. We live today with a government that is fat but weak, unable to say no, tied down by an army of pressure groups jockeying to grab a piece of government power and impose narrow agendas at the expense of everyone else, proliferating and inconsistent laws, and politically-driven litigation. We lack powerful parties or executive leaders who can decisively steer or shape it. The last president to try was Reagan, with only modest success. Equilibrium, as in Clinton's second term, is only the accidental product of partisan stalemate.

Too much of the politics of the West, especially in Europe, but here as well, takes this behemoth for granted as an eternal presence that has always been with us. But it is not so. The welfare state in Europe dates from the 1880s; in the US, at the federal level, from the 1930s, although its seeds were planted earlier. From the start, observers could see the contradiction between claiming to represent the public good, while in actuality helping self-serving interest groups at the larger public expense. After the totalitarian era passed, the war ended, and the New Deal coalition broke down, the danger of greedy interest groups became all the stronger. Added to this were new, long-term dangers, especially the demographic danger, as the postwar Baby Boom gave way to bust, of not being able to afford the extravagant promises. Something less all-encompassing, yet still noxious, was the fantasy of "fine-tuning" the economy through a mix of taxation, monetary policy, and subsidy; it led to stagflation - and later, in the 1990s and '00s, to a surge of asset bubbles and exploding public sector costs, especially in health care and education.

Reforming the welfare state to the degree that will be necessary in the next 10 to 15 years will require leaders nearly as powerful as those who originally created it. The once-powerful parties and presidency have lost their authority, but the large, intrusive, and expensive government they created is still with us. Every governmental transfer program creates a class of beneficiaries and intermediaries who immediately become vested interest groups. Without strong political parties or presidents to keep them in check, these groups become the real controllers (or at least veto powers) of politics. And these veto powers in turn have made it almost impossible for liberals to later change the programs or conservatives to later dismantle them. Our politics needs serious reform as well, to free our electoral system from its current nightmare of suppressed free speech and media tyranny.

It's all about you and me. The welfare state is sometimes confused with "helping the poor," but at the federal level, this is not its main role. For that, I'll direct you over here instead. Briefly, the negative income tax for the working and able-bodied poor would be better than the current system. While the 1996 welfare reform was a remarkable success, more could be done in that direction. But the federal spending on the poor is a fraction of federal spending on the middle and working classes. That's what "welfare state" means here.

The middle class welfare state consists of four functions. The first two are mainly "entitlements," meaning that citizens can receive their benefits by meeting certain eligibility requirements and nothing else. The original programs were passed earlier, but their present form (with automatic spending and without discretionary choices by Congress) dates to the Nixon years.

Social insurance - that is, Social Security and Medicare. The former will need reform by the end of the next decade to avoid bankruptcy; the latter is in even more dire shape and will need it sooner. The minimal reforms needed are not drastic: they include a mix of changing eligibility requirements and means testing (concentrating full benefits on beneficiaries with lower incomes). To go beyond that is less a necessity and more ideological preference, but larger redesigns are worth discussing. The main favor we can do for future generations is, to the extent possible, make these programs self-financing through forced saving, rather than transferring from present taxpayers to present retirees, which is what they do now. These programs are an incredibly bad deal for younger workers and immigrants.

The subsidy-loan guarantee state, which has caused growing mischief of all sorts and has few justifications in a society as wealthy as ours. It covers everything from pushing home ownership on people who can't afford it to exploding higher education costs to destructive ethanol subsidies. The federal government's role as lender of last resort and backer of otherwise private-sector loans opens it up to dangerous vulnerabilities, as well as encouraging what economists call "moral hazard" - beneficiaries taking excessive risks because they know someone will bail them out.

The regulatory-litigation state, which was originally more modest and with strong justification, for example, in the financial sector.* This federal function has become more and more twisted over the years by judicial passivity in the face of an aggressive trial bar. Tort reform is one answer here, including requiring judges to take a more active role and not defer to the lawyers. The role of Congress and regulatory agencies has been twisted in a different way, by the formation of the "iron triangle" of interest groups, the media, and politicians. Only stronger political parties and presidents not in thrall to the news media can enable positive change here.

The pork barrel state, perhaps the most characteristic feature of the welfare state in its mature phase. This is the system of special favors, earmarks, and patronage pioneered mainly by Democrats, but recently imitated and taken to new levels by Republicans. This development is often misunderstood as a result of private parties (interest groups, corporations, etc.) "buying" politicians. In fact, it's the politicians who typically take the initiative in creating these relationships in the first place. Remember: each such special favor granted to this group or that makes a vested interest out of that group. Subsequent politicians are only occasionally able to buck these groups, once they're created.**

The tragedy of modern America is that the ideas and tools needed for this reform are not missing. Voters are in many ways well ahead of the politicians, their obnoxious handlers and advisors, and the news media complex they've enslaved themselves to (our age's equivalent of court scribblers and flunkies). Voters have seen through - very through - the politicians' empty promises. We lack education and wisdom, even as we drown in a torrent of often irrelevant or twisted "information." Real history and real intellectuals are what's needed to bring out Americans' latent skepticism about government and politicians and turn it into real understanding and real change. Not only do we need to abandon false ideals like equality of condition, but even half-truths like equality of opportunity. While it's an improvement, no modern society can guarantee the latter (much less the former) and remain modern. What is reasonable to expect is freedom of opportunity, and it is here that modern liberalism has left a positive mark, in lifting inherited and often arbitrary prejudices about what people in stigmatized groups are capable of. Traditionally, what such people suffered from was not exploitation, but barriers to full participation in society, and we should be grateful for what liberalism, in its heyday, was able to accomplish here. If the much-abused phrase "social justice" means anything, it means that.

Some final thoughts. The federal budget today is largely entitlements (more than two-thirds), which continue to grow in absolute terms and in proportion to the whole. There's still a lot of confusion about this, as well as mythology about the size of the military budget, which is smaller as a proportion of national income than it has been at any time since 1940.

The main damage done by Bush is this: while early on, it was recession and tax cuts that led to renewed deficits, and the deficit situation improved once the economy started to expand again in 2002, the problem has more recently morphed into a structural spending-driven condition and will become steadily worse in the coming decade. Apart from Reagan and the Fed's singular achievement of taming inflation, the most important achievement of the 20-year period from 1979 to 1999 was what did not happen: no major new domestic spending commitments; a large step up in military spending, followed by an even larger drop after 1986; and very favorable conditions from 1994 to 2000, with a conservative Congress and a president unable and ultimately unwilling to push for more. The real disaster after 2000 was the almost total disappearance of influential conservatives at the national level, and partisan lock-step between Congress and the White House on spending. Even now, not many people have really absorbed the enormity of what went wrong under Bush - Republicans often don't get it, and everyone else is still talking about Bush as "right-wing" or "conservative." This mind-set has to end if we are to see clearly what went wrong and why, where we're headed, and what needs to be done.

Politically, it means that, while few conservatives are available, we will have to make do with liberal Republicans and conservative and moderate Democrats. They're the few at the national level who might see what's gone wrong and galvanize the public's skepticism about government. Obama and Clinton have little credibility here. McCain does have some personal credibility - but his party, no longer.
---
* A benefit of the New Deal was the creation of a truly national banking and financial regulatory system, as Hamilton foresaw would be needed and Jefferson resisted.

** To get a sense of the pretense and folly of "progressive" politics these days, here's an example of what it really means. And let's not forget Massachusetts, which long ago moved from nuts-and-bolts government to bloated "big thinking" (or "big digging").


Wednesday, May 14, 2008

Ahem...

... Mark Steyn on importing poor people. Why can't we import some rich foreigners too? :)

Statistical averages are necessarily simplifications and lump everyone together. It would be interesting to get a more accurate picture by disaggregating recent immigrants and their families from the rest of the population. Has the rest of Canada's population seen a fall in real income in the last 25 years? If not, then the trend in the overall average is due simply to the (passive) inclusion of poorer recent immigrants in the average. By itself, that's not a worrisome trend. But if the non-immigrant population has seen a drop in wage levels, then something additional has happened, perhaps (active) competition driving those levels down. Similar questions can be posed about US income averages and distribution. The effect on the tax base and social welfare spending is then an additional question to look at. That's why we have economists!
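This composition effect is easy to see with a toy calculation (all the numbers below are hypothetical, chosen purely for illustration, not actual Canadian or US data): the overall average income can fall even when every subgroup's average rises, simply because a lower-income group becomes a larger share of the population.

```python
# Toy illustration of the composition effect described above.
# All figures are made up for illustration only.

def overall_avg(groups):
    """Population-weighted mean income; groups is a list of
    (population, mean_income) pairs."""
    total_pop = sum(pop for pop, _ in groups)
    return sum(pop * mean for pop, mean in groups) / total_pop

# 25 years ago: 90% non-immigrant at $50k, 10% recent immigrants at $30k
then = [(90, 50_000), (10, 30_000)]
# Today: BOTH groups earn more, but recent immigrants are a larger share
now = [(75, 52_000), (25, 34_000)]

print(overall_avg(then))  # 48000.0
print(overall_avg(now))   # 47500.0 - overall average fell anyway
```

Each subgroup's mean went up, yet the overall average dropped, which is exactly why the aggregate trend alone can't distinguish "passive" composition change from "active" wage competition.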

POSTSCRIPT: Yeah, I know, this posting is dated before Steyn's. It's Binah's new time machine - look for it on Amazon!


Tuesday, May 13, 2008

The New Deal reconsidered: Whats and whys

The welfare-regulatory state of all Western societies, built in waves since the late 19th century, is headed for unstoppable, wrenching change. But choices lie ahead; the outcome is not predetermined. It behooves every sentient adult citizen of wealthy advanced democracies to understand the coming crisis in at least its basics.

The welfare state was built from a variety of motives, some benevolent, some not. In some cases, it was the conscious goal; in others, the residue of more radical, failed experiments in centralized planning. The practical breakthroughs in the evolution of the West away from classical liberal politics and limited government came as a result of the First World War and the Great Depression. But even before these watershed events, mass political movements and educated prejudice alike were starting to run against free societies, democratic government, and markets - for complex reasons: some political, some based on confused economic ideas, some imperialist, racist, or even esthetic. The common denominator was replacing spontaneous social development with the "engineered" society. Marxists believed in false theories of progressive immiseration and in replacing the supposed "anarchy" of markets with the supposed "rationality" of central planning. "Welfare" liberals and progressives saw a need for a much more powerful regulatory and redistributive state, along with a strong dose of paternalism to guide the masses. The imperial-minded wanted government to redirect resources toward a society more fit militarily and better prepared to sustain itself without trade with other countries. The First World War provided the paradigm of "total" war, with quotas, price fixing, and direct government command of resources. The state took an aggressive role in suppressing social conflicts, in some cases peacefully, in others coercively and violently. Institutions of culture were seen in a new light, as available for and needing "co-ordination" to become aligned with unified social goals put forth by the political class. Contrary to myth, the supposedly "conservative" 1920s saw these measures remain half in place: price boards, trade and immigration restrictions, discriminatory labor laws.*

By the 1930s and the onset of the Depression, the collapse of free societies was evident everywhere. Liberal-capitalist democracy seemed obsolete; collectivism, the "wave of the future." Political intellectuals of many stripes searched for authoritarian alternatives, including fascism (a fact conveniently forgotten later). But after 1945, wartime sacrifice, and the belated discovery of what collectivist societies really looked like, retreat and compromise set in. The new push for the contemporary "entitlement" state came as a result of postwar prosperity, but had much shakier justification as serving the larger public good. Politically-designated grievance and entitlement classes emerged and began to steer the politics of the welfare state. The era of powerful, charismatic leaders able to impose limits on the greed of interest groups ended. The era of lobbyists and entrenched interests began, all seeking a piece of government power for their own use.

The larger price of the welfare state became evident: governments printing or borrowing money to pay for false promises; regulatory agencies, litigants, and activists misusing systems created earlier to serve broad social purposes for their own power-building agendas. By the late 70s, the smell of voter revolt was everywhere in the West. The generation that followed saw a revival of respect for markets, a wave of deregulation, and the re-emergence of the globalizing capitalism that had flourished before 1914, before it was wrecked by the two world wars and the Depression.

But real dangers remain. Totalitarian forms of collectivism have either been defeated or have largely collapsed from their own economic failure. But, while most democratic countries have dismantled most of their classical socialist experiments, the "half-socialized" regulatory and redistributive features remain. Their costs, and their tendency to "privatize the gains and socialize the losses," keep expanding. In the US, the economy as a whole has expanded fast enough to keep the bill from getting completely out of control - at least so far. Other wealthy countries, lacking US-style economic growth and the willingness of foreigners to lend and invest, haven't been so lucky. All of them have been forced to cut the welfare state and reform their redistributive systems, such as socialized medicine and old age pensions.

The rationale then - and now. When the welfare state was created, the world was a different place. National economies were more self-sufficient, and national governments had an easier time controlling them. The birth rate in most wealthy societies was higher than today and life expectancies were shorter, meaning that social security systems could count on an adequate number of new taxpayers and sufficient economic growth to keep going. Many of these welfare state features were poorly designed for the long run. Keeping their most negative tendencies - the emergence of greedy interest groups misusing the power of government at the expense of everyone else - in check required disciplined and powerful political parties and charismatic political figures who, for the most part, no longer exist. Our politics today is dominated not only by powerful narrow interest groups, but by the news media, to which we have conceded much of the role once played by the parties and higher education. The result is not pretty. No one forced it on us; we just let it happen.

Reforming the welfare state will mean starting the debate on the proper functions of government over again from scratch. We must start by recognizing that, whatever the failures of markets and the larger society, "government failure" is just as real.** The debate will have to proceed without the discredited baggage of central planning or phony economic arguments. Every civilized society needs a government of some sort. The question is not just, what do we want it to do, but also, what can government do? It doesn't create wealth; it can only redirect or restrict it. And the welfare state debate cannot be framed in terms of rights, which are limitations on government power, not expansions of it. Concern for the poor, the disabled, and the otherwise helpless; making sure the able-bodied avoid chronic bad decisions affecting their welfare; and regulation of complex economic and technological systems, must be framed in terms of the responsibility of the society at large. In a free society, what is legitimate and not legitimate for political regulation? How much power should government have to redirect the citizenry's lives and decisions?

POSTSCRIPT: Much of this history is retold more completely and authoritatively by Robert Skidelsky's excellent The Road From Serfdom, a basic work for understanding the last century. Jonathan Rauch's Government's End (first edition, Demosclerosis) is an indispensable complement to understanding the late welfare state paralysis of interest groups.
---
* For example, before 1914, only diplomats needed passports. During and after World War One, almost all countries adopted much tighter restrictions on travel and immigration for everyone.

** As the citizens of New Orleans well know. We must also speak of "media failure": grotesque, "narrative"-driven misreporting of even basic facts on a grand scale.


Monday, May 12, 2008

The New Deal reconsidered: The Holocaust crisis

Another important corrective to the hazy nostalgia in which the FDR years were later enveloped is a look at the reaction, or failure of reaction, of the US to the Holocaust. As David Wyman recounts in his essential book on the subject, the nature and scope of the genocide were known in the US by late 1942. For fear of appearing "pro-Jewish," the War and State Departments, respectively, refused and blocked any action to stop it. Until his death, FDR was indifferent to both the genocide itself and the refugees in flight from it. The State Department, under the influence of the British Foreign Office, was also hostile to Zionism and declined to press for Jewish refugees to be allowed into Palestine. The contrast with Churchill is striking. Once he knew of it, he spoke publicly about the genocide and devised schemes for getting weapons to resistance movements in continental Europe. His complaint about Anglo-Jewry was its timidity and lack of organization. In spite of his courageous and public statements and actions in connection with the Holocaust, there were sharp limits on how far he could push the rest of the British government on the issue. But there was no question where he stood.

It wasn't supposed to turn out that way. FDR's presidency, and especially his landslide victory in 1936, cemented the love affair of American Jews with the Democratic party. There have been periods of erosion of that affair (Eisenhower in 1956, Nixon in 1972, and Reagan in 1984 all received close to half of the American Jewish vote), but never a real prospect of dissolution. Although anti-discrimination laws before the late 1950s were more limited in scope, applying only to government, the influence of the New Deal's public hiring practices, and later their application through much of the US economy during the war, essentially started the modern civil rights era. The 1930s was not only the most isolationist decade in US history, it was the most nativist, a period of strong intergroup tensions and bigotry. The Depression itself, of course, was the largest single cause. But the message emanating from Germany also exerted a distinct influence. American Jews looked to FDR as "King of the Jews," the "good czar" who would protect them. American Jewish leaders like Rabbi Stephen Wise and Sam Rosenman acted as American versions of the "court Jews" familiar from Europe.

And it was "court Jew" politics that failed in the war years. This influential establishment of lay and rabbinical leaders, allied with FDR, was determined to maintain the palace-intrigue approach to Jewish issues. Far from being a help, it seriously harmed Jewish self-interest in those years: for all its backroom dealings, it came up empty on antisemitism, Zionism, and rescuing European Jews.

Eventually, a new, more American type of "bottom-up" politics emerged in response to the Holocaust. Its emergence was too late for most of Europe's Jews. But it led to a stunning breakthrough for America's. After the end of the war, it became clear that FDR, for all his greatness as a leader, and his "court Jews" had been the ultimate obstacles to progress on these issues. While he repeatedly used popular antisemitism as an excuse for inaction, the circumstances of the war itself rapidly changed American opinion, and FDR was left behind by change he himself had helped to instigate. Treasury Secretary Morgenthau's plan to rescue Jewish refugees was largely drawn up by non-Jews. Former president Herbert Hoover, who first made his name leading war relief efforts during and after the First World War, offered to head up a refugee commission. It did form, but failed to accomplish much because of State Department and White House resistance. Even the State Department itself, once the war was over, relented enough to negotiate a settlement of refugee property claims with the Swiss government.*

A critical mass of Jewish groups finally gave up on palace intrigue, organizing and protesting publicly in 1943 and 1944, making Zionism and the rescue of Europe's remaining Jews broadly accepted, nonpartisan issues. By the 1944 election, both parties endorsed this platform, and within a few years, rapid political change led to dramatic changes in American acceptance of Jews and the start of the sharp decline in antisemitism that marked the postwar decades. This decisive change occurred in the space of a few years. Contrast this with the 1940 election, when, in spite of the bipartisan support for intervention in the war, America First and important isolationist leaders like Lindbergh made discreet but effective use of social prejudice against Jews to bolster their case. The America we live in now was made in those few short years by people (some of them returning from the war) who abandoned the 1930s politics of fear. Given FDR's opposition to Zionism and his stubborn refusal to do anything about the genocide in Europe, it's almost a miracle.**
---
* Contrary to mythology pushed by the media in the 1990s, Switzerland had instituted the secret, numbered bank account system in the 1930s so that people fleeing Germany could move their assets to a safe place. It was generally less antisemitic than the rest of Europe and, in spite of the fact that much of its population was German-speaking, never fell for Hitler's Aryan vision. But most of the owners of the financial assets moved to Swiss banks perished, and several billions (in present dollars) were left unclaimed at the end of the war.

** Kenneth Levin's The Oslo Syndrome retraces Wyman's history in abbreviated form, then relates it to the return of Jewish self-ghettoization in the 1990s. Except that in a liberal democracy, self-ghettoization means self-defeat. "Court Jew" politics and palace intrigue don't work. While Clinton, unlike FDR, was not personally prejudiced against Jews, the political failure was similar, the Oslo "peace process" being the most damaging result.
