Monday, May 26, 2008

Climate theory, models, and metaphors*

This posting and the next few take up a final technical examination of climate - a look at climate models, as promised earlier. A larger context is helpful too. Much of the lopsided and misguided debate on climate change is couched in terms of metaphors, necessarily fuzzy and usually linked to faulty analogies or models. Models in turn are frequently confused with climate theory - the unique scientific truth about climate, couched as integrodifferential and algebraic equations, but also unsolvable.

The full theory of climate contains:
  • The mechanical dynamics of density, pressure, and wind;
  • The nonequilibrium thermodynamics of heat transport in the forms of radiation, the hydrologic cycle (evaporation, condensation, and precipitation), and convection (often turbulent and thus chaotic);
  • The nonequilibrium thermodynamics of water phase transformations; and
  • In a more complete statement, other forms of chemical transport and transformation and the thermohydrodynamics of the oceans.
To be solved, the theory must be supplemented with initial conditions at some start time and spatial boundary conditions. The dynamical part of the theory alone takes pages of graduate-level mathematics to state. The supplemental conditions require a detailed knowledge of atmosphere and oceans impossible to obtain, making the theory impossible even to state fully in practice.
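To make "pages of graduate-level mathematics" concrete, here is a sketch of just the dry dynamical core, in one conventional and still heavily simplified form (rotating frame, ideal gas, no moisture, radiation and all other heating lumped into a single rate q):

```latex
\begin{aligned}
\frac{D\mathbf{v}}{Dt} &= -\frac{1}{\rho}\nabla p \;-\; g\,\hat{\mathbf{z}} \;-\; 2\,\boldsymbol{\Omega}\times\mathbf{v} \;+\; \mathbf{F}
  && \text{(momentum: pressure, gravity, Coriolis, friction)} \\[4pt]
\frac{\partial \rho}{\partial t} &= -\,\nabla\cdot(\rho\,\mathbf{v})
  && \text{(conservation of mass)} \\[4pt]
c_v\,\frac{DT}{Dt} &= -\,p\,\frac{D}{Dt}\!\left(\frac{1}{\rho}\right) + q
  && \text{(thermodynamic energy, heating rate } q\text{)} \\[4pt]
p &= \rho\,R\,T
  && \text{(ideal gas law)}
\end{aligned}
```

Everything interesting in the list above - radiation, the hydrologic cycle, latent heat, chemistry - hides inside q and F, and this sketch hasn't even touched the oceans.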

Even if it could be fully stated, the dynamics itself cannot be solved. Suitably butchered, with the "hard parts" removed, parts of the theory can be solved, a fact that often misleads students (and not only students) into thinking that the full theory can be. Two properties of the heat and air transport and phase transformations render the problem intractable:
  • Chaos, discussed extensively in a recent series of postings: exponential sensitivity to initial conditions, or, equivalently, essentially nonperiodic behavior.

  • Discontinuity of water phase transformations, taking place in an infinitely complex pattern over the whole atmosphere and the atmosphere-land-ocean boundaries. These transformations affect the state of the matter (air and water mixture), but also affect the heat transport, being critical steps in the hydrologic cycle.
Approximations as rigorous method versus approximations as acts of desperation. When faced with such theories, the reaction of mathematical physicists and other quantitatively-oriented scientists is to substitute approximations of the full theory for the full theory itself, pick such approximations as are solvable, and attempt to justify the approximation.

All approximation methods aim at producing a tractable substitute for an unsolvable problem; their method is ranking different pieces of the problem in some order of "more important" (numerically bigger) and "less important" (numerically smaller). A starting approximation works with the most important pieces first; it can be refined and made more accurate by successively adding back in the less important pieces that were initially neglected. For this approach to lead to reliable results, there has to be a rigorous and controlled method for identifying, isolating, and ranking these pieces of the theory. As Foster Morrison puts it in his perceptive and useful Art of Modeling Dynamic Systems, the degree of precision is the degree of isolation: isolate one cause from another, one effect from another, one mathematical deduction from another.**

In mind-bogglingly complex problems like climate, there is no such method. Theorists make the leap anyway, just so they can get to something tractable. But in so doing, they are making only guesses of what's bigger and what's smaller. In some cases, partial justification can be found by appealing to observed climate behavior, which can (in favorable circumstances) hint that some things are more important than others. In other cases, the guesses are simply leaps in the dark, adopted for convenience, or suggested by historical precedent. And these considerations haven't even gotten us past the chaos problem. Only an infinitely detailed specification of climate at one instant of time, followed by an exact solution of the dynamics, can overcome this difficulty. We lack both, and so chaos limits, for example, weather forecasting to no more than two weeks ahead. Attempts to forecast for longer periods amount to guesses no better than random. We have to fall back on the notion of climate as a rough range, or a chaotic strange attractor. That attractor of behavior is a starting point for thinking about "climate" as something other than just "the infinitely complex instantaneous state of the atmosphere and oceans."
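The exponential sensitivity is easy to see numerically. Here is a minimal sketch (forward Euler integration - crude, but adequate for illustration) of Lorenz's famous three-variable system, run from two initial states that differ by one part in a hundred million:

```python
# Sensitive dependence in the Lorenz system (sigma=10, rho=28, beta=8/3):
# two trajectories starting a hair apart diverge until their separation
# is as large as the attractor itself - yet both stay inside the "box."
# A toy illustration, not a weather model.

def lorenz_step(state, dt=0.005):
    """One forward-Euler step of the Lorenz equations."""
    x, y, z = state
    dx = 10.0 * (y - x)
    dy = x * (28.0 - z) - y
    dz = x * y - (8.0 / 3.0) * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

def separation(a, b):
    """Euclidean distance between two states."""
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

a = (1.0, 1.0, 20.0)
b = (1.0 + 1e-8, 1.0, 20.0)    # differs by one part in 10^8

initial = separation(a, b)
for _ in range(6000):           # 30 time units
    a, b = lorenz_step(a), lorenz_step(b)
final = separation(a, b)
```

The separation grows by many orders of magnitude before saturating at roughly the diameter of the attractor: the two forecasts become as different as two randomly chosen states of the system.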

Theory replaced by models, and models reduced to often misleading metaphors. From a scientific point of view, accurate but unsolvable climate theory is, in practice, always replaced by solvable but uncontrolled climate models - models with limited usefulness, at best. From the point of view of the man in the street and the incessant chatter of environmentalists and the media, climate theory is a nonstarter. As a rule, in that context, we rarely rise even to the level of the simplest models and, if we think about it at all, casually assume that such models are the last word on the subject - instead of a first and very preliminary word. More often, we're stuck swimming in an ocean of manufactured ignorance, pelted by a downpour of misleading metaphors.

A series of postings last year laid out these runaway bad metaphors and the climate model fallacies often implicit in them.

Fallacy #1. Radiative heat transport is the whole game, controlled by the concentration of infrared (IR)-opaque gases, such as water vapor, carbon dioxide (CO2), and methane (CH4).

But convection (including turbulence) and the cycle of evaporation, condensation, and precipitation also play a large role in Earth's climate. Radiation is not the whole game; the heat transport is a complex three-way interplay of the water cycle, convection, and radiation, each acting on its own and reacting to the others. They all affect one another in a nonlinear and nonlocal way (nonlocal because radiation moves almost instantaneously through clear air, in contrast to the much slower movement of air and water). As we'll see in the next few postings, the IPCC's predictions are based on enhanced CO2 concentrations (a small effect by itself), greatly amplified by the feedback of enhanced evaporation and clear-air water vapor. Typical, conventional climate models have a much harder time capturing convection, turbulence, condensation, and precipitation.

Once water evaporation is enhanced, no one knows how the extra water will be divided between clear-air vapor and clouds. And clouds, as we know, have profound effects on climate - on balance cooling (low clouds reflect sunlight back to space), though high thin clouds can warm - and their treatment remains a leading uncertainty in the models.

Fallacy #2. The Earth's climate is a greenhouse. We'll look at this fallacy more closely in the next posting.

Fallacy #3. The obsession with temperature. Temperature, like pressure and humidity, is a local thermodynamic measurement. There is no "temperature of the Earth" - there is a whole temperature field, distributed in space and changing in time. Confusions of this sort are shocking, not when committed by someone with no physics education, but precisely when committed by scientists and scientifically-educated nonscientists. Without the political hysteria, fallacies like this would be correctly viewed as laughable. Furthermore, even locally, temperature is not enough to specify the state of the atmosphere. You also need at least humidity and wind variables.

Fallacy #4. The confusion of temperature and heat. Temperature is not heat. They're measured in different units, and they represent different physical phenomena. Heat is disorganized energy, the disorganization itself measured by entropy. It's a "bulk" or extensive quantity: it can be localized, flow in space, and be summed over volumes. Temperature is a local or intensive quantity. It measures how large an increment of energy in a vanishingly small volume accompanies an increment of disorganization or randomness (entropy) in that same volume. It's local by definition and doesn't flow in space or sum over volumes.

But even though they're not the same, heat and temperature are intimately related. For a homogeneous system that does not suffer any discontinuous phase transformations (like melting or boiling), the heat capacity relates a small increment of heat absorbed by the system to the resulting small increment of its temperature. If different parts of the system have different temperatures (as in the climate), differences in temperature drive flows of heat - the Second Law in action.

A system that does suffer discontinuous phase transformations - ice to liquid water, liquid to water vapor, and back - is altogether more complicated. A certain amount of heat, independent of changes in temperature, is needed to change ice to liquid water or liquid water to vapor. The same amount of heat is released by the opposite transformations. These heats of transformation, or latent heats, break the connection between increments of heat and increments of temperature. These heats, instead of raising temperatures, go into "loosening" the phase of the water - say, breaking up a tightly bound crystal of water molecules (ice) into a smooth fluid of water molecules that touch but slide past one another (liquid water). Our climate is a nonhomogeneous, nonequilibrium collection of flows suffering from just such discontinuous changes in water state.
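To put rough numbers on this (standard handbook values for water): away from a phase change, heat and temperature increments are tied together by the heat capacity; at a phase change, heat flows with no temperature change at all:

```latex
Q_{\text{sensible}} = m\,c\,\Delta T, \qquad Q_{\text{latent}} = m\,L
```

with c ≈ 4.19 kJ/(kg·K) for liquid water, L ≈ 334 kJ/kg for melting ice, and L ≈ 2,260 kJ/kg for vaporization at boiling (closer to 2,450 kJ/kg at ordinary surface temperatures). Evaporating a kilogram of water thus absorbs about as much heat as warming that kilogram of liquid by some 540 degrees would - which is why the hydrologic cycle moves so much heat without showing up directly in thermometer readings.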

Fallacy #5. It's heat that determines temperature. Actually, it should be clear by now, it's heat flow that determines temperature. The Earth's climate, from a thermal point of view, is an open system. Visible and ultraviolet radiation from the Sun flows in and is transformed into heat radiation, then flows back into space. Related fallacies include the "heat trapping" metaphor, as if the heat is locked in a closet and can't get out. IR-opaque gases don't trap heat; they change how it flows out.

Trapped in the greenhouse. The "greenhouse" metaphor (fallacy #2) itself is worth a closer look, not only because it's widely misused, but because a proper understanding of how a greenhouse works leads to a different, unexpected, and more accurate picture of climate and the relationship between controllability and predictability. We'll take a short and final detour through the greenhouse next.

MENTION MUST BE MADE of the passing of Edward Lorenz, the modern (re)discoverer of chaos, so tantalizingly anticipated by Poincaré. Twentieth-century science will be remembered for a handful of discoveries - the genetic code, the expansion of the universe - and for a few theories: relativity, quantum mechanics - and chaos. His original 1963 paper here (PDF).

Read more about Lorenz here, and consider his wonderful 1993 popular lectures, The Essence of Chaos.
---
* This posting to an extent parallels Essex and McKitrick's chapter by the same name. (Their book is now available on the US Amazon.) I also make exceptionally heavy use of postings from last year.

** Morrison's book is a splendid introduction to dynamics for the mathematically-minded non-specialist. He starts without even calculus, managing a kind of "dynamics for the masses" by looking at compound interest, clocks, and thermostats.


Tuesday, March 11, 2008

Strangely attractive

What is that infinitely complex, non-repetitive structure that chaos lays down? Where does all that complexity come from?

Any depiction of chaotic motion has necessarily been generated by observing or calculating a finite elapsed time of motion. So no picture of chaos can ever show its full complexity. An infinite amount of nonrepetitive motion accumulates inside a finite box after an infinite time, and it takes that infinite time to fully exhibit the complexity of the motion. If the motion could be fully executed in some finite time, it would start to repeat over longer times. It wouldn't be chaotic.

A recorded chaotic trajectory of infinite time is called a strange attractor. It's an attractor because the motion doesn't leave the box. It always "sticks around," even as it never repeats. Mathematicians call it strange because of that infinite complexity. Strange attractors are also fractals; that is, objects with infinitely nested self-similarity.

The most famous strange attractor is the one from the first modern investigation of chaos: the Lorenz attractor, named for the man who discovered it.

The fractal concept is more general and has applications in many areas of applied mathematics. Fractals were first discovered in the late 19th century, but not popularized until Mandelbrot brought them to the world's attention starting in the 1960s, showing that such structures are ubiquitous in the natural world.* Here are two.

This is the Sierpinski triangle.


This is a Julia set.



(If this picture reminds you of a spiral galaxy or a starfish, that's not an accident.)



Where does all that infinite complexity come from? Such structures, by not representing something repetitive, seem to have encoded in them an infinite amount of information. How can that happen?

It's our old friends, the irrational numbers, again. A rational number, being a ratio of integers, contains a finite amount of information. If you decimal-expand a fraction, that decimal form will eventually start to repeat, indicating that a rational number has "nothing more interesting to say" after a finite number of digits.


Not so an irrational number. Its decimal expansion never repeats. The square root of 2 is irrational.**

√2 = 1.41421356237309 ....

"Never repeats" - sound familiar? It should. It's the essential characteristic of chaos: bounded nonrepetition. Chaos "processes" the infinite amount of information in the continuum of irrational numbers into infinitely detailed structure, but takes an infinite time to do so.

POSTSCRIPT: Learn more about chaos and the people who discovered it from one of the classics of modern science, James Gleick's Chaos: Making a New Science (1987).
---
* Fractals have even become a basic element of realistic computer graphics today, allowing the creation of much more realistic clouds and landscape, for example, than anything based on those boring old Platonic shapes of spheres, boxes, and so on. All thanks to Benoît.

Chaos was also first discovered in the late 19th century, by the French mathematician Poincaré. Based on his study of irregular planetary orbits, he was able to imagine the infinitely filigreed complexity of the strange attractor. But the terminology and true import of chaos had to await the 1960s and the advent of the electronic computer. Then mathematicians and physicists could really investigate the complex subtleties of chaos and let computers handle the drudgery of the necessary arithmetic.

In the 1950s, the Italian-American physicist Fermi and his collaborators ran a famous computer experiment (now called the Fermi-Pasta-Ulam problem) that turned out to be an early forerunner of chaos. His collaborator, the mathematician Ulam, later quipped that singling out "nonlinear" systems was like calling the bulk of zoology the study of "non-elephant animals": the linear systems that so fill up science and engineering education are the elephants, while everything else - that is, most animals - goes largely unstudied.

** The square root of two is the first number proven to be irrational - that is, not representable as the ratio of two integers. For several proofs, see here.


Friday, March 07, 2008

What is chaos?

Through all the tones in Earth’s multi-colored dream,
Sounds one faint note drawn out for the one who listens in secret.
- Friedrich Schlegel

Chaos is bounded aperiodicity.

Not everything in the world is periodic, as we know from experience. There are events, not cycles - happenings that never repeat. There are aspects of things that do repeat, but shot through with a stream of the unique, the singular, the non-repetitive. We might even speak of a complementarity between an event and a cycle, the former unique in the domain of time and the latter unique in the domain of frequency.*

If all that a powerful and ubiquitous scientific method like Fourier analysis amounted to was telling us that everything is, in fact, multiperiodic (repeating with many different frequencies), we might demur: after all, the periods of the lowest frequencies are longer than a human lifespan or even human history. Maybe it all does repeat, and we just need to wait a very long time.

But no: there are things that "repeat" once in an infinity of time - that is, they happen only once. They are fallout from what mathematicians and physicists call chaos. One way to define chaos is: bounded aperiodic motion. The first modern scientific work on the subject was meteorologist Edward Lorenz's 1963 classic paper "Deterministic Nonperiodic Flow." Let's unpack that title and the definition: the whole chaos business is there by implication.
  • Flow: It's a flow, a motion, a change. Lorenz used "flow" to refer to many variables - many functions of time - all changing simultaneously. We could plot these functions in a space of many variables and watch a flow unfold in that space.

  • Deterministic: Really understanding this is a point we'll come back to. But it's enough to say for now that this means that the chaos is causal - it's not random or acausal.

  • Nonperiodic (or aperiodic): A component of the motion that does not repeat has no frequency. (It's not frequent.) In Fourier analysis, its share of the motion shows up in the zero-frequency bin. If Fourier analysis were always valid, that component would represent a constant - a steady state independent of time.

    But this is exactly the situation where Fourier analysis breaks down and its preconditions are violated. Fourier and related techniques are not the right framework for analyzing chaos, a fact known for more than 40 years. Yet techniques valid in other areas of science and engineering, known to break down when applied to chaotic motion, are often lazily applied to chaos anyway, producing meaningless or seriously compromised results. This is a major problem in mathematical modeling of chaotic systems, such as the weather and financial markets. The zero-frequency bin of a Fourier spectrum, in practice, contains the share of the motion that is constant - but it contains the unique events of chaos as well.
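The contrast is easy to demonstrate. A small sketch, using the logistic map x → 4x(1−x) as a convenient chaotic stand-in (it is not a weather model) and a naive discrete Fourier transform: the periodic signal concentrates essentially all its power in one sharp bin, while the chaotic signal smears its power across the whole spectrum - no finite set of frequencies captures it.

```python
# Power spectra of a periodic vs. a chaotic signal.
# Periodic: a pure sine (8 cycles). Chaotic: the logistic map at r = 4.
import cmath
import math

def power_spectrum(samples):
    """Naive DFT power spectrum |X_k|^2 for k = 0 .. n/2-1, mean removed."""
    n = len(samples)
    mean = sum(samples) / n
    x = [s - mean for s in samples]
    spec = []
    for k in range(n // 2):
        xk = sum(x[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
        spec.append(abs(xk) ** 2)
    return spec

N = 256
periodic = [math.sin(2 * math.pi * 8 * j / N) for j in range(N)]  # 8 full cycles

state, chaotic = 0.4, []
for _ in range(N):
    state = 4.0 * state * (1.0 - state)   # logistic map, fully chaotic regime
    chaotic.append(state)

p_spec = power_spectrum(periodic)
c_spec = power_spectrum(chaotic)
p_peak_share = max(p_spec) / sum(p_spec)  # fraction of power in the biggest bin
c_peak_share = max(c_spec) / sum(c_spec)
```

For the sine, the biggest bin holds essentially all the power; for the chaotic sequence, no bin dominates - the power is broadband, exactly the situation where a frequency decomposition stops being informative.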

  • Bounded: Finally, chaotic motion is bounded. I can always construct nonperiodic motion if I allow, say, two functions of time to diverge from one another to an arbitrary degree. As time goes to infinity, the motions diverge to an infinite degree.

    Chaos is aperiodic motion that never repeats, but also remains bounded. Even after an infinite time, the motion never leaves a "box." (We'll get more precise about this "box" later on.) This fact violates a basic intuition embedded in mathematical and scientific thought since ancient times. That wrong intuition says: motion bounded in space should ultimately repeat exactly in time. It's not true, but it is often a major stumbling block in understanding chaos. It means, among other things, that after an infinity of time has passed, the motion in the box contains an infinitely detailed nonrepetitive structure.
BTW, ideas about chaos are sometimes incorrectly referred to as "chaos theory." Chaos is just one part of the mathematical study of dynamical systems. It's not a separate theory in its own right.
---
* Being related to one another by the Fourier transform, the time domain and the frequency domain are indeed complementary to one another in exactly the same way anyone who knows something about quantum mechanics should recognize. There it's the complementarity of time and energy.


Monday, October 15, 2007

Crochet the Lorenz attractor

Once an obscure diagram in dynamical systems, then the world-famous butterfly-resembling, butterfly-effect-causing Lorenz attractor - now you can crochet it. (Here's the original paper in PDF.)

About Lorenz and his attractor: more to come. Chaos: it's not just for physics nerds any more.

POSTSCRIPT: It's been pointed out to me that the crochet pattern is actually the Lorenz stable manifold, a two-dimensional surface associated with the flow, not the attractor itself - the attractor can't be crocheted. Though not the attractor, the stable manifold is nonetheless invariant under the flow: trajectories don't cross it, and it wraps around the full attractor in an invariant way.

Lorenz attractor


Wednesday, February 14, 2007

Climate change: A road map

I want to follow up a previous posting with a road map for understanding climate change, something that can guide future postings and discussions. The case and supposed remedy for human-caused global warming have three components:

1. The temperatures: they seem to be going up.
2. The models: no one really knows why, but inferences from climate models seem to indicate that human activity is the cause.
3. The economics and politics: it's worse than anyone realizes: a bizarre combination of hypocrisy, ignorance, fear - and fear-mongering by the omnipresent news media and increasingly hysterical environmentalist movement, reacting off of one another in a death-dance.

To fully understand climate change and climate prediction requires touching on a lot of issues and drilling into some climate science and deep scientific issues. As we drill in, we'll discover that the supposedly solid case falls apart, piece by piece. We'll also encounter some far more likely explanations for climate change. My discussion here partially follows one of the definitive books on the subject, Essex and McKitrick's Taken by Storm: The Troubled Science, Policy and Politics of Global Warming.

First, consider the temperatures. Ignore the bogus "hockey stick" graph that purported to show a steady surface temperature from 1000 until 1980 - a claim now thoroughly discredited. The case for global warming really rests on the last 150 years of temperature data from thermometers, largely from the northern hemisphere. On the surface, the case seems solid - the Earth's atmosphere probably has warmed a bit since the 1850s. But even here, there are some difficult questions.

What's an average of temperature? This statistical artifact is rarely examined in its own right, and yet it is problematic. An average, even a weighted average, of temperature has no physical significance. There is no one temperature for the Earth - it's a non-equilibrium system with an infinite field of temperatures, in three dimensions of space and one of time.*

And that's not even broaching the issue of measurement artifacts. The most important is the urban heat island effect, which is known to raise the measured temperature of a downtown or an airport, compared to the surrounding countryside, by up to about a degree C. No one knows how to correct for this effect except in an approximate way, even though temperature measurement networks have, over the last century, become increasingly concentrated in urban areas.

What could the physical meaning of the average temperature be? When people talk about "global warming," what they really mean is "rising heat content" - so many more Joules or calories stored as heat in the atmosphere. The temperature (in absolute Kelvins) is supposed to be a proxy for this heat content, and that supposedly justifies the averaging: heat content of dry air is proportional to the mass of air times its absolute temperature.

The fatal difficulty for this simple equation is that the air is not dry; it contains a significant and variable amount of water vapor that can retain heat on its own. (Everyone knows the difference between muggy and dry heat.) Wet air's heat content is not a simple linear function of absolute temperature. Not only is the function non-linear; it suffers discontinuities when water evaporates, condenses, and precipitates. (Such discontinuous changes of the state of matter are called phase changes or phase transitions.) Evaporation makes the air capable of holding a lot more heat; condensation and precipitation take it out. Water vapor is by far the most important heat-trapping gas in the atmosphere, much more important than the next-leading contenders.
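A standard way meteorologists quantify this (in a simplified form that ignores the small heat capacity of the vapor itself): the heat content, or enthalpy, per kilogram of moist air is approximately

```latex
h \;\approx\; c_p\,T \;+\; L\,q
```

with c_p ≈ 1.0 kJ/(kg·K) for dry air, L ≈ 2,500 kJ/kg the latent heat of vaporization, and q the specific humidity (kilograms of vapor per kilogram of air). Each gram of vapor per kilogram of air thus carries about as much heat as 2.5 degrees of warming - and q jumps discontinuously when water condenses and precipitates, which is exactly what a linear temperature-only proxy cannot capture.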

When the average temperature indexes widely trumpeted to indicate "global warming" are broken down into their basic component measurement series, more questions arise. Not only is there the urban heat island effect. The variability of the composite temperature index is dominated by large variations in the least reliable and most spotty measurements, ones taken in Eurasia and the southern hemisphere. The most reliable and extensive measurements - from Western and Central Europe, North America, and parts of East Asia - show less variation and, in fact, only slight warming. Global warming should be indicated by temperatures rising in most places, yet there is no consistent pattern. The Arctic is warming, for example, while the Antarctic is cooling.

Even so, the claimed temperature changes of the last 150 years, about 0.4 to 0.6 °C, are far from unprecedented in the history of the Earth's climate. The last 12,000 years, since the end of the last Ice Age, have featured temperature changes of at least three to five times that amount. The Ice Age transition itself featured even larger changes, eight to 15 times as great.

Technically, we're still in an Ice Age, or one of its relatively ice-free periods called an interglacial. Until about three million years ago, the Earth was hotter, wetter, and ice-free. Since then, it's been cooling down and drying out. This development seems to have had an important impact on our primate ancestors in East Africa and probably was a controlling factor in the appearance of one of their offshoots, the genus Australopithecus - the immediate ancestor of genus Homo.

Second, consider the climate models. There are many - in principle, there could be an infinite number of them. What's really going on here is The One, Complete, But Unsolvable Theory of Earth's Climate - let's call it The Theory - that contains everything: the properties of the air, the oceans, the land, solar radiation, clouds, dust and other particles, etc. The climate models used by scientific groups to estimate the evolution of climate, like the models used to predict the weather, are impressive computer approximations to The Theory. (Climate models predict over months and years, while weather models predict over days and weeks.)

The critical fact about these models is that they are uncontrolled approximations and thus suffer from potentially fatal drawbacks, depending on how they're interpreted and used. In mathematical physics, controlled approximations are used all the time to estimate the behavior of systems too difficult to solve in their full complexity. What makes them controlled is that it's possible to determine upper bounds on the error made in the approximation. Moreover, many of these techniques offer iterative methods to make the approximations successively more accurate. You can then make the approximation as accurate as you want, limited only by your time, patience, and computer power.

In an uncontrolled approximation (which is what all climate models are), there is no way to tell if you're converging on the right answer. In general, if you attempt to make an uncontrolled approximation more accurate, you will converge to nothing - or anything. The methods are instead subject to modeler bias: modelers tend to iterate the uncontrolled method until they get the answer they want or expect. If the answer is already known from controlled laboratory experiments, this might be useful - although even here, it just reproduces what you already know, and by a questionable method. If the "answer" is laid down by non-scientific preconceptions, what you've got is a serious case of scientific corruption or self-deception. Overall, there has not been nearly enough systematic exploration of how modeling approximations affect the climate model results - learning what's robust and what are merely artifacts of the methods.
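For contrast, here is what a controlled approximation looks like in miniature: a Taylor partial sum for e^x together with a provable upper bound on its error (the Lagrange remainder). You can demand any accuracy and add terms until the bound - not a guess - guarantees it. Uncontrolled climate-model approximations offer no such bound.

```python
# A controlled approximation: partial sums of the Taylor series for e^x,
# with a rigorous error bound. For x >= 0 the Lagrange remainder satisfies
#   |e^x - S_n(x)| <= e^x * x^n / n!  <=  3^x * x^n / n!   (since e < 3),
# so the bound is computable without knowing the true answer.
import math

def exp_with_error_bound(x, n_terms):
    """Partial sum of e^x through x^(n_terms-1)/(n_terms-1)!, plus a
    provable upper bound on the truncation error (valid for x >= 0)."""
    partial = sum(x ** k / math.factorial(k) for k in range(n_terms))
    bound = 3.0 ** x * x ** n_terms / math.factorial(n_terms)
    return partial, bound

approx, bound = exp_with_error_bound(1.0, 10)
true_error = abs(math.exp(1.0) - approx)   # always within the bound
```

Adding more terms shrinks the bound as fast as you like; that guaranteed, monotone improvement is precisely what uncontrolled approximations lack.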

Lurking in the background is a deeper and more serious problem for long-term climate prediction: the Earth's climate is chaotic - in the "butterfly effect" sense of dynamical chaos. We'll learn in detail what that means in later postings. Suffice it to say that chaos makes long-term climate prediction impossible with present methods. New methods might be invented to circumvent this difficulty, but it's not a direction climate research has taken, because it has been subjected to such serious conceptual distortion by the "global warming" hysteria.

Chaos arises from a certain kind of non-linearity in the world, where effects are often not proportional to causes. The specific physics that makes climate hard to model comes in two basic types. One is that the atmosphere exhibits turbulence, the form of chaos familiar in fluid dynamics. (What we call "weather" is this turbulence in the lower atmosphere.) The other is the already-mentioned discontinuities in the phase of water and their effect on atmospheric heat content.

You only need to compare the situation with climate forecasting to that of weather forecasting. Back in the 1950s, forecasters naively thought all you needed was just more computing power to make accurate forecasts into the far future. From the 1960s on, thanks to Edward Lorenz and others, the phenomenon of chaos came to be understood as an essential property of weather. The best forecasts today use the available firehose of satellite and other data; even so, they're only good out to about two weeks. "Climate" is supposed to be a "long term" or "average steady state" of weather. What chaos is telling us is that there ain't no such thing. Most discussions of climate modeling are, rather amazingly, stuck in a pre-chaos naiveté.

Last, consider the economics and politics of climate change. Given the level of industrial civilization and the rise of the new economic powers of China, India, and Brazil in the next century, large reductions in CO2 emissions in the next century are impossible. Those countries didn't sign the 1997 Kyoto Treaty in any case. The accord is a case study in hypocrisy. All European countries agreed to it, yet have failed to implement the required CO2 emission reductions. The US initialed it, but the treaty was then rejected in the Congress in a nearly unanimous vote. European governments continue to talk about the treaty in grave tones, while their economies fall further and further behind their scheduled emission reduction targets. In the US, the growth of CO2 emission has slowed, but don't expect the US to get any credit. The Kyoto Treaty is a perfect object of media chatter: all talk, all pretense, no reality. The fact that there's no reality makes it especially gripping television.

The smash-up of the manufactured "global warming" "consensus" is coming. Expect everyone responsible - the professional hysterics, the news media, the self-deluded scientists - to duck responsibility for it. The smash-up will constitute another body blow to the toxic news media that have done so much damage to our society, our politics, and, now, our science. I hope that everyone - voters and politicians alike - will take a smart pill and view what's in their daily "news" with ever-greater skepticism. Even now, much of it is propaganda and lazily rewritten press releases. The opposition to the hysteria was, until recently, practically non-existent, hobbled by a largely hostile media and politicians devoted to "doing something" about an imaginary threat. There's more public opposition now, but it is all uphill.

The imaginary "global warming" crisis has also diverted attention from the reality of the environment in advanced societies: it's getting better every year, and most of the progress from modern regulation happened early on, in the 70s and 80s. There are still some areas for improvement (like the regulation of "light" trucks and SUVs), but the battle for a better environment here and in other wealthy countries is largely won. The arguments today, when not over marginal issues, are increasingly unreal hysteria having little to do with reality and everything to do with fundraising and a quasi-religious demonization of rational and scientific thought. The religious atmosphere of the environmentalist movement - its preaching of Original Sin, its search for authorities and scapegoats, and its ominous need for sacrifices to placate the supposedly angry gods - deserves its own discussion I'll leave to capable others.

The important thing about pollution and conservation is that the frontier now lies in the so-called Third World, both in countries on the ascent (China, India, Brazil) and in much poorer countries. They all need, not a shutdown of development, but a leapfrog that gets them into better, less polluting technology sooner, so they can avoid the mistakes that Europe and the United States made in their histories. (Visitors to Shanghai and São Paulo will understand.) "Global warming" is a wasteful diversion from this far more important task.
---
* A recent pathbreaking paper explains why temperature averages are physically meaningless - the paper is summarized here.
