A Plan to Keep Carbon in Check
Retreating glaciers, stronger hurricanes, hotter summers, thinner polar bears: the ominous harbingers of global warming are driving companies and governments to work toward an unprecedented change in the historical pattern of fossil-fuel use. Faster and faster, year after year for two centuries, human beings have been transferring carbon to the atmosphere from below the surface of the earth. Today the world's coal, oil and natural gas industries dig up and pump out about seven billion tons of carbon a year, and society burns nearly all of it, releasing carbon dioxide (CO2). Ever more people are convinced that prudence dictates a reversal of the present course of rising CO2 emissions.
The boundary separating the truly dangerous consequences of emissions from the merely unwise is probably located near (but below) a doubling of the concentration of CO2 that was in the atmosphere in the 18th century, before the Industrial
Revolution began. Every increase in concentration carries new risks, but avoiding that danger zone would reduce the likelihood of triggering major, irreversible climate changes, such as the disappearance of the Greenland ice cap. Two years ago the two of us provided a simple framework to relate future CO2 emissions to this goal.
We contrasted two 50-year futures. In one future, the emissions rate continues to grow at the pace of the past 30 years for the next 50 years, reaching 14 billion tons of carbon a year in 2056. (Higher or lower rates are, of course, plausible.) At that point, a tripling of pre-industrial carbon concentrations would be very difficult to avoid, even with concerted efforts to decarbonize the world's energy systems over the following 100 years. In the other future, emissions are frozen at the present
value of seven billion tons a year for the next 50 years and then reduced by about half over the following 50 years. In this way, a doubling of CO2 levels can be avoided. The difference between these 50-year emission paths—one ramping up and one flattening out—we called the stabilization triangle.
To hold global emissions constant while the world's economy continues to grow is a daunting task. Over the past 30 years, as the gross world product of goods and services grew at close to 3 percent a year on average, carbon emissions rose half as fast. Thus, the ratio of emissions to dollars of gross world product, known as the carbon intensity of the global economy, fell about 1.5 percent a year. For global emissions to be the same in 2056 as today, the carbon intensity will need to fall not half as fast but fully as fast as the global economy grows.
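The growth-rate arithmetic in the paragraph above can be checked with a few lines. This is a back-of-envelope sketch using the article's round figures (3 percent economic growth, 1.5 percent annual decline in carbon intensity), not fresh data:

```python
# Emissions are the product of economic output and carbon intensity:
# emissions = gross world product x (emissions per dollar of product),
# so the growth rates (nearly) add.

gwp_growth = 0.03          # gross world product grew ~3% a year
intensity_change = -0.015  # carbon intensity fell ~1.5% a year

emissions_growth = (1 + gwp_growth) * (1 + intensity_change) - 1
print(f"historical emissions growth: {emissions_growth:.2%} a year")

# To freeze emissions while the economy keeps growing at 3% a year,
# carbon intensity must fall as fast as the economy grows:
required_intensity_change = 1 / (1 + gwp_growth) - 1
print(f"required intensity change: {required_intensity_change:.2%} a year")
```

The first figure comes out near 1.5 percent a year, matching the observed history; the second is roughly minus 3 percent, which is the "fully as fast as the global economy grows" condition.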
Two long-term trends are certain to continue and will help. First, as societies get richer, the services sector—education, health, leisure, banking and so on—grows in importance relative to energy-intensive activities, such as steel production. All by itself, this shift lowers the carbon intensity of an economy.
Second, deeply ingrained in the patterns of technology evolution is the substitution of cleverness for energy. Hundreds of power plants are not needed today because the world has invested in much more efficient refrigerators, air conditioners and motors than were available two decades ago. Hundreds of oil and gas fields have been developed more slowly because aircraft engines consume less fuel and the windows in gas-heated homes leak less heat.
The task of holding global emissions constant would be out of reach, were it not for the fact that all the driving and flying in 2056 will be in vehicles not yet designed, most of the buildings that will be around then are not yet built, the locations of many of the communities that will contain these buildings and determine their inhabitants' commuting patterns have not yet been chosen, and utility owners are only now beginning to plan for the power plants that will be needed to light up those communities. Today's notoriously inefficient energy system can be replaced if the world gives unprecedented attention to energy efficiency. Dramatic changes are plausible over the next 50 years because so much of the energy canvas is still blank.
To make the task of reducing emissions vivid, we sliced the stabilization triangle into seven equal pieces, or "wedges," each representing one billion tons a year of averted emissions 50 years from now (starting from zero today). For example, a car driven 10,000 miles a year with a fuel efficiency of 30 miles per gallon (mpg) emits close to one ton of carbon annually. Transport experts predict that two billion cars will be zipping along the world's roads in 2056, each driven an average of 10,000 miles a year. If their average fuel efficiency were 30 mpg, their tailpipes would spew two billion tons of carbon that year. At 60 mpg, they would give off a billion tons. The latter scenario would therefore yield one wedge.
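The car example above can be reproduced directly. The carbon-per-gallon figure below is not stated in the article; it is the value (about 3 kilograms of carbon per gallon of gasoline) implied by the article's rounding of one ton of carbon per 30 mpg car-year:

```python
# Rough wedge arithmetic for the global car fleet in 2056.
CARBON_PER_GALLON = 3.0e-3  # tons of carbon per gallon (implied by the article's rounding)
MILES_PER_CAR = 10_000      # miles driven per car per year
CARS = 2e9                  # projected world fleet in 2056

def fleet_emissions(mpg):
    """Tons of carbon emitted per year by the whole fleet at a given fuel economy."""
    gallons_per_car = MILES_PER_CAR / mpg
    return CARS * gallons_per_car * CARBON_PER_GALLON

print(f"at 30 mpg: {fleet_emissions(30) / 1e9:.1f} billion tons C/yr")  # 2.0
print(f"at 60 mpg: {fleet_emissions(60) / 1e9:.1f} billion tons C/yr")  # 1.0
wedge = fleet_emissions(30) - fleet_emissions(60)
print(f"difference: {wedge / 1e9:.1f} billion tons C/yr -- one wedge")  # 1.0
```

Doubling fuel economy halves the fleet's emissions, and the one-billion-ton difference is exactly one wedge.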
Wedges
In our framework, you are allowed to count as wedges only those differences in two 2056 worlds that result from deliberate carbon policy. The current pace of emissions growth already includes some steady reduction in carbon intensity. The goal is to reduce it even more. For instance, those who believe that cars will average 60 mpg in 2056 even in a world that pays no attention to carbon cannot count this improvement as a wedge, because it is already implicit in the baseline projection.
Moreover, you are allowed to count only strategies that involve the scaling up of technologies already commercialized
somewhere in the world. You are not allowed to count pie in the sky. Our goal in developing the wedge framework was to be pragmatic and realistic—to propose engineering our way out of the problem and not waiting for the cavalry to come over the hill. We argued that even with these two counting rules, the world can fill all seven wedges, and in several different ways. Individual countries—operating within a framework of international cooperation—will decide which wedges to pursue, depending on their institutional and economic capacities, natural resource endowments and political predilections.
To be sure, achieving nearly every one of the wedges requires new science and engineering to squeeze down costs and address the problems that inevitably accompany widespread deployment of new technologies. But holding CO2 emissions in 2056 to their present rate, without choking off economic growth, is a desirable outcome within our grasp.
Ending the era of conventional coal-fired power plants is at the very top of the decarbonization agenda. Coal has become more competitive as a source of power and fuel because of energy security concerns and because of an increase in the cost of oil and gas. That is a problem because a coal power plant burns twice as much carbon per unit of electricity as a natural gas plant. In the absence of a concern about carbon, the world's coal utilities could build a few thousand large (1,000-megawatt) conventional coal plants in the next 50 years. Seven hundred such plants emit one wedge's worth of carbon. Therefore, the world could take some big steps toward the target of freezing emissions by not building those plants. The time to start is now. Facilities built in this decade could easily be around in 2056.
Efficiency in electricity use is the most obvious substitute for coal. Of the 14 billion tons of carbon emissions projected
for 2056, perhaps six billion will come from producing power, mostly from coal. Residential and commercial buildings account for 60 percent of global electricity demand today (70 percent in the U.S.) and will consume most of the new power. So cutting buildings' electricity use in half—by equipping them with superefficient lighting and appliances—could lead to two wedges. Another wedge would be achieved if industry finds additional ways to use electricity more efficiently.
Decarbonizing the Supply
Even after energy-efficient technology has penetrated deeply, the world will still need power plants. They can be coal plants but they will need to be carbon-smart ones that capture the CO2 and pump it into the ground (see "Can We Bury Global Warming?" by Robert H. Socolow; SCIENTIFIC AMERICAN, July 2005). Today's high oil prices are lowering the cost of the transition to this technology, because captured CO2 can often be sold to an oil company that injects it into oil fields to squeeze out more oil; thus, the higher the price of oil, the more valuable the captured CO2. To achieve one wedge, utilities need to equip 800 large coal plants to capture and store nearly all the CO2 otherwise emitted. Even in a carbon-constrained world, coal mining and coal power can stay in business, thanks to carbon capture and storage.
The large natural gas power plants operating in 2056 could capture and store their CO2, too, perhaps accounting for yet another wedge. Renewable and nuclear energy can contribute as well. Renewable power can be produced from sunlight directly, either to energize photovoltaic cells or, using focusing mirrors, to heat a fluid and drive a turbine. Or the route can be indirect, harnessing hydropower and wind power, both of which rely on sun-driven weather patterns. The intermittency of renewable power does not diminish its capacity to contribute wedges; even if coal and natural gas plants provide the backup power, they run only part-time (in tandem with energy storage) and use less carbon than if they ran all year. Not strictly renewable, but also usually included in the family, is geothermal energy, obtained by mining the heat in the earth's interior. Any of these sources, scaled up from its current contribution, could produce a wedge. One must be careful not to double-count the possibilities; the same coal plant can be left un-built only once.
Nuclear power is probably the most controversial of all the wedge strategies. If the fleet of nuclear power plants were
to expand by a factor of five by 2056, displacing conventional coal plants, it would provide two wedges. If the current fleet were to be shut down and replaced with modern coal plants without carbon capture and storage, the result would be minus one-half wedge. Whether nuclear power will be scaled up or down will depend on whether governments can find political solutions to waste disposal and on whether plants can run without accidents. (Nuclear plants are mutual hostages: the world's least well-run plant can imperil the future of all the others.) Also critical will be strict rules that prevent civilian nuclear technology from becoming a stimulus for nuclear weapons development. These rules will have to be uniform across all countries, so as to remove the sense of a double standard that has long been a spur to clandestine facilities.
Oil accounted for 43 percent of global carbon emissions from fossil fuels in 2002, while coal accounted for 37 percent; natural gas made up the remainder. More than half the oil was used for transport. So smartening up electricity production alone cannot fill the stabilization triangle; transportation, too, must be decarbonized. As with coal-fired electricity, at least a wedge may be available from each of three complementary options: reduced use, improved efficiency and decarbonized energy sources. People can take fewer unwanted trips (telecommuting instead of vehicle commuting) and pursue the travel they cherish (adventure, family visits) in fuel-efficient vehicles running on low-carbon fuel. The fuel can be a product of crop residues or dedicated crops, hydrogen made from low-carbon electricity, or low-carbon electricity itself, charging an onboard battery. Sources of the low-carbon electricity could include wind, nuclear power, or coal with capture and storage.
Looming over this task is the prospect that, in the interest of energy security, the transport system could become more carbon-intensive. That will happen if transport fuels are derived from coal instead of petroleum. Coal-based synthetic fuels, known as synfuels, provide a way to reduce global demand for oil, lowering its cost and decreasing global dependence on Middle East petroleum. But it is a decidedly climate-unfriendly strategy. A synfuel-powered car emits the same amount of CO2 as a gasoline-powered car, but synfuel fabrication from coal spews out far more carbon than does refining gasoline from
crude oil—enough to double the emissions per mile of driving. From the perspective of mitigating climate change, it is fortunate that the emissions at a synfuels plant can be captured and stored. If business-as-usual trends did lead to the widespread adoption of synfuel, then capturing CO2 at synfuels plants might well produce a wedge.
Not all wedges involve new energy technology. If all the farmers in the world practiced no-till agriculture rather than conventional plowing, they would contribute a wedge. Eliminating deforestation would result in two wedges, if the alternative were for deforestation to continue at current rates. Curtailing emissions of methane, which today contribute about half as much to greenhouse warming as CO2, may provide more than one wedge: needed is a deeper understanding of the anaerobic biological emissions from cattle, rice paddies and irrigated land. Lower birth rates can produce a wedge, too—for example, if they hold the global population in 2056 near eight billion people when it otherwise would have grown to nine billion.
Action Plan
What set of policies will yield seven wedges? To be sure, the dramatic changes we anticipate in the fossil-fuel system, including routine use of CO2 capture and storage, will require institutions that reliably communicate a price for present and future carbon emissions. We estimate that the price needed to jump-start this transition is in the ballpark of $100 to $200 per ton of carbon—the range that would make it cheaper for owners of coal plants to capture and store CO2 rather than vent it. The price might fall as technologies climb the learning curve. A carbon emissions price of $100 per ton is comparable to the current U.S. production credit for new renewable and nuclear energy relative to coal, and it is about half the current U.S. subsidy of ethanol relative to gasoline. It also was the price of CO2 emissions in the European Union's emissions trading system for nearly a year, spanning 2005 and 2006. (One ton of carbon is carried in 3.7 tons of carbon dioxide, so this price is also $27 per ton of CO2.) Based on carbon content, $100 per ton of carbon is $12 per barrel of oil and $60 per ton of coal. It is 25 cents per gallon of gasoline and two cents per kilowatt-hour of electricity from coal.
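The unit conversions in that paragraph follow from the carbon content of each fuel. The per-fuel carbon contents below are standard round figures chosen to reproduce the article's results, not values the article itself states:

```python
# Converting a $100-per-ton-of-carbon price into familiar units.
PRICE = 100.0            # dollars per ton of carbon
CO2_PER_C = 44.0 / 12.0  # mass ratio of CO2 to carbon: ~3.67

# Approximate carbon contents (assumed round figures):
C_PER_BARREL_OIL = 0.12    # tons of carbon per barrel of oil
C_PER_TON_COAL = 0.6       # tons of carbon per ton of coal
C_PER_GALLON_GAS = 0.0025  # tons of carbon per gallon of gasoline
C_PER_KWH_COAL = 0.0002    # tons of carbon per kWh of coal power

print(f"${PRICE / CO2_PER_C:.0f} per ton of CO2")                       # $27
print(f"${PRICE * C_PER_BARREL_OIL:.0f} per barrel of oil")             # $12
print(f"${PRICE * C_PER_TON_COAL:.0f} per ton of coal")                 # $60
print(f"{PRICE * C_PER_GALLON_GAS * 100:.0f} cents per gallon of gas")  # 25
print(f"{PRICE * C_PER_KWH_COAL * 100:.0f} cents per kWh of coal power")  # 2
```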
But a price on CO2 emissions, on its own, may not be enough. Governments may need to stimulate the commercialization of low-carbon technologies to increase the number of competitive options available in the future. Examples include wind, photovoltaic power and hybrid cars. Also appropriate are policies designed to prevent the construction of long-lived capital facilities that are mismatched to future policy. Utilities, for instance, need to be encouraged to invest in CO2 capture and storage for new coal power plants, which would be very costly to retrofit later. Still another set of policies can harness the capacity of energy producers to promote efficiency—motivating power utilities to care about the installation and maintenance of efficient appliances, natural gas companies to care about the buildings where their gas is burned, and oil companies to care about the engines that run on their fuel.
To freeze emissions at the current level, if one category of emissions goes up, another must come down. If emissions from natural gas increase, the combined emissions from oil and coal must decrease. If emissions from air travel climb, those from some other economic sector must fall. And if today's poor countries are to emit more, today's richer countries must emit less.
How much less? It is easy to bracket the answer. Currently the industrial nations—the members of the Organization for Economic Cooperation and Development (OECD)—account for almost exactly half the planet's CO2 emissions, and the developing countries plus the nations formerly part of the Soviet Union account for the other half. In a world of constant total carbon emissions, keeping the OECD's share at 50 percent seems impossible to justify in the face of the enormous pent-up demand for energy in the non-OECD countries, where more than 80 percent of the world's people live. On the other hand, the OECD member states must emit some carbon in 2056. Simple arithmetic indicates that to hold global emissions rates steady, non-OECD emissions cannot even double.
One intermediate value results if all OECD countries were to meet the emissions-reduction target for the U.K. that was articulated in 2003 by Prime Minister Tony Blair—namely, a 60 percent reduction by 2050, relative to recent levels. The non-OECD countries could then emit 60 percent more CO2. On average, by mid-century they would have one half the per capita emissions of the OECD countries. The CO2 output of every country, rich or poor today, would be well below what it is generally projected to be in the absence of climate policy. In the case of the U.S., it would be about one fourth of the projected level.
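The "simple arithmetic" behind these two paragraphs is worth making explicit. With the total frozen and each bloc starting at half, every ton the non-OECD countries add must be matched by a ton the OECD cuts:

```python
# OECD and non-OECD each emit half of a frozen global total.
total = 7.0  # billion tons of carbon a year, held constant
oecd_now = non_oecd_now = total / 2

# If every OECD country met Blair's target (a 60 percent cut by mid-century),
# the non-OECD countries could use whatever room is left:
oecd_2056 = oecd_now * (1 - 0.60)
non_oecd_2056 = total - oecd_2056
print(f"non-OECD growth: {non_oecd_2056 / non_oecd_now - 1:+.0%}")  # +60%

# And even if the OECD cut to zero, non-OECD emissions could at most double,
# which is why they "cannot even double" while the OECD still emits something:
print(f"absolute ceiling: {total / non_oecd_now:.1f}x today's level")  # 2.0x
```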
Blair's goal would leave the average American emitting twice as much as the world average, as opposed to five times as much today. The U.S. could meet this goal in many ways. These strategies will be followed by most other countries as well. The resultant cross-pollination will lower every country's costs.
Fortunately, the goal of decarbonization does not conflict with the goal of eliminating the world's most extreme poverty. The extra carbon emissions produced when the world's nations accelerate the delivery of electricity and modern cooking fuel to the earth's poorest people can be compensated for by, at most, one fifth of a wedge of emissions reductions elsewhere.
Beyond 2056
The stabilization triangle deals only with the first 50-year leg of the future. One can imagine a relay race made of 50-year segments, in which the first runner passes a baton to the second in 2056. Intergenerational equity requires that the two runners have roughly equally difficult tasks. It seems to us that the task we have given the second runner (to cut the 2056 emissions rate in half between 2056 and 2106) will not be harder than the task of the first runner (to keep global emissions in 2056 at present levels)—provided that between now and 2056 the world invests in research and development to get ready. A vigorous effort can prepare the revolutionary technologies that will give the second half of the century a running start. Those options could include scrubbing CO2 directly from the air, carbon storage in minerals, nuclear fusion, nuclear thermal hydrogen, and artificial photosynthesis. Conceivably, one or more of these technologies may arrive in time to help the first runner, although, as we have argued, the world should not count on it.
As we look back from 2056, if global emissions of CO2 are indeed no larger than today's, what will have been accomplished? The world will have confronted energy production and energy efficiency at the consumer level, in all economic sectors and in economies at all levels of development. Buildings and lights and refrigerators, cars and trucks and planes, will be transformed. Transformed, also, will be the ways we use them.
The world will have a fossil-fuel energy system about as large as today's but one that is infused with modern controls
and advanced materials and that is almost unrecognizably cleaner. There will be integrated production of power, fuels and heat; greatly reduced air and water pollution; and extensive carbon capture and storage. Alongside the fossil energy system will be a non-fossil energy system approximately as large. Extensive direct and indirect harvesting of renewable energy will have brought about the revitalization of rural areas and the reclamation of degraded lands. If nuclear power is playing a large role, strong international enforcement mechanisms will have come into being to control the spread of nuclear technology from energy to weapons. Economic growth will have been maintained; the poor and the rich will both be richer. And our descendants will not be forced to exhaust so much treasure, innovation and energy to ward off rising sea level, heat, hurricanes and drought.
Critically, a planetary consciousness will have grown. Humanity will have learned to address its collective destiny—and to share the planet.
An Efficient Solution
The huge potential of energy efficiency measures for mitigating the release of greenhouse gases into the atmosphere attracts little attention when placed alongside the more glamorous alternatives of nuclear, hydrogen or renewable energies. But developing a comprehensive efficiency strategy is the fastest and cheapest thing we can do to reduce carbon emissions. It can also be profitable and astonishingly effective, as two recent examples demonstrate.
From 2001 through 2005, Procter & Gamble's factory in Germany increased production by 45 percent, but the energy needed to run machines and to heat, cool and ventilate buildings rose by only 12 percent, and carbon emissions remained at the 2001 level. The major pillars supporting this success include highly efficient illumination, compressed-air systems, new designs for heating and air conditioning, channeling waste heat from compressors into heating the buildings, and detailed energy
measurement and billing.
In some 4,000 houses and buildings in Germany, Switzerland, Austria and Scandinavia, extensive insulation, highly efficient windows and energy-conscious design have led to enormous efficiency increases, enabling energy budgets for heating that are a sixth of the requirement for typical buildings in these countries.
Improved efficiencies can be realized all along the energy chain, from the conversion of primary energy (oil, for example) to energy carriers (such as electricity) and finally to useful energy (the heat in your toaster). The annual global primary energy demand is 447,000 petajoules (a petajoule is roughly 300 gigawatt-hours), 80 percent of which comes from carbon-emitting fossil fuels such as coal, oil and gas. After conversion these primary energy sources deliver roughly 300,000
petajoules of so-called final energy to customers in the form of electricity, gasoline, heating oil, jet fuel, and so on.
The next step, the conversion of electricity, gasoline, and the like to useful energy in engines, boilers and lightbulbs, causes further energy losses of 154,000 petajoules. Thus, at present almost 300,000 petajoules, or two thirds of the primary energy, are lost during the two stages of energy conversion. Furthermore, all useful energy is eventually dissipated as heat at various temperatures. Insulating buildings more effectively, changing industrial processes and driving lighter, more aerodynamic cars (see "Fueling Our Transportation Future") would reduce the demand for useful energy, thus substantially reducing energy wastage.
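The energy-chain bookkeeping in the last two paragraphs tallies up as follows, using the article's figures in petajoules (PJ) per year:

```python
# The global energy chain: primary energy -> final energy -> useful energy.
primary = 447_000      # PJ/yr of primary energy demand (coal, oil, gas, etc.)
final = 300_000        # PJ/yr delivered to customers as electricity, fuels, heat
useful_loss = 154_000  # PJ/yr lost converting final energy in engines, boilers, bulbs

conversion_loss = primary - final  # lost turning primary into final energy
useful = final - useful_loss       # energy that actually does work
total_loss = conversion_loss + useful_loss

print(f"useful energy: {useful:,} PJ/yr")
print(f"total losses:  {total_loss:,} PJ/yr, "
      f"{total_loss / primary:.0%} of primary energy")  # about two thirds
```

The two stages of conversion together dissipate roughly 301,000 petajoules, which is the "almost 300,000 petajoules, or two thirds of the primary energy" in the text.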
Given the challenges presented by climate change and the high increases expected in energy prices, the losses that occur all along the energy chain can also be viewed as opportunities—and efficiency is one of the most important. New technologies and know-how must replace the present intensive use of energy and materials.
Room for Improvement
Because conservation measures, whether incorporated into next year's car design or a new type of power plant, can have a dramatic impact on energy consumption, they also have an enormous effect on overall carbon emissions. In this mix, buildings and houses, which are notoriously inefficient in many countries today, offer the greatest potential for saving energy. In countries belonging to the Organization for Economic Cooperation and Development (OECD) and in the megacities of emerging countries, buildings contribute more than one third of total energy-related greenhouse gas emissions.
Little heralded but impressive advances have already been made, often in the form of efficiency improvements that are invisible to the consumer. Beginning with the energy crisis in the 1970s, air conditioners in the U.S. were redesigned to use less power with little loss in cooling capacity and new U.S. building codes required more insulation and double-paned windows. New refrigerators use only one quarter of the power of earlier models. (With approximately 150 million refrigerators and freezers in the U.S., the difference in consumption between 1974 efficiency levels and 2001 levels is equivalent to avoiding the generation of 40 gigawatts at power plants.) Changing to compact fluorescent lightbulbs yields an instant reduction in power demand; these bulbs provide as much light as regular incandescent bulbs, last 10 times longer and use just one fourth to one fifth the energy.
Despite these gains, the biggest steps remain to be taken. Many buildings were designed with the intention of minimizing construction costs rather than life-cycle cost, including energy use, or simply in ignorance of energy-saving considerations. Take roof overhangs, for example, which in warm climates traditionally measured a meter or so and which are rarely used today because of the added cost, although they would control heat buildup on walls and windows. One of the
largest European manufacturers of prefabricated houses is now offering zero-net-energy houses: these well-insulated and intelligently designed structures with solar-thermal and photovoltaic collectors do not need commercial energy, and their total cost is similar to that of new houses built to conform to current building codes. Because buildings have a 50- to 100-year lifetime, efficiency retrofits are essential. But we need to coordinate changes in existing buildings thoughtfully to avoid replacing a single component, such as a furnace, while leaving in place leaky ducts and single-pane windows that waste much of the heat the new furnace produces.
One example highlights what might be done in industry: although some carpet manufacturers still dye their products at 100 to 140 degrees Celsius, others dye at room temperature using enzyme technology, reducing the energy demand by more than 90 percent.
The Importance of Policy
To realize the full benefits of efficiency, strong energy policies are essential. Among the underlying reasons for the crucial role of policy are the dearth of knowledge by manufacturers and the public about efficiency options, budgeting methods that do not take proper account of the ongoing benefits of long-lasting investments, and market imperfections such as external costs for carbon emissions and other costs of energy use. Energy policy set by governments has traditionally underestimated the benefits of efficiency. Of course, factors other than policy can drive changes in efficiency—higher energy prices, new technologies or cost competition, for instance. But policies—which include energy taxes, financial incentives, professional training, labeling, environmental legislation, greenhouse gas emissions trading and international coordination of regulations for traded products—can make an enormous difference. Furthermore, rapid growth in demand for energy services in emerging countries provides an opportunity to implement energy-efficient policies from the outset as infrastructure grows: programs to realize efficient solutions in buildings, transport systems and industry would give people the energy services they need without having to build as many power plants, refineries or gas pipelines.
Japan and the countries of the European Union have been more eager to reduce oil imports than the U.S. has and have encouraged productivity gains through energy taxes and other measures. But all OECD countries except Japan have so far failed to update appliance standards. Nor do gas and electric bills in OECD countries indicate how much energy is used for heating, say, as opposed to boiling water or which uses are the most energy-intensive—that is, where a reduction in usage would produce the greatest energy savings. In industry, compressed air, heat, cooling and electricity are often not billed by production line but expressed as an overhead cost.
Nevertheless, energy efficiency has a higher profile in Europe and Japan. A retrofitting project in Ludwigshafen, Germany, serves as just one example. Five years ago 500 dwellings were equipped to adhere to low-energy standards (about 30 kilowatt-hours per square meter per year), reducing the annual energy demand for heating those buildings by a factor of six. Before the retrofit, the dwellings were difficult to rent; now demand is three times greater than capacity.
Other similar projects abound. The Board of the Swiss Federal Institutes of Technology, for instance, has suggested a technological program aimed at what we call the 2,000-Watt Society—an annual primary energy use of 2,000 watts (or 65 gigajoules) per capita. Realizing this vision in industrial countries would reduce the per capita energy use and related carbon emissions by two thirds, despite a two-thirds increase in GDP, within the next 60 to 80 years. Swiss scientists, including myself, have been evaluating this plan since 2002, and we have concluded that the goal of the 2,000-watt per capita society is technically feasible for industrial countries in the second half of this century.
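The 2,000-watt figure and the 65-gigajoule figure are the same quantity expressed two ways, as a quick sanity check shows:

```python
# A constant draw of 2,000 watts, held for a full year, in gigajoules per capita.
watts = 2_000
seconds_per_year = 365.25 * 24 * 3600

energy_gj = watts * seconds_per_year / 1e9  # joules -> gigajoules
print(f"{energy_gj:.1f} GJ per capita per year")  # ~63 GJ; the article rounds to 65
```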
To some people, the term "energy efficiency" implies reduced comfort. But the concept of efficiency means that you get the same service—a comfortable room or convenient travel from home to work—using less energy. The EU, its member states and Japan have begun to tap the substantial—and profitable—potential of efficiency measures. To avoid the rising costs of energy supplies and the even costlier adaptations to climate change, efficiency must become a global activity.
Biodiesel Is Better
Petroleum alternatives include renewable fuels such as biodiesel, derived primarily from soybeans, and ethanol, distilled mostly from corn grain. In the first comprehensive analysis of the energy gains and environmental impact of both fuels, University of Minnesota researchers determined biodiesel to be the better choice. Ethanol from corn grain produces 25 percent more energy than all the energy people invested in it, whereas biodiesel from soybeans returns 93 percent more. Compared with fossil fuels, ethanol produces 12 percent fewer greenhouse gas emissions, whereas biodiesel produces 41 percent fewer. Soybeans also generate significantly less nitrogen, phosphorus and pesticide pollution. Dedicating all current U.S. corn and soybean production to biofuels, however, would meet only 12 percent of gasoline demand and 6 percent of diesel demand. Prairie grass may provide larger biofuel supplies with greater environmental benefits, the scientists reported online July 12 in the Proceedings of the National Academy of Sciences USA.
Climate Repair Manual
Global warming is a reality. Innovation in energy technology and policy is sorely needed if we are to cope. Explorers attempted and mostly failed over the centuries to establish a pathway from the Atlantic to the Pacific through the icebound North, a quest often punctuated by starvation and scurvy. Yet within just 40 years, and maybe many fewer, an ascending thermometer will likely mean that the maritime dream of Sir Francis Drake and Captain James Cook will turn into an actual route of commerce that competes with the Panama Canal.
The term "glacial change" has taken on a meaning opposite to its common usage. Yet in reality, Arctic shipping lanes would count as one of the more benign effects of accelerated climate change. The repercussions of melting glaciers, disruptions in the Gulf Stream and record heat waves edge toward the apocalyptic: floods, pestilence, hurricanes, droughts—even itchier cases of poison ivy. Month after month, reports mount of the deleterious effects of rising carbon levels. One recent study chronicled threats to coral and other marine organisms, another a big upswing in major wildfires in the western U.S. linked to warming.
The debate on global warming is over. Present levels of carbon dioxide—nearing 400 parts per million (ppm) in the earth's atmosphere—are higher than they have been at any time in the past 650,000 years and could easily surpass 500 ppm by the year 2050 without radical intervention.
The earth requires greenhouse gases, including water vapor, carbon dioxide and methane, to prevent some of the heat from incoming solar radiation from escaping back into space, thus keeping the planet hospitable for protozoa, Shetland ponies and Lindsay Lohan. But too much of a good thing—in particular, carbon dioxide from SUVs and local coal-fired utilities—is causing a steady uptick in the thermometer. Almost all of the 20 hottest years on record have occurred since the 1980s.
No one knows exactly what will happen if things are left unchecked—the exact date when a polar ice sheet will complete a phase change from solid to liquid cannot be foreseen with precision, which is why the Bush administration and warming-skeptical public-interest groups still carry on about the uncertainties of climate change. But no climatologist wants to test what will arise if carbon dioxide levels drift much higher than 500 ppm.
A League of Rations
Preventing the transformation of the earth's atmosphere from greenhouse to unconstrained hothouse represents arguably the most imposing scientific and technical challenge that humanity has ever faced. Sustained marshaling of cross-border engineering and political resources over the course of a century or more to check the rise of carbon emissions makes a moon mission or a Manhattan Project appear comparatively straightforward.
Climate change compels a massive restructuring of the world's energy economy. Worries over fossil-fuel supplies reach crisis proportions only when safeguarding the climate is taken into account. Even if oil production peaks soon—a debatable contention given Canada's oil sands, Venezuela's heavy oil and other reserves—coal and its derivatives could tide the earth over for more than a century. But fossil fuels, which account for 80 percent of the world's energy usage, become a liability if a global carbon budget has to be set.
Translation of scientific consensus on climate change into a consensus on what should be done about it carries the debate into the type of political minefield that has often undercut attempts at international governance since the League of Nations. The U.S. holds less than 5 percent of the world's population but produces nearly 25 percent of carbon emissions and has played the role of saboteur by failing to ratify the Kyoto Protocol and commit to reducing greenhouse gas emissions to 7 percent below 1990 levels.
Yet one of the main sticking points for the U.S.—the absence from that accord of a requirement that developing countries agree to firm emission limits— looms as even more of an obstacle as a successor agreement is contemplated to take effect when Kyoto expires in 2012. The torrid economic growth of China and India will elicit calls from industrial nations for restraints on emissions, which will again be met by even more adamant retorts that citizens of Shenzhen and Hyderabad should have the same opportunities to build their economies that those of Detroit and Frankfurt once did.
Kyoto may have been a necessary first step, if only because it lit up the pitted road that lies ahead. But stabilization of
carbon emissions will require a more tangible blueprint for nurturing further economic growth while building a decarbonized energy infrastructure. An oil company's "Beyond Petroleum" slogans will not suffice.
Industry groups advocating nuclear power and clean coal have stepped forward to offer single-solution visions of
clean energy. But too much devoted too early to any one technology could yield the wrong fix and derail momentum toward a sustainable agenda for decarbonization. Portfolio diversification underlies a plan laid out by Robert H. Socolow and Stephen W. Pacala in this single-topic edition of Scientific American. The two Princeton University professors describe how deployment of a basket of technologies and strategies can stabilize carbon emissions by mid-century.
Perhaps a solar cell breakthrough will usher in the photovoltaic age, allowing both a steel plant and a cell phone user to derive all needed watts from a single source. But if that does not happen—and it probably won't—many technologies (biofuels, solar, hydrogen and nuclear) will be required to achieve a low-carbon energy supply. All these approaches are profiled by leading experts in this special issue, as are more radical ideas, such as solar power plants in outer space and fusion generators, which may come into play should today's seers prove myopic 50 years hence.
No More Business as Usual
Planning in 50- or 100-year increments is perhaps an impossible dream. The slim hope for keeping atmospheric carbon below 500 ppm hinges on aggressive programs of energy efficiency instituted by national governments. To go beyond what climate specialists call the "business as usual" scenario, the U.S. must follow Europe and even some of its own state governments in instituting new policies that affix a price on carbon—whether in the form of a tax on emissions or in a cap-and-trade system (emission allowances that are capped in aggregate at a certain level and then traded in open markets). These steps can furnish the breathing space to establish the defense-scale research programs needed to cultivate fossil fuel alternatives. The current federal policy vacuum has prompted a group of eastern states to develop their own cap-and-trade program under the banner of the Regional Greenhouse Gas Initiative.
Fifty-year time frames are planning horizons for futurists, not pragmatic policymakers. Maybe a miraculous new energy technology will simultaneously solve our energy and climate problems during that time, but another scenario is at least as likely: a perceived failure of Kyoto or international bickering over climate questions could foster the burning of abundant coal for electricity and synthetic fuels for transportation, both without meaningful checks on carbon emissions.
A steady chorus of skeptics continues to cast doubt on the massive peer-reviewed scientific literature that forms the cornerstone for a consensus on global warming. "They call it pollution; we call it life," intones a Competitive Enterprise Institute advertisement on the merits of carbon dioxide. Uncertainties about the extent and pace of warming will undoubtedly persist. But the consequences of inaction could be worse than the feared economic damage that has bred overcaution. If we wait for an ice cap to vanish, it will simply be too late.
SA Perspectives:
Cooling Our Heels
From Christmas Day in 1991, when the white, blue and red Russian flag rose over the Kremlin, symbolizing the end of the Soviet Union, the U.S. assumed a dominant presence in world affairs the likes of which has not been witnessed since the Imperium Romanum. Yet the nation that endorsed the idea of preemptive military action has acted with remarkable passivity when it comes to an energy policy that deals with climate change.
In a recent scholarly article, economist Jeffrey D. Sachs and geophysicist Klaus S. Lackner of Columbia University noted that the Bush administration's impulse on global warming has been to wait for "something to turn up"—say, the discovery of a plentiful, non-carbon fuel or a technique to eliminate greenhouse emissions at low cost. Global warming has never been the priority it should be.
The reasons are not hard to fathom. People worry that the consumerist way of life that Americans have come to accept as a birthright will have to be scaled back. After all, the average U.S. citizen consumes more than twice as much energy as a western European, according to statistics for 2003, and almost 10 times as much as a Chinese citizen. To narrow this gap, the U.S. will have to alter its energy-intensive habits. But that doesn't mean we must all live in cardboard boxes. In every plan to tackle warming, Americans will still be better off in 2050 than they are today.
Both technical and policy farsightedness will be needed to achieve the concurrent objectives of growth and sustainability. Decades may pass before hydrogen-powered trucks and cars relegate gasoline and diesel-fueled vehicles to antique auto shows. In the interim, conservation and better efficiencies in
both transportation and electricity generation and usage will allow us to muddle through. Yet for even that to happen, the world's leading economy—and emitter of almost one quarter of human-generated carbon emissions—will have to assume the leadership role that it has so far largely shirked.
Regaining a modicum of credibility will itself prove an immense undertaking. Both the president and Congress need to endorse the ever expanding body of evidence that points to the reality of warming and listen to, rather than harass, scientists who arrive bearing bad news.
Funding for energy research must be accorded the privileged status usually reserved for health care and defense. Yet rhetoric needs to go beyond the mantra that before taking action, more research is needed to eliminate uncertainties surrounding climate science. A ceiling on greenhouse emissions should be set, and then the market should decide how to achieve that target through sales and purchases of emissions allowances. Other measures that must be adopted include stiffened fuel economy standards, carbon taxes and requirements that the largest producers of greenhouse gases report their emissions publicly.
The U.S. should lead by example. An aggressive domestic program would enable this country to influence China, India and other fast-growing developing nations to control emissions. Without the U.S. at the head of the table, the prospects for any meaningful action on a global scale will gradually recede along with the Arctic glaciers.
Fueling Our Transportation Future
If we are honest, most of us in the world's richer countries would concede that we like our transportation systems. They allow us to travel when we want to, usually door-to-door, alone or with family and friends, and with our baggage. The mostly unseen freight distribution network delivers our goods and supports our lifestyle. So why worry about the future and especially about how the energy that drives our transportation might be affecting our environment?
The reason is the size of these systems and their seemingly inexorable growth. They use petroleum-based fuels (gasoline and diesel) on an unimaginable scale. The carbon in these fuels is oxidized to the greenhouse gas carbon dioxide during combustion, and their massive use means that the amount of carbon dioxide entering the atmosphere is likewise immense. Transportation accounts for 25 percent of worldwide greenhouse gas emissions. As the countries in the developing world rapidly motorize, the increasing global demand for fuel will pose one of the biggest challenges to controlling the concentration of greenhouse gases in the atmosphere. The U.S. light-duty vehicle fleet (automobiles, pickup trucks, SUVs, vans and small trucks) currently consumes 150 billion gallons (550 billion liters) of gasoline a year, or 1.3 gallons of gasoline per person a day. If other nations burned gasoline at the same rate, world consumption would rise by a factor of almost 10.
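The per-person figure follows directly from the fleet total. A minimal sketch, assuming a U.S. population of roughly 300 million in the mid-2000s (the population figure is our assumption, not the article's):

```python
US_GASOLINE_GALLONS_PER_YEAR = 150e9  # article's figure for the light-duty fleet
US_POPULATION = 300e6                 # assumption: U.S. population circa 2006

# spread yearly consumption over the population and the calendar
per_person_per_day = US_GASOLINE_GALLONS_PER_YEAR / US_POPULATION / 365
print(round(per_person_per_day, 2))  # close to the article's 1.3 gallons a day
```

The slight spread around 1.3 reflects only the population estimate chosen.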
As we look ahead, what possibilities do we have for making transportation much more sustainable, at an acceptable cost?
Our Options
Several options could make a substantial difference. We could improve or change vehicle technology; we could change how we use our vehicles; we could reduce the size of our vehicles; we could use different fuels. We will most likely have to do all of these to drastically reduce energy consumption and greenhouse gas emissions.
In examining these alternatives, we have to keep in mind several aspects of the existing transportation system. First, it is well suited to its primary context, the developed world. Over decades, it has had time to evolve so that it balances economic costs with users' needs and wants. Second, this vast optimized system relies completely on one convenient source of energy—petroleum. And it has evolved technologies—internal-combustion engines on land and jet engines (gas turbines) for air—that well match vehicle operation with this energy-dense liquid fuel. Finally, these vehicles last a long time. Thus, rapid change is doubly difficult. Constraining and then reducing the local and global impacts of transportation energy will take decades.
We also need to keep in mind that efficiency ratings can be misleading; what counts is the fuel consumed in actual driving. Today's gasoline spark-ignition engine is about 20 percent efficient in urban driving and 35 percent efficient at its best operating point. But many short trips with a cold engine and transmission, amplified by cold weather and aggressive driving, significantly worsen fuel consumption, as do substantial time spent with the engine idling and losses in the transmission. These real-world driving phenomena reduce the engine's average efficiency so that only about 10 percent of the chemical energy stored in the fuel tank actually drives the wheels. Amory Lovins, a strong advocate for much lighter, more efficient vehicles, has stated it this way: with a 10 percent efficient vehicle and with the driver, a passenger and luggage—a payload of some 300 pounds, about 10 percent of the vehicle weight—"only 1 percent of the fuel's energy in the vehicle tank actually moves the payload."
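Lovins's figure is a simple chain of multiplicative losses. A sketch, assuming a roughly 3,000-pound vehicle so that the 300-pound payload is about 10 percent of its weight:

```python
tank_to_wheels = 0.10        # article: ~10 percent of fuel energy reaches the wheels
payload_share = 300 / 3000   # 300-lb payload in a ~3,000-lb vehicle (assumption)

# fraction of the tank's chemical energy that actually moves the payload
payload_efficiency = tank_to_wheels * payload_share
print(round(payload_efficiency, 3))  # 0.01, Lovins's "only 1 percent"
```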
We must include in our accounting what it takes to produce and distribute the fuel, to drive the vehicle through its
lifetime of 150,000 miles (240,000 kilometers) and to manufacture, maintain and dispose of the vehicle. These three phases of vehicle operation are often called well-to-tank (this phase accounts for about 15 percent of the total lifetime energy use and greenhouse gas emissions), tank-to-wheels (75 percent), and cradle-to-grave (10 percent). Surprisingly, the energy required to produce the fuel and the vehicle is not negligible. This total life-cycle accounting becomes especially important as we consider fuels that do not come from petroleum and new types of vehicle technologies. It is what gets used and emitted in this total sense that matters.
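One way to see why the well-to-tank phase "is not negligible": relative to the fuel actually burned in the vehicle, producing and delivering that fuel adds a substantial overhead. A sketch using the article's lifetime shares:

```python
# lifetime energy shares from the article
well_to_tank = 0.15
tank_to_wheels = 0.75
cradle_to_grave = 0.10
total = well_to_tank + tank_to_wheels + cradle_to_grave  # shares sum to 1

# upstream energy per unit of fuel energy burned in the vehicle
overhead = well_to_tank / tank_to_wheels
print(round(overhead, 2))  # 0.2: every unit burned costs ~20 percent more upstream
```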
Improving existing light-duty vehicle technology can do a lot. By investing more money in increasing the efficiency of the engine and transmission, decreasing weight, improving tires and reducing drag, we can bring down fuel consumption by about one third over the next 20 or so years—an annual 1 to 2 percent improvement, on average. (This reduction would cost between $500 and $1,000 per vehicle; at likely future fuel prices, this amount would not increase the lifetime cost of
ownership.) These types of improvements have occurred steadily over the past 25 years, but we have bought larger, heavier, faster cars and light trucks and thus have effectively traded the benefits we could have realized for these other attributes. Though most obvious in the U.S., this shift to larger, more powerful vehicles has occurred elsewhere as well. We need to find ways to motivate buyers to use the potential for reducing fuel consumption and greenhouse gas emissions to actually save fuel and contain emissions.
In the near term, if vehicle weight and size can be reduced and if both buyers and manufacturers can step off the ever increasing horsepower/performance path, then in the developed world we may be able to slow the rate of petroleum demand, level it off in 15 to 20 years at about 20 percent above current demand, and start on a slow downward path. This
projection may not seem nearly aggressive enough. It is, however, both challenging to achieve and very different from our
current trajectory of steady growth in petroleum consumption at about 2 percent a year.
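The one-third figure and the 1 to 2 percent annual rate are linked by ordinary compounding, which a couple of lines make explicit:

```python
annual_gain = 0.02   # the upper end of the article's 1 to 2 percent a year
years = 20

# fuel consumption remaining after two decades of steady improvement
remaining = (1 - annual_gain) ** years
print(round(remaining, 2))  # about 0.67: roughly the one-third reduction cited
```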
In the longer term, we have additional options. We could develop alternative fuels that would displace at least some petroleum. We could turn to new propulsion systems that use hydrogen or electricity. And we could go much further in designing and encouraging acceptance of smaller, lighter vehicles.
The alternative fuels option may be difficult to implement unless the alternatives are compatible with the existing distribution system. Also, our current fuels are liquids with a high-energy density: lower-density fuels will require larger fuel
tanks or provide less range than today's roughly 400 miles. From this perspective, one alternative that stands out is non-
conventional petroleum (oil or tar sands, heavy oil, oil shale, coal). Processing these sources to yield "oil," however, requires
large amounts of other forms of energy, such as natural gas and electricity. Thus, the processes used emit substantial amounts of greenhouse gases and have other environmental impacts. Further, such processing calls for big capital investments. Nevertheless, despite the broader environmental consequences, nonconventional petroleum sources are already starting to be exploited; they are expected to provide some 10 percent of transportation fuels within the next 20 years.
Biomass-based fuels such as ethanol and biodiesel, which are often considered to emit less carbon dioxide per unit of energy, are also already being produced. In Brazil ethanol made from sugarcane constitutes some 40 percent of transport fuel. In the U.S. roughly 20 percent of the corn crop is being converted to ethanol. Much of this is blended with gasoline at the 10 percent level in so-called reformulated (cleaner-burning) gasolines. The recent U.S. Energy Policy Act calls for doubling ethanol production from the current 2 percent of transportation fuel by 2012. But the fertilizer, water, and natural gas and electricity currently expended in ethanol production from corn will need to be substantially decreased. Production of ethanol from cellulosic biomass (residues and wastes from plants not generally used as a food source) promises to be more efficient and to lower greenhouse gas emissions. It is not yet a commercially viable process, although it may well become so. Biodiesel can be made from various crops (rapeseed, sunflower, soybean oils) and waste animal fats. The small amounts now being made are blended with standard diesel fuel.
It is likely that the use of biomass-based fuels will steadily grow. But given the uncertainty about the environmental impacts of large-scale conversion of biomass crops to fuel (on soil quality, water resources and overall greenhouse gas emissions), this source will contribute but is unlikely to dominate the future fuel supply anytime soon.
Use of natural gas in transportation varies around the world from less than 1 percent to 10 to 15 percent in a few
countries where tax policies make it economical. In the 1990s natural gas made inroads into U.S. municipal bus fleets to
achieve lower emissions; diesels with effective exhaust cleanup are now proving a cheaper option.
What about new propulsion system technology? Likely innovations include significantly improved gasoline engines (using a turbocharger with direct fuel injection, for example), more efficient transmissions, low-emission diesels with catalysts and particulate traps in the exhaust, and perhaps new approaches to how the fuel is combusted. Hybrids, which combine a small gasoline engine and a battery-powered electric motor, are already on the road, and production volumes are growing. These vehicles use significantly less gasoline in urban driving, have lower benefits at highway speeds and cost a few thousand dollars extra to buy.
Researchers are exploring more radical propulsion systems and fuels, especially those that have the potential for low lifecycle carbon dioxide emissions. Several organizations are developing hydrogen-powered fuel cell vehicles in hybrid form
with a battery and an electric motor. Such systems could increase vehicle efficiency by a factor of two, but much of that benefit is offset by the energy consumed and the emissions produced in making and distributing hydrogen. If the hydrogen could be produced through low-carbon-emitting processes and a practical distribution system could be set up, hydrogen would offer low-greenhouse-emissions potential. But it would take technological breakthroughs and many decades before hydrogen-based transportation could become a reality and have widespread impact.
Hydrogen is, of course, an energy carrier rather than an energy source. Electricity is an alternative energy carrier with promise of producing energy without releasing carbon dioxide, and various research teams are looking at its use in transportation. The major challenge is coming up with a battery that can store enough energy for a reasonable driving range,
at an acceptable cost. One technical barrier is the long battery recharging time. Those of us used to filling a 20-gallon tank in four minutes might have to wait for several hours to charge a battery. One way around the range limitation of electric vehicles is the plug-in hybrid, which has a small engine on-board to recharge the battery when needed. The energy used
could thus be largely electricity and only part engine fuel. We do not yet know whether this plug-in hybrid technology will
prove to be broadly attractive in the marketplace.
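The gap between the gasoline pump and the wall outlet can be made concrete with rough numbers. The energy content, battery size and charger rating below are illustrative assumptions, not figures from the article:

```python
KWH_PER_GALLON = 33.7  # approximate chemical energy in a gallon of gasoline (assumption)

tank_kwh = 20 * KWH_PER_GALLON       # the 20-gallon fill-up
pump_power_kw = tank_kwh / (4 / 60)  # delivered in four minutes

battery_kwh = 50   # hypothetical pack sized for a reasonable driving range
charger_kw = 7     # a typical dedicated home charging circuit (assumption)
charge_hours = battery_kwh / charger_kw

print(round(pump_power_kw))    # ~10,000 kW equivalent energy-transfer rate
print(round(charge_hours, 1))  # ~7 hours, versus minutes at the pump
```

The comparison overstates things somewhat, since an electric drivetrain uses its stored energy far more efficiently than an engine burns gasoline, but the orders of magnitude explain the recharging bottleneck.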
Beyond adopting improved propulsion systems, a switch to lighter-weight materials and different vehicle structures
could reduce weight and improve fuel consumption without downsizing. Obviously, though, combining lighter materials and smaller vehicle size would produce an even greater effect. Maybe the way we use vehicles in the future will differ radically from our "general purpose vehicle" expectations of today. In the future, a car specifically designed for urban driving may make sense. Volkswagen, for example, has a small two-person concept car prototype that weighs 640 pounds (290 kilograms) and consumes one liter of gasoline per 100 kilometers (some 240 miles per gallon; existing average U.S. light-duty vehicles use 10 liters per 100 kilometers, or just under 25 miles per gallon). Some argue that downsizing reduces safety, but such concerns can be addressed in the vehicle's design.
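The two fuel-economy conventions in that comparison convert directly; the constants below are rounded standard conversion factors:

```python
LITERS_PER_GALLON = 3.785
KM_PER_MILE = 1.609

def l_per_100km_to_mpg(l_per_100km):
    """Convert European-style fuel consumption to U.S. miles per gallon."""
    km_per_liter = 100.0 / l_per_100km
    return km_per_liter * LITERS_PER_GALLON / KM_PER_MILE

print(round(l_per_100km_to_mpg(1)))   # ~235 mpg for the one-liter concept car
print(round(l_per_100km_to_mpg(10)))  # ~24 mpg for the average U.S. vehicle
```

Note the relationship is reciprocal: halving liters per 100 kilometers doubles miles per gallon.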
Promoting Change
Better technology will undoubtedly improve fuel efficiency. In the developed world, markets may even adopt enough of these improvements to offset the expected increases in the number of vehicles. And gasoline prices will almost certainly rise over the next decade and beyond, prompting changes in the way consumers purchase and use their vehicles. But market forces alone are unlikely to curb our ever growing appetite for petroleum.
A coordinated package of fiscal and regulatory policies will need to come into play for fuel-reduction benefits to be realized from these future improvements. Effective policies would include a "feebate" scheme, in which customers pay an extra fee to buy big fuel-consumers but get a rebate if they buy small, fuel-efficient models. The feebate combines well with stricter Corporate Average Fuel Economy (CAFE) standards—in other words, with regulations that require automobile makers to produce products that consume less fuel. Adding higher fuel taxes to the package would further induce people to buy fuel-efficient models. And tax incentives could spur more rapid changes in the production facilities for new technologies. All these measures may be needed to keep us moving forward.
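The feebate's mechanics are simple to sketch. The pivot point and dollar rate below are invented for illustration; a real scheme would set them politically:

```python
def feebate_dollars(l_per_100km, pivot=8.0, rate_per_liter=200.0):
    """Fee (positive) or rebate (negative) at purchase.

    pivot and rate_per_liter are hypothetical values. Vehicles thirstier
    than the pivot pay in; frugal ones are paid out, so the scheme can be
    made roughly revenue-neutral.
    """
    return (l_per_100km - pivot) * rate_per_liter

print(feebate_dollars(12.0))  # a large SUV pays an 800-dollar fee
print(feebate_dollars(5.0))   # an efficient compact earns a 600-dollar rebate
```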
Plan B for Energy
To keep this world tolerable for life as we like it, humanity must complete a marathon of technological change whose finish line lies far over the horizon. Robert H. Socolow and Stephen W. Pacala of Princeton University have compared the feat to a multigenerational relay race (see "A Plan to Keep Carbon in Check"). They outline a strategy to win the first 50-year leg by reining back carbon dioxide emissions from a century of unbridled acceleration. Existing technologies, applied both wisely and promptly, should carry us to this first milestone without trampling the global economy. That is a sound plan A.
The plan is far from foolproof, however. It depends on societies ramping up an array of carbon-reducing practices to form seven "wedges," each of which keeps 25 billion tons of carbon in the ground and out of the air. Any slow starts or early plateaus will pull us off track. And some scientists worry that stabilizing greenhouse gas emissions will require up to 18 wedges by 2056, not the seven that Socolow and Pacala forecast in their most widely cited model.
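The 25-billion-ton figure follows from the geometry of the stabilization triangle: each wedge ramps linearly from zero to one billion tons of carbon avoided per year over 50 years, so its total is the area of a triangle.

```python
years = 50
final_rate_gtc = 1.0  # billion tons of carbon avoided per year at year 50

# area of one triangular wedge: total carbon kept out of the air
wedge_total_gtc = 0.5 * years * final_rate_gtc
print(wedge_total_gtc)  # 25.0

# seven wedges close the 7 GtC/yr gap between the rising path
# (14 GtC/yr in 2056) and the flat path (today's 7 GtC/yr)
gap_closed_gtc_per_year = 7 * final_rate_gtc
print(gap_closed_gtc_per_year)  # 7.0
```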
It is a mistake to assume that carbon releases will rise more slowly than will economic output and energy use, argues Martin I. Hoffert, a physicist at New York University. As oil and gas prices rise, he notes, the energy industry is "recarbonizing" by turning back to coal. "About 850 coal-fired power plants are slated to be built by the U.S., China and India—none of which signed the Kyoto Protocol," Hoffert says. "By 2012 the emissions of those plants will overwhelm Kyoto reductions by a factor of five."
Even if plan A works and the teenagers of today complete the first leg of the relay by the time they retire, the race will be but half won. The baton will then pass in 2056 to a new generation for the next and possibly harder part of the marathon: cutting the rate of CO2 emissions in half by 2106.
Sooner or later the world is thus going to need a plan B: one or more fundamentally new technologies that together can supply 10 to 30 terawatts without belching a single ton of carbon dioxide. Energy buffs have been kicking around many such wild ideas since the 1960s. It is time to get serious about them. "If we don't start now building the infrastructure for a revolutionary change in the energy system," Hoffert warns, "we'll never be able to do it in time."
But what to build? The survey that follows sizes up some of the most promising options, as well as a couple that are popular yet implausible. None of them is a sure thing. But from one of these ideas might emerge a new engine of human civilization.
Nuclear Fusion
Starry-eyed physicists point to the promise of unlimited fuel and minimal waste. But politicians blanch at fusion's price tag and worry about getting burned.
Fusion reactors—which make nuclear power by joining atoms rather than splitting them—top almost everyone's list of ultimate energy technologies for humanity. By harnessing the same strong thermonuclear force that fires the sun, a fusion plant could extract a gigawatt of electricity from just a few kilograms of fuel a day. Its hydrogen-isotope fuel would come from seawater and lithium, a common metal. The reactor would produce no greenhouse gases and relatively small amounts of low-level radioactive waste, which would become harmless within a century. "Even if the plant were flattened [by an accident or attack], the radiation level one kilometer outside the fence would be so small that evacuation would not be necessary," says Farrokh Najmabadi, a fusion expert who directs the Center for Energy Research at the University of California, San Diego.
The question is whether fusion can make a large contribution to the 21st century or is a 22nd-century solution. "A decade ago some scientists questioned whether fusion was possible, even in the lab," says David E. Baldwin, who as head of the energy group at General Atomics oversees the largest fusion reactor in the U.S., the DIII-D. But the past 20 years have seen dramatic improvements in tokamaks, machines that use giant electromagnetic coils to confine the ionized fuel within a doughnut-shaped chamber while the plasma is heated to more than 100 million degrees Celsius.
"We now know that fusion will work," Baldwin says. "The question is whether it is economically practical"— and if so, how quickly fusion could move from its current experimental form into large-scale commercial reactors. "Even with a crash program," he says, "I think we would need 25 to 30 years" to develop such a design.
So far political leaders have chosen to push fusion along much more slowly. Nearly 20 years after it was first proposed, the International Thermonuclear Experimental Reactor (ITER) is only now nearing final approval. If construction begins on schedule next year, the $10-billion reactor should begin operation in southeastern France in 2016.
Meanwhile an intermediate generation of tokamaks now nearing completion in India, China and Korea will test whether coils made of superconducting materials can swirl the burning plasma within its magnetic bottle for minutes at a time. Current reactors manage a few dozen seconds at best before their power supplies give out.
ITER aims for three principal goals. First it must demonstrate that a large tokamak can control the fusion of the hydrogen isotopes deuterium and tritium into helium long enough to generate 10 times the energy it consumes. A secondary aim is to test ways to use the high-speed neutrons created by the reaction to breed tritium fuel: for example, by shooting them into a surrounding blanket of lithium. The third goal is to integrate the wide range of technologies needed for a commercial fusion plant.
If ITER succeeds, it will not add a single watt to the grid. But it will carry fusion past a milestone that nuclear fission energy reached in 1942, when Enrico Fermi oversaw the first self-sustaining nuclear chain reaction. Fission reactors were powering submarines 11 years later. Fusion is an incomparably harder problem, however, and some veterans in the field predict that 20 to 30 years of experiments with ITER will be needed to refine designs for a production plant.
Najmabadi is more optimistic. He leads a working group that has already produced three rough designs for commercial fusion reactors. The latest, called ARIES-AT, would have a more compact footprint and thus a lower capital cost than ITER. The ARIES-AT machine would produce 1,000 megawatts at a price of roughly five cents per kilowatt-hour, competitive with today's oil- and gas-fired plants. If work on a commercial plant began in parallel with ITER, rather than decades after it goes online, fusion might be ready to scale up for production by mid-century, Najmabadi argues.
Fusion would be even more cost-competitive, Hoffert suggests, if the fast neutrons produced by tokamaks were used to transmute thorium (which is relatively abundant) into uranium (which may be scarce 50 years hence) to use as fuel in nuclear fission plants. "Fusion advocates don't want to sully its clean image," Hoffert observes, "but fusion-fission hybrids may be the way to go."
High-Altitude Wind
Wind is solar energy in motion. About 0.5 percent of the sunlight entering the atmosphere is transmuted into the kinetic energy of air: a mere 1.7 watts, on average, in the atmospheric column above every square meter of the earth. Fortunately, that energy is not distributed evenly but concentrated into strong currents. Unfortunately, the largest, most powerful and most consistent currents are all at high altitude. Hoffert estimates that roughly two thirds of the total wind energy on this planet resides in the upper troposphere, beyond the reach of today's wind farms.
Ken Caldeira of the Carnegie Institution of Washington once calculated how wind power varies with altitude, latitude and season. The mother lode is the jet stream, about 10,000 meters (33,000 feet) up between 20 and 40 degrees latitude in the Northern Hemisphere. In the skies over the U.S., Europe, China and Japan—indeed, many of the countries best prepared to exploit it—wind power surges to 5,000 or even 10,000 watts a square meter. The jet stream does wander. But it never stops.
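Scaled over the whole planet, even that modest 1.7-watt average is an enormous flux. A back-of-the-envelope check, assuming a standard value for the earth's surface area (our assumption, not the article's):

```python
EARTH_SURFACE_M2 = 5.1e14  # assumption: earth's total surface area in square meters
avg_wind_w_per_m2 = 1.7    # article's column-average kinetic energy flux

total_terawatts = EARTH_SURFACE_M2 * avg_wind_w_per_m2 / 1e12
print(round(total_terawatts))  # hundreds of terawatts of kinetic energy flux
```

Even capturing a tiny fraction of that flux would cover the 10 to 30 terawatts discussed elsewhere in this issue.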
If wind is ever to contribute terawatts to the global energy budget, engineers will have to invent affordable ways to mine the mother lode. Three high-flying designs are in active development.
Magenn Power in Ottawa, Ontario, plans to begin selling next year a rotating, helium-filled generator that exploits the Magnus effect (best known for giving loft to spinning golf balls) to float on a tether up to 122 meters above the ground. The bus-size device will produce four kilowatts at its ground station and will retail for about $10,000— helium not included. The company aims to produce higher-flying, 1.6-megawatt units, each the size of a football field, by 2010.
"We looked at balloons; the drag they produce seemed unmanageable in high winds," says Al Grenier of Sky Wind Power in Ramona, Calif. Grenier's venture is instead pursuing autogiros, which catch the wind with helicopter-like rotors. Rising to 10,000 meters, the machines could realize 90 percent of their peak capacity. The inconstancy of surface winds limits ground turbines to about half that. But the company has struggled to gather the $4 million it needs for a 250-kilowatt prototype.
Still in the conceptual stages is the "ladder-mill," designed by astronaut Wubbo J. Ockels and his students at the Delft University of Technology in the Netherlands. Ockels envisions a series of computer-controlled kites connected by a long tether. The ladder of kites rises and descends, turning a generator on the ground as it yo-yos up and down. Simulations of the system suggest that a single ladder-mill reaching to the jet stream could produce up to 50 megawatts of power.
Until high-altitude machines are fielded, no one can be certain how well they will hold up under turbulence, gusts and lightning strikes. Steep maintenance costs could be their downfall.
There are regulatory hurdles to clear as well. Airborne wind farms need less land than their terrestrial counterparts, but their operators must persuade national aviation agencies to restrict aircraft traffic in the vicinity. There is precedent for this, Grenier points out: the U.S. Air Force has for years flown up to a dozen large tethered aerostats at high altitude above the country's southern border.
By the standards of revolutionary technologies, however, high-altitude wind looks relatively straightforward and benign.
Space-Based Solar
With panels in orbit, where the sun shines brightest— and all the time—solar could really take off. But there's a catch.
When Peter Glaser proposed in 1968 that city-size satellites could harvest solar power from deep space and beam it back to the earth as invisible microwaves, the idea seemed pretty far out, even given Glaser's credentials as president of the International Solar Energy Society. But after the oil crises of the 1970s sent fuel prices skyrocketing, NASA engineers gave the scheme a long, hard look. The technology seemed feasible until, in 1979, they estimated the "cost to first power": $305 billion (in 2000 dollars). That was the end of that project.
Solar and space technologies have made great strides since then, however, and space solar power (SSP) still has its champions. Hoffert cites two big advantages that high-flying arrays could lord over their earthbound brethren. In a geostationary orbit well clear of the earth's shadow and atmosphere, the average intensity of sunshine is eight times as strong as it is on the ground. And with the sun always in their sights, SSP stations could feed a reliable, fixed amount of electricity into the grid. (A rectifying antenna, or "rectenna," spread over several square kilometers of land could convert microwaves to electric current with about 90 percent efficiency, even when obstructed by clouds.)
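The claimed eight-fold advantage is easy to sanity-check. The solar-constant value below is a standard figure, not taken from the article:

```python
# Sanity check of the "eight times as strong" claim.
# Assumed (standard value): sunlight just outside the atmosphere
# delivers about 1,366 W/m^2, available almost continuously in a
# geostationary orbit clear of the earth's shadow.
solar_constant = 1366.0   # W/m^2, continuous in orbit
advantage = 8.0           # factor quoted in the text

ground_average = solar_constant / advantage
print(f"implied ground average: {ground_average:.0f} W/m^2")  # → 171
# ~171 W/m^2 is consistent with typical year-round surface averages
# once night, clouds and atmospheric absorption are accounted for.
```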
"SSP offers a truly sustainable, global-scale and emission-free electricity source," Hoffert argues. "It is more cost-effective and more technologically feasible than controlled thermonuclear fusion." Yet there is minimal research funding for space-based solar, he complains, while a $10-billion fusion reactor has just been approved.
NASA did in fact fund small studies from 1995 to 2003 that evaluated a variety of SSP components and architectures. The designs took advantage of thin-film photovoltaics to create the electricity, high-temperature superconductors to carry it, and infrared lasers (in place of microwave emitters) to beam it to ground stations. Such high-tech innovations enabled SSP engineers to cut the systems' weight and thus reduce the formidable cost of launching them into orbit.
But here's the catch: the power-to-payload ratio, at a few hundred watts per kilogram, has remained far too low. Until it rises, space-based solar will never match the price of other renewable energy sources, even accounting for the energy storage systems that ground-based alternatives require to smooth over nighttime and poor weather lulls.
Technical advances could change the game rapidly, however. Lighter or more efficient photovoltaic materials are in the works (see "Nanotech Solar Cells"). In May, for example, researchers at the University of Neuchatel in Switzerland reported a new technique for depositing amorphous silicon cells on a space-hardy film that yields power densities of 3,200 watts per kilogram. Although that is encouraging, says John C. Mankins, who led NASA's SSP program from 1995 to 2003, "the devil is in the supporting structure and power management." Mankins sees more promise in advanced earth-to-orbit space transportation systems, now on drawing boards, that might cut launch costs from more than $10,000 a kilogram to a few hundred dollars in coming decades.
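A little division shows why the power-to-payload ratio dominates the economics. The sketch below combines the launch costs and the few-hundred-watts-per-kilogram system figure quoted above; it counts launch cost only, ignoring hardware, assembly and transmission losses, so real costs would be higher still:

```python
# Launch-cost contribution per delivered watt of space solar power,
# using the numbers quoted in the text. A lower bound: launch only.
def launch_cost_per_watt(launch_usd_per_kg: float,
                         watts_per_kg: float) -> float:
    return launch_usd_per_kg / watts_per_kg

# Today: >$10,000/kg to orbit, a few hundred watts per kilogram.
print(launch_cost_per_watt(10_000, 300))  # ~$33 per watt, launch alone

# With the few-hundred-dollar launch costs Mankins hopes for:
print(launch_cost_per_watt(300, 300))     # $1 per watt
```

At tens of dollars per watt for launch alone, orbital solar cannot compete with ground-based generation; at a dollar per watt, the comparison changes entirely.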
JAXA, the Japanese space agency, last year announced plans to launch by 2010 a satellite that will unfurl a large solar array and beam 100 kilowatts of microwave or laser power to a receiving station on the earth. The agency's long-term road map calls for flying a 250-megawatt prototype system by 2020 in preparation for a gigawatt-class commercial SSP plant a decade later.
NASA once had similarly grand designs, but the agency largely halted work on SSP when its priorities shifted to space exploration two years ago.
Nanotech Solar Cells
Five gigawatts—a paltry 0.038 percent of the world's consumption of energy from all sources. That, roughly, is the cumulative capacity of all photovoltaic (PV) power systems installed in the world, half a century after solar cells were first commercialized. In the category of greatest unfulfilled potential, solar-electric power is a technology without rival.
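Dividing the two opening figures recovers an estimate of total world energy consumption. Installed capacity and delivered energy are not strictly comparable (panels sit idle at night), so treat this only as a rough consistency check:

```python
# Quick consistency check on the opening figures.
pv_capacity_gw = 5.0       # cumulative installed PV capacity
share = 0.00038            # the quoted 0.038 percent

world_tw = pv_capacity_gw / share / 1000   # convert GW to TW
print(f"implied world energy consumption: ~{world_tw:.1f} TW")  # ~13.2
```

Roughly 13 terawatts is indeed the scale of world primary energy demand in the mid-2000s, so the quoted percentage hangs together.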
Even if orbiting arrays (see "Space-Based Solar") never get off the ground, nanotechnology now looks set to rescue solar from its perennial irrelevance. Engineers are working on a wide range of materials that outshine the bulk silicon used in most PV cells today, improving efficiency while lowering cost.
The most sophisticated (and expensive) second-generation silicon cells eke out about 22 percent efficiency. New materials laced with quantum dots might double that, if discoveries reported this past March pan out as hoped. The dots, each less than 10 billionths of a meter wide, were created by groups at the National Renewable Energy Laboratory in Colorado and Los Alamos National Laboratory in New Mexico.
When sunlight hits a silicon cell, most of it ends up as heat. At best, a photon can knock loose one electron. Quantum dots can put a wider range of wavelengths to useful work and can kick out as many as seven electrons for every photon. Most of those electrons soon get stuck again, so engineers are testing better ways to funnel them into wires. They are also hunting for dot materials that are more environmentally friendly than the lead, selenium and cadmium in today's nanocrystals. Despite their high-tech name, the dots are relatively inexpensive to make.
Nanoparticles of a different kind promise to help solar compete on price. Near San Francisco, Nanosolar is building a factory that will churn out 200 million cells a year by printing nanoscopic bits of copper-indium-gallium-diselenide onto continuous reels of ultra-thin film. The particles self-assemble into light-harvesting structures. Nanosolar's CEO says he is aiming to bring the cost down to 50 cents a watt.
The buzz has awakened energy giants. Shell now has a subsidiary making solar cells, and BP in June launched a five-year project with the California Institute of Technology. Its goal: high-efficiency solar cells made from silicon nanorods.
A Global Super Grid
Revolutionary energy sources need a revolutionary superconducting electrical grid that spans the planet.
"A basic problem with renewable energy sources is matching supply and demand," Hoffert observes. Supplies of sunshine, wind, waves and even biofuel crops fade in and out unpredictably, and they tend to be concentrated where people are not. One solution is to build long-distance transmission lines from superconducting wires. When chilled to near absolute zero, these conduits can wheel tremendous currents over vast distances with almost no loss.
In July the BOC Group in New Jersey and its partners began installing 350 meters of superconducting cable into the grid in Albany, N.Y. The nitrogen-cooled link will carry up to 48 megawatts' worth of current at 34,500 volts. "We know the technology works; this project will demonstrate that," says Ed Garcia, a vice president at BOC.
At a 2004 workshop, experts sketched out designs for a "Super Grid" that would simultaneously transport electricity and hydrogen. The hydrogen, condensed to a liquid or ultra-cold gas, would cool the superconducting wires and could also power fuel cells and combustion engines.
With a transcontinental Super Grid, solar arrays in Australia and wind farms in Siberia might power lights in the U.S. and air conditioners in Europe. But building such infrastructure would most likely take generations and trillions of dollars.
Waves & Tides
The tide has clearly turned for the dream of harnessing the incessant motion of the sea. "Ocean energy is about 20 years behind wind power," acknowledges Roger Bedard, ocean energy leader at the Electric Power Research Institute. "But it certainly isn't going to take 20 years to catch up."
Through the 1980s and 1990s, advocates of tidal and wave power could point to only two commercial successes: a 240-megawatt (MW) tidal plant in France and a 20-MW tidal station in Nova Scotia. Now China has jumped onboard with a 40-kilowatt (kW) facility in Daishan. Six 36-kW turbines are soon to start spinning in New York City's East River. This summer the first commercial wave farm will go online in Portugal. And investors and governments are hatching much grander schemes.
The grandest is in Britain, where analysts suggest ocean power could eventually supply one fifth of the country's electricity and fulfill its obligations under the Kyoto Protocol. The U.K. government in July ordered a feasibility study for a 16-kilometer dam across the Severn estuary, whose tides rank second largest in the world. The Severn barrage, as it is called, would cost $25 billion and produce 8.6 gigawatts when tides were flowing. Proponents claim it would operate for a century or more.
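The quoted price and output imply the barrage's capital cost per unit of capacity. Note that the 8.6 gigawatts flows only while tides are running, so the cost per average kilowatt delivered would be several times higher:

```python
# Implied capital cost of the Severn barrage, from the figures above.
cost_usd = 25e9            # the quoted $25 billion
peak_kw = 8.6e6            # 8.6 gigawatts while tides are flowing

print(f"${cost_usd / peak_kw:,.0f} per kW of peak capacity")  # ~$2,907
```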
Environmental groups warn that the barrage would wreak havoc on the estuarine ecosystem. Better than a dam, argues Peter Fraenkel of Marine Current Turbines, would be arrays of the SeaGen turbines his company has developed. Such tide farms dotting the U.K. coast could generate almost as much electricity as the Severn dam but with less capital investment, power variation and environmental impact.
Fraenkel's claims will be put to a small test this year, when a tidal generator the company is installing in Strangford Lough begins contributing an average power of 540 kW to the grid in Northern Ireland. The machine works much like an underwater windmill, with two rotors sharing a single mast cemented into the seabed.
"The biggest advantage of tidal power is that it is completely predictable," Bedard says. "But on a global scale, it will never be very large." There are too few places where tides move fast enough.
Energetic waves are more capricious but also more ubiquitous. An analysis by Bedard's group found that if just 20 percent of the commercially viable offshore wave resources in the U.S. were harnessed with 50-percent-efficient wave farms, the energy produced would exceed all conventional hydroelectric generation in the country.
Four companies have recently completed sea trials of their wave conversion designs. One of them, Ocean Power Delivery, will soon begin reaping 2.25 MW off the coast of Portugal from three of its 120-meter-long Pelamis machines. If all goes well, it will order another 30 this year. Surf's up.
Designer Microbes
“We view the genome as the software, or even the operating system, of the cell," said J. Craig Venter. It's time for an upgrade, he suggested. Venter was preaching to the choir: a large group of biologists at the Synthetic Biology 2.0 conference this past May. Many of the scientists there have projects to genetically rewire organisms so extensively that the resulting cells would qualify as synthetic species. Venter, who gained fame and fortune for the high-speed methods he helped to develop to sequence the human genome, recently founded a company, Synthetic Genomics, to commercialize custom-built cells. "We think this field has tremendous potential to replace the petrochemical industry, possibly within a decade," he said.
That assessment may be overly optimistic; no one has yet assembled a single cell from scratch. But Venter reported rapid progress on his team's efforts to create artificial chromosomes that contain just the minimum set of genes required for self-sustaining life within a controlled, nutrient-rich environment. "The first synthetic prokaryotic cell [lacking a nucleus] will definitely happen within the next two years," he predicted. "And synthetic eukaryotic genomes [for cells with nuclei] will happen within a decade at most."
Venter envisions novel microbes that capture carbon dioxide from the smokestack of a power plant and turn it into natural gas for the boiler. "There are already thousands, perhaps millions, of organisms on our planet that know how to do this," Venter said. Although none of those species may be suited for life in a power plant, engineers could borrow their genetic circuits for new creations. "We also have biological systems under construction that are trying to produce hydrogen directly from sunlight, using photosynthesis," he added.
Steven Chu, director of Lawrence Berkeley National Laboratory, announced that his lab is readying a proposal for a major project to harness the power of the sun and turn it into fuels for transportation. With the tools of genetic engineering, Chu explained, "we can work on modifying plants and algae to make them self-fertilizing and resistant to drought and pests." The novel crops would offer high yields of cellulose, which man-made microbes could then convert to fuels. Chu expects biological processing to be far more efficient than the energy-intensive processes, such as steam explosion and thermal hydrolysis, currently used to make ethanol.
With oil prices approaching $80 a barrel, bio-processing may not have to wait for life-forms built from scratch. GreenFuel in Cambridge, Mass., has installed algae farms at power plants to convert up to 40 percent of the CO2 they spew into raw material for biofuels. The company claims that a large algae farm next to a 1-GW plant could yield 50 million gallons a year of ethanol. "There are great opportunities here," Chu avers. "And not only that—it will help save the world."
The Nuclear Option
Nuclear power supplies a sixth of the world's electricity. Along with hydropower (which supplies slightly more than a sixth), it is the major source of "carbon-free" energy today. The technology suffered growing pains, seared into the public's mind by the Chernobyl and Three Mile Island accidents, but plants have demonstrated remarkable reliability and efficiency recently. The world's ample supply of uranium could fuel a much larger fleet of reactors than exists today throughout their 40- to 50-year life span.
With growing worries about global warming and the associated likelihood that greenhouse gas emissions will be regulated in some fashion, it is not surprising that governments and power providers in the U.S. and elsewhere are increasingly considering building a substantial number of additional nuclear power plants. The fossil-fuel alternatives have their drawbacks. Natural gas is attractive in a carbon-constrained world because it has lower carbon content relative to other fossil fuels and because advanced power plants have low capital costs. But the cost of the electricity produced is very sensitive to natural gas prices, which have become much higher and more volatile in recent years. In contrast, coal prices are relatively low and stable, but coal is the most carbon-intensive source of electricity. The capture and sequestration of carbon dioxide, which will add significantly to the cost, must be demonstrated and introduced on a large scale if coal-powered electricity is to expand significantly without emitting unacceptable quantities of carbon into the atmosphere. These concerns raise doubts about new investments in gas- or coal-powered plants.
All of which points to a possible nuclear revival. And indeed, more than 20,000 megawatts of nuclear capacity have come online globally since 2000, mostly in the Far East. Yet despite the evident interest among major nuclear operators, no firm orders have been placed in the U.S. Key impediments to new nuclear construction are high capital costs and the uncertainty surrounding nuclear waste management. In addition, global expansion of nuclear power has raised concerns that nuclear weapons ambitions in certain countries may inadvertently be advanced.
In 2003 we co-chaired a major Massachusetts Institute of Technology study, The Future of Nuclear Power, that analyzed what would be required to retain the nuclear option. That study described a scenario whereby worldwide nuclear power generation could triple to one million megawatts by the year 2050, saving the globe from emissions of between 0.8 billion and 1.8 billion tons of carbon a year, depending on whether gas- or coal-powered plants were displaced. At this scale, nuclear power would significantly contribute to the stabilization of greenhouse gas emissions, which requires about seven billion tons of carbon to be averted annually by 2050 (see "A Plan to Keep Carbon in Check").
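The 0.8-to-1.8-billion-ton range can be roughly reconstructed from the scenario's size. The capacity factor and carbon intensities below are typical round numbers, not values taken from the study, so the sketch only brackets the quoted range:

```python
# Rough reconstruction of the 0.8-1.8 billion-ton savings range.
# Assumed (typical values, not from the study): an 85% capacity factor
# for the nuclear fleet, and carbon intensities of ~0.25 kg C/kWh for
# coal-fired plants and ~0.10 kg C/kWh for gas-fired plants.
capacity_mw = 1_000_000                 # one terawatt of nuclear
kwh_per_year = capacity_mw * 1000 * 8760 * 0.85

for fuel, kg_c_per_kwh in [("gas", 0.10), ("coal", 0.25)]:
    gt_carbon = kwh_per_year * kg_c_per_kwh / 1e12
    print(f"displacing {fuel}: ~{gt_carbon:.1f} billion tons C/year")
# → ~0.7 (gas) and ~1.9 (coal), bracketing the quoted 0.8-1.8 range.
```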
The Fuel Cycle
If nuclear power is to expand by such an extent, what kind of nuclear plants should be built? A chief consideration is the fuel cycle, which can be either open or closed. In an open fuel cycle, also known as a once-through cycle, the uranium is "burned" once in a reactor, and spent fuel is stored in geologic repositories. The spent fuel includes plutonium that could be chemically extracted and turned into fuel for use in another nuclear plant. Doing that results in a closed fuel cycle, which some people advocate.
Some countries, most notably France, currently use a closed fuel cycle in which plutonium is separated from the spent fuel and a mixture of plutonium and uranium oxides is subsequently burned again. A longer-term option could involve recycling all the transuranics (plutonium is one example of a transuranic element), perhaps in a so-called fast reactor. In this approach, nearly all the very long-lived components of the waste are eliminated, thereby transforming the nuclear waste debate. Substantial research and development is needed, however, to work through daunting technical and economic challenges to making this scheme work.
Recycling waste for reuse in a closed cycle might seem like a no-brainer: less raw material is used for the same total power output, and the problem of long-term storage of waste is alleviated because a smaller amount of radioactive material must be stored for many thousands of years. Nevertheless, we believe that an open cycle is to be preferred over the next several decades. First, the recycled fuel is more expensive than the original uranium. Second, there appears to be ample uranium at reasonable cost to sustain the tripling in global nuclear power generation that we envisage with a once-through fuel cycle for the entire lifetime of the nuclear fleet (about 40 to 50 years for each plant). Third, the environmental benefit for long-term waste storage is offset by near-term risks to the environment from the complex and highly dangerous reprocessing and fuel-fabrication operations. Finally, the reprocessing that occurs in a closed fuel cycle produces plutonium that can be diverted for use in nuclear weapons.
The type of reactor that will continue to dominate for at least two decades, probably longer, is the light-water reactor, which uses ordinary water (as opposed to heavy water, containing deuterium) as the coolant and moderator. The vast majority of plants in operation in the world today are of this type, making it a mature, well-understood technology.
Reactor designs are divided into generations. The earliest prototype reactors, built in the 1950s and early 1960s, were often one of a kind. Generation II reactors, in contrast, were commercial designs built in large numbers from the late 1960s to the early 1990s. Generation III reactors incorporate design improvements such as better fuel technology and passive safety, meaning that in the case of an accident the reactor shuts itself down without requiring the operators to intervene. The first generation III reactor was built in Japan in 1996. Generation IV reactors are new designs that are currently being researched, such as pebble-bed reactors and lead-cooled fast reactors. In addition, generation III+ reactors are designs similar to generation III but with the advanced features further evolved. With the possible exception of high-temperature gas reactors (the pebble bed is one example), generation IV reactors are several decades away from being candidates for significant commercial deployment. To evaluate our scenario through to 2050, we envisaged the building of generation III+ light-water reactors.
The pebble-bed modular reactor introduces the interesting prospect of modular nuclear plants. Instead of building a massive 1,000-megawatt plant, modules each producing around 100 megawatts can be built. This approach may be particularly attractive, both in developing countries and in deregulated industrial countries, because of the much lower capital costs involved. The traditional large plants do have the advantage of economy of scale, most likely resulting in lower cost per kilowatt of capacity, but this edge could be challenged if efficient factory-style production of large numbers of modules could be implemented. South Africa is scheduled to begin construction of a 110-megawatt demonstration pebble-bed plant in 2007, to be completed by 2011, with commercial modules of about 165 megawatts planned for 2013. The hope is to sell modules internationally, in particular throughout Africa.
Reducing Costs
Based on previous experience, electricity from new nuclear power plants is currently more expensive than that from new coal- or gas-powered plants. The 2003 M.I.T. study estimated that new light-water reactors would produce electricity at a cost of 6.7 cents per kilowatt-hour. That figure includes all the costs of a plant, spread over its life span, and includes items such as an acceptable return to investors. In comparison, under equivalent assumptions we estimated that a new coal plant would produce electricity at a cost of 4.2 cents per kilowatt-hour. For a new gas-powered plant, the cost is very sensitive to the price of natural gas and would be about 5.8 cents per kilowatt-hour for today's high gas prices (about $7 per million Btu).
Some people will be skeptical about how well the cost of nuclear power can be estimated, given past over-optimism, going back to claims in the early days that nuclear power would be "too cheap to meter." But the M.I.T. analysis is grounded in past experience and actual performance of existing plants, not in promises from the nuclear industry. Some might also question the uncertainties inherent in such cost projections. The important point is that the estimates place the three alternatives—nuclear, coal and gas—on a level playing field, and there is no reason to expect unanticipated contingencies to favor one over the other. Furthermore, when utilities are deciding what kind of power plant to build, they will base their decisions on such estimates.
Several steps could reduce the cost of the nuclear option below our baseline figure of 6.7 cents per kilowatt-hour. A 25 percent reduction in construction expenses would bring the cost of electricity down to 5.5 cents per kilowatt-hour. Reducing the construction time of a plant from five to four years and improvements in operation and maintenance can shave off a further 0.4 cent per kilowatt-hour. How any plant is financed can depend dramatically on what regulations govern the plant site. Reducing the cost of capital for a nuclear plant to be the same as for a gas or coal plant would close the gap with coal (4.2 cents per kilowatt-hour). All these reductions in the cost of nuclear power are plausible—particularly if the industry builds a large number of just a few standardized designs—but not yet proved.
Nuclear power becomes distinctly favored economically if carbon emissions are priced. We will refer to this as a carbon tax, but the pricing mechanism need not be in the form of a tax. Europe has a system in which permits to emit carbon are traded on an open market. In early 2006 permits were selling for more than $100 per tonne of carbon emitted (or $27 per tonne of carbon dioxide), although recently their price has fallen to about half that. (A metric unit, one tonne is equal to 1.1 U.S. tons.) A tax of only $50 per tonne of carbon raises coal-powered electricity to 5.4 cents per kilowatt-hour. At $200 per tonne of carbon, coal reaches a whopping 9.0 cents per kilowatt-hour. Gas fares much better than coal, increasing to 7.9 cents per kilowatt-hour under a $200 tax. Fossil-fuel plants could avoid the putative carbon tax by capturing and sequestering the carbon, but the cost of doing that contributes in the same way that a tax would (see "Can We Bury Global Warming?").
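The tax figures above are internally consistent, and they let one back out the carbon intensity each fuel was implicitly assigned; a one-line calculation per fuel:

```python
# Carbon intensities implied by the carbon-tax arithmetic above.
def implied_kg_c_per_kwh(tax_usd_per_tonne_c, added_cents_per_kwh):
    # (dollars added per kWh) / (dollars per tonne C), in kg C per kWh
    return (added_cents_per_kwh / 100) / tax_usd_per_tonne_c * 1000

print(round(implied_kg_c_per_kwh(200, 9.0 - 4.2), 3))  # coal: 0.24
print(round(implied_kg_c_per_kwh(200, 7.9 - 5.8), 3))  # gas: 0.105

# The carbon/CO2 price conversion uses the molecular-weight ratio 44/12:
print(round(100 / (44 / 12), 2))                       # $100/tC ≈ $27.27/tCO2
```

Coal-fired electricity thus carries more than twice the carbon of gas-fired electricity per kilowatt-hour, which is why a carbon price punishes coal so much harder.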
Because many years have passed since construction of a nuclear plant was last undertaken in the U.S., the companies that build the first few new plants will face extra expenses that subsequent operators will not have to bear, along with additional risk in working through a new licensing process. To help overcome that hurdle, the Energy Policy Act of 2005 included a number of important provisions, such as a tax credit of 1.8 cents per kilowatt-hour to new nuclear plants for their first eight years of operation. The credit, sometimes called a first-mover incentive, applies to the first 6,000 megawatts of new plants to come online. Several consortiums have formed to take advantage of the new incentives.
Waste Management
The second big obstacle that a nuclear renaissance faces is the problem of waste management. No country in the world has yet implemented a system for permanently disposing of the spent fuel and other radioactive waste produced by nuclear power plants. The most widely favored approach is geologic disposal, in which waste is stored in chambers hundreds of meters underground. The goal is to prevent leakage of the waste for many millennia through a combination of engineered barriers (for example, the waste containers) and geologic ones (the natural rock structure where the chamber has been excavated and the favorable characteristics of the hydrogeologic basin). Decades of studies support the geologic disposal option. Scientists have a good understanding of the processes and events that could transport radionuclides from the repository to the biosphere. Despite this scientific confidence, the process of approving a geologic site remains fraught with difficulties.
A prime case in point is the proposed facility at Yucca Mountain in Nevada, which has been under consideration for two decades. Recently the site was found to have considerably more water than anticipated. It remains uncertain whether the Nuclear Regulatory Commission (NRC) will license the site.
Delays in resolving waste management (even if it is approved, it is unlikely that Yucca Mountain will be accepting waste before 2015) may complicate efforts to construct new power plants. By law, the government was to begin moving spent fuel from reactor sites to a repository by 1998. Failure to do so has led to a need for increased local storage at many sites and associated unhappiness among neighbors, towns and states.
Perhaps the first country to build a permanent storage site for its high-level nuclear waste will be Finland. At Olkiluoto, the location of two nuclear reactors, excavation has begun on an underground research facility called Onkalo. Extending about half a kilometer underground, the Onkalo project will involve study of the rock structure and groundwater flows and will test the disposal technology in actual deep underground conditions. If all goes according to plan and the necessary government licenses are obtained, the first canisters of waste could be emplaced in 2020. By 2130 the repository would be complete, and the access routes would be filled and sealed. The money to pay for the facility has been levied on the price of Finnish nuclear power since the late 1970s.
To address the waste management problem in the U.S., the government should take title to the spent fuel stored at commercial reactor sites across the country and consolidate it at one or more federal interim storage sites until a permanent disposal facility is built. The waste can be temporarily stored safely and securely for an extended period. Such extended temporary storage, perhaps even for as long as 100 years, should be an integral part of the disposal strategy. Among other benefits, it would take the pressure off government and industry to come up with a hasty disposal solution.
Meanwhile the Department of Energy should not abandon Yucca Mountain. Instead it should reassess the suitability of the site under various conditions and modify the project's schedule as needed. If nuclear power expanded globally to one million megawatts, enough high-level waste and spent fuel would be generated in the open fuel cycle to fill a Yucca Mountain-size facility every three and a half years. In the court of public opinion, that fact is a significant disincentive to the expansion of nuclear power, yet it is a problem that can and must be solved.
The Threat of Proliferation
In conjunction with the domestic program of waste management just outlined, the president should continue the diplomatic effort to create an international system of fuel supplier countries and user countries. Supplier countries such as the U.S., Russia, France and the U.K. would sell fresh fuel to user countries with smaller nuclear programs and commit to removing the spent fuel from them. In return, the user countries would forgo the construction of fuel-producing facilities. This arrangement would greatly alleviate the danger of nuclear weapons proliferation because the chief risks for proliferation involve not the nuclear power plants themselves but the fuel enrichment and reprocessing plants. The current situation with Iran's uranium enrichment program is a prime example. A scheme in which fuel is leased to users is a necessity in a world where nuclear power is to expand threefold, because such an expansion will inevitably involve the spread of nuclear power plants to some countries of proliferation concern.
A key to making the approach work is that producing fuel does not make economic sense for small nuclear power programs. This fact underlies the marketplace reality that the world is already divided into supplier and user countries. Instituting the supplier/user model is largely a matter, albeit not a simple one, of formalizing the current situation more permanently through new agreements that reinforce commercial realities.
Although the proposed regime is inherently attractive to user nations—they get an assured supply of cheap fuel and are relieved of the problem of dealing with waste materials—other incentives should also be put in place because the user states would be agreeing to go beyond the requirements of the treaty on the nonproliferation of nuclear weapons. For example, if a global system of tradable carbon credits were instituted, user nations adhering to the fuel-leasing rules could be granted credits for their new nuclear power plants.
Iran is the most obvious example today of a nation that the global community would rather see as a "user state" than as a producer of enriched uranium. But it is not the only difficult case. Another nation whose program must be addressed promptly is Brazil, where an enrichment facility is under construction supposedly to provide fuel for the country's two nuclear reactors. A consistent approach to countries such as Iran and Brazil will be needed if nuclear power is to be expanded globally without exacerbating proliferation concerns.
The Terawatt Future
A terawatt—one million megawatts—of "carbon-free" power is the scale needed to make a significant dent in projected carbon dioxide emissions at mid-century. In the terms used by Socolow and Pacala, that contribution would correspond to one to two of the seven required "stabilization wedges." Reaching a terawatt of nuclear power by 2050 is certainly challenging, requiring deployment of about 2,000 megawatts a month. A capital investment of $2 trillion over several decades is called for, and power plant cost reduction, nuclear waste management and a proliferation-resistant international fuel cycle regime must all be addressed aggressively over the next decade or so. A critical determinant will be the degree to which carbon dioxide emissions from fossil-fuel use are priced, both in the industrial world and in the large emerging economies such as China, India and Brazil.
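The deployment pace and the implied unit cost follow directly from the terawatt target. This sketch assumes a build window beginning around 2006, which is illustrative rather than from the article:

```python
# Back-of-envelope numbers behind a terawatt of nuclear power by 2050.
# The ~44-year build window (2006 start) is an assumed illustration;
# the $2 trillion capital figure is from the article.
target_mw = 1_000_000
build_years = 2050 - 2006                  # assumed build window
mw_per_month = target_mw / (build_years * 12)

capital_cost_total = 2e12                  # $2 trillion
cost_per_kw = capital_cost_total / (target_mw * 1_000)   # dollars per kW

print(f"~{mw_per_month:,.0f} MW per month, ~${cost_per_kw:,.0f} per kW")
```

The result, roughly 1,900 MW a month, matches the article's "about 2,000 megawatts a month," and the implied capital cost of $2,000 per kilowatt is in line with optimistic nuclear construction estimates of that era.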
The economics of nuclear power are not the only factor governing its future use. Public acceptance also turns on issues of safety and nuclear waste, and the future of nuclear power in the U.S. and much of Europe remains in question. Regarding safety, it is essential that NRC regulations be enforced diligently, which has not always been the case.
In the scenario developed as part of the M.I.T. study, it emerged that the U.S. would approximately triple its nuclear deployment—to about 300,000 megawatts—if a terawatt were to be realized globally. The credibility of such a scenario will be largely determined in the forthcoming decade by the degree to which the first-mover incentives in the 2005 Energy Policy Act are exercised, by the capability of the government to start moving spent fuel from reactor sites and by whether the American political process results in a climate change policy that will significantly limit carbon dioxide emissions.
The Rise of Renewable Energy
No plan to substantially reduce greenhouse gas emissions can succeed through increases in energy efficiency alone. Because economic growth continues to boost the demand for energy—more coal for powering new factories, more oil for fueling new cars, more natural gas for heating new homes—carbon emissions will keep climbing despite the introduction of more energy-efficient vehicles, buildings and appliances. To counter the alarming trend of global warming, the U.S. and other countries must make a major commitment to developing renewable energy sources that generate little or no carbon.
Renewable energy technologies were suddenly and briefly fashionable three decades ago in response to the oil embargoes of the 1970s, but the interest and support were not sustained. In recent years, however, dramatic improvements in the performance and affordability of solar cells, wind turbines and biofuels—ethanol and other fuels derived from plants—have paved the way for mass commercialization. In addition to their environmental benefits, renewable sources promise to enhance America's energy security by reducing the country's reliance on fossil fuels from other nations. What is more, high and wildly fluctuating prices for oil and natural gas have made renewable alternatives more appealing.
We are now in an era where the opportunities for renewable energy are unprecedented, making this the ideal time to advance clean power for decades to come. But the endeavor will require a long-term investment of scientific, economic and political resources. Policymakers and ordinary citizens must demand action and challenge one another to hasten the transition.
Let the Sun Shine
Solar cells, also known as photovoltaics, use semiconductor materials to convert sunlight into electric current. They now provide just a tiny slice of the world's electricity: their global generating capacity of 5,000 megawatts (MW) is only 0.15 percent of the total generating capacity from all sources. Yet sunlight could potentially supply 5,000 times as much energy as the world currently consumes. And thanks to technology improvements, cost declines and favorable policies in many states and nations, the annual production of photovoltaics has increased by more than 25 percent a year for the past decade and by a remarkable 45 percent in 2005. The cells manufactured last year added 1,727 MW to worldwide generating capacity, with 833 MW made in Japan, 353 MW in Germany and 153 MW in the U.S.
Solar cells can now be made from a range of materials, from the traditional multi-crystalline silicon wafers that still dominate the market to thin-film silicon cells and devices composed of plastic or organic semiconductors. Thin-film photovoltaics are cheaper to produce than crystalline silicon cells but are also less efficient at turning light into power. In laboratory tests, crystalline cells have achieved efficiencies of 30 percent or more; current commercial cells of this type range from 15 to 20 percent. Both laboratory and commercial efficiencies for all kinds of solar cells have risen steadily in recent years, indicating that an expansion of research efforts would further enhance the performance of solar cells on the market.
Solar photovoltaics are particularly easy to use because they can be installed in so many places—on the roofs or walls of homes and office buildings, in vast arrays in the desert, even sewn into clothing to power portable electronic devices. The state of California has joined Japan and Germany in leading a global push for solar installations; the "Million Solar Roofs" commitment is intended to create 3,000 MW of new generating capacity in the state by 2018. Studies done by my research group, the Renewable and Appropriate Energy Laboratory at the University of California, Berkeley, show that annual production of solar photovoltaics in the U.S. alone could grow to 10,000 MW in just 20 years if current trends continue.
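The 10,000 MW projection follows from compound growth. The 2005 U.S. production figure (153 MW) and the roughly 25 percent annual growth rate come from the article; treating growth as a constant compound rate is a simplifying assumption:

```python
# Compound-growth sketch behind the 20-year U.S. photovoltaics
# projection. Constant-rate growth is an assumed simplification.
us_2005_mw = 153          # U.S. production in 2005, from the article
growth_rate = 0.25        # ~25% annual growth, from the article
years = 20

projected_mw = us_2005_mw * (1 + growth_rate) ** years
print(f"153 MW growing 25%/yr for 20 years -> ~{projected_mw:,.0f} MW")
```

At the recent 25 percent pace the projection lands above 13,000 MW, so reaching 10,000 MW requires only a somewhat slower sustained rate; in that sense the article's figure is conservative.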
The biggest challenge will be lowering the price of the photovoltaics, which are now relatively expensive to manufacture. Electricity produced by crystalline cells has a total cost of 20 to 25 cents per kilowatt-hour, compared with four to six cents for coal-fired electricity, five to seven cents for power produced by burning natural gas, and six to nine cents for biomass power plants. (The cost of nuclear power is harder to pin down because experts disagree on which expenses to include in the analysis; the estimated range is two to 12 cents per kilowatt-hour.) Fortunately, the prices of solar cells have fallen consistently over the past decade, largely because of improvements in manufacturing processes. In Japan, where 290 MW of solar generating capacity were added in 2005 and an even larger amount was exported, the cost of photovoltaics
has declined 8 percent a year; in California, where 50 MW of solar power were installed in 2005, costs have dropped 5 percent annually.
Surprisingly, Kenya is the global leader in the number of solar power systems installed per capita (but not the number of watts added). More than 30,000 very small solar panels, each producing only 12 to 30 watts, are sold in that country annually. For an investment of as little as $100 for the panel and wiring, the system can be used to charge a car battery, which can then provide enough power to run a fluorescent lamp or a small black-and-white television for a few hours a day. More Kenyans adopt solar power every year than make connections to the country's electric grid. The panels typically use solar cells made of amorphous silicon; although these photovoltaics are only half as efficient as crystalline cells, their cost is so much lower (by a factor of at least four) that they are more affordable and useful for the two billion people worldwide who currently have no access to electricity. Sales of small solar power systems are booming in other African nations as well, and advances in low-cost photovoltaic manufacturing could accelerate this trend.
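The "few hours a day" claim is easy to check with an energy budget. The panel size is from the article; the peak-sun-hours figure and the appliance wattages below are assumptions chosen for illustration:

```python
# Illustrative daily energy budget for a small Kenyan solar-home
# system. Only the 12-30 W panel range comes from the article; the
# sun-hours and appliance wattages are assumed values.
panel_w = 20                # mid-range of the 12-30 W panels cited
peak_sun_hours = 5          # assumed for an equatorial location
daily_wh = panel_w * peak_sun_hours      # energy stored in the battery

lamp_w = 11                 # assumed small fluorescent lamp
tv_w = 15                   # assumed black-and-white television
hours_of_use = daily_wh / (lamp_w + tv_w)
print(f"~{daily_wh} Wh/day supports ~{hours_of_use:.1f} h of lamp plus TV")
```

About 100 watt-hours a day supports roughly four hours of combined lamp and television use, consistent with the article's description.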
Furthermore, photovoltaics are not the only fast-growing form of solar power. Solar-thermal systems, which collect sunlight to generate heat, are also undergoing a resurgence. These systems have long been used to provide hot water for homes or factories, but they can also produce electricity without the need for expensive solar cells. In one design, for example, mirrors focus light on a Stirling engine, a high-efficiency device containing a working fluid that circulates between hot and cold chambers. The fluid expands as the sunlight heats it, pushing a piston that, in turn, drives a turbine.
In the fall of 2005 a Phoenix company called Stirling Energy Systems announced it was going to build two large solar-thermal power plants in southern California. The company signed a 20-year power purchase agreement with Southern California Edison, which will buy the electricity from a 500-MW solar plant to be constructed in the Mojave Desert. Stretching across 4,500 acres, the facility will include 20,000 curved dish mirrors, each concentrating light on a Stirling engine about the size of an oil barrel. The plant is expected to begin operating in 2009 and could later be expanded to 850 MW. Stirling Energy Systems also signed a 20-year contract with San Diego Gas & Electric to build a 300-MW, 12,000-dish plant in the Imperial Valley. This facility could eventually be upgraded to 900 MW.
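The plant sizes and dish counts quoted above imply a common building block, which this quick check confirms:

```python
# Per-dish output implied by the two Stirling Energy Systems
# projects; plant capacities and dish counts are from the article.
mojave_kw_per_dish = 500_000 / 20_000      # 500 MW over 20,000 dishes
imperial_kw_per_dish = 300_000 / 12_000    # 300 MW over 12,000 dishes
print(mojave_kw_per_dish, imperial_kw_per_dish)
```

Both projects work out to the same 25-kilowatt dish-engine unit, the standard module the company planned to mass-produce.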
The financial details of the two California projects have not been made public, but electricity produced by present solar-thermal technologies costs between five and 13 cents per kilowatt-hour, with dish-mirror systems at the upper end of that range. Because the projects involve highly reliable technologies and mass production, however, the generation expenses are expected to ultimately drop closer to four to six cents per kilowatt-hour—that is, competitive with the current price of coal-fired power.
Blowing in the Wind
Wind power has been growing at a pace rivaling that of the solar industry. The worldwide generating capacity of wind turbines has increased more than 25 percent a year, on average, for the past decade, reaching nearly 60,000 MW in 2005.
The growth has been nothing short of explosive in Europe— between 1994 and 2005, the installed wind power capacity in European Union nations jumped from 1,700 to 40,000 MW. Germany alone has more than 18,000 MW of capacity thanks to an aggressive construction program. The northern German state of Schleswig-Holstein currently meets one quarter of its annual electricity demand with more than 2,400 wind turbines, and in certain months wind power provides more than half the state's electricity. In addition, Spain has 10,000 MW of wind capacity, Denmark has 3,000 MW, and Great Britain, the Netherlands, Italy and Portugal each have more than 1,000 MW.
In the U.S. the wind power industry has accelerated dramatically in the past five years, with total generating capacity leaping 36 percent to 9,100 MW in 2005. Although wind turbines now produce only 0.5 percent of the nation's electricity, the potential for expansion is enormous, especially in the windy Great Plains states. (North Dakota, for example, has greater wind energy resources than Germany, but only 98 MW of generating capacity is installed there.) If the U.S. constructed enough wind farms to fully tap these resources, these turbines could generate as much as 11 trillion kilowatt-hours of electricity, or nearly three times the total amount produced from all energy sources in the nation last year. The wind industry has developed increasingly large and efficient turbines, each capable of yielding 4 to 6 MW. And in many locations, wind power is the cheapest form of new electricity, with costs ranging from four to seven cents per kilowatt-hour.
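The 0.5 percent figure can be sanity-checked from the installed capacity. The 9,100 MW is from the article; the capacity factor and the total U.S. generation figure below are outside assumptions typical for the period:

```python
# Sanity check: what share of U.S. electricity does 9,100 MW of
# wind supply? Capacity factor and total generation are assumptions.
wind_mw = 9_100                    # installed capacity, from the article
capacity_factor = 0.30             # assumed typical for wind farms
hours_per_year = 8_760

wind_kwh = wind_mw * 1_000 * capacity_factor * hours_per_year
us_total_kwh = 4_000e9             # assumed total U.S. generation (~4,000 TWh)
share = wind_kwh / us_total_kwh
print(f"Wind supplies roughly {share:.1%} of U.S. electricity")
```

The estimate comes out near 0.6 percent, close to the 0.5 percent cited; the gap disappears with a slightly lower capacity factor.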
The growth of new wind farms in the U.S. has been spurred by a production tax credit that provides a modest subsidy per kilowatt-hour, enabling wind turbines to compete with coal-fired plants. Unfortunately, Congress has repeatedly threatened to eliminate the tax credit. Instead of instituting a long-term subsidy for wind power, the lawmakers have extended the tax credit on a year-to-year basis, and the continual uncertainty has slowed investment in wind farms. Congress is also threatening to derail a proposed 130-turbine farm off the coast of Massachusetts that would provide 468 MW of generating capacity, enough to power most of Cape Cod, Martha's Vineyard and Nantucket.
The reservations about wind power come partly from utility companies that are reluctant to embrace the new technology and partly from so-called NIMBY-ism. ("NIMBY" is an acronym for Not in My Backyard.) Although local concerns over how wind turbines will affect landscape views may have some merit, they must be balanced against the social costs of the alternatives. Because society's energy needs are growing relentlessly, rejecting wind farms often means requiring the construction or expansion of fossil fuel-burning power plants that will have far more devastating environmental effects.
Green Fuels
Researchers are also pressing ahead with the development of biofuels that could replace at least a portion of the oil currently consumed by motor vehicles. The most common biofuel by far in the U.S. is ethanol, which is typically made from corn and blended with gasoline. The manufacturers of ethanol benefit from a substantial tax credit: with the help of the $2-billion annual subsidy, they sold more than 16 billion liters of ethanol in 2005 (almost 3 percent of all automobile fuel by volume), and production is expected to rise 50 percent by 2007. Some policymakers have questioned the wisdom of the subsidy, pointing to studies showing that it takes more energy to harvest the corn and refine the ethanol than the fuel can deliver to combustion engines. In a recent analysis, though, my colleagues and I discovered that some of these studies did not properly account for the energy content of the by-products manufactured along with the ethanol. When all the inputs and outputs were correctly factored in, we found that ethanol has a positive net energy of almost five megajoules per liter.
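The net energy figure can be restated as an output-to-input ratio. The 5 MJ/L net energy is from the analysis described above; ethanol's energy content of about 21 MJ/L is an outside assumption (its approximate lower heating value):

```python
# What a net energy of ~5 MJ per liter implies for corn ethanol.
# The ~21 MJ/L energy content is an assumed outside figure.
ethanol_energy_mj_per_l = 21.0     # assumed lower heating value
net_energy_mj_per_l = 5.0          # from the analysis cited above

fossil_input = ethanol_energy_mj_per_l - net_energy_mj_per_l
energy_ratio = ethanol_energy_mj_per_l / fossil_input
print(f"Output/input energy ratio ~{energy_ratio:.2f}")
```

An output-to-input ratio of about 1.3 means corn ethanol is modestly energy-positive, which is why the studies disputing its value, with their different accounting of by-products, could plausibly tip the balance the other way.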
We also found, however, that ethanol's impact on greenhouse gas emissions is more ambiguous. Our best estimates indicate that substituting corn-based ethanol for gasoline reduces greenhouse gas emissions by 18 percent, but the analysis is hampered by large uncertainties regarding certain agricultural practices, particularly the environmental costs of fertilizers. If we use different assumptions about these practices, the results of switching to ethanol range from a 36 percent drop in emissions to a 29 percent increase. Although corn-based ethanol may help the U.S. reduce its reliance on foreign oil, it will probably not do much to slow global warming unless the production of the biofuel becomes cleaner.
But the calculations change substantially when the ethanol is made from cellulosic sources: woody plants such as switchgrass or poplar. Whereas most makers of corn-based ethanol burn fossil fuels to provide the heat for fermentation, the producers of cellulosic ethanol burn lignin—an unfermentable part of the organic material—to heat the plant sugars. Burning lignin does not add any greenhouse gases to the atmosphere, because the emissions are offset by the carbon dioxide absorbed during the growth of the plants used to make the ethanol. As a result, substituting cellulosic ethanol for gasoline can slash greenhouse gas emissions by 90 percent or more.
Another promising biofuel is so-called green diesel. Researchers have produced this fuel by first gasifying biomass—heating organic materials enough that they release hydrogen and carbon monoxide—and then converting these compounds into long-chain hydrocarbons using the Fischer-Tropsch process. (During World War II, German engineers employed these chemical reactions to make synthetic motor fuels out of coal.) The result would be an economically competitive liquid fuel for motor vehicles that would add virtually no greenhouse gases to the atmosphere. Oil giant Royal Dutch/Shell is currently investigating the technology.
The Need for R&D
Each of these renewable sources is now at or near a tipping point, the crucial stage when investment and innovation, as well as market access, could enable these attractive but generally marginal providers to become major contributors to regional and global energy supplies. At the same time, aggressive policies designed to open markets for renewables are taking hold at city, state and federal levels around the world. Governments have adopted these policies for a wide variety of reasons: to promote market diversity or energy security, to bolster industries and jobs, and to protect the environment on both the local and global scales. In the U.S. more than 20 states have adopted standards setting a minimum for the fraction of electricity that must be supplied with renewable sources. Germany plans to generate 20 percent of its electricity from renewables by 2020, and Sweden intends to give up fossil fuels entirely.
Even President George W. Bush said, in his now famous State of the Union address this past January, that the U.S. is "addicted to oil." And although Bush did not make the link to global warming, nearly all scientists agree that humanity's addiction to fossil fuels is disrupting the earth's climate. The time for action is now, and at last the tools exist to alter energy production and consumption in ways that simultaneously benefit the economy and the environment. Over the past 25 years, however, the public and private funding of research and development in the energy sector has withered. Between 1980 and 2005 the fraction of all U.S. R&D spending devoted to energy declined from 10 to 2 percent. Annual public R&D funding for energy sank from $8 billion to $3 billion (in 2002 dollars); private R&D plummeted from $4 billion to $1 billion.
To put these declines in perspective, consider that in the early 1980s energy companies were investing more in R&D than were drug companies, whereas today investment by energy firms is an order of magnitude lower. Total private R&D funding for the entire energy sector is less than that of a single large biotech company. (Amgen, for example, had R&D expenses of $2.3 billion in 2005.) And as R&D spending dwindles, so does innovation. For instance, as R&D funding for photovoltaics and wind power has slipped over the past quarter of a century, the number of successful patent applications in these fields has fallen accordingly. The lack of attention to long-term research and planning has significantly weakened our nation's ability to respond to the challenges of climate change and disruptions in energy supplies.
Calls for major new commitments to energy R&D have become common. A 1997 study by the President's Committee of Advisors on Science and Technology and a 2004 report by the bipartisan National Commission on Energy Policy both recommended that the federal government double its R&D spending on energy. But would such an expansion be enough? Probably not. Based on assessments of the cost to stabilize the amount of carbon dioxide in the atmosphere and other studies that estimate the success of energy R&D programs and the resulting savings from the technologies that would emerge, my research group has calculated that public funding of $15 billion to $30 billion a year would be required—a fivefold to 10-fold increase over current levels.
Greg F. Nemet, a doctoral student in my laboratory, and I found that an increase of this magnitude would be roughly comparable to those that occurred during previous federal R&D initiatives such as the Manhattan Project and the Apollo program, each of which produced demonstrable economic benefits in addition to meeting its objectives. American energy companies could also boost their R&D spending by a factor of 10, and it would still be below the average for U.S. industry overall. Although government funding is essential to supporting early-stage technologies, private-sector R&D is the key to winnowing the best ideas and reducing the barriers to commercialization.
Raising R&D spending, though, is not the only way to make clean energy a national priority. Educators at all grade levels, from kindergarten to college, can stimulate public interest and activism by teaching how energy use and production affect the social and natural environment. Nonprofit organizations can establish a series of contests that would reward the first company or private group to achieve a challenging and worthwhile energy goal, such as constructing a building or appliance that can generate its own power or developing a commercial vehicle that can go 200 miles on a single gallon of fuel. The contests could be modeled after the Ashoka awards for pioneers in public policy and the Ansari X Prize for the developers of space vehicles. Scientists and entrepreneurs should also focus on finding clean, affordable ways to meet the energy needs of people in the developing world. My colleagues and I, for instance, recently detailed the environmental benefits of improving cooking stoves in Africa.
But perhaps the most important step toward creating a sustainable energy economy is to institute market-based schemes to make the prices of carbon fuels reflect their social cost. The use of coal, oil and natural gas imposes a huge collective toll on society, in the form of health care expenditures for ailments caused by air pollution, military spending to secure oil supplies, environmental damage from mining operations, and the potentially devastating economic impacts of global warming. A fee on carbon emissions would provide a simple, logical and transparent method to reward renewable, clean energy sources over those that harm the economy and the environment. The tax revenues could pay for some of the social costs of carbon emissions, and a portion could be designated to compensate low-income families who spend a larger share of their income on energy. Furthermore, the carbon fee could be combined with a cap-and-trade program that would set limits on carbon emissions but also allow the cleanest energy suppliers to sell permits to their dirtier competitors. The federal government has used such programs with great success to curb other pollutants, and several northeastern states are already experimenting with greenhouse gas emissions trading.
Best of all, these steps would give energy companies an enormous financial incentive to advance the development and commercialization of renewable energy sources. In essence, the U.S. has the opportunity to foster an entirely new industry. The threat of climate change can be a rallying cry for a clean-technology revolution that would strengthen the country's manufacturing base, create thousands of jobs and alleviate our international trade deficits—instead of importing foreign oil, we can export high-efficiency vehicles, appliances, wind turbines and photovoltaics. This transformation can turn the nation's energy sector into something that was once deemed impossible: a vibrant, environmentally sustainable engine of growth.
What to Do about Coal
More than most people realize, dealing with climate change means addressing the problems posed by emissions from coal-fired power plants. Unless humanity takes prompt action to strictly limit the amount of carbon dioxide (CO2) released into the atmosphere when consuming coal to make electricity, we have little chance of gaining control over global warming.
Coal—the fuel that powered the Industrial Revolution—is a particularly worrisome source of energy, in part because burning it produces considerably more carbon dioxide per unit of electricity generated than burning either oil or natural gas does. In addition, coal is cheap and will remain abundant long after oil and natural gas have become very scarce. With coal plentiful and inexpensive, its use is burgeoning in the U.S. and elsewhere and is expected to continue rising in areas with
abundant coal resources. Indeed, U.S. power providers are expected to build the equivalent of nearly 280 500-megawatt, coal-fired electricity plants between 2003 and 2030. Meanwhile China is already constructing the equivalent of one large coal-fueled power station a week. Over their roughly 60-year life spans, the new generating facilities in operation by 2030 could collectively introduce into the atmosphere about as much carbon dioxide as was released by all the coal burned since the dawn of the Industrial Revolution.
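The scale of the U.S. buildout alone can be estimated from the figures above. Plant count, unit size and life span come from the article; the capacity factor and emission rate are assumed round numbers for a conventional pulverized-coal plant:

```python
# Rough lifetime CO2 emissions of the projected new U.S. coal fleet.
# The 280 plants, 500 MW size and 60-year life are from the article;
# the capacity factor and emission rate are assumed values.
plants = 280
mw_each = 500
capacity_factor = 0.75             # assumed utilization
co2_t_per_mwh = 0.9                # assumed for pulverized coal
life_years = 60

mwh_per_year = plants * mw_each * capacity_factor * 8_760
lifetime_co2_gt = mwh_per_year * co2_t_per_mwh * life_years / 1e9
print(f"~{lifetime_co2_gt:.0f} Gt of CO2 over 60 years (U.S. additions alone)")
```

Under these assumptions the new U.S. plants alone would emit on the order of 50 gigatons of CO2 over their lifetimes; adding China's one-large-plant-a-week construction pace makes clear how the global 2030 fleet could rival cumulative historical coal emissions.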
Coal's projected popularity is disturbing not only for those concerned about climate change but also for those worried about other aspects of the environment and about human health and safety. Coal's market price may be low, but the true costs of its extraction, processing and consumption are high. Coal use can lead to a range of harmful consequences, including decapitated mountains, air pollution from acidic and toxic emissions, and water fouled with coal wastes. Extraction also endangers and can kill miners. Together such effects make coal production and conversion to useful energy one of the most destructive activities on the planet.
In keeping with Scientific American's focus on climate concerns in this issue, we will concentrate below on methods that can help prevent CO2 generated during coal conversion from reaching the atmosphere. It goes without saying that the environmental, safety and health effects of coal production and use must be reduced as well. Fortunately, affordable techniques for addressing CO2 emissions and these other problems already exist, although the will to implement them quickly still lags significantly.
Geologic Storage Strategy
The techniques that power providers could apply to keep most of the carbon dioxide they produce from entering the air are collectively called CO2 capture and storage (CCS) or geologic carbon sequestration. These procedures involve separating out much of the CO2 that is created when coal is converted to useful energy and transporting it to sites where it can be stored deep underground in porous media—mainly in depleted oil or gas fields or in saline formations (permeable geologic strata filled with salty water) (see "Can We Bury Global Warming?" by Robert H. Socolow).
All the technological components needed for CCS at coal conversion plants are commercially ready—having been proved in applications unrelated to climate change mitigation, although integrated systems have not yet been constructed at the necessary scales. Capture technologies have been deployed extensively throughout the world both in the manufacture of chemicals (such as fertilizer) and in the purification of natural gas supplies contaminated with carbon dioxide and hydrogen sulfide ("sour gas"). Industry has gained considerable experience with CO2 storage in operations that purify natural gas (mainly in Canada) as well as with CO2 injection to boost oil production (primarily in the U.S.). Enhanced oil recovery processes account for most of the CO2 that has been sent into underground reservoirs. Currently about 35 million metric tons are injected annually to coax more petroleum out of mature fields, accounting for about 4 percent of U.S. crude oil output.
Implementing CCS at coal-consuming plants is imperative if the carbon dioxide concentration in the atmosphere is to be kept at an acceptable level. The 1992 United Nations Framework Convention on Climate Change calls for stabilizing the atmospheric CO2 concentration at a "safe" level, but it does not specify what the maximum value should be. The current view of many scientists is that atmospheric CO2 levels must be kept below 450 parts per million by volume (ppmv) to avoid unacceptable climate changes. Realization of this aggressive goal requires that the power industry start commercial-scale CCS projects within the next few years and expand them rapidly thereafter. This stabilization benchmark cannot be realized by CCS alone but can plausibly be achieved if it is combined with other eco-friendly measures, such as wide improvements in energy efficiency and much expanded use of renewable energy sources.
The Intergovernmental Panel on Climate Change (IPCC) estimated in 2005 that it is highly probable that geologic media worldwide are capable of sequestering at least two trillion metric tons of CO2—more than is likely to be produced by fossil-fuel-consuming plants during the 21st century. Society will want to be sure, however, that potential sequestration sites are evaluated carefully for their ability to retain CO2 before they are allowed to operate. Two classes of risks are of concern: sudden escape and gradual leakage.
Rapid outflow of large amounts of CO2 could be lethal to those in the vicinity. Dangerous sudden releases—such as that which occurred in 1986 at Lake Nyos in Cameroon, when CO2 of volcanic origin asphyxiated 1,700 nearby villagers and thousands of cattle—are improbable for engineered CO2 storage projects in carefully selected, deep porous geologic formations, according to the IPCC.
Gradual seepage of carbon dioxide into the air is also an issue, because over time it could defeat the goal of CCS. The 2005 IPCC report estimated that the fraction retained in appropriately selected and managed geologic reservoirs is very likely to exceed 99 percent over 100 years and likely to exceed 99 percent over 1,000 years. What remains to be demonstrated is whether in practice operators can routinely keep CO2 leaks to levels that avoid unacceptable environmental and public health risks.
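The IPCC retention estimates can be translated into an annual leak rate. Assuming, for illustration, a constant fractional loss each year:

```python
# Annual leak rate implied by "99 percent retained over 1,000 years,"
# under the assumed simplification of a constant fractional loss.
retained_fraction = 0.99
years = 1_000

annual_leak = 1 - retained_fraction ** (1 / years)
print(f"Equivalent annual leak rate: {annual_leak:.6%}")
```

The implied loss is roughly one part in 100,000 per year, a benchmark against which actual monitored storage projects can be judged.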
Technology Choices
Design studies indicate that existing power generation technologies could capture from 85 to 95 percent of the carbon in coal as CO2, with the rest released to the atmosphere.
The coal conversion technologies that come to dominate will be those that can meet the objectives of climate change mitigation at the least cost. Fundamentally different approaches to CCS would be pursued for power plants using the conventional pulverized-coal steam cycle and the newer integrated gasification combined cycle (IGCC). Although today's coal IGCC power (with CO2 venting) is slightly more expensive than coal steam-electric power, IGCC appears to be the most effective and least expensive option for CCS.
Standard plants burn coal in a boiler at atmospheric pressure. The heat generated in coal combustion transforms water into steam, which turns a steam turbine, whose mechanical energy is converted to electricity by a generator. In modern plants the gases produced by combustion (flue gases) then pass through devices that remove particulates and oxides of sulfur and nitrogen before being exhausted via smokestacks into the air.
Carbon dioxide could be extracted from the flue gases of such steam-electric plants after the removal of conventional pollutants. Because the flue gases contain substantial amounts of nitrogen (the result of burning coal in air, which is about 80 percent nitrogen), the carbon dioxide would be recovered at low concentration and pressure—which implies that the CO2 would have to be removed from large volumes of gas using processes that are both energy-intensive and expensive. The captured CO2 would then be compressed and piped to an appropriate storage site.
In an IGCC system coal is not burned but rather partially oxidized (reacted with limited quantities of oxygen from an air separation plant, and with steam) at high pressure in a gasifier. The product of gasification is so-called synthesis gas, or syngas, which is composed mostly of carbon monoxide and hydrogen, undiluted with nitrogen. In current practice, IGCC operations remove most conventional pollutants from the syngas and then burn it to turn both gas and steam turbine generators in what is called a combined cycle.
In an IGCC plant designed to capture CO2, the syngas exiting the gasifier, after being cooled and cleaned of particles, would be reacted with steam to produce a gaseous mixture made up mainly of carbon dioxide and hydrogen. The CO2 would then be extracted, dried, compressed and transported to a storage site. The remaining hydrogen-rich gas would be burned in a combined cycle plant to generate power.
Analyses indicate that carbon dioxide capture at IGCC plants consuming high-quality bituminous coals would entail significantly smaller energy and cost penalties and lower total generation costs than what could be achieved in conventional coal plants that captured and stored CO2. Gasification systems recover CO2 from a gaseous stream at high concentration and pressure, a feature that makes the process much easier than it would be in conventional steam facilities. (The extent of the benefits is less clear for lower-grade subbituminous coals and lignites, which have received much less study.) Pre-combustion removal of conventional pollutants, including mercury, makes it feasible to realize very low levels of emissions at much reduced costs and with much smaller energy penalties than with cleanup systems for flue gases in conventional plants.
Captured carbon dioxide can be transported by pipeline as far as several hundred kilometers to suitable geologic storage sites; the pressure imparted during capture is typically sufficient both for transport and for subterranean injection. Longer distances may, however, require recompression to compensate for friction losses during pipeline transfer.
Overall, pursuing CCS for coal power facilities requires the consumption of more coal to generate a kilowatt-hour of electricity than when CO2 is vented—about 30 percent extra in the case of coal steam-electric plants and less than 20 percent more for IGCC plants. But overall coal use would not necessarily increase, because the higher price of coal-based electricity resulting from adding CCS equipment would dampen demand for coal-based electricity, making renewable energy sources and energy-efficient products more desirable to consumers.
The cost of CCS will depend on the type of power plant, the distance to the storage site, the properties of the storage reservoir and the availability of opportunities (such as enhanced oil recovery) for selling the captured CO2. A recent study co-authored by one of us (Williams) estimated the incremental electric generation costs of two alternative CCS options for coal IGCC plants under typical production, transport and storage conditions. For CO2 sequestration in a saline formation 100 kilometers from a power plant, the study calculated that the incremental cost of CCS would be 1.9 cents per kilowatt-hour (beyond the generation cost of 4.7 cents per kilowatt-hour for a coal IGCC plant that vents CO2—a 40 percent premium). For CCS pursued in conjunction with enhanced oil recovery at a distance of 100 kilometers from the conversion plant, the analysis finds no increase in net generation cost would occur as long as the oil price is at least $35 per barrel, which is much lower than current prices.
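The arithmetic behind that 40 percent premium can be checked directly. A quick sketch, using only the two cost figures quoted above:

```python
# Rough check of the CCS cost figures quoted in the text.
base_cost = 4.7        # cents/kWh, coal IGCC plant that vents CO2
ccs_increment = 1.9    # cents/kWh, added cost of capture, transport, storage

premium = ccs_increment / base_cost
print(f"CCS premium: {premium:.0%}")                     # ~40 percent
print(f"Total cost: {base_cost + ccs_increment:.1f} cents/kWh")
```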
CCS Now or Later?
Many electricity producers in the industrial world recognize that environmental concerns will at some point force them to implement CCS if they are to continue to employ coal. But rather than building plants that actually capture and store carbon dioxide, most plan to construct conventional steam facilities they claim will be "CO2 capture ready"—convertible when CCS is mandated.
Power providers often defend those decisions by noting that the U.S. and most other countries with coal-intensive energy economies have not yet instituted policies for climate change mitigation that would make CCS cost-effective for uses not associated with enhanced oil recovery. Absent revenues from sales to oil field operators, applying CCS to new coal plants using current technology would be the least-cost path only if the cost of emitting CO2 were at least $25 to $30 per metric ton. Many current policy proposals for climate change mitigation in the U.S. envision significantly lower cost penalties to power providers for releasing CO2 (or similarly, payments for CO2 emissions-reduction credits).
Yet delaying CCS at coal power plants until economy-wide carbon dioxide control costs are greater than CCS costs is shortsighted. For several reasons, the coal and power industries and society would ultimately benefit if deployment of plants fitted with CCS equipment were begun now.
First, the fastest way to reduce CCS costs is via "learning by doing"—the accumulation of experience in building and running such plants. The sooner that experience begins to accumulate, the faster know-how with the new technology will grow, and the more rapidly the costs will drop.
Second, installing CCS equipment as soon as possible should save money in the long run. Most power stations currently under construction will still be operating decades from now, when it is likely that CCS efforts will be obligatory.
Retrofitting generating facilities for CCS is inherently more expensive than deploying CCS in new plants. Moreover, in the absence of CO2 emission limits, familiar conventional coal steam-electric technologies will tend to be favored for most new plant construction over newer gasification technologies, for which CCS is more cost-effective.
Finally, rapid implementation would allow for continued use of fossil fuels in the near term (until more environmentally friendly sources become prevalent) without pushing atmospheric carbon dioxide beyond tolerable levels. Our studies indicate that it is feasible to stabilize atmospheric CO2 levels at 450 ppmv over the next half a century if coal-based energy is completely decarbonized and other measures described in the box at the left are implemented. This effort would involve decarbonizing 36 gigawatts of new coal generating capacity by 2020 (corresponding to 7 percent of the new coal capacity expected to be built worldwide during the decade beginning in 2011 under business-as-usual conditions). In the 35 years after 2020, CO2 capture would need to rise at an average rate of about 12 percent a year. Such a sustained pace is high compared with typical market growth rates for energy but is not unprecedented. It is much less than the expansion rate for nuclear generating capacity in its heyday—1956 to 1980—during which global capacity rose at an average rate of 40 percent annually. Further, the expansion rates for both wind and solar photovoltaic power capacities worldwide have hovered around 30 percent a year since the early 1990s. In all three cases, such growth would not have been practical without supportive public policy measures.
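The scale implied by a sustained 12 percent growth rate follows from a simple compound-growth calculation. The 36-gigawatt starting point and the 35-year horizon are from the text; treating the growth as perfectly uniform is our simplification:

```python
# Compound growth of decarbonized coal capacity, per the figures above.
capacity_gw = 36.0      # decarbonized coal capacity in 2020 (from the text)
growth_rate = 0.12      # average annual growth rate, 2020-2055
years = 35

final_gw = capacity_gw * (1 + growth_rate) ** years
print(f"Implied decarbonized capacity in 2055: {final_gw:,.0f} GW")
```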
Our calculations indicate that the costs of CCS deployment would be manageable as well. Using conservative assumptions—such as that technology will not improve over time—we estimate that the present worth of the cost of capturing and storing all CO2 produced by coal-based electricity generation plants during the next 200 years will be $1.8 trillion (in 2002 dollars). That might seem like a high price tag, but it is equivalent to just 0.07 percent of the current value of gross world product over the same interval. Thus, it is plausible that a rapid decarbonization path for coal is both physically and economically feasible, although detailed regional analyses are needed to confirm this conclusion.
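The comparison of the $1.8 trillion figure against gross world product uses only the two numbers quoted above; the implied present worth of gross world product is simply their ratio:

```python
ccs_cost_trillions = 1.8   # present worth of 200 years of CCS (2002 dollars)
share_of_gwp = 0.0007      # 0.07 percent, from the text

implied_gwp = ccs_cost_trillions / share_of_gwp
print(f"Implied present worth of gross world product: "
      f"${implied_gwp:,.0f} trillion")
```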
Policy Push Is Needed
Those good reasons for commencing concerted CCS efforts soon will probably not move the industry unless it is also prodded by new public policies. Such initiatives would be part of a broader drive to control carbon dioxide emissions from all sources.
In the U.S., a national program to limit CO2 emissions must be enacted soon to introduce the government regulations and market incentives necessary to shift investment to the least-polluting energy technologies promptly and on a wide scale. Leaders in the American business and policy communities increasingly agree that quantifiable and enforceable restrictions on global warming emissions are imperative and inevitable. To ensure that power companies put into practice the reductions in a cost-effective fashion, a market for trading CO2 emissions credits should be created—one similar to that for the sulfur emissions that cause acid rain. In such a plan, organizations that intend to exceed designated emission limits may buy credits from others that are able to stay below these values.
Enhancing energy efficiency efforts and raising renewable energy production are critical to achieving carbon dioxide limits at the lowest possible cost. A portion of the emission allowances created by a carbon cap-and-trade program should be allocated to the establishment of a fund to help overcome institutional barriers and technical risks that obstruct widespread deployment of otherwise cost-effective CO2 mitigation technologies.
Even if a carbon dioxide cap-and-trade program were enacted in the next few years, the economic value of CO2 emissions reduction may not be enough initially to convince power providers to invest in power systems with CCS. To avoid the construction of another generation of conventional coal plants, it is essential that the federal government establish incentives that promote CCS.
One approach would be to insist that an increasing share of total coal-based electricity generation comes from facilities that meet a low CO2 emissions standard—perhaps a maximum of 30 grams of carbon per kilowatt-hour (an achievable goal using today's coal CCS technologies). Such a goal might be achieved by obliging electricity producers that use coal to include a growing fraction of decarbonized coal power in their supply portfolios. Each covered electricity producer could either generate the required amount of decarbonized coal power or purchase decarbonized-generation credits. This system would share the incremental costs of CCS for coal power among all U.S. coal-based electricity producers and consumers.
If the surge of conventional coal-fired power plants currently on drawing boards is built as planned, atmospheric carbon dioxide levels will almost certainly exceed 450 ppmv. We can meet global energy needs while still stabilizing CO2 at 450 ppmv, however, through a combination of improved efficiency in energy use, greater reliance on renewable energy resources and, for the new coal investments that are made, the installation of CO2 capture and geologic storage technologies. Even though there is no such thing as "clean coal," more can and must be done to reduce the dangers and environmental degradations associated with coal production and use. An integrated low-carbon energy strategy that incorporates CO2 capture and storage can reconcile substantial use of coal in the coming decades with the imperative to prevent catastrophic changes to the earth's climate.
High Hopes for Hydrogen
Developing cleaner power sources for transportation is perhaps the trickiest piece of the energy puzzle. The difficulty stems from two discouraging facts. First, the number of vehicles worldwide, now 750 million, is expected to triple by 2050, thanks largely to the expanding buying power of customers in China, India and other rapidly developing countries. And second, 97 percent of transportation fuel currently comes from crude oil.
In the near term, improving fuel economy is the best way to slow the rise in oil use and greenhouse gas emissions from cars and trucks. But even if automakers triple the efficiency of their fleets and governments support mass transit and smart-growth strategies that lessen the public's reliance on cars, the explosive growth in the number of vehicles around the world will severely limit any reductions in oil consumption and carbon dioxide emissions. To make deeper cuts, the transportation sector needs to switch to low-carbon, non-petroleum fuels. Liquid fuels derived from woody plants or synthesized from tar sands or coal may play important roles. Over the long term, however, the most feasible ways to power vehicles with high efficiency and zero emissions are through connections to the electric grid or the use of hydrogen as a transportation fuel.
Unfortunately, the commercialization of electric vehicles has been stymied by a daunting obstacle: even large arrays of batteries cannot store enough charge to keep cars running for distances comparable to those of gasoline-powered cars. For this reason, most auto companies have abandoned the technology. In contrast, fuel-cell vehicles—which combine hydrogen fuel and oxygen from the air to generate the power to run electric motors—face fewer technical hurdles and have the enthusiastic support of auto manufacturers, energy companies and policymakers. Fuel-cell vehicles are several times as efficient as today's conventional gasoline cars, and their only tailpipe emission is water vapor.
What is more, hydrogen fuel can be made without adding any greenhouse gases to the atmosphere. For example, the power needed to produce hydrogen from electrolysis—using electricity to split water into hydrogen and oxygen—can come from renewable energy sources such as solar cells, wind turbines, hydroelectric plants and geothermal facilities. Alternatively, hydrogen can be extracted from fossil fuels such as natural gas and coal, and the carbon by-products can be captured and sequestered underground.
Before a hydrogen-fueled future can become a reality, however, many complex challenges must be overcome. Carmakers must learn to manufacture new types of vehicles, and consumers must find them attractive enough to buy. Energy companies must adopt cleaner techniques for producing hydrogen and build a new fuel infrastructure that will eventually replace the existing systems for refining and distributing gasoline. Hydrogen will not fix all our problems tomorrow; in fact, it could be decades before it starts to reduce greenhouse gas emissions and oil use on a global scale. It is important to recognize that a hydrogen transition will be a marathon, not a sprint.
The Fuel-Cell Future
Over the past decade, 17 countries have announced national programs to develop hydrogen energy, committing billions of dollars in public funds. In North America more than 30 U.S. states and several Canadian provinces are developing similar plans. Most major car companies are demonstrating prototype hydrogen vehicles and investing hundreds of millions of dollars into R&D efforts. Honda, Toyota and General Motors have announced plans to commercialize fuel-cell vehicles sometime between 2010 and 2020. Automakers and energy companies such as Shell, Chevron and BP are working with governments to introduce the first fleets of hydrogen vehicles, along with small refueling networks in California, the northeastern U.S., Europe and China.
The surge of interest in hydrogen stems not only from its long-term environmental benefits but also from its potential to stimulate innovation. Auto manufacturers have embraced fuel-cell cars because they promise to become a superior consumer product. The technology offers quiet operation, rapid acceleration and low maintenance costs. Replacing internal-combustion engines with fuel cells and electric motors eliminates the need for many mechanical and hydraulic subsystems; this change gives automakers more flexibility in designing these cars and the ability to manufacture them more efficiently. What is more, fuel-cell vehicles could provide their owners with a mobile source of electricity that might be used for recreational or business purposes. During periods of peak power usage, when electricity is most expensive, fuel-cell cars could also act as distributed generators, providing relatively cheap supplemental power for offices or homes while parked nearby.
Automakers, however, must address several technical and cost issues to make fuel-cell cars more appealing to consumers. A key component of the automotive fuel cell is the proton-exchange membrane (PEM), which separates the hydrogen fuel from the oxygen. On one side of the membrane, a catalyst splits the hydrogen atoms into protons and electrons; then the protons cross the membrane and combine with oxygen atoms on the other side. Manufacturers have reduced the weight and volume of PEM fuel cells so that they easily fit inside a compact car. But the membranes degrade with use—current automotive PEM fuel cells last only about 2,000 hours, less than half the 5,000-hour lifetime needed for commercial vehicles. Companies are developing more durable membranes, however, and in late 2005 researchers at 3M, the corporation best known for Scotch tape and Post-it notes, reported new designs that might take fuel cells to 4,000 hours and beyond within the next five years.
Another big challenge is reducing the expense of the fuel cells. Today's fuel-cell cars are handmade specialty items that cost about $1 million apiece. Part of the reason for the expense is the small scale of the test fleets; if fuel-cell cars were mass-produced, the cost of their propulsion systems would most likely drop to a more manageable $6,000 to $10,000. That price is equivalent to $125 per kilowatt of engine power, which is about four times as high as the $30-per-kilowatt cost of a comparable internal-combustion engine. Fuel cells may require new materials and manufacturing methods to reach parity with gasoline engines. Car companies may also be able to lower costs by creatively redesigning the vehicles to fit the unique characteristics of the fuel cell. GM officials have stated that fuel-cell cars might ultimately become less expensive than gasoline vehicles because they would have fewer moving parts and a more flexible architecture.
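The $125-per-kilowatt figure also pins down the engine power the comparison assumes. Both unit costs are from the text; the implied power rating is simple division:

```python
system_cost = 10_000      # upper end of mass-produced propulsion cost ($)
fuel_cell_per_kw = 125    # $/kW for the fuel-cell system, from the text
gasoline_per_kw = 30      # $/kW for a comparable internal-combustion engine

power_kw = system_cost / fuel_cell_per_kw
cost_ratio = fuel_cell_per_kw / gasoline_per_kw
print(f"Implied engine power: {power_kw:.0f} kW")        # 80 kW
print(f"Fuel cell vs. gasoline cost ratio: {cost_ratio:.1f}x")
```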
Automobile engineers must also figure out how to store enough hydrogen in a fuel-cell car to ensure a reasonable driving range—say, 300 miles. Storing hydrogen in its gaseous state requires large, high-pressure cylinders. Although liquid hydrogen takes up less space, it must be super-cooled to temperatures below -253 degrees Celsius (-423 degrees Fahrenheit). Automakers are exploring the use of metal hydride systems that absorb hydrogen under pressure, but these devices tend to be heavy (about 300 kilograms). Finding a better storage method is a major thrust of hydrogen R&D worldwide. In the absence of a breakthrough technology, most fuel-cell vehicles today opt for the simplicity of storing the hydrogen as a compressed gas. With clever packaging and increased pressure, these cars are approaching viable travel ranges without compromising trunk space or vehicle weight. In 2005 GM, Honda and Toyota demonstrated compact fuel-cell cars with a 300-mile range using hydrogen gas compressed at 70 megapascals. (Atmospheric pressure at sea level is about 0.1 megapascal.)
Finally, safety is a necessary precondition for introducing any new fuel. Although hydrogen is flammable, it has a higher ignition temperature than gasoline and disperses in the air much more quickly, reducing the risk of fire. On the downside, a much wider range of concentrations of hydrogen is flammable, and a hydrogen flame is barely visible. Oil refineries, chemical plants and other industrial facilities already handle vast quantities of hydrogen without incident, and with proper engineering it can be made safe for consumer applications as well. The U.S. Department of Energy and other groups are currently developing safety codes and standards for hydrogen fuel.
Once hydrogen cars are introduced, how soon could they capture a large share of the market and start to significantly reduce carbon emissions and oil use? Because cars last about 15 years, it would take at least that long to switch over the entire fleet. Typically after a new automotive technology undergoes pre-commercial research, development and demonstration, it is introduced to the market in a single car model and only later appears in a variety of vehicles. (For example, hybrid gas-electric engines were first developed for compact sedans and later incorporated into SUVs.) Costs generally fall as production volumes increase, making the innovation more attractive. It can take 25 to 60 years for a new technology to penetrate a sizable fraction of the fleet. Although fundamental research on hybrid vehicles began in the 1970s, it was not until 1993 that Toyota began development of the Prius hybrid. Initial sales began in late 1997, but eight years later hybrid models from several manufacturers still accounted for only 1.2 percent of new vehicle sales in the U.S.
Harvesting Hydrogen
Like electricity, hydrogen must be produced from some energy source. Currently the vast majority of hydrogen is obtained from the high-temperature processing of natural gas and petroleum. Oil refineries use hydrogen to purify petroleum-derived fuels, and chemical manufacturers employ the gas to make ammonia and other compounds. Hydrogen production now consumes 2 percent of global energy, and its share is growing rapidly. If all this hydrogen were devoted to fuel cell cars, it would power about 150 million vehicles, or about 20 percent of the world's fleet. Although most hydrogen is produced and immediately used inside refineries or chemical plants, some 5 to 10 percent is delivered to distant locations by truck or pipeline. In the U.S. this delivery system carries enough energy to fuel several million cars, and it could serve as a springboard to a hydrogen economy.
Making hydrogen from fossil fuels, however, generates carbon dioxide as a by-product. If hydrogen were produced from natural gas, the most common method today, and used in an efficient fuel-cell car, the total greenhouse gas emissions would work out to be about 110 grams per kilometer driven. This amount is somewhat less than the total emissions from a gasoline hybrid vehicle (150 grams per kilometer) and significantly less than those from today's conventional gasoline cars (195 grams per kilometer). The ultimate goal, though, is to produce hydrogen with little or no greenhouse gas emissions. One option is to capture the carbon dioxide emitted when extracting hydrogen from fossil fuels and inject it deep underground or into the ocean. This process could enable large-scale, clean production of hydrogen at relatively low cost, but establishing the technical feasibility and environmental safety of carbon sequestration will be crucial. Another idea is biomass gasification—heating organic materials such as wood and crop wastes so that they release hydrogen and carbon monoxide. (This technique does not add greenhouse gases to the atmosphere, because the carbon emissions are offset by the carbon dioxide absorbed by the plants when they were growing.) A third possibility is the electrolysis of water using power generated by renewable energy sources such as wind turbines or solar cells.
Although electrolysis and biomass gasification face no major technical hurdles, the current costs for producing hydrogen using these methods are high: $6 to $10 per kilogram. (A kilogram of hydrogen has about the same energy content as a gallon of gasoline, but it will propel a car several times as far because fuel cells are more efficient than conventional gasoline engines.) According to a recent assessment by the National Research Council and the National Academy of Engineering, however, future technologies and large-scale production and distribution could lower the price of hydrogen at the pump to $2 to $4 per kilogram [see box on opposite page]. In this scenario, hydrogen in a fuel-cell car would cost less per kilometer than gasoline in a conventional car today.
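The claim that $2-to-$4 hydrogen would undercut gasoline per unit distance can be illustrated with a hedged sketch. The hydrogen price is the midpoint of the projection above; the gasoline price, the conventional car's fuel economy and the 2.4x fuel-cell efficiency multiple are our illustrative assumptions, not figures from the cited assessment:

```python
# Fuel cost per mile: hydrogen fuel-cell car vs. conventional gasoline car.
# A kilogram of hydrogen holds roughly the energy of a gallon of gasoline.
gasoline_price = 3.0       # $/gallon (illustrative assumption)
gasoline_mpg = 25.0        # conventional car fuel economy (assumption)
h2_price = 3.0             # $/kg, midpoint of the $2-$4 projection
efficiency_multiple = 2.4  # fuel-cell car miles per unit of fuel energy,
                           # relative to the gasoline car (assumption)

gasoline_cost_per_mile = gasoline_price / gasoline_mpg
h2_cost_per_mile = h2_price / (gasoline_mpg * efficiency_multiple)
print(f"Gasoline: ${gasoline_cost_per_mile:.3f}/mile")
print(f"Hydrogen: ${h2_cost_per_mile:.3f}/mile")
```

Under these assumptions the fuel-cell car's fuel cost per mile is less than half the gasoline car's, which is what drives the conclusion in the text.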
Nuclear energy could also provide the power for electrolysis, although producing hydrogen this way would not be significantly cheaper than using renewable sources. In addition, nuclear plants could generate hydrogen without electrolysis: the intense heat of the reactors can split water in a thermochemical reaction. This process might produce hydrogen more cheaply, but its feasibility has not yet been proved. Moreover, any option involving nuclear power has the same drawbacks that have dogged the nuclear electric power industry for decades: the problems of radioactive waste, proliferation and public acceptance.
A New Energy Infrastructure
Because the U.S. has such rich resources of wind, solar and biomass energy, making large amounts of clean, inexpensive hydrogen will not be so difficult. The bigger problem is logistics: how to deliver hydrogen cheaply to many dispersed sites. The U.S. currently has only about 100 small refueling stations for hydrogen, set up for demonstration purposes. In contrast, the country has 170,000 gasoline stations. These stations cannot be easily converted to hydrogen; the gas is stored and handled differently than liquid fuels such as gasoline, requiring alternative technologies at the pump.
The need for a new infrastructure has created a "chicken and egg" problem for the incipient hydrogen economy. Consumers will not buy hydrogen vehicles unless fuel is widely available at a reasonable price, and fuel suppliers will not build hydrogen stations unless there are enough cars to use them. And although the National Research Council's study projects that hydrogen will become competitive with gasoline once a large distribution system is in place, hydrogen might cost much more during the early years of the transition.
One strategy for jump-starting the changeover is to first focus on fleet vehicles—local delivery vans, buses and trucks— that do not require an extensive refueling network. Marine engines and locomotives could also run on hydrogen, which would eliminate significant emissions of air pollutants. Hydrogen fuel cells might power small vehicles that now use electric batteries, such as forklifts, scooters and electric bikes. And fuel cells could also be used in stationary power production: for example, they could generate electricity for police stations, military bases and other customers that do not want to rely solely on the power grid. These niche markets could help bring down the cost of fuel cells and encourage energy companies to build the first commercial hydrogen stations.
To make a substantial dent in global oil use and greenhouse gas emissions, however, hydrogen fuel will have to succeed in passenger vehicle markets. Researchers at the University of California, Davis, have concluded that 5 to 10 percent of urban service stations (plus a few stations connecting cities) must offer hydrogen to give fuel-cell car owners roughly the same convenience enjoyed by gasoline customers. GM has estimated that providing national coverage for the first million hydrogen vehicles in the U.S. would require some 12,000 hydrogen stations in cities and along interstates, each costing about $1 million. Building a full-scale hydrogen system serving 100 million cars in the U.S. might cost several hundred billion dollars, spent over decades. This estimate counts not only the expense of building refueling stations but also the new production and delivery systems that will be needed if hydrogen becomes a popular fuel.
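GM's station estimate implies a modest per-vehicle infrastructure cost for the initial rollout. Both inputs are from the text:

```python
stations = 12_000              # hydrogen stations for national coverage
cost_per_station = 1_000_000   # dollars per station, from the text
first_vehicles = 1_000_000     # first million hydrogen vehicles served

total = stations * cost_per_station
print(f"Initial network cost: ${total / 1e9:.0f} billion")   # $12 billion
print(f"Per vehicle: ${total / first_vehicles:,.0f}")        # $12,000
```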
Those numbers may sound daunting, but the World Energy Council projects that the infrastructure costs of maintaining and expanding the North American gasoline economy over the next 30 years will total $1.3 trillion, more than half of which will be spent in oil-producing countries in the developing world. Most of these costs would go toward oil exploration and production. About $300 billion would be for oil refineries, pipelines and tankers—facilities that could eventually be replaced by a hydrogen production and delivery system. Building a hydrogen economy is costly, but so is business as usual.
Furthermore, there are several ways to deliver hydrogen to vehicles. Hydrogen can be produced regionally in large plants, then stored as a liquid or compressed gas, and distributed to refueling stations by truck or gas pipeline. It is also possible to make hydrogen locally at stations—or even in homes—from natural gas or electricity [see box on page 96]. In the early stages of a hydrogen economy, when the number of fuel-cell vehicles is relatively small, truck delivery or on-site production at refueling stations might be the most economical options. But once a large hydrogen demand is established—say, 25 percent of all the cars in a large city—a regional centralized plant with pipeline delivery offers the lowest cost. Centralized hydrogen production also opens the way for carbon sequestration, which makes sense only at large scales.
In many respects, hydrogen is more like electricity than gasoline. Because hydrogen is more costly to store and transport than gasoline, energy companies will most likely produce the fuel all over the country, with each generation plant serving a regional market. What is more, the supply pathways will vary with location. A hydrogen economy in Ohio—which has plentiful coal and many suitable sites for carbon dioxide sequestration—might look entirely different from one in the Pacific Northwest (which has low-cost hydropower) or one in the Midwest (which can rely on wind power and biofuels). A small town or rural area might rely on truck delivery or on-site production, whereas a large, densely populated city might use a pipeline network to transport hydrogen.
Developing a hydrogen economy will certainly entail some financial risks. If an energy company builds giant production or distribution facilities and the fuel-cell market grows more slowly than expected, the company may not be able to recoup its investments. This dilemma is sometimes called the "stranded asset" problem. The energy industry can minimize its risk, though, by adding hydrogen supply in small increments that closely follow demand. For example, companies could build power plants that generate both electricity and a small stream of hydrogen for the early fuel-cell cars. To distribute the hydrogen, the companies could initially use truck delivery and defer big investments such as pipelines until a large, established demand is in place.
The First Steps
The road to a hydrogen transportation system actually consists of several parallel tracks. Raising fuel economy is the essential first step. Developing lightweight cars, more efficient engines and hybrid electric drive trains can greatly reduce carbon emissions and oil use over the next few decades. Hydrogen and fuel cells will build on this technical progression, taking advantage of the efficiency improvements and the increasing electrification of the vehicles.
The development of the hydrogen fuel infrastructure will be a decades-long process moving in concert with the growing market for fuel-cell vehicles. Through projects such as the California Hydrogen Highways Network and HyWays in Europe, energy companies are already providing hydrogen to test fleets and demonstrating refueling technologies. To enable fuel-cell vehicles to enter mass markets in 10 to 15 years, hydrogen fuel must be widely available at a competitive price by then. Concentrating hydrogen projects in key regions such as southern California or the Northeast corridor might help hasten the growth of the fuel-cell market and reduce the cost of infrastructure investments.
In the near term, the bulk of the hydrogen fuel will most likely be extracted from natural gas. Fueling vehicles this way will cut greenhouse gas emissions only modestly compared with driving gasoline hybrids; to realize hydrogen's full benefits, energy companies must either make the gas from zero-carbon energy sources or sequester the carbon by-products. Once hydrogen becomes a major fuel—say, in 2025 or beyond—governments should phase in requirements for zero or near-zero emissions in its production. And in the meantime, policymakers should encourage the ongoing efforts to develop clean-power technologies such as wind, solar, biomass gasification and carbon sequestration. The shift to a hydrogen economy can be seen as part of a broader move toward lower-carbon energy.
Although the transition may take several decades, hydrogen fuel-cell vehicles could eventually help protect the global climate and reduce America's reliance on foreign oil. The vast potential of this new industry underscores the importance of researching, developing and demonstrating hydrogen technologies now, so they will be ready when we need them.