Inside Scoops on the Environment

Sherry Seethaler

Entertainment Budget

How much does it cost to watch TV? What “gotchas” should I be aware of?

It depends on whether your set rivals the wall-dominating models in Ray Bradbury’s book Fahrenheit 451 or is a more modest old-fashioned type. The average power use per square inch is 0.36 watts (W) for a plasma TV, 0.27W for an LCD TV, 0.23W for one of those vintage cathode ray tube sets, and 0.14W for a digital light processing TV, according to a study by Pacific Gas and Electric (PG&E).

For an average-sized model, PG&E estimates that power use ranges from 101W for a cathode ray tube TV to 361W for a plasma screen. If you spend 5 hours per day glued to the set or with the TV providing background noise, average energy consumption ranges from 184 to 659 kilowatt-hours (kWh) per year. At a rate of 13¢ per kilowatt-hour, that would cost between $24 and $86 annually. For a table comparing the power consumption of 150 HDTVs, see CNET reviews (http://reviews.cnet.com/green-tech/tv-consumption-chart/).
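
To sketch that arithmetic (a rough estimate, using the PG&E wattages, 5 hours of daily viewing, and the 13¢ rate quoted above):

```python
# Rough sketch of the annual-cost arithmetic above, using the figures
# quoted in the text: PG&E wattages, 5 hours/day, and 13 cents per kWh.
HOURS_PER_DAY = 5
DOLLARS_PER_KWH = 0.13

for name, watts in [("cathode ray tube", 101), ("plasma", 361)]:
    kwh_per_year = watts * HOURS_PER_DAY * 365 / 1000
    dollars = kwh_per_year * DOLLARS_PER_KWH
    print(f"{name}: {kwh_per_year:.0f} kWh/year, about ${dollars:.0f}/year")

# cathode ray tube: 184 kWh/year, about $24/year
# plasma: 659 kWh/year, about $86/year
```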

TV energy consumption has been increasing rapidly in recent years. The trend is expected to continue due to growing sales of LCD and plasma TVs; expanding screen sizes; increasing daily use; enhanced features and functionality; and the electronics associated with TVs, such as cable and satellite tuners, digital video recorders, and game consoles.

The Natural Resources Defense Council estimates that television viewing represents about 10 percent of residential energy consumption. To decrease energy use, your set may allow you to opt for the power-saver mode, reduce light output, choose the “home” instead of the “retail” setting, and turn off “quick start.”

Vampire Appliances

If you leave a room and plan to come back, is it better to leave the lights on or turn them off? I’ve heard conflicting views on this.

Many people hold the misconception that switching on lights, especially fluorescent lights, requires a large power surge. This is not true. The surge needed to start a fluorescent bulb consumes only as much energy as the bulb uses in a few seconds of normal operation.

On the other hand, the life of a bulb is diminished by frequent switching. Trading off energy use versus lifetime of the bulb, a general rule of thumb is that it is worth turning off the light if you will be gone more than 10 minutes.
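
As a toy illustration of that trade-off (every parameter here is an assumption for illustration, not a figure from the text, chosen so the break-even point lands near the 10-minute rule of thumb):

```python
# Toy break-even sketch: energy saved by switching off versus the cost
# of bulb wear from one extra on/off cycle. All parameters below are
# illustrative assumptions, not measured values.
WATTS = 15                  # assumed fluorescent bulb power
DOLLARS_PER_KWH = 0.13      # electricity rate used elsewhere in the text
BULB_PRICE = 3.00           # assumed replacement cost, dollars
RATED_HOURS = 10_000        # assumed rated bulb life
HOURS_LOST_PER_SWITCH = 1   # assumed life lost per extra on/off cycle

def worth_turning_off(minutes_away):
    """True if the energy saved outweighs the wear cost of switching."""
    energy_saved = WATTS * (minutes_away / 60) / 1000 * DOLLARS_PER_KWH
    wear_cost = BULB_PRICE * HOURS_LOST_PER_SWITCH / RATED_HOURS
    return energy_saved > wear_cost

for minutes in (2, 5, 10, 30):
    print(minutes, worth_turning_off(minutes))  # crosses over near 10 min
```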

When possible, consider turning off devices that consume standby power, such as satellite boxes and devices with receivers for remote controls. On average, standby power accounts for 5 to 7 percent of U.S. household energy use and increases with greater penetration of home electronics.

Light Pollution

Is there any way to estimate, however roughly, the amount of greenhouse gases that would be kept out of the atmosphere if everyone turned off their porch lights?

Let’s assume that, on average, every household in America illuminates one 60W light bulb for 8 hours per night. Over the course of a year, each household would use 175kWh to keep the porch lights on. According to the American Community Survey, about 125 million housing units exist in the United States. That means porch lights could consume a total of 22 billion kWh per year.
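
The arithmetic behind those figures (a back-of-the-envelope calculation using the numbers above):

```python
# Porch-light arithmetic from the text: one 60W bulb, 8 hours/night,
# across roughly 125 million U.S. housing units.
WATTS = 60
HOURS_PER_NIGHT = 8
HOUSING_UNITS = 125e6

kwh_per_household = WATTS * HOURS_PER_NIGHT * 365 / 1000  # ~175 kWh/year
total_kwh = kwh_per_household * HOUSING_UNITS             # ~22 billion kWh/year
print(f"{kwh_per_household:.0f} kWh per household, "
      f"{total_kwh / 1e9:.0f} billion kWh nationwide")
```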

The amount of greenhouse gases emitted per kWh of electrical energy produced depends on the production method. The coefficient is about 2 pounds of carbon dioxide per kWh if electricity is derived from the combustion of coal, which currently fuels nearly half of U.S. electricity generation. The burning of other fossil fuels, mostly natural gas, generates almost a quarter of our electric energy needs. About another one-fifth comes from nuclear power. The remainder comes from renewables, including wind, solar, and geothermal energy.

The U.S. Environmental Protection Agency has a greenhouse gas equivalencies calculator at its Web site (www.epa.gov/cleanenergy/energy-resources/calculator.html). Based on the current mixture of electricity-generation methods, about 1.6 pounds of carbon dioxide equivalents are produced per kWh generated. Therefore, saving 22 billion kWh of electricity would reduce greenhouse gas emissions by 16 million metric tons. This is 0.03 percent of the approximately 50 billion tons of annual worldwide greenhouse gas emissions from human activities.
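
Carrying the calculation through (again back-of-the-envelope, with the 1.6-pound coefficient and the roughly 50-billion-ton global total quoted above):

```python
# Converting the saved electricity into greenhouse gas emissions,
# using the coefficients quoted in the text.
TOTAL_KWH = 22e9           # porch-light consumption estimated above
LB_CO2E_PER_KWH = 1.6      # current U.S. electricity generation mix
KG_PER_LB = 0.4536

metric_tons = TOTAL_KWH * LB_CO2E_PER_KWH * KG_PER_LB / 1000
share = metric_tons / 50e9  # of annual worldwide emissions
print(f"{metric_tons / 1e6:.0f} million metric tons, "
      f"{share:.2%} of global emissions")
# 16 million metric tons, 0.03% of global emissions
```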

The Big Chill

Is there a chance that there will be another ice age?

Because large ice sheets still exist in Greenland and Antarctica, we technically still live in the ice age that began millions of years ago. By a more colloquial definition, the last ice age, or period of extensive advance of ice sheets toward the equator, ended 10,000 years ago. Climatologists refer to these colder periods within an ice age as glacial periods and the warmer periods (like the present) as interglacials.

Not long ago, many climatologists were predicting that Earth was headed into a glacial period. What appeared to be a warming trend in the early part of the twentieth century, after the “Little Ice Age,” shifted to cooling from 1940 to the late 1960s. (Your relatives in the Midwest or eastern United States or Europe weren’t exaggerating—winter really was snowier back in the good old days.)

Until the warming trend resumed in the 1970s, much disagreement swirled over whether the world had entered into a long-term cool spell. The cooling turned out to be temporary and localized to the Northern Hemisphere. Particulate matter in the atmosphere from industrial pollution (most prevalent in the Northern Hemisphere) may have played a role by reflecting incoming solar radiation. New regulations and pollution-control technologies had started to clear the air around the time the warming resumed.

Glacial periods happen on a quasiregular basis, so we should eventually plunge into another. However, computer simulations suggest that the increasing levels of carbon dioxide in the atmosphere may lock Earth into an irreversible greenhouse effect and stave off future ice ages.

That sounds terrific (especially to weather wimps like me), but the climate system may have some tricks in store. For example, the sinking of dense, salty water in the North Atlantic helps drive a conveyor belt of currents around the globe that ultimately brings warm water back to the North Atlantic. Some researchers worry that fresh water from melting glaciers could create a “lid” that reduces the sinking that drives this globe-girdling current. Most researchers do not think the current would shut down completely, but reductions in flow could chill Europe and have wide-ranging effects on climate elsewhere.

The Little Chill

I’ve recently seen a few programs about the “Little Ice Age,” yet they were all about the Northern Hemisphere. What was happening in the Southern Hemisphere at that time?

The term “Little Ice Age” has been used to describe a cool period between roughly the seventeenth and nineteenth centuries. No consensus has arisen on when the Little Ice Age began and ended. Some scientists place the beginning as early as 1300, when the ice pack around Greenland began to grow.

The most detailed climate information about the Little Ice Age comes from Europe and North America. Evidence also shows that mountain glaciers advanced during this period in a number of regions in the Southern Hemisphere, including New Zealand, Chile, and Peru. Rather than advancing continuously, glaciers advanced and retreated throughout the Little Ice Age in a zigzag of climatic shifts.

The cause of the Little Ice Age is not well understood. Historical records reveal that there was a remarkable lack of sunspots at the height of the Little Ice Age. Scientists think this could mean that changes in solar activity played a role.

Volcanic activity probably also contributed to the cooling. In the 1600s, the world experienced at least six climatically significant volcanic eruptions. Several major eruptions also took place in the 1800s, including Mt. Tambora in Indonesia in 1815, which may have been the largest eruption in the last 15,000 years. Large volcanic eruptions produce cold episodes because ash particles sent high into the stratosphere block solar radiation.

Even within the Northern Hemisphere, the average temperature during the Little Ice Age was less than 1°C (about 1.8°F) cooler than late-twentieth-century temperatures. Cooling was considerably more pronounced in some areas, suggesting that regional climate changes may have been largely independent of one another.

Complex interactions between the atmosphere and the ocean affect regional climate. Patterns of variability such as the North Atlantic Oscillation (NAO), a seesaw in atmospheric pressure over the North Atlantic, and El Niño, a warming of the surface of the central tropical Pacific Ocean, alter the movement of air masses and, therefore, the distribution of temperature and moisture.

Century-scale changes in the NAO may have chilled Europe during the Little Ice Age. When the NAO changes phase and winds blow from the northeast, Europe is bathed in cold air from Siberia rather than heat from the Atlantic’s surface. Similarly, in some parts of South America, glacial advances during the Little Ice Age have been tied to periods of few El Niño events.

Capricious Cycles

In the global warming debate, why are Earth’s cycles of warming and cooling never talked about? As recently as 30 years ago, we humans were scared of global cooling. Now, it is global warming. What if Earth is moving into another cycle?

During the most recent geological era, glacial/interglacial cycles have occurred every 100,000 years, on average. Scientists have proposed more than 30 different models for these cycles, most of which attribute the timing of glaciations to regular, slow changes in three factors: Earth’s orbit around the sun, the tilt of Earth on its axis, and the orientation of its axis. Other models attribute the cycles to random internal climate variability.

Statistical tests of the models suggest that Earth’s tilt is the dominant factor controlling glacial cycles. Earth’s tilt oscillates between about 22° and 24.5° with a period of 41,000 years. Glaciations end when the average annual sunlight reaching high latitudes increases, which occurs as the tilt of Earth’s axis increases. It may take two or three 41,000-year tilt cycles for the ice sheets to grow large enough to become sensitive to changes in Earth’s tilt. Feedback within Earth’s climate is important: The bigger the ice sheets get, the more sunlight they reflect, and the more Earth cools.

The tilt of Earth is currently 23.5° and decreasing. Therefore, Earth should slowly be moving toward a period in which the ice sheets advance. Of course, other natural factors affect climate within each glacial or interglacial period, including volcanic activity and solar activity.

Variation in solar activity is favored as an alternative explanation for global warming by individuals who dispute the role of human-produced greenhouse gases. But recent research shows that although solar cycles might explain the warming in the first half of the last century, they cannot explain the change in global temperatures over the past two decades. During that time, the sun’s output declined.

Today more certainty surrounds global warming than was the case in the 1970s with global cooling because vastly more research exists on how and why climate changes. Still, the effects on future climate of natural sources of variability, such as solar activity and volcanic activity, are currently unpredictable.

For Good Measure

Articles about climate change often report average regional and global temperatures to the hundredth of a degree. How are temperatures for Earth and specific regions determined now, and how were they determined in the recent past? How certain are average temperature measurements?

The instrumental period—the era of regular global thermometer-based temperature measurements—began in the 1850s. Accuracy has improved gradually over the instrumental period, with the geographical expansion of coverage and the introduction of new methods of data collection and averaging.

Current global temperature datasets are derived from measurements taken at more than 4,000 stations on land, as well as sea-surface temperatures recorded by ships and buoys. Satellite microwave and infrared imagery has supplemented marine temperature measurements since about 1980, and it is now possible to monitor land surface temperature with satellites.

Nonclimatic factors influence long-term temperature data. Random influences include changes in station location or instruments. Systematic changes can occur due to the introduction of new algorithms to calculate daily or monthly mean temperatures. When the cause of an observed discontinuity in temperature data at a site is not known, comparisons with neighboring sites can factor out nonclimatic influences.

Other adjustments may be needed, for example, to assess sea-surface temperature measurements taken by different methods. Before the 1940s, sea water was usually collected in uninsulated buckets for temperature measurements. Now temperature is often measured at ships’ engine inlets or hull sensors. Also, a calibration procedure relates the ocean “skin” temperature measured by satellites to ship and buoy temperature data, which are collected in the upper few meters of the ocean.

Temperature datasets do not have uniform coverage across the globe. Land-based measurements are sparsest over the interior of Africa, South America, and Antarctica. Measurements of sea surface temperature are densest along main shipping lanes and sparsest in the Southern Ocean.

Computational modeling is required to average temperature data across large regions and blend land and marine data. The models share some of the same raw data, but the averaging methods and the treatment of gaps in the data differ.
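
As a minimal sketch of one ingredient of such averaging (a toy example, not any particular center’s actual method), grid cells can be weighted by the cosine of their latitude, because the area of a cell shrinks toward the poles:

```python
import math

# Toy area-weighted average of gridded temperature anomalies. Real
# datasets add homogenization and gap-filling steps not shown here.
def global_mean(anomalies):
    """anomalies maps a grid cell's latitude (degrees) to its mean
    temperature anomaly (deg C); weight each cell by cos(latitude)."""
    weights = {lat: math.cos(math.radians(lat)) for lat in anomalies}
    total = sum(weights.values())
    return sum(a * weights[lat] for lat, a in anomalies.items()) / total

# A polar cell counts for much less than a tropical cell of the same
# angular size (hypothetical anomaly values).
print(global_mean({0.0: 0.4, 45.0: 0.6, 80.0: 1.2}))  # ~0.55
```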

Although the data combination methods differ among models, they reveal similar warming trends. Each model provides a global temperature trend per decade estimated to be accurate within 0.05°C (0.09°F), which is also the maximum variation among the major models used (from NASA; the Hadley Center; Remote Sensing Systems; and the University of Alabama, Huntsville).

Therefore, the global decadal warming trends are not certain beyond two decimal places. They may be recorded with three decimal places to permit calculations to be made without additional rounding error.

The Case of the Missing Oxygen

Given the quantity of fossil fuel being burned, one would expect a measurable consumption of atmospheric oxygen, since the emissions are carbon dioxide and water vapor. Is such a decline expected to occur eventually?

Only within the past two decades has it become possible to measure the small changes in oxygen concentration due to the burning of fossil fuels. These measurements are technically difficult because the background concentration of oxygen in the atmosphere is so large. Oxygen makes up nearly 21 percent of the atmosphere, a concentration more than 500 times greater than that of carbon dioxide.
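
A one-line check of that comparison, using 20.95 percent for oxygen and the roughly 390 ppm carbon dioxide figure quoted later in this chapter:

```python
# Oxygen vs. carbon dioxide abundance, in parts per million.
O2_PPM = 209_500   # nearly 21 percent of the atmosphere
CO2_PPM = 390      # concentration cited later in this chapter

print(O2_PPM / CO2_PPM)  # ~537, i.e., more than 500 times greater
```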

The very sensitive measurements show that the burning of fossil fuels is consuming atmospheric oxygen (see Nature, August 27, 1992). The decline is about a ten-thousandth of 1 percent of the total amount of oxygen in the atmosphere.

Under Pressure

Barometer readings that measure atmospheric pressure are not changing. Where are the cumulative emissions of millions of tons of greenhouse gases?

The combustion of fossil fuels yields carbon dioxide (CO2) and water vapor (H2O). The atmospheric concentrations of both gases are increasing. But because the oxygen atoms in both gases come from atmospheric oxygen (O2), only a portion of the emissions (the carbon and hydrogen from fossil fuels) is “new” mass being added to the atmosphere.

The mass of the atmosphere would increase by the mass of fossil fuels burned if the chemical equation for combustion were the sole factor to consider. The other critical factor is that only about two-thirds of the carbon dioxide released by the burning of fossil fuels remains in the atmosphere. The oceans absorb the remaining carbon dioxide, which reacts with water to form carbonic acid.

Accounting for the uptake of carbon dioxide by the ocean, the mass of carbon dioxide and water vapor being produced is roughly equal to the mass of oxygen being consumed. Although these calculations are not precise because the burning of fossil fuels also involves other chemical reactions, any pressure changes caused by the burning of fossil fuels are small compared to normal fluctuations in atmospheric pressure.
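
A worked example with methane (the main component of natural gas) shows why the books roughly balance; this is a back-of-the-envelope sketch using rounded molar masses and the text’s figure that the ocean takes up about one-third of the emitted carbon dioxide:

```python
# Mass balance for CH4 + 2 O2 -> CO2 + 2 H2O, per mole of methane,
# using rounded molar masses in grams.
CH4, O2, CO2, H2O = 16, 32, 44, 18

o2_consumed = 2 * O2                # 64 g pulled from the atmosphere
produced = CO2 + 2 * H2O            # 44 + 36 = 80 g emitted
net_added = produced - o2_consumed  # 16 g: the mass of the fuel itself

# Per the text, the ocean absorbs roughly one-third of the CO2.
stays_in_air = (2 / 3) * CO2 + 2 * H2O      # ~29.3 + 36 = ~65.3 g
print(o2_consumed, round(stays_in_air, 1))  # 64 vs 65.3: roughly equal
```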

On the other hand, global climate change has had measurable effects on the density of different layers of the atmosphere. As Earth’s surface has warmed by a fraction of a degree, at higher altitudes (30–50 miles), the atmosphere has cooled by several degrees and contracted (see Science, November 24, 2006). Because this pulls the intervening layers downward, the density of the atmosphere where satellites orbit (above 100 miles) has declined. With less drag, satellites will stay aloft longer, but so will potentially damaging spacecraft debris.

Nature Burps?

I read an article that said nature generates about 30 times as much CO2 as man does. Just what is the role of anthropogenic CO2 in the great debate?

The climate change issue has plenty of legitimate scientific debates, such as the extent of future warming and sea level rise, the role of future solar activity and volcanic activity, the best ways to model the effects of atmospheric particles—which, depending on their identity and location, can lead to warming, cooling, or cloud formation (itself challenging to model)—and the influence of ocean-atmospheric circulation patterns.

However, whether the increase in atmospheric carbon dioxide is less than, equal to, or greater than what humans have produced is measurable and, thus, should not be a source of controversy.

Just as a sustained increase in calorie intake (with no other changes) results in a slowly expanding girth, the billions of tons of anthropogenic (human-produced) carbon dioxide emitted annually should be expected to accumulate in the environment. Indeed, the concentration of carbon dioxide in the atmosphere has increased since the industrial revolution from 280 parts per million (ppm) to nearly 390 ppm.

The increase in atmospheric carbon dioxide is two-thirds or less of what humans have produced from burning fossil fuels and manufacturing cement during that period. Counting estimated effects of deforestation and other land use changes, the increase in atmospheric carbon dioxide is around half of what humans have produced.

In other words, the article is dead wrong on the relative amounts of carbon dioxide produced by humans and nature. Nature is acting as a net absorber of carbon dioxide rather than a net producer. (So as not to be guilty of countering one unsupported assertion with another, an article in the July 16, 2004, issue of Science details nature’s role as a carbon dioxide sink.)

Carbon dioxide is the second-most-important greenhouse gas, after water vapor, causing around 20–30 percent of the greenhouse effect. (That figure comes from the February 1997 issue of the Bulletin of the American Meteorological Society.)

Increasing atmospheric carbon dioxide concentrations are having another significant, measurable environmental impact: ocean acidification. The oceans have absorbed much of the “missing” carbon dioxide, the anthropogenic carbon dioxide that has not remained in the atmosphere. Carbon dioxide dissolves to form carbonic acid, which is corrosive to coral and other creatures with calcium carbonate shells. Many of these organisms are at the base of food chains, so acidification could have profound effects on ocean ecology.

Cause or Effect?

Some say carbon dioxide is a result of rising temperature, not a cause. Care to respond?

Rising temperature is both a cause and a result of increasing atmospheric carbon dioxide. Scientists think that at the end of the ice ages, changes in Earth’s orbit led to the initial warming, which caused carbon dioxide to be released from the oceans. The increased carbon dioxide in the atmosphere resulted in additional warming, which stimulated more carbon dioxide release, and so on.

On one hand, carbon dioxide causes warming by holding back infrared rays radiating from Earth, reducing the amount of thermal energy that escapes into space. On the other hand, warming causes an increase of carbon dioxide in the atmosphere because, as it does in a glass of soda, carbon dioxide becomes less soluble in sea water as water temperature increases.

The ocean is currently a significant sink for carbon dioxide. Future estimates of carbon dioxide uptake by the oceans must account for the effect of the predicted temperature increase on the ability of carbon dioxide to dissolve in sea water.

One way scientists can trace the carbon dioxide increases to human activities is by using isotopes, or types, of carbon. Release of carbon dioxide from sea water should not significantly change the ratio of carbon-12 relative to carbon-13 isotopes in the atmosphere.

Yet the amount of carbon-12 relative to carbon-13 in the atmosphere has increased since the industrial revolution. Because plants, from which fossil fuels are derived, preferentially incorporate lighter carbon-12, the carbon isotope “fingerprint” is consistent with fossil fuel burning as the source of much of the increased carbon dioxide.

Back to the Drawing Board

Nuclear power generates approximately 20 percent of U.S. electricity needs. Recently, the U.S. Secretary of Energy curtailed development of Yucca Mountain for long-term handling of nuclear waste. What is the current U.S. policy for nuclear waste by-products? Is the current policy adequate to protect the public?

Plans for a geologic repository for spent nuclear fuel at Yucca Mountain in Nevada have been shelved, yet concerns about energy independence and global warming have rekindled calls to increase nuclear power generation. The United States currently has no backup candidate for a long-term repository.

In 1987, the U.S. Congress selected Yucca Mountain as the only site to be investigated for a permanent nuclear waste storage facility. The Department of Energy (DOE) spent more than $13 billion researching the site. The final projected cost of developing the repository topped $75 billion, not including the cost of transporting the 60,000 tons of existing waste from reactor sites.

The DOE initially set out to design a facility that would be safe for at least 10,000 years. A recent ruling forced the DOE to extend the safety guarantee to one million years because the half-lives of some elements in the spent fuel are hundreds of thousands of years.

While the DOE goes back to the drawing board, the waste will remain at nuclear plants around the country. There, after spent fuel rods are removed from a nuclear reactor, they are placed in steel-lined concrete pools to cool. A few years later, they are removed, dried, and sealed into steel containers, which are packed into concrete silos. This is considered a safe medium-term storage option.

At the beginning of the nuclear age, the United States planned to reprocess spent nuclear fuel because the dominant type of nuclear reactor (the light-water reactor) consumes only a tiny percentage of the fissionable material before the reaction halts. U.S. reprocessing ceased after India’s 1974 nuclear test, which used plutonium that had been separated with U.S. assistance provided for civilian purposes.

Economics, rather than the risk of nuclear proliferation, is now the main reason reprocessing is out of favor, even in a number of countries that had been reprocessing spent fuel. The cost of natural uranium has declined since large deposits were discovered in Canada and elsewhere. Thus, it is cheaper to make new fuel rods (albeit ignoring disposal costs).

New fast reactors that can process spent fuel rods could provide a technological solution, but they are much more expensive to build and run than conventional reactors. For now, the unwanted rods remain in steel-and-concrete limbo.
