Chapter 2

Analysis of Today’s Energy Situation

2.1 Basic Energy Terms

Energy, measured in joules (J), is the product of power, measured in watts (W), and time, measured in seconds (s). In technical contexts it is convenient to measure time here not in seconds but in hours (Wh, where 1 Wh = 3,600 Ws = 3,600 J). By using a prefix as shown in Table 2.1, the huge span of different energy contents can be described. The table also shows the prefixes for small dimensions, often used for length (measured in meters, m), weight (measured in grams, g) and time (measured in seconds, s, also needed in later sections). Except for the very first and the last interval there is always a factor of 1,000 separating the various prefixes. Starting from the unit, each prefix for smaller numbers is one thousandth of the preceding one: 1 milli = 1/1,000 unit, 1 micro = 1/1,000 milli and so on. Similarly, for increasing numbers, each following prefix is 1,000 times larger than the preceding one: 1 kilo = 1,000 units, 1 Mega = 1,000 kilo and so on. In order to get a better feeling for the huge span ranging from 10^-35 up to 10^30, which covers 65 orders of magnitude, some examples are given in terms of length. Although this exercise is trivial for scientists it may be helpful for others.
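For readers who like to see these conventions in machine-checkable form, the following small Python sketch (my own illustration, not part of Table 2.1) converts between joules and watt-hours and applies the prefixes discussed above:

```python
# Minimal sketch of the unit conventions described above:
# energy [J] = power [W] * time [s], and 1 Wh = 3,600 J.
PREFIXES = {          # factor relative to the base unit
    "p": 1e-12, "n": 1e-9, "u": 1e-6, "m": 1e-3,
    "": 1.0, "k": 1e3, "M": 1e6, "G": 1e9, "T": 1e12, "P": 1e15,
}

def to_joule(value_wh: float, prefix: str = "") -> float:
    """Convert an energy given in (prefix)Wh into joules."""
    return value_wh * PREFIXES[prefix] * 3600.0

def to_wh(value_j: float, prefix: str = "") -> float:
    """Convert an energy given in joules into (prefix)Wh."""
    return value_j / (PREFIXES[prefix] * 3600.0)

if __name__ == "__main__":
    print(to_joule(1, "k"))    # 1 kWh = 3.6e6 J = 3.6 MJ
    print(to_wh(3.6e9, "M"))   # 3.6 GJ corresponds to 1.0 MWh
```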

Table 2.1 Prefixes for (very) large and (very) small numbers, the scientific notation (in brackets the logarithm), together with some examples.

While the smallest dimensions correspond to the subcomponents of the constituents of atoms, the larger ones describe the size of our Milky Way galaxy. The biggest dimension in reality, the size of our universe, is about 5×10^30 m (or 5 billion times the size of our Milky Way galaxy) and the smallest is about 1.6×10^-35 m, the so-called Planck length. Smaller dimensions do not make any physical sense, as theoretical physicists tell us (according to string theory). Right in the middle of these two extremes of the length scale (micro to Mega) lies what we as human beings normally experience. Why the world of human experience sits around the logarithmic middle between the minimum and maximum of reality may be a topic for philosophers to speculate about.

To give a feeling for what various energy contents mean, Table 2.2 lists some examples. Throughout this book energy will always be expressed in Wh. For those who want to compare these numbers with figures from other publications, a conversion table is given in Table 2.3.

Table 2.2 Commonly used units and some examples in the electricity and energy business.

Table 2.3 Conversion table of commonly used energy measures (SKE (SteinKohleEinheit) = coal equivalent).

A distinction is made between several energy forms: primary, secondary and end-user energy. The first is the energy content of primary resources such as coal, oil, gas or uranium just after mining or drilling. For convenient usage, these primary resources have to be converted into secondary energy forms. Crude oil, for example, is converted into the secondary energy forms diesel and petrol with fairly small losses compared with those associated with converting primary resources (coal, oil, gas or uranium) into the secondary energy electricity.

A note for the specialists: today there are three different ways to measure the primary energy (PE) of the various constituents: (1) the Physical Energy Content Method, (2) the Direct Energy Equivalency Method and (3) the Substitution Method. The first method, used among others by the OECD, IEA and Eurostat, takes the useful heat content as PE for all fossil and nuclear materials as well as for geothermal and solar thermal electricity production, while for those renewables producing electricity directly, this secondary energy (SE) is defined as PE. This is obviously arbitrary, as I would generally define the PE as the useful energy content of all materials and processes entering a machine or process to produce the needed SE. In that case, the PE for renewables would be the SE divided by the efficiency of the respective process. For example, if a 20% efficient solar module produces 10 kWh of electricity, the PE would be the 50 kWh contained in the sunlight used. But convention has established the procedure first described, which will also be used in this book except when otherwise stated. The second method, used by the UN and IPCC, is similar to the first one, with the exception that for non-combustion technologies such as geothermal and solar thermal electricity, but also for nuclear, the SE electricity is equated with PE. The third method, mostly used by the US Energy Information Administration (EIA) and BP, is based on the convention of equating all forms of SE (electricity and heat) with the volume of fossil fuels that would be required to generate the same amount of SE. In this case one not only has to postulate an assumed efficiency for this procedure; it is obviously also a tribute to the “old days”, when fossil sources were seen as the most important ones.
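The difference between these accounting conventions can be made explicit with a few lines of Python. This is only a sketch of my own; the 33% thermal and 38% substitution efficiencies are illustrative assumptions, not official values of the named institutions:

```python
def pe_physical_content(se_el_kwh: float, is_direct_renewable: bool,
                        thermal_efficiency: float = 0.33) -> float:
    """Physical Energy Content Method: electricity from direct renewables
    (PV, wind, hydro) counts as primary energy itself; for fuel- and
    heat-based generation the heat input counts as primary energy."""
    if is_direct_renewable:
        return se_el_kwh
    return se_el_kwh / thermal_efficiency

def pe_author_convention(se_el_kwh: float, conversion_efficiency: float) -> float:
    """Convention discussed in the text: PE = SE / efficiency of the process,
    e.g. a 20% efficient PV module delivering 10 kWh 'uses' 50 kWh of sunlight."""
    return se_el_kwh / conversion_efficiency

def pe_substitution(se_el_kwh: float, assumed_fossil_efficiency: float = 0.38) -> float:
    """Substitution Method (EIA/BP style): PE = fossil fuel that would have
    been needed to generate the same electricity (efficiency is an assumption)."""
    return se_el_kwh / assumed_fossil_efficiency

if __name__ == "__main__":
    se = 10.0  # kWh of PV electricity
    print(pe_physical_content(se, is_direct_renewable=True))     # 10 kWh
    print(pe_author_convention(se, conversion_efficiency=0.20))  # 50 kWh
    print(pe_substitution(se))                                   # ~26.3 kWh
```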

The last step is the conversion of secondary energy into end energy, which powers the actual service wanted. This is again accompanied by energy losses. Examples are diesel and petrol used to drive a car from A to B, where only about 30% of the secondary energy is transformed into motion and 70% is lost as heat, or electricity to power a light bulb, where the old incandescent lamp converted less than 10% of the secondary energy into light intensity (measured in lumen) while more than 90% was lost as heat. Another important example is comfortable housing. Firstly, we should remember what our ancestors 2,000 years ago already knew, namely to position a house so that it optimally collects – or keeps away – solar radiation during the year. Secondly, we should use today’s technologies and products for insulating walls and windows, using specially coated glass panes (low-e glass), in order to minimize the residual energy needed to heat and cool our houses. How this residual energy can then be provided in the most cost-competitive way – solar thermal collectors with seasonal storage or heating with PV solar electricity – will become an interesting question.

2.2 Global Energy Situation

Today’s global energy demand is shown in Figure 2.1. The left column describes the primary energy consumed around 2010, approximately 140 PWh, based on IEA data [2-1,2] and the “Physical Energy Content Method”. The contribution from renewable energies, including hydropower and biomass, has long been below about 13%; fossil fuels (coal, oil and gas) provide the lion’s share of about 80% and nuclear accounts for about 7%. The split between the various energy carriers is shown in Figure 2.2. It is the goal of this book to provide all the information necessary to understand that all future energy needs can be covered cost effectively using only renewables just a few decades from now.

Figure 2.1 Primary, secondary and end user energy globally around 2010.

Figure 2.2 Split of the primary energy (~2010 in PWh) into the various energy carriers.

The second column summarizes the secondary energy sources, consisting of treated fossil sources (like petrol, diesel and gas) and the convenient energy form electricity. The losses shown are mainly associated with converting primary energy into electricity in power stations. Although it is possible today to convert fossil fuel into electricity in a modern power station with up to ~60% efficiency, the global average conversion is only about one third, with two thirds of the primary energy content lost as heat in the cooling towers. In the same year, the approximately 20 PWh of global electricity demand and the associated losses of ~30 PWh consumed more than one third of the total primary energy (here I assumed 15 PWh from fossil and nuclear, which contribute the named losses, and the other 5 PWh from hydro and other renewables with no losses). The conversion of the remaining 90 PWh of primary energy into 70 PWh of fuel, gas, heating oil and other secondary energy sources is associated with losses which I estimate at ~20 PWh.
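The electricity-related numbers above can be verified with a short back-of-the-envelope calculation; the 15/5 PWh split and the one-third efficiency are the assumptions stated in the text:

```python
# Back-of-the-envelope check of the quoted numbers (all in PWh, ~2010).
primary_total   = 140.0
el_secondary    = 20.0    # global electricity demand
el_from_thermal = 15.0    # assumed share generated from fossil + nuclear
el_from_renew   = el_secondary - el_from_thermal  # hydro and other renewables
thermal_eff     = 1 / 3   # average power station efficiency

primary_for_thermal_el = el_from_thermal / thermal_eff        # 45 PWh
losses_el = primary_for_thermal_el - el_from_thermal          # 30 PWh waste heat
primary_for_electricity = primary_for_thermal_el + el_from_renew  # 50 PWh

print(losses_el)                                # ~30 PWh
print(primary_for_electricity / primary_total)  # ~0.36, i.e. "more than one third"
```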

The split of the primary energy between the various energy carriers is shown in Figure 2.2, where the fossil sector is additionally divided into oil, gas and coal.

The secondary energy is ultimately converted into what we really need, the end energy. This is again associated with quite high losses, including those for transportation, and depends on the type of end energy we are looking at. Some examples are the conversion of electricity into light, moving a car from A to B using petrol in an Otto engine, heating homes and water with gas or oil, and powering the various processes in industry with electricity or process heat. Only approximately 40 PWh, which is less than half of the secondary energy and less than one third of the original primary energy, ends up as end energy. An old rule of thumb for how much energy remains from step to step was the easily remembered ratio of 3:2:1 for primary, secondary and end energy, respectively (today 140:90:40 PWh = 3.5 : 2.3 : 1). As we will see later, this will no longer be the case in the future. Instead, a more favorable ratio of ~1.5 : 1 for secondary to end energy can be expected due to more efficient machines and appliances.

An analysis of the development over time of the major contributors to the various secondary energy forms is very interesting, such as the one conducted by Marchetti [2-3] in 1979. He found that since the 18th century, when wood was the main secondary energy source, a new form developed every 50 to 100 years, gradually replacing the preceding one. Industrialization, with the development of steam engines, pushed the use of coal as a source of secondary energy. The invention of the Otto engine and the quick increase in the number of cars at the beginning of the 20th century saw the development of oil as a new secondary energy source, followed by gas. Coal saw its peak at around 1920, and Marchetti’s study foresaw the peak of oil around 1990, with gas following around 2040. An interesting finding was the strongly decreasing ratio of carbon to hydrogen in the chemical compounds of the fuel with each transition from wood to coal to oil to gas: approximately 10 to 1 to 0.5 to 0.25. Nuclear, with a carbon content of zero, was still in its early stages in the 1970s and Marchetti assumed it would peak possibly at the end of the 21st century. He also made a forecast for a new secondary energy form starting in 2020, which he called “Solfus”, a contraction of solar and fusion. In a later publication [2-4] Marchetti elaborated more specifically on his predicted future development: gas should reach its market peak in 2040, nuclear in 2090 with a 60% market share, and Solfus was reduced to fusion with a mere starting point of 1% market penetration in 2025. Only in 2090 did he predict “solar power beamed from Venus” together with elementary particle technologies. In my view, however, nuclear may already see its end well before the close of the 21st century and solar along with all the other renewables may take up 100% of the market share by then – what a possible and fundamental change!

2.3 Energy Sectors

The use of the three secondary energy forms is shown in Table 2.4. We will concentrate on three major sectors:

  • industry,
  • small and medium enterprises (SMEs), private homes, offices, hotels etc.,
  • mobility.

Table 2.4 Secondary energies in main segments for 2010 (ref.: IEA [2-2], Greenpeace [2-5] and own considerations).

For many it is astonishing that about one quarter of secondary energy is used for mobility, that is, worldwide transportation including private cars, buses, trucks, railways, ships and planes. Although today most mobility sectors are powered by oil, we have to differentiate, because in the future the various transportation vehicles will use different secondary energies: much more electricity, hydrogen or hydrogen-based fuels (such as CH4), but little biofuel.

Based on recent studies by Greenpeace [2-5] and “Quantify” [2-6] the split of the various mobility sectors within the first decade of this century was approximately as summarized in Table 2.5.

Table 2.5 Mobility sectors with consumed energy (PWh) and relative share (%).

The fact that fuel for passenger transport exceeds 50% of all fuel consumption is striking – and at the same time presents a great opportunity to shift towards renewable energy, as will be discussed later. Railway transport is today the only sector which, besides oil-based fuel, uses a significant yet still small portion of electricity (0.2 PWh). Railways could potentially absorb a significant portion of passenger and freight transport. Aviation and shipping, although not highly significant in overall fuel consumption, contribute to pollution much more than the other areas. This is because most planes fly at high altitudes and thereby cause considerable secondary damage, and many ships use heavy marine diesel fuel with, in particular, a high sulfur content.

The sector comprising private homes, SMEs, trade, offices, hotels and others has a share of ~70% heating/cooling energy and ~30% electricity. It is quite surprising what a considerable portion of today’s secondary energy is used to heat and cool buildings, both private and business-related ones. Mostly oil and gas are used for this purpose, and only a small proportion is served through district heating using waste heat from conventional electricity-producing power stations. Proper insulation could almost eliminate this huge energy consumption; it is called a passive energy measure because it makes it unnecessary to actively provide heating or cooling. It may be easy to decrease this energy need in new houses, which are being built with appropriate insulation; however, there remain the millions of old houses in the existing stock, which will require a major effort in energy-efficient renovation in the coming years. As energy efficiency for buildings is a high-value topic for society, we must enable the necessary investments through politically supported measures such as low interest rates and tax deduction mechanisms. In the meantime, until all houses – including the existing ones – have this passive energy measure, it is desirable to cover as much of the heating and cooling demand as possible with solar thermal technologies. In the long run, however, this will become unnecessary, as will be discussed later.

Industry satisfies most of its energy needs through process heat (~50%), followed by electricity (~30%) and fuel (~20%).

To easily remember the totals of secondary energy for the various energy sectors plus electricity, see the graphical representation in Figure 2.3. Low-temperature heat and mobility each take up slightly more than a quarter, while electricity and industry (without electricity) each absorb slightly less than a quarter of the total secondary energy. Of the total electricity, about 40% is used by industry.

Figure 2.3 Global energy sectors for secondary energy (2010).

One of the important drivers of inflation is the increasing price of energy, both in transportation and in heating (in more northern countries like Germany) or cooling (in more southern areas). In Germany, for example, the average shopping basket attributes 13% of expenditure to transport fuel and 31% to household gas and fuel; hence almost half of the basket, and thereby a large part of the inflation rate, is tied to these price increases. This could be completely changed. Depleting oil resources, with increasing prices for exploration and transportation, will be replaced by renewable sources. Zero-emission houses of the future would no longer need fuel, which is becoming more and more expensive. Even “low-emission” houses, combined with excess energy from wind and solar used to power heat pumps for heating and cooling, could be even more cost effective. The annual cost for comfortable housing would then remain constant, thereby suppressing inflation considerably. As food for thought, one could postulate that the replacement of exhaustible energy sources by renewable energies would change the delicate and fragile balance of today’s economies towards a much more stable situation: exhaustible sources are becoming more and more expensive, while further technology development in renewables will even decrease their cost. I wonder how an economist would elaborate this postulate further.
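To illustrate how such basket weights feed into an inflation figure, here is a deliberately simplified sketch; the weights are the ones quoted above, while the assumed price increases are hypothetical and only serve to show the mechanism:

```python
# Illustrative only: how expenditure weights translate into an inflation
# contribution (contribution = weight * price change of that item).
basket = {
    # name: (expenditure weight, assumed annual price increase)
    "transport fuel":         (0.13, 0.08),
    "household gas and fuel": (0.31, 0.08),
    "everything else":        (0.56, 0.01),
}

contributions = {name: w * dp for name, (w, dp) in basket.items()}
inflation = sum(contributions.values())

for name, c in contributions.items():
    print(f"{name:25s} {c:.4f}")
print(f"{'total inflation rate':25s} {inflation:.4f}")
# With these assumed price changes the two energy items account for the bulk
# of the overall rate, which is the point made in the text.
```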

Today there are many discussions on whether society is able to survive with zero economic growth. This question becomes more and more important as – hopefully sooner rather than later – the growth of the global population also comes to zero and everyone has the same quality of life based on a fair and well-defined average. Together with the fading of inflation, zero economic growth could become an interesting option for future economic models.

2.4 Challenges for Fossil Fuels

2.4.1 Finiteness of Fossil Fuels Leading to the Peak of Oil and Gas

Today’s fossil fuels are the result of the accumulated growth of trees and plants over millions of years in pre-historic times, subsequently transformed into coal, oil and gas. In its flagship report [2-7], the WBGU collected all available information about the existing stocks of these fossil resources. For each resource an effort was made to differentiate between conventional and unconventional technologies for extracting it from its natural surroundings, and also between reserves, resources and further deposits where it is located. These data are summarized in Table 2.6; for convenience the respective data for nuclear resources are also included.

Table 2.6 Global occurrences of fossil and nuclear sources [2-7].

Bearing in mind that today’s annual primary energy is 140 PWh, a quick look at Table 2.6 tempts one to assume that we need not worry about our future energy security: with an overall total of more than 1 million PWh we could supply our annual primary energy needs for the coming 7,000 years. This conclusion, however, would be totally wrong. If we plot the numbers from Table 2.6 in a graph, as done in Figure 2.4, it quickly becomes evident that the biggest contributions are very questionable. Many occurrences, especially those accessible only with unconventional methods and the further deposits, are either not available or available only with substantial risk (examples are fossil sources below the ice sheets of the Arctic and Antarctic, the extraction of methane from deep-sea methane hydrate or the use of fast breeder reactors).

Figure 2.4 Global occurrences for fossil and nuclear sources.

I am always astonished when reading about new findings of oil and gas fields in newspapers (like the unconventional shale gas discoveries of recent years) and about the impression given that this would be the solution for the future. One fact is very clear: no matter how many additional oil and gas fields we may find, there is a limited amount of fossil fuel available to mankind, which should not be burned but kept for organic chemistry and related products for future generations. It is a shame that the arguments of many high-ranking politicians and industry leaders are guided only by short-term profit thinking and not by the welfare of future generations. I sometimes wonder whether these people have no kids or grandchildren.

Let us now take a closer look at the situation of oil. There are the conventional oil fields which have been developed over recent decades. From Table 2.6 the known reserves are only 1,778 PWh, which would last about 37 years at the current annual consumption. All known resources (1,391 PWh) add another 32 years. Only when unconventional oil such as oil sands and deep-sea oil fields is utilized – which of course will be much more expensive and/or add environmental concerns – could we substantially prolong the use of this energy carrier. In Figure 2.5 we see the rise in oil consumption over the last decades (black line) and the annual findings in past years as well as potential future findings (annual bars). Until the 1960s, much more oil was discovered each year than was produced and consumed. But world oil discovery rates have been declining since the early 1960s and many of the large oil fields are now 40 to 50 years old. Moreover, we are now consuming oil at a rate about six times greater than the rate of discovery. There is one obvious trend: a growing gap is developing between annual production and future findings. Once the surplus of findings accumulated until 1980 (the crossing point between discovery and production) is eaten up, we will have reached the point in time of “peak oil”, also called the “Hubbert peak”. This means that from that year onwards less oil will be produced and consumed than in previous years. In Figure 2.5 this is indicated for an assumed future production and happens around 2020. Many projections conclude that this may already just have happened or will happen at the latest by 2040. This will undoubtedly have a pronounced effect on the price of oil within a free-market economy: whenever a product is known to be getting scarcer, its price will increase. This is quite different from the situation we saw after 2008, when due to the financial crisis there was a lot of speculation which drove the barrel of oil towards $150. After that, it went back to the low $50s, increasing again towards and above the $100 mark by now. After peak oil has happened, there will be only one direction for the oil price, speculation aside, namely upwards towards $200 and even beyond. Many people see the risk of military conflict between countries demanding oil for their respective welfare.
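The static reach quoted above is simply reserves divided by annual consumption. A small sketch follows, with the annual consumption inferred from the 37-year figure and therefore an assumption of mine, not a value taken from Table 2.6:

```python
# Static reach ("reserves to production" ratio) for conventional oil.
annual_oil_consumption = 1778 / 37   # ~48 PWh per year, inferred, not tabulated
reserves  = 1778.0                   # PWh, conventional reserves (Table 2.6)
resources = 1391.0                   # PWh, additional known resources

print(reserves / annual_oil_consumption)                # ~37 years
print((reserves + resources) / annual_oil_consumption)  # ~66 years in total,
# roughly consistent with the ~37 + ~32 years quoted in the text.
```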

Figure 2.5 Annual regular conventional oil discovery and consumption (past discovery based on ExxonMobil (2002), revisions backdated).

For gas we have somewhat larger reserves than for oil, but the peaking of gas will also occur within a few decades, depending on how large further findings turn out to be. From all the different numbers I have seen, we may experience peak gas around the middle of this century.

Even if new findings are made – like shale gas in a number of countries or deep-water oil drilling – we should carefully balance the additional volume against the ever-increasing risks associated with exploiting these additional resources. We all remember the catastrophe of BP’s Deepwater Horizon on 20 April 2010, when about 800 million liters of oil spilled into the sea. Imagine this had happened in the Arctic Ocean, where low temperatures would not have allowed bacteria to restore the environment as quickly as was the case in the Gulf of Mexico. Shale gas comes with several environmental challenges which have to be examined very carefully. Let me name just one example where more transparency is definitely needed: even today (2013), companies performing fracking are in some countries allowed to refuse disclosure of the chemicals used in this process, despite the fact that contamination of ground water is one of the main concerns.

Coal is expected to be available for many centuries, and even for two millennia if all known resources are counted – provided we keep the same annual usage as today. Unfortunately, coal adds most of the dangerous greenhouse gas CO2, as will be discussed in more detail in the next section. Another important aspect when discussing the reach of fossil fuels is the underlying assumptions. Most often it is assumed that consumption will remain constant in the coming years. If, however, consumption grows, estimates based on stable consumption prove totally wrong, and not introducing additional measures to replace the traditional energy forms would then have a catastrophic effect. The impact of growing energy consumption can be impressively understood through a simple calculation done by Carlson [2-8]. He considered the US coal reserves to last for about 250 years at today’s usage level. This time shrinks to only about half if a very modest increase of just 1.1% per year is assumed, and decreases further to about 80 years at an annual increase in consumption of 2%. Qualitatively the same shortening applies to all of the worldwide fossil energy resources. In addition, if the gas and oil resources are depleted in a non-renewable economy, coal will have to be transformed into gas and liquid fuels.
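Carlson’s point follows from summing the geometric series of a consumption that grows by a fixed percentage each year. A minimal sketch (the exact results depend on the assumptions, so the figures only roughly reproduce those quoted above):

```python
import math

def depletion_time(static_reach_years: float, growth_rate: float) -> float:
    """Years until a resource is used up if consumption grows exponentially.

    static_reach_years: reserves / current annual consumption
    growth_rate:        annual growth of consumption (e.g. 0.02 for 2%)
    Derived from summing the geometric series of annual consumption.
    """
    if growth_rate == 0:
        return static_reach_years
    return math.log(1 + static_reach_years * growth_rate) / math.log(1 + growth_rate)

# US coal example quoted from Carlson [2-8]: ~250 years at constant usage.
for g in (0.0, 0.011, 0.02):
    print(f"growth {g:5.1%}: {depletion_time(250, g):6.1f} years")
# -> ~250, ~121 and ~90 years, i.e. roughly the halving and the
#    "about 80 years" effect described above.
```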

2.4.2 Climate Change Due to Greenhouse Gases – Best Understood by a Journey Through Our Earth’s History From Its Origin Until Today

It is interesting to observe the different arguments related to the topic of increasing CO2 concentration in the atmosphere and its influence on the global climate. While there are still some people who deny this correlation, an overwhelming number of serious scientists worldwide argue in favor of it. The first group, here called “Climategate”, collects scientific meteorological papers and looks for evidence that the topic of CO2 is less of a problem and could even be viewed as an advantage. The second group, the Intergovernmental Panel on Climate Change (IPCC), brings together thousands of scientists from 194 member countries worldwide. It was founded by the World Meteorological Organization (WMO) and the United Nations Environment Programme (UNEP) in 1988. The First Assessment Report (FAR) was published in 1990 and has been updated regularly since then (SAR, TAR and AR4 in 1995, 2001 and 2007, respectively [2-9]). The fifth report, AR5, is currently being prepared and will be ready in 2013/14. The findings of these reports are intensively discussed at regular conferences.

While it may well be that mistakes occur in such a worldwide effort, every effort should be made to avoid them and to clarify the problems. Following a dispute with people from “Climategate” I tried to find out what the controversy was all about. As so often when groups with different beliefs are arguing, it is valuable to first look at the data – which took me several clicks on the internet. One example of a mistake is the growth of the Antarctic sea ice, underestimated in the IPCC TAR/AR4 [2-9] reports, while emphasis was placed only on the decrease of the Arctic ice. Let me summarize a few numbers to illustrate the difficult situation between the opponents of the IPCC results, referred to here as “Climategate”, and the IPCC community, referred to as “IPCC”:

1. From the IPCC AR4 report: the change in the extent of Antarctic sea ice was reported to amount to (0.47 ± 0.8)% per decade
2. Comiso & Nishio [2-10] reported a much higher number of (1.7 ± 0.3)% per decade, as pointed out by “Climategate”
3. The newest data cited by “Climategate” (Turner, Comiso et al. [2-11]) concluded 0.97% per decade
4. Interestingly, in one of the “Climategate” comments there was also a cross reference to a discussion within the IPCC of a number of (1.3 ± 0.2)% per decade

On 16 February 2010, the “Climategate” community summarized these results on the internet under the headline “Another IPCC Error: Antarctic sea ice increase underestimated by 50%”. Had the “Climategate” community known that the number given in #4 above was being seriously discussed within the IPCC, they would have had obvious difficulties with their headline (note also that the 1.3 from #4 is in turn about 30% higher than the 0.97 in #3!). On the other hand, there is obviously room for improvement, as always in science – also in the IPCC community.

Another example from the “Climategate” community concerned an IPCC comment on the melting of the Himalayan glaciers. It became apparent that in an interview a specific year, namely 2035, had been named as the year by which these glaciers might have disappeared. The IPCC later admitted that this had been a mistake. The same authors did, however, agree that the worldwide melting of glaciers is a fact, only happening at a slower rate.

Regardless of when exactly this will happen, it will be a catastrophe for billions of people, since glaciers are the prerequisite for a continuous flow of water in many rivers. No glaciers means no continuous water flow; as a result, there will be times with no water in the rivers and, in the same year, flooding when the rainy season comes. This also has a dramatic effect on the availability of drinking water, which is already a big problem in some regions today.

An important argument against the Climategate community is that the consequences of climate change are so disastrous that one would have to be 100% sure of its non-existence – which is impossible. In that sense they act irresponsibly.

A journey through time from our earth’s origin until today

In a recent paperback, Rahmstorf and Schellnhuber [2-12] describe the topic of climate change in a comprehensive and well-documented manner. In particular, I would like to summarize one part in which the authors take a journey through the earth’s history from its beginnings until today. As this helped me to better understand the causes and consequences the various parameters have had for the climate of our planet, it may also be useful to others. I strongly advise everyone who is interested in the important debate on climate change to take a closer look at the mentioned paperback – it is more than worth it!

Before we start our journey we have to understand the major parameters which determine our climate. At any given time, our climate results from the simple energy balance between absorbed solar radiation (incoming solar radiation minus reflection) and the outgoing infrared radiation from our earth into space. Some gases, called greenhouse gases, in particular CO2, water vapor and methane, let the incoming solar radiation pass but not the outgoing infrared radiation, which originates from the absorption of the solar spectrum and its re-emission at longer wavelengths. The absorbed heat is radiated equally in all directions – hence partly back to earth. This results in a higher radiation flux at the surface of the earth (solar radiation plus infrared radiation re-emitted by greenhouse gases) compared with the situation without greenhouse gases. A new equilibrium with greenhouse gases can only be established if the earth’s surface radiates more, with the consequence that it has to be warmer. It is this phenomenon which is called the greenhouse effect. It is also vital for life: without the presence of these gases we would have a completely frozen planet. Without greenhouse gases the average temperature on our planet would be -18°C; only with these gases do we have a comfortable average of ~+15°C. The relative contribution of the various greenhouse gases today is 55% for CO2 and 45% for all others. The cause for concern is humanity’s active contribution to considerably increasing the concentration of greenhouse gases, especially CO2, in a relatively short period of time, as we will see at the end of our time journey.
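The -18°C figure follows from a simple zero-dimensional radiation balance. The sketch below uses standard textbook values for the solar constant and the albedo; these are my own assumptions, not numbers taken from [2-12]:

```python
# Zero-dimensional radiative balance behind the greenhouse argument above:
# absorbed solar radiation = emitted infrared radiation (Stefan-Boltzmann law).
SIGMA  = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0     = 1361.0    # solar constant, W m^-2
ALBEDO = 0.30      # fraction of sunlight reflected back to space

absorbed = S0 * (1 - ALBEDO) / 4          # averaged over the whole sphere
t_no_greenhouse = (absorbed / SIGMA) ** 0.25

print(f"effective temperature without greenhouse gases: "
      f"{t_no_greenhouse - 273.15:.0f} deg C")   # ~ -18 deg C
# The observed surface mean of ~+15 deg C is about 33 K higher; this
# difference is the natural greenhouse effect described in the text.
```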

It all started 4.5 billion years ago when our solar system emerged from interstellar matter at the edge of the Milky Way galaxy. The sun at its center is a fusion reactor in which hydrogen nuclei combine to helium, thereby releasing a lot of energy. Calculations show that at the beginning of our earth the brightness of the sun was some 25% to 30% weaker than today. This should have resulted in a ~20°C colder environment on earth, well below the freezing point. There would also have been more reflected solar radiation (a higher albedo), mainly because of the large area of ice, and there would have been less water vapor due to the lower temperature caused by the weaker solar radiation, so the temperature would likely have been even lower. This should have resulted in a completely frozen planet over the first 3 billion years. However, there are many geological hints that during most of that time there was flowing water on our planet. This apparent contradiction is known as the “faint young sun paradox”. To resolve it, we have to assume that during the time of the weaker sun there must have been a stronger greenhouse effect. But how is it possible that over billions of years there was always – or mostly – the correct mixture of greenhouse gases present to balance the changing radiation from the sun? The answer lies in a number of closed-loop controls. The most important one is the carbon cycle, which has regulated the concentration of CO2 over millions of years. Through the weathering of rock – mainly in the mountains – CO2 plus water reacted with many minerals1. Without a reaction in the opposite direction, CO2 would have disappeared from the atmosphere over millions of years, causing a significant temperature drop. Fortunately, the continents were drifting and as a consequence sediments were pushed into the magma and CO2 was released back into the atmosphere via volcanoes. Since weathering depends on climate, a closed loop results: if the climate warms up, the weathering process speeds up, leading to more CO2 being removed from the atmosphere, which decreases the greenhouse effect and counteracts a further temperature increase. However, this loop cannot cushion a quick change in temperature – the time needed for an exchange of CO2 between the earth’s crust and the atmosphere is much too long.

Several times, most recently ~600 million years ago, our earth experienced a so-called “snowball earth”. This means that all continents, even in the tropical regions, and the oceans were covered by an ice crust several hundred meters thick. The closed CO2 loop helped to overcome this deep-frozen state: while the CO2 sink – the weathering process – stopped below the ice, the CO2 source – volcanism – was still active. Over millions of years the concentration of CO2 increased steadily, and even with the high albedo the greenhouse effect eventually became strong enough to melt the ice. The CO2 concentration required for this caused the temperature to rise to levels of ~50°C, and it took a long time to come back to “normality”. Geological data give evidence that snowball earth periods were followed by times of high temperatures. Some biologists regard this most recent climate catastrophe as the trigger for the evolutionary development that followed.

Let us now take a closer look at the last 500 million years, for which we know the position of the continents and oceans and are able to reconstruct the ups and downs of our climate by analyzing sediments. Two cycles can be identified: the first from 600 to 300 million years ago and the second lasting until today. The first cycle started with a concentration of ~5,000 ppm CO2, which steadily declined to levels similar to those of today (~300 ppm). During the following ~100 million years it increased again towards 2,000 ppm and then decreased continuously to a range of 200 to 300 ppm. The times with high levels of CO2 correlate with little and mostly no ice coverage, while the times of low CO2 levels show considerable ice coverage, with ice sheets expanding from the poles towards the 30th degree of latitude. One piece of evidence for the second warm phase without any ice coverage is the Cretaceous period from 140 to 65 million years ago. In this era dinosaurs even lived in polar regions, as findings in Alaska and Spitsbergen show. One special finding is worth mentioning, as it is linked to our anthropogenic CO2 addition. The continuous cooling mentioned above, which started ~200 million years ago, did not happen undisturbed: high-resolution sediment data from 55 million years ago show that within less than 1,000 years there was a sudden temperature increase of about 6°C, which then took ~200,000 years to go back down to previous levels. The temperature increase can be attributed to an increase of carbon in the atmosphere. Three possible causes are

  • A release of methane from deep sea methane-ice
  • The eruption of a huge volcano
  • A meteor strike

A fourth possibility was elaborated by DeConto et al. [2-13]. These authors concluded that a thawing of the permafrost was a possible cause of this event, which would mirror what is happening today. Whatever the exact reason, it can be seen that a quick increase of carbon – like the one mankind is causing now – is quickly followed by a shift in temperature, which then needs a very long time to recover.

We will now analyze the last 350,000 years in more detail, the period in which the first hominids, including Neanderthals and Homo sapiens, appeared. From that time onwards, relevant data can be derived from the analysis of ice cores. A well-known example is the Vostok ice core drilled in the Antarctic in the 1980s and 1990s, from which the CO2 concentration (among many other data) over a time span of 420,000 years could be determined. The picture emerging from a multitude of such experiments is clear evidence of three distinct cycles, as summarized in Figure 2.6.

Figure 2.6 Temperature and CO2 variation over the last 350,000 years [2-12].

The correlation of increasing CO2 concentration with increasing temperature, and vice versa, is obvious. An increase of CO2 from ~200 ppm to ~290 ppm comes with a temperature rise of ~11°C in this record. The same pattern can be observed in the analysis of many ice cores, with a calculated average temperature difference of ~8°C. This cycle happened three times and can clearly be associated with three ice-age cycles. It is interesting to note that the warming up takes only ~10,000 years, while the cooling down is much longer, at ~90,000 years. The reasons for the ice-age cycles are well understood today and can be attributed to a superposition of periodic variations (Milankovitch cycles [2-14]). It was possible to correlate these periodic changes with astronomical parameters: the eccentricity of the earth’s orbit (~100,000 years), the obliquity, i.e. the variation of the tilt of the rotational axis (~41,000 years), and the precession of the rotational axis (~23,000 years). Even if there are some critical discussions on the long 100,000-year cycle, the fact that this cycle has physical causes is well acknowledged. For example, Muller et al. [2-15] discussed as another possibility the periodic movement of the earth out of the plane of its orbit around the sun.

Cause and effect were the exact opposite of today’s situation. While today the increase of anthropogenic CO2 causes the temperature to rise, in the past it was the changing orbital geometry between earth and sun that caused the temperature changes. As a result of the different temperatures, the CO2 concentration in the atmosphere followed from the temperature-dependent equilibrium between air and ocean (and some other effects). These long-term changes will also happen in the future, causing similar effects, but they should not be misused as an argument for continuing with business as usual.

The closer in time we get to today, the more detailed the available data become, allowing reconstructions even at a local level. Scientists are now able to identify about 20 events (the so-called Dansgaard-Oeschger events) with sudden and dramatic climate changes during the last ice age. In Greenland, for instance, there were temperature rises of up to 12°C within only 10–20 years, which then persisted for several centuries. The reasons for this were sudden changes of ocean currents in the North Atlantic, bringing huge amounts of warm water to this region. Very large icebergs might also have induced such changes.

While CO2 concentration and temperature have remained fairly constant over the last 10,000 years, we can clearly see the distinct and quick increase of CO2 produced by mankind over the last 150 years. We have to go back a couple of million years in the earth’s history to find again a concentration like the 390 ppm CO2 measured in 2010. At that time the ice sheets were largely non-existent, and this is exactly what has been happening over recent years: the melting of sea ice in the Arctic and the shrinking of glaciers globally. Rising temperatures are additionally causing the Arctic permafrost to thaw, leading to a release of methane and CO2 into the atmosphere. This results in an additional temperature increase which accelerates the developments described above. Schaefer et al. [2-16] estimated a decrease in permafrost area of between 29% and 59% and a 53–97 cm increase in active layer thickness by 2200. The corresponding cumulative carbon flux into the atmosphere was calculated at 190 ± 64 Gt carbon. Comparing this with the current annual release of CO2, which is ~33 Gt CO2 (or ~9 Gt carbon), shows the significant additional amount which with high probability will come from thawing permafrost. Even on a short time scale the authors predict that from the mid-2020s onwards this permafrost carbon feedback will transform the Arctic from a carbon sink into a carbon source. In order to limit the temperature increase caused by greenhouse gases to the desired maximum of 2°C, an even larger reduction in fossil fuel emissions is therefore needed than is commonly discussed in public debates.
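To put the permafrost figure into perspective, here is a short conversion from CO2 to carbon (using the molar mass ratio 12/44) and a comparison with today's annual emissions, based only on the numbers quoted above:

```python
# Putting the permafrost numbers from Schaefer et al. [2-16] into perspective.
co2_to_c = 12.0 / 44.0            # molar mass ratio carbon / CO2

annual_emissions_co2 = 33.0       # Gt CO2 per year (current, as quoted)
annual_emissions_c   = annual_emissions_co2 * co2_to_c   # ~9 Gt C per year

permafrost_release_c = 190.0      # Gt C cumulative by 2200 (+/- 64 Gt C)

print(f"annual emissions: {annual_emissions_c:.1f} Gt C")
print(f"permafrost release equals ~{permafrost_release_c / annual_emissions_c:.0f} "
      f"years of today's fossil emissions")   # ~21 years
```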

According to the authors of [2-12], changes in solar activity cause only minor effects, which can nonetheless be observed on the shorter timescale of the last centuries. By combining the knowledge of the past and carefully analyzing current trends, the authors discuss the following potential impacts on our future climate:

  • a doubling of the pre-industrial level of CO2 from ~280 ppm to 560 ppm would cause a temperature rise of (3 ± 1)°C
  • in order to limit the increase to only 2°C, this concentration should not exceed 450 ppm, which corresponds to a further addition of only ~750 Gt CO2
  • as we have already reached 390 ppm and are adding ~1.4 ppm every year, there are only ~40 years left before we must reach a point of no further carbon release (a small check of this arithmetic is sketched after this list); even if we were to completely stop the burning of ALL fossil sources (NO gas, oil and coal) by then (~2050), we would still have methane released by animals, CO2 from cement production and permafrost thawing and much more – hence we will most probably be heading towards 3+°C in the future
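The “~40 years left” figure in the last bullet follows directly from the quoted concentrations and the annual growth rate; a minimal sketch:

```python
# Quick check of the "~40 years left" arithmetic from the list above.
current_ppm   = 390.0
target_ppm    = 450.0     # level roughly consistent with a 2 deg C limit
annual_growth = 1.4       # ppm added per year at today's emission rate

years_left = (target_ppm - current_ppm) / annual_growth
print(f"{years_left:.0f} years until 450 ppm is reached")   # ~43 years
```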

This increased level of carbon dioxide and the accompanying temperature rise will have the following consequences:

  • more and more severe weather extremes (hurricanes, tornados, cyclones, floods and droughts) – it is less the frequency of occurrence that changes than the intensity and virulence
  • global decrease in glaciers
  • decrease of the area of the Arctic sea ice (an ice-free polar ocean during the summer is becoming reality)
  • increased thawing of permafrost with considerable release of carbon dioxide leading to an even greater temperature increase
  • rise of the mean sea level: centimeters to decimeters due to the increased volume of water caused by the temperature rise, decimeters to meters due to the melting of existing ice sheets (smaller contributions for sea ice, +7 m if all the ice on Greenland were to melt, +6 m if all the ice of the West Antarctic were to melt and +50 m if the most stable East Antarctic ice sheet were to melt). If all ice sheets disappeared, as was the case in times of very high CO2 concentrations tens of millions of years ago, the sea level would rise by more than 60(!) m – even 1 m of sea level rise would have a catastrophic effect globally on areas inhabited by humans!
  • Changes of ocean currents causing dramatic local climate changes

In summary, there is far more scientific evidence supporting a serious impact of anthropogenic carbon dioxide on climate change than supporting the assumption that we may continue with business as usual. This was impressively demonstrated by two studies in 2004: one analyzed ~1,000 scientific publications on the topic of global climate change [2-17], the other ~600 articles in US daily newspapers (performed by the University of California). Of the scientific publications, 75% supported the fact that man-made CO2 and other greenhouse gases are indeed responsible for climate change; 25% did not comment on this specific point. The authors concluded that among serious scientists there is obviously general agreement that anthropogenic CO2 seriously affects our climate. In contrast, the analysis of the newspaper articles showed that 35% of them emphasized the anthropogenic contribution but also reported the opposite view, only 27% agreed entirely, while another 27% took the opposite position. As the public debate is mostly influenced by the discussions in newspapers, we are unfortunately faced with a “balance as bias”, as it was called. It is no surprise that the public debate is also heavily sponsored by those think tanks which receive generous financing from the industries benefitting from the “business as usual” approach and from denying the impact of anthropogenic CO2 on climate change.

2.5 Problems with Nuclear Energy

In the 1960s there was a big worldwide hype affecting all political parties and spurring them on to introduce nuclear power in order to solve – so it was believed in those days – all future energy needs: first with fission reactors, then with fast breeders and ultimately with fusion reactors. Although it was argued that this technology would be for peaceful applications, we know that the cold war was also responsible for a lot of nuclear weapons. I also remember serious publications from those days arguing that by the turn of the century – which came and went 13 years ago – it would no longer be necessary to have electricity meters in households, because electricity from nuclear power would be so cheap that it would be a waste of money to install them. Others argued that this cheap nuclear electricity could be used to keep the motorways free of ice in winter. The reality now looks quite different!

We know that investments in nuclear reactors are getting more and more expensive – especially if all possible accidents are taken into account – and that the required uranium is limited in quantity, lasting only for some decades if only fission reactors are considered. This fact was, and still is, the reason for the push to develop fast breeder reactors, which could indeed breed enough fissionable uranium and plutonium for the coming centuries, as seen in Figure 2.4 and Table 2.6.

But this is no solution for the future – not in the short term and certainly not in the long term! Firstly, the Fukushima catastrophe in Japan in 2011 demonstrated – once again after Chernobyl in 1986 – that the residual risk should not be tolerated by society. It is frightening to see that only some months after the tragedy, when newspapers had been full of articles, the topic has by now almost disappeared from the public debate. It is often argued that in other countries such a combination of earthquake and tsunami could not happen. But it is also known that terrorist attacks could equally cause a meltdown of the core – anywhere and at any time! There is also a common misunderstanding of what the residual risk, specified in accidents per reactor year, really means. In 1993, a study [2-18] by GRS (Gesellschaft für Anlagen- und Reaktorsicherheit = society for plant and reactor safety) came to the conclusion that such a tragedy may happen every 20,000 reactor years. Many people misread this and believe that such an accident may happen only once in 20,000 years, by which time we will have solved the problems, and that we should therefore continue. But the term “reactor years” implies that we have to divide the 20,000 reactor years by the total number of reactors. With approximately 400 reactors running, the result is that we should expect such a tragedy every 50 years – and Chernobyl happened only 25 years before Fukushima! Hence the study was in reality optimistic by a factor of two. The study also concluded that with known safety measures this risk could be reduced by a factor of 10. If we take into account the “near major accidents” which happened in 1979 at Three Mile Island (US) and in 2006 at Forsmark (Sweden), we must conclude that there is a real and likely risk of such an accident on a rather short time scale (~decades), and not the nice-looking number of 500 years. There is no way to argue that by adding technical features this risk could be brought to zero – there always remains a real probability that a disaster will happen. And imagine what would have happened in Fukushima if the wind had not been so merciful as to blow the nuclear cloud towards the ocean but had instead steered it towards the Greater Tokyo area, where 30 million people live – we would have witnessed an inferno.
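The arithmetic behind this argument is simple enough to be spelled out explicitly; the 400 reactors are the approximate number quoted above:

```python
# The "reactor years" arithmetic: a frequency per reactor-year only becomes a
# waiting time once it is divided by the number of reactors in operation.
accident_every_reactor_years = 20_000   # result of the GRS study [2-18]
reactors_in_operation        = 400      # approximate number worldwide

expected_years_between_accidents = accident_every_reactor_years / reactors_in_operation
print(expected_years_between_accidents)   # 50 years, not 20,000 years
```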

Even if the above arguments were not convincing enough, there is another unresolved issue, namely the storage of waste from nuclear reactors. Nowhere in the world is there a viable solution. Even the storage in salt domes being considered in Germany must be questioned in view of the disastrous water penetration into the existing “Asse” salt dome in northern Germany. Knowing that the waste has to be stored safely for many hundreds of thousands of years, it is difficult to believe that any solution will ever be found. Researchers at the Karlsruhe Institute of Technology are experimenting with a method of converting the problematic long-lived radionuclides into much shorter-lived elements. This process is called “Partitioning and Transmutation” (P&T): plutonium and the so-called minor actinides (mainly neptunium, americium and curium) are extracted chemically and thereafter transformed with energy-rich neutrons into short-lived substances. This process would reduce the amount of high-level radioactive waste by a factor of 5, but would increase the quantity of low- and medium-level radioactive waste several times [2-19]. Even the authors from the institute conclude at the end of their article that the ambitious targets set for nuclear reactors of the 4th generation – more safety, increased sustainability and higher profitability – can only be reached in combination with expanded closed nuclear fuel cycles, of which P&T would then be a natural part. Knowing that this process has been investigated since the 1960s without success, I very much doubt that this new nuclear alchemy would really be cost effective overall – if it is technically feasible at all.

Let us now take a look at what is often called the “third generation” of nuclear reactors, the “fast breeder”. Without going into details, there are two major reasons why I would not like this technology to be introduced. Firstly, the reprocessing of spent fuel is a must in order to extract the bred plutonium from the fuel rods for further use. This implies that we would have to handle many tons of plutonium, with the associated risk of proliferation. Secondly, and even more importantly, while it may be possible to construct an inherently safe fission reactor, to my knowledge this is not possible with a fast breeder. In small pebble-bed fission reactors using ThO2 as fissile material, the negative temperature coefficient would prevent a melting of the reactor core even after a complete breakdown of the cooling and thereby provide inherent safety. Fast breeder reactors, however, have to be actively controlled at every moment, which, as with any technology operated by humans, cannot be guaranteed at all times.

Finally, a word on the development of fusion, which looks very promising at first glance. It involves mimicking the reactions happening in the sun, and with a few materials we could have energy forever. But the challenges to be solved are enormous. When I was listening as a student in the 1960s to the lectures of Prof. Pinkau from the Max Planck Institute in Garching (Munich), one of the leading places for this research, he stated that by the turn of the century a first pilot reactor should be up and running. When I discussed this with his successor, Prof. Bradshaw, from the same institute in the late 1990s, he – and also his colleagues worldwide – envisaged this pilot reactor possibly in another 20 to 30 years from that time. For physical reasons such a reactor should have a minimum size of 5+ GW, because of volume-to-surface considerations due to the high temperatures needed in the plasma, and it should run only as a base load provider. Whether this is an appropriate solution for the future is very questionable, as decentralized power supply will be superior to an extremely concentrated and centralized one. But even if all technical problems were solved, it is very difficult to see whether it will ever be possible to produce electricity with this technology below 5 $ct/kWh, with all cost elements, not only safety but also insurance, taken into account – which is possible with many renewable technologies, including PV, as will be demonstrated later.

Looking at the European and worldwide R&D spending on fusion, one can critically question the huge amount of money already spent and, even more importantly, the money that will be spent in the future. In Europe it was recently suggested that, because the next pilot reactor ITER (International Thermonuclear Experimental Reactor in Cadarache, France) needs more R&D money than originally scheduled, other R&D areas like renewables should decrease their spending by the same amount in order to keep the total budget unchanged. As a physicist, I certainly support research aimed at a fundamental understanding of fusion – but for this we need only a small fraction of the money required to pilot a fusion reactor, which I deem to be neither necessary nor practically feasible.

I do very much like nuclear fusion – however, only where it is already up and running, in our sun, with the energy produced transported as sunlight to earth, where it can be cost effectively transformed into electricity and heat through photovoltaics and other renewable technologies.

1 As an example, olivine (Mg2SiO4), present in many volcanic rocks, was transformed into magnesium bicarbonate and quartz (Mg2SiO4 + 2 H2O + 4 CO2 → 2 Mg(HCO3)2 + SiO2).
