6.7. Redefining Force and Energy

All fundamental definitions currently available in New Science emerge from Newton's laws. Let us review the conventional definitions and then present the scientific definitions that emerge from the above section.
Force: Conventionally, a force is defined as an influence that tends to change the motion of an object. The inherent assumption is that this “force” is external to the object. This is a false premise, because the entire creation is internal and interconnected, as presented in the recent works of Islam et al. (2010a, 2012) and Khan and Islam (2012).
Currently, it is believed that there are four fundamental forces in the universe: gravity, the weak nuclear force, the electromagnetic force, and the strong nuclear force, in ascending order of strength. In mechanics, forces are seen as the causes of linear motion, whereas the causes of rotational motion are called torques. The action of forces in causing motion is described by Newton's laws under ordinary conditions. Furthermore, forces are treated as inherently vector quantities, requiring vector addition to combine them. This further characterization is yet another tactic to cover up the false first premise.
Khan and Islam (2012) provide a detailed deconstruction of Newton's laws. With the scientific theory of the previous section, one can redefine force as that which drives the universal movement. It is constant, absolute, and immutable. With this definition, there is no need to characterize force further into the above-mentioned categories. This replaces the notion of gravity in the conventional sense. The source of this force is the absolute light that is omnipresent. This description answers the question of what force makes the entire galactic system move—a question that has perplexed modern scientists (Cowen, 2012).

6.7.1. Energy

Energy is commonly defined as the capacity for doing work. One must have energy to accomplish work—it is like the “currency” for performing work. To do 100 J of work, one must expend 100 J of energy.
New Science postulates that the purest form of energy is light, which is composed of photons. These photons are thought to have zero mass. As stated earlier in this chapter, this assertion disconnects mass from energy yet invokes Einstein's formula, E = mc2, which itself is based on Maxwell's formulation that treats energy as a collection of solid, spherical, rigid balls (similar to atoms). The assertion of zero mass also invokes infinite speed (a notion that was promoted by Aristotle but discarded by Ibn Al-Haytham some 900 years ago). This obvious gaffe is “remedied” by forcing the speed of light, c, to be constant and the maximum speed attainable by any particle. Apart from the fact that a zero mass would not qualify to be called a “particle,” this also introduces an obviously spurious solution to the equation E = mc2 and renders it an absurd concept. This mathematical fallacy is “solved” with the dogmatic assertions of quantum physics, whereby the product of zero and infinity is assigned a value that is a function of the frequency of the photon. Furthermore, it is asserted that a photon can be converted into a particle and its antiparticle (a process called pair creation), thereby converting energy into mass. All equations, therefore, give an answer but are completely devoid of any physical significance.
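For reference, the conventional relations under critique here can be made concrete. The following minimal Python sketch (illustrative only; the frequency value is an arbitrary choice corresponding roughly to green visible light) computes the photon energy E = hν and the mass that E = mc2 would formally assign to it:

    # Conventional photon-energy and mass-energy relations (illustrative sketch only)
    h = 6.62607015e-34   # Planck constant, J*s
    c = 2.99792458e8     # speed of light, m/s

    nu = 5.4e14          # arbitrary example frequency (~green visible light), Hz
    E = h * nu           # photon energy, E = h*nu, in J
    m_equiv = E / c**2   # mass formally assigned by E = m*c^2, in kg

    print(f"E = {E:.3e} J, equivalent mass = {m_equiv:.3e} kg")

The point of the sketch is only to show where the frequency dependence enters the conventional account; it does not resolve the logical objection raised above.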
Similar illogical attributes are assigned to the Higgs boson, the neutrino, and a number of other particles, some of which have zero mass. In addition, it is asserted that certain particles, such as neutrinos, can travel through opaque material, albeit at a speed lower than that of light (photons). To compensate for the gravitational force, which conventionally cannot exist under zero-mass conditions, it is asserted that the Higgs particle is the carrier of a force. This force mediated by the Higgs boson is considered to be universal, as the Higgs boson interacts with all kinds of massive particles, no matter whether they are quarks, leptons, or even massive bosons (the electroweak bosons). Only photons and gluons do not interact with the Higgs boson. Neutrinos, the lightest particles with almost zero mass, barely interact with the Higgs boson. This description and assignment of special ‘power’ to certain particles is characteristic of the pragmatic approach (Khan and Islam, 2012). In simple terms, these are stopgap tactics for covering up fundamental flaws in the basic premises.
Neutrinos are considered to be similar to electrons, with one crucial difference: neutrinos do not carry electric charge. Because neutrinos are electrically neutral, they are not affected by the electromagnetic forces that act on electrons. Neutrinos are affected only by a “weak” subatomic force of much shorter range than electromagnetism and are therefore able to pass through great distances in matter without being affected by it. If neutrinos have mass, they also interact gravitationally with other massive particles, but gravity is by far the weakest of the four known forces. Such repeated characterization of matter and energy with contradictory traits has been the most prominent feature of New Science. The characterization offered by Islam et al. (2012) and Khan and Islam (2012) eliminates the need for such devices.
When it comes to “heat energy,” New Science is full of gaffes as well. The entire field of “heat engineering” is based on Lord Kelvin's work. Lord Kelvin, whose “laws” are a staple of modern-day engineering design, believed that the earth is progressively degrading toward an eventual “heat death.” So, if Kelvin were correct, we would be moving toward an ever greater energy crisis and would indeed need to worry about how to fight this “natural” death of our planet. Kelvin also believed that flying an airplane was an absurd idea, so absurd that he did not care to be a member of the Aeronautical Society. Anyone would agree that it is not unreasonable to question this assertion of Lord Kelvin, yet the moment one suggests that nature progressively improves if left alone (by humans, of course), many scientists break out in utter contempt and invoke all kinds of arguments with doctrinal fervor. How, then, do these scientists explain that, if the earth is progressively dying, life evolved from nonbiological materials and eventually a very sophisticated creature, Homo sapiens (the thinking species), came to exist? Their only argument becomes the one that has worked for all religions: “you have to believe.” All of a sudden, it becomes a matter of faith, all the contradictions that arise from Kelvin's assertion become paradoxes, and we mere humans are not supposed to understand them. Today, the Internet is filled with claims that Kelvin is actually a god, and there is even a society that worships him. This line of argument cannot be scientific (Islam et al., 2010).
Modern scientists claim to have moved away from the doctrinal claims of Kelvin. However, no theory has challenged Kelvin's original premise. Even Nobel prize-winning works (see, for instance, the work of Roy J. Glauber, John L. Hall, and Theodor W. Hänsch on light theory) take Kelvin's concept of absolute temperature as fact. Theoretically, at that point there is zero energy; hence, matter would not exist either. Matter is rendered nonexistent because it does not move—an absurd state. Table 6.5 shows how everything in creation is in a state of motion, including time itself. However, instead of recognizing this obviously spurious premise, New Science offers the following explanation:

“If you take away energy from an atom, you do so by lowering the energy level of its electrons, which emits a photon corresponding to the energy gap between the electron bands. Keep doing that until the electron is absorbed by the nucleus, and converts a proton to a neutron. Now you need to extract energy from the nucleus. How are you going to do that? How are you going to shield the resulting neutron from the influence of the rest of the universe, including radio waves, that penetrate everything?”

Another explanation attempts to justify the discontinuity between mass and energy by saying, “All matter has energy, unless it is at absolute zero temperature, true. But that amount of energy is tiny compared to the energy you could get if the matter were totally converted to energy via Einstein's famous equation, E = mc². But there is no way for that to happen unless you are dealing with antimatter. Even the sun converts only a tiny percentage of the matter to energy, but that tiny percentage (because of the c² term) produces a lot of energy.” In this, the notion of “antimatter” is invoked.
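To put “a tiny percentage produces a lot of energy” in perspective, the same equation can be inverted. In the following minimal sketch, the solar luminosity is taken as the commonly quoted value of about 3.8 × 10^26 W; the result is the mass that, in the conventional account, the sun radiates away every second:

    c = 2.99792458e8                      # speed of light, m/s
    L_sun = 3.8e26                        # approximate solar luminosity, W (J/s)

    mass_loss_per_second = L_sun / c**2   # kg converted to radiation per second
    print(f"{mass_loss_per_second:.2e} kg/s")   # on the order of 4 x 10^9 kg/s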
Natural light or heat is a measure of radiation from a system (called “material” in the above section). This radiation is continuous and accounts for the change in mass within a system. In this, there is no difference between heat generation and light generation, nor is there any difference between the various types of “radiation” (X-ray, gamma ray, visible light, infrared, etc.) other than their frequencies. This can be reconciled with New Science for the limiting cases that posit an exponential relationship between reactants and products (the Arrhenius equation) through the time function. Such a relationship is continuous in time and space. For instance, as long as the assumption of continuity is valid, any substance is going to react with the surrounding media. The term “reaction” here implies the formation of a new system that will have components of the reactants. This reaction has been explained by Khan et al. (2008) as the accumulation of snowflakes that forms an avalanche. Islam et al. (2014) developed a similar theory that also accounts for energy interactions and eliminates separate balance equations for mass and energy. This theory considers energy or mass transfer (chemical reaction or phase change) as the merger of two galaxies. Before the merger, the two galaxies have different sets of characteristic frequencies. However, after the merger, a new galaxy is formed with an entirely new set of characteristic frequencies. Such phenomena are well understood in the context of cosmic physics. Picture 6.2 shows a NASA image of two galaxies that are on a collision course. Cowen (2012) reports the following:
Picture 6.2 Two galaxies reported to be on a collision course. Cowen, 2012.

“Four billion years from now, the Milky Way, as seen from Earth in this illustration, would be warped by a collision with the Andromeda galaxy. It's a definite hit. The Andromeda galaxy will collide with the Milky Way about 4 billion years from now, astronomers announced today. Although the sun and other stars will remain intact, the titanic tumult is likely to shove the Solar System to the outskirts of the merged galaxies.”

Such a collision does not involve the merger of two suns or of any planets or moons. It simply means the reorientation of the stars and planets within a new family. Note how conservation of mass is strictly maintained as long as an artificial boundary is not imposed. In New Science, such an artificial boundary is imposed by confining a system within a boundary and imposing “no-leak” boundary conditions. Similarly, adiabatic conditions are imposed after creating artificial heat barriers.
With the galaxy model, physical and chemical changes can both be adequately described as a change in overall characteristic frequency. So, how does heat or mass get released or absorbed? As stated above, “the titanic tumult” would cause the stars to be “shoved” toward the outskirts of the newly formed galaxy. If they are indeed placed around the outskirts, this translates into excess heat near the boundary. However, if those stars are “shoved” inside the new giant galaxy, to an outsider it would appear to be a cooling process, hence an endothermic reaction. In this context, the “titanic tumult” is equivalent to the “spark” that lights up a flame or starts a chain reaction. It is also the equivalent of the onset of life or death, as well as of the “big bang” in the universal sense. Even though these terms have been naturalized in the New Science vocabulary, they do not bear scientific meaning. Islam et al. (2012, 2014) recognized them to be unknown and unexplainable phenomena that cause the onset of a phase change. They can be affected by heat, light, and pressure, which are direct results of changes within the confines of a given system.
The source of heat is associated with “collisions,” as represented above in the context of galaxies, whether at the subatomic level (known as chemical reactions), in combustion within a flame, or at a giant scale (such as solar radiation). For our system of interest, i.e., the earth, the primary source of heat is the sun, which radiates mass at various wavelengths. New Science recognizes “the solar constant” as the amount of power that the sun deposits per unit area directly exposed to sunlight. The solar constant is approximately 1368 W/m2 at a distance of one astronomical unit (AU) from the sun (that is, on or near the earth). Sunlight at the top of the earth's atmosphere is composed (by total energy) of about 50% infrared light, 40% visible light, and 10% ultraviolet light. In other words, the heat source is inherently linked to the light source. As discussed in previous sections, the transition between different forms of energy is continuous and should be considered part of the same phenomenon, characterized here as the “dynamic nature of everything in creation.” These are not “mass-less” photons or “energy-less” waves; they are actually part of the mass transfer that originates from the radiation of the sun.
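The figures quoted in this paragraph, together with the clear-sky surface value of about 1000 W/m2 given later in this section, can be combined in a short arithmetic sketch (Python, for illustration only):

    solar_constant = 1368.0      # W/m2 at 1 AU, top of the atmosphere
    split = {"infrared": 0.50, "visible": 0.40, "ultraviolet": 0.10}

    for band, fraction in split.items():
        print(f"{band:12s}: {fraction * solar_constant:6.1f} W/m2")

    surface = 1000.0             # approximate clear-sky irradiance at the surface, W/m2
    attenuation = 1.0 - surface / solar_constant
    print(f"overall atmospheric attenuation ~ {attenuation:.0%}")   # roughly 27%

The roughly 27% figure is consistent with the “nearly one-third” deflection discussed next.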
Before solar emissions enter the atmosphere of the earth, nearly one-third of the irradiative material is deflected through the filtering action of atmospheric particles. How does this occur? It is similar to the process described above as galactic collision. During this process, the composition of the atmospheric layer changes continuously and “new galaxies” form continuously in the “tumult” mode, while some of the material is deflected outside the atmosphere and the rest penetrates it, triggering similar “tumult” events through the various layers of the atmosphere. These layers together act like a stacked filtering system. Following is a brief description of the different layers of the atmosphere.
1. The exosphere is the thinnest layer (in terms of material concentration). It is the upper limit of the earth's atmosphere.
2. The thermosphere is the layer in which auroras occur. It sees intense ionic activity.
3. The next layer is the mesosphere. This is the layer in which meteors and other solid fragments burn up. The word “solid” implies the most passive level of activity of the constitutive material; see Figure 6.8, in which “solid” represents a collection of “dust specks” that exhibit the slowest characteristic speed. This is another way of characterizing matter in terms of the solid, liquid, and vapor states.
Within the earth, the following configuration applies. It is possible that such a configuration of various states also applies to other celestial entities, but that is not the subject of interest in the current context. Figure 6.12 shows how the relationship between characteristic speed and the physical state of matter is a continuous function. Note that the natural state of matter is an important consideration, particularly in relation to the human species and life. For instance, the most abundant matter on earth is water, which is most useful to the human species in its liquid state; it turns out that liquid is also the state in which water is most abundant. Among solids, clayey matter (SiO2; see the position of the “dust speck” in Figure 6.8) is the most abundant, and scientists are beginning to find that humans are also made of such matter. Here is a quote from the Daily Mail (2013):
Figure 6.12 Characteristic speed (or frequency) can act as the unique function that defines the physical state of matter.

“The latest theory is that clay - which is at its most basic, a combination of minerals in the ground - acts as a breeding laboratory for tiny molecules and chemicals which it 'absorbs like a sponge'.

The process takes billions of years, during which the chemicals react to each other to form proteins, DNA and, eventually, living cells, scientists told the journal Scientific Reports.

Biological Engineers from Cornell University's department for Nanoscale Science in New York state believe clay 'might have been the birthplace of life on Earth'.

It is a theory dating back thousands of years in many cultures, though perhaps not using the same scientific explanation.”

Clay also retains the most water—the most essential ingredient of life and organic material. As will be seen in other chapters, as well as later in this section, similar optima exist: visible light is the most abundant component of sunlight, and the earth is the densest of all the planets in the solar system. Overall, the earth's characteristic features make it the most suitable “habitat for mankind” (Khan and Islam, 2012).
4. The next layer of the atmosphere, called the stratosphere, is the most stable layer of the atmosphere. Many jet aircraft fly in the stratosphere because it is very stable. The ozone layer, which lies in the stratosphere, absorbs harmful rays from the sun. By the time the sun's rays enter the fifth and final layer, almost 30% of the total irradiation has been removed. The energy that remains (in the form of light and heat) is ideal for rendering the earth system totally sustainable and suitable for human habitation. The stratosphere is the layer most vulnerable to human intervention, and its disruption is the cause of global warming (Islam et al., 2012). This aspect is elaborated below.
The Intergovernmental Panel on Climate Change (IPCC) stated that there was a “discernible” human influence on climate and that the observed warming trend is “unlikely to be entirely natural in origin” (IPCC, 2001). The Third Assessment Report of the IPCC stated, “There is new and stronger evidence that most of the warming observed over the last 50 years is attributable to human activities.” Khilyuk and Chilingar (2004) reported that the CO2 concentration in the atmosphere between 1958 and 1978 was proportional to the CO2 emissions from the burning of fossil fuel. In 1978, CO2 emissions into the atmosphere from fossil fuel burning stopped rising and remained stable for nine years. They reasoned that if fossil fuel burning were the main cause, the atmospheric concentration should also have stopped rising; because it did not, they concluded that fossil fuel burning could not be the cause of the greenhouse effect. However, this reasoning is extremely shortsighted, and the global climate certainly does not behave linearly, as envisioned by Khilyuk and Chilingar (2004). Moreover, the “Greenhouse Effect One-Layer Model” proposed by Khilyuk and Chilingar (2003, 2004) assumes adiabatic conditions in the atmosphere that do not and cannot exist. The authors concluded that human-induced emissions of carbon dioxide and other greenhouse gases have a very small effect on global warming. This conclusion stems from the limitation of current linear computer models, which cannot predict temperature effects on the atmosphere other than at low levels. Similar arguments were made while promoting dichlorodifluoromethane (CFC-12) to relieve the environmental problems incurred by ammonia and other refrigerants after decades of use; CFC-12 was banned in the USA in 1996 for its impacts on stratospheric ozone depletion and global warming. Khan and Islam (2012) presented detailed lists of technologies that were based on spurious promises, and Zatzman and Islam (2007) complemented this list with a detailed list of economic models that are similarly counterproductive. Khilyuk and Chilingar (2004) also explained the potential impact of microbial activities on the mass and content of gaseous mixtures in the earth's atmosphere on a global scale. However, their study does not distinguish between biological sources of greenhouse gas emissions (microbial activities) and industrial sources (fossil fuel burning). Emissions from industrial sources possess different characteristics because they derive from diverse origins and travel different paths, which obviously have significant impacts on atmospheric processes.
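The weakness of the “emissions stopped rising, so concentration should stop rising” inference can be illustrated with a minimal sketch (Python; the numbers are purely illustrative and are not measured data). Atmospheric concentration responds to the cumulative stock of emissions, so even flat annual emissions keep adding to that stock:

    # Illustrative only: constant annual emissions still raise the atmospheric stock.
    annual_emission = 1.0        # arbitrary units per year, held constant
    airborne_fraction = 0.5      # assumed fraction remaining in the atmosphere
    stock = 0.0

    for year in range(1, 10):
        stock += airborne_fraction * annual_emission
        print(year, round(stock, 1))   # the stock rises every year despite flat emissions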
Current climate models have several problems. Scientists have agreed on the likely rise in the global temperature over the next century. However, the current global climate models can predict only global average temperatures; projection of climate change for a particular region is considered to be beyond current human ability. Atmosphere-Ocean General Circulation Models are used by the IPCC to model climatic features, but these models are not accurate enough to provide a reliable forecast of how the climate may change. They are linear models and cannot forecast complex climatic features. Some climate models are based on CO2-doubling and transient scenarios. However, such models, built around a doubling of the atmospheric CO2 concentration, cannot predict the climate under other scenarios. These models are also insensitive to the difference between natural and industrial greenhouse gases. There are some simple models that use fewer dimensions than the complex models and cannot represent complex systems. The Earth System Models of Intermediate Complexity are used to bridge the gap between the complex and simple models, but these models are not able to assess the regional aspects of climate change (IPCC, 2001).
Overall, any level of artificial products in the stratosphere will affect the final and most important layer of the earth's atmosphere.
5. The closest layer to the earth's surface is the troposphere. This layer contains half of the earth's atmosphere. All transient phenomena related to weather occur in this layer. This layer, too, contributes to the attenuation of sunlight; in the end, some 1000 W/m2 falls on the earth when the sky is clear and the sun is near the zenith. The multiple filtering system of the atmosphere is such that it filters out 70% of solar ultraviolet, especially at the shorter wavelengths.
The immediate use of solar energy in terms of sustaining human life is photosynthesis—the process that allows plants to capture the energy (through mass transfer) of sunlight and convert it to “live” chemical form. The energy stored in petroleum and other fossil fuels was originally converted from sunlight by photosynthesis in the distant past.
The most significant of these is the photosynthetic mechanism. There are two classes of photosynthetic cycle: the Calvin–Benson photosynthetic cycle and the Hatch–Slack photosynthetic cycle. The Calvin–Benson cycle is dominant in hardwoods and conifers. The primary CO2 fixation or carboxylation reaction involves the enzyme ribulose-1,5-bisphosphate carboxylase, and the first stable product is a three-carbon compound. This reaction is considered to be “light-independent.” This series of reactions occurs in the fluid-filled area of a chloroplast outside the thylakoid membranes. These reactions take the products of the light-dependent reactions and perform further chemical processes on them. The various stages of this process are carbon fixation, reduction reactions, and ribulose-1,5-bisphosphate regeneration. In describing this cycle of reactions, the role of light energy is marginalized, yet the process occurs only when light is available: plants do not carry out the Calvin cycle by night; instead, they release sucrose into the phloem from their starch reserves. This dependence on light holds regardless of the kind of photosynthesis (C3 carbon fixation, C4 carbon fixation, or Crassulacean acid metabolism). The exception is Crassulacean acid metabolism, also known as CAM photosynthesis, a carbon fixation pathway used by some plants as an adaptation to arid conditions. In a plant using full CAM, the stomata in the leaves remain shut during the day to reduce evapotranspiration but open at night to collect carbon dioxide (CO2). The CO2 is stored as the four-carbon acid malate and then used during photosynthesis during the day. The precollected CO2 is concentrated around the enzyme RuBisCO, increasing photosynthetic efficiency.
On the other hand, the Hatch–Slack photosynthetic cycle is the one used by tropical grasses, corn, and sugarcane. Phosphoenolpyruvate carboxylase is responsible for the primary carboxylation reaction. The first stable carbon compound is a C-4 acid, which is subsequently decarboxylated and then refixed into a three-carbon compound. These three steps define the canonical C4 photosynthetic pathway. Overall, the photosynthesis process shows how nature converts energy into mass, storing energy for long-term use. This must be understood in order to appreciate the role of natural processing in the context of petroleum usage.
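For reference, the conventional overall reaction behind this energy storage is 6CO2 + 6H2O + light → C6H12O6 + 6O2, with roughly 2800 kJ stored per mole of glucose. A one-line check (Python; textbook round numbers, used only as an order-of-magnitude anchor) converts this to energy stored per gram:

    energy_per_mol_glucose = 2.8e6   # J/mol, approximate heat of combustion of glucose
    molar_mass_glucose = 180.0       # g/mol
    print(energy_per_mol_glucose / molar_mass_glucose, "J per gram of glucose")   # ~15,500 J/g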
The process of energy-to-mass conversion is greatly affected by temperature (Fink, 2013). Sometimes temperatures are used in connection with day length to manipulate the flowering of plants. Chrysanthemums will flower for a longer period of time if daylight temperatures are 50 °F. The Christmas cactus forms flowers as a result of short days and low temperatures. Temperatures alone can also influence flowering. Daffodils are forced to flower by putting the bulbs in cold storage in October at 35–40 °F. The cold temperature allows the bulbs to mature. The bulbs are transferred to the greenhouse in midwinter, where growth begins. The flowers are then ready for cutting in three to four weeks.
Plants produce maximum growth when exposed to a day temperature that is about 10–15 °F higher than the night temperature. This allows the plant to photosynthesize (build up) and respire (break down) during an optimum daytime temperature and to curtail the rate of respiration during a cooler night. High temperatures cause increased respiration, sometimes above the rate of photosynthesis. This means that the products of photosynthesis are being used more rapidly than they are being produced. For growth to occur, photosynthesis must be greater than respiration. Temperature alone can affect this process.
Low temperatures can result in poor growth. Photosynthesis is slowed down at low temperatures. Since photosynthesis is slowed, growth is slowed, and this results in lower yields. Each plant has an optimum temperature that allows maximum growth. For example, snapdragons grow best when nighttime temperatures are 55 °F, while the poinsettia grows best at 62 °F. Florist cyclamen does well under very cool conditions, while many bedding plants grow best at a higher temperature.
Buds of many plants require exposure to a certain number of days below a critical temperature before they will resume growth in the spring. Peaches are a prime example; most cultivars require 700 to 1000 h below 45 °F and above 32 °F before they break their rest period and begin growth. This time period varies for different plants. The flower buds of forsythia require a relatively short rest period and will grow at the first sign of warm weather. During dormancy, buds can withstand very low temperatures, but after the rest period is satisfied, buds become more susceptible to weather conditions and can be damaged easily by cold temperatures or frost. This series of phenomena has immediate implications for seeds and the future of the biomass.
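The chilling requirement described above lends itself to a simple accumulation rule. The following sketch (Python; the temperature record is hypothetical) counts the hours that fall between the two thresholds:

    def chilling_hours(hourly_temps_f, low=32.0, high=45.0):
        """Count hours with temperatures above 'low' and below 'high' (degrees F)."""
        return sum(1 for t in hourly_temps_f if low < t < high)

    # Hypothetical hourly temperature record (degrees F) spanning three days
    record = [38, 40, 42, 44, 46, 50, 33, 35] * 9   # 72 hourly readings
    print(chilling_hours(record), "hours counted toward the 700-1000 h requirement")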
Overall, temperature represents the level of subatomic particle activity. Any rise in temperature increases the movement of all particles in the system. For certain systems, this suffices to trigger a chain reaction, while for others the temperature rise simply facilitates dispersion of the mass. In terms of phase change, Figure 6.12 shows how any change in temperature can trigger a phase change by altering the characteristic speed of a collection of particles.
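The exponential temperature dependence invoked earlier in this section (the Arrhenius relation, k = A exp(−Ea/RT)) shows how even a modest temperature rise can change a reaction rate substantially. In the following sketch (Python), the pre-exponential factor and activation energy are arbitrary illustrative values:

    import math

    R = 8.314          # gas constant, J/(mol*K)
    A = 1.0e7          # arbitrary pre-exponential factor, 1/s
    Ea = 60000.0       # arbitrary activation energy, J/mol

    def arrhenius_rate(T_kelvin):
        return A * math.exp(-Ea / (R * T_kelvin))

    print(arrhenius_rate(298.15))   # rate at 25 degrees C
    print(arrhenius_rate(308.15))   # rate at 35 degrees C: roughly double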
Similar effects are expected with pressure. Photosynthesis offers an example of the natural effect of pressure on organic reactions. Beer and Waisel (1982) studied photosynthetic responses to light and pressure (up to 4 atm) for two seagrass species abundant in the Gulf of Eilat (Red Sea). In Halodule uninervis (Forssk.) Aschers., pressure decreased net photosynthetic rates, while in Halophila stipulacea (Forssk.) Aschers. it had no effect on net photosynthetic rates. In both species, light saturation was reached at 300 μE (400–700 nm)/m2/s and the compensation point was at 20–40 μE (400–700 nm)/m2/s. Comparing these results to in situ light measurements, neither species should be light limited down to a depth of about 15 m, and H. stipulacea should reach compensation light intensities at about 50 m. The latter depth corresponds well to the natural depth penetration of this species. H. uninervis is never found deeper than 5 m in the Gulf of Eilat, and it appears that pressure, rather than light, is one of the factors limiting the depth penetration of this species. The differential pressure response of the two species may be related to aspects of leaf morphology and gas diffusion.
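The saturation and compensation figures reported by Beer and Waisel can be placed on a simple saturating light-response curve. The hyperbolic-tangent form below is a common empirical model, not the analysis used in the original study, and the parameter values are chosen only to reproduce the quoted saturation (about 300 μE/m2/s) and compensation (20–40 μE/m2/s) ranges:

    import math

    P_max = 10.0   # maximum gross photosynthetic rate (arbitrary relative units)
    I_k = 150.0    # light level marking the onset of saturation, uE/m2/s
    R_d = 2.0      # dark respiration (relative units); sets the compensation point

    def net_photosynthesis(I):
        """Simple saturating light-response model: P = P_max*tanh(I/I_k) - R_d."""
        return P_max * math.tanh(I / I_k) - R_d

    for I in (0, 30, 100, 300, 600):
        print(I, round(net_photosynthesis(I), 2))   # crosses zero near 30, saturates near 300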
Scientifically, confining pressure is responsible for creating a series of vibrations that conflict with the natural frequencies of matter. Because of the continuity of matter, these external vibrations cause reactions in the matter, which attempts to escape its confinement. Pressure alone can cause a series of oscillatory events that prompt fundamental changes in the subatomic structure of matter.