11
Complex Systems Appraisal: Sustainability and Entropy in a Worldwide Cooperative Context

This chapter is intended to provide an update on issues raised by many project managers (some of them affiliated with the Project Management Institute (PMI)) working in the area of information systems and business intelligence: they often state that our decision support systems (DSS), in a broad sense, are continuously growing and creating more and more information (that is to say, their related entropy is increasing). In addition, they consider this phenomenon irreversible because technical advances require us to move forward.

This assertion is questionable: in any engineering task intended to develop a new product or innovative service, “sustainability” has become the main factor to be considered when evaluating the relevance of human activity. Indeed, the purpose of “sustainable” development refers to a mode of technological development which preserves the resources and environment available to future generations. Problems arise from the fact that many people talk about sustainability but are unable to measure it or compare it to reference values: it is of great importance to know in which direction progress is heading.

11.1. Introduction

Currently, the only way to evaluate and measure the sustainability of a system, and thus its adequacy with respect to the new societal constraints, is to measure the “entropy generation” of the system [ROE 79]. It can be expressed either qualitatively (positive or negative) or through a variation ΔS (S being the entropy of the system).

As a reminder, the entropy generation of our society, during the last centuries of the industrial era, is mainly due to:

  • – consumption and waste of energy;
  • – irreversible use and destruction of limited raw materials and physical resources;
  • – increase in CO2 and other greenhouse gas emissions, e.g. methane;
  • – social exclusion due to the growing gap between Western and Third World countries; this leads to the decimation of whole tribes and/or cultural destruction, steady streams of refugees, etc., creating disorder and societal problems in terms of nourishment, racism and greed, safety and security, etc.

In comparison, and just to appreciate how people think in terms of ecology, and thus of nature preservation and its characteristics, we could say:

“Nature runs on sunlight.

Nature uses only the energy it needs.

Nature fits form to function.

Nature recycles everything.

Nature rewards cooperation.

Nature banks on diversity.

Nature demands local expertise.

Nature curbs excesses from within.

Nature taps the power of limits.”

Currently, society makes judgments concerning our industry, economy and governance based on the above views, even if they sometimes contradict this very philosophy. As soon as a huge disequilibrium appears, people do not perform a systemic analysis of the situation (e.g. human or economic development with 10 billion inhabitants); they simply condemn a partial political decision which does not fit the above constraints.

For these reasons, and to better develop sustainable systems, it is essential to explore some examples to see how these concepts can be applied, to analyze the underlying mechanisms and to recover certain phenomena and characteristics of these systems, knowing that in nature, as in life or in our information systems, the basic mechanisms are universal provided that certain transpositions are made. Such an approach allows us to better understand and act in everyday affairs.

Right now, the only way to assess the sustainability of our systems under development is to go through the so-called “entropy generation”: the objective is to provide society with systems that generate less entropy.

It is neither a fashionable trend nor a business opportunity, since the future of all humans is involved. It is a paradigm change, a question of ethics and awareness, and lastly a set of drastic changes from standards, policies and practices, to our own values, consciousness and ways of life.

In this chapter, we will study only some aspects of this issue, related to information, information systems and decision-making, by linking them to notions of time, quantum fluctuations and entropy.

This is especially important since we are talking about worldwide collaboration, in which everything is interdependent and involves each of us.

In order to make this material accessible to readers who are not familiar with physics, some examples will be used as illustrations, avoiding theoretical and indigestible demonstrations.

11.2. The context

The concept of system sustainability is often linked with system complexity. In everyday life, “sustainability” expresses the fact that people are afraid of losing control of a complex phenomenon; it is also associated with the need to preserve a situation in the face of apparently irreversible changes.

Under these conditions, is sustainability a marketing trap? Is it a real concern? Considering what is happening in our world, we cannot be sure yet because complexity is the normal evolution of nature.

What we know is that all the systems surrounding us are now integrating some of these concepts in their design, engineering and development.

Hereafter, we are only interested in the evolution of the technologies involved in the decision and control of our industrial and economic systems. Complexity is a pervasive concept which requires permanent adaptation of our DSS.

As we can see in Figure 11.1, two different ways of thinking are being integrated, together with the progressive development of many associated sciences and technologies:

  • – in a first stage, less than a century ago, two independent disciplines were established or developed: the scientific one and the psycho-sociological one;
  • – in a second stage, starting about 50 years ago, decision, control and management technologies evolved, and some new sciences and technologies were created or developed independently of each other (cybernetics, systems theory, nonlinear dynamic system (NLDS) modeling, etc.);
  • – now, a few closer relationships are being established between the different domains;
  • – in order to better appraise the complexity and sustainability of our systems, a full convergence of all the involved disciplines will be required in the future.
ch11-f001.jpg

Figure 11.1. Interconnection of evolving theories and sciences

This last step could be termed “convergence theory”: it implies working in a transdisciplinary and interdisciplinary way, integrating and assimilating all the complementary sciences defined above. This was the aim of the Advanced Technology Group (ATG) at IBM, devoted to the competitiveness of European development and manufacturing centers during the 1990s. It is the only way to understand global challenges, to prepare paradigm changes and to develop innovative and best-suited technologies. Today, this is partly covered by the so-called business intelligence technologies (although a rather conventional approach based on quantitative and qualitative database (DB) techniques is still involved).

As stated before, and keeping in mind Figure 11.1, we will develop some aspects related to the sustainability, complexity and entropy concepts of any complex system. Indeed, questions we have in mind are:

  • – in complex systems, are emerging properties typical of sustainable systems?
  • – in monitoring and control, are the engineering technologies suited to the design and development of sustainable systems?

11.3. Information systems: some application fields and the consequences

11.3.1. Entropy in information systems: business intelligence

In the context of our work, entropy measures the lack or loss of information, the uncertainties, disorders and inconsistencies in the generated information, and system complexity (in terms of the variety of resulting behaviors). It also covers the indefinite number of information items and their disparate possible interpretations, the loss of cognitive structures, etc.

In this section, we will formally introduce the role of the brain in any information system. For now, we can say that the “brain” is the support of most thought mechanisms and processes. Here, entropy characterizes the knowledge we have about an object or about the world; it thus conditions the possibility that anybody, i.e. any living agent, may have consciousness and a more or less developed thought. The more diverse, vague and scattered the generated knowledge items are, the more the entropy increases.

The well-known principle, “garbage in, garbage out”, is still valid: we cannot properly engage and apply our mind and consciousness if the entropy of a given system is too large. Conversely, when the entropy is low enough, we can associate a kind of probability with it, enabling us to perform reliable predictions, to elaborate and make good decisions, or to obtain storable and thus reversible phenomena. Indeed, if everything is “well-ordered”, described and traceable, the evolution of a system can be followed up, and it is possible to go back in a process and to change its future track.
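
As a rough illustration of this link between scattering and entropy (the record categories and counts below are invented, not taken from the chapter), Shannon's measure H = -Σ p_i log2(p_i) gives a low value for a well-ordered body of information and a higher value for one spread over many disparate categories:

```python
import math
from collections import Counter

def shannon_entropy(items):
    """Shannon entropy (in bits) of a collection of discrete items:
    the more diverse and scattered the items, the higher the value."""
    counts = Counter(items)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A well-ordered corpus: most records fall into one consistent category.
ordered = ["specification"] * 90 + ["change_request"] * 10
# A scattered corpus: records spread over many disparate interpretations.
scattered = ["specification", "draft", "rumor", "duplicate", "patch"] * 20

print(shannon_entropy(ordered))    # ~0.47 bits: low entropy, easy to exploit
print(shannon_entropy(scattered))  # ~2.32 bits: high entropy, hard to exploit
```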

COMMENT 1.–

This first comment is of utmost importance in risk management. Several politicians and media leaders are now saying that it is unforgivable not to anticipate industrial disasters. This statement is quite inappropriate since unpredictable events cannot be anticipated. Moreover, we do not know whether to blame the bad faith of some Chief Executive Officers or the ignorance of those who spread rumors and speculative information. This is based on comments related to big events such as the Apollo 13 syndrome, the 2010 BP oil drilling problem in the Gulf of Mexico, the Fukushima nuclear plant catastrophe in 2011 and even the AF447 airplane crash. It is quite easy to criticize after a disaster, especially when it is a replication of something already known. But the “just-doing-out-of-necessity” syndrome has to be revisited in any process where nonlinear dynamics and a high level of entropy apply: under these conditions, a disaster is always an occurrence of a phenomenon without memory. Moreover, in terms of sustainability, we cannot ignore that anticipation is a costly process (in terms of entropy) whose coverage and reliability are very low.

COMMENT 2.–

Consistency of DSS modeling. The evolution of a software application generally runs into both of Gödel’s incompleteness theorems [GÖD 31], which concern the inherent limitations of axiomatic approaches, whether in mathematical logic or in the modeling of formal reasoning:

  1. The first incompleteness theorem can be roughly stated as follows: in any recursively axiomatizable, consistent theory that is able to formalize arithmetic, one can construct an arithmetic statement which is true but can neither be proven nor refuted within this theory. Here, the question is to find out why we are faced with undecidable statements.
  2. The second incompleteness theorem can be stated as follows: if T is a consistent theory that satisfies similar assumptions (it is thus able to prove some basic arithmetic facts), then the consistency of T, which can be expressed in the theory T itself, is not provable in T.

Both these theorems are directly related to the evolution of software applications and interactive systems for decision support. In fact, they indicate that:

  • – the more a formal system’s complexity increases, the more it “digs its own grave”;
  • – the less the information is structured, organized, concise and accurate, the more the decisions are inconsistent.

In practice, it has been known for many years that systems keep evolving toward greater organization and complexity; we also know that mathematics, despite its very high power of abstraction, has a limited modeling capacity; finally, we know that so-called “expert” or intelligent systems (such as knowledge-based systems (KBS)) cannot explain everything with a formal knowledge representation.

Moreover, the more we advance in this KBS approach, trying to represent, model and explain everything in a formal system, the more surely we will, sooner or later, fall into one of the following pitfalls:

  • – either we are faced with a situation we cannot model in the system (Gödel’s incompleteness theorem): indeed, there are always statements that we can never determine and describe while remaining within the theory. As Wikipedia states, “a theory powerful enough to do arithmetic is necessarily incomplete in the sense that there exist statements that are not provable and whose negation is not provable either within this theory”;
  • – or we will face a combinatorial explosion, or crash the KBS through inconsistency, incoherence or contradiction. Correcting this problem by adding new representations is not a sustainable solution, since we will sooner or later fall into a new case of incompleteness: there is a statement expressing the consistency of the theory (a consistent theory cannot prove everything, and therefore cannot prove just anything), and this statement cannot be demonstrated within the theory itself.

By analogy, Gödel’s incompleteness theorem also shows that, using formal logic, that is to say the conventional approaches we apply, a formal machine alone cannot dynamically detect feedback loops and repetitive structures that were already experienced in the past but are unexpected in the future, unless they were planned beforehand.

As a result, in a formal world, complication and complexification are a limit to sustainability.

11.3.2. Importance of entropy in an organization

COMMENT 1.–

When talking about the orderliness of a low-entropy system, we mean an order which is clear and obvious. Thus, an industrial process, a fractal factory, a business organization and behavioral rules at the individual level (such as ethics) all form low-entropy processes. This is because the number of arrangements or possible configurations corresponding to the assembly of rules, components or elements remains compatible with the original structure of the system: we can detail, describe and model them easily. The more organization and information structuring we have, the lower the entropy: organization, knowledge and know-how in specific areas do not vary in the same direction as entropy.

COMMENT 2.–

A system including a high number of agents can be of higher entropy and, at the same time, be very orderly: this is the case with the cells in an organism, a group of people or a school of fish, which consist of interacting individuals whose motions are coordinated in a precise way. According to Schroeder’s studies [SCH 92], the energy dissipation of a complex system is an instance of scaling. For example, in nature, for warm-blooded species, the energy loss (W) depends on the animal’s weight or mass (M) according to a relation like:

W ≈ C · M^K

where C and K are constants (K being the power-law exponent). The relevant graph is shown in Figure 11.2.

What should be kept in mind is the trend of the graph (indeed, the power-law factor of 2/3 may vary slightly: about 1 for bats and around 3/4 for a human).

ch11-f002.jpg

Figure 11.2. Energy dissipation versus weight of living beings [SCH 92]

Such a transformation is interesting because the entropy is directly related to the volume (sometimes the weight) and temperature of a dissipative structure. This macromodeling is quite common and of utmost importance in industrial or electronic systems: it is possible to estimate the cost or the number of failures of a system from its number of components and its energy consumption, long before a precise forecast based on reliability models is available. This gives a good idea of the sustainability of a complex system: the more complex a system is, the more it is doomed to die.
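
As a hedged illustration of how such a macromodel can be calibrated (the mass and power values below are invented, not Schroeder's data), the exponent K and prefactor C of a relation W ≈ C · M^K can be recovered by a straight-line fit in log-log coordinates; the same procedure applies to the industrial analogues mentioned above (failures or cost versus number of components) whenever a power-law trend is a reasonable assumption:

```python
import numpy as np

# Hypothetical (mass, dissipated power) pairs; illustrative values only.
mass = np.array([0.02, 0.3, 2.0, 70.0, 4000.0])    # kg
power = np.array([0.2, 1.2, 4.5, 80.0, 1200.0])    # W

# Fit log W = K * log M + log C (ordinary least squares in log-log space).
K, logC = np.polyfit(np.log(mass), np.log(power), 1)
C = np.exp(logC)
print(f"W = {C:.2f} * M^{K:.2f}")   # exponent close to the 2/3-3/4 range
```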

COMMENT 3.–

Fractal structures in time and space optimize entropy production in complex dissipative systems. Indeed, in dissipative systems, fractal structures are spontaneously created: they participate in the emergence of order because they optimize entropy production and enable the optimal dissipation of energy gradients. To be more precise, the universe as a whole tends toward thermal equilibrium, i.e. maximum disorder, and life, as a developing system of order, is only possible in regions with strongly changing entropy: thus, ordered forms, such as a tornado, a highly dynamic funnel in a bathtub or Bénard cells, persist as long as there are energy gradients whose heat or energy can be dissipated efficiently. In a general way, complex systems, life and humans provide the quintessential example of the spontaneous creation of order through embryogenesis; they are remarkably stable, robust yet fragile, healthy creatures of fractal nature (with negative entropy), and they function by producing a higher level of entropy in their environment.

In a company, a fractal structure cannot blindly pursue a decrease in entropy, but it can maintain a certain, low rate of entropy increase: thanks to its self-organization capabilities, it may have higher flexibility, adaptability and coordination, and may continuously improve its skills. This is why knowledge is a kind of learning in dissipative systems. As in any fractal structure, it has the advantage of lowering entropy more than a traditional organizational structure does.

Thus, the calculation of this related entropy is relatively simple [HAO 10].

COMMENT 4.–

Within this context, as mentioned before and roughly speaking, entropy allows us to measure a given disorder, a kind of diversification and dissipation, and thus the ability of a system to perform complex tasks. But this is a simplistic view of the concept. Indeed, the disorder in question has to be clear, visible and obvious [PEN 92].

Also, as per Figure 11.2, a system including a high number of agents, or elements, can be of higher entropy and, at the same time, be very orderly: this is the case with a group of people or a school of fish, consisting of interacting individuals whose motions are coordinated in a precise way (by mimicry, recruitment, hiring, etc.):

  • – in a population, the observed movements and behaviors are associated with the existing interactions between the agents. These interactions are related to recruitment or hiring effects, local influences (mimicry), etc. This is what we observe when a disturbance occurs in the survival motion of moving schools of fish or flocks of birds, in panics during riots, in coevolution phenomena, etc.: after a very energy-consuming disturbance, or turbulence, a new and steady order appears. These disordered states can be easily observed and identified;
  • – although these movements seem to be random, we can reconstruct their geographical evolution over time (tracking and traceability) and, by reversing the movement or travel, find the original structure of the group. For instance, in elastic or folding transformations, or in the case of assembly and test operations in a computer manufacturing plant, the number of components involved and the number of final configurations are limited: we are dealing with reversible systems with orders of magnitude around 10^2, whereas for a Boltzmann gas, or the moving molecules in a drop of water, the scales of magnitude are much larger, around 10^23 (Avogadro’s number); a back-of-the-envelope comparison is sketched below.
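
To make these orders of magnitude concrete, a Boltzmann-style comparison S = k_B ln Ω can be run with deliberately crude counts (the two-states-per-molecule simplification is an assumption of this sketch, not a statement from the chapter):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

# Assembly/test line: on the order of 10^2 distinguishable configurations.
omega_plant = 1e2
s_plant = k_B * math.log(omega_plant)

# Boltzmann gas: even a crude two-states-per-molecule count for ~6e23
# molecules gives Omega = 2**(6e23); work with ln(Omega) directly.
n_molecules = 6.022e23
ln_omega_gas = n_molecules * math.log(2)
s_gas = k_B * ln_omega_gas

print(f"plant: S ~ {s_plant:.2e} J/K")  # ~6.4e-23 J/K: practically reversible
print(f"gas:   S ~ {s_gas:.2e} J/K")    # ~5.8 J/K: practically irreversible
```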

11.3.3. Recommendations and management practices in sustainable systems

It should be noted, as described earlier, that a pendulum is perfectly reversible in time; it traces a curve similar but opposite to the one observed when time moves in the positive direction (from past to present). Its entropy remains constant: past, present and future are combined together.

This is also what we observe in any real system: in the life sciences and in cognition, we have both “innate” and “acquired” information. A system that operates solely on innate, “genetically determined” information (e.g. a financial control or management system) has a stable entropy: it is based on symptomatic or presupposed programs. On the contrary, the emergence of significant forms may also depend on information coming from an external source or process. In fact, we are reasoning in terms of ontogenesis: ontogenesis describes the development of an organism or organization; its underlying mechanisms can influence subsequent evolutionary or phylogenetic processes such as thought, reasoning, understanding or cognition. As the overall entropy increases (since entropy is generally the sum of internal and external entropy), we will always be developing hybrid systems (e.g. with a mix of biological and cognitive features) that have to be globally regressive in the sense of entropy. It is the same challenge we have in the real estate and construction sectors, when developers try to design positive-energy buildings.

So, we come to consider the second law of thermodynamics. Here, we are dealing with isolated dissipative systems for which entropy increases over time. Three applications are now described:

  1. A physical or living system (a crystal, a sporting team, a group of singers, a set of specifically interacting agents, etc.) isolated from the rest of the world has a given organized and obvious initial state; it will gradually deteriorate over time and be split up or dismantled into independent items, sometimes in a coordinated way; the effectiveness, efficiency and structure of each element (or of the resulting subsets), however, will be lower or degraded. This decompositional trend is inexorable: it shows that efforts to structure and organize a system are always doomed to failure or degradation. For this reason, in any business, it is pointless to wait for a major change in the environment (such as a disaster, an economic breakdown or a societal breakthrough) before evolving and adapting ourselves. Thus, to increase our sustainability, we must constantly challenge our organizations and make them evolve in a systematic way, anticipating the nature of possible deviances and moving forward despite all the constraints imposed from outside.
  2. Evolution of a software application. This is a common fact: applications change tremendously over time through progressive complexification, which consists of successive feature developments (adaptations, changes, fixes, patches, etc.). These represent from 60% up to 80% of the final product life cycle cost. Many studies have shown how successive degradations are introduced and how they alter an application’s efficiency, effectiveness and reliability.

    Furthermore, ill-timed modifications in an application generate unplanned and unintended induced effects, such as hidden side effects, because of the many existing interactions between the modules. A side effect can modify some functional state or the value of a given variable, raise an exception, write data to a display or a file, read data, or call other side-effecting functions. These disturbances, hard to detect and dissipative, require a lot of skill to diagnose, understand and debug. Here, Gödel’s theorems apply and directly reduce the sustainability of an application.

  3. System monitoring. Any autonomous and self-organized phenomenon, running without any meta-rule or control device, will inevitably meet its own demise (with regard to the universal entropy law). We emphasized earlier that self-organization is related to a reduction of entropy generation. This shows that, even within a control system framework, sustainability requires a systemic approach in which both positive and negative feedback loops are present (some devoted to external regulation and others to self-regulation). The survival and perpetuation of a complex system require such efforts.

11.4. Evolution of entropy in complex systems

11.4.1. Notion of time in artificial intelligence

The above considerations show the need to develop approaches based on different concepts to compensate for the variations in entropy generation. In IBM factories, 20 years ago, we implemented decision tools based on “ondulatory” (wave-like) artificial neural networks for process control [JCP 96].

These devices were able to self-store information on their own functioning (obtained by self-observation). In fact, in any control or monitoring action, the most important purpose is to detect any “monotonous sequence of events”, symmetry breaking or break in monotony, so as to spot significant contingencies in complex structures, even when they are mixed up with noise. For example:

  • – in a manufacturing process that involves human interaction, any process change or deviance is considered a normal event. But the repeatability of a failure is a fact that must be detected as abnormal (same as for periodic or repetitive patterns, source separation, etc.);
  • – with a robot or cellular automata, the situation is reversed: interactivity and repeatability are a normal mode of operations, while an unpredictable event, a weak noise or a new situation, is abnormal. In such devices, we do not proceed with “symbolic and formal learning”, but with the analog perception of patterns and images in time, as occurs in our brain’s reflex areas.

11.4.2. Temporal evolution of entropy in reasoning processes

The temporal evolution of a cognitive system can be represented by a variable such as X(t). Here, X(t) is the level of acquired knowledge in the system, expressed through the space of possible states. As with the second law of thermodynamics, X(t) increases over time: its representative curve naturally and progressively rises as the number and variety of knowledge items increase, and the entropy also increases over time. This statement can be represented by a graph (Figure 11.3).
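
A toy numerical reading of this statement (the growth rate and the two-states-per-item simplification are assumptions of this sketch, not taken from the chapter's references): if X(t) counts the distinct knowledge items held at time t and each item may or may not be mobilized in a reasoning state, the space of possible states has 2^X(t) elements, and the entropy-like measure S(t) = log2(2^X(t)) = X(t) bits grows monotonically with X(t):

```python
import math

# Toy model: X(t) = number of distinct knowledge items at time t.
X = 10.0                       # knowledge items available at TDate
for t in range(6):
    S = X * math.log2(2)       # = log2(2**X), expressed in bits
    print(f"t={t}  X(t)={X:8.1f}  S(t)={S:8.1f} bits")
    X *= 1.6                   # geometric growth of generated knowledge
```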

COMMENT 1.–

From basic knowledge, experiences and principles (the so-called initial information, which is associated with a low entropy), we are able to develop innovative knowledge and new paradigms. This appears when the system evolves in the direction of the future, consistent with the behavior of systems in the universe we live in and experience.

We do not know exactly what the initial entropy value is at time TDate, that is to say, the value preceding the “source information” of which we have spoken earlier. For these reasons, we have positioned a hypothetical entropy value at that time. Conversely, we know that the entropy was growing before TDate: we can thus draw a curve (hence the left-hand side of the curve on the graph in Figure 11.3, called “evolution”).

Note that, given our current level of knowledge, it is quite impossible to go below Planck’s time (10^-43 second) at T ≈ 0.

COMMENT 2.–

Currently, at the instant denoted by “TDate”, we start with a core of given information, corresponding to a certain entropy. This information allows a human to reason; three main modes of reasoning are available:

  • deductive reasoning. In this case, following a number of inferences (steps of thinking), we can deduce a number of facts and findings that increase our knowledge as well as the overall entropy of the system. We use pattern recognition approaches; because of the emergence of new information, the knowledge base is enriched. However, these items are sometimes already modeled and the creative process remains limited;
  • reasoning by abduction. It is based on syllogisms: here, we are able to bring out new premises. This is really the only way to create knowledge that is still unknown to the decision-maker. For this reason, the entropy will grow more sharply: on the graph, the curve related to entropy generation will be located above the one related to induction;
  • inductive reasoning. In this case, we reason in reverse: we go back into the underlying mechanisms of knowledge to extract general, structuring rules of reasoning that have not yet been identified. We thus create explanations and ordering mechanisms for sustaining knowledge, as if developing mechanisms to trace the story leading up to the TDate point. This top-left part of the curve is called “induction”.

In the above three cases, we are dealing with evolutive processes applied to steady environments. This is a strong assumption because, globally, the entropy of the systems under study continues to grow along the arrow of time, whatever their reversibility (negative) or irreversibility (positive). As we can see, it is not only a question of scale: this does not only concern the micro-/nanoscopic or cosmological worlds.

Information is the basis of creation of our visible universe. Long before Planck’s time and the Big Bang, there was only information. In our living world, knowledge is the source of the thought, concentrated in a nucleus comprising some basic information associated with a very low entropy level. Indeed, in order to grasp reality, humans probably started from a more germinal, difficult to define state, therefore with an even lower entropy, perhaps excessively low. And they evolved from there to progressively construct the first seeds of knowledge – an initial set of facts and production rules – that could be activated to develop reasoning, consciousness and, finally, the many cognitive assets discussed above. This is a reason why entropy has increased considerably over time.

As a result, the stock of knowledge currently available worldwide is becoming colossal. What is striking is that some of this information is mutually consistent while some of it is not: when analyzing the content of databases or knowledge bases on a specific topic, it is easy to find a lot of incomplete information and contradictory or redundant facts (as mentioned in this book, it is not rare for up to 50% of information records to be unexploitable in a consistent way). This can be considered as increasing diversity; it amounts to a significant disorder, and thus to a high entropy level.
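
A minimal sketch of how such unexploitable records can be flagged (the fact base and the resulting 50% figure are contrived for illustration): redundant records repeat an already recorded value, while contradictory ones assign a different value to the same subject and attribute.

```python
# Flag redundant and contradictory records in a small fact base of
# (subject, attribute, value) triples; illustrative data only.
facts = [
    ("pump_12", "status", "running"),
    ("pump_12", "status", "running"),    # redundant duplicate
    ("pump_12", "status", "stopped"),    # contradicts the first record
    ("line_3",  "owner",  "plant_A"),
]

seen = {}
redundant, contradictory = [], []
for subject, attribute, value in facts:
    key = (subject, attribute)
    if key not in seen:
        seen[key] = value
    elif seen[key] == value:
        redundant.append((subject, attribute, value))
    else:
        contradictory.append((subject, attribute, value, seen[key]))

unusable = len(redundant) + len(contradictory)
print(f"{unusable}/{len(facts)} records cannot be exploited consistently")
```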

The aforementioned statements apply to the human being at the time of his emergence; everything is contained in the two strands of DNA, corresponding to a minimum entropy: from there, a human being develops, comes and goes, and over a complete lifetime accumulates knowledge, skills and experiences, creating and generating new ones until death. The genetic code is also evolving (to include the initial but partly mutating “innate” information as well as new “acquired” information) [CHA 10]. Again, keys to these new cognitive assets are partially incorporated into the new DNA that will be used to breed the next generation of living beings. As we can see, sustainability principles are indirectly devoted to the DNA program, and not essentially to the human species.

ch11-f003.jpg

Figure 11.3. Evolution of entropy in reasoning

Analysis of Figure 11.3 shows that, in the absence of factors imposing an external constraint or state on our planet, the entropy increases in both directions of the time arrow from the TDate state. The entropy increase in the direction of the future (positive time arrow) is obvious: the states of higher entropy correspond to the generation of much new and diverse knowledge; it follows a geometric growth rate (as in Moore’s law).

Conversely, the states located in the low-entropy area (e.g. the left side of the graph) are just plausible assumptions: we do not yet know how so low an entropy, at the beginning of the living world, could generate so much knowledge. Why, how and with what structure did the world, at the beginning of time, have such a low entropy? We can only say that during the very fast initial expansion of a world (the same holds for an enterprise), it is not possible to produce reliable forecasts about its sustainability.

This is why the process used by some “business angels” to participate in the development of innovative start-ups through seed capital assistance is difficult to implement: the required business plan and market projections have very little meaning since their sustainability is questionable. In fact, only risk-prone, intuitive hunches based on values, with partners having vision, energy and experience, can make a great business. Here, Gödel’s theorems and entropy theory fully apply. The only way to control entropy growth is to develop organizational capabilities (product, process and production development, market, etc.).

A similar mystery surrounds the increase in entropy over such a short time (on nature’s scale) in a new human being. It concerns the evolution between the moment the DNA from both parents is assembled and the moment the brain content of a mature individual is achieved, and finally the moment when the DNA representing the final knowledge state of an individual is obtained, before he leaves all his achievements to his progeny. In this case, when observing how people evolve throughout their lives, one could object that entropy does not only increase continuously and regularly. We will now turn our attention to these considerations.

11.4.3. Discontinuities in the increase and reduction of the state vectors

In quantum mechanics, the state vector follows an evolution governed in part by Schrödinger’s equation. However, as soon as a measurement is made, there may be an issue related to a lack of information, which causes a change (reduction) in the state vector, as shown in Figure 11.4 [PEN 92].

ch11-f004.jpg

Figure 11.4. Sustainable integration of various solution approaches. The graphics describe the temporal evolution of the entropy in a system subjected to several paradigms relevant to competitiveness

In the field of knowledge, we observe similar phenomena if, in Figure 11.4, we consider the variable “entropy” in place of the one called “state”. This can occur in:

  1. Computer science. A few decades ago, a car manufacturer, after deleting some technical files, lost the field bills of materials (FBMs) and operation lists necessary for car manufacturing. Everyone was in trouble but, with patience, a careful analysis of existing assembled products made it possible to gradually recover the technical files. When such a failure happens, the resulting loss of information, the stop-and-go in manufacturing the many various models of end products and the “jump back” lead to a reduction of the entropy level.
  2. Medicine. When a patient becomes ill or suffers from amnesia, memory problems result in irreversible loss of information and knowledge.
  3. Economy. When two companies merge, it is important to reduce the operating costs of the whole entity. The reduction of the number of employees is most often based on quantitative and social criteria. Qualitative and competence needs are not a priority since the focus is on financial criteria: thus, highly skilled people leave the company. A negative leap in terms of knowledge follows the implementation of such a strategic decision (with a trajectory change in terms of entropy), with irreversible effects: later, in an attempt to recover a desired state, the energy expense will be very high and will be considered unprofitable.
  4. Change of dominance. When a civilization disappears, there is an irreversible loss of cultural heritage and of knowledge learned and acquired over a long period. It is said that the next civilization will be on a completely different track; this is not entirely true: the old and new assets and practices are not completely independent; the previous civilization is progressively absorbed and digested by the new one (a notion of continuity and “soft disaster”), because nature always relies on knowledge gained during past experiences to develop innately and therefore, ultimately, to change, adapt and enhance the expression of biological and cognitive processes.

This last point is quite important: entropy changes seldom show discontinuities. There is always a legacy of the past and if, temporarily, during a transition stage, there is a decrease in entropy, it is because evolution will later help us to exceed the entropy levels previously reached. We are in a wave-like (ondulatory) evolution.

We know that this problem of “knowledge assets” and “inheritance” in various fields is similar to that of the acquired and the innate in DNA. This calls for some comments on how knowledge is distributed and handled.

  1. In biology, first, we will find those characteristics at two levels [CHA 99]:
    1. relationships between a given sequence of different structural genes and the sequence of amino acids in a protein: interactions depend on elaborated activation principles and type of folding;

    2. brain functions: they involve large cellular assemblies, gradually built over time, often in an asynchronous way.

      Both levels involve sophisticated mechanisms, much more complex and complicated than people believe. For instance, most of the time, we cannot map a gene directly to a function. Conversely, there are strong interactions between the different constituents, regardless of the assembly level considered. This is what we have in the fractal structured networks (FSNs) architecture of organizations. Here, we do not know how to measure the entropy, and hence the sustainability, of the structure. It is a new domain and we can only proceed by comparison, to say whether a given solution is better or worse in terms of entropy generation.

  2. As stated in [MAS 08], we are faced with a major problem: the control of complexification:
    1. in physics, with the Boltzmann gas experiment, we can understand a physical phenomenon at the micro- or macrolevel, but we cannot explain the transition from micro to macro: with a change in scale, there is a paradigm change, and we do not use the same laws;
    2. in a company or a society, it is the same: we do not know the chain of links between what happens in the brain of an individual and a societal behavior;
    3. in an information system or on the Internet, processes such as life, death and evolution are well known: some Web application modules disappear, new ones are integrated. Globally speaking, the entropy increases, but by how much?

In short, our overall “societal system” is progressing and moving toward greater complexity. This is a proven fact, since we are now able to apply some principles related to fractals (scale invariance), deterministic chaos (unpredictability of behaviors), or even network theories (collective intelligence), etc.

In each case, we see evolutive and progressive approaches which tend toward an equilibrium (thermodynamic or not, self-organized with attractors, etc.); the key factor is related to the presence of many actors or agents interacting together. The problem is that we do not know how to extrapolate a mini-event occurring at one scale level (n) by projecting it onto another level (n+1). Again, this argues for a systemic approach, because it is the only way to change our vision of the world and to overcome the limitations of reductionism and of a Cartesian approach. It is a paradigm change for many decision-makers whose culture is not prepared for that technology.

In what follows, we will consider a “global” system, in which entropy increases over time in a nonlinear and often intermittent way, and we will focus on the sustainability of our creations, emerging structures, technologies, etc. Indeed, in our Western world, we are faced with an existential question: what is the purpose of our activities? What are the global objectives? Is the ultimate aim of our economy the well-being of populations? Is the sustainability of humanity a key success factor?

11.5. Underlying sustainability principles in information and decision

11.5.1. Structuring in phases

We may distinguish three main phases in the evolution of computer science:

  1. Application development: initially, specific applications were designed and implemented within a company to automate every function and get more efficiency or effectiveness.
  2. Information systems: the corresponding technologies were generally more infrastructure-oriented. They are based on the concept of data models, data organization and information processing with development methods of MERISE type, etc.
  3. Computerization of the processes, which organizes the company around “computer-assisted work”. It is based on “object” approaches of the UML type, rational analysis and system analysis.

Now, we are evolving toward the so-called “informatization of society”, which is a wider concept in which everything or everyone is an object; it is based on Web 4.0 – the Internet of Things (or Objects).

We can note that preserving the quality of an information system requires ongoing design and development work (e.g. data management, configuration management, ontologies and a strong definition of concepts to improve data consistency and the use of repositories, as well as tools and methods for decision-making, etc.). These tasks are all the more difficult as we have to set up a formalization or modeling of a wide variety of processes. Concerning the sustainability of these systems: if such work is not carried out, the information system will keep deteriorating through an entropy phenomenon similar to that observed when disorder is created in a physical system [VOL 02]. As a reminder, before returning to the notions of entropy, and simply to show that the underlying mechanisms are almost the same whatever the application field considered, we will examine three interesting processes.

11.5.2. Analyzing scientific thought

No sustainability can be reached without the global motivation of all the stakeholders. In this book, we have highlighted insights arising from studies in decision-making. They show that, in any rational and systematic approach, a decision generally follows several steps:

  • – observation of the agent, object of study and its context;
  • – understanding of the situation and/or problem;
  • – predictive description of the agents, associated with the modes of reasoning used to determine new facts and knowledge. In this process, we often implement confusing but necessary approaches, since we live in a dual world: we often mix continuous and discontinuous phenomena, predictable (consequences) and unexpected (assumptions) information; we sometimes proceed by “stretching” (extrapolation in the mathematical sense of the word), sometimes by relating things through common links (here, we would talk about interpolation): thought comes and goes within a given field of knowledge, frequently through brief incursions beyond commonly accepted boundaries. Thought progresses step by step, as a pseudopod does, in a kind of hesitant walk made of successive forward and backward motions.

The above process results from skill acquisition: it begins with a conscious and deliberate analysis of the situation and becomes capable of automatic operation as soon as frequent use of the same expertise is required. Thus, there are evolving substrates which we used to call “false expertise”. Here, a highly skilled manager or specialist will be able to reason and make a decision rapidly.

This is sometimes called “post-conscious automaticity”.

In parallel with this rational approach, we can say that we call on socio-cognitive processing based on moral, perceptual and social judgments, emotions, motivation and goals, behavioral contagion, etc.

Much of our socio-cognitive processing is believed to occur automatically, with only a limited degree of consciousness. This explanation is not enough: the relative automaticity of the brain systems, and thus of the decision-making process, is also a function of unconscious perception, thinking and decision-making. Indeed, a degree of unconsciousness shapes the way we think and organize our lives.

Indeed, our learning mechanisms, motivation and behaviors depend on conscious or subliminal reward levels [BAR 12]. For instance, unconscious stimuli can induce a person to achieve a goal. This is because the unconscious helps not only to act, but also to find a specific motivation to act. It is the same in society: people in a dominant position may adopt selfish and corrupt behavior just because they feel above suspicion. They unconsciously put their own interest ahead of the public interest, and are little moved by reproaches regarding the sectarian and antisocial feelings they may have. Similarly, some people, such as parents who put the interests of their children ahead of their own, are altruistic: such basic behaviors, when they become predominant, will drive implicit protective attitudes. This is the basis of so-called preconscious or natural automaticity.

Such contexts are forms of unconsciousness that also populate our dreams and explain why we interact differently, monitor and develop some specific emotions when we are faced with difficult situations.

11.5.3. Knowledge structuring principles [BER 99]

The structuring of facts and information is necessary for well-suited knowledge and know-how acquisition. It comprises the following steps:

  • – step 1: acquisition of a piece of information, evaluation of its pertinence and comparison with the body of knowledge of the individual;
  • – step 2: comparison of this information with a number of facts belonging to a given knowledge corpus or body of knowledge (in programs such as IMS-GNOSIS, KADS, CYC, etc.). At this stage, based on fruitful lines of thinking, many methodologies have been set up to enhance design for manufacturing (DFM) and to establish various types of reasoning, such as deductive, inductive and abductive reasoning, in industry.

    These approaches cannot accept inconsistencies such as contradictions and redundancies; here, we can hope to detect false information, and also structure a network linking the new information to a given number of references and ontologies. This indicates how the concepts of perception, apprehending situations and assimilating new information must evolve;

  • – step 3: consolidation of the assets by integrating the new information into our knowledge base and correlating it with the largest possible part of our background in the related field of application. In fact, we try to perform all the possible deductions combining the new fact with what we already knew in this field. Depending on the results of this evaluation and validation testing, a new piece of information will either be integrated “into” the body of knowledge (which changes the depth of our knowledge) or will act as a “limitation” of this corpus (which then modifies the scope of our knowledge).

It is obvious that in these couplings, both quantitative and qualitative approaches are involved:

  • – quantitative: this means that some focus is placed on notions of reliability (the more numerous the links to already known and recorded facts, the more acceptable the information);
  • – qualitative: because it can lead to notions of richness (the stronger the links with a set of concepts, the more the information is geared toward innovation).

In summary, we can say that new information with many weak links to a corpus will lead to a confirmation, a validation or a tautology, while information carried by more scattered but strong ties is likely to lead to an innovation. Thus, the sustainability of an evolving system is not just the result of a random process.
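
The sketch below turns this summary into a toy scoring rule (the weighting scheme and the example ties are illustrative assumptions, not a method from the chapter): reliability grows with the number of ties to the corpus, while the innovation score favors information carried by fewer but stronger ties.

```python
# Toy scoring of a new piece of information against an existing corpus,
# based on the number and strength of its links (strengths in [0, 1]).
def assess(links):
    """links: list of (concept, strength) ties to the existing corpus."""
    n = len(links)
    mean_strength = sum(s for _, s in links) / n if n else 0.0
    reliability = n                        # many ties -> confirmation
    innovation = mean_strength / (1 + n)   # few but strong ties -> innovation
    return reliability, innovation

confirming = [("stock_level", 0.4), ("lead_time", 0.3), ("supplier", 0.5),
              ("forecast", 0.4), ("order_book", 0.3)]
innovative = [("process_physics", 0.9), ("yield_model", 0.95)]

print(assess(confirming))   # many weak ties: high reliability, low innovation
print(assess(innovative))   # few strong ties: higher innovation score
```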

11.5.4. Basic characteristics and measurement of an information system

In decision-making, we use a methodology that can broadly be summarized into three stages:

  • – step 1: the system performs data filtering in an attempt to locate and delimit the context in which it will be necessary to “decide” on an act or gesture, correct a problem or implement a behavioral change. It will, therefore, collect some specific data, transform them into “information” more or less related to its area of concern, etc. We can roughly say that it generates an informational body of knowledge to reduce its field of uncertainty;
  • – step 2: the system performs classification and ranking operations. It will prioritize and establish a sequenced group of information in this corpus: this is done through reliability and credibility assessments, as well as overlaps and analogies with other scenarios previously experienced and stored. The aim of this phase (conscious or not) is to identify the most important directions for powerful investigation and decision strategies (in popular terms, “evaluating the pros and cons” or “assessing the possible choices” of a decision). We therefore anticipate “futures” by assigning them appropriate likelihoods or success probabilities;
  • – step 3: in a third step, the work consists of choosing one of the possible scenarios through discriminant analysis, classification or ranking, which is effectively called “making a decision”. Thus, the goal is to disable all the other futures (potential solutions) in favor of the one that will drive the real action; a minimal code sketch of these three steps follows.
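
The following sketch walks through the three steps on contrived scenario data (the field names and likelihood values are illustrative assumptions, not part of the methodology's definition):

```python
# Minimal sketch of the three-step decision methodology on toy data.
raw_data = [
    {"scenario": "repair_now",   "relevant": True,  "likelihood": 0.55},
    {"scenario": "defer_repair", "relevant": True,  "likelihood": 0.30},
    {"scenario": "replace_unit", "relevant": True,  "likelihood": 0.15},
    {"scenario": "unrelated",    "relevant": False, "likelihood": 0.90},
]

# Step 1: filter the data to delimit the context (reduce uncertainty).
corpus = [d for d in raw_data if d["relevant"]]

# Step 2: classify and rank the candidate "futures" by likelihood of success.
ranked = sorted(corpus, key=lambda d: d["likelihood"], reverse=True)

# Step 3: make the decision: keep one future, disable the others.
decision = ranked[0]["scenario"]
print(decision)   # "repair_now"
```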

A question arises: can we measure this? Very few examples are available to measure the pertinence and complexity of a decision system. Here, we will just mention what was done in an IBM manufacturing plant in the 1990s [MAS 06]. A tool called LMA [BEA 90] enabled us to improve the planning and scheduling of some new computer technologies. A complete analysis of the decision rules applied in running the manufacturing line over two years led to a surprising result: only 23 different decisions were taken. This system can be considered a sustainable one, but how can this fact be characterized?

In terms of complexity, we decided to use the complexity measurement technique defined by Lange, Hauhs and Romahn [SCH 97] to measure the complexity of terrestrial water ecosystems. In this approach, the decisions were considered as a set of about N = 500 data points collected as a real-time series and distributed along an arrow of time. The method comes from symbolic dynamics: the metric entropy was calculated and is able to characterize the complexity of the decision system. Unfortunately, we did not perform additional studies varying the window length, for instance to evaluate the intercorrelation factors between the chronological sets of decisions. We think this is a promising measurement technique.
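
The sketch below shows the generic symbolic-dynamics calculation on a synthetic decision log (500 random decisions drawn from 23 rules; the actual IBM data are not reproduced): the Shannon entropy of length-w blocks, divided by w, gives an entropy-rate estimate, and varying w is precisely the window-length study mentioned above.

```python
import math
import random
from collections import Counter

def block_entropy(symbols, window=3):
    """Shannon entropy (bits) of length-`window` blocks, divided by the
    window length: an entropy-rate estimate in the symbolic-dynamics sense."""
    blocks = [tuple(symbols[i:i + window])
              for i in range(len(symbols) - window + 1)]
    counts = Counter(blocks)
    total = sum(counts.values())
    h = -sum((n / total) * math.log2(n / total) for n in counts.values())
    return h / window

# Synthetic decision log: ~500 decisions drawn from 23 distinct rules.
random.seed(0)
decisions = [random.randint(1, 23) for _ in range(500)]

for w in (1, 2, 3):               # varying the window length
    print(w, round(block_entropy(decisions, window=w), 3))
```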

11.5.5. Increasing complex system design: measurement

In each case, the problem of design and development arises in the databases and repositories:

  1. Databases are a memory of reality, but, again, their structure and contents are based on our perception capabilities. In addition, the database and the reality it apprehends mutually influence each other while evolving at different speeds: computer systems change in a discrete manner while interacting with the real world, which evolves in a continuous way. As such, the questions of time and asynchronism are critical. The problems encountered (and sometimes ignored) by managers and users of databases are sometimes underestimated, because scientific and computer skills are assumed to be available to correct the weaknesses and inconsistencies of the computer system. Where enterprise owners and legal representatives are involved, the problem is particularly acute, since information always generates rights and duties.

    The problem of evolution is: how can we optimize the flow of information and enrich the basic model while minimizing management costs? Developments in the theory of conceptual modeling provide managers and users with all the elements of a given methodology: the interpretation of available database subsets within their context can, in fact, improve the management of large and complex information systems, which are subject to challenges and conflicts between the homogeneity of formal representations and the heterogeneity of empirical categories.

    According to Boydens [BOY 00], it is important to explore, with both technical and historical approaches, the production practices and interpretation conditions of databases. Indeed, a database is never a “simple” object, either in terms of quality or representativeness, relevance, clarity, etc.

    This study reveals what is never said or written in many documents: the informal mechanisms used for interpreting data always operate within the context of an operational implementation framed by culture, politics, laws and regulations. They evolve over time and require a specific reading by those who are willing to spend time thinking about how things really work in a real environment with its usual practices.

  2. To ensure minimum consistency, repositories are associated with each database. In general, they only cover part of the information system needs (e.g. a directory of people or an organizational map, but no technical files such as FBMs/field feature bills of materials (FFBMs), parts or operation lists, etc.). Furthermore, the elaboration of a repository raises difficult methodological problems; responsible people now try to manage this weakness, which results from a data patchwork, seeking the best performance at the lowest cost, but “zero defect” never exists and continuous support is required. This is the problem studied by Boydens [VOL 02].
  3. Concerning the data included in a database, techniques are now available to ensure a minimum level of consistency. But no real measurement or indicator is used. In a more advanced way, these data can represent the behavior of a system: under these conditions, we can quote the work that has been done in IBM Europe [MAS 08] to control and monitor either the consistency or complexity of the observed data:
    1. in the area of behaviors, we use the Lyapunov exponent (also used to characterize deterministic chaos in complex systems; a generic estimation sketch follows this list);
    2. for the follow-up of dysfunctions and failures, we use the James-Stein estimator (to evaluate the queue lengths and the shapes of a statistical distribution).
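
The IBM indicators themselves are not public; the sketch below only illustrates the generic way a largest Lyapunov exponent can be estimated, here for the logistic map, whose positive exponent (about ln 2 ≈ 0.69 at r = 4) signals deterministic chaos:

```python
import math

def lyapunov_logistic(r=4.0, x0=0.3, n=10_000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    averaged from the derivative |r*(1-2x)| along the orbit."""
    x, acc = x0, 0.0
    for _ in range(n):
        # Guard against the measure-zero case x == 0.5 (zero derivative).
        acc += math.log(abs(r * (1.0 - 2.0 * x)) or 1e-300)
        x = r * x * (1.0 - x)
    return acc / n

print(lyapunov_logistic())   # ~0.69 (= ln 2): positive, hence chaotic
```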

11.5.6. Entropy control in information systems: a set of practices

Whatever the methods used (Merise, UML, etc.), disorder, incompleteness or loss of control will arise in any information system; it is similar to the entropy that is born and grows in matter as changes are made to applications to complexify and enhance them. For instance, at higher organizational levels in business, database repositories related to support services, institutions and local production centers will only be useful and usable if they are integrated or embedded into each individual process.

As a conclusion, when designing an information system:

  • – the software engineers are developing repositories for each process to be modeled;
  • – the time constraint is tight and the development of modules for automatic updates is not a priority; most changes are made manually. When a repository evolves, some disorder takes hold, and it is not always satisfactorily managed;
  • – a number of user interfaces and tools exist for error reporting, audits and adjustments, but they consume processing power and back-office resources. Communication interfaces are often limited and unreliable, and they delay the use of knowledge;
  • – interpretation of the data is quite often subjective and is subject, in conversations and information exchange between managers, to endless perplexities, speculation and other analysis errors.

To avoid the general deviance of an application, ill-fitted functions and the emergence of many disorders, strict design and development rules are needed. When we are unable to control everything in detail, we act differently: for instance, we implement certification for developers in given fields and give them a degree of freedom in their work. Synchronization and control, which reside at a higher level, will be set up at the project management level using meta-management rules.

The case of multipartnership has already been developed and will not be detailed again, except to point out that project management should be based on how a living organism or the human body is controlled. This is meant to cover organizations with low granularity (but large numbers of granular cells), such as those existing on the Internet with open sourcing.

When several companies with different information systems, technical files and repositories merge, several issues arise from the differing cultural approaches of the people working in the different entities, benchmark results, power struggles (80% of the time spent) and compatibility problems during the integration; together, these occupy 90% of the project managers' time: much energy will be spent coordinating and motivating the troops. What is needed is not so much technical skill, or even time, as leadership and compromise management (in the sense of a thermodynamic equilibrium).

11.6. Business intelligence systems and entropy

11.6.1. Introduction

Part of the developments included in this book bear on the following: “engineering a sustainable world economy through mass planetary collaboration”. This requires exploring the items involved in interacting systems, as already mentioned in this book. Some will be considered again, for several purposes:

  1. The intention of this book is to highlight the underlying mechanisms included in system engineering, with the aim to discover them along the way.
  2. Self-organization is an important principle leading to diversification and an attractor of convergence. It is then useful to see how entropy is evolving in this area, so as to design DSS accordingly. Here, we will spend some time focusing on a more specific area: business intelligence.
  3. To provide a definition and explanation concerning the limited capabilities of existing DSS environment concepts, for instance networking, cloud computing and even bio-inspired organizations.

Items we consider in this chapter are expressed and modeled according to the transpositions of system dynamics concepts, as shown in Figure 11.5.

ch11-f005.jpg

Figure 11.5. Sustainable integration of various solutioning approaches

11.6.2. The brain: some specificities

A few decades ago, many scientists tried to design and develop computer systems based upon the structure of the brain. Artificial intelligence was often considered a mature technology, and artificial neural networks (ANNs), computational algorithms and “thinking machines” were supposed to work in a similar way to our brains.

Even if some differences still exist between the computer and brain, the gap is being reduced over time. First, new models of brain operations are likely to inspire the information systems designers, and second, people are investigating to what extent the architecture of current computers may help us better understand the organization of the brain circuitry and its functioning. Within this framework, international programs have been set up. Nevertheless, we are not ready to emulate the brain because every day new discoveries are being made. For instance:

  • – in terms of multitasking, we cannot think about two things at the same time. When two cognitive tasks are performed by a human being, they are in reality processed in sequence by the brain, with a time delay of between 200 and 500 ms [MAQ 10]. This is what happens in a car when you have to brake while you are on the phone. It is the same in a meeting when you are reading a report while listening to a question to be answered. Indeed, there is always a minimum time delay between two mental tasks, called the psychological refractory period. This time delay can be drastically reduced after a long training period (along an exponential training curve). However, once such an exercise has been performed about 10,000 times, we no longer see any improvement and we generally remain above 100 ms;
  • – multitasking is even worse when physical tasks are involved, such as searching in our bags while braking the car.

To explain and complete this phenomenon: if a person is able to manage two activities at once, things become much more complicated with three simultaneous tasks.

According to a scientific study by Etienne Koechlin and Sylvain Charron, published in the journal Science in 2010, the human brain struggles as soon as three or more tasks have to be performed at the same time.

The findings of the study show (through medical imaging) that, when a person is engaged in a single activity, the two frontal lobes of the brain are active. More specifically, when a subject performs a single task associated with a single goal (e.g. winning an award), the frontal lobes of both hemispheres are activated simultaneously. In each lobe, one part of the frontal lobe directly processes the task, while the other part works on the goal.

But when the brain has to handle two tasks simultaneously, each frontal lobe is assigned to a specific task. Brain imaging shows that the two frontal lobes are independently activated: while one is responsible for processing task #1 attached to goal #1, the other processes task #2 associated with goal #2. Thus, the two frontal lobes are assigned to the two specific tasks (distributed work). Each one handles a single task associated with a single goal; the time delay required to ensure the transition from one task to the other is so small (about 100 ms) that we are not conscious of any sequencing and the two tasks appear quite simultaneous. However, when a third activity is launched, the scientists found a strong increase in the number of errors (in about 30% of cases) and a decrease in responsiveness, that is to say, a worse response time.

For these reasons, our own physical capabilities are limited: our brain seems unable to concentrate on three simultaneous activities without making mistakes. It is therefore not really necessary to carry out several activities at once; instead, we should give up unnecessary tasks and concentrate on one or two of the most important ones.

In an enterprise, brain multitasking is a myth, and this poses the problem of the multitasking constraints that managers are subjected to in doing business: parallelized decisions cannot be reliable, they take time, and, because of possible errors, exploiting their results is time-consuming. In terms of entropy, that is to say in terms of the creation of disorder, this is not good for the system's evolution.

11.6.3. The brain: underlying principles for a DSS organization

In the area of DSS, ANNs were developed several decades ago, mainly for pattern recognition purposes (handwritten character recognition, predictive modeling, vision, speech recognition, etc.). They were supposed to have the same structure as biological systems, in which billions of nerve cells collectively perform these tasks more efficiently than a similarly powerful computer.

It is a good idea, but there is a huge gap between the mechanisms sought in ANNs and the human nervous system, even though the latter involves about 100 billion neuronal cells acting as weighted switching relays. Where do these differences come from? It is not a question of hardware components but of architecture and organization.

As a reminder, a neural network is an information processing system composed of a large number of interconnected processing elements, arranged in several layers, where the input layer describes an input event and the output layer corresponds to a separate pattern classification. In this area, we can quote many works and achievements from J.A. Anderson, L.N. Cooper, T. Kohonen, J.J. Hopfield, G. Paillet (general vision), etc. Many variants were developed with or without feedback loops to integrate different learning capabilities, and some industrial implementations were made available (e.g. IBM's ZISC, standing for Zero Instruction Set Computer).
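As a minimal sketch of this layered structure (an illustration only, unrelated to the industrial systems cited above), the code below trains a tiny two-layer network on the XOR pattern-classification task by gradient descent; the architecture, learning rate and number of epochs are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Input layer: 2-bit patterns; output layer: one unit giving the class (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 processing elements, fully connected by synaptic weights.
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

lr = 1.0
for _ in range(10_000):
    # Forward pass: input layer -> hidden layer -> output layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradient of the squared error through the sigmoid units.
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

# Outputs approach [[0], [1], [1], [0]]; convergence may vary with the seed.
print(np.round(out, 2))
```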

Here again, it was said that ANNs were a copy of the brain architecture and were working in a similar way, however:

  • – the human nervous system consists of highly specialized nerve cells (of several thousand different types) interconnected through synapses, whereas an ANN comprises a few thousand quite similar cells, often unidirectionally interconnected through synaptic weights, from an input toward an output;
  • – in our brain, billions of these synapses are assigned to the processing of only one stimulus.

However, the difficulties encountered in understanding and reproducing the operations and behaviors of the brain are related not to the global architecture of the brain, but to the nature and design of the neuron itself which is much more sophisticated than expected:

  1. As mentioned in our book, neurons do not communicate only through dendrites and axons; in some parts of our brain, there are electrical fields (higher than 1 mV/mm) at the neuron level which modify the activity of neighboring neurons, so that communication is much more pervasive and reliable (C. Anastassiou, California Institute of Technology). This is important for synchronizing neuron tasks and for stimulating the exchange of information related to memory and cognition.
  2. Communication generally takes place between different neurons through their axons and synapses. However, some neurons can communicate directly via the axons in their proximity (J. Ziskin, Johns Hopkins University). This enables faster communication between different areas of the brain, e.g. the two hemispheres.

  3. Information never travels only one way: nerve impulses can be produced at the beginning of an axon or at the end of a synapse. As stated by Nelson Spruston, from the neurobiology department of Northwestern University (Illinois), the functioning and behavior of the brain are much more complex than what is observed today.
  4. As demonstrated by Y. Shu (Yale University) and by H. Alle and J. Geiger (Max Planck Institute), and contrary to popular belief, several types of electrical signal can circulate at axon level: brain cells use a mix of analog and digital coding at the same time to communicate efficiently. In addition to the usual action potential, smaller analog voltage deflections may give rise to action potentials. As an action potential reaches the synaptic terminals of the axon, it causes the release of a transmitter onto the next neurons in the chain. So, although signals in the cell body are represented in an analog manner, they are thought to be transmitted between cells solely through the rate and timing of the action potentials that propagate down the axon, that is, in a digital manner. This can also explain some neuronal dysfunctions.
  5. Some neurons may use several transmitters and reinforce speed and amplitude of the impulse to be provided to the next neuron, etc.

All these results have a strong impact on the design and development of so-called DSS. Indeed, when considering the ductility, flexibility and plasticity of our brain, we can see that decision-making, as performed in the human brain, is a very complex process which can lead to many possible rational, or irrational and disordered, solutions, with a large entropy impact on the system under study.

11.6.4. Collaboration and collective approaches

The purpose of this section is to briefly describe some principles related to these two notions and to measure their impact in terms of sustainability and, consequently, entropy generation.

Worldwide collaboration is a recursive and interactive process where two or more people or organizations work together in a self-similar way to realize shared goals (this is more than the intersection of common goals seen in cooperative ventures, but a deep, collective, determination to reach an identical objective) by sharing knowledge, learning and building a consensus.

In fact, collaboration is a working technology which is based on some specific approaches:

  1. Most collaboration is based on leadership, that is to say the ability to excel and lead the people, although the form of required leadership can be social within a decentralized and peer-to-peer group. In particular, teams that work collaboratively can obtain greater resources, recognition and reward when facing competition for finite resources.
  2. In this area, considering the antagonistic nature of the human being and business processes, people will collaborate in mixing cooperation with competition over time to do business. This is why we introduced concepts such as cooperation and competition in order to optimize decision processes.
  3. In terms of solution searches, different techniques will be used, for instance (a minimal sketch follows this list):
    1. auctions and negotiations based on Nash equilibria;
    2. game theory, a branch of applied mathematics and economics that looks at situations where multiple players make decisions in an attempt to maximize their returns.
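As a purely illustrative sketch of these game-theoretic searches, the code below enumerates the pure-strategy Nash equilibria of a hypothetical two-partner "cooperate or compete" game; the payoff values are invented for the example and do not come from the text.

```python
from itertools import product

ACTIONS = ("cooperate", "compete")

# Hypothetical payoffs for player 0; the game is symmetric, so player 1's
# payoff for (a0, a1) is player 0's payoff for (a1, a0).
P0 = {("cooperate", "cooperate"): 3, ("cooperate", "compete"): 0,
      ("compete", "cooperate"): 5, ("compete", "compete"): 1}
P1 = {(a0, a1): P0[(a1, a0)] for a0, a1 in product(ACTIONS, repeat=2)}

def is_nash(a0, a1):
    """Nash condition: no player gains by deviating unilaterally."""
    best0 = all(P0[(a0, a1)] >= P0[(d, a1)] for d in ACTIONS)
    best1 = all(P1[(a0, a1)] >= P1[(a0, d)] for d in ACTIONS)
    return best0 and best1

print([p for p in product(ACTIONS, repeat=2) if is_nash(*p)])
# -> [('compete', 'compete')] for this particular payoff structure
```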

Conversely, collective intelligence is a wider and higher level concept often used in elaborating more global solutions. Collective intelligence can be defined as the capacity for a group of individuals to envision a future and reach it in a complex context.

Collective intelligence is becoming a full discipline, with its formal framework, theoretical and empirical approaches, etc., based upon collaborative and communication tools, associated with a shared ethics.

More specifically, the Cartesian mechanistic thought process has split the universe into three complementary fields: matter, life and mind, which are all part of our eco-biosphere. To grasp the meaning of these global items, transdisciplinary approaches have to be implemented. Indeed, physics alone cannot explain poetry, nor can psychoanalysis explain cellular division or group technology. In fact, we have to involve various sciences such as the social and human sciences, arts and structures in nature, mathematics, theology, biology, religion and even politics.

Indeed, in this world, everything is connected to everything; it is a kind of global integration that we have to implement, where each thing possesses at the same time an inner and subjective dimension (that has to be interpreted), an outer dimension (that we perceive), an individual dimension (the agent) and a societal dimension (the population and the whole society). From this whole, properties at community level will emerge.

For many people, and managers, collaboration is a panacea: it is able to integrate groups of people, and make them participate in a common goal.

More globally, when talking about collective intelligence, there are underlying impacts in terms of universal governance (global, local, transversal, transcultural, etc.) while developing practical and immediate know-how for today’s organizations, through an ethics of collaboration. Thus, it is a good way to get the right and accurate information or decisions with a minimum entropy generation.

In terms of control architectures, worldwide collaboration based on peer-to-peer mechanisms can be represented by a heterarchical working structure, as shown in Figure 11.6. Compared to other structures, heterarchy has advantages for the following reasons: adaptability, robustness of the answers, consensual decisions and autonomy in operations. On the other hand, a heterarchy has a drawback, that is to say a “cost” or a counterpart:

  • – this structure is the most complex one to monitor and will require us to study “network sciences” and “bio-inspired” systems later in this book;
  • – it is energy-consuming and more dissipative during control operations.

This means that entropy generation, in this environment, will be higher than in other approaches (for instance, centralized control). As such, entropy generation is higher on the left side than on the right side of Figure 11.6.

Holonic systems, for instance, are an intermediate stage between hierarchy and heterarchy; recursiveness introduces a kind of structure and involves nodes (agents or groups of agents) with a lower level of abstraction: thus, control is easier and more efficient (lower entropy generation).
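As a rough, purely illustrative proxy for this coordination "cost" (an assumption of ours, not a measure proposed in the text), the sketch below counts the communication links that must be maintained among n entities under the three structures discussed here: a centralized hierarchy (tree), a holonic grouping, and a fully connected peer-to-peer heterarchy. More links roughly means more coordination traffic, hence more dissipation and entropy generation.

```python
def hierarchy_links(n):
    """Centralized/tree structure: each entity reports to a single parent."""
    return n - 1

def heterarchy_links(n):
    """Fully connected peer-to-peer structure: every pair may negotiate."""
    return n * (n - 1) // 2

def holonic_links(n, holon_size):
    """Holons fully connected internally, plus one supervisor link per holon."""
    full, rest = divmod(n, holon_size)
    intra = full * heterarchy_links(holon_size) + heterarchy_links(rest)
    inter = full + (1 if rest else 0)
    return intra + inter

# Link counts grow linearly for the hierarchy but quadratically for the heterarchy,
# with the holonic grouping in between.
for n in (10, 50, 200):
    print(n, hierarchy_links(n), holonic_links(n, 5), heterarchy_links(n))
```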

ch11-f006.jpg

Figure 11.6. Organizational structures and expected performances (Defense R&D Canada–DRDC Valcartier TR 2008-015)

Finally, we can easily conclude how global approaches are positioned in terms of sustainability: either with the consistency of a solution and decision or in terms of control. The difficulty lies in finding the right compromises.

11.6.5. Loneliness: a common impact of collective approaches

Figure 11.6, however, is not as idyllic as we like to think: our society is an exclusive one, whatever the statements of good intent expressed by the human resources managers in many companies. Cooperation and collective approaches have a hidden side. Hereafter is a detailed example of such a situation.

The problem of loneliness is a form of rejection and of entropy generation. In most of our current societies, one-third of the population lives in solitude. Loneliness consists of being alone when faced with a problem: unemployment, loss of salary, illness, stress, etc. In this case, the person is not able to solve his/her problem alone. Such a person can become an outcast of society because they feel unable to defend and protect themselves, and thus to overcome their own problems and recover from the situation. Loneliness can be considered the result of a mismatch between a given person, his/her surrounding environment and the behavior of the people with whom he/she interacts. Loneliness evolves in six steps, as in a “vicious” circle:

  1. perception of a difficulty;
  2. feelings of shame and loss of pride (we dare not explain our problem or show any weakness);

  3. loss of confidence: we no longer believe in ourselves;
  4. loss of standard of living: the person becomes confined to an asocial cluster;
  5. the lonely person lives in an offset manner, and rebels;
  6. back to the first point, that is to say at the beginning of a positive or negative feedback loop.

Loneliness reinforces the clustering of a population and hence its diversity, complexifies its management and is energy-consuming. Moreover, in terms of ethics, loneliness indifferently affects every kind of potential resource: young people, workers, skilled seniors, elderly and retired people, etc. This topic has already been discussed in Chapter 8, devoted to the survival and perpetuation of the species, with phenomena related to eusociality. But we must be aware that this is a common problem: many companies, organizations, team leaders, etc., act under the pressure of competition and financial greed: they tend to turn away from resolving “hard” problems, leaving them to the charge of the state, a nation or society at large.

Loneliness is a topical problem; it is growing in parallel with the evolution of our society. Some usual causes can be described as follows:

  • – people are required to show ever more physical and intellectual mobility and flexibility;
  • – ways of working are becoming more constraining (not physically, but intellectually speaking);
  • – lack of skill and ignorance of some top managers;
  • – individualized and hedonistic society, with its associated “greed attitude”;
  • – with low or poor exchange of information, everyone becomes suspicious;
  • – the poorer you are (in terms of money, knowledge, relationships, etc.), the more you are forgotten or despised;
  • – our exclusive society is characterized by the fact that:
    • - 25% of young people (those under age 25) are unemployed: an unused resource;

    • - 35% of over 55s are considered as too costly for an enterprise: they are excluded from the labor market.

As such, it is a mess: it generates disturbances and situations unsuitable for a sustainable environment. The tragedy is that loneliness is simply related to a problem of oversight:

  • – we forget: we cannot see, or do not want to see, the value of what people can bring;
  • – we forget that we live on outdated business models, not adapted to current societies;
  • – we forget that the elderly people are consumers: they often have significant financial resources. Thus, we deprive ourselves of a part of the society that can boost the industry.

In this so-called “collaborative” world, it is clear that cooperation principles, and society as well, do nothing for most of the excluded populations. Within the framework of an integration process, cooperation is not enough.

As already stated, any inclusive society is based on several factors:

  1. the respect or consideration for others (any living being or inanimate object, such as energy, etc.);
  2. links, relationships and preferably spoken communication, care and attendance to others, to better listen and stay close to people;
  3. psychological recovery of a person through the physical activity and cooperation;
  4. more ethics, by trying to interact: who is this person, where is he going, how, and how can we help him solve his problem?

We are all members of society, but each of us has to fulfill this role. At company level, it is said that the greatest asset is human capital. But is the enterprise sincere? If a company, or any human organization, dedicated more respect to its employees and a better understanding of a more “consecrated” vision of life, approaches to loneliness would be different. There would be less blah blah blah, fewer ghettos, fewer barriers between communities and a greater homogeneity in the population, and hence less entropy.

We must remember that in any business, a good manager (who must also be a good leader) must do what he can, as best he can, with what he has.

To conclude, worldwide collaboration is aimed at reducing the entropy generation and creating a more sustainable global system. Unfortunately, it also creates a significant entropy generation: this is fully in agreement with the principle of duality in nature. In terms of governance, the difficulty will consist of managing and giving adequate priorities to some of these equilibria.

11.6.6. Organization of some target complex systems

Darwin’s theory of the evolution of species tells us that species change over long periods of time. They evolve to suit their environment, and the species that survive changes in the environment are not the strongest or the most intelligent ones, but those that are most responsive to change. Thus, the manufacturing companies best prepared to survive are those that respond best to emergent and volatile environments.

For these reasons, reconfigurable manufacturing systems (RMS) are designed for rapid changes in their structure, as well as their hardware or software components, in order to quickly adjust the functionalities and production capacities to sudden market changes, and intrinsic or failure system changes. Consequently, they require the implementation of characteristics such as modularity, integrability, customization, scalability, convertibility and diagnosability.

This supposes a specific structure and architecture, and a particular control system software. Biological systems and nature are suitable sources of information to be transposed for the development of reconfigurable and sustainable manufacturing systems.

To fulfill such requirements, a holonic system architecture is best suited. Biosystems also suggest that we implement distributed control based on autonomous and cooperative agents (as in living organisms).

11.7. The holonic enterprise paradigm

11.7.1. Introduction

A holonic enterprise is an organism comprising a holarchy of collaborative components, regarded as holons. “Holon” is a term derived from the combination of two words: “holos”, a whole, and the suffix “on”, which denotes a particle, an item or a subsystem. Thus, a holon is made up of subordinate parts and is, at the same time, part of a larger whole. These holons (agents) are provided with local autonomy and proper propagation mechanisms.

Holarchies are not holons – or physical systems of holons – but organizational or conceptual arrangements of holons that represent the basic formal entities for a holonic interpretation of the structures and dynamics of “reality”.

The best-known examples of what a holonic organism consists of go back to the fractal organizations detailed by H.J. Warnecke in the “fractal and agile company” and by Massotte and Corsi in [MAS 06].

ch11-f007.jpg

Figure 11.7. Examples of ascendant holarchies as systems of classification (from [FUN 95])

11.7.2. Properties of holons

In this observational context, a holon is viewed as an entity that is at the same time autonomous, self-reliant and dependent; it interacts vertically, as expressed in Figure 11.7, as well as cooperating horizontally with other holons, and is characterized by rules of behavior (DRDC Valcartier TR 2008-015, p. 41). Thus, we are in and between a hierarchical and a heterarchical organization. We can explain these characteristics in a little more detail:

  1. Autonomy is revealed in the holon’s structure and functioning, which must permit a dynamic that is distinct from the context and that refers to the holon-unit. Thus, the holon has a stable form that allows it to make decisions of limited scope, and that gives it vitality and ability to survive environmental disturbances.

    In the present systems, a holon has the capability to create and control the execution of its own plans and/or strategies (and to maintain its own functions). In IS, each holon has local recognition, decision-making, planning and action taking capabilities, enabling it to behave reactively and proactively in a dynamic environment.

  2. Self-reliance resides in its ability both to deal with contingent circumstances without requiring “authorization” or “instructions” from some superordinate unit and to control in some way the units it includes.
  3. Interactivity is revealed by the two-way connection between the whole and the parts comprising it. This enables cooperation intended to achieve the overall system objectives. Right now, we can state that the holonic structure associated with vertical interactions will address partly the difficulty of coordination in decentralized systems.
  4. Dependence and cooperation. This implies that the holon is subject to some form of “control” by the superordinate unit precisely because it has a role in the survival of the vaster structure that contains it. The superordinate structure can set the behavioral objectives of the subordinate structure, which transmits the results of its activities to the superior level. It is a cooperative process whereby a set of holons develops mutually acceptable plans and executes them. Coordination, negotiation, bargaining and other cooperation techniques allow holons to flexibly interact with other holons in an abstract form. Because of the dynamic nature of the holarchies, each holon must employ generalized interaction patterns and manage dynamic acquaintances.
  5. The rules represent the set of constraints on the actions of the holon due to its being both a whole and a part. The holon is defined by the position it occupies and by the direction of observation. The decisions that a holon can make are limited to accepting the request being made and executing the request by utilizing available resources.

The process used to arrive at a decision is only as complex as necessary for that class of holons and its level within the holarchy. For simple systems, the decision process for a given holon is a set of fixed rules that govern its behavior. The flexibility displayed by holonic systems is the result of the combined behavior of the holarchy and not the actions of an individual holon. Thus, within this context, we can define:

  1. Self-organization: the ability of holons to gather and arrange themselves in order to achieve an overall system goal. Holonic systems immediately renegotiate the organization of the system whenever environmental conditions change.
  2. Reconfigurability: the ability of the function of a holon to be simply altered in a timely and effective manner. Because of the modular approach, holons can be reconfigured locally once the inherent flexibility of the holons has reached its limit (a minimal sketch of such a holon follows this list).
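The sketch below is a minimal, hypothetical rendering of these properties, not taken from the holonic literature cited here: each Holon is autonomous (it decides locally whether it can execute a request with its own resources), recursive (it may contain sub-holons, forming a holarchy) and cooperative (it delegates what it cannot handle itself).

```python
from dataclasses import dataclass, field

@dataclass
class Holon:
    """A holon is both a whole (it can execute requests with its own resources)
    and a part (it can delegate to the sub-holons it contains)."""
    name: str
    capacity: int = 0                                     # local resources (autonomy)
    parts: list["Holon"] = field(default_factory=list)    # recursion: a holarchy

    def handle(self, load: int) -> bool:
        # Decisions of limited scope: accept only what local resources allow.
        if load <= self.capacity:
            print(f"{self.name} executes load {load} locally")
            return True
        # Cooperation: negotiate with subordinate holons instead of refusing outright.
        for part in self.parts:
            if part.handle(load):
                return True
        print(f"{self.name} cannot place load {load}")
        return False

# A small holarchy: a factory holon containing two cell holons.
factory = Holon("factory", capacity=1,
                parts=[Holon("cell-A", capacity=3), Holon("cell-B", capacity=5)])
factory.handle(4)   # delegated: cell-A declines, cell-B executes
factory.handle(9)   # no holon has enough capacity: the request is rejected
```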

The notion of functional decomposition is another important ingredient of the holonic concept. It can be explained by Simon’s observation when he says that “complex systems evolve from simple systems much more rapidly if there are stable intermediate forms than if there are not”. In other words, the complexity of dynamic systems can be dealt with by deconstructing the systems into smaller parts.

As a result, holons can be an object, an agent or a group of agents, and they can contain other holons (i.e. they are recursive). Also, problem solving is achieved by holarchies or groups of autonomous and cooperative basic holons and/or recursive holons that are themselves holarchies.

Holonic systems are partly based upon biological and social systems, thus:

  1. These systems evolve and grow to satisfy increasingly complex and changing needs by creating stable intermediate forms that are self-reliant and more capable than the initial systems.
  2. In living and organizational systems, it is generally difficult to distinguish between wholes and parts. Almost every distinguishable element is simultaneously a whole (an essentially autonomous body) and a part (an integrated section of a larger, more capable body).

11.7.3. A transposition

A transposition of these concepts was made a decade ago within the international Intelligent Manufacturing Systems (IMS) program. The holonic organization was extended to the so-called holonic production paradigm at the intraenterprise level. This paradigm was also extended to the hardware (physical machine) and software (control and communication) levels. Now, everybody realizes that a global and open systemic approach applies and is better suited to the development of sustainable production systems. Thus, what is recommended is to switch from these holarchies (which we will continue to keep in mind) toward a more elaborate and structured model. Indeed, such models are those encountered in any complex system (such as the Web and biology). They are characterized by three invariants:

  1. paradox of survival: maintaining perennial solutions and fostering changes through antagonisms;
  2. dialectic between the transformation and diffusion of energy, and information exchange;
  3. dialectic between stocks and flows, thinking tanks and flow of information.

This is of key importance to structure the methodology to be implemented in the area of sustainable systems. More precisely, an illustration of these concepts, in four different application fields, is represented in Table 11.1.

Table 11.1. Characteristics of some complexified systems

Any system: | Molecule | Town | Company | Society
1 – is made of: | Atoms | People | Employees and investments | People
2 – is organized or self-organized to adapt: | Cells; then, humans | Governance | Market | Morals and rules; population behaviors
3 – and reacts against changes and disturbances: | Viruses | Unemployment | Competition, social changes | Economic crises, earthquakes
4 – thus, to develop itself and survive: | Species reproduction | Counties' growth | Profit, wellbeing | Economic and cultural influence
5 – while improving the underlying capabilities of its sub-complex structures (holons): | Brain | Logistics, urban public structures | Holarchies and/or heterarchical organizations | Societal knowledge and consciousness; basic theories and sciences, etc.

What is not said in this table is that in each area potential energy and resources (sometimes scarce or expensive) are used in order to transform raw materials and services (through working procedures that are aimed at transforming “disorders” into orders/organized patterns) into more complex systems, with respect to added value constraints and sustainability (i.e. with a minimum entropy generation).

11.7.4. A comment

To achieve these aims, holons could call on the so-called “swarm intelligence” concept, also inherited from biology. It is defined as the emergent collective intelligence of groups of simple, single entities (like holons). It offers an alternative way of designing intelligent systems, in which autonomy, emergence and distributed functioning replace the control, preprogramming and centralization approaches usually adopted in conventional systems. This is often associated with the concept of “artilects”; the latter, however, will more often be used in heterarchies to conduct auctions, negotiations and evolutive decisions.

In terms of implementation, holonic systems will be shaped as in Figure 11.8, for instance through the Petri Nets technique.

ch11-f008.jpg

Figure 11.8. Holonic system modeling [LEI 08]

In Figure 11.8, a global behavior can emerge from the behavior of each individual holon. This is because we will converge toward an attractor (working pattern, work organization, skills and tasks distribution, etc., with regard to self-organization mechanisms).

11.8. Self-organization and entropy

Self-organization is not a new concept, being applied in many different industrial and economic domains. It can be defined [LEI 08] as the integration of autonomy and learning capabilities within entities in order to achieve, by emergence, a global behavior that is not programmed or defined a priori. A possible way to integrate self-organization capabilities is to move from fixed and centralized architectures to distributed ones, according to the perception of an environment that does not follow a fixed and estimated organization.

11.8.1. Discussing examples

Within the domain of holonic manufacturing systems (HMS), the ADACOR (adaptive holonic control architecture for distributed manufacturing systems) project has been proposed [LEI 08].

It is a holonic control architecture which addresses the agile reaction to disturbances at the shop floor level, being built upon a set of autonomous and cooperative holons, each one representing a factory component which can be either a physical resource (robots, pallets, etc.) or a logic entity (orders, etc.). The manufacturing control emerges, as a whole, from the interaction among the distributed collaborative ADACOR holons, each one contributing with its local behavior to the global control objectives.

One of the major concepts introduced by ADACOR is the adaptive control approach, being neither completely decentralized nor hierarchical, but balancing between a more centralized approach and a flatter one, and passing through other intermediate forms of control. ADACOR adaptive production control shares the control between supervisor and operational holons, and evolves in time between two alternative states, stationary and transient, trying to combine the global production optimization with agile reaction to unpredictable disturbances. This dynamic evolution or the reconfigurability of the control system is supported by the presence of supervisor holons in a decentralized system, and the presence of self-organization capability associated with each ADACOR holon (expressed by the local autonomy factor and proper propagation mechanisms).

In the stationary state, holons are organized in a hierarchical structure, with supervisor holons coordinating several operational and/or supervisor holons. The role of each supervisor holon is the global optimization of the production process. In this state, each operational holon has low autonomy, following the proposals sent by the supervisor holon.

The transient state, triggered by the occurrence of disturbances, is characterized by the reorganization of the holons into a heterarchical-like control architecture, allowing an agile reaction to disturbances. This reorganization is performed through the self-organization of the holons, through the increase in their autonomy and the propagation of the disturbance to the neighboring holons using ant-based techniques. After disturbance recovery, the operational holons reduce their autonomy, evolving the system to a new control structure (often returning to the original one). As we can see, the restructuring of the control system is done so that energy consumption is kept to a minimum. As a result, the integration of these technologies would bring greater efficiency to manufacturing applications [XIA 08, PAR 10].
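The toy sketch below (our own simplification, not the ADACOR implementation) illustrates this alternation between stationary and transient states: each operational holon carries an autonomy factor that jumps when a disturbance is detected and then decays back toward its stationary value, at which point the holon again follows its supervisor. All thresholds and rates are arbitrary illustration values.

```python
class OperationalHolon:
    """Toy operational holon with an ADACOR-like local autonomy factor."""
    LOW, HIGH, DECAY, THRESHOLD = 0.1, 1.0, 0.8, 0.3

    def __init__(self, name):
        self.name = name
        self.autonomy = self.LOW      # stationary state: follow the supervisor holon

    def on_disturbance(self):
        self.autonomy = self.HIGH     # transient state: react locally, propagate

    def step(self):
        # After recovery, autonomy decays and the holon rejoins the hierarchy.
        self.autonomy = max(self.LOW, self.autonomy * self.DECAY)

    @property
    def state(self):
        return "transient" if self.autonomy > self.THRESHOLD else "stationary"

holon = OperationalHolon("machine-1")
holon.on_disturbance()                # e.g. a failure is detected on the shop floor
for t in range(8):
    print(t, holon.state, round(holon.autonomy, 2))
    holon.step()
```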

More generally, when dealing with reconfigurable systems, in which structural reorganization and the emergence of new patterns play key roles, it is crucial to have regulation mechanisms that react quickly and introduce new order and stability against the increase in entropy and, consequently, against chaotic or unstable states. Here, the second law of thermodynamics, which states that the total entropy of any isolated physical system tends to increase over time, approaching a maximum value, is the point we have to focus on.

11.8.2. What comes after holonic systems?

However it is viewed (at a physical-reactive, biological-active, human-cognitive or formal-logical level), the holon cannot be considered as the panacea of evolution. In system modeling, it is a useful concept to represent some behaviors and describe some individual strategies directly related to autonomy. For instance, our experience in heterarchical approaches, through VFDCS [MAS 06] and PABADIS, shows that peer-to-peer mechanisms, game theory, negotiation, etc., as deployed in Web applications, are not sufficient to drive toward sustainable societies.

However, what is happening today is quite important: all our management and control systems, either in economy or industry, have been influenced by the Web; they are at the origin of new paradigms and practices which reinforce the emergence of business models based upon NLDS, systemic approaches, chaos and self-organization. To summarize all the sciences behind these terms and environment, we will call this “network sciences”.

Thus, “network sciences” is the present paradigm: it comes ahead of the so-called “bio-inspired” sciences, even if some embryonic forms of biomimicry are already implemented in evolutionary algorithms and regenerative approaches. To better understand where we are going, in terms of sustainable development and entropy generation, we have to recall a few basic principles behind the so-called term “evolution”.

11.8.3. Evolution

Evolution has been widely developed in this book. In order to see how it applies in our current working life and to highlight its contribution to entropy, and then sustainability, we have to summarize again some of its attached characteristics. In summary, in our subject matter, we will address the five following points covered by the “systems evolution”:

  1. Continuously increasing complexity of any system in nature. This complexity denotes systems that have some or all of the following attributes:
    1. the number of parts (and types of parts) in the system, the number of relations between the parts and interactions are non-trivial, however, there is no general rule to separate “trivial” from “non-trivial”;
    2. the system has memory or includes feedback loops (negative and positive);
    3. the system can adapt itself according to its history or feedback, and its environment; it is relevant to systemic modeling;
    4. the relations between the system and its environment are non-trivial or nonlinear; even a simple/elementary system can generate chaotic behaviors;
    5. the system can be influenced by, or can adapt itself to, its environment;
    6. the system is highly sensitive to initial conditions (SIC).
  2. Increasing differentiation/integration of potential advantages. The development of existing markets or capabilities to maintain a competitive advantage. This development can also be achieved by creating new needs and proposing adaptive solutions in each area where a market (or an opportunity) becomes attractive. Here, the best approach is not to reinvent the wheel, but to select the best opportunities in terms of solutions and to assemble (and integrate) them in order to provide a set of responses best suited to a wide range of needs. This approach is the least expensive, the most valuable in terms of procurement lead time, the most reliable, and thus the one which generates the least entropy.

    The second law of thermodynamics shapes the expected outcomes of diversification, which then leads to differentiation: we may expect a strategy of great “economic” value (growth and profitability) or, first and foremost, great coherence complementary to current activities (exploitation of know-how, more efficient use of available resources and capacities). This is the case both in nature and in industry.

  3. Increasing organization of resulting structures. A lot of considerations are brought to self-organization. Without developing this point, we will just state that there are several broad classes of physical processes that can be described as self-organization. Such examples related to our subjects of interest include:
    1. structural (order–disorder) phase transitions and symmetry breaking;
    2. second-order phase transition, associated with “critical points” at which the system exhibits scale invariance in structures;
    3. pattern emergence and structure generation. The theory of dissipative structures of Prigogine and Hermann Haken’s synergetics were developed to unify the understanding of these phenomena, which include lasers, turbulence and convective instabilities in any “fluids” or “flows”;
    4. self-organizing dynamical systems: complex systems made up of small, simple units connected to each other usually exhibit self-organization and self-organized criticality (SOC).
  4. Increasing relative autonomy of the active entities. Autonomy refers to a concept found at the moral, political, social and bioethical levels before being applied to industrial systems: it is the capacity of a rational individual to make an informed, uncoerced decision. At the moral and political levels, autonomy is often used as the basis for determining moral responsibility for one’s actions, and for governance or control principles. In fact, the theory of autonomy must focus on ethics: while morality covers societal responsibilities and deontology addresses the respect of certain rules and behaviors within a profession, ethics indicates what we have to do, or not do, according to our own conscience and personal perception of what is right and wrong to undertake.

    In the same context, autonomy applied to industrial systems concerns a device (agent or entity) that needs a longer leash, being able to complete complex missions without continuous human intervention. For instance (as defined in [WIK 14]), autonomy can take the following aspects:

    1. in computing, an autonomous peripheral is one that can be used with the computer turned off;
    2. autonomy may also refer to “autonomy support versus control”, “hypothesizing that autonomy-supportive social contexts tend to facilitate self-determined motivation, healthy development, and optimal functioning”;
    3. in mathematics, a differential equation is said to be autonomous if it is time-independent;
    4. in automation, robotics and holonism, autonomy means independence of control. This characterization implies that autonomy is a property of the relation between two agents. The introduction of self-sufficiency, situatedness (situated systems), learning or adaptive features increases an agent’s degree of autonomy, according to Rolf Pfeifer’s studies;
    5. in economics, an autonomous consumption is a consumption expenditure when income levels are zero, making spending autonomous to income;
    6. in governance, autonomy means self-determination, or independence versus other political constraints (either social or community types).
  5. Increasing importance of the “telos” concept. “Telos” is the root of the term “teleology”, the study of purposiveness, or the study of objects with a view to their aims, purposes or intentions. Teleology figures centrally in Aristotle’s biology and in his theory of causes: this means that “form” follows “function”. The ultimate purpose is to obtain a goal-oriented process.

For example, we have eyes because we require eyesight; not that we developed eyesight because we happen to have eyes. This is of most importance: teleological interactions are like social interactions; they are the result of purposeful goal-directed behavior in both biological and technological systems.

11.8.4. Consequences

Applying the aforementioned evolution principles to advanced manufacturing systems is equivalent to thinking in terms of the “evolution of manufacturing systems”. Within this context, we will be ready to implement concepts related to “network sciences”. In fact, this is a transitional step toward something more evolved. It is a way to introduce the era of “Intelligent Manufacturing Systems”, as specified in the IMS program, and to prepare a new paradigm shift toward bio-inspired systems. Indeed, considering that our knowledge of biomechanisms is not even a millionth of what we should know about life science or nature, it would be pretentious to claim that human beings are able to build truly “bio-inspired” systems. There is a huge gap between our intentions and the reality of the facts.

Presently, as per our level of knowledge and experience, the final capabilities required for these IMSs can be expressed in four different ways:

  • – self-preservation (homeostasis): the agent possesses the characteristics that permit it to maintain its structure “as such” (pattern) independently of the environment, components and programs it is composed of;
  • – self-adaptation (organizations): the agent is part of a wider system; it is able to adapt itself, to link up with other superordinate agents or holons, and to change behavior; that is to react mechanically, biologically or intentionally to the stimuli of the other superordinate agents; this supposes an ability to evolve (mutation) and select the best options;
  • – self-transcendence (ability to evolve and innovate): an agent may gain its own new and emerging qualities, not existing in the agents that it includes. With such properties, then not only is the universe dynamic but it is also “creative”, since it causes new properties to emerge for the subsequent inclusion of agents (morphogenesis);
  • – self-dissolution: the agents break up along the same vertical lines they used to form; the process of subsequent inclusion in an upward direction is transformed into a process of subsequent break up or splitting (ability to acquire new impulses crossing a breakthrough and bouncing in a new paradigm).

Again, as far as we can see, the main concept behind all these characteristics is “self-organization”. This concept is strongly related to the activity of so-called programmable networks. In addition to the fact that such networks, because of their dynamicity, can generate new patterns, the characteristic on which we intend to return is related to the interactions, that is to say to network feedbacks or, said differently, communications.

Communication helps us to reduce uncertainty; however, like a form of entropy, doubt and ambiguity may eventually creep back into a relationship if there is no reinforcement over time. This is where the true value of phatic communication lies, as it helps maintain these connections (by reinforcing trust in future interpersonal interactions) until more significant interactions occur.
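This link between communication, uncertainty and entropy can be made concrete with Shannon's information entropy (our own analogy, not a measure defined in the text): the uncertainty a receiver has about a partner's state is the entropy of its belief distribution, and each informative message sharpens that distribution and lowers the entropy.

```python
import math

def shannon_entropy(probs):
    """H(p) = -sum p_i * log2(p_i), in bits; 0 bits means no remaining uncertainty."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Belief about a partner's status before any communication:
# four equally likely states, i.e. maximum uncertainty (2 bits).
before = [0.25, 0.25, 0.25, 0.25]

# After a (hypothetical) status message, the belief concentrates on one state.
after = [0.85, 0.05, 0.05, 0.05]

print(f"before: {shannon_entropy(before):.2f} bits")   # 2.00
print(f"after : {shannon_entropy(after):.2f} bits")    # ~0.85
```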

However, it must be remembered that networked systems and NLDS have converging attractor valleys, which limit the system’s divergences (in terms of potential dissipation): the evolution of the system is mostly maintained at the bottom of these valleys, which contributes to its stability.

So, as already stated, networking and self-organization are the contributing factors for reducing entropy generation, but this generation is not equal to zero since it is a dissipative process.

11.9. Analysis of new trends in sustainable production systems

11.9.1. Introduction

In this section, to avoid any misunderstanding, we are discussing two points:

  1. Production systems: it is necessary to recall that “production” is a general term which does not only apply to “industry”. It consists of the transformation of several products and components to provide a final product or service to users or customers. Production can apply to industry, finance, consulting, administration, tourism, etc.
  2. Trends: this book also addresses some underlying principles that we will use in the next years to improve our production systems. It is then necessary to position them, in terms of sustainability, against the old paradigms and the new ones. To avoid too many duplications, we will limit our developments to the few paradigms noted above.

The first trend in production system development consists of applying global and systemic approaches relevant to “ecosystems” and “network sciences” to provide our society with more sustainable systems.

The second trend will consist of introducing autonomous behaviors and life mechanisms inspired from biology. Before measuring the impact of global bio-inspired technologies, we will study how they are implemented into some existing approaches, such as regenerative methods, which have been applied, in recent years, for carrying out IMS.

11.9.2. Research and development

Here, research and development can be classified into two groups:

  1. Evolutionary algorithms, in the family of regenerative modeling. They are inspired by biology and include genetic algorithms, simulated annealing, ant colony optimization and particle swarm optimization. They are applied in computer-aided process planning (CAPP) applications [MAS 06, WAN 09, SHA 09].

    In this area, two main classes of fitness functions exist: one where the fitness function does not change, as in optimizing a fixed function or testing with a fixed set of test cases; and the other where the fitness function is mutable, as in niche differentiation and coevolution (a minimal sketch of the first class follows this list).

  2. Holonic, then genetic and biological production systems are the most remarkable concepts. Biological organisms have two types of information: genetic information (DNA-type) and knowledge information (brain/neural-type or BN-type). DNA-type information evolves through successive generations, according to the evolution of the systems, while the BN-type information is achieved during the lifetime of one organism by learning [BRU 95]. Unification of information makes organisms show functions such as self-recognition, self-growth, self-recovery and evolution [UED 00].
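The toy genetic algorithm below illustrates the first class mentioned in point 1 above (a fixed fitness function) by evolving a bit-string "genome" – DNA-type information – toward a fixed target through selection, crossover and mutation. All parameters and the target pattern are arbitrary; this is a sketch, not a production CAPP tool.

```python
import random

random.seed(1)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]     # fixed goal pattern
POP, GENS, MUT = 30, 60, 0.05

def fitness(genome):
    """Fixed fitness function: number of bits matching the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome):
    return [1 - g if random.random() < MUT else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
for gen in range(GENS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break                                # target pattern reached
    parents = population[: POP // 2]         # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

print(gen, fitness(population[0]), population[0])
```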

As an applicative example, the technical product data could determine which processes are required by the product transformation while the process data would specify which tools and equipment on which corresponding machines are operated. However, the BN-type information consists of the rules for cooperating machines in order to carry out a given process. Machine tools, transporters, robots and so on should be seen as biological organisms, which are capable of adapting themselves to environmental changes.

In order to realize such bio-inspired models, agent technology is generally used to implement the intelligent behaviors of the system, such as self-organization, evolution and learning [UME 06, MAS 08]. Reinforcement learning methods, case-based reasoning or even pattern recognition techniques can be applied to generate the appropriate rules that determine the intelligent behaviors of the machines.

As we can see, the contribution of biomimicry is just a follow-on step of the so-called “network sciences”. Indeed, research and development activities based on bio-inspired technologies first require us to control the underlying self-organization principles to be implemented in the models. They then introduce and integrate the notion of a swarm of cognitive agents. Here again, swarms can be associated with collective intelligence (interactions among the agents in a programmable network), while cognitive agents could use evolutionary algorithms to generate their own knowledge about rules to be applied, for instance, in process planning and control. All of these concepts cooperate to generate the whole schedule of the system.

The advantages of the existing concepts are inherited and integrated into the IMS-BP concept.

ch11-f009.jpg

Figure 11.9. Actual production systems classification. The graphic remains incomplete as it does not include advances related to cognition and brain mechanisms

The way bio-inspired mechanisms are used in production systems is still conventional: when there are no unexpected disturbances or changes on the production shop floor, a raw material becomes the final product by combining the DNA- and BN-type information generated from the first generation. In fact, in life sciences, we discover every day that our knowledge and assumptions are wrong, since the underlying mechanisms are much more complicated than expected: they always combine several underlying principles, but also antagonisms; in the end, the decision is not the result of a computed algorithm but the emergence of a pattern. What we use is just a simplified mimic of a distant reality: there is still a lot of room for enhancement.

11.9.3. Emergence of modern networking: concepts and entropy

Our way of life is entirely based on vital networks. They have been significantly influencing our lives for several centuries. Nowadays, however, we tend to refer only to corporate networks, social networking, e-marketing, etc. Without them, there is no economy, but more importantly, there is no life: no water, no electricity, no power, no transportation and logistics, no telecommunications, information or collective intelligence: our modern world has been shaped around the networks.

The first characteristic of modern networking is that all emerging global networks (physical and informational ones) are intertwined, interconnected and interdependent. Any tiny problem propagates across many different networks and causes, in turn, a cascade of disasters and catastrophes, each one having an economic impact associated with a social crisis:

  • – in January 1998, in Canada, because of an electrical incident, the government had to consider and organize the evacuation of 2 million people in the city of Montreal;
  • – the fall of the World Trade Center in New York City caused outages and Web failures in South Africa (September 11, 2001);
  • – an electrical power plant failure near the North Sea caused a huge power failure in Italy, Germany and parts of France (November 2006);
  • – the Fukushima earthquake interrupted, for several weeks, the manufacturing of electronic components intended for European car makers in the automotive industry.

The second common and global characteristic is that networks are essential for meeting basic and vital needs of the population, economy, security and local or global governance. The human species is now dependent on these ubiquitous and virtual networks: we use them but do not control anything.

Indeed, the development of networks has been done gradually, insidiously, from holonic networks: these holons (or specific agents) are highly interconnected and generate global behaviors that are qualitatively foreseeable but not quantitatively predictable.

Concerning risk management, the Internet, the cultural addiction to the Web and the reliance on economics have created unpredictable phenomena and have caused and amplified unpredictable disasters, whose impacts remain unassessed.

The third characteristic of the networks is related to the sustainability of any human achievement: in terms of entropy, what can we say? Possible answers are discussed in the following.

11.9.4. Evolving organization of the networks

Complexification first took place by activity sector, that is to say uniformly and consistently: electrical power systems, then energy transportation networks (e.g. oil), then information networks, etc.

In the second stage, there was an integration and a growing interdependency within these sets of heterogeneous networks: for instance, information networks were coupled with electricity networks (information circulates over electrical wires, while these wired networks are managed by information networks), etc.

Information networks, even if they are issued from different backgrounds and based on very diverse structures, are relatively homogeneous: there are few technology providers; the development tools are quite compatible and common. Organizationally and conceptually, this means that diversity and disorder are contained within acceptable limits: in terms of sustainability, this is a good indicator.

On the other hand, those who develop and implement Web applications are a crowd of small and private operators who act independently according to their own sensibilities and interests, without any meta-control except the limited one provided by the service providers. In fact, their activity is relatively well-framed and aligned with Web technology. All the new features made possible by the development of these global networks therefore lead to a limited, if not low, generation of entropy.

11.9.5. Impact of disturbances

“Disaster” or “bifurcation” effects (as defined in NLDS) can be very important and follow the power law that characterizes complex systems. Their frequency itself follows a James–Stein distribution, as was shown in the field of high-tech computer networks [MAS 08]. Moore’s law is also applicable to these complexification processes (not only for computers but everywhere in nature).

As a summary, the networks in which we are embedded regulate our lives and are such that any local, minor and unexpected failure (and this is not a rare event) often has a global impact on the world (the SIC property of NLDS).

These facts, now being discovered by some specialists in the network area, have been evident for a long time in companies such as IBM [MAS 06]. Indeed, people talk about the assembly of critical systems, the combination of several minor incidents that overlap in unexpected ways and cause the emergence of major and widespread failures, and so on.

But the risk management concept, a quarter of a century ago, was different. The phenomena related to the migration and propagation of faults in electronic circuits were considered inevitable and unavoidable. Computer technology did not have the reliability that we know today. When a large computer consisted of more than 80,000 components, we were faced with this problem daily: we tried to improve the overall reliability and to integrate that risk into our procedures (a computer crash at a major customer site was a disaster that could be anticipated but not planned for). It is a kind of anticipation which is no longer accepted today.

In the name of sustainability, fewer and fewer risks are tolerated; thus, a major disaster or industrial accident is not well accepted in our Western countries. This objective is sometimes puzzling since there is now a confusion between “hazard” and “risk”. This can be summarized as follows:

  • – in any system, the first goal is not to prevent, but to delay, the onset and spread of a major incident or crisis: such a strategy is easier to implement and remains more comfortable;
  • – death, as a general concept (either for a human, a cow or an enterprise), is not always easily accepted by the population;
  • – another goal is to seek those responsible for a tragedy and to punish them;
  • – finally, important efforts consist of requesting financial aid and damage compensation.

Our society has adopted a thoroughly greedy approach: risk management is guided by money and social management, and not by technical and entrepreneurial considerations. Indeed, a “risk” is seen as the probability of a situation that could seriously affect the physical integrity of a person or physical goods (it is associated with possible “damages”). For instance, it is said that we run the risk of catching a cold when going out, bareheaded, in cold weather, whereas it is hazardous (we are endangered) to cross a street without looking at the car traffic. Hazard creates fear and calls for caution: fighting the causes of hazard requires more skill, courage and sometimes recklessness. In NLDS, unexpected situations require unexpected decisions.

As we can see, the notion of “entropy” cannot be the deep concern of the decision-maker; sustainability has become a societal concern and its definition still has to be refined. Our society, through network theory, has to rediscover what the term “system vulnerability” means. We have disregarded the experience and methods of yesteryear, as if they had been thrown into the garbage.

For example, when we state that “creating a dependency relationship between two networks, even if each infrastructure is extremely robust, weakens the global system”, this is disturbing for many people and can be bewildering. Indeed, the Internet constraint is simply related to the fact that it is an assembly of computer network systems often designed and developed to run in a local and self-sufficient way.

To build the complete Web network, no “dynamic system” approach has been applied:

  • – interactions are of different natures and are not considered as essential integration features (interactions being much more important than the intrinsic function of a given application);
  • – the concepts of decoupling are not integrated into product design;
  • – security and safety are not considered in terms of “isolation” but in terms of “localization” (which is not the case in electronics or automation);
  • – lastly, there is a misguided desire to steer and control these complex networks even though they are autonomous and self-organized.

In brief, to better control these networks, it would be necessary to design and develop even more complicated systems because, to recall a simple observation, “we do not control the human brain with a single neuron”.

Returning to sustainability, the objective is not to add more entropy to the system, because this would not increase its durability or survival, but simply to change our way of thinking through a new paradigm.

11.9.6. Lean concepts: continuous flow manufacturing (CFM) and just-in-time (JIT)

These two concepts, developed in industry, are not fully similar since they address complementary problems and objectives [ELM 97] in planning and scheduling. They belong to the “lean” approaches and can, however, be easily applied in our context: the reliability of the global network directly depends on the fact that all of the subnetworks are used and grown just-in-time.

In practice, the growth of most specific networks is always a posteriori, under the pressure of demand, according to changing needs and, finally, to fulfill some stability requirements. Indeed, cost and performance factors are important goals reflected in international competitiveness, and the choices, in terms of investments, are very tight. Continuous dynamic simulation can be used to adjust the parameters of such strategies, as well as to assess the risks and limit backorders; a minimal simulation sketch is given below.
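
As an illustration of such a continuous dynamic simulation, the following minimal sketch, assuming a purely hypothetical daily demand distribution, lead time and safety-stock levels, runs a toy just-in-time replenishment loop and counts the resulting backorders for two investment levels.

```python
import random

def simulate_jit(days=365, safety_stock=5, lead_time=2, seed=1):
    """Toy just-in-time replenishment loop: each day we order exactly what is
    needed to restore the safety stock; orders arrive after `lead_time` days.
    Returns the number of backordered units over the horizon."""
    random.seed(seed)
    on_hand = safety_stock
    pipeline = [0] * lead_time          # orders in transit, indexed by days-to-arrival
    backorders = 0
    for _ in range(days):
        on_hand += pipeline.pop(0)      # receive today's delivery
        demand = random.randint(0, 10)  # illustrative daily demand
        shipped = min(on_hand, demand)
        backorders += demand - shipped
        on_hand -= shipped
        # JIT rule: order just enough to come back to the safety stock level
        pipeline.append(max(0, safety_stock - on_hand - sum(pipeline)))
    return backorders

# Compare two "investment" levels (safety stock) to assess the backorder risk.
for s in (2, 10):
    print(f"safety stock {s:2d} -> backordered units: {simulate_jit(safety_stock=s)}")
```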

Thus, when choices are made by decision-makers, they are perfectly aware of the situation. For this reason, and by comparison:

  • – industrial systems also operate in lean mode, that is to say, with minimal inventories (in terms of equipment and production in progress);
  • – energy networks, information networks and logistics services are developed in the same way, with a minimum investment. Thus, in the case of major breakdown of some critical networks (electricity, transport and virulent pandemic), society is always the first to feel its impact. Nevertheless, we must also remember that society is responsible for it because it is at the origin of the deviances:
    • - as a consumer, it imposes given practices and competitive factors;
    • - as a user, it requires everyday more computing power and mobility;
    • - as a government, it deploys informatization of new usages: e-administration and e-commerce, etc.

Service providers, in parallel, cover the logistics and are in charge of delivering the right amount of information, goods and services at the right place, right time and right cost. As a producer, society likewise limits its investments and production costs.

As often mentioned in this book, the world exists in duality: it enables us to find the right equilibria (convergences) and avoid a complete energy/temperature dissipation, which is the signature of complete disorder (high entropy). In this sense, speciation, as described above, is a good way to reduce entropy generation and ensure better sustainability.

Nowadays, in the networks, we conduct preventive storage only when strategic coverage is required (to protect against competition problems) or to satisfy precautionary principles (critical components, information or product supplies, etc.).

This is an unsustainable situation because when stocks and inventories are allowed, as when costly solutions are set up to control the dysfunctioning of our networks, we accept imperfections and regard poor quality and lack of reliability as normal in the design and development of human achievements.

Here, the real questions are: how and why do NLDSs evolve and diverge? Toward what type of behavior can they converge? What are the long-term effects of such a situation?

Indeed, all systems issued from complexification are unpredictable; it is important, as already described [MAS 06], to further explore possible approaches to the simplexification of a subject system rather than complexifying control procedures, which can only contribute to increasing its entropy, thus further reducing its sustainability.

11.9.7. The general problem of “decoupling” processes

In order to reduce the lockout effects, the differences and the impacts of disasters, we can introduce internal chaotic disturbances. This is well known in automation and consists of:

  • – starting a small and voluntary lockout in a well-specified network to avoid the spread of a phenomenon and to prevent bifurcation (this reduces diversity, and hence the generation of entropy);
  • – introducing a chaotic-type noise to disturb the operation at a network node level and “break” non-required dynamicity of the system;
  • – changing the status of a downstream demand level, to counteract a disruption, etc.;
  • – changing the back propagation weight factors;
  • – conducting decoupling, by removing feedback loops: this allows the system to be split (through partitioning and grouping techniques) [MAS 08]; a minimal sketch of such a decoupling is given just after this list;
  • – carrying out a better definition of the application granularity and the network K-connectivity to keep the networks more reliable and viable: indeed, it is through these parameters that we can keep a complex system converging toward a given attractor type.
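
As a minimal sketch of the decoupling idea mentioned in the list above, assuming a small, hypothetical dependency graph between subsystems, the following code removes the “back edges” found by a depth-first search, which opens every feedback loop and leaves an acyclic, and thus more easily partitionable, structure.

```python
def break_feedback_loops(graph):
    """Return a copy of `graph` (dict: node -> list of successors) in which the
    'back edges' found by a depth-first search have been removed, i.e. every
    feedback loop is opened and the remaining structure is acyclic."""
    removed = []
    acyclic = {u: list(vs) for u, vs in graph.items()}
    state = {u: "new" for u in graph}            # new / active / done

    def dfs(u):
        state[u] = "active"
        for v in list(acyclic[u]):
            if state.get(v) == "active":         # this edge closes a feedback loop
                acyclic[u].remove(v)
                removed.append((u, v))
            elif state.get(v) == "new":
                dfs(v)
        state[u] = "done"

    for node in graph:
        if state[node] == "new":
            dfs(node)
    return acyclic, removed

# Hypothetical network of coupled sub-systems containing two feedback loops.
network = {"A": ["B"], "B": ["C"], "C": ["A", "D"], "D": ["B", "E"], "E": []}
dag, cut = break_feedback_loops(network)
print("edges removed to decouple the system:", cut)
```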

What is important to know is that these networks are always subject to self-organization; we can only orient them in a given direction: we should neither try to control their complexity nor attempt deglobalization. In the Internet or the Web, as in nature, networks always evolve irregularly and in a non-reversible way, without possible backtracking.

11.9.8. Network and Web sciences

Many scientists focus on network theory to study and understand what is happening with the networked world which surrounds our society.

In order not to rediscover already-known approaches, or to stumble against some still unknown paradigm, such as those relevant to bio-mimicry, it would be advisable to highlight some underlying principles and mechanisms already used in economic and social sciences (and already applied in enterprise engineering).

Indeed, when analyzing the effects of some major disturbances occurring in a network, we may report that:

  • – the conventional notion of distance has no meaning (in terms of transportation cost, delivery time, troubleshooting time propagation and planning) because the scheduling of needs can be shifted according to their location around the world;
  • – the notion of informational distance between two database servers is usually very low (as we have already seen), due to the fact that the structure of the Web is very tangled (with fast transmission circuits such as those found in the brain, through axons and wireless communications) and also because of the high concentration of some critical nodes;
  • – meta-governance of networks does not exist (in terms of real-time supervision of the whole network structure); there is no detailed mapping (it is like a black box) and no overall network management. For these reasons, the ATG, at IBM, developed approaches based on the automatic generation of asynchronized orders to test the capability of a system or network to detect and recover from complex dysfunctionings. Similarly, implementing in such NLDS some control functions that are still more complex than the system itself can only carry technical risks, e.g. quick obsolescence of ongoing planned actions (due to quantitative demand changes and permanent reconfiguration requests leading to an additional management of FFBMs, of the availability of resources with their associated constraints, etc.), side effects due to unreliable modifications, improvements and enhancements, etc.;
  • – the granularity of nodes, or size of servers, and the network K-connectivity are key elements of security and network performance. Here, the study of cellular automata and automata networks will provide key information in terms of sustainability;
  • – now, concerning NLDS-type approaches: the Verhulst formulations (related to the study of chaotic inventory distributions; a minimal iteration of the Verhulst map is sketched just after this list), as well as those of Boltzmann (for the elastic transform of component flows in interconnection links, e.g. PLOOT [MAS 06]), are essential to better understand the dynamic behavior of such systems. It is a big mental change since, in most of our engineering offices, people often reason in a static manner;
  • – architecture considerations: the gradual and continuous expansion of networks is often done without reconsidering the structure of the backbones and highways. This causes great instability of the system, with flow and density fluctuations associated with large and overly strong couplings, diffuse feedbacks within the network, etc.
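
To give a concrete feel for the Verhulst formulation mentioned above, here is a minimal sketch, with purely illustrative growth-rate values, that iterates the Verhulst (logistic) map and contrasts a parameter value that converges to a steady state with one that produces chaotic, non-repeating fluctuations.

```python
def verhulst_trajectory(r, x0=0.2, steps=60):
    """Iterate the Verhulst (logistic) map x <- r * x * (1 - x), a minimal model
    of self-limited growth (here read as a normalized stock or flow level)."""
    x, path = x0, []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        path.append(x)
    return path

# r = 2.8: convergence to a stable level; r = 3.9: chaotic, non-repeating fluctuations.
for r in (2.8, 3.9):
    tail = verhulst_trajectory(r)[-5:]
    print(f"r = {r}: last values = {[round(v, 3) for v in tail]}")
```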

Lastly, one very important effect of network implementation is related to the standardization and unification of concepts:

  • – in fashion, products are produced and sold under the same brands to most of the world’s population;
  • – in world culture, there were over 30,000 different languages in use at the beginning of the last century. This number, due to a few dominant countries, could be reduced to only several hundred different languages;
  • – in terms of energy procurement, the same standards and specifications are adopted by all the countries;
  • – economies are interconnected and interdependent at the same time. Practices and regulations are implemented and controlled through global and international organizations, such as the IMF and the ECB, in banking and finance;
  • – in web applications, the same architecture, tools and applications (as mentioned before) are used by more than 2 billion people around the world.

As a result, the developments and evolutions of our society are carried out with a minimal entropy generation, knowing that the entropy in some specific areas has decreased.

To specify some of our statements related to entropy creation, it is useful to introduce some additional concepts to be applied to networked information systems involving technologies such as the Internet, social networks, the World Wide Web and other embedded applications. In many recent studies, it is said that the utility of such networks follows Metcalfe’s law.

The number of links (edges) in a complete graph comprising n nodes is equal to:

\[
\frac{n(n-1)}{2}
\]

which can be approximated by \( n^{2}/2 \) as n grows. Metcalfe’s law stipulates that the more links there are within a network (that is to say, possible “pairs of connections”), the more people can be interconnected and the more valuable the network is.

Nevertheless, Reed’s law is more suitable for this kind of analysis: it is an assertion formulated by David P. Reed which states that the utility of a network, such as a social network (Facebook, etc.), can scale exponentially with the size of the network. Indeed, the number of possible non-trivial subgroups of network participants is:

\[
2^{N} - N - 1
\]

where N is the number of participants (nodes). It is quite normal to reason like this because people often belong to several groups of interest. This utility grows much more rapidly than Metcalfe’s law, so that even if the utility of groups available to be joined on a peer-to-peer basis is very small, the network effect of potential group membership can dominate the overall economics of the system (a small numerical comparison of the two laws is sketched after the list below):

  • – so, if we are registered alone in a network, we will find it useless; but if everyone we know is also part of the network, sharing and exchanging information begins to make sense;
  • – the content of collaborative tools (e.g. Wikipedia) is enriched over time and becomes globally more consistent (efficient, unique, complete and neither redundant nor contradictory), faster than the growth in the number of contributors.
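
As announced above, here is a small, purely illustrative numerical comparison of the two growth laws: the number of pairwise links (Metcalfe) versus the number of non-trivial subgroups (Reed) for a few network sizes.

```python
def metcalfe_value(n):
    """Number of possible pairwise links in a complete graph of n nodes."""
    return n * (n - 1) // 2

def reed_value(n):
    """Number of non-trivial subgroups of n participants (Reed's law): 2^n - n - 1."""
    return 2 ** n - n - 1

for n in (2, 10, 20, 30):
    print(f"n = {n:2d}: Metcalfe ~ {metcalfe_value(n):>9,d}   Reed ~ {reed_value(n):>13,d}")
```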

Consequently, the networks described above, with their pros and cons, are the best way to develop our capabilities when they are associated with a minimal entropy generation. For all these reasons, we can say that the networking of our society is a rather sustainable process, able to develop the human species, and hence DNA, at the lowest thermodynamic cost.

11.10. Artificial life and collective thinking science

Many people and scientists talk about bio-inspired systems. According to Benyus [BEN 97] and Paulo Leitao, it is time to start developing bio-inspired systems. It is necessary, however, to point out some actual practices related to bio-inspiration. Many scientists, architects, etc., implement innovative processes, shapes or patterns in their work. The problem is that these are sometimes poor mimics of reality, where they simply create a nice solution to a given problem or develop an alternative to a computerized algorithm, etc. Moreover, they are applied in a static way: do we know how and why such a pattern has emerged, where it comes from and what it will become in the future? What are the global interactions, constraints and embodiments associated with this pattern? What kind of dynamicity does this shape entail? What can we do with this solution for the living? Does mimicry give better sustainability over time? Why? It is not just the perpetuation of a situation or a system (this is a static and defensive position) but a plan for switching toward a new paradigm.

11.10.1. General comments about bio-mimicry

The first case study is related to the development of either raw materials (e.g. a specific iron+carbon alloy), a new functionality (e.g. a new medicine to fight an illness) or an alternative solution (e.g. salt extraction performed by some living organisms). Large databases have been set up within this framework. They also enable geologists and engineers to work together, develop transdisciplinary skills and generalize the system analysis approach, which is useful to address problems related to global and sustainable contexts.

Even if each scientist or engineer is at first only concerned about their own problems, it is important that they remain open-minded to discoveries at the border of different disciplines and to transpose them in innovative areas.

Here is a fundamental set of choices for managing skills in an enterprise:

  1. Working within complex systems requires us to focus attention on structure and not on function.
  2. As already stated, it is much more important to ask questions such as “how” rather than “why”. This denotes a less diversifying reasoning process, leading to low entropy generation.

  3. In biodiversity, as in a company, everyone can play a useful role; everybody is important and complementary to others: when facing a peculiar situation, a deviant organism will be able to satisfactorily solve the problem.
  4. Resolving a complex problem requires us to think in terms of reconfiguration and self-organization, which are both entropy reductive, and not in terms of optimization and constraint programming, which generate entropy.
  5. Ecosystems of type #3 are those for which the growth rate is low; the available surrounding resources, however, are used in an optimal way. Everything rejected by an upper level system is dismantled, then exploited or transformed by another lower level system, and finally recycled and reused by the upper one, and so on. Except for the energy brought by the sun, everything works in a closed-loop system where many agents are involved in a complementary way. Such systems are very self-protective against external disturbances. Moreover, because of their low level of dissipation, they generate a minimum of entropy.
  6. Focus on system analysis approach as it is done in nature, etc.

For over a decade, several examples have existed in our daily lives, where engineers are exploiting the reusability of products and services to improve the sustainability of the whole economic and industrial world. For instance, as defined in the IMS program:

  • – recovery and renewal of electronic components (with a good reliability rate), taken from higher-level assemblies returned from the customer area, to be reused in different features and new assemblies;
  • – reuse of the building foundations in construction (Japan);
  • – heat issued from a nuclear power plant used to improve the production efficiency in fish farming;
  • – dismantling in the car industry, where many parts and subassemblies can be recovered to supply the second-hand market in the so-called “development economy”, etc.

11.10.2. Bio-inspired information systems

The second application field is related to the organization and management of complex systems. Indeed, conventional optimization, monitoring and control are dissipative approaches, and thus entropy generative. Another way to manage such systems is to look at nature since, for several billion years, it has evolved by implementing innovative and sustainable complex systems. Before developing this point, we must analyze the evolution of the physical implementation of a DSS or information system.

ch11-f010.jpg

Figure 11.10. Evolution of an information system architecture

Figure 11.10 details the evolution of the main information system architectures. Even if some terms, such as “intelligence”, seem inappropriate (“smart” being more suitable), we can see that it successively integrates:

  • – telecommunications and communication networks, which were first private networks before becoming public and accessible to everybody. We are in the conventional era, where the granularity of the applications is quite high and associated with closed communication protocols. The extended enterprise concept starts to be developed, and properties of flexibility and adaptivity start to be deployed;

  • – ubiquitous evolution toward the notion of pervasive computing. Nowadays, information systems are ubiquitous (whatever the products and facilities considered); this is the opposite of the customized PC concept, where a resource is dedicated to a specific use. Information processing is more related to a specific “work” and “network” organization; it is embedded everywhere and integrated into smart products and devices, from clothing to tools, appliances, cars, homes, the human body and even a coffee mug, each of which can carry chips connecting it to an almost infinite network of other devices. Many electronic features (such as mobile tags, wireless sensors, RFID, etc.) are used in parallel with several integrated computing devices or information systems working simultaneously in an information network (such as the Internet, a LAN, etc.). Thus, the goal of pervasive computing is to create an environment where the connectivity of devices is embedded in such a way that it is unobtrusive and always available. We are deeply integrated in the so-called extended open applications and networked intelligence. Due to social networks, we are now switching from distributed intelligence to collaborative intelligence;
  • – ambient intelligence (AmI) is the result of fully pervasive computing. It refers to electronic environments that are sensitive and responsive to the presence of people and various communicating objects. Here, we are in the area of the Internet of Objects or Internet of Things. Such information systems, presently planned, are more devoted to “home” and “health” environments, due to the many mobile internet devices (MIDs) and localization devices which are autonomous, interconnected and associated with new interfaces that can be used in mobile applications. Within this vision of the future of consumer electronics, telecommunications and computing environments, the AmI world can provide services and support to people in carrying out their everyday activities, tasks and rituals in an easy, natural way, using information and intelligence hidden in the network connecting these devices. As these devices grow smaller, more connected and more integrated into our environment, the technology disappears into our augmented surroundings until only the user interface remains perceivable by users. It is a user-centric paradigm which supports a wide variety of embedded and distributed artificial intelligence methods; it works pervasively, non-intrusively and transparently to aid the user, to exploit knowledge and to make collaboration between people easier, making them aware of their activities and context.

We suggest, however, completing such a graph with an additional step related to the worldwide usage of the Internet. Indeed, conventional planetary networks will continue to evolve and generate mutations leading to cultural and governance paradigm changes, and AmI will keep developing thanks to pervasive computing. However:

  • – governance and the organization of our activities will change due to a better perception, analysis and processing of our consciousness and unconsciousness. The content of user-centric processing will evolve (as will the architecture of its associated IS). Indeed, something pervasive cannot be efficiently controlled by laws and procedures; here, ethics becomes a challenging approach;
  • – the conventional, large information system architectures will be destructured by cloud computing capabilities, as we will see later; these enable more intensive processing, and this will change the structure of large industrial and business applications such as business intelligence, event anticipation, organization and planning, etc. We are in the simplexification and decoupling era;
  • – meta-governance, self-learning and self-organization will force decision-makers to develop and introduce innovative concepts and tools for enhancing or introducing paradigm changes in the control and management of large and complex organizations. It is the end of the dictatorial eras;
  • – the “Internet of the Things” will evolve, however, in a very natural way to become the “Internet of the Mind”.

As we can see, most of these changes limit the generation of entropy. Indeed, we are heading toward more harmonious and consistent environments. However, as stated in the last step above, we are also heading toward a complexification: the Internet of the Mind will be followed by an explosion of knowledge. New thinking will emerge; new ideas, concepts, models, intellectual assets, cultures and spiritualities will be generated, and this disruptive step will be associated with a new diversity, that is to say, an increased entropy.

11.10.3. Reminder of bio-inspired technologies and their sustainability

Compared to network sciences, bio-inspired technologies mainly bring two additional and sophisticated capabilities:

  1. Swarm intelligence

    In the natural environment, collective intelligence is carried out by simple interactions of individuals. Swarm intelligence is established from simple entities, which interact locally with each other and with their environment. Nevertheless:

    • – collective intelligence can be the result of limited cognitive abilities (e.g. an ant population communicating through chemical substances called pheromones);
    • – reinforcement technique: in an ant colony, the shortest route from the nest to the food source is found through this technique. When the first ant finds the food source and returns to the nest, it leaves behind a pheromone trail; the evaporation of the pheromone trail reduces its attractive strength, and this evaporation avoids convergence to a locally optimal solution. Ants can follow many possible ways from the nest to the food source and back again, and progressively select the more attractive, shortest route (a minimal pheromone-update loop is sketched after this numbered list).

    In manufacturing systems, seen as communities of autonomous and cooperative entities, self-organization is carried out by reorganizing the structure, through local modification and matching between machine capabilities and product requirements [MAS 08, LEI 09]. Each machine has a pheromone value for a specific operation, and the machine with the shortest processing time for a specific operation has the highest pheromone level, without external intervention.

    In the bio-inspired concept, swarm intelligence technology can be applied to the integration of manufacturing scheduling and control, where the manufacturing architecture is a swarm of agents. Each agent represents a manufacturing resource such as a robot, a machine tool or a workpiece. These agents use the ant colony algorithm to generate better operation planning, and then negotiate to generate the whole schedule for the system. The embedded intelligence and learning skills of each agent determine the flexibility degree of its behaviors. This is the same approach that is going to be implemented in the next generation of air traffic control: more autonomy will be assigned to each plane, and the routing control will be performed by the planes themselves, in interaction with the other planes in a given neighborhood.

    Here, swarm intelligence technology optimizes resource use efficiency by collectively improving the solution; it is concomitant with waste minimization since, over time, the proposed path (or logical solution) produces increasingly less waste at the individual level, providing a positive action. Indeed, according to the second law of thermodynamics, high-entropy wastes are incompatible with the low entropy generation inherent in nature’s biosystems. Solutions based on the system’s integration enable us to capitalize on the energy (experiences and errors) embodied in previously wasted solutions. This waste thus becomes a feed stream, or asset, for new solutions and reusable experiences: then, the entropy produced by the solutions, especially due to inappropriate strategies, will remain low.

  2. Cognitive agent

    In order to increase the intelligent behaviors of agents, agents are equipped with cognitive capabilities using cognitive technologies. Concerning the swarm intelligence aspect, manufacturing systems are considered as a swarm that shows collective intelligence through interactions among the holons or agents. In order to implement this approach, agent technology can be used [LEI 02]; a more evolved approach, however, based on the BDI concept [MAS 06] – belief, desire and intention – was implemented in the PABADIS European project in 2004 to improve the autonomous characteristics of conventional agents.

    Here, the difficulty is to integrate cognition and smart behaviors in these cognitive agents to ensure the flexibility of the manufacturing system, adapting it to changes and unexpected disturbances [PAR 10]. This requires an agent to use its own knowledge and experience to make a decision that is suitable for the status of the resource, and then to face an unfamiliar status: here, self-learning and inheritance capabilities must be provided for the agent. This is why, in our model of BDI agents, a hybrid approach based on knowledge technologies (CBR) and ANNs was planned.
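
As a minimal sketch of the pheromone mechanism described in item 1, assuming a fixed set of hypothetical candidate routes and illustrative evaporation and deposit parameters, the following loop lets “ants” choose routes with probability proportional to the pheromone level, reinforces the chosen route in inverse proportion to its length and lets all trails evaporate; the shortest route ends up dominating.

```python
import random

def ant_colony_choice(route_lengths, ants=200, evaporation=0.1, seed=0):
    """Minimal ant-colony loop over a fixed set of candidate routes:
    each ant picks a route with probability proportional to its pheromone level,
    deposits pheromone inversely proportional to the route length, and all
    trails evaporate at a constant rate."""
    random.seed(seed)
    pheromone = [1.0] * len(route_lengths)
    for _ in range(ants):
        total = sum(pheromone)
        r, chosen, acc = random.random() * total, 0, 0.0
        for i, tau in enumerate(pheromone):
            acc += tau
            if r <= acc:
                chosen = i
                break
        # evaporation, then deposit on the chosen route
        pheromone = [(1.0 - evaporation) * tau for tau in pheromone]
        pheromone[chosen] += 1.0 / route_lengths[chosen]
    return pheromone

trails = ant_colony_choice([4.0, 7.0, 10.0])   # hypothetical nest-to-food route lengths
print("final pheromone levels (the shortest route should dominate):",
      [round(t, 2) for t in trails])
```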

COMMENT.–

In industry, and more specifically in industrial automation, many similar applications are implemented using a wide variety of software tool sets directed at a number of different operating systems with varying degrees of commercial success.

This unchecked OS variety, however, has significantly increased automation system entropy. Herein, we refer to a measure of the complexity of software interfaces in industrial systems, their performance cost and overall lifecycle economics.

Another aspect of sustainability pertains to software control and administration processes. Regarding reliability, and facing an increasing system entropy (in finance, environment, effluents, emissions control, etc.), systems must continue to perform their designed functions flawlessly, no matter how many people are assigned to such control or how many physical and logistic means are invested in it: the objective is to continue working correctly to ensure viable and secure solutions. This sometimes means working independently of consistent and economical performance, with no consideration of a sustainable competitive advantage for the organization. Indeed, each organization develops its own formulations of management dissipative structures containing some positive and negative entropy flows. By solving these formulations, or by comparison, a best-suited structure can be estimated in order to implement a more sustainable solution with better acceptance from the population [ZHE 10].

A question may arise concerning how entropy can accurately characterize algorithm performance in a DSS. According to our experience, it seems that entropy alone cannot characterize the performance of any (or the best) algorithm to be used. On the other hand, by comparing two proposed solutions through simulation, we can give a reliable statement. Some interesting research in different areas is being conducted to explore whether entropy gives good performance bounds for some online problems known in the literature. In addition, a method of determining linear combination weights based on entropy, using optimization theory and Jaynes’ maximum entropy principle, has been studied to deal with the problem of determining the weights in multiple attribute decision-making (a minimal sketch of such an entropy-based weighting is given below). These are improvements that we will not develop further in this chapter.
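
As referred to above, here is a minimal sketch of one common formulation of entropy-based weighting for multiple attribute decision-making, applied to a purely hypothetical decision matrix; the exact formulation studied in the cited research may of course differ.

```python
import math

def entropy_weights(matrix):
    """Entropy weight method for a decision matrix (rows = alternatives,
    columns = criteria, all values positive): criteria whose values vary more
    across alternatives (lower entropy) receive higher weights."""
    m, n = len(matrix), len(matrix[0])
    k = 1.0 / math.log(m)
    weights = []
    for j in range(n):
        col = [row[j] for row in matrix]
        s = sum(col)
        p = [v / s for v in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)   # column entropy in [0, 1]
        weights.append(1.0 - e)                                 # degree of divergence
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical alternatives scored on three criteria (cost, delay, reliability index).
decision_matrix = [
    [200.0, 5.0, 0.97],
    [180.0, 9.0, 0.95],
    [220.0, 4.0, 0.99],
]
print("entropy-based criterion weights:",
      [round(w, 3) for w in entropy_weights(decision_matrix)])
```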

11.10.4. What about cloud computing?

Cloud computing is a well-known concept widely used in today’s strategies. It consists of using a distributed information system spread among several servers and computers via a digital network, as though they were a single computer. It provides computation resources, software, data access and storage: these services do not require end-user knowledge of the physical location and configuration of the system that delivers them. Parallels to this concept can be drawn with the electric grid, wherein end-users consume power without needing to understand the component devices or infrastructure required to provide the service. This concept dates back to the 1960s, when John McCarthy stated: “computation may someday be organized as a public utility, like in the electric industry”. Figure 11.11 (from [WIK 15]) illustrates quite well the apparent structure of such a concept.

ch11-f011.jpg

Figure 11.11. Enterprise and cloud computing (source: [WIK 15])

The first characteristic of cloud computing is that the computing is “in the cloud”: as stated in this encyclopedia, the processing (and associated data) is not located in a specified, known or static place, but in a network of servers or in the cloud of a service provider such as Google; as in a virtual production system, the processing takes place in one or more servers that are not specifically known to the end-user.

Another characteristic, “cloud storage”, is that data are stored on multiple virtual servers, generally hosted by third parties, rather than on dedicated servers (as is the case with private cloud networks, for quality or security reasons); the task processing is done over the Internet via a WiFi or 3G connection. Some IS companies operate large data centers, and people who require their data to be hosted buy or lease storage capacity from them and use it for their storage needs. In a physical sense, however, the resource may span multiple servers.

As we can see, the cloud is a destructuration of the heterogeneous constituents included in many diverse applications (as shown in the “five boxes” icon of the J.B. Waldner graph in Figure I.4); then, through the cloud, we proceed to the restructuration of homogeneous contents to get wider, consistent, maintained and secured distributed clusters. Applications are also directly maintained and updated on the server where they are centralized.

The third characteristic of the cloud is that all the ingredients are there to create collective intelligence: applications are distributed in many servers and can be shared by several people working together; the database can be commonly shared; we can count on the emergence of synergies through clashes of different thoughts and sharing of reasoning results.

The only difference is that in the cloud, the collaborative technology is already implemented and ready for use either at individual or collective levels.

11.10.4.1. In what way are cloud computing and collective intelligence involved with entropy?

Entropy is associated with a pseudo randomness which is generated by an information system and made available through different applications.

In software, a source of entropy is often not as random as expected: it can originate in a weakness that was introduced into an application. Indeed, a single wrong line of code in the open source of a software package may cause a side effect. This was the kind of problem we encountered when developing complex operating systems for telecommunications systems, at IBM, a long time ago on our manufacturing sites.
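
To illustrate the remark that a source of entropy is not always as random as expected, here is a minimal sketch, assuming two illustrative byte streams, that compares the empirical Shannon entropy (in bits per byte) of a deliberately weak generator with that of the operating system’s random source.

```python
import math
import os
import random

def shannon_entropy_per_byte(data: bytes) -> float:
    """Empirical Shannon entropy of a byte stream, in bits per byte (maximum = 8)."""
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

# A deliberately weak source (only 16 distinct byte values) versus the OS source.
weak = bytes(random.Random(0).randrange(16) for _ in range(100_000))
strong = os.urandom(100_000)

print(f"weak source : {shannon_entropy_per_byte(weak):.2f} bits/byte")
print(f"os.urandom  : {shannon_entropy_per_byte(strong):.2f} bits/byte")
```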

The same happens with the analysis, interpretation, speculation and generation of rumors in the area of collective thinking: these operations may generate unexpected troubles, deviances and societal disturbances, etc.

When a single, unique information system or application is developed, owned and used interactively, in an arbitrary manner, by a wide population of humans, we may encounter huge entropy generation at the application and results level. However, concerning the entropy generation of the support system itself, the balance is different, since this structured organization is low in entropy generation.

More precisely, in a collaborative environment, such as collective thinking and cloud computing, we generally work from a virtual and secured hardware server and with simpler common applications shared by many users. Thus, the rate of entropy generation is lower. We also rely more heavily on unattended events like network and database dysfunctioning; potential problems may also arise from the underlying hardware and generate entropy.

Thus, it is again a dual aspect of sustainability: some parts of any complex system are entropy generators, while others are entropy reducers. As in NLDS with positive and negative feedback loops, we have to perform a global and systemic analysis of the entropy, considering positive and negative entropy generation mechanisms, that is to say, entropy and anti-entropy.

In terms of sustainability assessment: as soon as a set of virtual machine instances runs within a cloud-based virtualization service, they can potentially share the same source of entropy, issued from specific errors in the underlying information system. If we are able to predict the stream of entropy that might be utilized by an application on one of those instances, we can target the entropy generation related to a specific customer.

As a result, in terms of sustainability, the generation of entropy would be lower than in a dedicated private information system (without considering the security problems, which are supposed to be solved by encryption or other means, knowing that such means are never 100% reliable).

11.11. Conclusions

In this section, to avoid any misunderstanding, we are discussing two points.

11.11.1. Proposal for a new approach in information and business theory

In the previous chapters, the underlying mechanisms of physical, industrial and organizational events were studied to improve and enhance the design of some products and services.

As a reminder, it is suggested to work in a multidisciplinary and transdisciplinary way in order to integrate new capabilities and achievements coming from completely different fields.

Within this framework, bio-inspiration and networking sciences were often quoted.

11.11.1.1. First weakness

The first weakness of the design approaches and methodologies is that mimicry is applied in a partial and static way. Indeed:

  • – life is a marvel of chemical reactions, in relationship with the whole environment;
  • – life possesses some specific and unusual properties related to quantum physics.

Concerning the first point, it is important to mention that we cannot consider a system independently of its environment. The integration of new concepts requires “system analyses”. The second lesson learned is related to people’s behavior. Some reactions and statements such as “I don’t care, it’s not my problem”, issued by a new generation of managers, are completely irrelevant: we cannot work independently from others. A firm is a living system, and the notion of “general interest” is innate in our ways of thinking and acting. We are heading toward a single global entity: greed, individualism and selfishness are luxuries that we can no longer afford.

The next lesson is related to the fact that introducing fundamental physics into our bio-inspired processes is a necessity. Indeed, the reversibility of time, quantum effects, etc., are emerging in the design of living organisms, not only at the micronic or cosmic levels but also at the meso- and macro-levels of our enterprises. Nature has developed and exploited such properties because they are able to bring an advantage: even if the amplitude of some physical phenomena is small under normal conditions, they are there. We can mention:

  • – the ubiquity property of quantum physics, which allows an object to reach several physical states at once. This “superposition” of states allows a system to explore two simultaneous ways of solving a given problem, of following evolution paths or of searching for steady states;
  • – the tunneling effect, which allows us to optimize some properties in a system. In fact, every object has a dual nature: it is a particle associated with a wave. When an obstacle appears in the path of a particle or a solution, the alter ego wave can continue its evolution or transformation as if the obstacle did not exist. With this in mind, any exchange between two agents is, therefore, more effective; the actions of agents can be optimized, as happens with catalytic enzymes or in the brain with the newly discovered wireless communication pathways;
  • – the quantum entanglement principle, which can be applied to higher-level assembly. This property allows two quantum objects to behave in a more efficient and effective way (synchronized and consistent behaviors): when interacting together, their individual properties are, or become, complementary and inseparable, whatever their physical separation distance. This ensures more intimate structures and more stable solutions because we superimpose the intrinsic properties of the elementary parts of a system.

11.11.1.2. Second weakness

The second weakness is that universality principles always apply, whatever the field considered and whatever the physical size of the agents. Here, the main property we are interested in is that nature is dual: this is valid for any behavior or functionality to be integrated in the design of an assembly in order to achieve better sustainability. This is essential in any design for sustainability (DFS): when developing a new product or information system, the presence of antagonisms is a must; when considering the contribution of physics, each part involved in a system generates more or less entropy, and this has to be included in any global sustainability analysis.

The behaviors of each constituent are the results of interactions between agents, which are relevant to NLDS theory and network theory. Positive and negative feedback loops are at the heart of these complex behaviors. Most importantly, it is mandatory to get antagonistic effects at each level of each characteristic. It is a general fractal construct, based upon complementary and contradictory properties, which ensures a constant and sustainable evolution. Otherwise, deviances and irreversible divergences could be a danger for the equilibrium and evolution of a global system.

11.11.1.3. Third weakness

The third weakness is related to the fact that, behind innovative paradigms and sustainability, thermodynamic principles apply. Indeed, life is the characteristic of autonomous agents that are energy-consuming and dissipative, able to reproduce and adapt by themselves. So, in business intelligence, or even in information systems, it is clear that bio-inspired features can bring some enhancements: the discovery of DNA, the interacting roles of proteins and enzymes, their underlying mechanisms, etc., provide obvious advantages for our decision and organizational systems; this is why decision-makers and scientists try to include their properties in our snippets of solutions, to develop more sustainable solutions.

But we cannot ignore the various contributions of physics: as soon as we introduce changes of configurations, assemblies of living agents, emergence of new orders, converging attractors, transformational processes, dissipative and chaotic behaviors, etc., thermodynamics and its associated entropy are there. The problem is to introduce the notion of entropy into our processes and to use it as a main factor able to measure sustainability.

11.11.1.4. Fourth weakness

The fourth weakness is related to the lack of consideration of DFS. In the area of sustainability, thermodynamics is able to account for a number of phenomena related to self-organization and to the transfer and processing of information, but limits are quickly reached when we look at the concept of information taken in its integrity. In fact, a first restriction was put forward by Shannon himself about the content and meaning of a message: these would be of no interest to his theory.

When several pieces of information can react together, new concepts can emerge through a process of integration, assimilation and propagation in a cognitive corpus, as we have in cellular automata or informational thermodynamics.

11.11.2. Conclusion

As a conclusion, we can state that:

  1. The assimilation of information is an irreversible process: at the beginning, we cannot understand and use insignificant information, but once information is included in our brain and integrated into a cognitive domain, we cannot “forget a previous learning”, except in cases of mental illness or brain asset regeneration over time (as in the learning of best-fitted search parameters on the Internet). Thus, for common knowledge, we consider that there is no reversibility as required by chemical thermodynamics.
  2. Concerning the application of a thermodynamics-type model related to irreversible processes, we can raise an objection: in life sciences, an emerging stable model can be called “maximum entropy”. Conversely, in the economy or in industry, in complex systems, there are several information levels; here, stability, represented by attractors, is referred to as a meta-stable and steady state. Each stable level is the logical result of a corpus of knowledge, with a given ranking according to the types of interactions and interdependencies. Consequently, we will get different levels of entropy generation.
  3. Finally, the best vision of information thermodynamics concerns the communication and propagation of information between different agents in a homogeneous and consistent medium, in a specific field of application. Indeed, global knowledge and the emergence of innovative cognition grow simply by exchanging information (oral or written communications, publications). Added value comes from processing and enrichment by other people in society or in different communities.

As in physical phenomena or physical systems, there may be a “wear” phenomenon because, just as a molecule only acts in a limited number of reactions, information is transitory. We can repeat a message once, 10 or 100 times more than other people, but the strength and efficiency of such a measure progressively decrease, as in a dissipative process tending toward a uniform “temperature”. Moreover, the difficulty in DFS is to integrate the dynamic properties of objects and agents and to show the absurdity or irrelevance of an object, a piece of information or a decision. This shows, in terms of sustainability, that information system entropy is a basic indicator which always evolves over time.

11.11.3. Concluding remarks

In information systems and business intelligence, it is necessary to explore alternative approaches, even if they can be iconoclastic. Within this framework, we can quote Jean-Pierre Bernat, in reference to an excerpt from an article [BER 99] published in the 1990s in the newspaper Le Monde: we have to think and act in quanta, keeping in mind that equilibria and sustainability always depend on thermodynamic considerations.

There is no difference concerning bio-inspired systems and DSS, but again, what we have to keep in mind is that human beings will never be able to mimic and emulate the brain:

  • – as stated by Sir John Eccles (Nobel Prize for Medicine in 1964), a number of questions on the functioning of brain and, more specifically, the emergence of thought are still pending;
  • – as we continue to discover new underlying principles, the brain will continue to evolve, and new mechanisms and capabilities will emerge. Thus, it will be an endless quest for mimicry;
  • – self-organization and self-awareness always lead to new paradigms and unexpected solutions, at a minimum cost. It creates a lot of innovative significances and interpretations.

Since we have a brain, the problem to be solved is how to make the action of an intangible event (thinking, conscious or unconscious) on material, organs and final goods (for instance, real or artificial neural networks) compatible with the laws of energy conservation imposed by classical mechanics. In this area, uses of theoretical physics and comparisons of this “field of consciousness” with fields of probability or plausible situations are described in quantum mechanics.
