2
Agent-based Modeling of Human Organizations

2.1. Introduction

This section has a narrow spine but a wide embrace. In addressing the relationship between agents and organizations, it takes in an extensive but highly fragmented set of ideas and studies embedded in organization theories. Its purpose is to try to devise a common ground model from which organization theories of various sorts can be logically derived.

Many theories have been developed to explain how organizations are structured and conducted and how the stakeholders involved behave. Each one takes a definite point of view without offering the opportunity to understand how the theories may be correlated with each other or whether a mapping of some sort between them can be worked out. What is more, founders of new approaches seem in most cases to ignore previous works. Agent ontology can function as a background model, allowing the main organizational theories to be derived from this base.

2.2. Concept of agenthood in the technical world

2.2.1. Some words about agents explained

In the technical field, the concept of agenthood is widely used. A general definition of what an agent is was produced by J. Ferber (1999, p. 9). An adaptation of it is as follows:
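
An agent is a physical or virtual entity which is capable of acting in an environment; which can communicate directly with other agents; which is driven by a set of tendencies, in the form of individual objectives to satisfy; which possesses resources of its own; which is capable of perceiving its environment, but to a limited extent; which possesses skills and can offer services; and whose autonomous behavior tends towards satisfying its objectives, taking into account the resources and skills available to it and depending on its perception, its representations and the communications it receives.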

The key terms in this definition describe the key features of an agent: action, communication, objectives, autonomy and availability of resources. Virtual entities are software components and computing modules. They are not accessible to human senses as such but, as their number grows exponentially in our living ecosystem, they are destined to become more and more the faceless partners of human beings.

According to their main missions, agents have been given specific names, namely communicating agents (operating in a computing environment without perception of other agents), situated agents (perceiving their environment, deliberating on what should be done and acting in this environment), reactive agents (drive-based reflexes) and cognitive agents (capable of anticipating events and preparing for them).

It is important to contrast object, actor and agent. In the field of computer science, objects and actors are conceived as structured entities bound to execute computing mechanisms. An object is characterized by three features:

  • – the class/instance relationship, the class being a structural and behavioral meta-model and the instance a concrete model of the context attributes under consideration;
  • – inheritance, enabling one class to be derived from another and to benefit from the latter in terms of attributes and procedures;
  • – message discrimination, triggering polymorphic procedures (methods in data-processing vernacular) as a function of incoming message contents.

The delineation between objects and communicating agents is not always straightforward. This is the fate of all classifications. If a communicating agent can be considered as an upgraded sort of object, conversely an object can be viewed as a degenerate communicating agent whose language of expression is limited to the keywords corresponding to its methods.

An agent has services (skills) and objectives embedded in its structure, whereas an object has encapsulated methods (procedures) triggered by incoming messages.

Actors in computer science perform data processing in parallel, communicate by buffered asynchronous messaging and generally ask the recipient of a message to forward the processed output to another actor.
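
To make the contrast concrete, here is a minimal Python sketch; all class and attribute names are illustrative assumptions, not taken from the literature cited. The object exposes encapsulated methods triggered by message discrimination, the agent carries an objective of its own, and the actor processes buffered asynchronous messages in parallel and forwards its output to a successor.

```python
import queue
import threading

# Object: encapsulated methods (procedures) triggered by incoming messages.
class Counter:
    def __init__(self):           # instance of a class (class/instance relationship)
        self.value = 0
    def receive(self, message):
        getattr(self, message)()  # message discrimination triggers a method
    def increment(self):
        self.value += 1

# Agent: services and an objective embedded in its structure; acts autonomously.
class GoalAgent:
    def __init__(self, objective):
        self.objective = objective        # an objective, not just procedures
        self.beliefs = {"count": 0}
    def step(self):                       # autonomous action toward the objective
        if self.beliefs["count"] < self.objective:
            self.beliefs["count"] += 1

# Actor: parallel processing, buffered asynchronous messaging,
# forwarding the processed output to another actor.
class Actor(threading.Thread):
    def __init__(self, successor=None):
        super().__init__(daemon=True)
        self.mailbox = queue.Queue()      # buffered asynchronous messages
        self.successor = successor
    def run(self):
        while True:
            msg = self.mailbox.get()
            result = msg * 2              # some processing
            if self.successor is not None:
                self.successor.mailbox.put(result)
```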

Another concept associated with agents is the multiagent system (MAS) (Ferber 1999). It has become a paradigm for addressing complex problems. There is no unified, generally accepted definition, only communities of practice. The approach followed by the system designer, namely functional design or object design, is chosen on the basis of the answers to the two following questions:

  • – What should be considered as an agent to address the issues raised by the problem to tackle?

A system is analyzed with a functional approach when centered on the functions the system has to fulfil or with an object approach when centered on the individual or the product to deliver.

  • – How are the tasks to perform allocated to each agent in the whole system from a methodological point of view?

There is no miracle recipe to achieve a good design. In addition, it is possible to analyze the same system from different angles and to deliver different designs. The approach for coming to terms with a problem is influenced by the historical development of the field involved. Many people are inclined to think of MAS as a natural extension of design by objects.

It is worth noting that the MAS paradigm has spread into many technical areas where centralized control used to be common practice. For many a reason, especially the computing power of on-board systems and the reliability of available telecommunication services, coordination between distributed systems now takes place directly between the very units of the system without any central controlling device. This situation already prevails in railway networks.

2.2.2. Some implementations of the agenthood paradigm

The concept of agenthood has been applied in various technical fields from the 1990s onwards. Two examples will be described here, namely telecommunication networks and manufacturing scheduling.

2.2.2.1. Telecommunications networks

The world of telecommunication networks is extensively modeled on the basis of this concept. A telecommunication network is a mesh of nodes fulfilling a variety of tasks. Each node is an agent. It can be defined as a computational entity:

  • – acting on behalf of other entities in an autonomous fashion (proxy agent);
  • – performing its actions with some level of proactivity and/or reactiveness;
  • – exhibiting some level of the key attributes of learning, cooperation and mobility.

Several agent technologies are operated mainly in the telecommunications realm. They fall into two main categories, i.e. distributed agent technology and mobile agent technology.

Distributed agent technology refers to a multi-agent system described as a network of actants with the following advantages:

  • – solving problems that may be too large for a centralized agent;
  • – providing enhanced speed and reliability;
  • – tolerating uncertain data and knowledge.

These agents display the following salient features:

  • – communicating between themselves;
  • – coordinating their activities;
  • – negotiating their conflicts.

“Actants” are non-human entities such as configurations of equipment, mediators and software programs and are distinguished from actors that are human beings. But actors and “actants” are entangled in ways that provoke complexity dynamics in many circumstances.

Mobile agent technology functions by encapsulating the interaction capabilities of agents into their descriptive attributes. A mobile agent is a software entity existing in a distributed software environment. The primary task of this environment is to provide the means which allow mobile agents to execute. A mobile agent is a program that chooses to migrate from machine to machine in a heterogeneous network.

The description of a mobile agent must contain all of the following models:

An agent model (autonomy, learning, cooperation).

A life-cycle model: this model defines the dynamics of operations in terms of different execution states and events, triggering the movement from one state to another (start state, running state and death state).

A computational model: this model, being closely related to the life-cycle model, describes how the execution of specified instructions occurs when the agent is in a running state (computational capabilities). Implementers of an agent gain access to other models of this agent through the computational model, the structure of which affects all other models.

A security model: mobile agent security can be split into two broad areas, i.e. protection of hosts from malicious agents and protection of agents from hosts (leakage, tampering, resource stealing and vandalism).

A communication model: communication is used when accessing services outside of the mobile agent during cooperation and coordination. A protocol is an implementation of a communication model.

A navigation model: this model concerns itself with all aspects of agent mobility from the discovery and resolution of destination hosts to the manner in which a mobile agent is transported (transportation schemes).
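
As an illustration of the life-cycle model described above, the following minimal sketch implements the start, running and death states as a small state machine; the event names are assumptions chosen for the example, not part of any standardized mobile agent specification.

```python
class MobileAgentLifeCycle:
    # Events triggering the movement from one execution state to another.
    TRANSITIONS = {
        ("start", "launch"): "running",
        ("running", "terminate"): "death",
    }

    def __init__(self):
        self.state = "start"

    def on_event(self, event):
        next_state = self.TRANSITIONS.get((self.state, event))
        if next_state is None:
            raise ValueError(f"event '{event}' not allowed in state '{self.state}'")
        self.state = next_state
        return self.state

agent = MobileAgentLifeCycle()
agent.on_event("launch")     # start -> running
agent.on_event("terminate")  # running -> death
```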

2.2.2.2. Manufacturing scheduling

Scheduling shop floor activities is a key issue in the manufacturing industry with respect to making the best economical use of manufacturing equipment and bringing costs under control, as well as delivering committed customer orders at due dates.

Consider the product structure from the manufacturing point of view as portrayed in Figure 2.1.


Figure 2.1. Product structures from a manufacturing point of view

Pi are parts machined in dedicated shops and Ai are assembled products. In terms of scheduling, the relevant combined attributes of the products and the equipment units involved in machining and assembling, whatever their layout (job shop, batch or continuous line production), are the lead times, so that a Gantt chart can be derived by backward scheduling from the due date at which the “root” product A4, or a batch thereof, has to be delivered to its client. The situation is pictured in Figure 2.2.

Within the framework of decentralized decision-making, intended to make collaborators more motivated to solve the problems they have to deal with (a practice called “job enrichment”), the most pragmatic and efficient solution appears to be the choice of three agents: J1 in charge of the delivery of A2, J2 in charge of the delivery of A3 and J3 in charge of the delivery of A4 to the final client. This third agent is in some way the front office of all the hidden upstream activities and is responsible for fulfilling the commitments made to clients.


Figure 2.2. Gantt chart for scheduling machining and assembly activities for delivering products at a due date

The three agents Ji have to collaborate to establish a schedule over a time horizon commensurate with the lead times. When manufacturing problems of any sort arise that can impact the fulfilment of the schedule, the agents Ji have to collaborate to devise a coherent, sensible solution (outsourcing, hiring extra workforce, etc.), often without letting top management know the details of the problems but only the adequate courses of action taken.
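
The backward scheduling logic behind Figure 2.2 can be sketched as follows; the product structure and lead times below are hypothetical figures chosen for illustration, not those of the figure.

```python
# Backward scheduling: compute the latest start date of each activity
# from the due date of the root product and the lead times (in days).
lead_time = {"A4": 2, "A3": 3, "A2": 4, "P1": 5, "P2": 3, "P3": 4}   # hypothetical
feeds = {"P1": "A2", "P2": "A2", "P3": "A3", "A2": "A4", "A3": "A4"}  # child -> parent

def backward_schedule(due_date, root="A4"):
    """Return {product: (start, end)}: a component must be ready when its parent starts."""
    schedule = {}
    def place(item, end):
        start = end - lead_time[item]
        schedule[item] = (start, end)
        for child, parent in feeds.items():
            if parent == item:
                place(child, start)
    place(root, due_date)
    return schedule

print(backward_schedule(due_date=30))  # day numbers relative to an arbitrary origin
```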

2.3. Concept of agenthood in the social world

2.3.1. Cursory perspective of agenthood in the social world

When considering how the concept of agenthood, if it exists, is used in the social world, we come to the concept of agency in law. It aims at defining the relationship existing when one person or party (the principal) engages another (the agent) to act for him/her, i.e. to do his/her work, to sell his/her goods, to manage his/her business on his/her behalf.

Early precedents for agency can be traced back to Roman law when slaves (though not true agents) were considered to be extensions of their masters and could make commitments and agreements on their behalf. In formal terms, a mandate is given to a proxy.

The concept of agenthood appeared in the field of economics in the past century. In 1937, R. H. Coase (1937) published a seminal article in which he developed a new approach to the theory of the firm. Later on, his line of thought was expounded by economists such as W. Baumol, R. Marris and O.E. Williamson. R. H. Coase emphasized the importance of the relations within the firm.

The theory of the firm covers many aspects of what a firm is, how it operates and how it is governed. A section of the theory of the firm is called agency theory. It investigates the relationship between a principal and its agents within an economic context. This distinction results from the separation between business ownership (principal) and operations management (agents). One of the core issues is to understand how a balanced structure between the principal’s desires and its agents’ commitments can be achieved and how a balanced contract between both parties can be drawn up. The challenges at stake when decision-making takes place are asymmetric information, risk aversion, ex ante adverse selection and ex post moral hazard.

The concept of “social network” emerged in the 1930s in the Anglophone world for analyzing relationships in industrial organizations and local communities. The English anthropologist John Barnes of the Manchester School introduced the term “network” explicitly when studying a parish community on a Norwegian island (Barnes 1954). This approach was later theorized by Harrison White, who developed “Social Network Analysis” as a method of structural analysis of social relationships.

Social network theory strives to provide an explanation of an issue raised since the time of Plato by what is called social philosophy, namely the issue of social order: how to make intelligible the reasons why autonomous individuals cooperate to create enduring, functioning societies. In the 19th Century, A. Comte hoped to found a new field of “social physics”, with individuals substituted for atoms. The French sociologist E. Durkheim (1951) argued that human societies are composed of interacting individuals and as such are akin to biological systems. Within this cast of thought, social order is not ascribed to the intentions of individuals but to the structure of the social context in which they are embedded.

In the 1940s and 1950s, matrix algebra and graph theory were used to formalize fundamental socio-psychological concepts, such as groups and social circles, in network terms, making it possible to identify emergent groups in network data (Luce 1949). During that period, network analysis was also used by sociologists to analyze the changing social fabric of cities in relation to the extension of urbanization.

In the 1960s, anthropologists carried out analyses viewing social structures as networks of roles instead of individuals (Brown 1952). In the 1990s, network analysis radiated into a great number of fields, including physics and biology. It also made its way into management consulting (Cross 2004), where it is often applied to exploiting the knowledge and skills distributed across an organization’s members.

A book by S. Wasserman and K. Faust (1994) presents a comprehensive discussion of social network methodology. The quantitative features of this methodology rely on the theory of graphs and the properties of matrices. A graph can be either directed or not. A directed graph is an ordered pair G(V, A) where V is a set whose elements are called nodes, points or vertices and A is a set of ordered pairs of nodes called directed edges or arcs (with tails and heads). V can represent objects or subjects and A linkages between the elements of V. A special case of directed graph is the rooted directed graph, in which one node has been distinguished as the root. When a graph is not directed, its edges are undirected. All the properties of graphs can be represented in the matrix formalism.
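
As a minimal illustration of this matrix formalism, the following sketch builds the adjacency matrix of a small directed graph G(V, A); the nodes and edges are arbitrary examples.

```python
V = ["a", "b", "c"]                       # nodes
A = [("a", "b"), ("b", "c"), ("a", "c")]  # ordered pairs: directed edges (tail, head)

# Build the adjacency matrix M, with M[i][j] = 1 if there is an edge i -> j.
index = {v: i for i, v in enumerate(V)}
M = [[0] * len(V) for _ in V]
for tail, head in A:
    M[index[tail]][index[head]] = 1

print(M)  # [[0, 1, 1], [0, 0, 1], [0, 0, 0]]
```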

Social network agents: within this framework, an agent is no longer an individual but a collection of individuals associated by the linkages between them. The linkages can be deterministic or stochastic. These features imply two consequences. The first one is well acknowledged: the behavior of a social network agent is differentiated from individual behaviors (the whole is not the sum of its parts). The second is that when some linkages between individuals are altered, the behavior of the composite agent is changed. Networks are categorized by the number of modes they involve (generally one or two) and by how the connection variables are measured.

One-mode networks involve measurements of variables on just a single set of actors. The variety of actors covers people, subgroups, organizations, communities and nation states. Their relations extend over a wide spectrum of characteristics:

  • – individual evaluation (friendship, liking, respect, etc.);
  • – financial transactions and exchange of material resources;
  • – transfer of immaterial resources;
  • – kinship (marriage, descent).

Two-mode networks refer to measurements of variables on two sets, either two sets of actors or a set of actors and a set of events or activities. In the case of two sets of actors, the profiles of actors are similar to those found in one-mode networks. As for relations, some can connect actors inside each set, but at least one relation of some sort must be defined between the two sets of actors.

Connection networks are two-mode networks that combine a set of actors and a set of events or activities which the actors in the first set attend or belong to. The requirement is that each actor must be connected to one or more events or activities. These characteristics of connection networks offer wide possibilities and flexibility for representing the structures and operational courses of action of organizations or communities.

Connection networks include three types of built-in linkages: first, they show how the actors and the events or activities are directly related to each other; second, the events or activities create indirect relations between actors; and third, the actors create relations between the events or activities.

Let us take a simple example to clarify the ideas. Consider a set of children (Allison, Cindy, Dave, Doug, Ross and Sarah) and a set of events (birthday party 1, birthday party 2 and birthday party 3). The attendance of the children at the parties can be represented by a matrix whose rows are children and whose columns are parties, as shown in Figure 2.3.


Figure 2.3. Connection network matrix for the example of six children and three birthday parties

aij = 1 if actor i is affiliated with event j, otherwise aij = 0

A connection network can also be formalized by a bipartite graph. A bipartite graph is a graph in which the nodes can be split into two subsets such that all edges are between pairs of nodes belonging to different subsets. Figure 2.4 translates the matrix of Figure 2.3 into a bipartite graph. Bipartite graphs can be generalized to n-partite graphs that visualize long-range correlations between organizations’ stakeholders. Graphs are a very flexible means of visualizing real-world situations.
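
The three types of built-in linkages can be computed directly from the connection (affiliation) matrix: the matrix itself gives the actor-event relations, its product with its transpose gives the actor-actor relations, and the transposed product gives the event-event relations. The sketch below illustrates this for the six children and three parties; since Figure 2.3 is not reproduced here, the attendance pattern is an assumption.

```python
children = ["Allison", "Cindy", "Dave", "Doug", "Ross", "Sarah"]
parties = ["party 1", "party 2", "party 3"]
# a[i][j] = 1 if child i attended party j (hypothetical attendance pattern)
a = [
    [1, 0, 1],
    [1, 1, 0],
    [0, 1, 1],
    [1, 0, 0],
    [0, 1, 1],
    [1, 1, 0],
]

# Children-by-children matrix (a times its transpose):
# number of parties each pair of children attended together.
together = [[sum(a[i][k] * a[j][k] for k in range(len(parties)))
             for j in range(len(children))] for i in range(len(children))]

# Parties-by-parties matrix (transpose of a times a):
# number of children each pair of parties has in common.
shared = [[sum(a[i][k] * a[i][l] for i in range(len(children)))
           for l in range(len(parties))] for k in range(len(parties))]
```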


Figure 2.4. Bipartite graph of the connection network matrix for the example of six children and three birthday parties (Figure 2.3)

2.3.2. Organization as a collection of agents

Defining what an organization is or is not often resorts to metaphors. Let us review these metaphors:

  • – an organization is a machine made of interacting parts engineered to transform inputs into outputs called deliverables (products/services);
  • – an organization is an organism achieving a goal and experiencing a life cycle (birth, growth, adaptation to environmental conditions, death) and fulfilling organic functions;
  • – an organization is a network representing a social structure directed towards some goal and created by communication between groups and/or individuals. The social structure mirrors how driving powers are distributed, influences exerted and finally decisions made to attain the set purpose.

All these instruments of organization representation are an objective symptom of how this concept has many facets and is approached through seemingly partial models. In fact, the question raised is: does an organization consist of relations of ideas or matters of fact? The first two metaphors can be better understood as non-contingent a priori knowledge and the last one as contingent a posteriori knowledge. In other words, the issue is the contingent or non-contingent identity of the construct called organization.

E. Morin’s cast of thought (Morin 1977; Morin 1991) leans toward the contingent identity of the organization construct. We take his view to describe an organization: it is a mesh of relations between agents, human as well as virtual, which produces a cluster or system of actors sharing objectives and endowed with attributes and procedures for deploying courses of action, not apprehended at the level of single agents.

An organization is viewed as a society of agents interacting to achieve a purpose that is beyond their individual capabilities. The significant advantages of this vision are due to the potential abilities of agents which draw on:

  • – communication among themselves;
  • – coordination of their targeted activities;
  • – negotiation once they find themselves in conflict and mobility by transferring their processing capabilities to other agents;
  • – knowledge capitalization by learning;
  • – reaction to stimuli and some degree of autonomy by being proactive.

This form of system model allows more flexibility in describing the behavior of organizations. In other words, adaptive behavior can easily be made explicit. By adaptive, it is meant that systems are able to modify their behavior in response to internal or external signals. Proactivity and autonomy are two essential properties that manifest themselves in a number of different ways. For instance, some agents perform an information-filtering role; some of them filter in an autonomous way, only presenting the target agent with information the filter considers to be of interest to it. Similarly, this same type of agent can also be proactive, in that it actively searches for information that it judges would be of interest to its users.

An organization is characterized on one hand by its architecture in terms of formalized interplay between its agents (centralized or decentralized) and on the other hand by its functional capabilities, the roles of its actors and the relations between the two sets of their describable attributes (functional capabilities and actors’ roles).

2.4. BDI agents as models of organization agents

2.4.1. Description of BDI agents

Organization agents are social agents acting in a specific context. Interaction between social agents (short for social network agents) is central to understanding how organizations are structured and operated. A multiagent system in the social world must focus on how the interactions between agents are made effective, efficient and conducive to reaching set objectives. In the late 1980s and the 1990s, a great deal of attention was devoted to the study of agents capable of rational behavior. Rationality has been investigated in many a field. Economists have developed a strong interest in this concept and have built it into a normative theory of choice based on the maximization of what is called a utility function, utility designating here something useful to customers. Hereafter, rationality is understood as the complete exploitation of information, sound reasoning and common-sense logic. A particular type of rational agent, the Belief-Desire-Intention (BDI) agent, was worked out by Rao and Georgeff (1995), and its implementation has been studied from both an ideal theoretical perspective and a more practical one.

BDI agents are cooperative agents characterized by having “mentalistic” features and, as such, they may incorporate many attitudes of human behavior. Their representative architecture is illustrated in Figure 2.5. It contains four key entities, namely beliefs, desires, intentions and plans, and an engine, the “interpreter”, securing smooth coherence between the functional capabilities and roles fulfilled by the four key entities. In our opinion, these key structures are well suited to model the way social agents behave in a business environment and can be used as a modeling concept for organizations.


Figure 2.5. BDI agent architecture

Beliefs correspond to data-laden signals the agent receives from its environment. These signals are deciphered and may deliver incomplete or incorrect information, either because of the deliberate intention of the signal sources or because of a lack of competencies at the reception side. Desires refer to the tasks allocated to the agent’s mission. All agents in an organization have an assignment that transforms their missions into clearly defined goals. Intentions represent the desires the agent has committed itself to achieve, chosen from among a set of possible desires, even when all these possible desires are compatible. The agent will typically keep striving to fulfill its commitment until it realizes that its intention is achieved or is no longer achievable. Plans are sets of courses of action to be implemented for achieving the agent’s intentions. They may be qualified as procedural knowledge, as they are often reified by lists of instructions to follow.

The interpreter’s assignment is to detect updated beliefs captured from the surrounding world, to assess possible desires on the basis of these newly elaborated beliefs and to select, from the set of current desires, those that are to act as intentions. Finally, the interpreter chooses to deploy a plan in agreement with the agent’s committed intentions. Consistency has to be maintained between beliefs, desires, intentions and plans. Some degree of intelligence and competence is required to fulfil this functional capability and should be considered embedded in the interpreter.
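
A minimal sketch of this interpreter cycle may help fix the ideas; the data structures below are deliberately simplistic assumptions and are not part of Rao and Georgeff's formalism.

```python
class BDIAgent:
    def __init__(self, desires, plans):
        self.beliefs = {}         # data-laden signals deciphered from the environment
        self.desires = desires    # {name: predicate over beliefs}: mission tasks
        self.plans = plans        # {name: course of action}: procedural knowledge
        self.intentions = []      # desires the agent has committed itself to

    def interpreter_cycle(self, percepts):
        self.beliefs.update(percepts)                 # 1. detect updated beliefs
        options = [name for name, relevant in self.desires.items()
                   if relevant(self.beliefs)]         # 2. assess possible desires
        # 3. keep intentions still judged relevant under the new beliefs,
        #    adopt newly relevant desires (loosely, a single-minded strategy)
        self.intentions = ([i for i in self.intentions if i in options]
                           + [o for o in options if o not in self.intentions])
        for intention in self.intentions:             # 4. deploy a coherent plan
            self.plans[intention](self.beliefs)

# Illustrative use: an agent whose desire is to restore a delayed delivery.
agent = BDIAgent(
    desires={"expedite": lambda b: b.get("delivery_delayed", False)},
    plans={"expedite": lambda b: print("outsource or hire extra workforce")},
)
agent.interpreter_cycle({"delivery_delayed": True})
```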

The embedded intelligence and competence capabilities of the interpreter can mainly be expressed in terms of relationships between intentions and commitments. A commitment has two parts, i.e. the commitment condition and the termination condition under which an active commitment is terminated (Rao and Georgeff 1995). Different types of commitments can be defined. A blindly committed agent denies any changes to its beliefs and desires which could conflict with its ongoing commitments. A single-minded agent accepts changes in beliefs and drops its commitments accordingly. An open-minded agent allows changes in both its beliefs and desires, forcing its ongoing commitments to be dropped. The commitment strategy has to be tailored, commensurate with the role(s) given to the agent according to the application context.

Let us elaborate on this mechanism by taking a general example of how the roles of the four entities (Beliefs, Desires, Intentions and Plans) articulate in the whole structure and how they are conducted by the interpreter.

An input message from another agent is captured by the data structure Beliefs and analyzed by the Interpreter. It deals with a change in the scheduled deliveries from this agent. This change is liable to impact the agent’s plan of activities and, as a consequence, its own deliveries of products or services to its clients. Several options are open to analysis under the cover of desires and intentions. All actors in any context are keen on respecting their commitments (intentions), but a new choice has to be made when it appears that all desires cannot be met in terms of resources available from suppliers. A priority list of desires has to be re-established on the basis of strategic and/or tactical arguments (loyalty, criticality, etc. of clients) and converted into intentions to let the interpreter devise a new plan. The technique of the “rolling schedule” in the manufacturing industry resorts to this practice.

Explaining the structural components of BDI agents in the next sections will show that BDI agents comply with the characteristics of an agent in the technical world that were given by J. Ferber.

2.4.2. Comments on the structural components of BDI agents

2.4.2.1. Definition of belief

The verb “believe” is defined in the Oxford dictionary. We are aware that the word “belief” has been interpreted in different ways in the realm of philosophy and that its translation into other Indo-European languages (German, French, Italian, Spanish, among others) appears difficult (Cassin 2004). It is outside the scope of this book to discuss this issue. We will interpret it within the framework of what is called “the philosophy of mind” in the English language context. According to Hume (1978, I, sec. 7), a matter of fact is “easily cleared and ascertained” and is closely correlated with reality: “if this be absurd in fact and reality, it must be absurd in idea”. These matters of fact are objects of belief: “it is certain that we must have an idea of every matter of fact which we believe… When we are convinced of any matter of fact, we do nothing but conceive it” (Hume 1978, I, III, sec. 8). In his book “Enquiries Concerning Human Understanding and Concerning the Principles of Morals”, Hume (1975) confirms that matters of fact and relations of ideas should be clearly distinguished: all the objects of human reason or enquiry may naturally be divided into two kinds, namely, relations of ideas and matters of fact.

Some people distinguish dispositional beliefs and occurring beliefs to try to mirror the storage structures of our memory organization. A dispositional belief is supposed to be held in the mind but not currently considered. An occurring belief is a belief being currently considered by the mind.

2.4.2.2. Attitudes and beliefs

An attitude is a state of mind disposing one to behave in a positive or negative way towards an object or a subject. The information-integration tenet is one of the most credible models of the nature of attitudes and attitude change, as stated by Anderson (1971), Fishbein (1975) and Wyer (1974). According to this approach, all pieces of information have the potential to affect one’s attitude. Two parameters have to be considered to understand the degree of influence information has on attitudes, i.e. the how and the how much parameters. The how parameter is intended to evaluate the extent to which a piece of information received supports one’s beliefs. The how much parameter tries to measure the weight assigned to different pieces of information in impacting one’s attitude through a change in one’s beliefs.

Attitudes are dependent on a complex factor involving beliefs and evaluation. It is important to distinguish between two types of belief, i.e. belief in an object and belief about an object. When one believes in an object, one predicts a high probability that the object’s attributes exist. Belief about an object is the predicted probability that particular relationships exist between that object and others. Beliefs are embodied by the hundreds of thousands of statements we make about ourselves and the world.

Attitudes change when beliefs are altered through the acquisition of new knowledge. The quantitative assessment of an attitude towards an object or a subject is measured as the weighted sum of each belief about that object or subject times its circumstanced valuation. M. Rokeach has developed an extensive explanation of human behavior based on beliefs, attitudes and values (Rokeach 1969, 1973). According to him, each person has a highly organized system of beliefs, attitudes and values, which guides behavior. From M. Rokeach’s point of view, values are specific types of beliefs that act as life guidance. He concludes that people are guided by a need for consistency between their beliefs, attitudes and values. When a piece of information brings about changes in attitude towards an object or a situation, inconsistency develops and creates mistrust.
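
The weighted sum mentioned above can be written compactly. In a standard expectancy-value formulation consistent with Fishbein's model, the attitude $A_o$ towards an object $o$ is

$$A_o = \sum_{i=1}^{n} b_i \, e_i$$

where $b_i$ is the strength of belief $i$ about the object and $e_i$ the circumstanced valuation attached to that belief.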

Another facet of belief and trust is linked to certainty and probability. Probability is commonly contrasted with certainty. Some of our beliefs are entertained with certainty, while there are others of which we are not sure. Furthermore, our beliefs are time-dependent along with our acquaintanceships.

2.4.2.3. Beliefs and biases

Biases are nonconscious drivers, cognitive quirks that influence how people perceive the world. They appear to be universal in most of humanity, perhaps hardwired in the brain as part of our genetic heritage. They exert their influence outside conscious awareness. We do not take action without our biases kicking in. They can be helpful by enabling people to make quick, efficient judgments and decisions with minimal cognitive effort. But they can also blind a person to new information or inhibit someone from considering valuable data when taking an important decision.

Biases often refer to beliefs that appear as the grounds on which decisions and courses of action are taken. Below is a list of biases commonly found in social life:

  • – in-group bias: perceiving people who are similar to you more positively (ethnicity, religion, etc.);
  • – out-group bias: perceiving people who are different from you more negatively;
  • – belief bias: deciding whether an argument is strong or weak on the basis of whether you agree with its implications;
  • – confirmation bias: seeking and finding evidence that confirms your beliefs and ignoring evidence that does not;
  • – availability bias: making a decision based on the data that comes to mind more quickly rather than on more objective evidence;
  • – anchoring bias: relying heavily on the first perception or piece of information offered (the anchor) when considering a decision;
  • – halo effect: letting someone’s positive qualities in a specific area influence the perception an individual or a group of individuals has of that person in other areas;
  • – base rate fallacy: when judging how probable an event is, ignoring the base rate (overall rate of occurrence);
  • – planning fallacy: underestimating how long a task will take to complete, how much it will cost and the risks incurred, while overestimating the benefits;
  • – representativeness bias: believing that something that is more representative of a category is necessarily more prevalent;
  • – hot hand fallacy: believing that someone who was successful in the past has a greater chance of achieving further success.

2.4.2.4. Degrees of belief

Belief, probability and uncertainty

An important facet of belief is linked to trust, truth and certainty. Uncertainty is commonly treated with probability methods. Some of our beliefs are entertained with certainty, while there are others of which we are not sure. John Maynard Keynes (1921) draws a distinction between uncertainty and risk. Risk is uncertainty structured by objective probabilities, where objective means based on empirical experience gained from past records or from purposely designed experimental tests.

The concept of probability is related to ideas originally centered on the notion of credibility or reasonable belief falling short of certainty. Two distinct uses of this concept are made, i.e. modeling of physical or social processes and drawing inference from, or making decisions on the basis of, inconclusive data that characterizes uncertainty.

When modeling physical or social processes, the purpose is to predict the relative frequency with which the possible outcomes will occur. In evolving a probability model for some phenomenon, an implicit assumption is made about how the natural, social and human world is configured and how it behaves. Such assumed assertions are contingent propositions that should be exposed to empirical tests.

Probability is also used as an implement for decision-making by drawing inferences when a limited volume of data is available. When combined with an assessment of utilities, it is also used for choosing a course of action in an uncertain context. Probability modeling and inference are often complementary. Inference methods are often required for choosing among competing probabilities. Thus, decision makers are faced with situations represented by sets of probability distributions, giving more weight to some assessments than to others. These techniques are used by insurance and reinsurance companies when they work out contracts for which statistical series are too short. Probability is a tool for reasoning from data, akin to logic, and for adjusting one’s beliefs in order to take action.

Uncertainty can be rigged, increased or fabricated. This is not unusual in the political and economic realms. Think of climate change, pesticides, acid rain, medicines and so on. In any case, dropping or neglecting partially certain public data means rejecting an often large volume of data whose dismissal, in spite of its uncertainty, can be greatly detrimental to the relevance of decisions. The data deluge that pours over us through current uncontrolled communication channels is a challenge not only for citizens but also for businesses in distinguishing relevant from fake information items.

Measures of degrees of belief

The degrees of belief about the future are ingrained with uncertainty. The usual way to come to practical terms with uncertainty is to use the concept of probability.

Two approaches to probability are generally considered, namely the frequency approach and the Bayesian approach. These two approaches are explained on the basis of the following statement: “the probability that the stock exchange index will crash tomorrow is 80%”.

The interest in games of chance stimulated work on probability and influenced the character of the emerging theory. Probability situations were analyzed into sets of possible outcomes of a gaming operation. The relative frequency of the occurrence of an event was postulated as a number called the “probability” of this event. It was expected that the relative frequency of occurrence of the event in a large number of trials would lie quite close to that number. But the existence in the real world of such an ideal limiting frequency cannot be proved. This approach to probability is just a model of what we think reality to be.

The statement “the probability that the stock exchange index will crash tomorrow is 80%” cannot express a relative frequency (even if financial market records are part of the evidence for the statement), because tomorrow comes but once. The statement implicitly expresses the credibility of the thought that the future is included in the past, on the basis that it is rational to be confident of the hypothesis (index crash) given the evidence of past records. This approach has often been called subjective, because its early proponents spoke of probability as being relative in part to our ignorance and in part to our knowledge (Laplace 1795). It is now acknowledged that the term is misleading, for in fact there is an “objective” relationship between the hypothesis (index crash) and the evidence borne by past records, a probability relationship similar to the deductive relations of logic (Keynes 1921). One is faced with reasonable degrees of belief relative to evidence.

The label “objective theory” applied to Keynes’ view has been criticized by F.P. Ramsey (1926). This skepticism led Ramsey, de Finetti (1937) and Savage (1954) to develop what Savage called a theory of personal probability. Within this framework, a statement of probability is the speaker’s own assessment of the extent to which (s)he is confident of a proposition. It is remarkable that a seemingly subjective idea like this is arguably constrained by exactly the same mathematical rules governing the frequency conception of probability.

Personal degrees of belief can arguably satisfy the probability axioms. These ideas were first proposed by Ramsey (1926). He considered a probability space as a representation of psychological states of belief. P(A) stands for a person’s degree of confidence in A; it is to be evaluated behaviorally by determining the least favorable rate at which this individual would take a bet on A. If the least favorable odds are, e.g. 3:1, the probability is P(A) = ¾.
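
In general, if the least favorable odds on A are x:y, the corresponding degree of belief is P(A) = x/(x + y); with odds of 3:1, this gives P(A) = 3/(3 + 1) = ¾.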

Conditional probability is denoted by P(A/B). In a frequency interpretation, this is the relative frequency with which A occurs among trials in which B occurs. Conditional probabilities may also be explained in terms of conditional bets. In a personal belief interpretation, P(A/B) may be understood as the rate at which a person would make a conditional bet on A, all bets being cancelled unless condition B is fulfilled. This approach, which underlies Bayes’ theorem, is of serious interest from a belief point of view.

Suppose that the Ai form an exhaustive set of mutually exclusive hypotheses of interest and that B is knowledge bearing on the hypotheses. Assume that a person, on the basis of prior knowledge, has a distribution of belief over the Ai, represented by P(Ai) for each i. Call this the prior distribution and assume that, for each Ai, P(B/Ai) is defined. This is called the likelihood of getting B if Ai is true. P(Ai/B) is interpreted as a logical relationship between Ai and B.
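
With these definitions, Bayes' theorem yields the posterior distribution of belief over the hypotheses, written here in the document's P(A/B) notation for conditional probability:

$$P(A_i/B) = \frac{P(B/A_i)\, P(A_i)}{\sum_j P(B/A_j)\, P(A_j)}$$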

The goal of the Bayesian method is to make inferences regarding unknowns (generic term referring to any value not known to the investigator), given the information available that can be partitioned into information obtained from the current data as well as other information obtained independently or prior to the current data, which can be assigned to the investigator’s current knowledge. The more or less assured certainty of the expected future states of nature is encoded as probability estimates conditional on the information available. Within this framework of thought, the repetitive running of a trial and error process is supposed to allow people to gain new knowledge and eventually change their beliefs. In the inceptive step, the distribution of a priori subjective probabilities with respect to the future possible states of nature and their properties is chosen on the basis of innate and acquired knowledge to build a representation of the likely outcome of future action. This procedure draws on Bayes’ theorem. When the factual outcome happens, its compliance and/or discrepancy with the expected effect are analyzed and memorized, producing incremental knowledge coming from experience.

There is a clear connection between logical probability, rationality, belief and revision of belief.

2.4.2.5. Belief, trust and truth

Truth is an attribute of beliefs (opinions, doctrines, statements, etc.). It refers to the quality of those propositions that accord with reality, or with what is perceived as reality. The contrast is with falsity, faithlessness and fakery.

Many explanations have been devised to elaborate on the correspondence between what is true and what makes it true. The correspondence theory asserts that a belief is true provided that a fact corresponding to it exists. What does it mean for a belief to correspond to a fact? How to verify that a fact exists in the context of virtual reality? A third party, trust, seems adequate to intervene within this framework to assess the credibility of information sources. The state of believing involves some degree of confidence towards a propositional object of belief.

Other theories have been proposed to explain how a belief is accepted as true. The coherence theory developed by Bradley and Blanshard asserts that a belief is verified when it is part of an entire system of beliefs that is consistent and harmonious. A statement S is considered logically true if and only if S is a substitution-instance of a valid principle of logic. The pragmatic theory produced by the two American philosophers C.S. Peirce and W. James asserts that a belief is true if it works, i.e. if accepting it brings success.

In a book about the impact of blockchain technology on business operations (buying and selling goods and services and their associated money transactions), Don and Alex Tapscott (2016) estimate that a trust protocol has to be established according to four principles of integrity:

  • – Honesty has become not only an ethical issue but also an economic one. Trusting relations between all the stakeholders of business and public organizations have to be established and made sustainable.
  • – Consideration means that all parties involved respect the interests, desires or feelings of their partners.
  • – Accountability means making clear commitments and abiding by them.
  • – Transparency means that information pertinent to employees, customers and shareholders must be made available to avoid the instillation of distrust.

This protocol shows how social actors have become aware of the importance of societal relationships in a faceless virtual world.

Blockchain is a distributed ledger technology. Blockchain transactions are secured by powerful cryptography that is considered unbreakable using today’s computers.

We resort to K. Lewin’s field theory to analyze how emotions, feelings, beliefs, truth and trust are dynamically articulated when agents interact and perform their activities (Lewin 1951). The fundamental construct introduced by K. Lewin is that of “field”. K. Lewin gained a scientific background in Germany before immigrating to America. That explains why he was led to introduce the concept of a “field” to characterize the spatial–temporal properties of a human ecosystem. This concept is widely used in physics to describe the physical properties of phenomena in a limited space.

All behavior, in terms of actions, thinking, wishing, striving, valuing, achieving, etc., is conceived of as a change in some state of a “field” in a given time unit. Expressed in the realm of individual psychology, the field is the life space of the individual (Lebensraum in German culture). The life space is equipped with beliefs and facts that interact to produce mental states resulting in attitudes at any given time. K. Lewin’s assertion that the only determinants of attitudes at a given time are the properties of the field at the same time has caused much controversy. But it sounds reasonable to accept that all the past is incorporated into the present state of the field under consideration. To put it differently, only the contemporaneous system can have effects at any time. As a matter of fact, the present field has a certain time depth. It includes the “psychological” past, the “psychological” present and the “psychological” future, which constitute the time dimension of the life space existing at a given time.

This idea of time dimension is also found in the concept developed by Markov to describe stochastic processes as chains of events. State changes in a system that occur in time follow some probability law. The transition from a certain state at time t to another state depends only on the properties of the state at time t: all the features of previous states are considered already included in the attributes of the present state.

All attitudes depend on the cognitive structure of the life space that includes, for each agent of a cluster, the other stakeholders of the cluster. When exposed to the behavior suggestions of other cluster agents or to their critical judgment of his/her own behavior, every agent either develops a conditioned reflex based on his/her innate and/or acquired knowledge embedded in his/her brain’s neural connections, or branches out into emotional expressions according to the way the received information is appraised as a reward or a threat. This last case occurs if (s)he feels (s)he cannot secure the right pieces of knowledge to produce an appropriate reaction. T.D. Wilson, D.T. Gilbert and D.B. Centerbar (2003) wrote “helplessness theory has demonstrated that if people feel that they cannot control or predict their environments, they are at risk for severe motivational and cognitive deficits, such as depression”.

If one organization agent trusts the other organization agents, his/her motivation is strengthened to embark on a learning process to better his/her acquired knowledge. Learning engages imagination, demands concentration, attention, efforts and trust in other agents’ good will. Conscious awareness is fully involved.

2.4.2.6. Beliefs and logic

Logic is the study of consistent sets of beliefs. A set of beliefs is consistent if the beliefs are compatible with each other, i.e. if they do not contradict each other. Beliefs are expressed by sentences. When written, these sentences stating beliefs are called declarative.

Many sentences do not naturally state beliefs. One sentence may have different meanings or interpretations depending on the context. Beliefs are, in some way or another, the outcome of “rational” reasoning. By rational, it is meant that rules of logic are called on for justifying the conclusions reached. But which rules?

Classical logic can be understood as a set of prescriptive rules defining the way reasoning has to be conducted to yield coherent conclusions. Within this framework, truth is unique. It is implicitly assumed that a universe exists where propositions are either true or false. The “principle of the excluded middle” is called on. It is well suited to data processing by computer systems: data coded as binary digits are memorized and processed by electronic devices that can be maintained in only two states (0 or 1). Classical logic does not mirror the way we reason in our daily life. It is acknowledged that our brain does not operate as a Turing machine (Wilson 2003). If our brain is viewed as a black box converting input into output, the transformation process can be represented by algorithms. But the intimate physiological mechanism cannot be ascribed to algorithmic procedures in the way a computer system crunches numbers.

Other systems of logic, descriptive by nature, have been worked out to try to take into account the ways and means we use to make decisions in our daily activities. Modal logic is a system we practice, generally implicitly. Modality is the manner in which a proposition or statement describes or applies to its subject matter. Derivatively, modality refers to characteristics of entities or states of affairs described by modal propositions.

Modal logic (Blackburn 2001) is a branch of logic which studies and attempts to systematize those logical relations between propositions which hold by virtue of containing modal terms such as “necessarily”, “possibly” and “contingently”, or “must”, “may” and “can”. These terms cover three modalities: necessity, actuality and possibility. In short, modal logic is the study of necessity (it is necessary that…) and possibility (it is possible that…). This is done with the help of the two operators □ and ◊, meaning “necessarily” and “possibly” respectively, which are instrumental in dealing with different conceptions of necessity and possibility:

  • – logical necessity, i.e. true by virtue of logic alone (if P then Q);
  • – contextual necessity, i.e. true by virtue of the nature and structure of reality (business context, social context, etc.);
  • – physical necessity, i.e. true by virtue of the laws of nature (water boils at 100°C under standard pressure).

Modal logic is not the name of a single logical system; there are a number of different logical systems making use of the operators □ and ◊, each with its own set of rules.

Modal operators □ and ◊ are introduced to express the modes with which propositions are true or false. They allow logical opposites to be clearly elicited. The operators □ and ◊ are regarded as quantifiers over entities called possible worlds. □ A is then interpreted as saying that A is true in all possible worlds, while ◊ A is interpreted as saying that A is true in at least one possible world.

The two operators are, in fact, connected. To say that something must be the case is to say that it is not possible for it not to be the case. That is, □ A means the same as ¬◊¬A. Similarly, to say that it is possible for something to be the case is to say that it is not necessarily the case that it is false. That is, ◊A means the same as ¬□¬A. For good measure, we can express the fact that it is impossible for A to be true, as ¬◊A (it is not possible that A) or as □¬A (A is necessarily false). The truth value of ◊A cannot be inferred from the knowledge of the truth value of A. Modal operators are situation-dependent. Following the 17th Century philosopher and logician Leibniz, logicians often call the possible options facing a decision-maker possible worlds or universes. A fresh approach to the semantics theory of possible worlds was introduced in the 1950s by Kripke (1963a and 1963b).

To say that ◊A is true, it is required to say that A is in fact true in at least one of the possible universes associated with a decision-maker’s situation. To say that □A is true implies that A is true in all the possible universes associated with a decision-maker’s situation. The modal status of a proposition is understood in terms of the worlds in which it is true and worlds in which it is false. Contingent propositions are those that are true in some possible worlds and false in others. Impossible propositions are true in no possible world.

Two logical operators, i.e. negation and the conditional operator → (if… then…), which are central in decision-making, require special attention when applied within the framework of possible worlds. Let us assume that a decision-maker is in a situation M and that M is a set of exclusive possible worlds. Each element of the set is a world in itself. Possible worlds are not static but dynamically time-dependent. Today’s world is not tomorrow’s world. This means that each possible world evolves in time according to rules. These dynamics can be represented by a tree diagram with nodes and branches reflecting the relations between the different worlds (nodes). Each branch is a retinue of possible worlds. A tree diagram reads top-down, so that from a given node, access is given only to the nodes on the branches issuing from it, not to any other node in the tree.

It is posited that A → B in the world m if and only if in all the worlds n accessible from m, A and B are simultaneously true. ¬A is true in the world m if and only if A is false in all the worlds n accessible from m.

Let us give examples of inference employing modal operators. Consider a situation S with two associated worlds S1 and S2 and two sentences a and b that can be true (T) or false (F) as shown in Figure 2.6.


Figure 2.6. A situation S and its two possible worlds

Consider the inference from ◊a and ◊b to ◊(a & b). It is invalid: a is T at S1; hence, ◊a is true in S. Similarly, b is true in S2; hence, ◊b is true in S. But (a & b) is true in no associated world; hence, ◊(a & b) is not true in S.

By contrast, the inference from □a and □b to □(a & b) is valid. If the premises are true at S, then a and b are true in all the worlds that are associated with S. Then, a & b is true in all those worlds, and □(a & b) is true in S.
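
This evaluation over possible worlds can be sketched in a few lines of Python; the truth assignments below reproduce the pattern of Figure 2.6 as described in the text (a true only at S1, b true only at S2).

```python
# Situation S with its two associated worlds and the truth values of a and b.
worlds = {
    "S1": {"a": True,  "b": False},
    "S2": {"a": False, "b": True},
}

def possibly(prop):      # ◊: true in at least one associated world
    return any(prop(w) for w in worlds.values())

def necessarily(prop):   # □: true in all associated worlds
    return all(prop(w) for w in worlds.values())

print(possibly(lambda w: w["a"]))             # True: ◊a holds in S
print(possibly(lambda w: w["b"]))             # True: ◊b holds in S
print(possibly(lambda w: w["a"] and w["b"]))  # False: ◊(a & b) fails in S
# By contrast, whenever necessarily(a) and necessarily(b) both hold,
# necessarily(a and b) holds as well, which is the valid inference above.
```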

Each software user exposed to two environments is subject to developing a situation of his/her own in terms of rules of meaning and action and of sets of “rational” propositions pertaining to his/her situational world at a certain time. Software designers are not aware of this immanent state of affairs. But even if they were, they could hardly cope with the wide variety of possible situational worlds users of software systems are embedded in.

2.5. Patterns of agent coordination

Coordination is central to a multi-agent system, for without it any benefits of interaction vanish and the society of agents degenerates into a collection of individuals with chaotic behavior. Coordination has been studied in diverse disciplines, from the social sciences to biology. Biological systems appear to be coordinated even though cells or “agents” act independently in a seemingly non-collaborative way. Coordination is essentially a process in which agents engage to ensure that a community of individual agents with diverse capabilities acts in a coherent manner to achieve a goal. Different patterns of coordination can be found.

2.5.1. Organizational coordination

The easiest way of ensuring coherent behavior and resolving conflicts consists of providing the group with an agent having a wider perspective of the system, thereby exploiting an organizational structure through hierarchy. This technique yields a classic master/slave or client/server architecture for task and resource allocation. A master agent gathers information from the agents of the group, creates plans, assigns tasks to individual agents and controls how tasks are performed.

This pattern is also referred to as a blackboard architecture because agents are supposed to read their tasks from a “blackboard” and post the states of these tasks to it. The blackboard architecture is a shared-memory model.

2.5.2. Contracting for coordination

In this approach, a decentralized market structure is assumed, and agents can take two roles, manager and contractor. If an agent cannot solve an assigned problem using local resources or expertise, it will decompose the problem into sub-problems and try to find other willing agents with the necessary resources/expertise to solve these sub-problems.

Assigning the sub-problems is engineered by a contracting mechanism consisting of a contract announcement, the submission of bids by bidding agents, the evaluation of these bids and the awarding of a contract to the appropriate bidder. There is no possibility of bargaining.
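
A minimal sketch of this announce/bid/award mechanism follows; the agent names, tasks and costs are illustrative assumptions, and a real contract-net implementation would add time-outs, task decomposition and result reporting.

```python
class Contractor:
    def __init__(self, name, costs):
        self.name = name
        self.costs = costs           # {task: cost of performing it}

    def bid(self, task):
        return self.costs.get(task)  # None means no bid for this task

def award(task, contractors):
    """Manager side: announce the task, collect bids, award to the best bidder."""
    bids = [(c.bid(task), c) for c in contractors if c.bid(task) is not None]
    if not bids:
        return None                  # no willing contractor found
    best_cost, winner = min(bids, key=lambda b: b[0])
    return winner.name               # contract awarded; no bargaining takes place

contractors = [Contractor("c1", {"machining": 8}), Contractor("c2", {"machining": 5})]
print(award("machining", contractors))  # c2
```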

2.5.3. Coordination by multi-agent planning

2.5.3.1. General considerations

Coordinating multiple agents is viewed as a planning problem. In this context, all actions and the interactions of agents are determined beforehand, leaving nothing to chance. There are two types of multi-agent planning, namely centralized and decentralized.

In centralized planning, the separate agents evolve their individual plans and then send them to a central supervisor which analyzes them and detects potential conflicts (Georgeff 1983). The idea behind this approach is that the central supervisor can:

  • a) identify synchronization discrepancies between the plans of the stakeholders (a minimal sketch of this conflict detection follows the list);
  • b) suggest changes and insert them in a realistic common schedule after approval by the stakeholders.
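As announced in point a), conflict detection by the supervisor can be sketched in a few lines; representing a plan as a set of (time period, resource) reservations is our own simplifying assumption:

# Central supervisor detecting synchronization discrepancies: two plans
# conflict when they reserve the same resource in the same time period.
from itertools import combinations

plans = {
    "agent_1": {(1, "press"), (2, "oven")},
    "agent_2": {(2, "oven"), (3, "press")},   # clashes with agent_1 in period 2
}

def detect_conflicts(plans):
    conflicts = []
    for (name1, plan1), (name2, plan2) in combinations(plans.items(), 2):
        for clash in plan1 & plan2:           # shared (period, resource) pairs
            conflicts.append((name1, name2, clash))
    return conflicts

print(detect_conflicts(plans))                # [('agent_1', 'agent_2', (2, 'oven'))]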

The distributed planning technique foregoes the presence of a central supervisor. Instead, it is based on the dissemination of each agent's plan to all the agents involved (Georgeff 1984, Corkill 1979). Agents exchange information with each other until all conflicts are removed, so as to produce individual plans coherent with one another. This means that each stakeholder shares information with its partners about its resource capacities.

2.5.3.2. E-enabled coordination along a supply chain

To illustrate the role of information technology, and especially telecommunications services, in coordination planning, the example of e-enabled demand-driven supply chain management systems will be described (Briffaut 2015). By e-enabled, it is meant that all transactions along the goods flow from suppliers to clients are engineered electronically, without paper. In particular, this context implies that customers place orders via a website. These cybercustomers expect to be provided without latency with data about product availability and delivery lead times.

The role of information sharing is acknowledged to be a key success factor in Supply Chain Management (SCM) in order to secure efficient coordination between all the activities of the stakeholders involved along the goods pipeline. In an e-enabled SCM, the coordination of multi-agent systems (MAS) is realized through a common information system engineered to share the relevant data between the stakeholders involved. The MAS approach is a relevant substitute for optimization tools and analytical resolution techniques whose efficiency is usually limited to local problems, without any adequate visibility over the behavior of the entire chain of stakeholders involved. Optimization of the operations of a whole is different from optimization of the operations of its parts. A global point of view is required to bring the synchronization of inter-related activities under full control.

In traditional contexts, a front office and a back office can be identified in terms of interaction with customers. The front office carries out face-to-face dealings with customers while using a proprietary information system. Relationships with the other agents of the supply chain take place by means of messages exchanged between their information systems. Coordination between the front office and the back office is generally asynchronous and does not meet real-time requirements.

When e-commerce is implemented via a website, the delineation between the back office and the front office of the previous configuration is blurred and has no reason to be taken into consideration. The two offices merge into one entity because of the response time constraint. When queries are introduced by cybercustomers via a website portal, the collaborative information system must have the ability to produce real-time answers (availability, delivery date). Then, the information systems of the various stakeholders along the supply chain have to be interfaced in such a way that coordination between them takes place synchronously. The thing to do is to implement a workflow of a sort between the “public” parts of the stakeholders’ information systems. In other words, some parts of the stakeholders’ information systems contribute to producing a relevant answer to cybercustomers’ queries. Figure 2.7 shows the changes induced by a portal in terms of information exchange between the supply chain stakeholders.

image

Figure 2.7. Coordination between proprietary information systems through a collaborative system

2.5.3.3. Scenario setting for placing an order via a website

Each time a customer enters an order via the website, the supply chain collaborative information system shared by all stakeholders proceeds with an automatic check of projected inventories and uncommitted resource capacities per time period. If the item ordered is already available, the customer is advised accordingly and can immediately confirm the order. If the item is not available in inventory, the system checks whether the quantity and the delivery date requested can be met. In other words, the system checks whether manufacturing and supply capacities are available to meet the demand on the due date. Otherwise, it checks what the best possible date could be and/or what split quantities could be produced by using a simulation engine. Then, the customer is advised of the alternatives and can choose an option suitable to him/her. Once the option is confirmed, the system automatically creates a reservation of capacity and materials on the order promising system and forwards the order parameters to the back-office systems to be included in the production plan of the stakeholders involved. Order acknowledgments and confirmation are generated and sent by email.

2.5.3.4. Mapping the order scenario onto the structures of a BDI agent

The role of the Beliefs structure is to record order entries, send answers and process transaction data to turn them into memorized statistics. These statistics are used as entry data to update the APS (Advanced Planning System). The APS is implemented as a control tool over a short time horizon and is used as a non-repudiable commitment taken by the manufacturing shops. The Plan structure establishes and memorizes the APS pertaining to the supply chain as a whole. This means that this entity updates the supply chain APS on a regular time basis from data provided by the Beliefs structure. As the BDI agent acts as the front office of the supply chain with respect to the buyer side, it seems reasonable to ascribe it a centralized coordination role. Within this perspective, it draws up partial plans for the stakeholders on their behalf. When conflicts arise, it has the capabilities to bring the imbalance of distributed resources under control. In other words, the Plan structure is assigned to implement the APS concept. It ensures that the data required to derive their partial APS are made available in due time to all agents involved along the goods flow. It has two major features:

  • – concurrent planning of all partners’ processes;
  • – incremental planning capabilities.

The APS is intended to secure a global optimization of all flows through the supply chain, not only by increasing ROI (Return on Investment) and ROA (Return on Assets) but also by ensuring customer satisfaction and retaining customer loyalty.

The Desires structure is in charge of supporting the use of the ATP and CTP parameters. ATP stands for Available-To-Promise and CTP for Capable-To-Promise. Per time period, the ATP parameter makes it possible to deliver an answer to a client request in terms of availability (quantity and delivery date). Either the request can be fulfilled from a scheduled inventory derived from the enforced APS, or a simulation of a sort has to be carried out through the CTP parameter to send an answer to the client. The CTP parameter takes account of the lead times required to mobilize potentially available resources and allows a real-time answer to customer requests when necessary. When activated, it results in the production of a new APS. The APS technique is generally supposed to be able to produce rolling manufacturing plans to match the demands of the buyer side.

The fulfillment of the committed schedule APS is ascribed to the Intentions structure. The PAB parameter is managed by this structure because it includes all of what is recorded as committed (APS and customer orders). The PAB (Projected Available Balance) parameter represents the number of completed items on hand at the end of each time period. It can be viewed as a means of providing some margin in cases where some resources are temporarily out of service.
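A simplified per-period calculation may clarify how the PAB behaves; the figures are invented and demand is assumed, for illustration, to equal the committed customer orders:

# Illustrative computation of the PAB (Projected Available Balance):
# PAB(t) = PAB(t-1) + committed output MPS(t) - committed orders(t).
on_hand = 40                     # completed items on hand before period 1
mps     = [0, 50, 0, 50]         # committed manufacturing output per period
orders  = [20, 25, 15, 30]       # committed customer orders per period

pab, balance = [], on_hand
for produced, ordered in zip(mps, orders):
    balance = balance + produced - ordered
    pab.append(balance)

print(pab)   # [20, 45, 30, 50]: the margin available at the end of each period

A period in which the computed balance would turn negative signals that the committed schedule cannot absorb the orders and that the CTP simulation described above has to be triggered.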

Let us use an example to explain how the roles of the four structures (Beliefs, Desires, Intentions and Plans) are connected and how they are performed by the interpreter. An input message coming from a customer is captured by the Beliefs data structure and analyzed by the Interpreter. The message deals with a change in the delivery schedule induced by a new supply order entry. If this change requirement falls within the time fence linked to the supply lead time, it is rejected. Otherwise, as the agent has to act, the available resource capacities for the time period, be they committed or uncommitted, the projected on-hand inventory and the uncommitted planned manufacturing output are analyzed. If one of the possible supply sources can meet the specification required, the agent selects appropriate actions or procedures (Plans) to execute from the set of functions available to it and commits itself (Intentions). This simple scenario can be conceptualized by a repeat loop as shown below:

BDI-Interpreter

Initialize-state [ ];

repeat

  • a) Options := read the event queue (Beliefs) and go to option ATP (Desires);
  • b) Option ATP: if the ATP parameter proves relevant (order fulfilled without altering the existing MPS), then update its value and go to Intentions to update PAB; otherwise go to option CTP;
  • c) Option CTP: if the CTP parameter proves relevant (possible adjustment of the current MPS while abiding by commitments in force), then go to Intentions for updating; otherwise reject the request;
  • d) Execute [ ];

end repeat
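A minimal Python rendering of this loop is sketched below. The two relevance predicates are hypothetical stubs; in a real system they would query the APS/MPS data of the supply chain:

# Illustrative rendering of the BDI interpreter loop above. The thresholds
# in atp_relevant and ctp_relevant are invented stand-ins for real checks
# against the MPS and the commitments in force.
def atp_relevant(order):
    return order["quantity"] <= 10    # stub: order fits the existing MPS

def ctp_relevant(order):
    return order["quantity"] <= 25    # stub: the MPS can be adjusted this far

def bdi_interpreter(event_queue, pab):
    for order in event_queue:               # a) Options: read the event queue (Beliefs)
        if atp_relevant(order):             # b) option ATP (Desires)
            pab -= order["quantity"]        #    Intentions: update the PAB
        elif ctp_relevant(order):           # c) option CTP: adjust the current MPS
            pab -= order["quantity"]
        else:
            print("rejected:", order)       #    the request cannot be met
        # d) Execute: the committed plans would be released here
    return pab

print(bdi_interpreter([{"quantity": 5}, {"quantity": 20}, {"quantity": 40}], pab=100))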

2.6. Negotiation patterns

Negotiations are the very fabric of our social life. A negotiation is a discussion pertaining to decision-making with a view to an agreement, full or partial, or a compromise when the discussants have incompatible mind-sets. When differences in opinion between discussing parties arise, several strategies are instrumental in trying to resolve the issue by determining what the fair or just outcome should be. A first strategy is for the parties to agree to resort to a set of procedural rules, defined beforehand, for settling eventual conflicts. This situation can be formalized by a negotiation protocol. A second strategy is to seek the advice of a referee. This strategy is aimed at giving the power of intervening in the conflict to an unbiased person; in this case, the power to decide on the issue remains in the hands of the discussants, with or without the referee taking part. A third strategy is to transfer the full responsibility for deciding the pending issue to a third party. The risk is then that "asymmetric ignorance" between the parties involved leads to an absence of consensus when it comes to deploying the decision.

Coordination is predicated on the implicit idea that the agents involved share a common interest in achieving an objective. Negotiation does not necessarily take place between opponents and competitors, but the term often bears that connotation.

There are probably many definitions of negotiation. In our opinion, a basic definition of negotiation has been given by Bussmann and Muller (1992):

“…negotiation is the communication process of a group of agents in order to reach a mutually accepted agreement on some matter”.

The purpose of any negotiation process is to reach a consensus for the "balanced" benefit of the parties. "Balanced" does not mean "optimal" for the parties involved, but what they may consider the least unfavorable solution. This process may be very complex and involve the exchange of information, the relaxation of initial constraints, mutual concessions, lies or threats. One can easily imagine the huge and varied literature produced on the subject of negotiation.

Negotiation can be competitive or cooperative depending on the behavior of the individual agents. Competitive negotiation takes place in situations where independent agents with their own goals attempt to reach agreement over well-defined alternatives. They are not a priori prepared to share information and cooperate. Cooperative negotiation takes place where agents share the same vision of their goals and are prepared to work together to achieve efficient collaboration.

2.7. Theories behind the organization theory

In spite of the everlasting claim by the French to be different in terms of culture (exception culturelle), many management tools currently used in France and introduced after the Second World War are based on imported concepts and practices. The costing system, called in French "comptabilité analytique", is taken from the German costing system; the first textbooks on costing were published in the 19th Century in Germany (Ballewski 1887). On the other hand, in the same period, organizational concepts under the wording "organization theory" were imported from the USA. Most contributors to this discipline in the USA had a sociology background. This situation can be ascribed to the characteristics of the American cultural context. A telling insight can be found in the book The Growth of American Thought by Merle Curti (1964). Two chapters ("The Advance of Science and Technology" and "Business and the Life of the Mind") are of special interest for understanding the involvement of sociologists in studying the working conditions of the labor force in large corporations. The promotion of applied science to the arts was oriented toward giving engineers and mechanics the means to sharpen material benefits at the expense of the moral values so deeply ingrained in the Christian heritage of the Pilgrim Fathers.

The many aspects of what is called organization theory in English- and French-speaking contexts defy easy classification. No system of categories is perfectly appropriate for organizing this material. This is why some baseline theories explained in the following subsections can help to derive schemes eliciting the very nature of the multiple casts of thought in this realm in a business environment.

It is important to realize that in the past decades, information and communication technologies have had a disruptive impact on the ways and means by which corporations and communities of all sorts have been redesigned to keep up with their environments. Here are the main features of the transformations perceived:

  1. 1) The enterprise is transformed from a closed system to an open system, a network of self-governing micro-enterprises with free-flowing communication among them and mutually creative connections with outside contributors. Some popular wording can be associated with this idea of openness (networked enterprise, open innovation, co-makership, etc.).
  2. 2) Employees are transformed from executors of top-down directions to self-motivated contributors, in many cases choosing or electing the leaders and members of their teams.
  3. 3) Purchasers of business offerings are transformed from customers to lifetime users of products and services designed to solve their problems and increase their satisfaction.

2.7.1. Structural and functional theories

This includes a broad group of loosely associated approaches. Although the meanings of the terms structuralism and functionalism are not clear cut and admit many variations, they designate the belief that social structures are real and function in ways that can be observed objectively (Giddens 1979).

It is relevant at this stage to elaborate on the term “function”. When considering the function of a thing, a distinction has to be made between

  1. a) what the thing does in the normal course of events (its activity);
  2. b) what the thing brings about in the normal course of events (the result of its activity).

Of course, it is understood that the activity of a thing and the outcome thereof are strongly correlated with the structure of the entity under consideration. When a function is ascribed to an agent, it is usually implied that a certain purpose is served.

The concept of mathematical function does not oppose the previous view but complements it by stressing the relation between two terms in an ordered pair. This pair of constituents can be for instance activity and result.

A functional explanation is a way of explaining why a certain phenomenon occurs or why something acts in a certain way by showing that it is a component of a structure within which it contributes to a particular kind of outcome.

Systems theory is deeply rooted in the structural–functional tradition, which can be traced back to Plato and Aristotle. Modern structuralism generally recognizes E. Durkheim (1951), who emphasized the concept of social structure, and F. de Saussure, the founder of structural linguistics, as key figures.

The structural technical architecture of cybernetics has explicitly or implicitly permeated organization theory. This means that a structure is described in terms of controlled and controlling entities, with the underlying assumption that it has to deliver a targeted output within the framework of a contingent ecosystem. This cybernetics mind-set prevails from the design stage onwards, a process called organizational design.

2.7.2. Cognitive and behavioral theories

This genre of theories is a combination of two different traditions that share many characteristics. They tend to espouse the same general ideas about knowledge as structural–functional theories do. Structural and functional theories focus on social and cultural structures, whereas cognitive and behavioral theories focus on the individual.

Psychology is the primary source of cognitive and behavioral theories. Psychological behaviorism deals with the connection between stimuli and behavioral responses. The term cognition refers to thinking or the mind, so cognitivism tries to understand and explain how people think. Cognitivism (Greene 1984) goes one step further than behaviorism and emphasizes the information-processing phase between stimuli and responses. In effect, cognitivism tries to open the black box that converts stimuli into responses in order to understand the mechanism involved.

These two groups of theories form a basis from which many other theories, revealing the tone and color of their proponents, can be derived. When the focus is put on the relations between the various entities of a structure, structural and functional theories shift to what are called interactionist theories. When theories go further than merely describing a contextual situation and also criticize theories of this kind, e.g. on the grounds of the conflicts of interest in society or the ways in which one group perpetuates domination over another one, they are called critical theories.

A behavioral view implies that beliefs are just dispositions to behave in certain ways. The problem is that our beliefs, including their propositional content indicated by a "that"-clause, typically explain why we do what we do. Explaining action via the propositional content of beliefs is not accommodated in the behavioral approach.

2.7.3. Organization theory and German culture

When one scrutinizes the syllabi of German educational institutions in the field of management, what deals with organization theory (Organisationstheorie) is presented as "Grundlagen der Organisation, Aufbau-, Ablauf- und Prozessorganisation" (Foundations of Organization: Structure, Fluxes and Process), often with the additional subtitle "Unternehmensführung und Strategie" (Enterprise Guidance and Strategy). Within this framework, seven issues have to be addressed to design a coherent organization. By coherent, it is meant that any organizational configuration has a purpose, an objective; otherwise, it is irrelevant to devote efforts, i.e. resources, to designing and building an object without significance.

The issues to address are:

  • – What is the purpose?
  • – How: what functional capabilities are required?
  • – What resources are required?
  • – When does this structure have to be operated?
  • – Where: what is the ecosystem of the location?
  • – What is the relevance of the strategy?
  • – What is the distribution complexity of deliveries?

The synopsis of Figure 2.8 portrays how the procedures for deriving the "Aufbauorganisation" and "Ablauforganisation" components are systematically deployed and how they are combined to deliver an effective, fully fledged business organization.

image

Figure 2.8. Systematic approach to derive the Aufbau- und Ablauforganisation components of a business organization (source: Knut Bleicher (1991), p. 49)

All these processes are underpinned by a theory developed in the field of sociology. The mind-set of a social system has been an important contribution to eliciting the problem of social complexity. Niklas Luhmann has made a major contribution to this question in his books Soziale Systeme: Grundriss einer allgemeinen Theorie and Einführung in die Systemtheorie (Luhmann 1984; Luhmann 2002). In his social systems theory, Niklas Luhmann strives to incorporate the conceptual innovations of the 20th Century in the realm of social theory. He draws on systems theory, a major conceptual innovation of the 20th Century, to provide a framework for describing modern society as a complex system of communication that has differentiated into a network of social subsystems.

The systems theory worked out by Niklas Luhmann explores the collapse of the boundaries between the observer and the observed, from different angles and in a variety of contexts, within the framework of second-order cybernetics. Understanding the complexity of the observed system, the complexity of its observing environment and their combination is a challenging analytical exercise. Niklas Luhmann applied the autopoiesis concept to sociology through his systems theory in an endeavor to come to terms with this conundrum.

The term autopoiesis (from Greek αὐτο- (auto-), meaning "self", and ποίησις (poiesis), meaning "creation, production") refers to a system capable of reproducing and maintaining itself. The term was introduced in 1972 by Chilean biologists Humberto Maturana and Francisco Varela (1980) to characterize the self-maintaining chemical reactions of living cells. Since then, the concept has also been applied to the fields of cognition, systems theory and sociology.

Original definitions produced by Humberto Maturana and Francisco Varela (1980) are given in the following excerpts:

  • – “An autopoietic machine is a machine organized (defined as a unity) as a network of processes of production (transformation and destruction) of components which:
    • i) through their interactions and transformations continuously regenerate and realize the network of processes (relations) that produced them;
    • ii) constitute it (the machine) as a concrete unity in space in which they (the components) exist by specifying the topological domain of its realization as such a network” (p. 78).
  • – “The space defined by an autopoietic system is self-contained and cannot be described by using dimensions that define another space. When we refer to our interactions with a concrete autopoietic system, however, we project this system on the space of our manipulations and make a description of this projection” (p. 89).

This space of manipulations can be thought of as akin to Lewin's field theory discussed in section 2.4.2.5.

Autopoiesis was originally presented as a system description that was intended to define and explain the nature of living systems. These structures, based on an inflow of molecules and energy, generate the components which, in turn, continue to maintain the organized bounded structure that gives rise to these components.

An autopoietic system should be contrasted with an allopoietic system, such as a car factory, which uses raw materials (components) to generate a car (an organized structure) which is something other than itself (the factory). However, if the system is extended from the factory to include components in the factory’s “environment”, such as supply chains, plant/equipment, workers, dealerships, customers, contracts, competitors, cars, spare parts and so on, then as a total viable system, it could be considered to be autopoietic.

Though others have often used the term as a synonym for self-organization, Humberto Maturana himself stated he would "[n]ever use the notion of self-organization… Operationally it is impossible. That is, if the organization of a thing changes, the thing changes" (Maturana and Varela 1980). Moreover, an autopoietic system is autonomous and operationally closed, in the sense that there are sufficient processes within it to maintain the whole. Autopoietic systems are "structurally coupled" with their environment, embedded in a dynamic of changes that can be described as sensory-motor coupling. This continuous dynamic is considered as a rudimentary form of knowledge or cognition and can be observed throughout life forms.

Niklas Luhmann's systems theory allows complexity to be simulated in order to explain it. It does so by creating a flexible network of selected interrelated concepts that can be combined in many different ways and thus be used to describe the most diverse social phenomena. Luhmann defines complexity in terms of a threshold that marks the distinction between two types of systems: those in which each element can be related to every other element and those in which this is no longer the case. In information terms, complexity expresses a lack of information, preventing a system from completely observing itself or its environment. This drives observers to reduce complexity by forming system models that are less complex than their environment. This approach generates an asymmetrical, simplifying relationship to observed systems. The ability to reduce complexity means that complexity as such cannot be observed, because "unorganized" complexity is transformed into organized complexity, so to speak.

Niklas Luhmann insists on the difference between the conceptual abstraction (theoretically oriented) and the self-abstraction (structurally directed) of objects when modeling takes place. Conceptual abstraction makes comparisons possible, and self-abstraction enables the repetition of the same structures within objects themselves. The concept “system” serves to abstract facts that are related to objects exhibiting features justifying the use of this concept and to compare them with each other and with other kinds of facts in order to assess their difference or similarity.

2.8. Organizations and complexity

2.8.1. Structural complexity

In current parlance, an organization is a massively parallel system of agents' concurrent behaviors. In spite of the fact that some agents may be acknowledged to have the mission to issue and control rules, each agent is responsible for its own actions with respect to the current aggregate patterns it is embedded in. When agents' behaviors are consistent with these patterns, an organizational equilibrium prevails.

As a matter of fact, an organization is organic, evolutionary and contingent. This means that an organization experiences an endogenously generated nonequilibrium. Idealized equilibrium models distort a reality that is not static and generate biased decisions by the stakeholders concerned. How people decide is important: they may stand back from their current situation and attempt to make sense by surmising, making guesses, using past knowledge or their imagination. "We are in a world where beliefs, strategies and actions of agents are being tested for survival within a situation or outcome or 'ecology' that these beliefs, strategies and actions together create" (Arthur 2013). An organization is subject to inherent feedback and feed-forward loops conducive to emerging organizational patterns with a relentless arrow of time.

New organizational patterns can result from "bifurcation", as explained by Ilya Prigogine. This feature will be addressed in Chapter 3. Which branch of a bifurcation is followed is impossible to forecast and results from local "instabilities" developing into global changes. This type of situation turns out to be a source of uncertainty.

2.8.2. Behavioral complexity in group decision-making

Any group of agents in an organization is supposed to fulfill a mission and to reach a target by taking courses of action after a decision-making process has been explicitly or implicitly carried out. When individual decision-making is considered, arguments are made explicit to explain that “rationality” is the driving force that underpins the behavior of individuals. But what happens when this explanation is applied to groups of individual agents whose interests do not precisely coincide, but who are obliged for some reason or other to act jointly?

Consider, therefore, some alternative actions that pertain to a group of individual agents. By this, it is meant that the presence or absence of these alternatives affects each of them. Can any sense be made of the idea of preference between these alternatives, where preference refers to the agents as a group?

Let two alternatives be X and Y: X Pi Y means person i prefers X to Y. X Ii Y means person i is indifferent between X and Y. If for some decision-makers the preference is X Pi Y and the remaining decision-makers are indifferent between X and Y, it seems reasonable to say X P Y, where P refers to the group's preferences.

The kind of case we contemplate here is the preference between two courses of action. They are group phenomena in the sense that their effects on any one person may be seen by another group member to be relevant to their interest. It is an interesting and important part of social group coexistence to consider how these differences are dealt with and can be brought under control. If group members disagree about the relative merits of the group situations X and Y, no meaning can be attributed to X P Y, Y P X or X I Y. It may happen that the group is obliged to make a choice between X and Y; one would then have to analyze their choice in terms other than those regarding group preferences. How can we explore the possibility of group preferences even when there is no agreement among group members?

Inspired by the approach used by economists, we can define a preference function mirroring the goal(s) shared by all group members. This function relates individual preferences (independent variables) to the group goal (dependent variable). Let us take an example. Suppose that there are three individuals and two situational actions X and Y. The possible preferences are X P Y, Y P X and X I Y for each individual and the group. This means that there are 27 (= 3 × 3 × 3) possible combinations of individual preferences within the group. They are listed in the following table.

For each of these combinations of individual preferences, we must attach some group preference. It is clear that there is a wide spectrum of possibilities. One is to make the group's preferences exactly the same as those of a particular member: this is dictatorial behavior that is likely to be rejected. A second possibility is to make the group's preferences independent of individual preferences by, e.g. writing X I Y all the time. This option seems pointless and ignores the cases where a consensus for X P Y or Y P X is shared by the individuals. Let us fill in these consensus cases, i.e. those where some individuals hold a strict preference and no one holds the opposite one. They are 15 in number, leaving 12 where there is no consensus. Can we make any progress with these 12?
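These counts can be checked by enumeration, as the sketch below does; a combination is treated as filled in by consensus exactly when no two individuals hold opposite strict preferences:

# Enumerate the 27 combinations of individual preferences and count those
# that can be filled in by consensus (no opposed strict preferences).
from itertools import product

attitudes = ["XPY", "YPX", "XIY"]
combos = list(product(attitudes, repeat=3))          # 3^3 = 27 combinations

def consensus(combo):
    return not ("XPY" in combo and "YPX" in combo)

filled = [c for c in combos if consensus(c)]
print(len(combos), len(filled), len(combos) - len(filled))   # 27 15 12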

First, we can refer to Arrow's impossibility theorem. It states that, when a group of at least two decision-makers is confronted with at least three alternatives, there is no preference function that simultaneously satisfies the Unanimity, Independence and Non-dictatorship axioms. Such a function would map each profile of individual preferences into a group preference function. The preferences are defined on a set of at least three alternatives, and there are no restrictions on the preferences beyond the usual ordering properties. Unanimity says that when all individuals prefer an alternative X to the other alternatives Y, Z, etc., then the group must "prefer" X to Y, Z, etc.

image

Independence means that the only information relevant for determining the group "preference" on a set of alternatives is the individual preferences on that set. Non-dictatorship rules out an individual such that, whenever they prefer X to Y, Z, etc., the group must "prefer" X to the other alternatives. Henceforth, we apply the word "preferences" and related expressions to the group of individuals as well as to individuals, without the quotation marks.

Some of the flavor of Arrow’s theorem can be seen by considering the so-called paradox of majority voting which lies at its heart. Let the preferences of three individuals among three alternatives X, Y and Z be as follows:

Individual 1: X P Y P Z
Individual 2: Z P X P Y
Individual 3: Y P Z P X

Suppose these three people decide on their group choices by majority voting. If they vote on X versus Y, two of them (1 and 2) prefer X to Y, and one of them (3) prefers Y to X. It follows that the group choice is XPY. If they vote between Y and Z, two of them (1 and 3) prefer Y to Z, and one of them (2) prefers Z to Y. It follows that the group choice is YPZ. By transitivity, the group must now prefer X to Z, but if X and Z are voted on, two of them (2 and 3) prefer Z to X and only one (1) X to Z. In other words, there is also a majority for Z against X. What this means is that if group preference is determined by majority rule, the transitivity principle may cease to hold. Another way of interpreting this fact is to note that the chronological order in which the issues are put on the agenda of discussions may be of crucial importance.
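The cycle can be verified mechanically. The sketch below encodes the three rankings given above and computes the pairwise majority winners:

# Pairwise majority voting over the three rankings, exhibiting the cycle:
# X beats Y, Y beats Z, and yet Z beats X.
rankings = [
    ["X", "Y", "Z"],   # individual 1
    ["Z", "X", "Y"],   # individual 2
    ["Y", "Z", "X"],   # individual 3
]

def majority_winner(a, b):
    votes_for_a = sum(r.index(a) < r.index(b) for r in rankings)
    return a if votes_for_a > len(rankings) / 2 else b

for a, b in [("X", "Y"), ("Y", "Z"), ("X", "Z")]:
    print(a, "vs", b, "-> majority prefers", majority_winner(a, b))
# X vs Y -> X, Y vs Z -> Y, X vs Z -> Z: transitivity fails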

There are special cases in which the paradox of majority voting does not arise. The obvious case is when individual preferences are identical. In this case, the group preference will correspond to these identical individual preferences. A second possible case is when the group members can be paired off with one member left over. Assume that the pairings are arranged so that the preferences of one individual in each pair are exactly the opposite of the other individual's. In every vote, therefore, these paired individuals cancel each other out, leaving the odd person's preferences to determine the group preferences.

It is clear that group decision-making is a “complex” context-dependent exercise. It is pointless to produce standard “recipes”, especially when projects such as information systems are implemented. As the project proceeds, the context changes, and the requirements can also change dramatically because the stakeholders experience unexpected changes in their working environment and decide unconsciously or consciously on unforeseen uses of their new equipment.

2.8.3. Autonomous agents and complexity in organization operations: inexorable stretch to artificial organization

2.8.3.1. The burgeoning backdrop

In the first era of the Internet, management thinkers talked up the networked enterprise, the flat corporation and open innovation as new business ecosystems, successors to the hierarchies of early-20th-Century industrial corporations. On the one hand, these hierarchies remain fairly intact in big dot-com companies. On the other hand, new types of problems have emerged in terms of privacy, security and inclusion, which cryptography has been called on to solve. New technologies, namely interconnected devices (Internet of Things), mass data storage, worldwide distributed ledgers (blockchain), etc., have enlarged these problems to such a scale that traditional organizational patterns in some economic sectors will be globally and locally overturned. It is not the purpose of this book to try to forecast how organizations will experience fundamental transformations in their structures and operations; an exercise of this sort is always hazardous. Let us focus on the AI-driven autonomous agents which are coming into play. They can be defined as AI-driven devices that, on behalf of their designers, take information from their environments and are capable of choosing and taking courses of action. They can modify the ways they achieve their objectives, sensing and responding to their environments over time.

Humans can interact with agents capable of varying degrees of autonomy, whether in the loop (with a human constantly monitoring the operation and remaining in charge of the critical decisions), on the loop (with a human supervising the agent and able to intervene at any stage of a process in progress) or out of the loop (with the agent carrying out its mission without any human intervention once launched). In this last situation, the autonomous agent is under potential threat of cyber-attacks without being able to detect them and take appropriate counter-measures. Crucially, the identity of the attacker (another autonomous agent?) may be ambiguous, leaving those under attack uncertain as to how to respond when aware of the situation. This context is prone to change the very nature of the economic-competition battlefield.

Any individual in our digital economy will increasingly interact with faceless hidden partners, in most cases without knowing whether the response received is sent by a human or a machine. Dealing with hidden partners is a source of uncertainty and anxiety leading to chaotic behaviors.

2.8.3.2. The looming artificial-intelligence-driven organization in the era of the Internet of Everything (IoE)

Organizations, whether they are populations, societies, groups or economic sectors, will have to come to terms with the co-presence of human agents with their psychological profiles in terms of beliefs, attitudes and knowledge and virtual agents, some of which are endowed with AI capabilities for deciding on action, communicating, reasoning, perceiving their environments, and planning their objectives.

The idea of DACs (Decentralized Autonomous Corporations), companies with no directors, is being touted. They would follow a pre-programmed business model and would be managed by applications of the blockchain tenet. In essence, the blockchain tenet means a shared mass agreement on transactions between a closed cluster of stakeholders and the distributed storage of encrypted data. It is a ledger that keeps a record of the transactions accepted by all stakeholders and secures their storage. It is thought that the blockchain system would act as a way for a DAC to store financial accounts, insurance contracts, bonds or security records between the cluster members. This organizational architecture is appealing because outside intruders will find it hard to get access to encrypted data or to shut down the whole system, in view of its decentralization. Airing these ideas may be considered science fiction and disruptive with respect to the current context. But virtual agents have been in operation for a long time, for instance computer-supported Management Information Systems (MIS), and new types are appearing on the economic stage. From finance (banking, payments, crowd-funding) to sharing economies (Uber and AirBnB-like platforms) to communications (social networks, email) to reputation systems (credit ratings, seller ratings on e-commerce sites) to governance, decentralized autonomous agents are already economic actors. Their possibilities for eliminating human intermediation in many industries seem endless, but trees do not grow to the sky. Platforms like these may have massive implications for what the future will look like. When this picture of the future face of the global economy is offered to the imagination of the public by futurologists, the basic characteristics of the laws of Nature should not be forgotten, namely nonlinearity, self-reorganization and chaos.

Special attention should be given to the laws of biology. Economic societies, like all human and animal communities, are composed of living organisms, and as such, reference to the approaches and concepts developed in the realm of biology could be helpful for understanding the evolution of economic societies. Jacques Monod's seminal book Le Hasard et la Nécessité – Essai sur la philosophie naturelle de la biologie moderne (Chance and Necessity – Essay on the Natural Philosophy of Modern Biology) (Monod 1971) is inspired by a quote attributed to Democritus: "Everything existing in the universe is the fruit of chance and necessity". Monod contends that mutations are unpredictable and that natural selection operates only upon the products of chance. In Chapter 7, Monod states that the decisive factor in natural selection is not the "struggle for life" but the differential rate of reproduction. The only mutations "acceptable" to an organism are those that "do not lessen the coherence of the teleonomic (end directed) apparatus, but rather, further strengthen it in their already assumed orientation" (Monod 1971, p. 119). Jacques Monod explains that teleonomic performance is assessed through natural selection and that this system retains only the very small fraction of mutations that will perfect and enrich the teleonomic apparatus. He makes the point that the selection of a mutation is due to the environmental surroundings of the organism and its teleonomic performances.

What will be the balance of power between human agents and AI-aided virtual agents? It is likely that the future context will be an adapted evolution of the current one. What is impossible to forecast is the tempo of this evolution. In the 1980s, some futurologists forecast that by the end of that decade, cash money would have disappeared. Recently, the ECB committed itself to keep printing bank notes. It is not a matter of technology but of psychology: people feel that they have full control of the cash they can hoard. Let us try to elaborate on the position of human agents in a "mixed" context, drawing on existing situations.

Up to now, an important parameter has not been taken into consideration in the arena of Management Information System (MIS) design, namely the virtual nature of computer-aided information systems. A human making use of computer-aided information systems to collaborate in a business environment is exposed to a multiverse: a real-life universe and a virtual universe where interaction with a set of actors hidden behind a screen takes place. Figure 2.9 portrays this situation.

Each stakeholder of a collaborative computer-based context is at the interface of two environments with which they interact, namely a virtual environment via a human–machine interface and a real-life environment accessible through all their senses (sight, touch, hearing, smell) as shown in Figure 2.9.

image

Figure 2.9. A human agent at the interface of two universes

When a decision-making process is engineered, this human relies on a space of pertinent facts extracted from the real-life and virtual universes they are exposed to. In order to conceptualize how this space is operated, the theory of the coordinated management of meaning (CMM) can be called on. It is made up of schemata resulting from interpretative rules of meaning, deciphering messages and events coming from a real-life universe and from a virtual universe (computer-aided systems). Regulative rules of decision for action in the real-life universe are applied to derive data-driven decisions from memorized schemata. These rules refer to the modal logic that we practice, often implicitly.

In any situation where human and virtual agents (actors) interact, a context is made up of three dimensions:

  • – the agents involved characterized by their functional capabilities and roles;
  • – the shared objectives and/or a common vision evolved from overlapping individual objectives;
  • – the environment surrounding the cluster of agents bound and committed to reach a common achievement.

As is stressed in second-order cybernetics, what is ascribed as the environment of a system, here a cluster of agents, must be considered as an agent of the system. The following arguments are intended to demonstrate this.

Message transmission between a sender and a recipient is prone to distortion of content, often called filtering. This may induce biased interpretation by the receiver. It is an important feature to take into account when analyzing how agents communicate. Two notable elements are involved. The first is the semantics of messages, which can be considered a dimension of semantic interoperability. The other is the fact that messages can deliver a biased understanding of the actual context, engineered by the senders. A common basis of knowledge should be ensured between all the actors involved in order to avoid misinterpretation and, as a consequence, inefficient decision-making resulting from ambiguous mutual understanding. But accurate communication without the impairment of meaning is seldom, if ever, found in the complex realities of business life. Figure 2.10 portrays the picture of an interactive context.

image

Figure 2.10. Description of a context shared by a set of actors interacting between themselves and their environment

Two interaction patterns between the actors, i.e. direct interaction and indirect interaction, can be identified. The idea of direct interaction is straightforward to understand. Indirect interaction means that the environment acts as a mediator that can influence the behaviors of the actors involved; the environment is thus considered a full actor of the system. This feature underlines the importance of the environment as an entity for context-awareness in collaborative environments. From this point of view, the environment must be considered not only as an actor in itself but also as the only communication channel between the set of actors when a medium-based virtual collaborative environment is implemented. The behavior of each actor in the set cannot be understood and analyzed without taking into consideration how the interacting environment is perceived by each actor and how it operates. Figure 2.9 changes into Figure 2.11.

image

Figure 2.11. Set of actors interacting between themselves via their environment

A virtual collaborative environment is an artefact of a sort to which different members of a community are given access and which acts as an intermediary between them, either to exchange information and knowledge about a technical field of common practice or to help solve problems and deliver results. In the current context, virtual collaborative environments are considered computer-based information systems shared by members of communities of practice, coming together virtually to exchange information and help members make pertinent decisions.

The universe each stakeholder is embedded in can be conceptualized as a space of tangible facts considered relevant for making a decision in the real-life environment. To describe how this space is operated, the theory of the coordinated management of meaning (CMM) can be called on. It is the most comprehensive rule theory of communication, developed by Pearce and Cronen (1980). CMM states that individuals in any social situation want to understand what is going on and apply rules to figure things out. In other words, the constitutive rules in CMM are rules of meaning and rules of decision for action.

Rules of meaning are used to decipher a message or an event via interpretative rules. Rules of decision for action are regulative rules used to process interpreted messages or events. Rules of meaning and rules of decision for action are always context-dependent. Often, text (message or action) and context form a loop, so that each is used to interpret the other (Cronen et al. 1982). The space of tangible facts for each business actor is a repository comprising interpreted messages from information systems and relevant events from the business actor's real-life environment, whether or not these events result from the business actor's usual courses of action.
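As a purely illustrative rendering (CMM is a communication theory, not an algorithm), the two kinds of rules can be pictured as context-dependent mappings: rules of meaning take a (message, context) pair to an interpretation, and rules of decision for action take an (interpretation, context) pair to an action. All the rules below are invented examples:

# Invented example of context-dependent CMM rules.
meaning_rules = {
    ("order late", "supplier context"): "delivery commitment at risk",
    ("order late", "customer context"): "complaint to be handled",
}
action_rules = {
    ("delivery commitment at risk", "supplier context"): "reschedule the plan",
    ("complaint to be handled", "customer context"): "apologize and propose a new date",
}

def interpret(message, context):                     # rule of meaning
    return meaning_rules.get((message, context), "unclassified event")

def decide(message, context):                        # rule of decision for action
    return action_rules.get((interpret(message, context), context), "escalate")

print(decide("order late", "customer context"))      # apologize and propose a new date

The same message ("order late") yields different interpretations and actions in the two contexts, which is the context-dependence the theory insists on.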

Let us delve into the background mechanism of the CMM rules of meaning. Kant uses the word "schema" when he argues, in his Critique of Pure Reason, that in order to apply non-empirical concepts to empirical facts, a mediating representation is necessary. He calls it a schema. In the ordinary case, there is, according to Kant, a homogeneity of a certain sort between concept and object. There is no similar homogeneity in the application of concepts to the intuitive analysis of messages and events. To apply a causal analysis to a sequence of messages and/or events, so that one of them is regarded as the cause and another as the effect, is more problematic, because the concept of causality involves necessity. But necessity is not always an element of our experience; the concept of causality only tells us that first there is something and then something else. For example, the schema for causality is temporal succession according to a rule; the schema for necessity is the existence of an object at all times. Figure 2.12 shows the conceptualized processing steps for diagnosing messages and events.

image

Figure 2.12. Conceptualized processing steps for diagnosing messages and events

Rules give a sense of which interpretations and decisions for action appear logical or appropriate in a given context. This sense is called logical force. But this logical force has a contextual dimension. The issue raised at this point is how to make fixed logic rules and changing contexts compatible: our answer is modal logic, whose main features were developed in a previous section.

2.9. References

Anderson, N.H. (1971). Integration Theory and Attitude Change, Psychological Review, vol. 78, pp. 171–206.

Arthur, W.B. (2013). Complexity Economics: A Different Framework for Economic Thought, Santa Fe Institute, Report 2013-04-012.

Ballewski, D. (1887). Die Kalkulation von Maschinenfabriken, Magdeburg.

Barnes, J. (1954). Class and Committees in a Norwegian Island Parish, Human Relations, no. 7, pp. 39–58.

Blackburn, P., de Rijke, M. and Venema, Y. (2001). Modal Logic, Cambridge University Press.

Bleicher, K. (1991). Organisation: Strategien – Strukturen – Kulturen, Gabler Verlag, Wiesbaden.

Briffaut, J.P. (2015), E-enabled Operations Management, ISTE, London and John Wiley & Sons, New York.

Radcliffe-Brown, A.R. (1952). Structure and Function in Primitive Society, Free Press, Glencoe, IL.

Bussmann, S. and Muller, J. (1992). A negotiation framework for co-operating agents, in Proc. CKBS-SIG, DAKE Centre, University of Keele.

Cassin, B. (ed.), (2004). Vocabulaire Européen des Philosophies, Seuil Le Robert.

Coase, R.H. (1937). The Nature of The Firm, Economica New series, vol. 4, no. 16, pp. 386–405.

Corkill, D. (1979). Hierarchical Planning in a Distributed Environment, in Proceedings of the Sixth IJCAI, Cambridge, MA; Morgan Kaufmann, San Mateo, California.

Cronen, V.E., Johnson, K.M. and Lannamann, J.W. (1982). Paradoxes, Double Binds and Reflexive Loops: An Alternative Theoretical Perspective, Family Process, vol. 20, pp. 91–112.

Cross, R. and Parker, A. (2004). The Hidden Power of Social Networks, Harvard Business School Press, Boston MA.

Curti, M. (1964). The Growth of American Thought, 3rd edition, Harper and Row, New York, Evanston and London.

de Finetti, B. (1937). Foresight: Its logical laws, its subjective sources, translated from French in Kyburg, H.E. Jr and Smokler, H.E., Studies in Subjective Probability, John Wiley, New York.

Dehaene, S. (2007). Le cerveau humain est-il une machine de Turing? In L’homme artificiel, J.P. Changeux (ed.), Odile Jacob Paris.

Durkheim, E. (1951). Suicide: A Study in Sociology, Free Press New York.

Ferber, J. (1999). Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence, Addison-Wesley.

Fishbein, M. and Ajzen, I. (1975). Belief, Attitude, Intention and Behavior, Addison–Wesley Reading Mass.

Georgeff, M. (1983), Communication and Interaction in Multi-Agent Planning, in Proceedings of the Third National Conference on Artificial Intelligence, Morgan-Kaufmann, San Mateo, California.

Georgeff, M. (1984). A Theory of Action for Multi-agent Planning, in Proceedings of the Fourth National Conference on Artificial Intelligence, Austin, Texas.

Giddens, A. (1979). Central Problems in Social Theory, University of California Press.

Greene, J.O. (1984). Evaluating Cognitive Explanation of Communicative Phenomena, Quarterly Journal of Speech, vol. 70, pp. 241–254.

Hume, D. (1978). A Treatise of Human Nature-I, I section 7, Nidditch (ed.) Oxford University Press.

Hume, D. (1975). Enquiries Concerning Human Understanding and Concerning the Principles of Morals, Nidditch (ed.), Clarendon Press, Oxford, p. 25.

Keynes, J.M. (1921). A Treatise on Probability, Macmillan, London.

Kripke, S. (1963a). Semantical Considerations on Modal Logic, Acta Philosophica Fennica, vol. 16, pp. 83–89.

Kripke, S. (1963b). Semantical Analysis of Modal Logic I: Normal Modal Propositional Calculi, Zeitschrift für mathematische Logik und Grundlagen der Mathematik, vol. 9, pp. 67–96.

Laplace, P.S. (1795). Lecture on probabilities delivered in 1795, included in A Philosophical Essay on Probabilities, translated by F.W. Truscott and F.L. Emory, John Wiley and Sons, New York (1902), Chapman and Hall, London.

Lewin, K. (1951). Field Theory in Social Science, Harper and Row, New York.

Luce, R.D. and Perry, A.D. (1949). A method of matrix analysis of group structure, Psychometrika, vol. 14, pp. 95–116.

Luhmann, N. (1984). Soziale Systeme: Grundriss einer allgemeinen Theorie, Suhrkamp, Frankfurt am Main.

Luhmann, N. (2002). Einführung in die Systemtheorie, Carl-Auer-Systeme Verlag.

Maturana, H. and Varela, F. (1980). Autopoiesis and Cognition: The Realization of the Living, Boston Studies in the Philosophy of Science, vol. 42, Kluwer Academic Publishers.

Monod, J., (1971). Chance and Necessity: an Essay on the Natural Philosophy of Modern Biology, Alfred A. Knopf Inc, New York.

Morin, E. (1977), La Méthode (1): La Nature de la Nature, Le Seuil, Paris.

Morin, E. (1991). La Méthode (4): Les Idées, leur Habitat, Le Seuil, Paris.

Pearce, W.B. and Cronen, V.E. (1980). Communication, Action and Meaning, Praeger, New York.

Ramsey, F.P. (1926). Truth and probability, in Foundations: Essays by F.P. Ramsey, Mellor, D.H. (ed.), Routledge & Kegan Paul, London.

Rao, A. and Georgeff, M. (1995). BDI Agents: From Theory to Practice, in Proceedings of the First International Conference on Multi-Agent Systems, San Francisco.

Rokeach, M. (1969). Beliefs, Attitudes and Values: A Theory of Organization and Change, Jossey-Bass, San Francisco.

Rokeach, M. (1973). The Nature of Human Values, Free Press, New York.

Savage, L.J. (1954). The Foundations of Statistics, John Wiley, New York.

Tapscott, D. and Tapscott, A. (2016). Blockchain Revolution, Portfolio Penguin.

Wasserman, S. and Faust, K. (1994). Social Network Analysis, Cambridge University Press.

Wilson, T.D., Gilbert, D.T. and Centerbar, D.B. (2003). Making sense: The cause of emotional evanescence, in Brocas, I. and Carrillo, J.D. (eds), The Psychology of Economic Decisions, vol. 1, pp. 209–233, Oxford University Press, New York.

Wyer, R.S. (1974). Cognitive Organization and Change, Erlbaum Hillsdale, NJ.
