Chapter 4

Reasoning in Multi-Granular Worlds

Abstract

The study of reasoning models draws its inspiration from the observation that reasoning is one of the main intelligent activities of human beings, and one of the ways by which human thinking moves from one idea to a related idea.

Reasoning over a multi-granular world is a principal ability in human problem solving. An uncertain reasoning model on an AND/OR graph (OR graph) is defined. In order for computers to reason over a multi-granular world, the homomorphism of quotient structures has to be guaranteed. We show that the efficiency of multi-granular reasoning depends to a great extent on the degree to which the homomorphism principle is satisfied. The truth-preserving and falsity-preserving principles of reasoning show that introducing structure into the quotient space model also plays an important role.

Since structures can also be defined by operations, the existence, construction, and approximation of quotient operations are discussed.

Qualitative reasoning, fuzzy reasoning, and three granular computing methods are also discussed.

Keywords

AND/OR graph; fuzzy reasoning; graph; homomorphism; qualitative reasoning; quotient operation; reasoning

Chapter Outline

4.1. Reasoning Models

The study of reasoning models draws its inspiration from the observation that reasoning is one of the main intelligent activities of human beings, and one of the ways by which human thinking moves from one idea to a related idea. There have long been attempts to endow computers with such abilities; reasoning that is modeled computationally is called automated reasoning. The study of automated reasoning is an area of artificial intelligence and computer science that helps us understand the characteristics of human reasoning. In this chapter, we deal with automated reasoning, or reasoning for short.
There are several kinds of reasoning. Deductive reasoning, or logical deduction, is reasoning from one or more general premises to reach a logically certain conclusion. Traditional logical reasoning based on first-order predicate calculus is a simple model of the human reasoning process. This model is easy to mechanize and is widely used in AI. For example, machine theorem proving based on the resolution principle is one of the main areas of AI. Traditional logical reasoning has also been used in some expert systems. Logical reasoning is mathematically rigorous and easy to implement on computers, yet it is far removed from everyday human reasoning. Inductive reasoning formulates general statements or propositions based on limited previous observations of objects. Abductive reasoning is a form of inductive reasoning: it infers a hypothesis as an explanation of a set of observations. Since there are generally several (or infinitely many) explanations satisfying the observations, some additional optimization condition is usually imposed in order to obtain the 'best' or 'simplest' one. In this chapter we mainly deal with deductive reasoning.
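To make mechanized deduction concrete, here is a minimal Python sketch of a single resolution step on propositional clauses; the clause encoding and the function name are illustrative assumptions, not taken from any particular theorem prover.

```python
# A minimal sketch of one resolution step in propositional logic.
# Clauses are frozensets of literals; a literal is a (name, polarity) pair.

def resolve(c1, c2):
    """Return all resolvents of two clauses (the list may be empty)."""
    resolvents = []
    for (name, pol) in c1:
        if (name, not pol) in c2:
            # Remove the complementary pair and merge the remainders.
            rest = (c1 - {(name, pol)}) | (c2 - {(name, not pol)})
            resolvents.append(frozenset(rest))
    return resolvents

# Example: from (P or Q) and (not P or R) we derive (Q or R).
c1 = frozenset({("P", True), ("Q", True)})
c2 = frozenset({("P", False), ("R", True)})
print(resolve(c1, c2))  # [frozenset({('Q', True), ('R', True)})]
```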
One of the major problems facing reasoning model designers is how to develop meaningful models to deal with the uncertainty associated with the complexity of the real world. In real decision making, one cannot have complete knowledge about the world, the causal relationships between events are not precisely known, and one's ability to deal with complex problems is limited. How should these uncertainties be taken care of by the model?
This chapter is an attempt to present a general reasoning framework for the manipulation and explanation of uncertainty in the multi-granular world presented in Chapters 1-3. A new uncertain reasoning model will also be established in order to reflect some characteristics of human reasoning.
Study of uncertain reasoning models blossomed in the mid-1970s and has enjoyed more than a decade of vigorous growth. Contributions have come from many directions, including Bayesian statistics, Zadeh's possibility theory, and Dempster-Shafer belief functions. We briefly introduce some well-known uncertain reasoning models as follows.
(1). Non-Monotonic Logic
Traditional logic is monotonic. Its monotonicity means that learning a new piece of knowledge cannot reduce the set of what is known. It is only applicable under certain and complete knowledge, and cannot handle various kinds of reasoning tasks such as default reasoning, abductive reasoning, and belief revision. In these cases, whenever new knowledge is gained, old assumptions or conclusions may be revised or even abandoned. So a non-monotonic logic model is needed (McDermott and Doyle, 1980).
In traditional logic, if $A$ and $B$ are theories, then we have

(1) Monotonicity. When

$$A \subseteq B, \quad Th(A) \subseteq Th(B)$$

(2) Idempotence.

$$Th(Th(A)) = Th(A)$$

where $Th(A)$ denotes the set of all theorems inferred from $A$.
Non-monotonic logic is generally an extension of traditional logic. In non-monotonic reasoning, adding a fact may lead to the withdrawal of a previously drawn conclusion. The well-known default reasoning (Reiter, 1980) and circumscription (McCarthy, 1980), etc., belong to the non-monotonic ones. For example, the aim of default reasoning is to handle reasoning under incomplete knowledge, and its exceptions, expediently. One of its formalisms is $\frac{A : MB}{B}$, where '$MB$' indicates that 'in the absence of information to the contrary, assume $B$', i.e., the default assumption. Certainly, default reasoning is non-monotonic, since when the default assumption is violated, the consequent is retracted. These theories have been applied to some artificial intelligence systems such as Truth Maintenance Systems and the knowledge representation language KRL, etc. (Bobrow and Winograd, 1977; Doyle, 1979).
In order to overcome the uncertainty produced by incomplete knowledge and limited processing resources, non-monotonic reasoning restricts reasoning to the existing knowledge and the system's processing capacity, so that the reasoning temporarily becomes certain. Whenever new knowledge is gained, some conclusions may be withdrawn. This is the basic idea underlying non-monotonic reasoning. The reasoning model embodies the phased, relative truth and the evolving nature of human cognition.
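As a toy illustration of this retraction behavior, the following Python sketch applies the default 'birds fly' and withdraws the conclusion once contrary knowledge is added; the predicates and encoding are hypothetical.

```python
# A toy illustration of non-monotonic (default) reasoning with the rule
# "Bird(x) : M Flies(x) / Flies(x)": assume Flies(x) unless it is
# contradicted by known facts.

def conclusions(facts):
    derived = set(facts)
    for x in {name for (pred, name) in facts if pred == "Bird"}:
        # Default: in the absence of information to the contrary, assume Flies(x).
        if ("NotFlies", x) not in facts:
            derived.add(("Flies", x))
    return derived

kb = {("Bird", "tweety")}
print(conclusions(kb))          # Flies(tweety) is assumed by default
kb.add(("NotFlies", "tweety"))  # new knowledge to the contrary arrives
print(conclusions(kb))          # the default conclusion is withdrawn
```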
(2). Reasoning with Uncertainty Measure
Another way to deal with uncertain reasoning is to measure the degree of uncertainty quantitatively. The Bayesian probabilistic model is the most commonly used method. In reasoning with an uncertainty measure, one needs to solve the evidence synthesis and propagation problems during the reasoning process. Probability is a mature theory and provides ready-made formulas and tools for these problems, so probabilistic measures have been widely used in uncertain reasoning. But the approach has some disadvantages, for example, the cognitive complexity that follows from dealing with large numbers of conditional and prior probabilities.
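As a minimal sketch of probabilistic evidence propagation, the following Python fragment updates a single hypothesis by Bayes' rule; the numbers are illustrative, and the sequential updating assumes the pieces of evidence are conditionally independent given H.

```python
# Bayesian updating of a single hypothesis H as evidence arrives.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """P(H|E) by Bayes' rule."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1.0 - prior_h)
    return p_e_given_h * prior_h / p_e

p = 0.01                    # prior P(H)
p = posterior(p, 0.9, 0.1)  # after observing E1
p = posterior(p, 0.8, 0.2)  # after observing E2 (assumed conditionally independent)
print(round(p, 4))          # 0.2667
```

Note that even this two-step chain requires four conditional probabilities to be elicited; this is the cognitive burden mentioned above.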
In order to overcome the above deficits, some new mathematical models of uncertainty have been proposed, such as Dempster-Shafer belief theory (Shafer, 1976) and Zadeh's possibility theory (Dubois and Prade, 2001). Taking D-S theory as an example, it first defines a basic probability assignment on the power set of a domain, and then defines belief and plausibility functions based on the basic assignment. The difference between the two functions is designed to describe the degree of ignorance about the domain. Therefore, the incompleteness of knowledge can be handled. The D-S combination rule is used for evidence synthesis in belief theory, but its computational complexity is very high when the domain is large. These theories are still mainly in the exploration stage and have not been widely applied.
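To make this concrete, here is a minimal Python sketch of Dempster's combination rule and a belief function over a two-element frame of discernment; the frame, masses, and names are illustrative assumptions. The double loop over focal subsets also hints at why the cost grows quickly with the size of the domain.

```python
# Dempster's rule of combination over a small frame of discernment.
# A basic probability assignment maps subsets (frozensets) to masses.

def combine(m1, m2):
    """Combine two basic probability assignments by Dempster's rule."""
    raw, conflict = {}, 0.0
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            inter = s1 & s2
            if inter:
                raw[inter] = raw.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2        # mass assigned to the empty set
    # Normalize by the total non-conflicting mass (1 - K).
    return {s: v / (1.0 - conflict) for s, v in raw.items()}

def belief(m, s):
    """Bel(S): total mass committed to subsets of S."""
    return sum(v for t, v in m.items() if t <= s)

theta = frozenset({"a", "b"})              # the frame of discernment
m1 = {frozenset({"a"}): 0.6, theta: 0.4}   # mass on theta models ignorance
m2 = {frozenset({"a"}): 0.3, frozenset({"b"}): 0.5, theta: 0.2}
m = combine(m1, m2)
print(m, belief(m, frozenset({"a"})))
```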
Both probability and belief theory can be used to describe the uncertainty of the truth of a proposition. But there is another kind of uncertainty: some propositions contain vague concepts that have no clear intension or extension. Possibility theory provides a model for this issue. Fuzzy subsets and possibility distributions are used to represent fuzzy propositions, and fuzzy operations and relations are used to handle evidence synthesis during reasoning. Fuzzy reasoning embodies a characteristic of human reasoning, namely the inexactness represented by fuzzy concepts and fuzzy causal relations. Some researchers are skeptical of fuzzy reasoning due to its lack of a strict logical foundation.
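As a small illustration, the following Python sketch performs fuzzy reasoning by the max-min composition rule for a single rule 'if X is A then Y is B'; the universes and membership values are illustrative assumptions.

```python
# Fuzzy reasoning via max-min composition: B' = A' o R,
# where the rule "if X is A then Y is B" is the relation R(x,y) = min(A(x), B(y)).

X = ["low", "medium", "high"]
Y = ["slow", "fast"]

A = {"low": 0.2, "medium": 0.7, "high": 1.0}         # fuzzy set A on X
B = {"slow": 0.1, "fast": 0.9}                       # fuzzy set B on Y
R = {(x, y): min(A[x], B[y]) for x in X for y in Y}  # fuzzy implication relation

# Given a slightly different input A', infer the fuzzy conclusion B'.
A_prime = {"low": 0.1, "medium": 0.9, "high": 0.6}
B_prime = {y: max(min(A_prime[x], R[(x, y)]) for x in X) for y in Y}
print(B_prime)  # {'slow': 0.1, 'fast': 0.7}
```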
(3). Qualitative Reasoning
When degrees of uncertainty are measured quantitatively by numerical values in reasoning, there are several difficulties, for example, how to obtain numerical values that ensure the reliability and stability of the reasoning. Some researchers dislike such a deterministic description of uncertainty and prefer a qualitative or symbolic description. Cohen (1985) proposed an endorsement theory, in which degrees of belief are described by justifiable (symbolic) reasons rather than numerical values. Forbus (1981, 1984) presented a qualitative reasoning theory. The reasoning creates non-numerical descriptions of physical systems and their behavior, preserving only the behavioral properties and qualitative distinctions of interest. Its aim is to develop representation and reasoning methods that enable computers to reason about the behavior of physical systems without precise quantitative information. It has become a topic of artificial intelligence and has been successfully applied to many areas, including autonomous spacecraft support, failure analysis and on-board diagnosis of vehicle systems, intelligent aids for human learning, etc. (Bredeweg and Struss, 2003).
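A minimal flavor of qualitative reasoning is the sign algebra over {+, 0, -, ?}; the Python sketch below, with illustrative names, shows both its use and its characteristic ambiguity when opposite influences are combined.

```python
# Qualitative reasoning with a sign algebra: values are only '+', '0',
# '-', or '?' (unknown), as in qualitative physics.

def q_add(a, b):
    if a == "0": return b
    if b == "0": return a
    return a if a == b else "?"   # opposite signs: result is ambiguous

def q_mul(a, b):
    if "0" in (a, b): return "0"
    if "?" in (a, b): return "?"
    return "+" if a == b else "-"

# Example: d(level)/dt has the sign of (inflow - outflow). With both
# flows positive, the sign is ambiguous without magnitudes.
inflow, outflow = "+", "+"
d_level = q_add(inflow, q_mul("-", outflow))
print(d_level)  # '?'
```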
(4). Empirical Methods
The simple-Bayes model includes two assumptions: (1) faults or hypotheses are mutually exclusive and exhaustive, and (2) pieces of evidence are conditionally independent given each fault or hypothesis. Unfortunately, these assumptions are often inaccurate in practice. Thus, in real AI expert systems, some empirical methods for uncertainty measurement are used, for example, the certainty factor (CF) in the MYCIN system (Shortliffe, 1976) and the likelihood reasoning model in the PROSPECTOR system (Duda, 1978). The CF model was created for the domain of MYCIN as a practical approach to uncertainty management in rule-based systems.
Certainty factor (CF) is defined as follows.

$$MB(H,E) = \begin{cases} 1, & \text{if } P(H) = 1 \\ \dfrac{\max\{P(H|E),\,P(H)\} - P(H)}{1 - P(H)}, & \text{otherwise} \end{cases}$$

$$MD(H,E) = \begin{cases} 1, & \text{if } P(H) = 0 \\ \dfrac{P(H) - \min\{P(H|E),\,P(H)\}}{P(H)}, & \text{otherwise} \end{cases}$$

$$CF(H,E) = MB(H,E) - MD(H,E)$$
where $E$ is evidence, $H$ is a hypothesis, $P(H)$ is the prior probability of $H$, $P(H|E)$ is the conditional probability of $H$ given $E$, $MB(H,E)$ is the increment of belief in $H$ given $E$, and $MD(H,E)$ is the increment of disbelief in $H$ given $E$. A $CF(H,E)$ between 0 and 1 means that a person's belief in $H$ given $E$ increases, while a $CF(H,E)$ between -1 and 0 means that the belief decreases. The evidence synthesis rules are as follows.

$$MB(H, E_1 \wedge E_2) = MB(H, E_1) + MB(H, E_2)\,[1 - MB(H, E_1)]$$

$$MD(H, E_1 \wedge E_2) = MD(H, E_1) + MD(H, E_2)\,[1 - MD(H, E_1)]$$
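The CF calculus above is straightforward to implement. The following Python sketch computes MB, MD, and CF from (illustrative) probabilities and combines two MB values with the parallel rule; the MD combination is symmetric.

```python
# A minimal implementation of the MYCIN-style certainty factor
# definitions given above.

def mb(p_h, p_h_given_e):
    """Increment of belief in H given E."""
    if p_h == 1.0:
        return 1.0
    return max(p_h_given_e - p_h, 0.0) / (1.0 - p_h)

def md(p_h, p_h_given_e):
    """Increment of disbelief in H given E."""
    if p_h == 0.0:
        return 1.0
    return max(p_h - p_h_given_e, 0.0) / p_h

def cf(p_h, p_h_given_e):
    return mb(p_h, p_h_given_e) - md(p_h, p_h_given_e)

def combine_mb(mb1, mb2):
    """MB(H, E1 and E2) = MB1 + MB2 * (1 - MB1); MD combines the same way."""
    return mb1 + mb2 * (1.0 - mb1)

print(round(cf(0.3, 0.8), 3))       # 0.714: evidence raises belief in H
print(combine_mb(0.5, 0.6))         # 0.8: two confirming pieces of evidence
```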
