2.2. The Estimation of Computational Complexity

The aim of this section is to estimate the computational complexity, based on the mathematical model presented in Chapter 1, when the multi-granular computing is used (Zhang and Zhang, 1990d, 1992).

2.2.1. The Assumptions

A problem space is assumed to be a finite set. Symbol image denotes the number of elements in X. Sometimes, we simply use X instead of image if no confusion is made.
If we solve the problem X directly, i.e., finding a goal in X, the computational complexity is assumed to be image. Then, the complexity will be changed to image by using the multi-granular computing. Now, the point is in what conditions that we may have the result image.
Before the discussion, we make the following assumption for image.

Hypothesis A

Assume that image only depends on the number of elements in X, and independent of its structure, or other attributes. Both domain and range of image are assumed to be image, i.e., a non-negative real number. Namely,
(1) image is a monotonically increasing function on image.
(2) image, we have image.
image is called a computational complexity function on X, or complexity function for short.
Now, we solve X by using the multi-granular computing strategy. That is, X is classified with respect to image. Its corresponding quotient space is denoted by image. Then the original problem on X is converted into the corresponding problem on image. Since some information will lose on image due to the abstraction, generally, we can't find the goal of X directly from its quotient space image. Instead, we can only have a useful clue to the final solution of X from image. Thus, in order to describe the relation between the results obtained from the quotient and its original spaces, we introduce a new function as follows.

Definition 2.1

Assume that image is a quotient space of X and f is a complexity function of X. Suppose that the complexity function of image is less than image and the number of elements which might contain the goal is estimated at g at most. Define a goal estimation function image as follows:

image

Assume that image is a monotonically increasing function and image.
We have a sequence image of quotient spaces with t levels, where image is a quotient space of image, image. After the multi-granular computing with t levels, the total complexity function is assumed to be image. We next estimate the image.

2.2.2. The Estimation of the Complexity Under Deterministic Models

Our goal is to estimate the asymptotic property of complexity function under the multi-granular computing. For simplicity, let image, i.e., X has image elements. The other variables are limited to the order of image, image, and image, where image.

Local Partition Method

Definition 2.2
Assume image. The complexity function image is called divergent if for any given b > 0, have image.
image Case
Suppose that image is a quotient space of X. We seek the goal of X from image. When solving the problem on image, the computational complexity only depends on the number of elements in image. After the amount image of computation, we find element image which might contain the goal. Due to image, there is only one such element, image. Again, assume that each equivalence class has the same number of elements.
Then, we seek the goal within the equivalence class image (image is an element in X1) in X.
When image, from image we have image. Thus, the total computational complexity for solving X by the multi-granular computing with two levels is

image(2.1)

Regarding image as a set of X, if set image is too large, then image is further partitioned. Assume that image is a quotient set of set image and image The total complexity for solving X by the multi-granular computing with three levels is

image(2.2)

where, image is an element of image which might contain the goal.
From induction, the total complexity for solving X by the multi-granular computing with t levels is

image(2.3)

where

image

If in each level, its elements are classified into c equivalence classes without exception, where image is an integer, i.e., image. Since image, we have image, where image. Then, we have image, image. Substituting image into (2.3), we have image, i.e., image.
We have the following proposition.

Proposition 2.1

If a unique element which contains the goal can be found at each granular level, i.e., image, there exists a hierarchical partition method for X such that X can be solved in a linear time (image), in spite of the form of complexity function f(X) (f(X) might be divergent).

Example 2.4

We are given image coins, one of which is known to be lighter than the rest. Using a two-pan scale, we must find the counterfeit coin as soon as possible.
To solve the problem, we may weigh a pair of coins (one in each pan) at a time. That is, we find the counterfeit one from image coins directly. The mean computational complexity, the average number of weighing, is image. By the multi-granular computing, image coins may first be divided into three groups equally. Then a quotient space image is gained. Obviously, we can find which group will contain the counterfeit coin in one test. That is, by weighing A and B, the suspect will be in C in case the scale balances, the suspect will be in either A or B (the lighter one) in case the scale tips. In our terminology, in the first weighting, image and image.
The same procedure can be used for the suspect class, the outcome of the first weighting. The same process continues until the counterfeit coin is found. From (2.3) by letting image we have image. That is, the counterfeit coin can be found in n tests.

Proposition 2.2

If the complexity function of X is image, then
(1) when image, image,
(2) when image, image,
(3) when image, image.
From Proposition 2.1, the proposition is obtained directly.
The proposition indicates that as long as the complexity f(X) is greater than a linearly growing function, it can be reduced by the multi-granular computing:
image case
Quotient space image is obtained after partitioning X. Through the amount image of computation, it is found that image elements in image might contain the goal of X. These elements are assumed to be image. The total complexity for solving X by the multi-granular computing with two levels is

image

Each image is partitioned. The corresponding quotient space is denoted by image, image. The complexity for solving X by the multi-granular computing with three levels is:

image

From induction, the total complexity for solving X by the multi-granular computing with t levels is:

image(2.4)

where image is the number of elements in the i-th granular level (abstraction level), image is the goal estimation function on image.
From (2.4), we estimate the order of image. If image, we have
(1) If image is divergent, for any t, then image must be divergent;
(2) If image, letting image (constant), then image;
(3) If image, letting image, then image;
(4) If image, letting image, then image is divergent.
We only give the proof of case (3) below.
From assumption that image and image, substituting the above values into Formula (2.4), we have

image(2.5)

Letting image and substituting into Formula (2.5), we obtain

image

From the results above, we have the following proposition.

Proposition 2.3

If image, based on the above partition model, we cannot reduce the asymptotic property of the complexity for solving X by the multi-granular computing paradigm. The result is disappointing. It is just the partitioning strategy we adopted that leads to the result. Based on the above strategy, in the first granular level if elements image are suspected of containing the goal, then each image, is further partitioned, respectively. Assume that each image still has m suspects. The process goes on, after n granular levels (abstraction levels), we will have image suspects, so the divergence happens. This partitioning method is called local partition.
In order to overcome the shortage of local partition, the second alternative can be adopted. When image are found to be suspects, all image, are merged into one equivalence class a. Then, we partition the whole merged class a rather than each image. This strategy is called global partition.

Global Partition Method

Assume that in a granular level there exist image elements that are suspected of containing the goal, where x is the total number of elements in that level. That is, image.
Suppose that X is partitioned with respect to R. Its corresponding quotient space is image. Letting image, we have the complexity function:

image

Here each equivalence class in image is supposed to have the same number of elements of X.
Now, by partitioning image elements, i.e., all suspects in X1, we have a quotient space X2. Letting image, we have

image

From induction, it follows that

image(2.6)

When image, the goal is found.
Suppose that image. By letting image, we have image, i.e., image.
Generally, letting image, we have image.
Since image, we have image.
Assume image (constant), from Formula (2.6) it follows that image.
We have the following proposition.

Proposition 2.4

If we can have image, by the global partition method, when image, then the complexity for solving X is image regardless of whether the original complexity function image is divergent or not.
The proposition shows that different partition strategies may have different results. So long as in each granular level there is enough information such that image, then when image, the linear complexity can be reached by the global partition method. In the following example, since image, from Proposition 2.4, it is known that so long as image, we can get the linear complexity.

Example 2.5

We are given image coins, one of which is known to be heavier or lighter than the rest. Using a two-pan scale, we must find the counterfeit coin and determine whether it is light or heavy as soon as possible.
First, coins are divided into three groups (A, B, C) equally. By weighing A and B (one in each pan), the suspect will be in C in case the scale balances, the suspect will be in A or B in case the scale tips.
In terms of our quotient space model, we have

image

From Proposition 2.4, so long as the number t of granular levels is greater than 2.47n, the counterfeit coin can be found.
In fact, it can be proved that the counterfeit coin may be found from image coins in n tests.
It is shown that the computational complexity by the multi-granular computing also depends on the way in which the domain X is partitioned.

2.2.3. The Estimation of the Complexity Under Probabilistic Models

In the previous section, two partition methods are given. One is to partition each class that might contain the goal. It is called the local partition method. The method requires that only the unique element that might contain the goal is identified at each partition so that the computational complexity can be reduced. The other one is to partition the whole set of classes that might contain the goal. This is called the global partition method. In order to reduce the complexity by the method, image must be satisfied at each partition. These results are unsatisfactory. So we need a more reasonable model, i.e., probabilistic models.
From Definition 2.1, it's known that goal estimation function image is:

image

If image, it means that the number of elements in X1 which might contain the goal is g at most. Generally, it is not the case that each element contains the goal with the same probability. Thus, it can reasonably be described by probabilistic models.

Definition 2.3

Assume that image is a probabilistic distribution function that exactly x elements of set A might contain the goal. Let image, where image. Then, image is a goal estimation function under the statistical model.

Example 2.6

We are given image coins, one of which is known to be heavier or lighter than the rest. Using a two-pan scale, we divide the coins into three groups equally: A, B, C. First, group A is compared with B. A or B will contain the counterfeit coin in case the scale tips and C will contain the counterfeit one in case the scale balances. In terms of distribution function image, we have:

image

Here, image coins are assumed to be divided into three groups at random. Thus, the counterfeit coin falls in each group with same probability image. We have:

image

where image.
image is the result we have in only one weighting. The number of weightings can be increased. For example, when the scale tips in the first weighting, then A and C (one in each pan) are weighed. The counterfeit coin is in B in case the scale balances and the counterfeit coin is in A in case the scale tips. That is, image.
In other words, so long as increasing a certain amount of computation on image, the value of goal function image can always be reduced. Taking this into account, we have the following definition.

Definition 2.4

Assume that image is a quotient space of X. image is a computational complexity function of X. Besides the amount image of computation on image, an additional amount y of computation is added. Then, the goal distribution function is changed to image. Let

image

where image.
image is called a probabilistic model for estimating the computational complexity of X with the multi-granular computing.
Certainly, the computational complexity above depends on the forms of image and image.
Now, we only discuss two specific kinds of image, i.e., image and image.

image

We know that when y increases, then image decreases. From Formula (2.4), image decreases as well. Certainly, the reduction of computational complexity is at the cost of additional computation y. But the point is how to choose an appropriate y and t such that the total computational complexity will be decreased. The following proposition gives a complete answer.

Proposition 2.5

Assume image. If the number of granular levels is image, the computational complexity is the minimal by using the multi-granular computing with t levels. And its asymptotic complexity is image, where the number of elements in X is assumed to be image.

Proof:

Assume that the local partition approach is adopted and image.
Let image, where image is the goal estimation function in the i-th granular level (abstraction level), when the additional computation is image.
Proposition 2.4 shows that if image, by local partition method the computational complexity cannot be reduced. In order to reduce the complexity, it is demanded that image, i.e., image.
Now, let

image(2.7)

Substituting (2.7) into (2.4), we have image.
On the other hand, the total amount of additional computation is:

image

The total computational complexity is image.

image

We'll show below that if t is not equal to image, the asymptotic complexity of image will be greater than or equal to image. There are three cases.
(1) If image, letting image, then image is divergent.
Assume that image is a quotient space at the i-th granular level. Let image. Due to the local partition, we have image.
By letting each image be d, obtain image, i.e., image.
From image, we have image.
Again, from image, we have image.
Substituting the above result into Formula (2.4), it follows that image is divergent.
(2) If image, when image, letting image, the order of image will not be less than image.
Similar to the inference in (1), we have image. By letting image, we obtain image. Substituting the result into Formula (2.4), we obtain:

image

where image
Thus,

image

On the other hand, the additional amount image of computation is

image

Finally,

image(2.8)

Letting the order of Ft(X) in Formula (2.8) be less than image, it's known that the necessary and sufficient condition is:

image

This is in contradiction with image. Thus, the order of image is not less than image.
(3) If timage , the order of image is image at least, or greater than image.
From Formula (2.4), it's known that image. Thus, image. Since image, the order of image is greater than image. Still more, the order of Ft(X) is greater than image.
From the results of (1), (2) and (3), we conclude that the order of complexity is minimal by using the multi-granular computing with image levels.

image

Proposition 2.6

Assume that image and image. If image, then we have:
(i) when image, image is divergent;
(ii) when image, if image, then image;
(iii) when image, if image,image is minimal and its order is image.

Proof:

When image, from Proposition 2.5 (1), it's known that image is divergent.
When image, letting image, from Formula (2.4), we have:

image

The additional amount image of the computation is

image

Thus,

image

Since

image

From the result above, it's known that the order of image is minimal, when b=l, by using the multi-granular computing with image levels.

Proposition 2.7

Assume that image and image. Let image. When image, where image, the order of image is minimal and equals image. It is less than the order of f(X).

Proof:

If image, from Proposition 2.6, we have image.
If the additional amount image of computation satisfies

image

Where, image, finally, we have

image(2.9)

In order to have the minimal order of image

image

Substituting b by image in Formula (2.9), we have image.
Due to image, we have

image

Here, image, let image.
When image, the order of image is minimal at image and equal to image.
When image, letting image, the order of its corresponding image is minimal and equal to image. Since image, image. So the order image is less than image.
Finally, when image, letting image, the order of image is minimal and equal to image.

Proposition 2.8

If image image and image, by letting image then image is divergent.

Proof:

From image, we have

image

image

From image, we obtain B is divergent so that image is divergent.

Proposition 2.9

If image and image, then
(1) when image, by letting image, the order of image is less than that of image.
(2) when image, by letting image, the order of image is not less than that of image.

Proof:

From image, we have

image

Taking the logarithm on both sides of the above formula, we have

image

Thus, image
Substituting into Formula (2.4), we have

image

The order of the additional computation is image since c=1, hence

image

For the order of image to reach the minimal,

image

For the order of image to be less than that of image,

image

Since image, have image. When image, the order of image is less than that of image. When image, the order of image is not less than that of image.
In summary, although the complexity estimation models we use are rather simple, the results of our analysis based on these models can completely answer the questions we raised at the beginning of this chapter. That is, in what conditions the multi-granular computing can reduce the computational complexity; under different situations, in what conditions the computational complexity can reach the minimal. These results provide a fundamental basis for multi-granular computing.
The results we have already had are summarized in Table 2.1, but only dealing with image.
Applications
From the conclusions we made in Propositions 2.5–2.9, it is known that there indeed exist some proper multi-granular computing methods which can make the order of complexity Ft(X) less than that of the original one image. To the end, the condition needed is that the corresponding image when the additional amount image of computation at each granular level increases. Namely, image should grow at a negatively polynomial rate with image, i.e., image. The problem is if there exists such a relation between image and image objectively. We next discuss the problem.

Table 2.1

The order of Computational Complexity at Different Cases

imageimageimageimageimageCompared to f(⋅)
imageimageimageO(n)image<
imageimageimageb≥1image
image
b=1image
0<b<1
image
image<
imagea > 2yh
h<0
imageimageimage<
1 a 2

image

Where b∗ is the optimal value of b.

Assume that image is the goal estimation function under the nondeterministic model with additional computation image. In order for image, if image, then image (constant) and the corresponding distribution function image at image has to be arbitrarily small. This is equivalent to the goal falling into a certain element with probability one when the additional computation image gradually increases. In statistical inference, it is known that so long as the sample size (or the stopping variable in the Wald sequential probabilistic ratio test) gradually increases the goal can always be detected within a certain element with probability one. Therefore, if regarding the sample size as the additional amount of computation, there exist several statistical inference methods which have the relation image, where the meaning of image is the same as that of the image in the formula image. Then, from Proposition 2.5, it shows that if a proper statistical inference method is applied to multi-granular computing methods, the order of complexity image can reach image regardless of the form of the original complexity f(X).
Tree search is a typical example of multi-granular computing as mentioned in Chapter 1, where image and image, m is the branching factor of the tree. It is expected that tree search will benefit from introducing statistical inference methods to search. More details will be presented in Chapter 6.

2.2.4. Successive Operation in Multi-Granular Computing

From the probabilistic model, it shows that some elements in a certain granular level which contain the goal can only be found with certain probability. It implies that the elements we have already found might not contain any goal. Thus, the mistakes should be corrected as soon as they are discovered. By tracing back, we can go back to some coarse granular level and begin a new round of computation. In our terminology, it is called successive operation. In Chapter 6, we'll show that under certain conditions, by successive operation the goal can be found with probability one but the order of complexity does not increase at all.
..................Content has been hidden....................

You can't read the all page of ebook, please click here login for view all page.
Reset
3.16.218.221