
Centroid Type Reduction

The centroid type reducer combines the type-2 rule output sets using the t-conorm (union) and then computes the centroid of the resulting combined type-2 set, which is a type-1 set.
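For the special case of an *interval* type-2 set, the centroid can be computed with the iterative Karnik-Mendel procedure. The sketch below assumes the combined output set has been sampled at domain points `x` with lower and upper membership arrays `lmf` and `umf` (these names are illustrative, not from the text):

```python
def km_endpoint(x, lmf, umf, left=True):
    """One endpoint of the type-reduced interval [c_l, c_r] (Karnik-Mendel)."""
    # Start from the midpoints of the membership intervals.
    theta = [(l + u) / 2.0 for l, u in zip(lmf, umf)]
    c = sum(xi * t for xi, t in zip(x, theta)) / sum(theta)
    while True:
        # Points at or below the current centroid get one bound, the rest the
        # other: upper weights on small x minimize c_l, and vice versa for c_r.
        for i in range(len(x)):
            lo, hi = (umf[i], lmf[i]) if left else (lmf[i], umf[i])
            theta[i] = lo if x[i] <= c else hi
        c_new = sum(xi * t for xi, t in zip(x, theta)) / sum(theta)
        if abs(c_new - c) < 1e-9:       # converged
            return c_new
        c = c_new

def centroid_type_reduce(x, lmf, umf):
    """Type-reduced set of an interval type-2 set: the interval [c_l, c_r]."""
    return (km_endpoint(x, lmf, umf, left=True),
            km_endpoint(x, lmf, umf, left=False))
```

The returned interval is the type-reduced set; its width reflects the uncertainty in the original type-2 set.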

Height Type Reduction

The height type reducer replaces each rule output set with a singleton at the point of maximum membership in that output set, and then computes the centroid of the type-2 set consisting of these singletons.
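For an interval type-2 FLS with a small number of rules, this can be sketched directly: each rule's sampled output set contributes one singleton (its peak), weighted by the membership interval at that peak, and the endpoints of the type-reduced interval can be found by brute-force enumeration over the interval endpoints (illustrative only; in practice the Karnik-Mendel procedure is used instead of enumeration):

```python
from itertools import product

def height_type_reduce(rule_sets):
    """rule_sets: list of (x, lmf, umf) sampled output sets, one per rule."""
    points, weights = [], []
    for x, lmf, umf in rule_sets:
        i = max(range(len(x)), key=lambda j: umf[j])  # peak of the output set
        points.append(x[i])
        weights.append((lmf[i], umf[i]))              # interval weight there
    # The weighted average is monotone in each weight, so its extrema occur
    # at interval endpoints; enumerate all 2**R corner combinations.
    centroids = []
    for w in product(*weights):
        s = sum(w)
        if s:
            centroids.append(sum(p * wi for p, wi in zip(points, w)) / s)
    return min(centroids), max(centroids)
```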

Center-of-Sets Type Reduction

The center-of-sets type reducer replaces each rule consequent set with its centroid (if the consequent set is type-2, its centroid is itself a type-1 set) and then computes a weighted average of these centroids, where the weights are the rule firing degrees.
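In the type-1 special case this reduces to a crisp weighted average, which the following sketch shows (names are illustrative; for type-2 consequents the centroids become type-1 sets and the weighted average is computed with the Karnik-Mendel procedure instead):

```python
def centroid(x, mu):
    """Crisp centroid of a sampled type-1 set."""
    return sum(xi * m for xi, m in zip(x, mu)) / sum(mu)

def center_of_sets(consequents, firing):
    """consequents: list of (x, mu) sampled sets; firing: rule firing degrees."""
    c = [centroid(x, mu) for x, mu in consequents]
    # Firing-degree-weighted average of the consequent centroids.
    return sum(ci * fi for ci, fi in zip(c, firing)) / sum(firing)
```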

Computational Complexity of Type Reduction

Type reduction was proposed by Karnik and Mendel. The type-reduction methods are "extended versions" of the type-1 defuzzification methods, obtained by applying the Extension Principle, and the operation is called type reduction because it takes us from the type-2 output set of the FLS to a type-1 set. We can then defuzzify this type-reduced set to obtain a single crisp number. The type-reduced set can be more useful than a single crisp number, because it conveys a measure of the uncertainty that has flowed through the type-2 FLS. There are many kinds of type reduction, such as centroid, center-of-sets, height, and modified height.

We have introduced several type-reduction methods. Unfortunately, they have high computational complexity. Fortunately, however, a type-2 FLS can be thought of as a collection of a large number of embedded type-1 FLSs, and the type-reduced set is the aggregate of the outputs of all these embedded type-1 FLSs. The operations for each embedded type-1 FLS can be processed in parallel, so the computational complexity of each parallel branch is the same as that of a type-1 FLS apart from the final defuzzification operation. The number of parallel processors depends on the number of embedded type-1 FLSs required by a particular type-reduction method, which in turn depends on the underlying membership functions and on the sampling rate of the output domain. The complexity can be calculated in terms of the numbers of multiplications, additions, and divisions; the calculation also accounts for the number of t-norm operations (Kawaguchi et al., 1993).
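The combinatorial cost of the embedded-set view can be made concrete: for an interval type-2 set sampled at N output points, even restricting each point to its lower or upper membership bound yields 2**N corner embedded type-1 sets, each of which must be defuzzified. The sketch below (illustrative names, feasible only for tiny N) makes that blowup explicit:

```python
from itertools import product

def exhaustive_type_reduce(x, lmf, umf):
    """Brute-force type-reduced set: centroids of all corner embedded sets."""
    outs = set()
    # Each embedded type-1 set picks one membership value per sample point;
    # here we enumerate only the 2**len(x) lower/upper corner choices.
    for mu in product(*zip(lmf, umf)):
        s = sum(mu)
        if s:
            outs.add(round(sum(xi * m for xi, m in zip(x, mu)) / s, 6))
    return sorted(outs)
```

This exponential growth is why exhaustive enumeration is impractical and why parallel processing and iterative schemes such as Karnik-Mendel matter in practice.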
