
PROPOSED METHODOLOGY

Figure 6.2 depicts the overall architecture of the processes involved in the SMO-KELM model, which incorporates data collection, data compression, disease diagnosis, and an alert system. At the initial stage, IoT devices fixed to the patient's body collect the patient data, which is compressed using the Deflate method. Once the patient data has been compressed, it is transmitted to the cloud server via wireless technologies. The cloud server performs the decompression process and reconstructs the data. Then, the SMO-KELM model is executed to detect the presence of disease. Finally, an alarm is raised in real time when a disease is detected, alerting doctors, the ambulance service, and hospitals.
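As an illustrative end-to-end sketch of this pipeline in Python (zlib implements Deflate; diagnose_with_smo_kelm is a hypothetical placeholder for the classifier described below, not the chapter's code):

```python
import zlib  # Python's zlib implements the Deflate algorithm

def sensor_node_send(readings: bytes) -> bytes:
    """IoT node side: compress the collected readings with Deflate."""
    return zlib.compress(readings)

def cloud_server_receive(payload: bytes) -> str:
    """Cloud side: decompress, diagnose, and raise an alert if needed."""
    readings = zlib.decompress(payload)          # reconstruct the data
    diseased = diagnose_with_smo_kelm(readings)  # hypothetical classifier call
    return "ALERT: notify doctor/ambulance/hospital" if diseased else "normal"

def diagnose_with_smo_kelm(readings: bytes) -> bool:
    """Placeholder for the SMO-KELM diagnosis model described below."""
    return False
```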

6.2.1 Deflate-Based Compression Model

Deflate is a lossless compression technique that has been widely used over a long period because of its high speed and good compression effectiveness [16]. Several formats, such as GZIP, ZLIB, ZIP, and PKZIP, depend on the

FIGURE 6.2 Block diagram of proposed model.

Deflate compression technique. These formats combine the LZ77 method and Huffman coding: the original data first undergoes compression using the LZ77 technique, and the result is then further reduced by Huffman coding.

6.2.1.1 LZ77 Encoding

The LZ77 technique is a dictionary-based compression technique. It builds a dictionary of previously seen strings. For each input string, the dictionary is searched. If a match is found, the string being processed is replaced by the distance and length of the matching string recorded in the dictionary. If the string is not matched, its first occurrence is left unchanged and recorded in the dictionary. The encoder thus emits a group of data containing the relative position of the match (corresponding distance), the length of the matched string (corresponding length), and a flag denoting how the chunk of information is encoded (marker). In the following, the corresponding length and distance are denoted simply as length and distance.
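To make the token format concrete, here is a minimal, illustrative LZ77 encoder/decoder in Python (a naive window search; real Deflate uses hash chains, a 32 KB window, and match lengths from 3 to 258):

```python
def lz77_encode(data: bytes, window: int = 255, max_len: int = 255):
    """Minimal LZ77: emit (marker, ...) tuples.
    marker 1 -> (1, distance, length) back-reference
    marker 0 -> (0, literal byte)"""
    i, out = 0, []
    while i < len(data):
        best_dist, best_len = 0, 0
        for j in range(max(0, i - window), i):    # search the sliding window
            k = 0
            while (i + k < len(data) and k < max_len
                   and data[j + k] == data[i + k]):
                k += 1
            if k > best_len:
                best_dist, best_len = i - j, k
        if best_len >= 3:                         # Deflate encodes matches of length >= 3
            out.append((1, best_dist, best_len))
            i += best_len
        else:
            out.append((0, data[i]))
            i += 1
    return out

def lz77_decode(tokens) -> bytes:
    out = bytearray()
    for tok in tokens:
        if tok[0] == 1:                           # back-reference
            _, dist, length = tok
            for _ in range(length):               # byte-wise copy handles overlaps
                out.append(out[-dist])
        else:                                     # literal
            out.append(tok[1])
    return bytes(out)

assert lz77_decode(lz77_encode(b"abcabcabcd")) == b"abcabcabcd"
```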

6.2.1.2 Huffman Coding

Huffman coding is a type of entropy coding that reduces data by assigning shorter bit sequences to frequently occurring symbols. It comprises two components: Huffman code generation and Huffman encoding. The Huffman tree (HT) is built from the frequencies of the symbols appearing in the data to be compressed. At the initial stage, the two symbols with the lowest frequencies are selected; two leaf nodes are created for these symbols and combined into a new node. This construction step is repeated until all symbols are covered. The key property of the HT is that it assigns shorter codes to frequently repeated symbols and longer codes to less frequent symbols. This computation produces a Huffman code for every symbol in the LZ77 encoding stream.
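A short illustrative sketch of Huffman code generation (repeatedly merging the two lowest-frequency nodes, so frequent symbols end up with shorter codes):

```python
import heapq
from collections import Counter

def huffman_codes(stream):
    """Build a prefix-code table {symbol: bitstring} from symbol frequencies."""
    freq = Counter(stream)
    # heap entries: (frequency, tie_breaker, {symbol: code_so_far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                            # degenerate single-symbol case
        return {s: "0" for s in heap[0][2]}
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)           # two lowest-frequency nodes...
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))  # ...merge into a new node
        tie += 1
    return heap[0][2]
```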

The encoding step then reduces the LZ77 encoding stream using the Huffman code tables built in the first step. During this step, every symbol in the LZ77 encoding stream is replaced by its corresponding Huffman code. The LZ77 encoding stream consists of variable-length data elements, and Huffman encoding replaces these elements with variable-length codes. Thus, the encoding procedure scans the stream components, substitutes the Huffman codes, and appends them to the resultant data stream.
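Continuing the sketches above, the encoding step reduces to a table lookup per symbol (the LZ77 tokens are tuples, which are hashable and can serve as symbols directly); note this is a simplification, since real Deflate uses separate code tables for the literal/length and distance alphabets:

```python
tokens = lz77_encode(b"abcabcabcd")            # from the LZ77 sketch above
table = huffman_codes(tokens)                  # code table over the token symbols
bitstream = "".join(table[t] for t in tokens)  # variable-length code per token
```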

6.2.2 SMO-KELM-Based Diagnosis Model

The back propagation (BP) learning technique is a stochastic gradient least-mean-square technique. The gradient at each iteration is significantly affected by noise interference in the samples, so it is necessary to use a batch model that averages the gradients over several samples to obtain a reliable estimate of the gradient. However, for a large number of training samples, these techniques greatly increase the computational cost of each iteration. Moreover, the averaging ignores the variation among individual training samples, thus diminishing the sensitivity of learning.

KELM is an enhanced technique that combines a kernel function with the original Extreme Learning Machine (ELM). ELM yields a network with good generalization performance, greatly increases the learning speed of Neural Networks (NNs), and avoids several issues of gradient descent (GD) training techniques typified by the back propagation neural network (BPNN), such as the tendency to be trapped in local optima, large numbers of iterations, and so on. KELM is a multi-dimensional ELM model in which the kernel function non-linearly maps linearly non-separable patterns into a high-dimensional feature space where they become linearly separable, attaining a higher rate of accuracy.

ELM is a training method for single-hidden-layer feed-forward NNs (SLFNs). The SLFN model is determined as follows:

$$f(x) = h(x)\beta = H\beta \quad (6.1)$$

where x denotes a sample; f(x) implies the output of the NN, which is a class vector in the classification model; h(x) or H represents the hidden-layer feature mapping matrix; and β is defined as the weight of the hidden layer. For the ELM approach,

$$\beta = H^{T}\left(\frac{I}{C} + HH^{T}\right)^{-1}T \quad (6.2)$$

where T represents the matrix of class label vectors of the training samples, I implies a unit matrix, and C denotes the regularization parameter.

As the feature mapping h(x) of the hidden layer is unknown, the KELM kernel matrix is computed as follows:

$$\Omega_{ELM} = HH^{T}, \quad \Omega_{i,j} = h(x_i)\,h(x_j)^{T} = K(x_i, x_j) \quad (6.3)$$

Based on equations (6.2) and (6.3), equation (6.1) is transformed as follows:

$$f(x) = \left[K(x, x_1), \ldots, K(x, x_N)\right]\left(\frac{I}{C} + \Omega_{ELM}\right)^{-1}T \quad (6.4)$$

Under the application of the Radial Basis Function (RBF), the Gaussian kernel function is calculated as follows:

$$K(x_i, x_j) = \exp\left(-\frac{\|x_i - x_j\|^{2}}{\gamma}\right) \quad (6.5)$$

Then, the regularization parameter C and the kernel function parameter γ are variables that require proper tuning. The values of C and γ are vital factors that affect the performance of KELM classification [17]. Hence, the parameters of KELM are optimized using the SMO technique.
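A minimal NumPy sketch of equations (6.2) to (6.5) follows; it assumes T is a one-hot class matrix, and it is an illustrative implementation rather than the chapter's exact code:

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """Gaussian/RBF kernel per equation (6.5): K = exp(-||a - b||^2 / gamma)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / gamma)

def kelm_train(X, T, C, gamma):
    """Solve (I/C + Omega)^{-1} T per equations (6.3)-(6.4); returns alpha."""
    omega = rbf_kernel(X, X, gamma)
    return np.linalg.solve(np.eye(len(X)) / C + omega, T)

def kelm_predict(X_train, alpha, x_new, gamma):
    """f(x) = [K(x, x_1), ..., K(x, x_N)] alpha per equation (6.4)."""
    return rbf_kernel(x_new, X_train, gamma) @ alpha
```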

The SMO technique is a metaheuristic based on the social behavior of spider monkeys, adopting the fission-fusion swarm intelligence (SI) approach to foraging. Spider monkeys generally live in a swarm. A leader is appointed to organize the search for food. A female, acting as the leader, directs the swarm and splits it into smaller groups when food is scarce. The group size depends on the food availability in a given region. The SMO-based model thus satisfies the following essential properties of SI:

  • Labor division: a spider monkey divides the search process by forming small groups.
  • Self-organization: the group size is chosen to match the food availability.

This intelligent foraging behavior assists in making collective decisions. The foraging behavior is explained in the following stages:

  1. The swarm initiates the food search.
  2. The distance from the food sources is calculated.
  3. The swarm members update their positions to reduce the distance to the food.
  4. The distance from the food source is then computed again.

Here, a local leader helps find an optimal location within its sub-swarm. These locations change over time based on food availability. If a local leader fails to update its position within a fixed number of iterations, the members of the subgroup re-explore freely in various directions. Likewise, the global leader finds a good position for all members of the swarm, with positions again updated based on food availability. If the global leader stagnates, it divides the swarm into sub-swarms of smaller size. The above stages are iterated until the termination criterion is reached. Hence, the SMO-based approach is categorized as a nature-inspired model based on SI.

6.2.2.1 Key Steps of SMO Algorithm Implementation

SMO is a population-oriented method that adopts a trial-and-error-based collaborative iterative process with six phases: the local leader, global leader, global leader learning, local leader learning, local leader decision, and global leader decision phases [18]. The iterative process of SMO execution is explained in the following sections.

6.2.2.1.1 Initializing the Population SMO distributes the population of P spider monkeys SMp uniformly, where p = 1, 2, ..., P and SMp indicates the pth monkey of the population. Each monkey is an M-dimensional vector, where M is the total number of variables in the problem field. Each SMp corresponds to one feasible solution of the given problem. SMO initializes every SMp utilizing equation (6.6):

$$SM_{pq} = SM_{\min q} + UR(0,1) \times \left(SM_{\max q} - SM_{\min q}\right) \quad (6.6)$$

where SMpq defines the qth dimension of the pth SM; SMmin,q and SMmax,q are the minimum and maximum bounds, respectively, of SMp in the qth dimension, with q = 1, 2, ..., M; and UR(0, 1) is a random value distributed uniformly within [0, 1].
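As a brief sketch of equation (6.6) (the population size and bounds below are illustrative, e.g. for the two KELM parameters C and γ):

```python
import numpy as np

def initialize_population(P, M, sm_min, sm_max, rng=np.random.default_rng()):
    """Equation (6.6): SM_pq = SM_min,q + UR(0,1) * (SM_max,q - SM_min,q)."""
    ur = rng.random((P, M))                      # UR(0,1), uniform in [0, 1)
    return sm_min + ur * (sm_max - sm_min)

# e.g. P=20 monkeys over M=2 dimensions (C, gamma), assumed search ranges
pop = initialize_population(20, 2, np.array([1e-2, 1e-2]), np.array([1e3, 1e3]))
```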

6.2.2.1.2 Local Leader Phase (LLP) In the LLP, each SM modifies its current position using the experience of the local leader as well as the local group members. The SM position is updated to the new position when the new position has a fitness value better than that of the existing one. The position update for the pth SM of the kth local group is determined as:

$$SM_{new\,pq} = SM_{pq} + UR(0,1) \times \left(LL_{kq} - SM_{pq}\right) + UR(-1,1) \times \left(SM_{rq} - SM_{pq}\right) \quad (6.7)$$

where LLkq denotes the position of the local leader of the kth group in the qth dimension, SMrq is the qth dimension of a randomly chosen member r of the same group, and UR(-1, 1) is a uniform random value in [-1, 1].

6.2.2.1.3 Global Leader Phase (GLP) In the GLP, the SM updates its position using the experience of the global leader and the local group members:

$$SM_{new\,pq} = SM_{pq} + UR(0,1) \times \left(GL_{q} - SM_{pq}\right) + UR(-1,1) \times \left(SM_{rq} - SM_{pq}\right) \quad (6.8)$$

where GLq shows the position of the global leader in the qth dimension and q ∈ {1, 2, ..., M} defines a randomly selected index.

In this phase, the fitness of each SM is used to estimate a probability prbp. Positions are updated based on this probability value, so that better positions get more opportunities to improve further. The probability is computed using the following expression:

$$prb_{p} = 0.9 \times \frac{fn_{p}}{\max_{p} fn_{p}} + 0.1 \quad (6.9)$$

where fnp denotes the fitness value of the pth SM. In addition, the fitness of the new position of each SM is determined and compared with that of the previous position, and the position with the better fitness value is retained.
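A small sketch of these update rules (illustrative; the per-dimension random draws are an assumption about how UR is applied):

```python
import numpy as np

rng = np.random.default_rng()

def llp_update(sm, local_leader, random_member):
    """Equation (6.7): move toward the local leader, perturbed by a random peer."""
    ur01 = rng.random(sm.shape)                  # UR(0,1), one draw per dimension
    ur11 = rng.uniform(-1, 1, sm.shape)          # UR(-1,1), one draw per dimension
    return sm + ur01 * (local_leader - sm) + ur11 * (random_member - sm)

def selection_probability(fn):
    """Equation (6.9): prb_p = 0.9 * fn_p / max(fn) + 0.1."""
    fn = np.asarray(fn, dtype=float)
    return 0.9 * fn / fn.max() + 0.1
```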

6.2.2.1.4 Global Leader Learning (GLL) Phase A greedy selection method is implemented to update the position of the GL: the GL position is replaced by the SM position with the best fitness value in the population. If there is no update, the Global Limit Count is incremented by 1.

6.2.2.1.5 Local Leader Learning (LLL) Phase In this phase, greedy selection is applied within each local group to update the LL position: the LL position is replaced by the SM position with the best fitness value in that local group. If there is no update, the Local Limit Count is incremented by 1.

6.2.2.1.6 Local Leader Decision (LLD) Phase If the LL is not updated within the Local Leader Limit, all members of the local group either adjust their positions randomly, as in the initialization step, or update them using existing information from the GL and LL on the basis of the perturbation rate pr:

$$SM_{new\,pq} = SM_{pq} + UR(0,1) \times \left(GL_{q} - SM_{pq}\right) + UR(0,1) \times \left(SM_{pq} - LL_{kq}\right) \quad (6.10)$$

6.2.2.1.7 Global Leader Decision (GLD) Phase If the GL position is not improved within the Global Leader Limit, the GL divides the population into smaller groups. This procedure is repeated until the maximum number of groups (MG) is reached; at each division, an LL is selected for the newly formed group. If the maximum number of groups has been created and the GL position still does not improve, the GL merges all the groups into a single group.
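Putting these phases together, the following is a condensed, single-group sketch of the SMO loop for tuning the KELM parameters (C, γ); the fitness function, iteration count, and search ranges are illustrative assumptions rather than the chapter's settings (in practice the fitness would be the KELM validation accuracy):

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(params):
    """Hypothetical fitness for (C, gamma); replace with KELM validation accuracy."""
    C, gamma = params
    return -((np.log10(C) - 1.0) ** 2 + np.log10(gamma) ** 2)  # toy surrogate

P, M = 20, 2                                          # monkeys, dimensions (C, gamma)
lo, hi = np.array([1e-2, 1e-2]), np.array([1e3, 1e3])
pop = lo + rng.random((P, M)) * (hi - lo)             # initialization, equation (6.6)
fit = np.array([fitness(sm) for sm in pop])
gl, gl_fit = pop[fit.argmax()].copy(), fit.max()      # global leader
global_limit_count = 0

for _ in range(100):
    ll = gl                                           # one group, so LL == GL here
    for p in range(P):                                # LLP/GLP updates, eqs (6.7)-(6.8)
        r = rng.integers(P)
        new = (pop[p] + rng.random(M) * (ll - pop[p])
               + rng.uniform(-1, 1, M) * (pop[r] - pop[p]))
        new = np.clip(new, lo, hi)
        new_fit = fitness(new)
        if new_fit > fit[p]:                          # greedy selection
            pop[p], fit[p] = new, new_fit
    best = fit.argmax()                               # GLL phase, with limit counter
    if fit[best] > gl_fit:
        gl, gl_fit, global_limit_count = pop[best].copy(), fit[best], 0
    else:
        global_limit_count += 1                       # LLD/GLD regrouping omitted for brevity

C_best, gamma_best = gl                               # tuned KELM parameters
```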

 