
Motivation and Contributions

Applications of the modern era not only demand storage and memory but also require networking capabilities. Such applications generally have strict constraints on acceptable downtime and delay. Departure from the requested quality-of-service (QoS) can be perilous for InPs, as it can lead to reduced revenue, a tarnished market reputation and user dissatisfaction. To mitigate this issue, InPs offered resources in the form of VDCs. Moreover, the rapid escalation of IoT devices propelled applications to be deployed on such devices. These devices, with their limited capabilities, were not adequate to provide services in real time; hence, the notion of fog computing was introduced. Fog computing enabled users to perform processing at the edge of the network without relying on the distant cloud. However, the dependency on the cloud was not nullified, as fog nodes also have limited resources. Hence, an interplay between the strata was necessary to cater to the needs of a variety of applications. Each stage of evolution introduced different challenges, some of which were resolved while others did not attract much attention. The overall contributions of the chapter can be highlighted as follows:

  • (a.) To study the evolution of cloud networks starting from VM based allocation to a more complicated infrastructure involving an interplay between the strata.
  • (b.) To elaborately discuss the motivation behind such evolution and discuss the roadblocks in the process.
  • (c.) To perform a systematic review of the challenges faced at each stage of evolution and discuss the techniques used in the literature to address them.
  • (d.) To discuss the pros and cons of the solution approaches and identify areas where open research can be conducted.

Evolution of Traditional Cloud Networks

As already discussed in Section 1.1, the traditional VM based offering was not suited to many network-intensive, real-time and business-critical applications, for which virtual data centres (VDCs) were used as a solution. Although the evolution of VDCs helped InPs achieve better isolation and service quality, it also brought about a variety of new challenges. VDC embedding/provisioning is one such key problem that InPs are frequently exposed to. Virtual data center embedding (VDCE) involves mapping/assigning VDC components (VMs and VLs) onto physical resources (servers and physical links) in/across DCs subject to different objectives. The VDCE problem with constraints on VMs and VLs can be reduced to the NP-Hard multi-way separator problem [98]. Even after the VMs are embedded, mapping the VLs remains NP-Hard. A number of researchers have addressed the VDCE problem subject to different agendas such as economic benefit, resource utilization efficiency, energy efficiency, survivability and quality-of-service (QoS) demands. In some cases we use the terms virtual network (VN) and virtual data centre (VDC) interchangeably, as they more or less refer to the same thing with some subtle differences [384].
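To make the two-phase structure of VDCE concrete, the following is a deliberately simplified, hypothetical sketch: VMs are packed greedily onto servers, then each VL is reserved on the single physical link between the chosen hosts. Real embedders search multi-hop substrate paths and handle far richer constraints; this only illustrates the shape of the problem.

```python
def embed_vdc(servers, links, vms, vls):
    """servers: {name: free_cpu}; links: {(a, b): free_bw} (undirected);
    vms: {vm: cpu_demand}; vls: {(vm1, vm2): bw_demand}.
    Returns (vm_map, vl_map) on success, or None if the request is rejected."""
    free_cpu = dict(servers)
    free_bw = dict(links)
    vm_map = {}
    # Phase 1: node mapping -- place the largest VMs first on the
    # server with the most residual CPU (greedy worst-fit).
    for vm, need in sorted(vms.items(), key=lambda kv: -kv[1]):
        host = max(free_cpu, key=free_cpu.get)
        if free_cpu[host] < need:
            return None                      # not enough CPU anywhere
        free_cpu[host] -= need
        vm_map[vm] = host
    vl_map = {}
    # Phase 2: link mapping -- reserve bandwidth on the physical link
    # joining the two hosts (co-located VMs need no physical bandwidth).
    for (u, v), bw in vls.items():
        a, b = vm_map[u], vm_map[v]
        if a == b:
            continue
        edge = (a, b) if (a, b) in free_bw else (b, a)
        if free_bw.get(edge, 0) < bw:
            return None                      # link capacity exhausted
        free_bw[edge] -= bw
        vl_map[(u, v)] = edge
    return vm_map, vl_map
```

Even this toy version shows why the phases interact: a node mapping that looks good in isolation can leave a VL with no feasible physical link, which is exactly what coordinated approaches try to avoid.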

Since VDCE involves two inter-dependent phases, as discussed earlier, Chowdhury et al. [98] proposed a VN embedding algorithm that coordinates the phases involved. The overall embedding problem was formulated as a mixed integer program with substrate network augmentation. The authors devised two embedding algorithms, namely D-ViNE and R-ViNE, using deterministic and randomized rounding techniques. On the other hand, Rabbani et al. [310] proposed a VDCE solution taking into account parameters such as residual bandwidth, server de-fragmentation, communication overhead and load balancing. Since energy consumption at the DCs grows with increasing user demands, techniques to reduce energy consumption are crucial, not only to increase the revenue of InPs but also to reduce their environmental impact. In this regard, Yang et al. [384] proposed two algorithms, NSS-JointSL and NSS-GBFS, to reduce the energy consumption in lightly loaded DCs while minimizing the embedding cost in heavily loaded DCs. On the other hand, Guan et al. [147] developed a heuristic-based method for VN scheduling that considers the expected energy consumption at the DCs and migration costs.

Cloud Network Management: An IoT Based Framework

Table 1.1: Summary of literature on virtual data center embedding (✓ = considered, ✗ = not considered).

| Work | Provisioning Cost | Acceptance Ratio | Revenue | Single Cloud | Multi-cloud | Energy Consumption | Resource Utilization |
|---|---|---|---|---|---|---|---|
| Chowdhury et al. [98] | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ |
| Rabbani et al. [310] | ✗ | ✓ | ✓ | ✓ | ✗ | ✗ | ✓ |
| Yang et al. [384] | ✓ | ✓ | ✓ | ✓ | ✗ | ✓ | ✗ |
| Guan et al. [147] | ✗ | ✗ | ✗ | ✓ | ✗ | ✓ | ✗ |
| Guan et al. [148] | ✗ | ✓ | ✗ | ✓ | ✗ | ✓ | ✗ |
| Zheng et al. [165] | ✗ | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ |
| Fajjari et al. [126] | ✗ | ✓ | ✓ | ✓ | ✗ | ✗ | ✓ |
| Dab et al. [102] | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✓ |
| Pathak and Vidyarthi [300] | ✗ | ✓ | ✗ | ✓ | ✗ | ✗ | ✓ |
| Sun et al. [341] | ✓ | ✗ | ✗ | ✓ | ✗ | ✓ | ✗ |
| Sun et al. [343] | ✓ | ✗ | ✗ | ✓ | ✗ | ✗ | ✓ |
| Sun et al. [342] | ✓ | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ |
| Amokrane et al. [45] | ✗ | ✓ | ✓ | ✓ | ✗ | ✓ | ✗ |
| Dietrich et al. [114] | ✓ | ✓ | ✗ | ✗ | ✓ | ✗ | ✗ |

As we have already discussed, VDCE/VNE is NP-Hard, which implies that with increasing problem size the solution space, and consequently the computation time, escalates exponentially. Hence, researchers were motivated to adopt meta-heuristic based solutions to obtain a compromise solution in a reasonable amount of time. Ant colony optimization (ACO) is one such meta-heuristic that simulates the behaviour of artificial ants acting as agents searching for solutions. In this respect, Guan et al. [148] presented an ACO based VN embedding and scheduling technique to reduce the energy consumption of resources across DCs. A link-mapping-oriented protocol (L-ACS) was presented by Zheng et al. [165] for mapping VDC requests. Additionally, Fajjari et al. [126] proposed a VN embedding technique called VNE-AC based on max-min ant systems (MMAS). Genetic algorithm (GA) is also a popular meta-heuristic often used to solve combinatorial optimization problems with a large search space. In this regard, Dab et al. [102] proposed a dynamic reconfiguration of VNs based on GA to achieve higher resource utilization and revenue at the InPs. Alternatively, Pathak and Vidyarthi [300] discussed a GA based algorithm to address the issue of VN embedding across a substrate network to maximize the revenue, acceptance rate and resource utilization.
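To convey the flavour of these ACO-style searches, the following is a minimal, hypothetical sketch of pheromone-guided VM placement. It is not the actual VNE-AC or L-ACS algorithm: each ant samples a VM-to-server assignment in proportion to the pheromone on each (VM, server) pair, trails evaporate, and only the best solution found deposits pheromone, in the max-min spirit of MMAS.

```python
import random

def aco_place(vms, servers, cost, n_ants=10, n_iter=30, rho=0.5, seed=1):
    """vms: list of VM ids; servers: list of server ids;
    cost(assignment) -> float to minimise (e.g. energy or embedding cost).
    Returns the best assignment found as {vm: server}."""
    rng = random.Random(seed)
    tau = {(v, s): 1.0 for v in vms for s in servers}   # pheromone trails
    best, best_cost = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            # Each ant assigns every VM, sampling servers in
            # proportion to the pheromone on the (vm, server) pair.
            a = {v: rng.choices(servers,
                                weights=[tau[(v, s)] for s in servers])[0]
                 for v in vms}
            c = cost(a)
            if c < best_cost:
                best, best_cost = a, c
        # Evaporate all trails, then let the best-so-far solution
        # deposit pheromone (a best-only update, as in MMAS).
        for k in tau:
            tau[k] *= (1 - rho)
        for v, s in best.items():
            tau[(v, s)] += 1.0 / (1.0 + best_cost)
    return best
```

The appeal in the VDCE setting is that any objective from Table 1.1 (energy, provisioning cost, revenue) can be plugged in as the `cost` function without changing the search itself.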

VDC requests are often exposed to time-varying resource demands, which requires reconfiguration of resources. To address the issue of dynamic resource reconfiguration for evolving VDCs, Sun et al. [341] proposed a heuristic that aims to minimize the mapping cost and energy consumption of reconfiguring an evolving VDC request across multiple DCs. As an alternative, Sun et al. [343] developed a mixed integer linear programming (MILP) model to cater to the demands of evolving VDCs; the authors aim to minimize the remapping cost and the resources used. Optimal provisioning of resources for hybrid virtual networks, formulated as an integer linear program (ILP) to reduce the provisioning cost of resources across DCs, is discussed in [342]. Amokrane et al. [45] discussed a resource management framework called Greenhead to attain the best possible trade-off between revenue and energy costs when provisioning VDC requests across geographically distributed DCs. In contrast to the traditional research, Dietrich et al. [114] discuss provisioning of VNs across multiple providers with limited information disclosure. Table 1.1 presents a comparative view of the parameters considered for evaluating the different strategies discussed in the literature.

Into the Fog

Partitioning DC resources into VDCs improved service quality and user experience, catering to the demands of a new class of network-sensitive applications. However, for latency-sensitive applications, the cloud is still not an ideal platform. Some instances of such applications are connected vehicles, fire detection and fire fighting, smart grid and content delivery. The key bottlenecks that restrict the cloud as an obvious choice for such applications are the poor connectivity between the cloud and end-devices, the geo-distributed deployment of applications and service requirements at locations where a provider does not have a DC. As an immediate solution, fog computing was proposed, which extends the services of the cloud to the edge of the network [280]. This enables latency-sensitive applications to execute their services at the edge, while latency-tolerant applications can be processed at the distant cloud. Further, as it is well known that the cloud is not an ideal platform for the majority of IoT applications, fog could potentially act as the saviour. In fact, fog computing acts as a complement to the cloud and brings services closer to the users. The fog stratum can be formed by different providers, and each domain is formed by a set of fog nodes that include edge routers, switches, gateways, access points, smart phones, set-top boxes, etc.

The fog computing paradigm introduced many new challenges for researchers, such as resource management, communication issues and cloud-fog federation. However, it would be incomplete to discuss such issues with reference to fog alone, as the three paradigms are often interdependent in providing a range of services. Hence, with this motivation, in the next section we discuss the above-mentioned issues and their solutions with respect to an interplay between the three strata, viz., IoT, fog and cloud.

IoT-Fog-Cloud Interplay

In this section, we first discuss the challenges that are critical to making an interplay between the three strata feasible. In this regard, a glaring issue that needs immediate attention is resource allocation and management. We discuss resource allocation and management on three different fronts, namely resource migration, allocation and scheduling, and look at the different solution approaches proposed in the literature. Further, we also discuss issues such as communication among the different strata and resource allocation in cloud-fog federation. Secondly, we discuss the application of the IoT-fog-cloud interplay to healthcare, connected-vehicle and smart city applications. All the discussions are conducted with reference to the architecture depicted in Figure 1.2. The lowest layer depicts the end-user stratum, the middle layer comprises multiple fog nodes and constitutes the fog stratum, and the topmost layer is the high-capacity cloud stratum.


Figure 1.2: IoT-Fog-Cloud Interplay Architecture [279].

Challenges in IoT-Fog-Cloud Interplay

The challenges encountered are discussed mainly on three different fronts, viz., (a.) Resource Management, (b.) Inter- and Intra-Stratum Communication, and (c.) Cloud-Fog Federation.

Resource Management

Migrating VMs is an essential aspect of resource management, which generally involves seamlessly transferring the state of a VM from one node to another [323], [22]. In this regard, Bittencourt et al. [65] discussed a layer-based architecture for resource migration focused on VM migration between fog nodes. The VMs contain user data and components and are kept available so that, as the user moves, the migration is carried out with minimal impact on services. On the other hand, Agarwal et al. [23] focused on workload distribution that optimally distributes tasks generated by end-devices (IoT devices) between fog and cloud depending on the availability of resources. Basically, tasks are executed in one of the following three modes: directly at the fog nodes, suspended and later executed at the fog nodes, or dispatched to a distant cloud for execution. With regard to scheduling tasks, Cardellini et al. [80] focused on exploiting local computing resources at fog nodes, keeping the quality-of-service (QoS) requirements intact, to schedule data stream processing (DSP) applications. On the other hand, Kapsalis et al. [188] departed from the traditional hierarchical and centralized fog model and adopted a federated model in which cooperating edge devices allocate and manage the computational resources required to host varying application components.
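The three execution modes just described can be illustrated with a toy dispatch policy. The function and its thresholds below are our own illustration of the idea, not the algorithm of Agarwal et al. [23]:

```python
def dispatch(task_load, fog_free, fog_queue_len, max_queue=5):
    """Toy three-way offloading policy mirroring the modes above:
    run at the fog node now, queue at the fog node for later, or
    fall back to the distant cloud. All thresholds are illustrative."""
    if task_load <= fog_free:
        return "fog-now"          # enough residual fog capacity right now
    if fog_queue_len < max_queue:
        return "fog-deferred"     # suspend, execute at the fog node later
    return "cloud"                # fog saturated: dispatch to the cloud
```

Real policies weigh latency budgets, energy and migration cost rather than a single capacity check, but the decision structure is the same three-way split.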

Inter- and Intra-Stratum Communication

A complex architecture involving three different execution environments, i.e., fog, cloud and local execution (at the devices), often involves the tasks constituting an application being executed at different environments/locations. Hence, proper inter- and intra-stratum communication is essential and has to be controlled. In this regard, Shi et al. [329] discussed an inter-stratum communication protocol between end-devices and fog nodes; specifically, the authors use the constrained application protocol (CoAP) for communication. On the other hand, Krishnan et al. [201] proposed an inter-stratum architecture that enables user devices to decide where to execute a task, i.e., at the fog or the cloud. In contrast to Shi et al. [329], which deals with communication between IoT devices and fog, Aazam and Huh [18] studied the communication between the fog and cloud strata and proposed a protocol that reduces the number of packets transmitted to the cloud, thereby reducing the overall communication overhead. Alternatively, Slabicki and Grochla [335] explored intra-stratum communication between IoT devices. The authors analyse the communication delay for data exchange in three different scenarios, namely, (i.) direct communication between devices, (ii.) communication through fog, and (iii.) communication via cloud.
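The qualitative gap between the three scenarios can be seen with a back-of-the-envelope delay model. All per-hop latencies below are assumed for illustration only, not measurements from [335]:

```python
# Assumed one-way per-hop latencies in milliseconds (illustrative):
DEV_DEV, DEV_FOG, FOG_CLOUD = 5, 10, 80

def delay(hops):
    """One-way end-to-end delay as the sum of per-hop delays."""
    return sum(hops)

direct    = delay([DEV_DEV])                                  # (i.) device to device
via_fog   = delay([DEV_FOG, DEV_FOG])                         # (ii.) relayed through a fog node
via_cloud = delay([DEV_FOG, FOG_CLOUD, FOG_CLOUD, DEV_FOG])   # (iii.) up through the cloud and back
```

Even with generous numbers for the cloud path, the ordering direct < via fog < via cloud holds, which is the essence of why intra-stratum exchange is kept as low in the hierarchy as possible.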

Cloud-Fog Federation

Since resource sharing is an indispensable aspect of any evolutionary network, there is a need to promote the sharing of resources between fog and cloud. A major reason that intensifies this cooperation is that fog nodes are resource limited and need the assistance of the cloud for executing resource-hungry applications. Keeping such cooperation in view, Zhanikeev [401] proposed a model called the cloud visitation platform (CVP) to facilitate the sharing of resources between the cloud and fog strata.

Applications of IoT-Fog-Cloud Interplay

An interplay among the IoT, fog and cloud paradigms is essential to support modern applications. In this subsection, we discuss its impact on applications such as healthcare, connected vehicles and smart cities.

Healthcare Applications

The healthcare industry has benefited from this interplay architecture, as the majority of such applications are latency sensitive. Fog computing is an enabler to record and process data on patients' vitals, monitored by different IoT devices, which are then stored on a remote cloud for future reference. Although not all diseases can be monitored, some chronic ailments such as chronic obstructive pulmonary disease (COPD), Parkinson's disease and speech disorders can be monitored using this architecture, along with ECG/EEG feature extraction. In this context, Stantchev et al. [338] proposed an architecture for nursing services for elderly people. The authors validated their model with a use-case scenario, where IoT devices were used to monitor blood pressure, the fog stratum was used for temporary storage of data and the cloud was used as permanent storage to be later referred to by the doctor. On the other hand, Fratu et al. [137] presented a model that extends its support to patients suffering from COPD and mild dementia. In the IoT layer, different sensors such as temperature and infra-red movement detectors were deployed. The fog layer was responsible for real-time processing and emergency handling, for instance when the oxygen level in the patient's body was out of the normal range. Further, the cloud is used to maintain data of the patients over a long period of time for future reference. Monteiro et al. [275] proposed a fog computing interface called FIT to analyse the clinical speech data of patients with Parkinson's disease and speech disorders. At the IoT end, an Android smart watch is used to acquire speech data, which is subsequently analysed at the fog nodes and stored at the cloud to monitor the progress of patients over a period of time. Gia et al. [141] proposed an IoT healthcare monitoring architecture to detect activities of the heart and brain by exploiting fog and its advantages such as ensured QoS and emergency notifications. Data regarding temperature, ECG and EMG are generated by wearable sensors and processed at smart gateways acting as fog nodes. Zao et al. [389] developed a brain-computer interaction (BCI) game called "EEG Tractor Beam" that monitors the state of the brain using a mobile application on a smart phone. The players of the game are required to identify a ring surrounding a target object, and every player has to pull the target towards itself by concentrating. The raw data stream generated during the game is sensed by a smart phone and processed at a nearby fog node.

Connected Vehicles

Hou et al. [168] proposed an architecture called vehicular fog computing (VFC) that utilizes vehicles for communication and computation. It employs a collaborative multitude of end-user clients or near-user edge devices to make better use of the communication and computational resources of each vehicle. Datta et al. [105] discussed an architecture for connected vehicles with Road Side Units (RSUs) and M2M gateways to provide consumer-centric services such as data analytics with semantic discovery and management of connected vehicles. Truong et al. [362] proposed a new vehicular ad-hoc network (VANET) architecture called FSDN, which combines two emergent computing and networking paradigms: software defined networking (SDN) and fog computing. The SDN-based architecture provides flexibility, scalability, programmability and global knowledge, while fog computing offers the delay-sensitive, location-aware services that future VANET architectures demand.

Smart City Applications

In addition to healthcare and vehicular networks, the most important aspect of such an interplay architecture is its suitability for smart living and smart cities. In this regard, Li et al. [218] discussed an architecture for smart living, i.e., smart healthcare and smart energy. Smart healthcare implies monitoring and detecting chronic heart problems in real time based on the data collected by sensors and processed at fog nodes. Concerning smart city applications, the functionalities of fog nodes can be used for application deployment, network configuration and billing. Yan and Su [380] proposed a smart metering architecture to improve upon the traditional metering scheme. The authors exploit an interplay between the IoT nodes, i.e., the smart homes, and the smart meters, which act as data nodes and store user data. Further, periodic data transmission from the fog nodes to the cloud serves as a backup. Brzoza-Woch et al. [77] designed an architecture for advanced telemetry systems that supports automated detection of floods, earthquakes and landslides. The proposal makes use of all three layers, with sensors deployed for measurement, many distributed telemetry stations performing data processing, and the cloud used for communication. Tang et al. [355] presented a hierarchical distributed fog computing architecture to support the integration of massive infrastructure components, catering to the future requirements of smart cities. To add security to future communities, the authors propose building large-scale geo-spatial sensing networks, performing big data analysis, identifying anomalous and hazardous events, and offering optimal responses in real time using an interplay between the three strata.

Research Challenges and Solution Approach

In this section, we identify potential areas where research can be conducted. Considering the network dependency of new-age applications and their stochastic resource demands, virtual data centre embedding (VDCE) models can employ learning techniques to predict demand spikes beforehand and subsequently use a proper relocation mechanism to improve service quality. The relocation of VDC requests involves VM migration over a distributed network; hence, security is a major concern. The relocations are performed frequently and generally involve transferring sensitive data; hence, a lightweight cryptographic algorithm should be enforced to secure the transmission. To address this security concern, blockchain can be used as a solution strategy, as it enforces a distributed, tamper-proof protocol that is ideal for such environments. Referring to the literature reviewed on fog computing, researchers have mainly focused on resource allocation, communication issues and cloud-fog federation. However, the focus has not been on predicting the amount of resources that fog nodes should possess so that both end-users and providers benefit. The interplay between the strata promotes decentralization and often involves data exchange among geographically distributed nodes, which has to be carried out over a secure channel. This poses a challenge to traditional cryptographic algorithms that do not support decentralization; hence, blockchain can be used as a security mechanism.
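As one concrete instance of the kind of lightweight demand prediction suggested above, an exponentially weighted moving average (EWMA) forecast with a spike threshold could flag candidate VDC relocations. The smoothing factor and threshold below are illustrative choices, not values from the literature:

```python
def ewma_forecast(history, alpha=0.3):
    """One-step-ahead exponentially weighted moving average forecast
    of resource demand from a list of past observations."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def needs_relocation(history, observed, spike_factor=1.5):
    """Flag a demand spike when the observed demand exceeds the
    forecast by spike_factor, signalling a candidate relocation."""
    return observed > spike_factor * ewma_forecast(history)
```

A production system would replace the EWMA with a richer model (e.g. a seasonal or learned predictor) and couple the flag to a cost-aware relocation mechanism, but the pipeline of forecast, compare, relocate stays the same.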

Conclusion

In this chapter, we discuss the evolution of traditional cloud architectures from VM based offerings to virtual data centres that cater to the needs of modern network-intensive applications. However, with the rapid proliferation of IoT devices, servicing latency-sensitive and real-time applications using the cloud was not feasible. This marked a paradigm shift to fog, i.e., cloud services near to end-users. Since fog nodes are distributed and have limited resources, the dependence on the cloud for large-scale processing and storage persisted. To overcome this problem, an interplay architecture between IoT, fog and cloud was used that not only provides immediate responses but also enables large-scale processing and storage. Such a complex interaction has enormous applications but also introduces many challenges. In this chapter, we discuss some of these challenges, such as resource provisioning, inter- and intra-stratum communication and cloud-fog federation scenarios, and highlight the approaches used in the literature to handle them. Additionally, we highlight applications that can benefit from such an interplay, such as healthcare, connected vehicles and smart city applications. Finally, we discuss some future research directions and potential solution approaches.
