Collaboration is Key to Innovation

You are most welcome to collaborate with us on our recent research topics.

“The fun for me in collaboration is, one, working with other people just makes you smarter; that’s proven.” – Lin-Manuel Miranda

Rahul Yadav

Postdoc

Peng Cheng Laboratory

Biography

Rahul Yadav is currently a Postdoc at the Peng Cheng Laboratory, Shenzhen, China. He received the M.S. degree in computer science from the Department of Computer Science and Mathematics, South Asian University, New Delhi, India, and the Ph.D. degree in computer science from the School of Computer Science and Technology, Harbin Institute of Technology, China. He served as a Guest Editor for the journal Wireless Communications and Mobile Computing, and serves as a reviewer for highly reputable journals, including IEEE TVT, IEEE TDSC, IEEE TSC, IEEE TCC, IEEE TII, and IEEE IoT. He has published in reputable conferences and journals. His research focuses on the IoT, computation offloading, energy-efficient management, cloud/fog/edge computing, vehicular fog computing, optimal utilization of data center resources, cost-efficient virtual machine consolidation, and delay estimation.

Interests

  • Cloud/Fog/Edge Computing
  • Vehicular Computing
  • Internet of Things

Education

  • PhD in Computer Science, 2020

    Harbin Institute of Technology

  • M.Sc. in Computer Science, 2015

    South Asian University

Worldwide Academic Collaborations

Researchers

Omprakash Kaiwartya

Senior Lecturer

Drone Enabled Networking, E-Mobility Centric Electric Vehicles, IoT Enabled Smart Services

Prof. Houbing Song

Assistant Professor

Cybersecurity and Privacy, Unmanned Aircraft Systems, Communications and Networking

Prof. Weizhe Zhang

Professor

Cloud/Fog Computing, Distributed Computing, Internet of Things

Prof. Yu-Chu Tian (Glen)

Professor

Cloud Computing, Energy System Optimization, Wireless Sensor Networks

Publications

Managing overloaded hosts for energy-efficiency in cloud data centers

Traditional data centers are shifting toward the cloud computing paradigm. These data centers support the increasing demand for computation and data storage, consuming a massive amount of energy at a huge cost to the cloud service provider and the environment. Considerable energy is wasted on constantly operating idle virtual machines (VMs) during periods of low load. Dynamic consolidation of VMs from overloaded or underloaded hosts is an effective strategy for improving energy consumption and resource utilization in cloud data centers, and consolidation from an overloaded host directly influences the service level agreements (SLAs), resource utilization, and quality of service (QoS) delivered by the system. We propose an algorithm, GradCent, based on the stochastic gradient descent technique, which learns an upper CPU utilization threshold for detecting overloaded hosts from a real CPU workload. Moreover, we propose a dynamic VM selection algorithm called Minimum Size Utilization (MSU) for selecting VMs from an overloaded host for consolidation. GradCent and MSU maintain the trade-off between minimizing energy consumption and maximizing QoS under a specified SLA goal. We evaluated both algorithms with CloudSim simulations using real-world workload traces from more than a thousand PlanetLab VMs.
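The paper's exact formulations are not reproduced here, but the overall loop can be sketched as follows. This is a toy illustration only: the threshold update is a simple gradient-descent step toward the observed utilization peak, and MSU is assumed to pick the VM with the smallest size-times-utilization product, which may differ from the published criteria.

```python
def update_threshold(threshold, observed_peak, lr=0.1):
    """One gradient-descent-style step on the squared error between the
    current upper CPU threshold and the observed utilization peak
    (illustrative stand-in for the GradCent update rule)."""
    return threshold - lr * (threshold - observed_peak)

def detect_overloaded(host_util, threshold):
    """Flag a host as overloaded when its CPU utilization exceeds the
    dynamically learned upper threshold."""
    return host_util > threshold

def select_vm_msu(vms):
    """MSU-style pick: choose the VM with the smallest size-times-utilization
    product, keeping the migration cost low (assumed criterion).
    Each VM is a dict with "ram_mb" and "cpu_util" keys."""
    return min(vms, key=lambda vm: vm["ram_mb"] * vm["cpu_util"])
```

In a consolidation loop, `detect_overloaded` would run per host each scheduling interval, and `select_vm_msu` would be called repeatedly until the host drops back below the threshold.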

Energy-Latency Tradeoff for Dynamic Computation Offloading in Vehicular Fog Computing

Vehicular Fog Computing (VFC) relieves overloaded cloudlet nodes, reduces service latency during peak times, and saves energy for battery-powered cloudlet nodes by offloading user tasks to nearby vehicles (vehicular nodes), exploiting their under-utilized computation resources. However, the wide deployment of VFC still confronts several critical challenges: the lack of an energy-latency tradeoff and of efficient resource allocation mechanisms. In this paper, we address these challenges with an Energy-efficient dynamic Computation Offloading and resource allocation Scheme (ECOS) that minimizes energy consumption and service latency. We first formulate the ECOS problem as a joint energy and latency cost minimization problem subject to vehicular node mobility and end-to-end latency deadline constraints. We then propose an ECOS scheme with three phases: first, an overloaded cloudlet node detection policy based on resource utilization; second, a computation offloading selection policy that selects a task from an overloaded cloudlet node for offloading, minimizing the offloading cost and the risk of overload; and third, a heuristic that solves the resource allocation problem between the vehicular node and the selected user tasks for the energy-latency tradeoff. Extensive simulations under realistic highway and synthetic scenarios show that the proposed scheme outperforms existing schemes in terms of energy saving, service latency, and joint energy-latency cost.
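The joint cost idea behind such schemes can be illustrated with a minimal sketch. This is not the ECOS algorithm itself: it assumes a weighted-sum cost, a simple transmit-plus-compute latency model, and a dwell-time check standing in for the mobility constraint; all field names and parameters are hypothetical.

```python
def joint_cost(energy_j, latency_s, w=0.5):
    # Weighted sum of energy (J) and latency (s); w trades one off
    # against the other. The paper may combine them differently.
    return w * energy_j + (1.0 - w) * latency_s

def pick_vehicle(task, vehicles, w=0.5):
    """Pick the vehicular node with the lowest joint energy-latency cost
    among those that meet the task's deadline and stay in range long
    enough (dwell time) to finish the task."""
    best, best_cost = None, float("inf")
    for v in vehicles:
        tx_s = task["bits"] / v["link_bps"]      # upload time
        exec_s = task["cycles"] / v["cpu_hz"]    # compute time
        latency = tx_s + exec_s
        if latency > task["deadline_s"] or latency > v["dwell_s"]:
            continue  # deadline or mobility constraint violated
        energy = v["tx_power_w"] * tx_s + v["cpu_power_w"] * exec_s
        cost = joint_cost(energy, latency, w)
        if cost < best_cost:
            best, best_cost = v, cost
    return best
```

Sweeping `w` from 0 to 1 traces out the energy-latency tradeoff curve that such schemes are evaluated against.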

Cache Performance Optimization of QoC Framework

The main aim of this paper is a cache performance test of the QoC (quality of experience) framework for cloud computing on the server side. The QoC framework uses a hierarchical server-side architecture. Reverse proxy technology builds a server cluster that forms the front-end access layer, achieving load balancing across servers and improving system performance; a built-in distributed cache server cluster forms the cache acceleration layer, reducing the load on the back-end database; and a database server cluster, built with master-slave synchronization, forms the data storage layer, realizing database read-write separation and data redundancy. This hierarchical architecture improves the performance and stability of the entire system and is highly scalable, laying a solid foundation for future expansion of system business logic and growth in user volume. The paper presents a new cache replacement algorithm for videos of inconsistent file sizes, analyzes the requirements of the multi-terminal QoC framework, gives the outline design of the client and server sides, describes their implementation details, and concludes with detailed functional and performance testing of the whole system.
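The replacement algorithm itself is not reproduced here; as a rough illustration of size-aware replacement for variable-size video files, the sketch below uses a GDSF-like greedy policy that evicts the entries with the lowest frequency-to-size ratio. This is an assumed policy, not the paper's algorithm.

```python
def evict_for(cache, need_bytes, capacity_bytes):
    """Greedy size-aware eviction: remove entries with the lowest
    frequency-to-size ratio (GDSF-like heuristic; an assumption, not the
    paper's exact algorithm) until `need_bytes` fits under
    `capacity_bytes`. `cache` maps name -> {"size": bytes, "freq": hits}.
    Returns the list of evicted names."""
    used = sum(e["size"] for e in cache.values())
    victims = []
    # Least valuable first: low hit frequency relative to bytes occupied.
    for name in sorted(cache, key=lambda n: cache[n]["freq"] / cache[n]["size"]):
        if used + need_bytes <= capacity_bytes:
            break
        used -= cache[name]["size"]
        victims.append(name)
    for name in victims:
        del cache[name]
    return victims
```

Dividing frequency by size means a large, rarely hit video is evicted before many small, popular ones, which suits workloads with highly inconsistent file sizes.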

Contact