Abstract:
The rise of user-centric design demands ubiquitous access to infrastructure and applications,
facilitated by the Edge-Cloud network and microservices. However, efficiently managing resource allocation
while orchestrating microservice placement in such dynamic environments presents a significant challenge.
These challenges stem from the limited resources of edge devices, the need for low-latency responses, and the
potential for performance degradation due to service failures or inefficient deployments. This paper addresses
the challenge of microservice placement in Edge-Cloud environments by proposing a novel Reinforcement
Learning algorithm called Bi-Generic Advantage Actor-Critic for Microservice Placement Policy. This
algorithm’s ability to learn and adapt to the dynamic environment makes it well-suited for optimizing
resource allocation and service placement decisions within Edge-Cloud environments. We compare this algorithm
against three baseline algorithms through simulations on a real-world dataset, evaluating performance
metrics such as execution time, network usage, average migration delay, and energy consumption. The results
demonstrate the superiority of the proposed method, with an 8% reduction in execution time, translating
to faster response times for users. Additionally, it achieves a 4% decrease in network usage and a 2%
decrease in energy consumption compared to the best-performing baseline. This research contributes by
reproducing the Edge-Cloud environment in simulation, applying the novel Bi-Generic Advantage Actor-Critic
technique, and demonstrating significant improvements over state-of-the-art baseline algorithms in
microservice placement and resource management within Edge-Cloud environments.