Dynamic Task Allocation For Robotic Edge System Resilience Using Deep Reinforcement Learning
Article Title | Dynamic Task Allocation For Robotic Edge System Resilience Using Deep Reinforcement Learning |
---|---|
ERA Journal ID | 40499 |
Article Category | Article |
Authors | Afrin, Mahbuba; Jin, Jiong; Rahman, Ashfaqur; Li, Shi; Tian, Yu-Chu; and Li, Yan |
Journal Title | IEEE Transactions on Systems, Man, and Cybernetics: Systems |
Journal Citation | 54 (3), pp. 1438-1450 |
Number of Pages | 13 |
Year | 2024 |
Publisher | IEEE (Institute of Electrical and Electronics Engineers) |
Place of Publication | United States |
ISSN | 1083-4427; 1558-2426; 2168-2216; 2168-2232 |
Digital Object Identifier (DOI) | https://doi.org/10.1109/TSMC.2023.3327959 |
Web Address (URL) | https://ieeexplore.ieee.org/document/10320435 |
Abstract | Incorporating edge and cloud computing with robotics provides extended options for robots to perform real-time sensing and actuation operations in various cyber–physical systems (CPSs), including smart farms. Such systems are prone to uncertain failures triggered by mechanical disruptions. Consequently, the overall system performance degrades, primarily when location-specific tasks are already assigned to a faulty robot and require immediate recovery. Using edge and cloud computing resources is not always feasible due to communication and latency constraints. Therefore, this article exclusively focuses on harnessing the mobility of robots to support the computation tasks affected by uncertain failures of previously assigned robots and ensure faster resiliency management by relocating active robots near task sources. The proposed mobility-as-a-resilience-service (MaaRS) is formulated using a Markov decision process (MDP). Later, an edge server proximal to the robots is trained using deep reinforcement learning (DRL) to assign tasks among the robots. Specifically, a multiple deep Q-network (MDQN)-based dynamic task allocation mechanism is proposed to converge to a solution exploring reward uncertainties with the best exploitation. Numerical evaluation using Python and TensorFlow validates the effectiveness of the proposed approach compared to other benchmarks. |
Keywords | deep reinforcement learning (DRL); edge computing; multirobot system; smart farming; task allocation |
Contains Sensitive Content | Does not contain sensitive content |
ANZSRC Field of Research 2020 | 461103. Deep learning |
Public Notes | Files associated with this item cannot be displayed due to copyright restrictions. |
Byline Affiliations | Curtin University; Swinburne University of Technology; Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia; Queensland University of Technology; School of Mathematics, Physics and Computing |
https://research.usq.edu.au/item/z56z3/dynamic-task-allocation-for-robotic-edge-system-resilience-using-deep-reinforcement-learning
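The abstract describes a deep Q-network-based dynamic task allocation scheme trained on an edge server, evaluated in Python and TensorFlow. The sketch below is a minimal illustration of that general idea, not the authors' code: a single deep Q-network chooses which robot should absorb a task recovered from a failed robot. The fleet size, state features (per-robot distance to the task source and current load), and hyperparameters are assumptions, and the paper's actual MDQN mechanism coordinates multiple Q-networks rather than the single network shown here.

```python
# Minimal illustrative sketch (not the authors' implementation): a single deep
# Q-network that picks which robot should take over a task recovered from a
# failed robot. Fleet size, state features, and hyperparameters are assumptions.
import random
from collections import deque

import numpy as np
import tensorflow as tf

NUM_ROBOTS = 4                 # assumed fleet size
STATE_DIM = NUM_ROBOTS * 2     # assumed features: distance to task source and load, per robot
GAMMA = 0.95                   # discount factor
EPSILON = 0.1                  # epsilon-greedy exploration rate


def build_q_network() -> tf.keras.Model:
    """Fully connected Q-network mapping a state to one Q-value per robot."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(STATE_DIM,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(NUM_ROBOTS),
    ])


q_net = build_q_network()
target_net = build_q_network()
target_net.set_weights(q_net.get_weights())
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
replay = deque(maxlen=10_000)  # experience replay buffer of (s, a, r, s') tuples


def select_robot(state: np.ndarray) -> int:
    """Epsilon-greedy choice of the robot that takes over the affected task."""
    if random.random() < EPSILON:
        return random.randrange(NUM_ROBOTS)
    q_values = q_net(state[None, :].astype(np.float32))
    return int(tf.argmax(q_values[0]).numpy())


def train_step(batch_size: int = 32) -> None:
    """One gradient step on a minibatch sampled from the replay buffer."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    states, actions, rewards, next_states = map(np.array, zip(*batch))
    # Bootstrapped targets from the slowly updated target network.
    targets = rewards.astype(np.float32) + GAMMA * tf.reduce_max(
        target_net(next_states.astype(np.float32)), axis=1)
    with tf.GradientTape() as tape:
        q_all = q_net(states.astype(np.float32))
        q_taken = tf.reduce_sum(q_all * tf.one_hot(actions, NUM_ROBOTS), axis=1)
        loss = tf.reduce_mean(tf.square(targets - q_taken))
    grads = tape.gradient(loss, q_net.trainable_variables)
    optimizer.apply_gradients(zip(grads, q_net.trainable_variables))
```

In the paper's setting the reward would reflect resilience objectives such as task recovery latency and robot relocation cost; those details are left abstract in this sketch.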