Authors: Wei, Qing; Ma, Hailan; Chen, Chunlin; Dong, Daoyi
2025-03-23; 2025-03-23
Identifiers: Scopus:85101773065; PubMed:33600343
URI: https://dspace-test.anu.edu.au/handle/1885/733731516

Abstract: In this article, a novel training paradigm inspired by quantum computation is proposed for deep reinforcement learning (DRL) with experience replay. In contrast to the traditional experience replay mechanism in DRL, the proposed DRL with quantum-inspired experience replay (DRL-QER) adaptively chooses experiences from the replay buffer according to the complexity and the replayed times of each experience (also called a transition), to achieve a balance between exploration and exploitation. In DRL-QER, transitions are first formulated in quantum representations, and then the preparation operation and the depreciation operation are performed on them. In this process, the preparation operation reflects the relationship between the temporal-difference errors (TD-errors) and the importance of the experiences, while the depreciation operation ensures the diversity of the transitions. Experimental results on Atari 2600 games show that DRL-QER outperforms state-of-the-art algorithms, such as DRL-PER and DCRL, on most of these games with improved training efficiency, and that it is also applicable to memory-based DRL approaches such as the double network and the dueling network.

Funding: This work was supported in part by the National Natural Science Foundation of China under Grant 71732003, Grant 62073160, and Grant 61828303; in part by the Australian Research Council's Discovery Projects Funding Scheme under Project DP190101566; and in part by the National Key Research and Development Program of China under Grant 2018AAA0101100.

13
Language: English
Publisher Copyright: © 2013 IEEE.
Keywords: Deep reinforcement learning (DRL); quantum computation; quantum reinforcement learning; quantum-inspired experience replay (QER)
Title: Deep Reinforcement Learning with Quantum-Inspired Experience Replay
Publication date: 2022-09-01
DOI: 10.1109/TCYB.2021.3053414
http://www.scopus.com/inward/record.url?scp=85101773065&partnerID=8YFLogxK
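
Note: The sketch below is a minimal, illustrative reading of the replay scheme described in the abstract, not the paper's implementation. It assumes each stored transition carries an amplitude angle, that the preparation step maps the TD-error to that angle, that the depreciation step shrinks the angle with each replay, and that sampling probability follows the squared amplitude. The class and parameter names (QuantumInspiredReplayBuffer, k_prep, k_dep) and the exponential update rules are assumptions made for this example only.

```python
import numpy as np

class QuantumInspiredReplayBuffer:
    """Sketch of quantum-inspired experience replay (assumed formulas).

    Each transition i has an angle theta_i; it is replayed with probability
    proportional to sin(theta_i)**2, mimicking measurement of a qubit
    amplitude. Preparation raises theta with the TD-error; depreciation
    lowers it as the transition is replayed more often.
    """

    def __init__(self, capacity, k_prep=0.5, k_dep=0.1):
        self.capacity = capacity
        self.k_prep = k_prep   # assumed gain from TD-error to amplitude
        self.k_dep = k_dep     # assumed decay per replay (depreciation)
        self.transitions = []  # (state, action, reward, next_state, done)
        self.thetas = []       # amplitude angles, one per transition
        self.replay_counts = []

    def _prepare(self, td_error, replay_count):
        # Preparation: larger |TD-error| pushes theta toward pi/2, i.e.
        # higher selection probability. Depreciation: each replay shrinks
        # theta so often-replayed transitions fade, preserving diversity.
        theta = (np.pi / 2) * (1.0 - np.exp(-self.k_prep * td_error))
        return theta * np.exp(-self.k_dep * replay_count)

    def add(self, transition, td_error):
        if len(self.transitions) >= self.capacity:
            self.transitions.pop(0)
            self.thetas.pop(0)
            self.replay_counts.pop(0)
        self.transitions.append(transition)
        self.thetas.append(self._prepare(abs(td_error), 0))
        self.replay_counts.append(0)

    def sample(self, batch_size, rng=None):
        rng = rng if rng is not None else np.random.default_rng()
        probs = np.sin(np.asarray(self.thetas)) ** 2
        total = probs.sum()
        probs = probs / total if total > 0 else np.full(len(probs), 1.0 / len(probs))
        idx = rng.choice(len(self.transitions), size=batch_size, p=probs, replace=False)
        for i in idx:
            self.replay_counts[i] += 1
        return idx, [self.transitions[i] for i in idx]

    def update(self, indices, td_errors):
        # After a learning step, re-prepare sampled transitions using their
        # fresh TD-errors and current replay counts.
        for i, e in zip(indices, td_errors):
            self.thetas[i] = self._prepare(abs(e), self.replay_counts[i])
```

In use, a DQN-style agent would call add() for each new transition, sample() to draw a minibatch, and update() with the recomputed TD-errors after the gradient step; because the buffer only reorders sampling, it can sit in front of double or dueling network variants, consistent with the compatibility claim in the abstract.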