On Practical Robust Reinforcement Learning: Adjacent Uncertainty Set and Double-Agent Algorithm

Robust reinforcement learning (RRL) seeks a robust policy by optimizing worst-case performance over an uncertainty set. This set contains Markov decision processes (MDPs) perturbed from the nominal MDP (N-MDP) that generates the training samples, and it reflects potential mismatches between the training simulator (i.e., the N-MDP) and real-world settings (i.e., the testing environments). Unfortunately, existing RRL algorithms apply only to the tabular setting, and extending them to more general continuous state spaces remains an open problem. We contribute to this subject in the following ways. We first construct a refined uncertainty set that, unlike existing sets, contains only plausible (perturbed) MDPs. Based on this set, we propose a sample-based RRL algorithm named adjacent robust Q-learning (ARQ-Learning) for the tabular setting and characterize its finite-time error bound. We also prove that ARQ-Learning converges as fast as standard Q-learning and robust Q-learning (Robust-Q) while guaranteeing better robustness. Our major contribution is an additional pessimistic agent that addresses the main hurdle in extending ARQ-Learning to large or continuous state spaces. Leveraging this double-agent approach, we develop, for the first time, model-free RRL algorithms for continuous state/action spaces. Experiments demonstrate the effectiveness of our algorithms.
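The abstract does not spell out the ARQ-Learning update rule itself. As background only, the sketch below shows a generic sample-based robust Q-learning step under a simple R-contamination uncertainty set, a common baseline in this line of work; the function name `robust_q_update`, the radius `rho`, and all defaults are illustrative assumptions, not the authors' method. Per the abstract, ARQ-Learning's adjacent uncertainty set and pessimistic second agent refine this basic pattern.

```python
import numpy as np

def robust_q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99, rho=0.1):
    """One sample-based robust Q-learning step (hypothetical sketch).

    Uses an R-contamination-style worst case: with probability rho the
    transition is adversarially redirected to the worst state, so the
    TD target mixes the nominal next-state value with the global worst
    state value. rho = 0 recovers the standard Q-learning target.
    """
    V = Q.max(axis=1)            # greedy state values V(s) = max_a Q(s, a)
    worst = V.min()              # value of the worst state the adversary can pick
    target = r + gamma * ((1.0 - rho) * V[s_next] + rho * worst)
    Q[s, a] += alpha * (target - Q[s, a])   # usual TD step toward the robust target
    return Q

# Toy usage on a 5-state, 2-action tabular MDP (illustrative only).
Q = np.zeros((5, 2))
Q = robust_q_update(Q, s=0, a=1, r=1.0, s_next=3)
```

The design point this illustrates is the one the abstract trades on: the robust target is only a perturbation of the standard TD target, so a smaller, more plausible uncertainty set can buy robustness without slowing convergence relative to Q-learning or Robust-Q.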
