As a widely used reinforcement learning method, Q-learning is bedeviled by the curse of dimensionality: the computational complexity grows dramatically with the size of the state-action space. To combat this difficulty, an integrated hierarchical Q-learning framework is proposed based on a hybrid Markov decision process (MDP) that uses temporal abstraction instead of the simple MDP. The learning process is naturally organized into multiple levels of learning, e.g., a quantitative (lower) level and a qualitative (upper) level, which are modeled as an MDP and a semi-MDP (SMDP), respectively. This hierarchical control architecture constitutes a hybrid MDP as the model of hierarchical Q-learning, bridging the two levels of learning. The proposed hierarchical Q-learning scales up well and speeds up learning through the upper-level learning process; hence the approach is an effective integrated learning and control scheme for complex problems. Several experiments are carried out on a puzzle problem in a gridworld environment and a navigation control problem for a mobile robot. The experimental results demonstrate the effectiveness and efficiency of the proposed approach.
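To make the two-level structure concrete, below is a minimal Python sketch of the update rules implied by the abstract: ordinary one-step Q-learning over an MDP at the quantitative level, and SMDP Q-learning over temporally extended options at the qualitative level. This is an illustrative reconstruction under stated assumptions, not the paper's actual implementation; the state/option names, the reward accumulation, and the option-termination handling are all placeholders.

```python
# Sketch of two-level (hierarchical) Q-learning updates. All identifiers
# (q_low, q_high, "room_A", "go_to_door", ...) are hypothetical examples,
# not taken from the paper.
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.95

# Lower (quantitative) level: standard one-step Q-learning over an MDP.
q_low = defaultdict(float)   # keyed by (state, primitive_action)

def q_learning_update(s, a, r, s_next, actions):
    best_next = max(q_low[(s_next, a2)] for a2 in actions)
    q_low[(s, a)] += ALPHA * (r + GAMMA * best_next - q_low[(s, a)])

# Upper (qualitative) level: SMDP Q-learning over temporally extended
# options. An option runs for tau primitive steps and collects a
# discounted cumulative reward R; gamma**tau accounts for elapsed time.
q_high = defaultdict(float)  # keyed by (abstract_state, option)

def smdp_q_update(s, option, cum_reward, tau, s_next, options):
    best_next = max(q_high[(s_next, o2)] for o2 in options)
    q_high[(s, option)] += ALPHA * (
        cum_reward + (GAMMA ** tau) * best_next - q_high[(s, option)]
    )

# Example: an option executed 3 primitive steps with rewards r1..r3;
# its cumulative discounted reward is r1 + GAMMA*r2 + GAMMA**2*r3.
rewards = [0.0, 0.0, 1.0]
R = sum((GAMMA ** k) * r for k, r in enumerate(rewards))
smdp_q_update("room_A", "go_to_door", R, tau=len(rewards),
              s_next="room_B", options=["go_to_door", "stay"])
```

The gamma**tau term in the upper-level update is the standard SMDP treatment of variable-duration actions; it is what lets the qualitative level learn over abstract decisions while the quantitative level continues to learn at every primitive step, which is the speed-up mechanism the abstract describes.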