Natural Actor-Critic

Abstract. This paper investigates a novel model-free reinforcement
learning architecture, the Natural Actor-Critic. The actor updates are
based on stochastic policy gradients employing Amari’s natural gradient
approach, while the critic obtains both the natural policy gradient and
additional parameters of a value function simultaneously by linear regression.
We show that actor improvements with natural policy gradients are
particularly appealing as these are independent of the coordinate frame of
the chosen policy representation, and can be estimated more efficiently
than regular policy gradients. The critic makes use of a special basis
function parameterization motivated by the policy-gradient compatible
function approximation. We show that several well-known reinforcement
learning methods such as the original Actor-Critic and Bradtke’s Linear
Quadratic Q-Learning are in fact Natural Actor-Critic algorithms. Empirical
evaluations illustrate the effectiveness of our techniques in comparison
to previous methods, and also demonstrate their applicability for
learning control on an anthropomorphic robot arm.
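For readers unfamiliar with the construction, the relations the abstract refers to can be sketched as follows (notation is ours, not taken verbatim from the paper): the natural gradient preconditions the vanilla policy gradient with the inverse Fisher information matrix of the policy, and with the compatible function approximation the critic's least-squares weights coincide with exactly that direction.

```latex
% Natural policy gradient update (Amari-style): precondition the
% vanilla gradient of the expected return J with the inverse Fisher
% information matrix F of the policy pi_theta.
\theta_{k+1} = \theta_k + \alpha\, F(\theta_k)^{-1}\, \nabla_\theta J(\theta_k),
\qquad
F(\theta) = \mathbb{E}\!\left[ \nabla_\theta \log \pi_\theta(a \mid s)\,
                               \nabla_\theta \log \pi_\theta(a \mid s)^{\top} \right].

% Compatible function approximation: the critic represents the
% advantage function using the policy's score function as basis,
A^{\pi}(s,a) \approx f_w(s,a) = \nabla_\theta \log \pi_\theta(a \mid s)^{\top} w,

% and the regression weights w that fit f_w satisfy
% w = F(\theta)^{-1} \nabla_\theta J(\theta), so the actor update
% reduces to \theta_{k+1} = \theta_k + \alpha\, w.
```

Because the update direction is rescaled according to the geometry of the policy distribution rather than the raw parameters, it does not depend on how the policy happens to be parameterized, which is the coordinate-frame independence the abstract highlights.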

