This is the final report for the 2016 spring internship at Preferred Networks. gym_torcs is released on my GitHub account: https://github.com/ugo-nama-kun/gym_torcs
PFN Spring Internship Final Report: Autonomous Drive by Deep RL
1. Driving in TORCS with Deep Deterministic Policy Gradient
Final Report
Naoto Yoshida
2. About Me
● Ph.D. student from Tohoku University
● My Hobbies:
○ Reading Books
○ TBA
● NEWS:
○ My conference paper on the reward function was accepted!
■ SCIS&ISIS2016 @ Hokkaido
3. Outline
● TORCS and Deep Reinforcement Learning
● DDPG: An Overview
● In Toy Domains
● In TORCS Domain
● Conclusion / Impressions
5. TORCS: The Open Source Racing Car Simulator
● Open source
● Realistic (?) simulation of the car dynamics and racing environment
6. Deep Reinforcement Learning
● Reinforcement Learning + Deep Learning
○ From Pixel to Action
■ General game play in the ATARI domain
■ Car Driver
■ (Go Expert)
● Deep Reinforcement Learning in Continuous Action Domain: DDPG
○ Lillicrap, Timothy P., et al. "Continuous control with deep reinforcement learning." ICLR 2016.
Vision-based car agent in TORCS
Steering + Accel/Brake
= 2-dim continuous actions
8. GOAL: Maximization of the return $R_t = \sum_{k \ge 0} \gamma^k r_{t+k}$ in expectation
Reinforcement Learning
[Diagram: the Agent sends Action a to the Environment and receives State s and Reward r in return.]
9. GOAL: Maximization of the return $R_t = \sum_{k \ge 0} \gamma^k r_{t+k}$ in expectation
Reinforcement Learning
[Diagram: as above, but with an Interface between the Agent and the Environment: the raw output u is converted into Action a, and the raw input x into State s and Reward r.]
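As a rough sketch of this interface idea (the class name, scaling, and action split below are hypothetical illustrations, not the actual gym_torcs code):

import numpy as np


class TorcsInterface(object):
    """Hypothetical interface: raw TORCS I/O <-> the RL-facing (s, a)."""

    def observe(self, x):
        # Raw input x (e.g. pixel values in [0, 255]) -> state s in [0, 1].
        return np.asarray(x, dtype=np.float32) / 255.0

    def act(self, a):
        # 2-dim continuous action a in [-1, 1]^2 -> raw control u:
        # a[0] -> steering, a[1] -> accel (a[1] > 0) or brake (a[1] < 0).
        steer = float(np.clip(a[0], -1.0, 1.0))
        accel = float(np.clip(a[1], 0.0, 1.0))
        brake = float(np.clip(-a[1], 0.0, 1.0))
        return {"steer": steer, "accel": accel, "brake": brake}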
10. Deterministic Policy Gradient
● Formal Objective Function: Maximization of the true action value
   $J(\mu_\theta) = \mathbb{E}_{s \sim \rho^\mu}\left[ Q^\mu(s, \mu_\theta(s)) \right]$
● Policy Evaluation: Approximation of the objective function
   Loss for the Critic: $L(\theta^Q) = \mathbb{E}\left[ \left( Q(s_t, a_t \mid \theta^Q) - y_t \right)^2 \right]$
   where $y_t = r_t + \gamma\, Q(s_{t+1}, \mu(s_{t+1}) \mid \theta^Q)$
   (the Bellman equation w.r.t. the deterministic policy)
● Policy Improvement: Improvement of the objective function
   Update direction of the Actor:
   $\nabla_{\theta^\mu} J \approx \mathbb{E}\left[ \nabla_a Q(s, a \mid \theta^Q)\big|_{a=\mu(s)} \, \nabla_{\theta^\mu} \mu(s \mid \theta^\mu) \right]$
Silver, David, et al. "Deterministic Policy Gradient Algorithms." ICML 2014.
11. Deep Deterministic Policy Gradient
[Algorithm diagram: Initialization → loop { Sampling / Interaction (the RL agent (DDPG) sends action a to TORCS and receives state s and reward r) → Update of Critic with a minibatch → Update of Actor with a minibatch → Update of Target networks }]
Lillicrap, Timothy P., et al. "Continuous control with deep reinforcement learning." ICLR 2016.
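The loop above can be sketched as follows. This is a minimal PyTorch illustration of the four update steps, not the internship implementation; the network sizes, learning rates, and tau are illustrative.

import copy

import torch
import torch.nn as nn

# Small MLPs stand in for the real networks (state dim 4, action dim 2).
actor = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2), nn.Tanh())
critic = nn.Sequential(nn.Linear(4 + 2, 64), nn.ReLU(), nn.Linear(64, 1))
actor_t, critic_t = copy.deepcopy(actor), copy.deepcopy(critic)  # targets
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
gamma, tau = 0.99, 1e-3


def ddpg_update(s, a, r, s_next, done):
    # --- Update of Critic (+ minibatch): regress Q(s, a) onto the target y.
    with torch.no_grad():
        q_next = critic_t(torch.cat([s_next, actor_t(s_next)], dim=1))
        y = r + gamma * (1.0 - done) * q_next
    critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), y)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # --- Update of Actor (+ minibatch): ascend dQ/da through the critic.
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()

    # --- Update of Target networks (soft update with rate tau).
    for net, net_t in ((actor, actor_t), (critic, critic_t)):
        for p, p_t in zip(net.parameters(), net_t.parameters()):
            p_t.data.mul_(1.0 - tau).add_(tau * p.data)


# Example call with a random minibatch of 64 transitions.
B = 64
ddpg_update(torch.randn(B, 4), torch.rand(B, 2) * 2 - 1,
            torch.randn(B, 1), torch.randn(B, 4), torch.zeros(B, 1))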
12. Deep Architecture of DDPG
Observation: the three most recent frames, stacked (three-step observation)
Simultaneous training of two deep convolutional networks (actor and critic)
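A hedged sketch of what one such network can look like, assuming three RGB frames stacked along the channel axis (9 input channels); the layer sizes here are illustrative, not the exact architecture used in the experiments.

import torch
import torch.nn as nn

vision_actor = nn.Sequential(
    nn.Conv2d(9, 32, kernel_size=8, stride=4), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=4, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.LazyLinear(200), nn.ReLU(),
    nn.Linear(200, 2), nn.Tanh(),  # steering, accel/brake in [-1, 1]
)

frames = torch.zeros(1, 9, 64, 64)  # batch of one 3-frame observation
print(vision_actor(frames).shape)   # -> torch.Size([1, 2])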
13. Exploration: Ornstein–Uhlenbeck process
● Gaussian noise with the following moments:
○ θ, σ: parameters
○ dt: time difference
○ μ: mean (= 0)
● Stochastic Differential Equation:
   $dx_t = \theta(\mu - x_t)\,dt + \sigma\,dW_t$   ($W_t$: Wiener process)
● Exact solution for the discrete time step (the noise term is Gaussian):
   $x_{t+dt} = x_t e^{-\theta\,dt} + \mu\left(1 - e^{-\theta\,dt}\right) + \sigma\sqrt{\frac{1 - e^{-2\theta\,dt}}{2\theta}}\;\mathcal{N}(0, 1)$
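A direct NumPy implementation of this exact discrete-time solution (the parameter values are illustrative defaults):

import numpy as np


def ou_step(x, theta=0.15, sigma=0.2, mu=0.0, dt=1.0, rng=np.random):
    # One exact step of the OU process: decay toward mu plus Gaussian noise.
    decay = np.exp(-theta * dt)
    noise_scale = sigma * np.sqrt((1.0 - np.exp(-2.0 * theta * dt)) / (2.0 * theta))
    return x * decay + mu * (1.0 - decay) + noise_scale * rng.standard_normal()


# Sample a noise trajectory to add to the actor's actions during exploration.
x, trajectory = 0.0, []
for _ in range(1000):
    x = ou_step(x)
    trajectory.append(x)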
19. Toy Problem 2: Cart-pole Balancing
● Another classical benchmark task
○ Action: horizontal force on the cart
○ State: cart position and velocity, pole angle and angular velocity (x, ẋ, θ, θ̇)
○ Reward (other definitions are possible; a sketch follows below):
■ +1 (while the pole angle stays inside the allowed angle area)
■ 0 (episode terminal)
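A sketch of this reward definition; the angle threshold is a hypothetical value, not the one used in the experiments.

# Reward as described above: +1 while the pole angle stays inside the
# allowed area, 0 at episode termination (ANGLE_LIMIT is illustrative).
ANGLE_LIMIT = 0.8  # [rad], hypothetical value


def reward(theta):
    return 1.0 if abs(theta) < ANGLE_LIMIT else 0.0


def is_terminal(theta):
    return abs(theta) >= ANGLE_LIMIT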
26. VTORCS-RL-color
● Visual TORCS
○ TORCS for vision-based AI agents
■ The original TORCS does not have a vision API!
■ vtorcs:
● Koutník et al., "Evolving deep unsupervised convolutional networks for vision-based reinforcement learning." ACM, 2014.
○ Monochrome images from the TORCS server
■ Modified for color vision → vtorcs-RL-color
○ Restart bug
■ Solved with the help of my mentors' substantial suggestions!
28. What was the cause of the failure?
● DDPG implementation?
○ It worked correctly, at least in the toy domains.
■ Approximation of the value functions → OK
● However, policy improvement eventually failed.
■ The default exploration strategy is problematic in the TORCS environment
● This setting may be tuned for general tasks
● Higher-order exploration in POMDPs is required
● TORCS environment?
○ Several environment parameters are still unknown
■ Reward → OK (checked by the DDPG author)
■ Episode terminal condition → still various possibilities
(from the DDPG paper)
30. Impressions
● On DDPG
○ Learning continuous control is a tough problem :(
■ Difficulty of the policy update in DDPG
■ The DDPG author recommended the async method "twice" (^ ^;)
● Throughout this PFN internship:
○ Weakness: coding
■ Thank you! Fujita-san, Kusumoto-san
■ I learned about many weaknesses of my coding style
○ Advantage: reinforcement learning theory
■ ...and its derived algorithms, related topics, and the relationship between RL and inference
■ For DEEP RL, Fujita-san is an authority in Japan :)
32. Cart-pole Balancing
● DDPG could learn a successful policy
○ Still unstable after several successful trials
33. Success in the Half-Cheetah Experiment
● We could run a successful experiment with hyperparameters identical to those used in cart-pole.
[Learning curve: 300-step total reward vs. episode]
34. Keys in DDPG / deep RL
● Normalization of the environment
○ Preprocessing is known to be very important for deep learning.
■ This is also true in deep RL.
■ Scaling of the inputs (and possibly the actions and rewards) will help the agent to learn.
● Possible normalization (a sketch follows at the end):
○ Simple normalization helps: x_norm = (x - mean_x) / std_x
○ The mean and standard deviation are obtained during the initial exploration.
○ Other normalizations like ZCA/PCA whitening may also help.
● The epsilon parameter in Adam/RMSprop can be a large value
○ 0.1, 0.01, 0.001… We still need hand-tuning / grid search...
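A minimal sketch of the simple normalization above, assuming statistics are collected from observations gathered during the initial exploration (the class and variable names are hypothetical):

import numpy as np


class Normalizer(object):
    def __init__(self, initial_observations):
        # Estimate per-dimension statistics from the warm-up observations.
        obs = np.asarray(initial_observations, dtype=np.float64)
        self.mean = obs.mean(axis=0)
        self.std = obs.std(axis=0) + 1e-8  # avoid division by zero

    def __call__(self, x):
        # x_norm = (x - mean_x) / std_x
        return (x - self.mean) / self.std


# Usage: collect observations with a random policy, then normalize inputs.
warmup = [np.random.uniform(-1, 1, size=4) for _ in range(1000)]
normalize = Normalizer(warmup)
x_norm = normalize(np.zeros(4))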