9e2ee3e63243d3ae7805ca6b9ea1043dc01be2d3,a3c.py,A3C,act,#A3C#Any#Any#Any#,24

Before Change


                v = self.v_function(self.past_states[i])
                advantage = R - v
                # Accumulate gradients of policy
                log_prob = F.log(self.past_action_prob[i])
                (- log_prob * float(advantage.data)).backward()
                # Accumulate gradients of value function
                (advantage ** 2).backward()

After Change



            self.optimizer.zero_grads()

            pi_loss = 0
            v_loss = 0
            for i in reversed(xrange(self.t_start, self.t)):
                R *= self.gamma
                R += self.past_rewards[i]
                v = self.v_function(self.past_states[i])
                advantage = R - v
                # Accumulate gradients of policy
                log_prob = self.past_action_log_prob[i]
                pi_loss += (- log_prob * float(advantage.data))
                # Accumulate gradients of value function
                v_loss += advantage ** 2
            pi_loss.backward()
            v_loss.backward()

            self.optimizer.update()
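The pattern replaces per-step backward() calls with a single backward pass over the accumulated scalar losses. Because differentiation is linear, summing the losses first produces the same accumulated gradient as calling backward on each term. A minimal sketch of that equivalence (pure Python with hand-derived gradients; the toy loss terms are hypothetical, not taken from the projects above):

```python
# Toy per-step losses L_i(w) = (w - t_i)**2, gradients computed analytically.

def grad_single(w, t):
    # dL/dw for one term L = (w - t)**2
    return 2.0 * (w - t)

def grad_of_sum(w, targets):
    # gradient of the summed loss sum_i (w - t_i)**2
    return sum(2.0 * (w - t) for t in targets)

w = 0.5
targets = [1.0, -2.0, 3.0]

# "old" style: backward per step, gradients accumulate
acc = sum(grad_single(w, t) for t in targets)
# "new" style: accumulate the loss, one backward at the end
one = grad_of_sum(w, targets)

assert abs(acc - one) < 1e-12
```

Beyond equivalence, accumulating into pi_loss and v_loss lets the framework build one computation graph and traverse it once, which is typically cheaper than many small backward passes.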
In pattern: SUPERPATTERN

Frequency: 3

Non-data size: 3

Instances


Project Name: chainer/chainerrl
Commit Name: 9e2ee3e63243d3ae7805ca6b9ea1043dc01be2d3
Time: 2016-03-08
Author: muupan@gmail.com
File Name: a3c.py
Class Name: A3C
Method Name: act


Project Name: PacktPublishing/Deep-Reinforcement-Learning-Hands-On
Commit Name: 4296a765125fff6491892a1bb70fb32ac516dae6
Time: 2018-02-10
Author: max.lapan@gmail.com
File Name: ch15/01_train_a2c.py
Class Name:
Method Name:


Project Name: PacktPublishing/Deep-Reinforcement-Learning-Hands-On
Commit Name: 99abcc6e9b57f441999ce10dbc31ca1bed79c356
Time: 2018-02-10
Author: max.lapan@gmail.com
File Name: ch15/04_train_ppo.py
Class Name:
Method Name: