ec849adaf4ceb42ed52ca142c839f627c34b9434,slm_lab/agent/algorithm/reinforce.py,Reinforce,calc_advantage,#Reinforce#Any#,158
Before Change
rewards = []
big_r = 0
for r in epi_rewards[::-1]:
    big_r = r + self.gamma * big_r
    rewards.insert(0, big_r)
rewards = torch.Tensor(rewards)
logger.debug3(f"Rewards: {rewards}")
rewards = (rewards - rewards.mean()) / (rewards.std() + np.finfo(np.float32).eps)
logger.debug3(f"Normalized rewards: {rewards}")
After Change
big_r = 0
T = len(epi_rewards)
returns = np.empty(T, dtype="float32")
for t in reversed(range(T)):
    big_r = epi_rewards[t] + self.gamma * big_r
    returns[t] = big_r
logger.debug3(f"Rewards: {returns}")
returns = (returns - returns.mean()) / (returns.std() + 1e-08)
returns = torch.from_numpy(returns)
logger.debug3(f"Normalized returns: {returns}")
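The change replaces repeated `list.insert(0, ...)` (O(T²) overall) with a single backward pass into a preallocated NumPy array. The core of the after-change loop can be sketched as a standalone helper; the function name `calc_returns` is illustrative, not from the source:

```python
import numpy as np

def calc_returns(epi_rewards, gamma):
    # Discounted returns G_t = r_t + gamma * G_{t+1}, computed in a single
    # backward pass and written into a preallocated float32 array.
    T = len(epi_rewards)
    returns = np.empty(T, dtype="float32")
    big_r = 0.0
    for t in reversed(range(T)):
        big_r = epi_rewards[t] + gamma * big_r
        returns[t] = big_r
    return returns

# For rewards [1, 1, 1] with gamma = 0.5:
# G_2 = 1.0, G_1 = 1 + 0.5 * 1.0 = 1.5, G_0 = 1 + 0.5 * 1.5 = 1.75
print(calc_returns([1.0, 1.0, 1.0], 0.5))  # → [1.75 1.5  1.  ]
```

Filling the array back-to-front avoids both the quadratic insert cost and a Python-list-to-tensor copy: the result can be handed to `torch.from_numpy`, which shares memory with the NumPy array rather than copying it.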
In pattern: SUPERPATTERN
Frequency: 3
Non-data size: 4
Instances
Project Name: kengz/SLM-Lab
Commit Name: ec849adaf4ceb42ed52ca142c839f627c34b9434
Time: 2018-05-21
Author: kengzwl@gmail.com
File Name: slm_lab/agent/algorithm/reinforce.py
Class Name: Reinforce
Method Name: calc_advantage
Project Name: IndicoDataSolutions/finetune
Commit Name: 1cbe695f7e05ec94df08b821a81475a7647d53b6
Time: 2019-06-20
Author: matthew.bayer@indico.io
File Name: finetune/target_models/deployment_model.py
Class Name: DeploymentModel
Method Name: predict
Project Name: IndicoDataSolutions/finetune
Commit Name: cf73f9fd2638c07e0bddefb9ab918486067c4e80
Time: 2019-06-20
Author: matthew.bayer@indico.io
File Name: finetune/target_models/deployment_model.py
Class Name: DeploymentModel
Method Name: predict