03a57decde62c76783ef7e2288bd61bc87f6e266,fairseq/optim/fp16_optimizer.py,FP16Optimizer,step,#FP16Optimizer#Any#,168

Before Change


        # copy FP32 params back into FP16 model
        offset = 0
        for p in self.params:
            if not p.requires_grad:
                continue
            numel = p.data.numel()
            p.data.copy_(self.fp32_params.data[offset:offset+numel].view_as(p.data))
            offset += numel
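The "Before Change" loop assumes the optimizer keeps one flat FP32 master copy of all trainable parameters and writes the updated values back into the FP16 model after each step. A minimal, self-contained sketch of that pattern follows; the params list and fp32_params buffer here are illustrative stand-ins, not fairseq's actual attributes.

    import torch

    # Two FP16 "model" parameters of different shapes.
    params = [
        torch.nn.Parameter(torch.randn(2, 3).half()),
        torch.nn.Parameter(torch.randn(4).half()),
    ]

    # Build one flat FP32 master buffer covering all trainable parameters.
    total = sum(p.data.numel() for p in params if p.requires_grad)
    fp32_params = torch.zeros(total, dtype=torch.float32)
    offset = 0
    for p in params:
        if not p.requires_grad:
            continue
        numel = p.data.numel()
        fp32_params[offset:offset + numel].copy_(p.data.view(-1))
        offset += numel

    # After the optimizer updates fp32_params in full precision, copy the
    # result back into the FP16 model parameters (the loop shown above).
    offset = 0
    for p in params:
        if not p.requires_grad:
            continue
        numel = p.data.numel()
        p.data.copy_(fp32_params[offset:offset + numel].view_as(p.data))
        offset += numel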

After Change


            for p in group["params"]:
                p.data = p.data.half()
                if p.grad is not None:
                    p.grad.data = p.grad.data.half()

    def zero_grad(self):
        """Clears the gradients of all optimized parameters."""
        self.wrapped_optimizer.zero_grad()
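The "After Change" fragment moves to a wrapper design: parameters and gradients are cast to FP16 in place, and bookkeeping calls such as zero_grad() are delegated to an inner optimizer. Below is a minimal sketch of that pattern under those assumptions; the class name HalfWrappedOptimizer is illustrative, not fairseq's API.

    import torch

    class HalfWrappedOptimizer:
        def __init__(self, wrapped_optimizer):
            self.wrapped_optimizer = wrapped_optimizer
            # Cast every optimized parameter (and any existing gradient)
            # to half precision, as in the "After Change" snippet.
            for group in wrapped_optimizer.param_groups:
                for p in group["params"]:
                    p.data = p.data.half()
                    if p.grad is not None:
                        p.grad.data = p.grad.data.half()

        def zero_grad(self):
            """Clears the gradients of all optimized parameters."""
            self.wrapped_optimizer.zero_grad()

    # Usage: wrap a plain SGD optimizer over a single parameter.
    w = torch.nn.Parameter(torch.randn(3))
    opt = HalfWrappedOptimizer(torch.optim.SGD([w], lr=0.1))
    opt.zero_grad()
    print(w.dtype)  # torch.float16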
In pattern: SUPERPATTERN

Frequency: 4

Non-data size: 6

Instances


Project Name: elbayadm/attn2d
Commit Name: 03a57decde62c76783ef7e2288bd61bc87f6e266
Time: 2018-12-24
Author: myleott@fb.com
File Name: fairseq/optim/fp16_optimizer.py
Class Name: FP16Optimizer
Method Name: step


Project Name: pytorch/fairseq
Commit Name: 7633129ba8d5f0e28bd6b6d6027b14352482ef31
Time: 2019-01-04
Author: myleott@fb.com
File Name: fairseq/trainer.py
Class Name: Trainer
Method Name: __init__