Comments (3)

cyugao commented on August 21, 2024

I agree! I was troubleshooting why I kept getting identical gradients on different GPUs, and only later found out that I hadn't included the forward step in the no_sync context.
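
For anyone hitting the same thing, here is a minimal sketch of the broken vs. correct pattern (assuming model is a DDP-wrapped module and inputs, labels, loss_fn are defined as in the example below):

# BROKEN: the forward pass runs outside no_sync, so DDP has already
# armed gradient synchronization for the coming backward pass.
outputs = model(inputs)                  # sync decision is made here
with model.no_sync():
    loss_fn(outputs, labels).backward()  # gradients are still all-reduced

# CORRECT: both forward and backward run under no_sync.
with model.no_sync():
    outputs = model(inputs)              # sync disabled for this pass
    loss_fn(outputs, labels).backward()  # gradients accumulate locally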

cyugao commented on August 21, 2024

My minimal example using gradient accumulation (adapted from https://pytorch.org/tutorials/intermediate/ddp_tutorial.html) is given below:

import argparse
import os
import torch
import torch.distributed as dist
import torch.nn as nn
import torch.optim as optim
import torch.multiprocessing as mp

from torch.nn.parallel import DistributedDataParallel as DDP

def setup(rank, world_size):
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '12355'

    # initialize the process group
    dist.init_process_group("nccl", rank=rank, world_size=world_size)

def cleanup():
    dist.destroy_process_group()

class ToyModel(nn.Module):
    def __init__(self):
        super(ToyModel, self).__init__()
        self.net1 = nn.Linear(10, 10)
        self.relu = nn.ReLU()
        self.net2 = nn.Linear(10, 5)

    def forward(self, x):
        return self.net2(self.relu(self.net1(x)))


def demo_basic(rank, world_size, grad_accum):
    print(f"Running basic DDP example on rank {rank}.")
    setup(rank, world_size)

    # create model and move it to GPU with id rank
    toymodel = ToyModel().to(rank)
    model = DDP(toymodel, device_ids=[rank])

    loss_fn = nn.MSELoss()
    optimizer = optim.SGD(model.parameters(), lr=0.001)

    optimizer.zero_grad()
    for epoch in range(5):
        ### Main Loop ###
        if rank == 0:
            print(f"\n----Epoch {epoch} started")
        for i in range(grad_accum):
            # NOTE: the forward pass below runs *outside* no_sync -- this is
            # exactly the bug being reproduced, since DDP decides whether to
            # sync during forward, not during backward.
            outputs = model(torch.randn(20, 10))
            labels = torch.randn(20, 5).to(rank)
            loss = loss_fn(outputs, labels) / grad_accum
            if i == grad_accum - 1:
                loss.backward()
            else:
                with model.no_sync():
                    print(f"No sync, in epoch {epoch} iter {i}, on rank {rank}, loss={loss:.5f}")
                    if epoch > 0 or i > 0:
                        print(f"On rank {rank}: Before backward, {next(model.parameters()).grad.flatten()[:5].tolist()}")
                    loss.backward()
                    print(f"On rank {rank}:  After backward, {next(model.parameters()).grad.flatten()[:5].tolist()}")

        optimizer.step()
        optimizer.zero_grad()

    cleanup()


def run_demo(demo_fn, world_size, grad_accum):
    mp.spawn(demo_fn,
             args=(world_size, grad_accum),
             nprocs=world_size,
             join=True)

if __name__ == "__main__":
    # parse args
    parser = argparse.ArgumentParser()
    parser.add_argument("--gpus", "-g", type=int, default=2)
    parser.add_argument("--grad_accum", "-ga", type=int, default=3)
    args = parser.parse_args()
    run_demo(demo_basic, args.gpus, args.grad_accum)
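
Running this (saved, say, as ddp_demo.py -- the filename is just for illustration) with python ddp_demo.py --gpus 2 --grad_accum 3 shows the problem: the "After backward" gradients printed by the two ranks come out identical on every iteration, including the ones wrapped in no_sync, because the forward pass ran outside the context and the all-reduce still fired.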

A fix is given below: replace the ### Main Loop ### section above with the following.

import contextlib
# ...
        ### Main Loop ###
        if rank == 0:
            print(f"\n----Epoch {epoch} started")
        for i in range(grad_accum):
            # Disable gradient sync on all but the last accumulation step;
            # contextlib.nullcontext() is the no-op stand-in on the final step.
            context = model.no_sync() if (i != grad_accum - 1) else contextlib.nullcontext()
            with context:
                outputs = model(torch.randn(20, 10))
                labels = torch.randn(20, 5).to(rank)
                loss = loss_fn(outputs, labels) / grad_accum
                if i == grad_accum - 1:
                    loss.backward()
                else:
                    print(f"No sync, in epoch {epoch} iter {i}, on rank {rank}, loss={loss:.5f}")
                    if epoch > 0 or i > 0:
                        print(f"On rank {rank}: Before backward, {next(model.parameters()).grad.flatten()[:5].tolist()}")
                    loss.backward()
                    print(f"On rank {rank}:  After backward, {next(model.parameters()).grad.flatten()[:5].tolist()}")
        optimizer.step()
        optimizer.zero_grad()
# ...
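
Why the forward pass has to be inside the context: no_sync does nothing more than flip a flag that DDP reads when forward() runs, so wrapping only the backward pass has no effect. Roughly, the context manager looks like this (a simplified sketch of torch/nn/parallel/distributed.py; details vary across PyTorch versions):

from contextlib import contextmanager

@contextmanager
def no_sync(self):
    # Turn gradient synchronization off for the duration of the block ...
    old_sync = self.require_backward_grad_sync
    self.require_backward_grad_sync = False
    try:
        yield
    finally:
        # ... and restore the previous setting on exit.
        self.require_backward_grad_sync = old_sync

# DDP consults require_backward_grad_sync in forward(): the allreduce
# hooks for the *next* backward are armed (or not) based on its value
# at forward time, which is why wrapping only backward changes nothing.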

stale commented on August 21, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
