
ekfac-pytorch's Introduction

EKFAC and K-FAC Preconditioners for Pytorch

This repo contains a PyTorch implementation of the EKFAC and K-FAC preconditioners. If you find this software useful, please check the references below and cite accordingly!

Presentation

We implemented K-FAC and EKFAC as preconditioners. Preconditioners are similar to PyTorch's optimizer classes, except that they do not update the parameters themselves: they only modify the gradients of those parameters. They can therefore be used in combination with your favorite optimizer (we used SGD in our experiments). Note that we only implemented them for Linear and Conv2d modules, so all other modules of your network are silently skipped.

Usage

Here is a simple example showing how to add K-FAC or EKFAC to your code:

# 1. Instantiate the preconditioner
#    (the import path assumes the ekfac module from this repo is on your path)
from ekfac import EKFAC

preconditioner = EKFAC(network, 0.1, update_freq=100)

# 2. During the training loop, simply call preconditioner.step() before optimizer.step().
#    The optimizer is usually SGD.
for i, (inputs, targets) in enumerate(train_loader):
    optimizer.zero_grad()
    outputs = network(inputs)
    loss = criterion(outputs, targets)
    loss.backward()
    preconditioner.step()  # Add a step of preconditioner before the optimizer step.
    optimizer.step()
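
K-FAC is used the same way. Here is a sketch of an instantiation using the hyperparameter names that appear in the issues below; the values are placeholders, not recommendations, so check the KFAC constructor in the repo for the exact signature:

# Drop-in K-FAC instead of EKFAC; keyword names follow the hyperparameters
# reported in the issues below, and the values are placeholders only.
preconditioner = KFAC(network, eps=0.1, sua=False, pi=False,
                      update_freq=100, alpha=1.0, constraint_norm=False)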

References

EKFAC: George, Laurent, Bouthillier, Ballas, Vincent. Fast Approximate Natural Gradient Descent in a Kronecker-factored Eigenbasis. NeurIPS 2018.

K-FAC: Martens, Grosse. Optimizing Neural Networks with Kronecker-factored Approximate Curvature. ICML 2015.

K-FAC for Convolutions: Grosse, Martens. A Kronecker-factored Approximate Fisher Matrix for Convolution Layers. ICML 2016.

Norm Constraint: Ba, Grosse, Martens. Distributed Second-Order Optimization using Kronecker-Factored Approximations. ICLR 2017.

ekfac-pytorch's People

Contributors

flennerhag, joe-khawand, noamwies, tfjgeorge, thrandis



ekfac-pytorch's Issues

EKFAC with sua=True: "unsupported operation: more than one element of the written-to tensor refers to a single memory location. Please clone() the tensor before performing the operation."

I encountered this error at line 194 of ekfac_precond.py when running the EKFAC preconditioner with sua=True:

preconditioner params:  {'eps': 0.0005, 'sua': True, 'ra': True, 'update_freq': 50, 'alpha': 0.75}
2021-02-20 07:33:20.072364: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1
current_lr:  0.004
Traceback (most recent call last):
  File "trainer.py", line 122, in <module>
    precond.step()
  File "/content/kfac-backpack/lib/ekfac_precond.py", line 71, in step
    self._precond_sua_ra(weight, bias, group, state)
  File "/content/kfac-backpack/lib/ekfac_precond.py", line 194, in _precond_sua_ra
    m2.mul_(self.alpha).add_((1. - self.alpha) * bs, g_kfe**2)
RuntimeError: unsupported operation: more than one element of the written-to tensor refers to a single memory location. Please clone() the tensor before performing the operation.

If I replace the in-place operations mul_() and add_() with mul() and add(), I get a different error:

  m2 = m2.mul(self.alpha).add((1. - self.alpha) * bs, g_kfe**2)
Traceback (most recent call last):
  File "trainer.py", line 122, in <module>
    precond.step()
  File "/content/kfac-backpack/lib/ekfac_precond.py", line 71, in step
    self._precond_sua_ra(weight, bias, group, state)
  File "/content/kfac-backpack/lib/ekfac_precond.py", line 196, in _precond_sua_ra
    g_nat = self._to_kfe_sua(g_nat_kfe, kfe_x.t(), kfe_gy.t())
  File "/content/kfac-backpack/lib/ekfac_precond.py", line 295, in _to_kfe_sua
    g = torch.mm(vg.t(), g.view(sg[0], -1)).view(vg.size(1), sg[1], sg[2], sg[3])
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
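
For completeness, this is the change the second error message seems to suggest (a sketch reconstructed from the traceback above, not tested; the variable names are taken from the traceback, not copied from the repo):

# Inside _to_kfe_sua, use reshape instead of view on the (possibly
# non-contiguous) gradient tensor, as the RuntimeError suggests:
sg = g.size()
g = torch.mm(vg.t(), g.reshape(sg[0], -1)).view(vg.size(1), sg[1], sg[2], sg[3])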

Any idea how I can run EKFAC with the SUA approximation, as mentioned in the paper?

To implement KFAC/EKFAC in a convolutional neural network, we rely on the SUA approximation (Grosse & Martens, 2016) which has been shown to be competitive in practice (Laurent et al., 2018)

I'm running this in Google Colab, which has PyTorch 1.7, if that matters.

Thanks in advance

how to deal with depthwise conv

Thanks for the great work.
I want to use EKFAC to train MobileNet. However, it contains depthwise convolutions whose groups parameter is not equal to 1. How can this be handled?
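
For context, this is the kind of check that would need to be added to the preconditioner's module scan so that grouped convolutions are at least skipped and left to the plain optimizer update (a sketch of a possible workaround, not something this repo provides; the helper name is hypothetical):

import torch.nn as nn

def kfac_supported(mod):
    # K-FAC/EKFAC factorizes standard Linear and Conv2d layers; grouped or
    # depthwise convolutions (groups != 1) are skipped here as a workaround
    # and fall back to the plain optimizer step.
    if isinstance(mod, nn.Linear):
        return True
    if isinstance(mod, nn.Conv2d):
        return mod.groups == 1
    return False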

Question about backward hook

Hi, thank you for the implementation of EKFAC.
I am trying to understand why we need to multiply the gradient by the batch size in the backward hook, as shown here.
Is it because the loss is averaged over the batch?
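
For context, a small standalone check (my own sketch, not code from this repo) makes the 1/N factor visible: with a mean-reduced loss, grad_output seen by the backward hook is N times smaller than with a sum-reduced loss, so multiplying by the batch size recovers the per-example statistics.

import torch
import torch.nn as nn

torch.manual_seed(0)
N = 8                               # batch size
layer = nn.Linear(4, 3)
x = torch.randn(N, 4)

grads = {}
def save_gy(module, grad_input, grad_output):
    # grad_output[0] is the gradient of the loss w.r.t. the layer's output
    grads['gy'] = grad_output[0].detach().clone()

layer.register_full_backward_hook(save_gy)  # needs a recent PyTorch (>= 1.8)

layer(x).sum(dim=1).mean().backward()       # loss averaged over the batch
gy_mean = grads['gy']

layer.zero_grad()
layer(x).sum(dim=1).sum().backward()        # loss summed over the batch
gy_sum = grads['gy']

print(torch.allclose(gy_sum, gy_mean * N))  # True: mean reduction adds a 1/N factor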

Question: KFAC/EKFAC with SGLD optimizer

Thanks for publishing the source code for your KFAC/EKFAC preconditioner implementation!

I'm sorry if this isn't the right place to ask/discuss this. I tried using both the KFAC and EKFAC preconditioners in combination with an SGLD optimizer (basically SGD + Gaussian noise) to perform K-FAC SGLD. But the result is not as expected: the training loss increases consistently over time. I'm stuck because this observation didn't change even though I tried increasing/decreasing the values or flipping the boolean values of the various preconditioner hyperparameters.

The model that I used contains 2 Conv2d layers and 2 Linear layers. Below are the loss and accuracy plots, with hyperparameters as follows:

  • learning rate: 0.0005
  • eps: 0.5
  • sua: True
  • pi: True
  • update_freq: 5
  • alpha: 1.
  • constraint_norm: True

[plots: ksgld_loss, ksgld_accuracy]

Changing the optimizer to classic SGD without changing the preconditioner's hyperparameters worked incredibly well though. It achieved 99.99% accuracy on MNIST after just 3 epochs.

Any suggestion on how to tweak the hyperparameters, or in which direction? Or do you think this way of preconditioning SGLD with KFAC/EKFAC won't work for some reason?
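
For reference, the training step I'm describing looks roughly like this (a sketch; the SGLD noise injection and its scale are my own choices, not part of this repo):

import math
import torch

def sgld_step(network, criterion, optimizer, preconditioner, inputs, targets, lr):
    # One step of "preconditioned SGLD": EKFAC/KFAC rescales the gradients in
    # place, the SGD optimizer applies p <- p - lr * grad, then Gaussian noise
    # is added to the parameters (one common SGLD parameterization uses
    # variance 2 * lr; treat this scale as an assumption to tune).
    optimizer.zero_grad()
    loss = criterion(network(inputs), targets)
    loss.backward()
    preconditioner.step()
    optimizer.step()
    with torch.no_grad():
        for p in network.parameters():
            p.add_(torch.randn_like(p) * math.sqrt(2.0 * lr))
    return loss.item()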
