
lukashedegaard / continual-inference

A Python library for Continual Inference Networks in PyTorch

Makefile 0.85% Python 97.88% C++ 1.28%
pytorch deep-learning cnn continual-inference convolutional-neural-networks deep-neural-networks efficiency machine-learning neural-networks recurrent-neural-network

continual-inference's Introduction

👋 Hi, I'm Lukas Hedegaard

PostDoc at Aarhus University, Denmark, researching Deep Learning, network acceleration, and Transfer Learning applied to Computer Vision and Natural Language Processing.

Apart from doing ML research projects (like real-time Human Activity Recognition using CoX3D), I like to package code up nicely and open-source it whenever it may be of value. Open-source libraries include:

  • Continual Inference [๐Ÿ, C++] downloads - Building blocks for Continual Inference Networks in PyTorch.
  • Ride [๐Ÿ] downloads - Training wheels, side rails, and helicopter parent for your Deep Learning projects in PyTorch.
  • OpenDR [๐Ÿ, C++] downloads - A modular, open and non-proprietary toolkit for core robotic functionalities by harnessing deep learning.
  • DatasetOps [๐Ÿ] downloads - Fluent dataset operations, compatible with your favorite libraries.
  • PyTorch Benchmark [๐Ÿ] downloads - Easily benchmark PyTorch model FLOPs, latency, throughput, allocated gpu memory and energy consumption.
  • Co-Rider [๐Ÿ] downloads - Tiny configuration library tailored for the Ride ecosystem.
  • Supers [๐Ÿ] downloads - Call a function in all superclasses using supers(self).foo(42).
  • react-native-svg-pan-zoom [JS] downloads - Pan-zoom via two-finger "Google Maps"-style pinch and drag gestures.
  • redux-maybe [JS] downloads - Nodejs Package for attaching callback functions to redux messages.
  • redux-blabber [JS] downloads - Redux store enhancer for synchronizing states and actions across store instances.

Find me 🌍

continual-inference's People

Contributors

lukashedegaard


continual-inference's Issues

Weight loading

Hi Lukas,

I've played around a bit with your framework; it looks great and speeds up my inference stage a lot.

But I'm not really sure what the optimal way is to train the model and then load the weights for inference.

What I've done so far:

  • I have a PyTorch (non-continual) model with trained weights (all layers are supported by your framework, and there is no padding in the time dimension).
  • I have (re-)implemented the model in your framework.
  • I realized I can't just load the weights from the PyTorch model into the continual model, since the weight names are different.
  • I have manually mapped the weights from the PyTorch model to the weights of the continual model. This works now, but it's really a big mess and it took me way too long.

Is there an easier way? How are you doing it? ;)

Best wishes,
Sean
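
One possible approach, sketched here as an editorial aside (not from the thread): keep the trained state_dict from the non-continual model and remap its parameter names to those of the continual re-implementation before loading. The helper and the mapping entries below are hypothetical and must be adapted to the two architectures.

from torch import nn

def load_remapped(co_model: nn.Module, trained_state: dict, key_map: dict) -> None:
    # Rename keys of the trained (non-continual) state_dict to the continual
    # model's parameter names; unmapped keys are passed through unchanged.
    remapped = {key_map.get(k, k): v for k, v in trained_state.items()}
    missing, unexpected = co_model.load_state_dict(remapped, strict=False)
    print("Missing keys:", missing)
    print("Unexpected keys:", unexpected)

# Hypothetical usage; the file name and key names are placeholders:
# trained_state = torch.load("trained.pt", map_location="cpu")
# load_remapped(co_model, trained_state, {"conv1.weight": "0.weight", "conv1.bias": "0.bias"})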

Docs

The project doesn't supply any documentation other than the README (and the code itself).

Before reaching v1.0.0, this is an absolute must-have.

Receptive field property

Motivation

It would be nice to have a receptive field property, as this has important implications for the model dynamics.

Behaviour

Discussion

It may not be feasible to implement it for dimensions other than the temporal one.
Adaptive average pooling in the spatial dimensions would effectively make the receptive field infinite.
In the temporal dimension, the receptive field would be finite for all current modules (v0.11.0). Adding RNN support (#4) would make the temporal receptive field infinite in some cases as well.

Proposed behaviour

If the receptive field is finite, return an integer value. If it is infinite, return math.inf.

Implementation

The property's implementation could be close to that of the delay property.
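
A rough sketch (not the library's implementation) of what the property could look like for a single temporal convolution; a container would aggregate over its children and return math.inf as soon as any child's receptive field is unbounded. The class and attribute names below are made up for illustration.

from torch import nn

class CoConvSketch(nn.Module):
    # Illustrative stand-in for a continual temporal convolution.
    def __init__(self, temporal_kernel_size: int, temporal_dilation: int = 1):
        super().__init__()
        self.temporal_kernel_size = temporal_kernel_size
        self.temporal_dilation = temporal_dilation

    @property
    def receptive_field(self) -> int:
        # Finite for a single convolution; an RNN-style module would return math.inf.
        return self.temporal_dilation * (self.temporal_kernel_size - 1) + 1

assert CoConvSketch(3).receptive_field == 3
assert CoConvSketch(3, temporal_dilation=2).receptive_field == 5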

Change default value of pad_end to False

Currently, the pad_end option in forward_steps is set to True by default.

Since forward_steps is mostly intended as an initialisation function or as a way to catch up on missed computational steps, it would make more sense not to return values corresponding to a padded end.

The pad_end option is mostly there to serve as an easy way to check the forward and forward_steps implementations against one another.
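
A hedged usage sketch of the intended flow, assuming the package imports as continual and that forward_step/forward_steps behave as described in this issue; exact module names and signatures should be checked against the library.

import torch
import continual as co

net = co.Conv1d(in_channels=2, out_channels=2, kernel_size=3)

# Prime the internal state on an initial clip of shape (batch, channels, time).
# With the proposed default, no values corresponding to a padded end are returned.
clip = torch.randn(1, 2, 8)
buffered_outputs = net.forward_steps(clip, pad_end=False)

# Continue afterwards one step at a time; each step has shape (batch, channels).
frame = torch.randn(1, 2)
y = net.forward_step(frame)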

Support GroupNorm

I've been wondering whether this framework supports GroupNorm.
Since LayerNorm is supported, it seems to me that it should at least be possible?

Great work, by the way.
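
Not a maintainer answer, but one hedged possibility: nn.GroupNorm keeps no running statistics, so a frame-wise application may work if the library's forward_stepping wrapper (assumed here to accept arbitrary shape-preserving modules) can host it. Note, however, that normalizing each frame separately computes statistics over that frame only, which differs from normalizing across a clip's full temporal extent.

import torch
from torch import nn
import continual as co

# Assumption: co.forward_stepping adds forward_step/forward_steps to an nn.Module.
frame_norm = co.forward_stepping(nn.GroupNorm(num_groups=4, num_channels=32))

frame = torch.randn(1, 32, 16, 16)  # (batch, channels, H, W) for a single time step
y = frame_norm.forward_step(frame)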

Advanced routing

In advanced use cases, it would be beneficial to have a set of modules that can handle parallel streams.
The old co.Parallel essentially performs a BroadcastReduce operation and could be named accordingly.

A new Parallel module would then take a sequence of tensors, map one to each of its children, and repack the outputs in a list.
Moreover, we would need

  • Broadcast: one stream to multiple streams
  • Reduce: multiple streams to one stream

and could also consider

  • Route: many-to-many mapping of parallel streams
  • Shuffle: shuffle the order of streams
  • Split: split the input into multiple streams (e.g. by channel)

A composition sketch of the proposed modules follows below.
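
Design sketch only: Broadcast, the new Parallel, and Reduce are proposals and did not exist at the time of writing; the branch modules below are placeholders.

import continual as co

branch_a = co.Conv3d(32, 32, kernel_size=(3, 3, 3))  # placeholder branch
branch_b = co.Conv3d(32, 32, kernel_size=(3, 3, 3))  # placeholder branch

net = co.Sequential(
    co.Broadcast(2),                   # one stream -> two identical streams
    co.Parallel(branch_a, branch_b),   # stream i is mapped to child i
    co.Reduce("sum"),                  # two streams -> one, e.g. by element-wise sum
)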

Context managers

An idea worth considering is to create context managers to change network behaviour.

with co.no_temporal_padding():
    y = net(x)

with co.temporal_pooling_size(16):
    y = net(x)
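
A minimal sketch (not the library's API) of how such a context manager could work: temporarily override an attribute on the affected modules and restore it on exit. The temporal_padding attribute name is hypothetical, and the network is passed explicitly rather than discovered globally.

from contextlib import contextmanager

@contextmanager
def no_temporal_padding(net):
    # Hypothetical attribute name; real modules may store their padding differently.
    affected = [m for m in net.modules() if hasattr(m, "temporal_padding")]
    saved = [m.temporal_padding for m in affected]
    try:
        for m in affected:
            m.temporal_padding = 0
        yield net
    finally:
        for m, p in zip(affected, saved):
            m.temporal_padding = p

Usage would then mirror the proposal: with no_temporal_padding(net): y = net(x).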

Container instance naming

For large networks built with the Sequential and BroadcastReduce modules, the __repr__ string may become very cluttered.
It might be nice to have the option of overriding this __repr__ with a custom name.

E.g.

my_module = co.Sequential(
    ...,
    name="MyModule",
)

assert my_module.__repr__() == "MyModule()"

TorchScript support

Hi,

are there any plans to make the continual model (i.e. with call_mode == "forward_step") exportable as TorchScript?
My use case would be using the framework in a C++ environment.
I guess the TensorPlaceholder concept would need to be changed?

Best

Clear memory

Hi @LukasHedegaard,

Is there any way to quickly reset the memory when the visual context changes, for example when I feed one video as input and then switch to another? At the same time, I want to reset the delay and ignore the previous context.
Should I fill this delay with empty frames? Forgive me in advance if I missed it in the documentation.

Thank you
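
For reference, a hedged sketch of the pattern being asked about, assuming the continual modules expose a state-clearing method (called clean_state() below; the name is an assumption and should be checked against the actual API):

# Process two independent clips without carrying state from the first into the second.
for clip in (video_a, video_b):   # each of shape (batch, channels, time, H, W)
    net.clean_state()             # assumed reset of internal buffers and delay
    predictions = net.forward_steps(clip, pad_end=False)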

Containers don't account for TensorPlaceholder

Currently, the container modules do not take into account whether a contained module outputs a TensorPlaceholder during a step rather than an actual Tensor.

This makes co.Sequential unusable for modules with temporal stride larger than 1.
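
Pseudo-logic illustrating the handling a container would need (not the library's implementation): forward_step must short-circuit as soon as a child yields a TensorPlaceholder instead of a Tensor.

def forward_step(self, x):
    for module in self:
        x = module.forward_step(x)
        if isinstance(x, TensorPlaceholder):
            # No output is ready at this step (e.g. a strided child skipped it),
            # so later children cannot be invoked with a real tensor.
            return TensorPlaceholder()
    return x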

Continual pooling should use same interface as torch.nn pooling

Currently, separate temporal_xxx parameters must be specified in the definition of pooling modules, e.g.:

nn.MaxPool3d(kernel_size=(2, 2, 2))

co.MaxPool3d(temporal_kernel_size=2, kernel_size=(2, 2))

Ideally, both of the above should be initialised as MaxPool3d(kernel_size=(2, 2, 2)).
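
One way the unified interface could be realized internally (an illustration, not the library's code) is to split a single kernel_size argument into its temporal and spatial parts:

def split_kernel_size(kernel_size):
    # Treat the first entry as the temporal kernel size and the rest as spatial.
    if isinstance(kernel_size, int):
        return kernel_size, kernel_size
    temporal, *spatial = kernel_size
    return temporal, tuple(spatial) if len(spatial) > 1 else spatial[0]

assert split_kernel_size((2, 2, 2)) == (2, (2, 2))
assert split_kernel_size(3) == (3, 3)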

Auto shrink

Hi there again,

is this actually correct?

return input[:, :, self.delay : -self.delay]

To my understanding, it does not make sense that for auto_shrink="lagging" we throw away self.delay samples, while for auto_shrink="centered" we throw away 2*self.delay samples.

I also think a third option

if self.auto_shrink == "online":
    return input[:, :, self.delay:]

would make sense, at least I could use it ;)
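
For reference, the slicing shown above and the proposed "online" option written side by side (assuming a (batch, channels, time, ...) tensor and a per-module delay):

centered = input[:, :, self.delay : -self.delay]  # drops `delay` frames at both ends (2 * delay in total)
online = input[:, :, self.delay :]                # proposed: drop only the leading `delay` frames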

Additional BatchNorm modules

So far, there is only a single BatchNorm module, and it hasn't been tested thoroughly.
Additional modules and tests should be added.
