
pytorch-timeseries's Introduction

pytorch-timeseries

PyTorch implementations of deep neural networks for time series classification.

Currently, the following papers are implemented:

Beyond the UCR/UEA archive

There are two ways to use the Inception Time model on your own data:

  1. Copy the models, and write new training loops
  2. Extend the base trainer by implementing an initializer, get_loaders and save. This allows the shared training code (which handles both single- and multi-class outputs) to be reused; the UCRTrainer is an example of this.
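Option 2 might look like the following sketch. BaseTrainer is stubbed out here so the example is self-contained; in the repo you would subclass the actual base trainer (the UCRTrainer is the reference example), and the random tensors stand in for your own dataset.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

class BaseTrainer:  # stand-in for the repo's base class, not its real API
    def __init__(self):
        self.model = None

class MyTrainer(BaseTrainer):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def get_loaders(self, batch_size=32):
        # Replace with your own data; random tensors keep the sketch runnable.
        X = torch.randn(64, 1, 128)     # (samples, channels, series length)
        y = torch.randint(0, 2, (64,))  # binary labels
        dataset = TensorDataset(X, y)
        train = DataLoader(dataset, batch_size=batch_size, shuffle=True)
        val = DataLoader(dataset, batch_size=batch_size)
        return train, val

    def save(self, path):
        torch.save(self.model.state_dict(), path)
```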

Setup

Anaconda running Python 3.7 is used as the package manager. To get set up with an environment, install Anaconda and (from this directory) run

conda env create -f environment.yml

This will create an environment named inception with all the necessary packages to run the code. To activate this environment, run

conda activate inception

In addition, the UCR/UEA archive must be downloaded and stored in the data folder.

Scripts

Example scripts showing how to train and evaluate the model can be found in the scripts folder.

pytorch-timeseries's People

Contributors

gabrieltseng


pytorch-timeseries's Issues

InceptionModel fails on kernel sizes of 3 or less

Replicate:

from inception import InceptionModel
model = InceptionModel(1, 2, 2, 0, 3, num_pred_classes=2)

Traceback:

---------------------------------------------------------------------------
ZeroDivisionError                         Traceback (most recent call last)
<ipython-input-10-7484f8b2266d> in <module>
      4 
      5 # Kernel
----> 6 model = InceptionModel(1, 2, 2, 0, 3, num_pred_classes=2)
      7 
      8 model(X)

~/Documents/github/COMP4660-Assignment-1/submission/pytorch-timeseries/src/models/inception.py in __init__(self, num_blocks, in_channels, out_channels, bottleneck_channels, kernel_sizes, use_residuals, num_pred_classes)
     62         )
     63 
---> 64         self.blocks = nn.Sequential(*[
     65             InceptionBlock(in_channels=channels[i], out_channels=channels[i + 1],
     66                            residual=use_residuals[i], bottleneck_channels=bottleneck_channels[i],

~/Documents/github/COMP4660-Assignment-1/submission/pytorch-timeseries/src/models/inception.py in <listcomp>(.0)
     63 
     64         self.blocks = nn.Sequential(*[
---> 65             InceptionBlock(in_channels=channels[i], out_channels=channels[i + 1],
     66                            residual=use_residuals[i], bottleneck_channels=bottleneck_channels[i],
     67                            kernel_size=kernel_sizes[i]) for i in range(num_blocks)

~/Documents/github/COMP4660-Assignment-1/submission/pytorch-timeseries/src/models/inception.py in __init__(self, in_channels, out_channels, residual, stride, bottleneck_channels, kernel_size)
    112         # Conv1dSamePadding(in_channels=channels[1], out_channels=channels[1 + 1],
    113         #                       kernel_size=kernel_size_s[1], stride=stride, bias=False)
--> 114         # print(2)
    115         # Conv1dSamePadding(in_channels=channels[2], out_channels=channels[2 + 1],
    116         #                       kernel_size=kernel_size_s[2], stride=stride, bias=False)

~/.local/lib/python3.8/site-packages/torch/nn/modules/conv.py in __init__(self, in_channels, out_channels, kernel_size, stride, padding, dilation, groups, bias, padding_mode)
    248         padding_ = _single(padding)
    249         dilation_ = _single(dilation)
--> 250         super(Conv1d, self).__init__(
    251             in_channels, out_channels, kernel_size_, stride_, padding_, dilation_,
    252             False, _single(0), groups, bias, padding_mode)

~/.local/lib/python3.8/site-packages/torch/nn/modules/conv.py in __init__(self, in_channels, out_channels, kernel_size, stride, padding, dilation, transposed, output_padding, groups, bias, padding_mode)
    110         else:
    111             self.register_parameter('bias', None)
--> 112         self.reset_parameters()
    113 
    114     def reset_parameters(self) -> None:

~/.local/lib/python3.8/site-packages/torch/nn/modules/conv.py in reset_parameters(self)
    113 
    114     def reset_parameters(self) -> None:
--> 115         init.kaiming_uniform_(self.weight, a=math.sqrt(5))
    116         if self.bias is not None:
    117             fan_in, _ = init._calculate_fan_in_and_fan_out(self.weight)

~/.local/lib/python3.8/site-packages/torch/nn/init.py in kaiming_uniform_(tensor, a, mode, nonlinearity)
    374     fan = _calculate_correct_fan(tensor, mode)
    375     gain = calculate_gain(nonlinearity, a)
--> 376     std = gain / math.sqrt(fan)
    377     bound = math.sqrt(3.0) * std  # Calculate uniform bounds from standard deviation
    378     with torch.no_grad():

ZeroDivisionError: float division by zero

Potential solution:
Add the following assertion to the __init__ method of InceptionBlock to make the failure easier for users to diagnose:

assert kernel_size > 3, "Kernel size must be strictly greater than 3"
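The failure can be reproduced without PyTorch. The commented-out kernel_size_s lines in the traceback suggest InceptionBlock derives three kernel sizes from the one it is given, roughly by repeated halving (an assumption based on the traceback, not the repo's confirmed code); for kernel sizes of 3 or less, the smallest derived size reaches zero, so kaiming_uniform_ divides by a fan of 0.

```python
# Hypothetical reconstruction of how the three per-branch kernel sizes
# are derived from the single kernel_size argument (repeated halving).
def derived_kernel_sizes(kernel_size):
    return [kernel_size // (2 ** i) for i in range(3)]

print(derived_kernel_sizes(8))  # [8, 4, 2] -- all valid
print(derived_kernel_sizes(3))  # [3, 1, 0] -- a zero-sized kernel, hence
                                # the ZeroDivisionError inside kaiming_uniform_
```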

InceptionTime blocks incorrectly implemented

The general flow of data for an inception block is:

input data -> bottleneck -> [filters of different sizes] -> output

In your implementation, the filters of different sizes are applied sequentially like this:

input data -> bottleneck -> filter_size_a -> filter_size_b -> filter_size_c  -> output

In the reference implementation, the flow is:

input data -> bottleneck -> { -> filter_size_a   -> }
                            { -> filter_size_b   -> } -> concat -> output
                            { -> filter_size_c   -> }
                            { -> max_pool_filter -> }
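A parallel block along those lines might look like this sketch. The class name, branch kernel sizes and channel widths are illustrative, not the repo's actual code; the key point is that every branch sees the same bottleneck output and the branch outputs are concatenated, not chained.

```python
import torch
import torch.nn as nn

class ParallelInceptionBlock(nn.Module):
    def __init__(self, in_channels, out_channels, bottleneck_channels=32,
                 kernel_sizes=(9, 19, 39)):
        super().__init__()
        self.bottleneck = nn.Conv1d(in_channels, bottleneck_channels,
                                    kernel_size=1, bias=False)
        # Parallel branches, all applied to the *same* bottleneck output
        self.branches = nn.ModuleList([
            nn.Conv1d(bottleneck_channels, out_channels, kernel_size=k,
                      padding=k // 2, bias=False)   # odd k keeps length fixed
            for k in kernel_sizes
        ])
        # Max-pool branch followed by a 1x1 convolution
        self.pool_branch = nn.Sequential(
            nn.MaxPool1d(kernel_size=3, stride=1, padding=1),
            nn.Conv1d(in_channels, out_channels, kernel_size=1, bias=False),
        )
        self.bn = nn.BatchNorm1d(out_channels * (len(kernel_sizes) + 1))

    def forward(self, x):
        z = self.bottleneck(x)
        outs = [branch(z) for branch in self.branches] + [self.pool_branch(x)]
        return torch.relu(self.bn(torch.cat(outs, dim=1)))  # concat, not chain
```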

Different implementation for ResNet

I've noticed some differences to the original ResNet code at https://github.com/cauchyturing/UCR_Time_Series_Classification_Deep_Learning_Baseline/blob/master/ResNet.py.

You're missing a BatchNorm right at the start of the network, and the residual should be added before applying ReLU in the final layer of each ResNet block, not after it.

In my experiments, adding the above two changes seems to generate results that better match the original results.
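The two changes might be sketched as follows. The layer sizes and class names are illustrative rather than taken from either codebase: a BatchNorm is applied to the raw input at the start of the network, and each block's residual joins before the final ReLU.

```python
import torch
import torch.nn as nn

class ResNetBlock(nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv1d(in_channels, out_channels, 7, padding=3, bias=False),
            nn.BatchNorm1d(out_channels),
            nn.ReLU(),
            nn.Conv1d(out_channels, out_channels, 5, padding=2, bias=False),
            nn.BatchNorm1d(out_channels),
            nn.ReLU(),
            nn.Conv1d(out_channels, out_channels, 3, padding=1, bias=False),
            nn.BatchNorm1d(out_channels),  # note: no ReLU yet
        )
        self.shortcut = nn.Sequential(
            nn.Conv1d(in_channels, out_channels, 1, bias=False),
            nn.BatchNorm1d(out_channels),
        )

    def forward(self, x):
        # Change 2: the residual is added before the final ReLU, not after it
        return torch.relu(self.layers(x) + self.shortcut(x))

class ResNet(nn.Module):
    def __init__(self, in_channels, num_classes):
        super().__init__()
        self.input_bn = nn.BatchNorm1d(in_channels)  # change 1: input BatchNorm
        self.blocks = nn.Sequential(
            ResNetBlock(in_channels, 64),
            ResNetBlock(64, 128),
            ResNetBlock(128, 128),
        )
        self.head = nn.Linear(128, num_classes)

    def forward(self, x):
        z = self.blocks(self.input_bn(x))
        return self.head(z.mean(dim=-1))  # global average pooling over time
```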

InceptionBlock has no Maxpool

I didn't find a MaxPool operation inside the InceptionBlock, and the outputs of the different convolution layers are not concatenated.
