- CALLARD Baptiste (MVA)
- DE SENNEVILLE Adhemar (MVA)
As part of the course "Apprentissage pour les séries temporelles" (Machine Learning for Time Series) taught by L. OUDRE, we studied the paper "Soft-DTW: a Differentiable Loss Function for Time-Series".
This report delves into Soft-Dynamic Time Warping (Soft-DTW), a differentiable version of Dynamic Time Warping suitable for gradient-based optimization in machine learning. It covers a reimplementation of the method, theoretical and practical analysis, and experiments on datasets such as ArrowHead and ECG200. The findings include:
- An optimized, PyTorch-compatible Soft-DTW implementation
- Applications to barycenter averaging (a minimal sketch is given after the usage example below)
- K-Means clustering
- Anomaly detection
The report concludes by discussing the potential and the computational challenges of Soft-DTW, and suggests directions for future research.
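For context, the key idea of Soft-DTW is to replace the hard minimum in the DTW recursion with a smooth soft-minimum, which is what makes the loss differentiable everywhere:

$$
{\min}^{\gamma}(a_1, \dots, a_n) =
\begin{cases}
\min_{i \leq n} a_i, & \gamma = 0, \\
-\gamma \log \sum_{i=1}^{n} e^{-a_i / \gamma}, & \gamma > 0.
\end{cases}
$$

As $\gamma \to 0$ the soft-minimum recovers classical DTW; larger values of $\gamma$ yield a smoother loss surface.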
Our code is compatible with native PyTorch and plugs into any standard training loop. We override the backward pass for efficiency.
```python
import torch
from tslearn.datasets import UCR_UEA_datasets

from DTWLoss_CUDA import DTWLoss

# load data
ucr = UCR_UEA_datasets()
X_train, y_train, X_test, y_test = ucr.load_dataset("SonyAIBORobotSurface2")

# convert to torch and track gradients on the inputs
X_train = torch.from_numpy(X_train).float().requires_grad_(True)

loss = DTWLoss(gamma=0.1)
optimizer = torch.optim.SGD([X_train], lr=1e-2)  # any torch optimizer; SGD is only an example

##############
# your code ##
##############

# Soft-DTW between the first two series, batched as (1, length, dim)
value = loss(X_train[0].unsqueeze(0), X_train[1].unsqueeze(0))

# standard PyTorch backward/step cycle
optimizer.zero_grad()
value.backward()
optimizer.step()
```
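As an illustration of the barycenter averaging application listed above, here is a minimal sketch: a candidate barycenter is optimized by gradient descent on the sum of Soft-DTW losses to a set of series. It reuses the `DTWLoss` API from the usage example; the subset size, optimizer, learning rate, and iteration count are illustrative assumptions, not the settings used in the report.

```python
import torch
from tslearn.datasets import UCR_UEA_datasets

from DTWLoss_CUDA import DTWLoss

# the hyperparameters below are illustrative assumptions, not the report's settings
ucr = UCR_UEA_datasets()
X_train, _, _, _ = ucr.load_dataset("SonyAIBORobotSurface2")
X = torch.from_numpy(X_train[:10]).float()  # (10, length, dim) subset of series

# initialize the candidate barycenter from one of the series and optimize it
barycenter = X[0:1].clone().requires_grad_(True)

loss = DTWLoss(gamma=0.1)
optimizer = torch.optim.Adam([barycenter], lr=1e-2)

for _ in range(100):
    optimizer.zero_grad()
    # sum of Soft-DTW losses from the candidate barycenter to every series
    total = sum(loss(barycenter, X[i : i + 1]) for i in range(X.shape[0]))
    total.backward()
    optimizer.step()
```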
Reference: Cuturi, Marco and Blondel, Mathieu. "Soft-DTW: a Differentiable Loss Function for Time-Series." In International Conference on Machine Learning (ICML), 2017.