Comments (5)
Hi @mcvageesh,
We have updated the NBEATSx paper with a summary of our findings from the hyperparameter exploration in Appendix A.5. Hope it helps.
from nbeatsx.
Hi @kdgutier ,
Thanks :), I did go through Section A.5 before. The main reason I requested the hyperparameter files was to avoid running the hyperparameter search all over again, and also to compare the validation MAE of the best model found by the search against another model I am trying to use. The authors Lago et al. (https://github.com/jeslago/epftoolbox) provided this in their toolbox and it was really helpful, so if you are still working on this and have time, please upload the files - it would be really helpful!
I would advise running hyperparameter selection again on an informed/restricted space based on the suggestions in Appendix A.5.
As we mention in Appendix A.3 of the paper, you can achieve even better results with a fraction of the hyperparameter exploration steps.
The most expensive part of the forecasting pipeline is the rolled-windows evaluation, because of the model recalibration.
We did it sequentially, but if you have the computational resources I would suggest trying to parallelize the test evaluation.
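Since each test window's recalibration is independent of the others, the windows map naturally onto a pool of workers. A minimal sketch of that idea (hypothetical code, not the repository's pipeline; `recalibrate_and_forecast` is a toy naive model standing in for the real recalibration step):

```python
# Hypothetical sketch of parallelizing the rolled-window test evaluation.
# Each window refits on the data before it and forecasts the next day; the
# windows are independent, so they are submitted to a worker pool.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def recalibrate_and_forecast(series, start):
    """Toy stand-in for recalibration: 'fit' on everything before `start`,
    then forecast the next day as the mean of the last observed day."""
    history = series[:start]
    return float(history[-24:].mean())

def parallel_rolled_eval(series, window_starts, max_workers=4):
    with ThreadPoolExecutor(max_workers=max_workers) as ex:
        futures = [ex.submit(recalibrate_and_forecast, series, s)
                   for s in window_starts]
        # Collecting in submission order keeps forecasts aligned with windows.
        return [f.result() for f in futures]

series = np.arange(24 * 30, dtype=float)           # 30 days of hourly data
window_starts = list(range(24 * 20, 24 * 30, 24))  # last 10 days as test windows
forecasts = parallel_rolled_eval(series, window_starts)
```

For recalibration that is CPU-bound in pure Python, a ProcessPoolExecutor (same interface) sidesteps the GIL; with GPU training, one process per device is the more common layout.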
Okay, thank you!
Hi again!
I am a bit confused about the data shuffling operation used to split the train and validation sets.
Consider the function train_val_split(len_series, offset, window_sampling_limit, n_val_weeks, ds_per_day) in src/utils/experiment/utils_experiment.py, with:
Let len_series = 4 * 365 * 24
offset = 0
window_sampling_limit = 4 * 365 * 24
n_val_weeks = 42
ds_per_day = 24
Then, on running the function, we obtain len(train_days) = 1096 and len(validation_days) = 364.
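As a quick sanity check on the numbers above (plain arithmetic, not code from the repository), the arguments imply 1460 days in total, matching the reported split:

```python
# Day counts implied by the arguments above.
len_series = 4 * 365 * 24         # four years of hourly observations
ds_per_day = 24
n_days = len_series // ds_per_day  # total number of days in the series
# 1460 days in total, consistent with 1096 train + 364 validation days
```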
Suppose, for example, that day 70 is picked as the start of a validation week, so days 70, 71, 72, 73, 74, 75, and 76 are selected as validation days, while day 77 is not picked and hence remains in the training set.
Wouldn't that cause data contamination, in the sense that the lagged values (days 70-76) used to predict day 77 during training are part of the validation set?
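The concern is easy to reproduce with a toy day-level split (hypothetical code, not the repository's sampler): drawing 364 of 1460 days at random into validation, most training days end up with at least one validation day among their previous seven days.

```python
# Toy reproduction of the leakage concern (not the repository's sampler):
# with a fully random day-level split, most training days use at least one
# validation day among their lagged inputs.
import numpy as np

rng = np.random.default_rng(0)
n_days = 1460                    # four years of daily targets
val_days = set(rng.choice(n_days, size=364, replace=False).tolist())
train_days = [d for d in range(n_days) if d not in val_days]

# A training day d is "contaminated" if any of its 7 lag days is in validation.
contaminated = [d for d in train_days
                if any((d - lag) in val_days for lag in range(1, 8))]
frac = len(contaminated) / len(train_days)
```

With validation days drawn in week-long blocks rather than individually, the overlap concentrates at block boundaries, but it does not disappear.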
Is this problem avoided in some way in the code? Or would you say that this is a necessary evil that cannot be avoided without incurring significant computational time (rolling validation) or a significant reduction in training samples (blocked validation)?
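One common middle ground between those two options, sometimes called purging, is to drop from the training set any day whose lag window overlaps validation. A hypothetical sketch (not the repository's code):

```python
# Hypothetical sketch of "purging": drop training days whose lagged inputs
# (previous n_lags days) overlap the validation set, trading some training
# samples for a leakage-free split.
import numpy as np

def purge_train_days(train_days, val_days, n_lags=7):
    val = set(val_days)
    return [d for d in train_days
            if not any((d - lag) in val for lag in range(1, n_lags + 1))]

rng = np.random.default_rng(0)
n_days = 1460
val_days = set(rng.choice(n_days, size=364, replace=False).tolist())
train_days = [d for d in range(n_days) if d not in val_days]
purged = purge_train_days(train_days, val_days)
```

When validation days are sampled in week blocks, only the few training days immediately after each block get purged, so the loss of samples is far smaller than with fully random validation days.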
Thank you!