Comments (9)
cc @kashif
from transformers.
thanks! having a look
Appreciate it. Please let me know if any additional information would help with the debugging.
If I understand correctly, the issue is in this code snippet:
sample = {
    'past_values': self.X[idx],
    'past_time_features': torch.zeros((time_dim, feature_dim)),
    'past_observed_mask': self.observed_mask[idx],
    'future_values': future_values,
    'future_time_features': future_values,
}
Appreciate your timely support!
@sivavishnubramma indeed the issue is in the sizes of the inputs... so what are the shapes of "past_values" and "future_values" tensors?
also note that the future_time_features are the known covariates in the prediction window (i.e. the known date-time covariates, for example, or any static covariates)
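To illustrate the distinction above, here is a minimal sketch (the covariate names and values are my own assumptions, not from this thread): future_time_features carries the *known* calendar covariates over the prediction window, while future_values carries the targets there.

```python
import torch

prediction_length = 1   # forecasting one step ahead
num_time_features = 2   # e.g. day_of_week, month_of_year
num_targets = 2         # e.g. two target columns

# Known covariates in the prediction window: suppose the next timestep
# falls on a Wednesday (encoded 2) in March (encoded 3). These are knowable
# in advance, unlike the targets.
future_time_features = torch.tensor([[2.0, 3.0]])  # (prediction_length, num_time_features)

# The targets in the same window go into future_values, a *separate* tensor
# (zeros here just as a shape placeholder).
future_values = torch.zeros((prediction_length, num_targets))
```

The point is that the two tensors hold different things and in general have different trailing dimensions, so passing future_values in for future_time_features (as in the snippet earlier in the thread) mixes them up.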
I added a few time-variant features to the code. Here are the shapes of the parameters:
Sample 523 - past_values shape: torch.Size([10, 74]), past_time_features shape: torch.Size([10, 74]), past_observed_mask shape: torch.Size([10, 74]), future_values shape: torch.Size([10, 74])
Sample 186 - past_values shape: torch.Size([10, 74]), past_time_features shape: torch.Size([10, 74]), past_observed_mask shape: torch.Size([10, 74]), future_values shape: torch.Size([10, 74])
Sample 441 - past_values shape: torch.Size([10, 74]), past_time_features shape: torch.Size([10, 74]), past_observed_mask shape: torch.Size([10, 74]), future_values shape: torch.Size([10, 74])
FYI: prior to adding the time-variant features, the shape was [10, 70].
The error messages are somewhat challenging to decode for someone new to this area. I have a more specific question regarding dataset construction for Time Series Transformer model training:
- I have a dataset from a CSV file with columns: {Date, Open, Close, Volume, Price_change, Bull_Bear_Signal}.
- I've added the following time-related features: {'day_of_week', 'month_of_year'}, increasing the total feature count to 8, including the Date.
- I then removed the Date column, leaving 7 features in the dataset.
- The look-back period is set to 10, and the prediction length is 1.
- The model outputs 2 predictions: {Price_change, Bull_Bear_Signal}.
Could you please guide me on how to correctly structure the following dataset so that I avoid the previous errors and the model trains successfully?
(OR)
If there is a builder/adapter class that gets the user requirement similar to the above list and produces the dataset necessary to train the model, please let me know.
sample = {
    'past_values': <>,
    'past_time_features': <>,
    'past_observed_mask': <>,
    'future_values': <>,
    'future_time_features': <>,
}
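I'm not aware of an official builder/adapter class for this, but following the conventions in the blog post linked below, a minimal sketch for the scenario described above might look like the following. The class name, the fake data, and the split of columns (2 targets: Price_change, Bull_Bear_Signal; 2 calendar covariates: day_of_week, month_of_year) are my own assumptions for illustration; the key point is the shapes.

```python
import torch
from torch.utils.data import Dataset

class StockDataset(Dataset):
    """Hypothetical sketch: targets go in past_values/future_values,
    the known calendar covariates go in past/future_time_features."""

    def __init__(self, targets, time_features, context_length=10, prediction_length=1):
        # targets: (T, 2) tensor, time_features: (T, 2) tensor
        self.targets = targets
        self.time_features = time_features
        self.context_length = context_length
        self.prediction_length = prediction_length

    def __len__(self):
        # number of full (context + prediction) windows in the series
        return len(self.targets) - self.context_length - self.prediction_length + 1

    def __getitem__(self, idx):
        past_end = idx + self.context_length
        future_end = past_end + self.prediction_length
        return {
            # targets only -- shape (context_length, 2)
            'past_values': self.targets[idx:past_end],
            # known covariates over the same window -- (context_length, 2)
            'past_time_features': self.time_features[idx:past_end],
            # 1.0 where a value was observed -- same shape as past_values
            'past_observed_mask': torch.ones_like(self.targets[idx:past_end]),
            # targets in the prediction window -- (prediction_length, 2)
            'future_values': self.targets[past_end:future_end],
            # known covariates in the prediction window (NOT future_values)
            'future_time_features': self.time_features[past_end:future_end],
        }

# quick shape check with fake data
T = 50
ds = StockDataset(torch.randn(T, 2), torch.randn(T, 2))
sample = ds[0]
```

The remaining columns (Open, Close, Volume) would go into a separate dynamic-real-features channel rather than into past_values, if I'm reading the blog post's conventions correctly; that part is an assumption worth double-checking against the post.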
Let me know if you need any additional details.
@sivavishnubramma we do have a blog post explaining the inputs and what the batch samples should be here: https://huggingface.co/blog/time-series-transformers
did you manage to have a read of the blog post?