
kwuking / timemixer

[ICLR 2024] Official implementation of "TimeMixer: Decomposable Multiscale Mixing for Time Series Forecasting"

Home Page: https://openreview.net/pdf?id=7oLshfEIC2

License: Apache License 2.0

Python 87.11% Shell 12.89%
deep-learning machine-learning time-series time-series-forecasting

timemixer's Introduction

(ICLR'24) TimeMixer: Decomposable Multiscale Mixing for Time Series Forecasting


🙋 Please let us know if you find a mistake or have any suggestions!

🌟 If you find this resource helpful, please consider starring this repository and citing our research:

@inproceedings{wang2023timemixer,
  title={TimeMixer: Decomposable Multiscale Mixing for Time Series Forecasting},
  author={Wang, Shiyu and Wu, Haixu and Shi, Xiaoming and Hu, Tengge and Luo, Huakun and Ma, Lintao and Zhang, James Y and ZHOU, JUN},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2024}
}

Updates

🚩 News (2024.07) TimeMixer has evolved into a large model supporting comprehensive time series analysis, including long-term forecasting, short-term forecasting, anomaly detection, imputation, and classification. In the future, we will further explore additional types of time series analysis tasks and strive to break through the limitations of current long-term forecasting to achieve efficient extreme-long-term time series forecasting.

🚩 News (2024.06) An introduction to TimeMixer in Chinese is now available.

🚩 News (2024.05) The full 28-page version of the TimeMixer paper is now available on arXiv. We have also provided a brief video to facilitate understanding of our work.

🚩 News (2024.05) TimeMixer now supports using future temporal features for prediction. This feature has been well received by the community. You can decide whether to enable it with the parameter use_future_temporal_feature.
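
For example, assuming the long-term forecasting setup (the model_id below is illustrative and the remaining arguments follow the provided scripts), the feature can be enabled when invoking run.py:

python -u run.py \
  --task_name long_term_forecast \
  --is_training 1 \
  --model_id ETTm1_96_96 \
  --model TimeMixer \
  --data ETTm1 \
  --use_future_temporal_feature 1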

🚩 News (2024.03) TimeMixer has been included in [Time-Series-Library] and achieves consistent 🏆 state-of-the-art performance in both long-term and short-term time series forecasting.

🚩 News (2024.03) TimeMixer has added a time series decomposition method based on DFT, as well as a downsampling operation based on 1D convolution.
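
A minimal sketch of what such a DFT-based decomposition can look like (the top-k frequency selection and the function name are illustrative assumptions, not the repository's exact code):

import torch

def dft_decompose_sketch(x, top_k=5):
    # Illustrative DFT-based decomposition: keep the top_k largest non-DC
    # frequencies as the seasonal part; the residual is treated as the trend.
    # x: [B, T, N] real-valued time series.
    xf = torch.fft.rfft(x, dim=1)                  # complex spectrum, [B, T//2+1, N]
    amp = xf.abs()
    amp[:, 0, :] = 0                               # ignore the DC component
    topk = torch.topk(amp, top_k, dim=1).indices   # indices of dominant frequencies
    mask = torch.zeros_like(amp, dtype=torch.bool).scatter_(1, topk, True)
    filtered = xf * mask.to(xf.dtype)              # zero out all other frequencies
    seasonal = torch.fft.irfft(filtered, n=x.size(1), dim=1)
    trend = x - seasonal
    return seasonal, trend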

🚩 News (2024.02) TimeMixer has been accepted as a poster at ICLR 2024.

Introduction

🏆 TimeMixer is a fully MLP-based architecture that takes full advantage of disentangled multiscale time series, achieving consistent SOTA performance in both long-term and short-term forecasting tasks with favorable run-time efficiency.

🌟Observation 1: History Extraction

Seasonal and trend components exhibit significantly different characteristics in time series, and different scales of a time series reflect different properties: seasonal characteristics are more pronounced at fine-grained, micro scales, while trend characteristics are more pronounced at coarse, macro scales. It is therefore necessary to decouple the seasonal and trend components at each scale.
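
To make this concrete, here is a minimal sketch of building the multiscale views and decomposing each of them (assuming moving-average decomposition and average-pool downsampling, mirroring the script defaults such as moving_avg=25 and down_sampling_window=2; the function name is illustrative):

import torch.nn.functional as F

def multiscale_decompose_sketch(x, down_sampling_window=2, down_sampling_layers=3, kernel_size=25):
    # x: [B, T, N]. Returns a list of (seasonal, trend) pairs, finest scale first.
    pairs = []
    cur = x
    for _ in range(down_sampling_layers + 1):
        xt = cur.transpose(1, 2)                          # [B, N, T] for 1D pooling
        # Moving-average trend with replicate padding so the length is preserved.
        pad_left = (kernel_size - 1) // 2
        pad_right = kernel_size - 1 - pad_left
        trend = F.avg_pool1d(F.pad(xt, (pad_left, pad_right), mode='replicate'),
                             kernel_size, stride=1).transpose(1, 2)
        pairs.append((cur - trend, trend))                # seasonal = residual
        # Downsample to the next, coarser scale by average pooling over time.
        cur = F.avg_pool1d(xt, down_sampling_window).transpose(1, 2)
    return pairs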

🌟Observation 2: Future Prediction

Since different scales exhibit complementary predictive capabilities, we integrate the forecasts from different scales to obtain the final prediction.

Overall Architecture

TimeMixer is a fully MLP-based architecture with Past-Decomposable-Mixing (PDM) and Future-Multipredictor-Mixing (FMM) blocks, taking full advantage of disentangled multiscale series in both the past-information extraction and future-prediction phases.

Past Decomposable Mixing

We propose the Past-Decomposable-Mixing (PDM) block to mix the decomposed seasonal and trend components at multiple scales separately.

Empowered by seasonal and trend mixing, PDM progressively aggregates detailed seasonal information from fine to coarse scales and dives into macroscopic trend information with prior knowledge from coarser scales, eventually achieving multiscale mixing in past-information extraction.
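
A minimal sketch of the two mixing directions (bottom-up for seasonality, top-down for trend), assuming one tensor of shape [B, T_i, d_model] per scale; the class and layer names are illustrative, not the repository's exact implementation:

import torch.nn as nn

class SeasonTrendMixingSketch(nn.Module):
    # scales: sequence lengths per scale, finest first, e.g. [96, 48, 24, 12].
    def __init__(self, scales):
        super().__init__()
        # Linear maps along the time dimension between adjacent scales.
        self.down = nn.ModuleList(nn.Linear(scales[i], scales[i + 1])
                                  for i in range(len(scales) - 1))
        self.up = nn.ModuleList(nn.Linear(scales[i + 1], scales[i])
                                for i in range(len(scales) - 1))

    def forward(self, season_list, trend_list):
        # Bottom-up: push fine-grained seasonal details toward coarser scales.
        for i in range(len(season_list) - 1):
            fine = season_list[i].transpose(1, 2)                     # [B, d_model, T_i]
            season_list[i + 1] = season_list[i + 1] + self.down[i](fine).transpose(1, 2)
        # Top-down: inject coarse-scale trend priors into finer scales.
        for i in reversed(range(len(trend_list) - 1)):
            coarse = trend_list[i + 1].transpose(1, 2)                # [B, d_model, T_{i+1}]
            trend_list[i] = trend_list[i] + self.up[i](coarse).transpose(1, 2)
        return season_list, trend_list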

Future Multipredictor Mixing

Note that Future Multipredictor Mixing (FMM) is an ensemble of multiple predictors, where different predictors are based on past information from different scales, enabling FMM to integrate the complementary forecasting capabilities of the mixed multiscale series.
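
A minimal sketch of this ensemble (one linear predictor per scale whose outputs are summed; the names and the final projection are illustrative assumptions):

import torch.nn as nn

class FutureMultipredictorMixingSketch(nn.Module):
    def __init__(self, scales, pred_len, d_model, c_out):
        super().__init__()
        # One predictor per scale: map each input length T_i to the forecast horizon.
        self.predict_layers = nn.ModuleList(nn.Linear(t, pred_len) for t in scales)
        self.projection = nn.Linear(d_model, c_out)

    def forward(self, enc_out_list):
        # enc_out_list[i]: [B, T_i, d_model] produced by PDM at scale i.
        dec_out = 0
        for layer, enc_out in zip(self.predict_layers, enc_out_list):
            out = layer(enc_out.transpose(1, 2)).transpose(1, 2)  # [B, pred_len, d_model]
            dec_out = dec_out + self.projection(out)              # [B, pred_len, c_out]
        return dec_out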

Get Started

  1. Install requirements. pip install -r requirements.txt
  2. Download data. You can download all the datasets from Google Drive, Baidu Drive, or Kaggle Datasets. All the datasets are well pre-processed and can be used easily.
  3. Train the model. We provide the experiment scripts of all benchmarks under the folder ./scripts. You can reproduce the experiment results by:
bash ./scripts/long_term_forecast/ETT_script/TimeMixer_ETTm1.sh
bash ./scripts/long_term_forecast/ECL_script/TimeMixer.sh
bash ./scripts/long_term_forecast/Traffic_script/TimeMixer.sh
bash ./scripts/long_term_forecast/Solar_script/TimeMixer.sh
bash ./scripts/long_term_forecast/Weather_script/TimeMixer.sh
bash ./scripts/short_term_forecast/M4/TimeMixer.sh
bash ./scripts/short_term_forecast/PEMS/TimeMixer.sh

Main Results

We conduct extensive experiments to evaluate the performance and efficiency of TimeMixer, covering long-term and short-term forecasting with 18 real-world benchmarks and 15 baselines. 🏆 TimeMixer achieves consistent state-of-the-art performance on all benchmarks, covering a large variety of series with different frequencies, numbers of variates, and real-world scenarios.

Long-term Forecasting

To ensure a fair model comparison, experiments were performed with standardized parameters, aligning input lengths, batch sizes, and training epochs. Additionally, since results in various studies often stem from hyperparameter optimization, we also include outcomes from comprehensive parameter searches.

Short-term Forecasting: Multivariate data

Short-term Forecasting: Univariate data

Model Ablations

To verify the effectiveness of each component of TimeMixer, we provide a detailed ablation study on every possible design in both the Past-Decomposable-Mixing and Future-Multipredictor-Mixing blocks across all 18 experiment benchmarks (see our paper for full results 😊).

Model Efficiency

We compare running memory and time against the latest state-of-the-art models during the training phase, where TimeMixer consistently demonstrates favorable efficiency in terms of both GPU memory and running time for various series lengths (ranging from 192 to 3072), in addition to consistent state-of-the-art performance for both long-term and short-term forecasting tasks. It is noteworthy that TimeMixer, as a deep model, achieves efficiency close to that of fully linear models. This makes TimeMixer promising in a wide range of scenarios that require high model efficiency.

Further Reading

1, Time-LLM: Time Series Forecasting by Reprogramming Large Language Models, in ICLR 2024. [GitHub Repo]

Authors: Ming Jin, Shiyu Wang, Lintao Ma, Zhixuan Chu, James Y. Zhang, Xiaoming Shi, Pin-Yu Chen, Yuxuan Liang, Yuan-Fang Li, Shirui Pan, Qingsong Wen

@inproceedings{jin2023time,
  title={{Time-LLM}: Time series forecasting by reprogramming large language models},
  author={Jin, Ming and Wang, Shiyu and Ma, Lintao and Chu, Zhixuan and Zhang, James Y and Shi, Xiaoming and Chen, Pin-Yu and Liang, Yuxuan and Li, Yuan-Fang and Pan, Shirui and Wen, Qingsong},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2024}
}

2, iTransformer: Inverted Transformers Are Effective for Time Series Forecasting, in ICLR 2024 Spotlight. [GitHub Repo]

Authors: Yong Liu, Tengge Hu, Haoran Zhang, Haixu Wu, Shiyu Wang, Lintao Ma, Mingsheng Long

@article{liu2023itransformer,
  title={iTransformer: Inverted Transformers Are Effective for Time Series Forecasting},
  author={Liu, Yong and Hu, Tengge and Zhang, Haoran and Wu, Haixu and Wang, Shiyu and Ma, Lintao and Long, Mingsheng},
  journal={arXiv preprint arXiv:2310.06625},
  year={2023}
}

Acknowledgement

We appreciate the following GitHub repos a lot for their valuable code and efforts.

Contact

If you have any questions or want to use the code, feel free to contact:

timemixer's People

Contributors

kwuking


timemixer's Issues

A question about data processing

Hello, if I apply modal decomposition to my data, directly run the decomposed series with M (multivariate predicting multivariate), and finally sum the prediction results, can this improve prediction accuracy? Is this feasible? Thanks again for sharing the source code.

Error when running run.py in PyCharm with the ETTm1 dataset

[screenshot: 2024-06-04 155254]
In the downloaded source code, I only modified the following parameters:
parser.add_argument('--model', type=str, default='TimeMixer',help='model name, options: [Autoformer, Transformer, TimesNet]')
# data loader
parser.add_argument('--data', type=str, default='ETTm1', help='dataset type')
parser.add_argument('--root_path', type=str, default='./data/ETT/', help='root path of the data file')
parser.add_argument('--data_path', type=str, default='ETTm1.csv', help='data file')
Everything else is unchanged. Could you tell me what went wrong?

About the model benchmark script files

The paper reports two sets of long-term forecasting experiments: unified hyperparameter results and hyperparameter searching results.
First question: I don't quite understand the idea of unified hyperparameters in the first group. Can hyperparameters really be unified across different models?
Second question: the repository only contains the unified scripts for long-term forecasting. Which parameters does the hyperparameter search change? Judging from the results, is it the input sequence length seq_len? Could you provide the complete script files?
Many thanks.

Error at B, T, N = x.size()

Traceback (most recent call last):
File "E:/PyCharm Community Edition 2023.2.4/plugins/python-ce/helpers/pydev/pydevd.py", line 1500, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "E:\PyCharm Community Edition 2023.2.4\plugins\python-ce\helpers\pydev_pydev_imps_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "D:\ Time-Series-Library-main\Times_Series_Lib\new_main.py", line 36, in
main() #模型训练和测试
File "D:\ Time-Series-Library-main\Times_Series_Lib\Model.py", line 164, in main
exp.train(setting)
File "D:\ Time-Series-Library-main\Times_Series_Lib\exp\exp_long_term_forecasting.py", line 133, in train
outputs = self.model(batch_x, batch_x_mark, dec_inp, batch_y_mark)
File "E:\anaconda\envs\pytorch-38\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\anaconda\envs\pytorch-38\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\ Time-Series-Library-main\Times_Series_Lib\models\TimeMixer.py", line 384, in forward
dec_out_list = self.forecast(x_enc, x_mark_enc, x_dec, x_mark_dec)
File "D:\ Time-Series-Library-main\Times_Series_Lib\models\TimeMixer.py", line 324, in forecast
B, T, N = x.size()
ValueError: not enough values to unpack (expected 3, got 2)

What causes this problem?

Traceback (most recent call last):
File "C:\Users\13521\Desktop\TimeMixer-main (2)\TimeMixer-main\run.py", line 139, in
exp.train(setting)
File "C:\Users\13521\Desktop\TimeMixer-main (2)\TimeMixer-main\exp\exp_long_term_forecasting.py", line 101, in train
vali_data, vali_loader = self._get_data(flag='val')
File "C:\Users\13521\Desktop\TimeMixer-main (2)\TimeMixer-main\exp\exp_long_term_forecasting.py", line 30, in _get_data
data_set, data_loader = data_provider(self.args, flag)
File "C:\Users\13521\Desktop\TimeMixer-main (2)\TimeMixer-main\data_provider\data_factory.py", line 88, in data_provider
print(flag, len(data_set))
ValueError: len() should return >= 0

More about the question about the forecastability

I appreciate your quick reply. :-)

Q: It seems a counterintuitive phenomenon of forecastability. Table 1 shows that the Weather and Electricity datasets possess high forecastability. In other words, they are easier to predict than the datasets with low forecastability, e.g., ETT (4 datasets). However, the ETT and Traffic datasets have obvious periodic patterns, as seen from the visualizations and ACF. In particular, it is harder to find a pattern to fit, as shown in Fig. 12 of the paper. Could you provide some insights into the above question?

A: Thank you very much for your attention to our work. Forecastability is a key issue in time series analysis. Currently, our forecastability calculations are performed through global sampling on the time series. The ETT dataset is actually recognized as one with a higher degree of forecasting difficulty. The visual showcases only display a portion of the content and do not represent the complete time series. Compared to larger datasets such as Electricity and Traffic, the patterns of cycles and trends are more evident, resulting in stronger predictability.

Moreover, how about the Weather dataset? Neither the complete nor a partial view of the time series shows obvious cycles or trends. Why is it easy to predict?

The question about the forecastability

It seems a counterintuitive phenomenon of forecastability.

Table 1 shows that the Weather and Electricity datasets possess high forecastability. In other words, they are easier to predict than the datasets with low forecastability, e.g., ETT (4 datasets).

However, the ETT and Traffic datasets have obvious periodic patterns, as seen from the visualizations and ACF. In particular, it is harder to find a pattern to fit, as shown in Fig. 12 of the paper.

Could you provide some insights into the above question?

About label_len

Why is label_len set to 0 in all the scripts? In most other time-series forecasting papers it is not 0. When I change label_len to 48 I get an error. Can label_len only be set to 0 in this source code?

Problem when switching to my own data: AttributeError: 'Namespace' object has no attribute 'frequency_map'

Hello, I have read your paper with great admiration, but when adapting the code I got the following error:
Traceback (most recent call last):
File "D:\TimeMixer-main\run.py", line 140, in
exp.train(setting)
File "D:\TimeMixer-main\exp\exp_short_term_forecasting.py", line 101, in train
loss_value = criterion(batch_x, self.args.frequency_map, outputs, batch_y, batch_y_mark)
AttributeError: 'Namespace' object has no attribute 'frequency_map'

I want to perform short-term 12-step forecasting, but it fails. What is the problem?
Namespace(task_name='short_term_forecast', is_training=1, model_id='Data_May', model='TimeMixer', data='custom', root_path='./dataset/WQX/', data_path='Data_May.csv', features='MS', target='OT', freq='t', checkpoints='./checkpoints/', seq_len=96, label_len=48, pred_len=12, seasonal_patterns='Monthly', inverse=False, top_k=5, num_kernels=6, enc_in=6, dec_in=6, c_out=6, d_model=32, n_heads=8, e_layers=4, d_layers=1, d_ff=32, moving_avg=25, factor=3, distil=True, dropout=0.05, embed='timeF', activation='gelu', output_attention=False, channel_independence=1, decomp_method='moving_avg', use_norm=1, down_sampling_layers=1, down_sampling_window=2, down_sampling_method='avg', use_future_temporal_feature=0, num_workers=0, itr=1, train_epochs=50, batch_size=128, patience=20, learning_rate=0.001, des='test', loss='SMAPE', lradj='TST', pct_start=0.2, use_amp=False, comment='none', use_gpu=False, gpu=0, use_multi_gpu=False, devices='0,1', p_hidden_dims=[128, 128], p_hidden_layers=2)
These are my parameter settings. Could you please help clarify? Thanks again!

The following error occurs when running the script. How should I handle it?

Traceback (most recent call last):
File "", line 1, in
File "D:\anaconda\envs\TimeMixer\lib\multiprocessing\spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "D:\anaconda\envs\TimeMixer\lib\multiprocessing\spawn.py", line 125, in _main
prepare(preparation_data)
File "D:\anaconda\envs\TimeMixer\lib\multiprocessing\spawn.py", line 236, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "D:\anaconda\envs\TimeMixer\lib\multiprocessing\spawn.py", line 287, in _fixup_main_from_path
main_content = runpy.run_path(main_path,
File "D:\anaconda\envs\TimeMixer\lib\runpy.py", line 265, in run_path
return _run_module_code(code, init_globals, run_name,
File "D:\anaconda\envs\TimeMixer\lib\runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "D:\anaconda\envs\TimeMixer\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "E:\YD\code\TimeMixer-main\run.py", line 139, in
exp.train(setting)
File "E:\YD\code\TimeMixer-main\exp\exp_long_term_forecasting.py", line 132, in train
for i, (batch_x, batch_y, batch_x_mark, batch_y_mark) in enumerate(train_loader):
File "D:\anaconda\envs\TimeMixer\lib\site-packages\torch\utils\data\dataloader.py", line 352, in iter
return self._get_iterator()
File "D:\anaconda\envs\TimeMixer\lib\site-packages\torch\utils\data\dataloader.py", line 294, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "D:\anaconda\envs\TimeMixer\lib\site-packages\torch\utils\data\dataloader.py", line 801, in init
w.start()
File "D:\anaconda\envs\TimeMixer\lib\multiprocessing\process.py", line 121, in start
self._popen = self._Popen(self)
File "D:\anaconda\envs\TimeMixer\lib\multiprocessing\context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "D:\anaconda\envs\TimeMixer\lib\multiprocessing\context.py", line 327, in _Popen
return Popen(process_obj)
File "D:\anaconda\envs\TimeMixer\lib\multiprocessing\popen_spawn_win32.py", line 45, in init
prep_data = spawn.get_preparation_data(process_obj._name)
File "D:\anaconda\envs\TimeMixer\lib\multiprocessing\spawn.py", line 154, in get_preparation_data
_check_not_importing_main()
File "D:\anaconda\envs\TimeMixer\lib\multiprocessing\spawn.py", line 134, in _check_not_importing_main
raise RuntimeError('''
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.

Error in the forecast function around line 249 of TimeMixer.py

[screenshot]
As you can see, x has only two dimensions after the for loop, so the next statement raises an error.
[screenshot]
Also, in the normalize call on the following line, does normalize_layer really have len(x_enc) layers? On inspection there are only down_sampling_layers + 1 layers, yet the code indexes self.normalize_layer[i], where i eventually reaches 15, while the script sets down_sampling_layers to 3 (using the ETTm1 data file and the parameters from its script). These do not match either.
[screenshot]

The “freeze_support()” line can be omitted if the program is not going to be frozen to produce an executable.

raise RuntimeError('''
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:

    if __name__ == '__main__':
        freeze_support()
        ...

The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.

There are no SOLAR directory files in the dataset

In the dataset archive you provided, I did not find the solar-energy directory. According to the definition in your program, it should be in ./dataset/solar/, and there should be a solar_AL.txt file in that directory.
[screenshot]

parser.add_argument('--inverse', action='store_true', help='inverse output data', default=True)

Hello, I ran into this problem with inverse normalization: when I set it to True, the program reports an error. Why is that?
Traceback (most recent call last):
File "D:\ZJT\Apredict_suanfa\TimeMixer-main\run.py", line 143, in
exp.test(setting)
File "D:\ZJT\Apredict_suanfa\TimeMixer-main\exp\exp_long_term_forecasting.py", line 278, in test
input = test_data.inverse_transform(input.squeeze(0)).reshape(shape)
ValueError: cannot select an axis to squeeze out which has size not equal to one
Looking forward to your reply. Thanks!

What is the purpose of these two lines of code?

Hello,
In TimeMixer.py -> class Model(nn.Module) -> def forecast(self, x_enc, x_mark_enc, x_dec, x_mark_dec), I have two questions:
[screenshot]
Question 1: What is the purpose of x_mark = x_mark.repeat(N, 1, 1) at line 328 in the screenshot? After it runs, the batch dimension of x_mark is multiplied by N, so in the embedding I can no longer perform the addition x = self.value_embedding(x) + self.temporal_embedding(x_mark), because the batch dimensions of the two parts no longer match: the former is still batch_size while the latter is N * batch_size. So I had to comment out this line...
Question 2: What does x_list = self.pre_enc(x_list) at line 342 in the screenshot do? It seems to compute a moving average and a difference along the time dimension, so that the output x_list contains two parts, x_list[0] and x_list[1]. As shown in the next screenshot, what is the reasoning behind splitting x_list into x_list[0] and x_list[1] and feeding them into embedding and future_multi_mixing respectively? I could not find an explanation in the paper.
[screenshot]
Please help clarify, thank you very much!

Program error when setting args.features = 'MS'

When testing the 'electricity.csv' dataset with args.features = 'MS', args.target = 'OT', args.enc_in = 321, args.dec_in = 321, args.c_out = 1, and all other parameters at their defaults, the program reports: RuntimeError: shape '[32, 1, 96]' is invalid for input of size 986112

__multi_scale_process_inputs has a problem when seq_len is odd

With down_sampling_window=2, if seq_len=45 (odd), downsampling x_enc with torch.nn.AvgPool1d or torch.nn.MaxPool1d gives length 22, while downsampling x_mark_enc with x_mark_enc_mark_ori[:, ::self.configs.down_sampling_window, :] gives length 23. Won't the two lengths be inconsistent?

The following error occurs when running short_term_forecast

bash .\scripts\short_term_forecast\M4\TimeMixer.sh

Args in experiment:
Namespace(task_name='short_term_forecast', is_training=1, model_id='m4_Monthly', model='TimeMixer', data='m4', root_path='./dataset/m4', data_path='ETTh1.csv', features='M', target='OT', freq='h', checkpoints='./checkpoints/', seq_len=96, label_len=48, pred_len=96, seasonal_patterns='Monthly', inverse=False, top_k=5, num_kernels=6, enc_in=1, dec_in=1, c_out=1, d_model=32, n_heads=4, e_layers=4, d_layers=1, d_ff=32, moving_avg=25, factor=3, distil=True, dropout=0.1, embed='timeF', activation='gelu', output_attention=False, channel_independence=1, decomp_method='moving_avg', use_norm=1, down_sampling_layers=1, down_sampling_window=2, down_sampling_method='avg', num_workers=10, itr=1, train_epochs=50, batch_size=128, patience=20, learning_rate=0.01, des='Exp', loss='SMAPE', lradj='TST', pct_start=0.2, use_amp=False, comment='none', use_gpu=False, gpu=0, use_multi_gpu=False, devices='0,1', p_hidden_dims=[128, 128], p_hidden_layers=2)
Use CPU

start training : short_term_forecast_m4_Monthly_none_TimeMixer_m4_sl96_pl96_dm32_nh4_el4_dl1_df32_fc3_ebtimeF_dtTrue_Exp_0>>>>>>>>>>>>>>>>>>>>>>>>>>
train 48000
val 48000
Traceback (most recent call last):
File "..\code\TimeMixer\run.py", line 139, in
exp.train(setting)
File "..\code\TimeMixer\exp\exp_short_term_forecasting.py", line 95, in train
outputs = self.model(batch_x, None, dec_inp, None)
File "C:\ProgramData\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "..\code\TimeMixer\models\TimeMixer.py", line 383, in forward
dec_out_list = self.forecast(x_enc, x_mark_enc, x_dec, x_mark_dec)
File "..\code\TimeMixer\models\TimeMixer.py", line 316, in forecast
x_enc, x_mark_enc = self.__multi_scale_process_inputs(x_enc, x_mark_enc)
File ..\TimeMixer\models\TimeMixer.py", line 304, in __multi_scale_process_inputs
x_mark_sampling_list.append(x_mark_enc_mark_ori[:, ::self.configs.down_sampling_window, :])
TypeError: 'NoneType' object is not subscriptable

Reading M4 data for short-term forecasting

When reading the M4 data for short-term forecasting,
training_values = np.array([v for v in dataset.values[dataset.groups == self.seasonal_patterns]],dtype=np.float32) raises an error:

ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (23000,) + inhomogeneous part.
How can I solve this?

normalize

Hello, I'd like to ask about the purpose of the custom Normalize module in the source code. Why does the data need to be de-normalized again at the end, after it has been normalized? An ordinary LayerNorm does not need this.

Hi everyone, I'm a beginner trying to reproduce this model, but after feeding in the dataset I get the following error. Why is that?

usage: run.py [-h] --task_name TASK_NAME --is_training IS_TRAINING --model_id MODEL_ID --model MODEL --data DATA [--root_path ROOT_PATH]
[--data_path DATA_PATH] [--features FEATURES] [--target TARGET] [--freq FREQ] [--checkpoints CHECKPOINTS] [--seq_len SEQ_LEN]
[--label_len LABEL_LEN] [--pred_len PRED_LEN] [--seasonal_patterns SEASONAL_PATTERNS] [--inverse] [--top_k TOP_K]
[--num_kernels NUM_KERNELS] [--enc_in ENC_IN] [--dec_in DEC_IN] [--c_out C_OUT] [--d_model D_MODEL] [--n_heads N_HEADS]
[--e_layers E_LAYERS] [--d_layers D_LAYERS] [--d_ff D_FF] [--moving_avg MOVING_AVG] [--factor FACTOR] [--distil] [--dropout DROPOUT]
[--embed EMBED] [--activation ACTIVATION] [--output_attention] [--channel_independence CHANNEL_INDEPENDENCE]
[--decomp_method DECOMP_METHOD] [--use_norm USE_NORM] [--down_sampling_layers DOWN_SAMPLING_LAYERS]
[--down_sampling_window DOWN_SAMPLING_WINDOW] [--down_sampling_method DOWN_SAMPLING_METHOD]
[--use_future_temporal_feature USE_FUTURE_TEMPORAL_FEATURE] [--num_workers NUM_WORKERS] [--itr ITR] [--train_epochs TRAIN_EPOCHS]
[--batch_size BATCH_SIZE] [--patience PATIENCE] [--learning_rate LEARNING_RATE] [--des DES] [--loss LOSS] [--lradj LRADJ]
[--pct_start PCT_START] [--use_amp] [--comment COMMENT] [--use_gpu USE_GPU] [--gpu GPU] [--use_multi_gpu] [--devices DEVICES]
[--p_hidden_dims P_HIDDEN_DIMS [P_HIDDEN_DIMS ...]] [--p_hidden_layers P_HIDDEN_LAYERS]
run.py: error: the following arguments are required: --task_name, --is_training, --model_id, --model, --data

How to add future covariates

My task is power load forecasting. The data has one point every 15 min (96 time points a day), and I also have temperature data in the same format. After reading your source code, I found that the inputs include x_enc and x_mark_enc, but I want to use the historical load and temperature of 480 points (5 days), combined with the forecast temperature for the 96 future points, to predict the load for the next 96 points. Could you tell me how to add future covariates to the network?

Lag in the prediction results

Thanks to the authors for open-sourcing the algorithm. On my own dataset, after training, the inference results on the test set all lag behind the ground truth, as shown in the figure below. Could you suggest any good ways to address this?
[screenshot]

Computing forecastability

Hello, I'd like to know how the forecastability of the datasets in the paper is computed. I tried the 'forecastabilty_moving' and 'forecastabilty' functions in 'data_analysis.py', but the results differ greatly from those in the paper.
Is there more detailed code for this computation? Please advise, many thanks.

Why pre_enc decomposes seasonality and trend when channel_independence == 0

    def pre_enc(self, x_list):
        if self.channel_independence == 1:
            return (x_list, None)
        else:
            out1_list = []
            out2_list = []
            for x in x_list:
                x_1, x_2 = self.preprocess(x)
                out1_list.append(x_1)
                out2_list.append(x_2)
            return (out1_list, out2_list)

When channel_independence == 0, the following embedding layer only takes the seasonality part. What's the motivation? Thank you

Hi everyone, thanks for yesterday's answer. After solving yesterday's problem I have a new one: why do I get KeyError: 'Autoformer'?

Use GPU: cuda:0
Traceback (most recent call last):
File "d:\code\TimeMixer-main\run.py", line 140, in
exp = Exp(args) # set experiments
File "d:\code\TimeMixer-main\exp\exp_long_term_forecasting.py", line 20, in init
super(Exp_Long_Term_Forecast, self).init(args)
File "d:\code\TimeMixer-main\exp\exp_basic.py", line 13, in init
self.model = self._build_model().to(self.device)
File "d:\code\TimeMixer-main\exp\exp_long_term_forecasting.py", line 23, in _build_model
model = self.model_dict[self.args.model].Model(self.args).float()
KeyError: 'Autoformer'

Decomposition of seasonality and trend

Hello, I'd like to know how the DFT-based decomposition mentioned in the appendix is implemented. Does it need to be computed for every variable? How are the top-K most important frequencies selected? Is there any code I could learn from?

Perhaps the logic errors in the code

In the code below, x_mark = x_mark.repeat(N, 1, 1) does not seem to achieve true alignment. If the batch size > 1, repeating x_mark directly does not correspond to the reshape in x = x.permute(0, 2, 1).contiguous().reshape(B * N, T, 1). As a result, if batches of different sizes (e.g., bs = 64 and bs = 32) are fed to the model and they contain a common sample, the predictions for that common sample will differ between the two batches.

Changing it to x_mark = x_mark.unsqueeze(1).repeat(1, N, 1, 1).reshape(B * N, T, -1) produces correct predictions.

x_list = []
x_mark_list = []
if x_mark_enc is not None:
    for i, x, x_mark in zip(range(len(x_enc)), x_enc, x_mark_enc):
        B, T, N = x.size()
        x = self.normalize_layers[i](x, 'norm')
        if self.channel_independence == 1:
            x = x.permute(0, 2, 1).contiguous().reshape(B * N, T, 1)
        x_list.append(x)
        x_mark = x_mark.repeat(N, 1, 1)
        x_mark_list.append(x_mark)

loss exploding after some iterations

I use a custom dataset and train the model, but after some iterations the loss explode.
[screenshot]

How can I solve this issue, and what could be the root cause?

!python -u run.py \
  --task_name long_term_forecast \
  --is_training 1 \
  --freq t \
  --model TimesNet \
  --target OT \
  --model_id ECL_120'_'96 \
  --model TimeMixer \
  --data custom \
  --features M \
  --seq_len 96 \
  --label_len 0 \
  --pred_len 96 \
  --moving_avg 15 \
  --e_layers 3 \
  --d_layers 1 \
  --enc_in 6 \
  --dec_in 6 \
  --c_out 6 \
  --des 'Exp' \
  --itr 1 \
  --d_model 32 \
  --d_ff 64 \
  --batch_size 8 \
  --seasonal_patterns Daily \
  --learning_rate 0.01 \
  --train_epochs 1 \
  --patience 3 \
  --down_sampling_layers 3 \
  --down_sampling_method avg \
  --down_sampling_window 2

Test Loss: 41927.7059582 ??? With default hyper-parameters

xxx@xxxx:~/TimeMixer$ bash ./scripts/long_term_forecast/Weather_script/TimeMixer_unify.sh
Args in experiment:
Namespace(task_name='long_term_forecast', is_training=1, model_id='weather_96_96', model='TimeMixer', data='custom', root_path='./dataset/weather/', data_path='weather.csv', features='M', target='OT', freq='h', checkpoints='./checkpoints/', seq_len=96, label_len=0, pred_len=96, seasonal_patterns='Monthly', inverse=False, top_k=5, num_kernels=6, enc_in=21, dec_in=21, c_out=21, d_model=16, n_heads=4, e_layers=3, d_layers=1, d_ff=32, moving_avg=25, factor=3, distil=True, dropout=0.1, embed='timeF', activation='gelu', output_attention=False, channel_independence=1, decomp_method='moving_avg', use_norm=1, down_sampling_layers=3, down_sampling_window=2, down_sampling_method='avg', num_workers=10, itr=1, train_epochs=20, batch_size=128, patience=10, learning_rate=0.01, des='Exp', loss='MSE', lradj='TST', pct_start=0.2, use_amp=False, comment='none', use_gpu=True, gpu=0, use_multi_gpu=False, devices='0,1', p_hidden_dims=[128, 128], p_hidden_layers=2)
Use GPU: cuda:0

start training : long_term_forecast_weather_96_96_none_TimeMixer_custom_sl96_pl96_dm16_nh4_el3_dl1_df32_fc3_ebtimeF_dtTrue_Exp_0>>>>>>>>>>>>>>>>>>>>>>>>>>
train 36696
val 5175
test 10444
iters: 100, epoch: 1 | loss: 0.4307363
speed: 0.0692s/iter; left time: 389.1167s
iters: 200, epoch: 1 | loss: 0.4297568
speed: 0.0611s/iter; left time: 337.2267s
Epoch: 1 cost time: 17.889004945755005
Epoch: 1, Steps: 286 | Train Loss: 0.5415171 Vali Loss: 0.4307196 Test Loss: 0.1833659
Validation loss decreased (inf --> 0.430720). Saving model ...
Updating learning rate to 0.0018082204734143938
iters: 100, epoch: 2 | loss: 0.3520306
speed: 0.1521s/iter; left time: 811.5801s
iters: 200, epoch: 2 | loss: 0.3565684
speed: 0.0618s/iter; left time: 323.7447s
Epoch: 2 cost time: 17.85987901687622
Epoch: 2, Steps: 286 | Train Loss: 0.4447075 Vali Loss: 0.4094866 Test Loss: 0.1705955
Validation loss decreased (0.430720 --> 0.409487). Saving model ...
Updating learning rate to 0.005206596517931138
iters: 100, epoch: 3 | loss: 0.3173375
speed: 0.1528s/iter; left time: 771.3829s
iters: 200, epoch: 3 | loss: 0.3592640
speed: 0.0621s/iter; left time: 307.5004s
Epoch: 3 cost time: 17.934012413024902
Epoch: 3, Steps: 286 | Train Loss: 0.4251685 Vali Loss: 0.3993056 Test Loss: 0.1661765
Validation loss decreased (0.409487 --> 0.399306). Saving model ...
Updating learning rate to 0.008601101999279598
iters: 100, epoch: 4 | loss: 0.3249778
speed: 0.1530s/iter; left time: 728.5612s
iters: 200, epoch: 4 | loss: 0.4692132
speed: 0.0622s/iter; left time: 289.8614s
Epoch: 4 cost time: 17.94803547859192
Epoch: 4, Steps: 286 | Train Loss: 0.4286626 Vali Loss: 0.4020122 Test Loss: 0.1657454
EarlyStopping counter: 1 out of 10
Updating learning rate to 0.009999998821672624
iters: 100, epoch: 5 | loss: 0.3110182
speed: 0.1529s/iter; left time: 684.4151s
iters: 200, epoch: 5 | loss: 5966753.5000000
speed: 0.0621s/iter; left time: 271.7755s
Epoch: 5 cost time: 17.991345167160034
Epoch: 5, Steps: 286 | Train Loss: 73039167.1294913 Vali Loss: 124857.1351563 Test Loss: 41927.7059582
EarlyStopping counter: 2 out of 10
Updating learning rate to 0.00990325594987406
iters: 100, epoch: 6 | loss: 27256.5839844
speed: 0.1534s/iter; left time: 642.9746s
iters: 200, epoch: 6 | loss: 40716.7031250
speed: 0.0625s/iter; left time: 255.5338s
Epoch: 6 cost time: 18.048863887786865
Epoch: 6, Steps: 286 | Train Loss: 61819.4581956 Vali Loss: 19785.5467773 Test Loss: 8566.0788152
EarlyStopping counter: 3 out of 10
Updating learning rate to 0.009618084470288236
iters: 100, epoch: 7 | loss: 42804.7265625
speed: 0.1541s/iter; left time: 601.9182s
iters: 200, epoch: 7 | loss: 14851.3417969
speed: 0.0624s/iter; left time: 237.5837s
Epoch: 7 cost time: 18.051480293273926
Epoch: 7, Steps: 286 | Train Loss: 30657.5081574 Vali Loss: 13869.5492432 Test Loss: 4513.3062156
EarlyStopping counter: 4 out of 10
Updating learning rate to 0.009155443362949628
iters: 100, epoch: 8 | loss: 9879.9335938
speed: 0.1534s/iter; left time: 555.0071s
iters: 200, epoch: 8 | loss: 26258.4062500
speed: 0.0624s/iter; left time: 219.5428s
Epoch: 8 cost time: 18.00772786140442
Epoch: 8, Steps: 286 | Train Loss: 23289.7381600 Vali Loss: 7120.0204468 Test Loss: 2942.5684035
EarlyStopping counter: 5 out of 10
Updating learning rate to 0.008533111666161134
iters: 100, epoch: 9 | loss: 6651.8051758
speed: 0.1532s/iter; left time: 510.6945s
iters: 200, epoch: 9 | loss: 7979.3217773
speed: 0.0624s/iter; left time: 201.8479s
Epoch: 9 cost time: 18.038314819335938
Epoch: 9, Steps: 286 | Train Loss: 13656.0039387 Vali Loss: 9894.2515869 Test Loss: 4533.5889727
EarlyStopping counter: 6 out of 10
Updating learning rate to 0.007775005238022703
iters: 100, epoch: 10 | loss: 7160.9116211
speed: 0.1537s/iter; left time: 468.3441s
iters: 200, epoch: 10 | loss: 11085.7910156
speed: 0.0624s/iter; left time: 183.7755s
Epoch: 10 cost time: 18.018856048583984
Epoch: 10, Steps: 286 | Train Loss: 11040.1564498 Vali Loss: 7765.1726685 Test Loss: 3500.1894863
EarlyStopping counter: 7 out of 10
Updating learning rate to 0.006910257683416708
iters: 100, epoch: 11 | loss: 21434.4316406
speed: 0.1536s/iter; left time: 424.1434s
iters: 200, epoch: 11 | loss: 4110.5146484
speed: 0.0624s/iter; left time: 166.1284s
Epoch: 11 cost time: 18.04583215713501
Epoch: 11, Steps: 286 | Train Loss: 8240.5528419 Vali Loss: 5190.9347229 Test Loss: 1577.0768572
EarlyStopping counter: 8 out of 10
Updating learning rate to 0.005972100765910645
iters: 100, epoch: 12 | loss: 3391.3840332
speed: 0.1540s/iter; left time: 381.0550s
iters: 200, epoch: 12 | loss: 22552.2753906
speed: 0.0625s/iter; left time: 148.4305s
Epoch: 12 cost time: 18.07685947418213
Epoch: 12, Steps: 286 | Train Loss: 7187.2250772 Vali Loss: 3726.9754517 Test Loss: 1578.9364279
EarlyStopping counter: 9 out of 10
Updating learning rate to 0.004996587329719809
iters: 100, epoch: 13 | loss: 3442.0224609
speed: 0.1533s/iter; left time: 335.4760s
iters: 200, epoch: 13 | loss: 2756.2189941
speed: 0.0624s/iter; left time: 130.3737s
Epoch: 13 cost time: 18.014405965805054
Epoch: 13, Steps: 286 | Train Loss: 7158.0969733 Vali Loss: 5071.9943359 Test Loss: 2097.3409002
EarlyStopping counter: 10 out of 10
Early stopping
testing : long_term_forecast_weather_96_96_none_TimeMixer_custom_sl96_pl96_dm16_nh4_el3_dl1_df32_fc3_ebtimeF_dtTrue_Exp_0<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
test 10444
test shape: (81, 128, 96, 21) (81, 128, 96, 21)
test shape: (10368, 96, 21) (10368, 96, 21)
mse:0.1661764234304428, mae:0.21156392991542816
rmse:0.40764743089675903, mape:0.5020378828048706, mspe:25357502.0
Args in experiment:
Namespace(task_name='long_term_forecast', is_training=1, model_id='weather_96_192', model='TimeMixer', data='custom', root_path='./dataset/weather/', data_path='weather.csv', features='M', target='OT', freq='h', checkpoints='./checkpoints/', seq_len=96, label_len=0, pred_len=192, seasonal_patterns='Monthly', inverse=False, top_k=5, num_kernels=6, enc_in=21, dec_in=21, c_out=21, d_model=16, n_heads=4, e_layers=3, d_layers=1, d_ff=32, moving_avg=25, factor=3, distil=True, dropout=0.1, embed='timeF', activation='gelu', output_attention=False, channel_independence=1, decomp_method='moving_avg', use_norm=1, down_sampling_layers=3, down_sampling_window=2, down_sampling_method='avg', num_workers=10, itr=1, train_epochs=20, batch_size=128, patience=10, learning_rate=0.01, des='Exp', loss='MSE', lradj='TST', pct_start=0.2, use_amp=False, comment='none', use_gpu=True, gpu=0, use_multi_gpu=False, devices='0,1', p_hidden_dims=[128, 128], p_hidden_layers=2)
Use GPU: cuda:0
start training : long_term_forecast_weather_96_192_none_TimeMixer_custom_sl96_pl192_dm16_nh4_el3_dl1_df32_fc3_ebtimeF_dtTrue_Exp_0>>>>>>>>>>>>>>>>>>>>>>>>>>
train 36600
val 5079
test 10348
iters: 100, epoch: 1 | loss: 0.5949332
speed: 0.0736s/iter; left time: 412.3825s
iters: 200, epoch: 1 | loss: 0.6008663
speed: 0.0653s/iter; left time: 359.0430s
Epoch: 1 cost time: 19.023356199264526
Epoch: 1, Steps: 285 | Train Loss: 0.6265348 Vali Loss: 0.5050493 Test Loss: 0.2290545
Validation loss decreased (inf --> 0.505049). Saving model ...
Updating learning rate to 0.0018082286694699988
iters: 100, epoch: 2 | loss: 0.4368003
speed: 0.1606s/iter; left time: 853.6822s
iters: 200, epoch: 2 | loss: 0.6394349
speed: 0.0654s/iter; left time: 341.2403s
Epoch: 2 cost time: 18.83457350730896
Epoch: 2, Steps: 285 | Train Loss: 0.5097023 Vali Loss: 0.5010504 Test Loss: 0.2251554
Validation loss decreased (0.505049 --> 0.501050). Saving model ...
Updating learning rate to 0.005206619683914479
iters: 100, epoch: 3 | loss: 0.5027322
speed: 0.1605s/iter; left time: 807.3658s
iters: 200, epoch: 3 | loss: 0.5735677
speed: 0.0655s/iter; left time: 322.8571s
Epoch: 3 cost time: 18.84622883796692
Epoch: 3, Steps: 285 | Train Loss: 0.4948312 Vali Loss: 0.4692470 Test Loss: 0.2095508
Validation loss decreased (0.501050 --> 0.469247). Saving model ...
Updating learning rate to 0.008601126519745959
iters: 100, epoch: 4 | loss: 0.7092599
speed: 0.1600s/iter; left time: 759.4899s
iters: 200, epoch: 4 | loss: 0.3938503
speed: 0.0655s/iter; left time: 304.3535s
Epoch: 4 cost time: 18.848021268844604
Epoch: 4, Steps: 285 | Train Loss: 0.4878404 Vali Loss: 0.4703723 Test Loss: 0.2100072
EarlyStopping counter: 1 out of 10
Updating learning rate to 0.009999998813389152
iters: 100, epoch: 5 | loss: 0.4158416
speed: 0.1604s/iter; left time: 715.7460s
iters: 200, epoch: 5 | loss: 0.4023744
speed: 0.0654s/iter; left time: 285.3155s
Epoch: 5 cost time: 18.826069593429565
Epoch: 5, Steps: 285 | Train Loss: 0.4828563 Vali Loss: 0.4683839 Test Loss: 0.2124947
Validation loss decreased (0.469247 --> 0.468384). Saving model ...
Updating learning rate to 0.009903253591993106
iters: 100, epoch: 6 | loss: 0.4359638
speed: 0.1606s/iter; left time: 670.6529s
iters: 200, epoch: 6 | loss: 0.5145217
speed: 0.0655s/iter; left time: 266.8324s
Epoch: 6 cost time: 18.856943607330322
Epoch: 6, Steps: 285 | Train Loss: 0.4796506 Vali Loss: 0.4690022 Test Loss: 0.2101424
EarlyStopping counter: 1 out of 10
Updating learning rate to 0.009618079853421842
iters: 100, epoch: 7 | loss: 0.5960991
speed: 0.1601s/iter; left time: 623.0693s
iters: 200, epoch: 7 | loss: 0.6308466
speed: 0.0655s/iter; left time: 248.4005s
Epoch: 7 cost time: 18.844536066055298
Epoch: 7, Steps: 285 | Train Loss: 0.4764077 Vali Loss: 0.4654915 Test Loss: 0.2081809
Validation loss decreased (0.468384 --> 0.465492). Saving model ...
Updating learning rate to 0.009155436664521378
iters: 100, epoch: 8 | loss: 0.4419442
speed: 0.1602s/iter; left time: 577.6258s
iters: 200, epoch: 8 | loss: 0.5641533
speed: 0.0654s/iter; left time: 229.2836s
Epoch: 8 cost time: 18.828977584838867
Epoch: 8, Steps: 285 | Train Loss: 0.4711395 Vali Loss: 0.4669213 Test Loss: 0.2122254
EarlyStopping counter: 1 out of 10
Updating learning rate to 0.008533103143587874
iters: 100, epoch: 9 | loss: 0.3767900
speed: 0.1602s/iter; left time: 531.9960s
iters: 200, epoch: 9 | loss: 0.4241773
speed: 0.0655s/iter; left time: 210.8604s
Epoch: 9 cost time: 18.84018850326538
Epoch: 9, Steps: 285 | Train Loss: 0.4696788 Vali Loss: 0.4655947 Test Loss: 0.2091171
EarlyStopping counter: 2 out of 10
Updating learning rate to 0.007774995218822139
iters: 100, epoch: 10 | loss: 0.3299536
speed: 0.1606s/iter; left time: 487.4349s
iters: 200, epoch: 10 | loss: 0.4966879
speed: 0.0654s/iter; left time: 192.1428s
Epoch: 10 cost time: 18.850160121917725
Epoch: 10, Steps: 285 | Train Loss: 0.4663261 Vali Loss: 0.4718109 Test Loss: 0.2086317
EarlyStopping counter: 3 out of 10
Updating learning rate to 0.006910246552621103
iters: 100, epoch: 11 | loss: 0.3497610
speed: 0.1603s/iter; left time: 441.0321s
iters: 200, epoch: 11 | loss: 0.3784502
speed: 0.0655s/iter; left time: 173.7020s
Epoch: 11 cost time: 18.853699684143066
Epoch: 11, Steps: 285 | Train Loss: 0.4630501 Vali Loss: 0.4703967 Test Loss: 0.2108531
EarlyStopping counter: 4 out of 10
Updating learning rate to 0.005972088951270229
iters: 100, epoch: 12 | loss: 0.3815632
speed: 0.1602s/iter; left time: 395.0873s
iters: 200, epoch: 12 | loss: 0.4500147
speed: 0.0656s/iter; left time: 155.1679s
Epoch: 12 cost time: 18.86597967147827
Epoch: 12, Steps: 285 | Train Loss: 0.4593053 Vali Loss: 0.4724175 Test Loss: 0.2148597
EarlyStopping counter: 5 out of 10
Updating learning rate to 0.004996575285264588
iters: 100, epoch: 13 | loss: 0.3511541
speed: 0.1607s/iter; left time: 350.4397s
iters: 200, epoch: 13 | loss: 0.3464415
speed: 0.0656s/iter; left time: 136.5284s
Epoch: 13 cost time: 18.88262128829956
Epoch: 13, Steps: 285 | Train Loss: 0.4552088 Vali Loss: 0.4763499 Test Loss: 0.2151347
EarlyStopping counter: 6 out of 10
Updating learning rate to 0.004021193997714411
iters: 100, epoch: 14 | loss: 0.5604454
speed: 0.1611s/iter; left time: 305.4688s
iters: 200, epoch: 14 | loss: 0.3495321
speed: 0.0656s/iter; left time: 117.8210s
Epoch: 14 cost time: 18.897520542144775
Epoch: 14, Steps: 285 | Train Loss: 0.4506181 Vali Loss: 0.4754393 Test Loss: 0.2156542
EarlyStopping counter: 7 out of 10
Updating learning rate to 0.003083428444500124
iters: 100, epoch: 15 | loss: 0.4728407
speed: 0.1609s/iter; left time: 259.2434s
iters: 200, epoch: 15 | loss: 0.3594473
speed: 0.0655s/iter; left time: 98.9966s
Epoch: 15 cost time: 18.869282245635986
Epoch: 15, Steps: 285 | Train Loss: 0.4469513 Vali Loss: 0.4774691 Test Loss: 0.2158166
EarlyStopping counter: 8 out of 10
Updating learning rate to 0.002219316429926769
iters: 100, epoch: 16 | loss: 0.5374044
speed: 0.1606s/iter; left time: 212.9617s
iters: 200, epoch: 16 | loss: 0.3515621
speed: 0.0655s/iter; left time: 80.3071s
Epoch: 16 cost time: 18.862897157669067
Epoch: 16, Steps: 285 | Train Loss: 0.4417733 Vali Loss: 0.4801658 Test Loss: 0.2175966
EarlyStopping counter: 9 out of 10
Updating learning rate to 0.0014620652941147994
iters: 100, epoch: 17 | loss: 0.4957572
speed: 0.1602s/iter; left time: 166.7883s
iters: 200, epoch: 17 | loss: 0.4278955
speed: 0.0655s/iter; left time: 61.6600s
Epoch: 17 cost time: 18.845292806625366
Epoch: 17, Steps: 285 | Train Loss: 0.4374476 Vali Loss: 0.4815650 Test Loss: 0.2188107
EarlyStopping counter: 10 out of 10
Early stopping
testing : long_term_forecast_weather_96_192_none_TimeMixer_custom_sl96_pl192_dm16_nh4_el3_dl1_df32_fc3_ebtimeF_dtTrue_Exp_0<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
test 10348
test shape: (80, 128, 192, 21) (80, 128, 192, 21)
test shape: (10240, 192, 21) (10240, 192, 21)
mse:0.2081807404756546, mae:0.2515397369861603
rmse:0.4562682807445526, mape:0.5633121132850647, mspe:25455112.0
Args in experiment:
Namespace(task_name='long_term_forecast', is_training=1, model_id='weather_96_336', model='TimeMixer', data='custom', root_path='./dataset/weather/', data_path='weather.csv', features='M', target='OT', freq='h', checkpoints='./checkpoints/', seq_len=96, label_len=0, pred_len=336, seasonal_patterns='Monthly', inverse=False, top_k=5, num_kernels=6, enc_in=21, dec_in=21, c_out=21, d_model=16, n_heads=4, e_layers=3, d_layers=1, d_ff=32, moving_avg=25, factor=3, distil=True, dropout=0.1, embed='timeF', activation='gelu', output_attention=False, channel_independence=1, decomp_method='moving_avg', use_norm=1, down_sampling_layers=3, down_sampling_window=2, down_sampling_method='avg', num_workers=10, itr=1, train_epochs=20, batch_size=128, patience=10, learning_rate=0.01, des='Exp', loss='MSE', lradj='TST', pct_start=0.2, use_amp=False, comment='none', use_gpu=True, gpu=0, use_multi_gpu=False, devices='0,1', p_hidden_dims=[128, 128], p_hidden_layers=2)
Use GPU: cuda:0
start training : long_term_forecast_weather_96_336_none_TimeMixer_custom_sl96_pl336_dm16_nh4_el3_dl1_df32_fc3_ebtimeF_dtTrue_Exp_0>>>>>>>>>>>>>>>>>>>>>>>>>>
train 36456
val 4935
test 10204
iters: 100, epoch: 1 | loss: 0.6762619
speed: 0.0802s/iter; left time: 447.6532s
iters: 200, epoch: 1 | loss: 0.7450211
speed: 0.0717s/iter; left time: 392.8241s
Epoch: 1 cost time: 20.772141218185425
Epoch: 1, Steps: 284 | Train Loss: 0.7176838 Vali Loss: 0.6182648 Test Loss: 0.2868061
Validation loss decreased (inf --> 0.618265). Saving model ...
Updating learning rate to 0.00180823692331512
iters: 100, epoch: 2 | loss: 0.6044372
speed: 0.1747s/iter; left time: 925.1899s
iters: 200, epoch: 2 | loss: 0.5523940
speed: 0.0717s/iter; left time: 372.3988s
Epoch: 2 cost time: 20.55257225036621
Epoch: 2, Steps: 284 | Train Loss: 0.5827757 Vali Loss: 0.5536075 Test Loss: 0.2660022
Validation loss decreased (0.618265 --> 0.553607). Saving model ...
Updating learning rate to 0.005206643013182129
iters: 100, epoch: 3 | loss: 0.5766494
speed: 0.1750s/iter; left time: 877.4838s
iters: 200, epoch: 3 | loss: 0.5486959
speed: 0.0716s/iter; left time: 351.8329s
Epoch: 3 cost time: 20.537157773971558
Epoch: 3, Steps: 284 | Train Loss: 0.5633632 Vali Loss: 0.5502650 Test Loss: 0.2656547
Validation loss decreased (0.553607 --> 0.550265). Saving model ...
Updating learning rate to 0.008601151212863664
iters: 100, epoch: 4 | loss: 0.6538562
speed: 0.1747s/iter; left time: 826.1667s
iters: 200, epoch: 4 | loss: 0.6420984
speed: 0.0717s/iter; left time: 331.7201s
Epoch: 4 cost time: 20.538926124572754
Epoch: 4, Steps: 284 | Train Loss: 0.5586641 Vali Loss: 0.5445257 Test Loss: 0.2642392
Validation loss decreased (0.550265 --> 0.544526). Saving model ...
Updating learning rate to 0.009999998805018026
iters: 100, epoch: 5 | loss: 0.5255791
speed: 0.1752s/iter; left time: 778.9302s
iters: 200, epoch: 5 | loss: 0.5437405
speed: 0.0717s/iter; left time: 311.3697s
Epoch: 5 cost time: 20.580398559570312
Epoch: 5, Steps: 284 | Train Loss: 0.5525623 Vali Loss: 0.5473270 Test Loss: 0.2656774
EarlyStopping counter: 1 out of 10
Updating learning rate to 0.009903251217478603
iters: 100, epoch: 6 | loss: 0.5712382
speed: 0.1752s/iter; left time: 729.1598s
iters: 200, epoch: 6 | loss: 0.5956228
speed: 0.0718s/iter; left time: 291.4277s
Epoch: 6 cost time: 20.575769901275635
Epoch: 6, Steps: 284 | Train Loss: 0.5469062 Vali Loss: 0.5537988 Test Loss: 0.2652604
EarlyStopping counter: 2 out of 10
Updating learning rate to 0.009618075204015223
iters: 100, epoch: 7 | loss: 0.6023598
speed: 0.1744s/iter; left time: 675.9986s
iters: 200, epoch: 7 | loss: 0.6505283
speed: 0.0717s/iter; left time: 270.8363s
Epoch: 7 cost time: 20.550930738449097
Epoch: 7, Steps: 284 | Train Loss: 0.5425374 Vali Loss: 0.5475795 Test Loss: 0.2653736
EarlyStopping counter: 3 out of 10
Updating learning rate to 0.009155429918896733
iters: 100, epoch: 8 | loss: 0.4873362
speed: 0.1751s/iter; left time: 628.9643s
iters: 200, epoch: 8 | loss: 0.5438763
speed: 0.0717s/iter; left time: 250.5183s
Epoch: 8 cost time: 20.549409866333008
Epoch: 8, Steps: 284 | Train Loss: 0.5379630 Vali Loss: 0.5472488 Test Loss: 0.2629513
EarlyStopping counter: 4 out of 10
Updating learning rate to 0.008533094560975773
iters: 100, epoch: 9 | loss: 0.4517022
speed: 0.1747s/iter; left time: 578.2332s
iters: 200, epoch: 9 | loss: 0.4840907
speed: 0.0718s/iter; left time: 230.2531s
Epoch: 9 cost time: 20.570077657699585
Epoch: 9, Steps: 284 | Train Loss: 0.5332942 Vali Loss: 0.5556108 Test Loss: 0.2662907
EarlyStopping counter: 5 out of 10
Updating learning rate to 0.007774985129047554
iters: 100, epoch: 10 | loss: 0.4708846
speed: 0.1747s/iter; left time: 528.4759s
iters: 200, epoch: 10 | loss: 0.5694571
speed: 0.0718s/iter; left time: 209.9871s
Epoch: 10 cost time: 20.54496145248413
Epoch: 10, Steps: 284 | Train Loss: 0.5300522 Vali Loss: 0.5519323 Test Loss: 0.2673647
EarlyStopping counter: 6 out of 10
Updating learning rate to 0.00691023534342841
iters: 100, epoch: 11 | loss: 0.5525529
speed: 0.1752s/iter; left time: 480.2941s
iters: 200, epoch: 11 | loss: 0.5092630
speed: 0.0717s/iter; left time: 189.4442s
Epoch: 11 cost time: 20.616445064544678
Epoch: 11, Steps: 284 | Train Loss: 0.5261522 Vali Loss: 0.5618097 Test Loss: 0.2690014
EarlyStopping counter: 7 out of 10
Updating learning rate to 0.0059720770534224185
iters: 100, epoch: 12 | loss: 0.5237783
speed: 0.1754s/iter; left time: 431.0713s
iters: 200, epoch: 12 | loss: 0.5110543
speed: 0.0719s/iter; left time: 169.5107s
Epoch: 12 cost time: 20.627060413360596
Epoch: 12, Steps: 284 | Train Loss: 0.5216495 Vali Loss: 0.5661159 Test Loss: 0.2704441
EarlyStopping counter: 8 out of 10
Updating learning rate to 0.0049965631559892795
iters: 100, epoch: 13 | loss: 0.4331133
speed: 0.1759s/iter; left time: 382.2790s
iters: 200, epoch: 13 | loss: 0.4477478
speed: 0.0718s/iter; left time: 148.9245s
Epoch: 13 cost time: 20.607174396514893
Epoch: 13, Steps: 284 | Train Loss: 0.5160140 Vali Loss: 0.5630207 Test Loss: 0.2732388
EarlyStopping counter: 9 out of 10
Updating learning rate to 0.004021182103132853
iters: 100, epoch: 14 | loss: 0.4574574
speed: 0.1752s/iter; left time: 331.0030s
iters: 200, epoch: 14 | loss: 0.5201182
speed: 0.0718s/iter; left time: 128.4090s
Epoch: 14 cost time: 20.578966856002808
Epoch: 14, Steps: 284 | Train Loss: 0.5108669 Vali Loss: 0.5685348 Test Loss: 0.2741197
EarlyStopping counter: 10 out of 10
Early stopping
testing : long_term_forecast_weather_96_336_none_TimeMixer_custom_sl96_pl336_dm16_nh4_el3_dl1_df32_fc3_ebtimeF_dtTrue_Exp_0<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
test 10204
test shape: (79, 128, 336, 21) (79, 128, 336, 21)
test shape: (10112, 336, 21) (10112, 336, 21)
mse:0.2642389237880707, mae:0.29182228446006775
rmse:0.5140417814254761, mape:0.6117508411407471, mspe:22384474.0
Args in experiment:
Namespace(task_name='long_term_forecast', is_training=1, model_id='weather_96_720', model='TimeMixer', data='custom', root_path='./dataset/weather/', data_path='weather.csv', features='M', target='OT', freq='h', checkpoints='./checkpoints/', seq_len=96, label_len=0, pred_len=720, seasonal_patterns='Monthly', inverse=False, top_k=5, num_kernels=6, enc_in=21, dec_in=21, c_out=21, d_model=16, n_heads=4, e_layers=3, d_layers=1, d_ff=32, moving_avg=25, factor=3, distil=True, dropout=0.1, embed='timeF', activation='gelu', output_attention=False, channel_independence=1, decomp_method='moving_avg', use_norm=1, down_sampling_layers=3, down_sampling_window=2, down_sampling_method='avg', num_workers=10, itr=1, train_epochs=20, batch_size=128, patience=10, learning_rate=0.01, des='Exp', loss='MSE', lradj='TST', pct_start=0.2, use_amp=False, comment='none', use_gpu=True, gpu=0, use_multi_gpu=False, devices='0,1', p_hidden_dims=[128, 128], p_hidden_layers=2)
Use GPU: cuda:0
start training : long_term_forecast_weather_96_720_none_TimeMixer_custom_sl96_pl720_dm16_nh4_el3_dl1_df32_fc3_ebtimeF_dtTrue_Exp_0>>>>>>>>>>>>>>>>>>>>>>>>>>
train 36072
val 4551
test 9820
iters: 100, epoch: 1 | loss: 0.8142495
speed: 0.0949s/iter; left time: 524.2079s
iters: 200, epoch: 1 | loss: 0.6766396
speed: 0.0866s/iter; left time: 469.3050s
Epoch: 1 cost time: 24.730847358703613
Epoch: 1, Steps: 281 | Train Loss: 0.8068021 Vali Loss: 0.7239532 Test Loss: 0.3527675
Validation loss decreased (inf --> 0.723953). Saving model ...
Updating learning rate to 0.001808262037764916
iters: 100, epoch: 2 | loss: 0.6173640
speed: 0.2044s/iter; left time: 1071.2935s
iters: 200, epoch: 2 | loss: 0.6137844
speed: 0.0867s/iter; left time: 445.8729s
Epoch: 2 cost time: 24.55042862892151
Epoch: 2, Steps: 281 | Train Loss: 0.6680108 Vali Loss: 0.6862536 Test Loss: 0.3416217
Validation loss decreased (0.723953 --> 0.686254). Saving model ...
Updating learning rate to 0.005206713998138916
iters: 100, epoch: 3 | loss: 0.6730509
speed: 0.2076s/iter; left time: 1029.5066s
iters: 200, epoch: 3 | loss: 0.6434111
speed: 0.0866s/iter; left time: 420.6864s
Epoch: 3 cost time: 24.582441329956055
Epoch: 3, Steps: 281 | Train Loss: 0.6507671 Vali Loss: 0.6802924 Test Loss: 0.3408227
Validation loss decreased (0.686254 --> 0.680292). Saving model ...
Updating learning rate to 0.008601226346554308
iters: 100, epoch: 4 | loss: 0.7849593
speed: 0.2058s/iter; left time: 962.6568s
iters: 200, epoch: 4 | loss: 0.6803224
speed: 0.0878s/iter; left time: 401.9479s
Epoch: 4 cost time: 24.905867099761963
Epoch: 4, Steps: 281 | Train Loss: 0.6441614 Vali Loss: 0.6722143 Test Loss: 0.3388726
Validation loss decreased (0.680292 --> 0.672214). Saving model ...
Updating learning rate to 0.009999998779366193
iters: 100, epoch: 5 | loss: 0.6364537
speed: 0.2161s/iter; left time: 950.2600s
iters: 200, epoch: 5 | loss: 0.6571795
speed: 0.0883s/iter; left time: 379.2287s
Epoch: 5 cost time: 25.106192588806152
Epoch: 5, Steps: 281 | Train Loss: 0.6346934 Vali Loss: 0.6763712 Test Loss: 0.3437357
EarlyStopping counter: 1 out of 10
Updating learning rate to 0.009903243992354873
iters: 100, epoch: 6 | loss: 0.6608462
speed: 0.2095s/iter; left time: 862.1168s
iters: 200, epoch: 6 | loss: 0.5328134
speed: 0.0879s/iter; left time: 352.8371s
Epoch: 6 cost time: 24.89567756652832
Epoch: 6, Steps: 281 | Train Loss: 0.6301392 Vali Loss: 0.6720090 Test Loss: 0.3452646
Validation loss decreased (0.672214 --> 0.672009). Saving model ...
Updating learning rate to 0.009618061057077049
iters: 100, epoch: 7 | loss: 0.5821897
speed: 0.2097s/iter; left time: 804.1047s
iters: 200, epoch: 7 | loss: 0.6629183
speed: 0.0870s/iter; left time: 324.9126s
Epoch: 7 cost time: 24.728896141052246
Epoch: 7, Steps: 281 | Train Loss: 0.6234477 Vali Loss: 0.6766704 Test Loss: 0.3424388
EarlyStopping counter: 1 out of 10
Updating learning rate to 0.009155409393803014
iters: 100, epoch: 8 | loss: 0.6022717
speed: 0.2096s/iter; left time: 745.0661s
iters: 200, epoch: 8 | loss: 0.6708240
speed: 0.0878s/iter; left time: 303.3604s
Epoch: 8 cost time: 24.855897903442383
Epoch: 8, Steps: 281 | Train Loss: 0.6200137 Vali Loss: 0.6743950 Test Loss: 0.3449188
EarlyStopping counter: 2 out of 10
Updating learning rate to 0.008533068446494351
iters: 100, epoch: 9 | loss: 0.6271792
speed: 0.2133s/iter; left time: 698.0489s
iters: 200, epoch: 9 | loss: 0.5900708
speed: 0.0876s/iter; left time: 277.9912s
Epoch: 9 cost time: 24.90035605430603
Epoch: 9, Steps: 281 | Train Loss: 0.6172251 Vali Loss: 0.6732223 Test Loss: 0.3422192
EarlyStopping counter: 3 out of 10
Updating learning rate to 0.0077749544287433045
iters: 100, epoch: 10 | loss: 0.6388667
speed: 0.2089s/iter; left time: 625.1309s
iters: 200, epoch: 10 | loss: 0.5596918
speed: 0.0877s/iter; left time: 253.6024s
Epoch: 10 cost time: 24.834704160690308
Epoch: 10, Steps: 281 | Train Loss: 0.6112832 Vali Loss: 0.6812492 Test Loss: 0.3471961
EarlyStopping counter: 4 out of 10
Updating learning rate to 0.006910201237096809
iters: 100, epoch: 11 | loss: 0.6107982
speed: 0.2092s/iter; left time: 567.1717s
iters: 200, epoch: 11 | loss: 0.7271204
speed: 0.0874s/iter; left time: 228.1348s
Epoch: 11 cost time: 24.82628321647644
Epoch: 11, Steps: 281 | Train Loss: 0.6062498 Vali Loss: 0.6967529 Test Loss: 0.3520924
EarlyStopping counter: 5 out of 10
Updating learning rate to 0.005972040851750663
iters: 100, epoch: 12 | loss: 0.5930983
speed: 0.2091s/iter; left time: 508.2303s
iters: 200, epoch: 12 | loss: 0.5886881
speed: 0.0874s/iter; left time: 203.6228s
Epoch: 12 cost time: 24.776528120040894
Epoch: 12, Steps: 281 | Train Loss: 0.6009362 Vali Loss: 0.6849611 Test Loss: 0.3488447
EarlyStopping counter: 6 out of 10
Updating learning rate to 0.00499652625018731
iters: 100, epoch: 13 | loss: 0.5866392
speed: 0.2081s/iter; left time: 447.2292s
iters: 200, epoch: 13 | loss: 0.5042546
speed: 0.0875s/iter; left time: 179.2445s
Epoch: 13 cost time: 24.757347345352173
Epoch: 13, Steps: 281 | Train Loss: 0.5950525 Vali Loss: 0.7007942 Test Loss: 0.3564042
EarlyStopping counter: 7 out of 10
Updating learning rate to 0.004021145911469944
iters: 100, epoch: 14 | loss: 0.5368530
speed: 0.2076s/iter; left time: 387.8565s
iters: 200, epoch: 14 | loss: 0.6062331
speed: 0.0877s/iter; left time: 155.0381s
Epoch: 14 cost time: 24.7646701335907
Epoch: 14, Steps: 281 | Train Loss: 0.5889814 Vali Loss: 0.7063530 Test Loss: 0.3574873
EarlyStopping counter: 8 out of 10
Updating learning rate to 0.0030833831550158746
iters: 100, epoch: 15 | loss: 0.4924599
speed: 0.2067s/iter; left time: 328.1095s
iters: 200, epoch: 15 | loss: 0.5690606
speed: 0.0870s/iter; left time: 129.3131s
Epoch: 15 cost time: 24.61445379257202
Epoch: 15, Steps: 281 | Train Loss: 0.5833323 Vali Loss: 0.6996056 Test Loss: 0.3548250
EarlyStopping counter: 9 out of 10
Updating learning rate to 0.0022192756776522196
iters: 100, epoch: 16 | loss: 0.5421556
speed: 0.2064s/iter; left time: 269.5861s
iters: 200, epoch: 16 | loss: 0.4972541
speed: 0.0871s/iter; left time: 105.0962s
Epoch: 16 cost time: 24.679931163787842
Epoch: 16, Steps: 281 | Train Loss: 0.5771793 Vali Loss: 0.7160063 Test Loss: 0.3596286
EarlyStopping counter: 10 out of 10
Early stopping
testing : long_term_forecast_weather_96_720_none_TimeMixer_custom_sl96_pl720_dm16_nh4_el3_dl1_df32_fc3_ebtimeF_dtTrue_Exp_0<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
test 9820
test shape: (76, 128, 720, 21) (76, 128, 720, 21)
test shape: (9728, 720, 21) (9728, 720, 21)
mse:0.345264196395874, mae:0.3470550775527954
rmse:0.5875918865203857, mape:0.677718460559845, mspe:23126632.0

Experiment reproduction

Following the provided README, I ran the long-term-forecast TimeMixer.sh script for weather under scripts; the training and test results I obtained are shown in the attached screenshot.
These are only the results for scale = 0. Which parameter should I modify to obtain the accurate Mixed prediction results?

About channel_independence

When channel_independence is set to 0, the program throws an error: the first dimensions of x_mark and x do not match, so they cannot be concatenated. How did you handle this?

Does this TimeMixer support multiple input single output prediction?

Thank you for providing such great work. Does TimeMixer support multiple-input single-output (MISO) prediction? MISO is more practical for us, since we are trying to use many covariate (exogenous) time series to predict the target time series (see the sketch below).

I am also looking forward to seeing the code for TimeXer: Empowering Transformers for Time Series Forecasting with Exogenous Variables.
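A hedged pointer for the MISO question above: TimeMixer's run.py follows the Time-Series-Library argument style, where multivariate-input/univariate-output forecasting is normally selected with --features MS and --target naming the output column, rather than by changing the model. A minimal sketch; the paths, column count, and target name are placeholders, not values from this repository:

# Sketch of a MISO (multivariate input, single target output) run.
# --features MS: all columns are fed as input; in MS mode the loss/metrics use the --target column.
# root_path / data_path / enc_in / target are placeholders for your own data.
python -u run.py \
  --task_name long_term_forecast --is_training 1 \
  --model TimeMixer --model_id my_data_96_96 \
  --data custom --root_path ./dataset/my_data/ --data_path my_data.csv \
  --features MS --target OT \
  --seq_len 96 --label_len 0 --pred_len 96 \
  --enc_in 21 --dec_in 21 --c_out 21 \
  --down_sampling_layers 3 --down_sampling_window 2 --down_sampling_method avg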

How to add future features for prediction

Hello, thank you very much for providing such excellent work. I am studying photovoltaic power prediction and would like to test it with your TimeMixer. My data is in the same format as the ETT dataset, with slightly different feature fields: ['date', ...(other features), target feature], where the other features are weather variables such as irradiance, temperature, and humidity, and the target feature is the actual power generation. The resolution of my dataset is 15 min (96 points per day). I would like to use the historical data of the past 5-10 days (historical weather and historical power), combined with the weather forecast for the next 10 days (960 points), to predict the power of the next 10 days (960 points). How should I adjust the input variables of the network? I notice that the model now supports using future temporal features for prediction, but those future temporal features seem to be encoded timestamps, which I don't quite understand. Since the main factor that affects power is future weather, I want to add future weather data to the forecast.
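A hedged sketch for the timestamp part of this question: the README's use_future_temporal_feature option only feeds encoded calendar features (produced by --embed timeF and controlled by --freq) over the prediction window; feeding known future weather covariates is not covered by that flag and would require extending the data loader and model inputs. Assuming the option is exposed as a run.py argument of the same name, enabling it looks roughly like this (all other values are placeholders for the 15-min PV setup described above):

# Sketch: enable future temporal (calendar) features for the prediction window.
# This does NOT inject future weather covariates; it only uses encoded timestamps.
python -u run.py \
  --task_name long_term_forecast --is_training 1 \
  --model TimeMixer --model_id pv_960_960 \
  --data custom --root_path ./dataset/pv/ --data_path pv.csv \
  --features MS --target power \
  --freq 15min \
  --seq_len 960 --label_len 0 --pred_len 960 \
  --enc_in 8 --dec_in 8 --c_out 8 \
  --use_future_temporal_feature 1 \
  --down_sampling_layers 3 --down_sampling_window 2 --down_sampling_method avg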

Error: B, T, N = x.size() ValueError: not enough values to unpack (expected 3, got 2)

I am using the ETTm1 dataset, and I noticed the following code in the TimeMixer model: for i, x, x_mark in zip(range(len(x_enc)), x_enc, x_mark_enc):
Here x_enc is 3-dimensional, [batch size, sequence length, feature], so the x produced by the loop should be 2-dimensional, which makes the later B, T, N = x.size() hard to understand. I can only assume that x_enc is supposed to be a 4-dimensional tensor; if so, what is its first dimension? How should I fix this problem?

IndexError: list index out of range

Traceback (most recent call last):
File "run.py", line 143, in
exp.train(setting)
File "/root/.conda/TimeMixer-main/exp/exp_long_term_forecasting.py", line 175, in train
outputs = self.model(batch_x, batch_x_mark, dec_inp, batch_y_mark)
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/root/.conda/TimeMixer-main/models/TimeMixer.py", line 402, in forward
dec_out_list = self.forecast(x_enc, x_mark_enc, x_dec, x_mark_dec)
File "/root/.conda/TimeMixer-main/models/TimeMixer.py", line 367, in forecast
enc_out_list = self.pdm_blocks[i](enc_out_list)
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/root/.conda/TimeMixer-main/models/TimeMixer.py", line 176, in forward
out_season_list = self.mixing_multi_scale_season(season_list)
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/root/.conda/TimeMixer-main/models/TimeMixer.py", line 62, in forward
out_low = season_list[1]
IndexError: list index out of range
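A hedged note on this traceback: out_low = season_list[1] fails when only a single scale reaches the season/trend mixing, so there is no coarser series to mix with. One plausible cause is launching run.py without the multi-scale down-sampling arguments that the provided scripts set; the sketch below shows the flags to check (the dataset values follow the weather example and are otherwise placeholders):

# The PDM blocks mix season/trend components across scales, so at least two
# scales must exist: make sure down_sampling_layers >= 1 and a method is set.
python -u run.py \
  --task_name long_term_forecast --is_training 1 \
  --model TimeMixer --model_id weather_96_96 \
  --data custom --root_path ./dataset/weather/ --data_path weather.csv \
  --features M --seq_len 96 --label_len 0 --pred_len 96 \
  --enc_in 21 --dec_in 21 --c_out 21 \
  --down_sampling_layers 3 --down_sampling_window 2 --down_sampling_method avg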

Multi-feature prediction

Hello, thank you very much for your work!
I would like to use your model for multi-feature prediction. For example, on the PEMS08 dataset I want to use both the flow and speed features. Is there a good way to do this, and what modifications does the model need? (The dataset is already prepared; I just don't know how to change the model part.)
Thank you very much!
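A hedged suggestion for the PEMS08 question above: in the Time-Series-Library data pipeline this is usually handled on the data side rather than by modifying the model. If the flow and speed series are flattened into the columns of a single CSV with a date column (e.g. flow_1..flow_170, speed_1..speed_170), the standard multivariate mode can consume them directly; the file name and column counts below are assumptions, not files shipped with this repository:

# Sketch: treat every flow/speed column as one channel of a multivariate series.
# enc_in / dec_in / c_out must equal the number of non-date columns in the CSV
# (here 2 features x 170 sensors = 340, an assumed layout).
python -u run.py \
  --task_name long_term_forecast --is_training 1 \
  --model TimeMixer --model_id pems08_flow_speed_96_12 \
  --data custom --root_path ./dataset/PEMS08/ --data_path pems08_flow_speed.csv \
  --features M --seq_len 96 --label_len 0 --pred_len 12 \
  --enc_in 340 --dec_in 340 --c_out 340 \
  --down_sampling_layers 3 --down_sampling_window 2 --down_sampling_method avg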

GELU hard coded

Hi,

It seems that the activation function is hard-coded in TimeMixer.py as nn.GELU, so I cannot set it via the --activation parameter.

Thanks

What do I need to change to train on my own dataset?

In run.py I changed both root_path and data_path to point to my own data, but when I run it I get the following error:
Traceback (most recent call last):
File "d:\code\TimeMixer-main\run.py", line 142, in
exp.train(setting)
File "d:\code\TimeMixer-main\exp\exp_long_term_forecasting.py", line 101, in train
vali_data, vali_loader = self._get_data(flag='val')
File "d:\code\TimeMixer-main\exp\exp_long_term_forecasting.py", line 30, in _get_data
data_set, data_loader = data_provider(self.args, flag)
File "d:\code\TimeMixer-main\data_provider\data_factory.py", line 88, in data_provider
print(flag, len(data_set))
ValueError: __len__() should return >= 0
Is my input dataset in the wrong format? What do I need to do to use my own data?
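A hedged note on this error: ValueError: __len__() should return >= 0 is raised by the len(data_set) call when the dataset's computed length is negative, which with the custom loader typically means a split has fewer rows than seq_len + pred_len (or the CSV lacks the expected date column plus feature columns). A sketch with deliberately small windows for a short custom CSV; the paths and channel counts are placeholders:

# Sketch: run on a small custom CSV. The custom loader expects a 'date' column
# followed by feature columns, and every train/val/test split must contain more
# than seq_len + pred_len rows, otherwise its length goes negative.
python -u run.py \
  --task_name long_term_forecast --is_training 1 \
  --model TimeMixer --model_id my_data_48_24 \
  --data custom --root_path ./dataset/my_data/ --data_path my_data.csv \
  --features M --seq_len 48 --label_len 0 --pred_len 24 \
  --enc_in 7 --dec_in 7 --c_out 7 \
  --down_sampling_layers 3 --down_sampling_window 2 --down_sampling_method avg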
