# W-LSTMix: A Hybrid Modular Forecasting Framework for Trend and Pattern Learning in Short-Term Load Forecasting
W-LSTMix is a lightweight, modular hybrid forecasting model designed for building-level load forecasting across diverse building types. With approximately 0.13 million parameters, W-LSTMix combines:
- Wavelet-based signal decomposition
- N-BEATS for ensemble forecasting
- LSTM for gated memory
- MLP-Mixer for efficient patch-wise mixing
This model achieves high forecasting accuracy with a minimal computational footprint.
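To make the composition above concrete, here is a minimal PyTorch sketch of how the wavelet split, LSTM trend branch, and MLP-Mixer pattern branch could be wired together. All module names, layer sizes, the 168-hour input / 24-hour horizon, and the single-level `db4` wavelet are illustrative assumptions, and the N-BEATS-style doubly residual stacking is omitted; see the official repository for the actual implementation.

```python
# Minimal structural sketch of a W-LSTMix-style model; names, sizes, and the
# exact branch composition are assumptions for illustration, not the released code.
import torch
import torch.nn as nn
import pywt  # PyWavelets, used here for a single-level wavelet split


class WLSTMixSketch(nn.Module):
    """Wavelet split -> LSTM trend branch + MLP-Mixer pattern branch."""

    def __init__(self, input_len=168, horizon=24, hidden=64, patch=24):
        super().__init__()
        self.input_len, self.patch = input_len, patch
        self.n_patches = input_len // patch
        # Trend branch: gated memory over the low-frequency component
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.trend_head = nn.Linear(hidden, horizon)
        # Pattern branch: patch-wise (token/channel) mixing over the high-frequency component
        self.token_mix = nn.Sequential(nn.LayerNorm(patch), nn.Linear(patch, patch), nn.GELU())
        self.channel_mix = nn.Sequential(nn.LayerNorm(self.n_patches), nn.Linear(self.n_patches, self.n_patches), nn.GELU())
        self.pattern_head = nn.Linear(input_len, horizon)

    def decompose(self, x):
        # Single-level DWT into approximation (trend) and detail coefficients, each
        # reconstructed to the input length so both branches see aligned series.
        # Note: gradients do not flow through this NumPy-based step in the sketch.
        approx, detail = pywt.dwt(x.detach().cpu().numpy(), "db4")
        trend = pywt.idwt(approx, None, "db4")[..., : self.input_len]
        residual = pywt.idwt(None, detail, "db4")[..., : self.input_len]
        to_tensor = lambda a: torch.as_tensor(a, dtype=x.dtype, device=x.device)
        return to_tensor(trend), to_tensor(residual)

    def forward(self, x):  # x: (batch, input_len)
        trend, residual = self.decompose(x)
        # Trend forecast from the last LSTM hidden state
        h, _ = self.lstm(trend.unsqueeze(-1))
        trend_forecast = self.trend_head(h[:, -1])
        # Pattern forecast from patch-wise mixing (N-BEATS-style residual stacking omitted)
        p = residual.reshape(-1, self.n_patches, self.patch)
        p = p + self.token_mix(p)
        p = (p.transpose(1, 2) + self.channel_mix(p.transpose(1, 2))).transpose(1, 2)
        pattern_forecast = self.pattern_head(p.reshape(-1, self.input_len))
        return trend_forecast + pattern_forecast


model = WLSTMixSketch()
print(model(torch.randn(8, 168)).shape)  # torch.Size([8, 24])
```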
## Features
- Hybrid architecture combining N-BEATS, LSTM, and MLP-Mixer
- Lightweight (~0.13M parameters) and edge-deployable
- Modular design for flexible adaptation
- Effective generalization across building types
- Zero-shot capabilities
## Colab Quickstart
Use the following steps to try W-LSTMix on Google Colab:
```python
# Clone the W-LSTMix repository and move into it
!git clone https://github.com/shivDwd/W-LSTMix.git
%cd W-LSTMix

# Download the test dataset from the Hugging Face Hub
!git clone https://huggingface.co/datasets/shivDwd/W_LSTMix_test_dataset

# Install dependencies and run the test script
!pip install -r requirements.txt
!python test.py
```
## Real-World Building Datasets
This model is trained on large-scale real-world building energy datasets from commercial and residential domains, collected from multiple countries.
| Dataset | Location | Type | # Buildings | # Observations | Years |
|---|---|---|---|---|---|
| IBlend | India | Commercial | 9 | 296,357 | 2013–2017 |
| Enernoc | USA | Commercial | 100 | 877,728 | 2012 |
| NEST | Switzerland | Residential | 1 | 34,715 | 2019–2023 |
| Ireland | Ireland | Residential | 20 | 174,398 | 2020 |
| MFRED | USA | Residential | 26 | 227,622 | 2019 |
| CEEW | India | Residential | 84 | 923,897 | 2019–2021 |
| SMART* | USA | Residential | 114 | 958,998 | 2016 |
| Prayas | India | Residential | 116 | 1,536,409 | 2018–2020 |
| NEEA | USA | Residential | 192 | 2,922,289 | 2018–2020 |
| SGSC | Australia | Residential | 13,735 | 172,277,213 | 2011–2014 |
| GoiEner | Spain | Residential | 25,559 | 632,313,933 | 2014–2022 |
Total: 39,956 buildings and 812M+ hourly observations
⚠️ These datasets are used under their respective terms/licenses for academic research only.
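For context on how hourly series like these are typically turned into short-term forecasting samples, the sketch below slices one building's hourly load into lookback/horizon windows. The 168-hour lookback, 24-hour horizon, and 24-hour stride are assumptions for illustration, not necessarily the settings used to train W-LSTMix.

```python
# Hypothetical windowing of one building's hourly load series into
# (lookback, horizon) training pairs; the 168/24/24 settings are assumptions.
import numpy as np

def make_windows(series, lookback=168, horizon=24, stride=24):
    """Slice an hourly load series into (input, target) pairs."""
    inputs, targets = [], []
    for start in range(0, len(series) - lookback - horizon + 1, stride):
        inputs.append(series[start : start + lookback])
        targets.append(series[start + lookback : start + lookback + horizon])
    return np.stack(inputs), np.stack(targets)

# Example with a synthetic year of hourly data (8760 points)
hourly_load = np.random.rand(8760)
X, y = make_windows(hourly_load)
print(X.shape, y.shape)  # (num_windows, 168) (num_windows, 24)
```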
## Comparative Evaluation
We benchmark W-LSTMix against state-of-the-art Time Series Foundation Models (TSFMs) and N-BEATS under two broad settings: zero-shot and fine-tuning. Please refer to the publication for a detailed summary of the results.
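The paper reports the full evaluation protocol and results; purely as an illustration of how zero-shot and fine-tuned forecasts can be scored against ground truth, the snippet below computes MAE and NRMSE, two error measures commonly used in load forecasting (the metrics reported in the paper may differ).

```python
# Illustrative error metrics for comparing forecasting settings;
# the metrics and normalization used in the paper may differ.
import numpy as np

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def nrmse(y_true, y_pred):
    # RMSE normalized by the mean of the ground truth
    return np.sqrt(np.mean((y_true - y_pred) ** 2)) / np.mean(y_true)

y_true = np.array([1.2, 0.9, 1.5, 1.1])        # actual hourly load (kWh)
zero_shot = np.array([1.0, 1.0, 1.3, 1.2])     # forecast without fine-tuning
fine_tuned = np.array([1.15, 0.95, 1.45, 1.1]) # forecast after fine-tuning

for name, pred in [("zero-shot", zero_shot), ("fine-tuned", fine_tuned)]:
    print(f"{name}: MAE={mae(y_true, pred):.3f}, NRMSE={nrmse(y_true, pred):.3f}")
```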
W-LSTMix: A Hybrid Modular Forecasting Framework for Trend and Pattern Learning in Short-Term Load Forecasting.
Shivam Dwivedi, Anuj Kumar, Harish Kumar Saravanan, Pandarasamy Arjunan.
In Proceedings of the 1st ICML Workshop on Foundation Models for Structured Data, Vancouver, Canada, 2025.
https://openreview.net/pdf?id=bG04Z3Jioc
To learn more about W-LSTMix, please refer to the official GitHub repository.
## Citation
If you use W-LSTMix in your research or applications, please cite our paper:
```bibtex
@inproceedings{dwivedi2025wlstmix,
  title={W-{LSTM}ix: A Hybrid Modular Forecasting Framework for Trend and Pattern Learning in Short-Term Load Forecasting},
  author={Shivam Dwivedi and Anuj Kumar and Harish Kumar Saravanan and Pandarasamy Arjunan},
  booktitle={1st ICML Workshop on Foundation Models for Structured Data},
  year={2025},
  url={https://openreview.net/forum?id=bG04Z3Jioc}
}
```