- iTransformer: Inverted Transformers Are Effective for Time Series . . .
The iTransformer model achieves state-of-the-art on challenging real-world datasets, which further empowers the Transformer family with promoted performance, generalization ability across different variates, and better utilization of arbitrary lookback windows, making it a nice alternative as the fundamental backbone of time series forecasting.
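To make the "inverted" idea concrete, here is a minimal PyTorch sketch of one iTransformer-style block: each variate's entire lookback window is embedded as a single token, so self-attention models dependencies across variates rather than across time steps. Layer sizes and dimensions are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class InvertedAttentionBlock(nn.Module):
    """Sketch of inverted tokenization: one token per variate, attention across variates."""

    def __init__(self, lookback: int, horizon: int, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(lookback, d_model)   # whole lookback series -> one token
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_model * 2), nn.GELU(),
                                 nn.Linear(d_model * 2, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.head = nn.Linear(d_model, horizon)     # token -> future values of that variate

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, lookback, n_variates) -> tokens: (batch, n_variates, d_model)
        tokens = self.embed(x.transpose(1, 2))
        attn_out, _ = self.attn(tokens, tokens, tokens)   # attention across variates
        tokens = self.norm1(tokens + attn_out)
        tokens = self.norm2(tokens + self.ffn(tokens))
        return self.head(tokens).transpose(1, 2)          # (batch, horizon, n_variates)

x = torch.randn(8, 96, 7)                        # 8 samples, 96-step lookback, 7 variates
print(InvertedAttentionBlock(96, 24)(x).shape)   # torch.Size([8, 24, 7])
```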
- A Time Series is Worth 64 Words: Long-term Forecasting with. . .
Channel-independent patch time series transformer works very well for long-term forecasting and representation learning.
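The following is a rough sketch of the channel-independence plus patching idea from this paper (PatchTST): each channel is split into overlapping patches that become tokens, and one shared Transformer encoder processes every channel independently. Patch length, stride, and widths are illustrative choices, not the paper's exact hyperparameters.

```python
import torch
import torch.nn as nn

class PatchTSTSketch(nn.Module):
    """Sketch: fold channels into the batch, tokenize each channel by patches."""

    def __init__(self, lookback=96, horizon=24, patch_len=16, stride=8, d_model=64):
        super().__init__()
        self.patch_len, self.stride = patch_len, stride
        n_patches = (lookback - patch_len) // stride + 1
        self.proj = nn.Linear(patch_len, d_model)          # patch -> token
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(n_patches * d_model, horizon)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, _, c = x.shape                                   # (batch, lookback, channels)
        x = x.permute(0, 2, 1).reshape(b * c, -1)           # fold channels into batch
        patches = x.unfold(-1, self.patch_len, self.stride) # (b*c, n_patches, patch_len)
        z = self.encoder(self.proj(patches))                # shared weights for all channels
        out = self.head(z.flatten(1))                       # (b*c, horizon)
        return out.reshape(b, c, -1).permute(0, 2, 1)       # (batch, horizon, channels)

x = torch.randn(8, 96, 7)
print(PatchTSTSketch()(x).shape)   # torch.Size([8, 24, 7])
```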
- Differential Transformer - OpenReview
Transformer tends to over-allocate attention to irrelevant context. In this work, we introduce Diff Transformer, which amplifies attention to the relevant context while canceling noise.
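A minimal sketch of the differential attention idea: compute two softmax attention maps and subtract one, scaled by lambda, from the other, so noise common to both maps cancels while attention to relevant context stands out. This is single-head with no output projection, and lambda is a fixed scalar here rather than the paper's learned, re-parameterized version.

```python
import torch
import torch.nn.functional as F

def diff_attention(x, wq1, wk1, wq2, wk2, wv, lam=0.5):
    """Differential attention sketch: (softmax(Q1 K1^T) - lam * softmax(Q2 K2^T)) V."""
    d = wq1.shape[1]
    a1 = F.softmax((x @ wq1) @ (x @ wk1).transpose(-2, -1) / d**0.5, dim=-1)
    a2 = F.softmax((x @ wq2) @ (x @ wk2).transpose(-2, -1) / d**0.5, dim=-1)
    return (a1 - lam * a2) @ (x @ wv)   # differential map weights the values

seq, d_model, d_head = 10, 32, 16
x = torch.randn(1, seq, d_model)
w = [torch.randn(d_model, d_head) for _ in range(5)]
print(diff_attention(x, *w).shape)   # torch.Size([1, 10, 16])
```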
- Crossformer: Transformer Utilizing Cross-Dimension Dependency for . . .
We propose Crossformer, a Transformer-based model that explicitly utilizes cross-dimension dependency for multivariate time series forecasting.
- TimeMixer++: A General Time Series Pattern Machine for Universal . . .
Building Blocks: iTransformer primarily uses feed-forward networks (FFN) for encoding 1D time series. In contrast, TimeMixer++ transforms time series into multi-resolution 2D time images, enabling dual-axis attention for pattern decomposition and hierarchical convolutions for mixing.
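A rough sketch of the "2D time image" idea mentioned in the snippet above: a series of length L is folded into a (L // period, period) grid, then one attention pass mixes within periods (rows) and another across periods (columns). The period is fixed here; the paper operates at multiple resolutions, with periods selected from the data rather than hard-coded.

```python
import torch
import torch.nn as nn

class DualAxisMixer(nn.Module):
    """Sketch: fold a 1D series into a 2D time image, attend along both axes."""

    def __init__(self, period=24, d_model=32, n_heads=4):
        super().__init__()
        self.period = period
        self.embed = nn.Linear(1, d_model)
        self.row_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, length = x.shape
        rows = length // self.period
        img = self.embed(x[:, : rows * self.period, None])    # (b, L, d_model)
        img = img.reshape(b, rows, self.period, -1)           # 2D time image
        # attend within each period (row axis)
        flat = img.reshape(b * rows, self.period, -1)
        flat, _ = self.row_attn(flat, flat, flat)
        img = flat.reshape(b, rows, self.period, -1)
        # attend across periods (column axis)
        flat = img.transpose(1, 2).reshape(b * self.period, rows, -1)
        flat, _ = self.col_attn(flat, flat, flat)
        return flat.reshape(b, self.period, rows, -1).transpose(1, 2)

x = torch.randn(4, 96)            # 4 univariate series of length 96
print(DualAxisMixer()(x).shape)   # torch.Size([4, 4, 24, 32])
```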
- Diffusion Auto-regressive Transformer for Effective Self-supervised . . .
Self-supervised learning has become an essential and popular approach for enhancing time series forecasting, enabling models to learn universal representations from unlabeled data.
- PINNsFormer: A Transformer-Based Framework For Physics-Informed. . .
Physics-Informed Neural Networks (PINNs) have emerged as a promising deep learning framework for approximating numerical solutions to partial differential equations (PDEs).
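For context on the framework PINNsFormer builds on, here is a minimal sketch of the vanilla PINN objective: a network u(x, t) is trained so that the PDE residual vanishes at sampled collocation points. The PDE (1D heat equation, u_t = u_xx), architecture, and sampling are illustrative choices; boundary and initial-condition loss terms are omitted.

```python
import torch
import torch.nn as nn

# Network approximating u(x, t); input is a (x, t) pair, output is a scalar.
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 64),
                    nn.Tanh(), nn.Linear(64, 1))

def pde_residual(xt: torch.Tensor) -> torch.Tensor:
    """Residual of u_t = u_xx at collocation points, via automatic differentiation."""
    xt = xt.requires_grad_(True)
    u = net(xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0:1]
    return u_t - u_xx

xt = torch.rand(256, 2)                   # collocation points in (x, t)
loss = pde_residual(xt).pow(2).mean()     # PDE term only; BC/IC terms omitted
loss.backward()
print(float(loss))
```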