
Google Research Introduces TimesFM: A Single Forecasting Model Pre-Trained on a Large Time-Series Corpus of 100B Real World Time-Points


Time series forecasting is a vital task in machine learning and is regularly used in domains such as finance, manufacturing, healthcare, and the natural sciences. Researchers from Google introduced a decoder-only model for the task, called TimesFM, based on pretraining a patched-decoder style attention model on a large time-series corpus comprising both real-world and synthetic datasets. Time series data, collected at regular intervals over time, plays an important role in predicting future values. Traditional statistical methods like ARIMA and GARCH have been widely used for this purpose. Recent advances in deep learning, particularly in large language models (LLMs) for natural language processing (NLP), have opened new ways for researchers to handle time series forecasting by applying similar ideas to the task.
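To make the forecasting setup concrete, the sketch below shows a classical ARIMA baseline of the kind TimesFM is compared against. It uses the statsmodels library and a synthetic series; the series, the ARIMA order, and the 12-step horizon are illustrative choices, not details from the paper.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly series: trend + yearly seasonality + noise (illustrative only).
rng = np.random.default_rng(0)
t = np.arange(120)
series = 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, size=t.size)

# Hold out the last 12 points as the forecast horizon and fit a classical ARIMA baseline.
horizon = 12
train, test = series[:-horizon], series[-horizon:]
model = ARIMA(train, order=(2, 1, 2)).fit()

# Forecast the held-out horizon and report mean absolute error.
forecast = model.forecast(steps=horizon)
print("ARIMA MAE over a 12-step horizon:", np.abs(forecast - test).mean())
```

A foundation model like TimesFM aims to replace this per-dataset fitting step with a single pretrained model that forecasts new series out of the box.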

Existing deep learning models such as DeepAR, Temporal Convolutions, and N-BEATS are popular for time series forecasting and outperform traditional statistical methods. There has also been recent work on reusing or fine-tuning large language models (LLMs) like GPT-3 and LLaMA-2 for time series forecasting. In the paper, the researchers investigate whether a model pre-trained on massive amounts of time-series data can learn temporal patterns useful for accurate forecasting on previously unseen datasets.

TimesFM’s architecture is a stacked transformer with a patched-decoder style attention mechanism, inspired by the success of patch-based modeling in long-horizon forecasting. The model uses decoder-only training, which allows it to predict the future after seeing different numbers of input patches, handled in parallel during training. The training data includes both real-world and synthetic data. The real-world data is taken from diverse sources like Google Trends and Wiki Pageviews, while the synthetic data is generated from statistical models like ARIMA.
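The patching idea can be sketched as follows: the input series is split into fixed-length patches, each patch is embedded into a token, a causal (decoder-only) transformer processes the token sequence, and each position predicts the output patch that follows it. The code below is a simplified PyTorch illustration of that structure, not the released TimesFM implementation; the patch lengths, layer sizes, and class name are assumptions made for this sketch.

```python
import torch
import torch.nn as nn

class PatchedDecoderForecaster(nn.Module):
    """Simplified patched decoder-only forecaster (illustrative, not the official TimesFM)."""

    def __init__(self, input_patch_len=32, output_patch_len=128,
                 d_model=256, n_layers=4, n_heads=4):
        super().__init__()
        self.input_patch_len = input_patch_len
        self.embed = nn.Linear(input_patch_len, d_model)        # patch -> token embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, n_layers)   # run with a causal mask below
        self.head = nn.Linear(d_model, output_patch_len)        # token -> future patch

    def forward(self, series):
        # series: (batch, context_len); context_len must be a multiple of input_patch_len.
        b, n = series.shape
        patches = series.view(b, n // self.input_patch_len, self.input_patch_len)
        tokens = self.embed(patches)
        # Causal mask so each patch token attends only to itself and earlier patches.
        causal = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        hidden = self.decoder(tokens, mask=causal)
        # Every position predicts the output patch that follows it.
        return self.head(hidden)

# Usage: forecast 128 future points from a 512-point context (shapes are assumptions).
model = PatchedDecoderForecaster()
context = torch.randn(8, 512)
pred_patches = model(context)       # (8, 16, 128): one prediction per input patch
forecast = pred_patches[:, -1, :]   # (8, 128): the forecast after the last input patch
```

A design point the paper highlights is that the output patch can be longer than the input patch, so long horizons require fewer autoregressive steps; the sketch mirrors that with a 32-point input patch and a 128-point output patch.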

Experiments show that TimesFM achieves impressive zero-shot forecasting performance. Not only is the model's performance impressive, but it is also more efficient than existing models in parameter count and pretraining data. The model is evaluated on public datasets from Darts, Monash, and Informer, showcasing its ability to generalize and outperform specialized baselines.
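Zero-shot here means the pretrained model is applied to a benchmark dataset it never saw during pretraining, with no fine-tuning. A minimal sketch of such an evaluation loop is shown below; it reuses the hypothetical PatchedDecoderForecaster class from the earlier sketch and random data standing in for a real benchmark, so the printed number only demonstrates the flow.

```python
import torch

@torch.no_grad()
def zero_shot_mae(model, windows, context_len=512, horizon=128):
    """Mean absolute error of a frozen forecaster on unseen series (no fine-tuning).

    windows: (num_series, context_len + horizon) slices from a held-out dataset.
    """
    model.eval()
    context = windows[:, :context_len]
    target = windows[:, context_len:context_len + horizon]
    forecast = model(context)[:, -1, :horizon]  # prediction emitted after the last input patch
    return (forecast - target).abs().mean().item()

# PatchedDecoderForecaster is the illustrative class defined above (untrained here).
model = PatchedDecoderForecaster()
windows = torch.randn(64, 512 + 128)  # stand-in for series from, e.g., Darts or Monash
print("zero-shot MAE:", zero_shot_mae(model, windows))
```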

Trained on a large corpus of synthetic and real-world data, TimesFM is a groundbreaking time series foundation model. Its architecture, which combines a patched-decoder attention mechanism with decoder-only training, contributes to its strong zero-shot forecasting performance. TimesFM's ability to outperform baselines across multiple datasets demonstrates the potential of large pre-trained models for time series forecasting and offers a promising avenue for reducing training data and computational requirements in this field.


Check out the Paper. All credit for this research goes to the researchers of this project. Also, don't forget to follow us on Twitter and Google News. Join our 36k+ ML SubReddit, 41k+ Facebook Community, Discord Channel, and LinkedIn Group.

If you like our work, you will love our newsletter.

Don't forget to join our Telegram Channel.


Pragati Jhunjhunwala is a consulting intern at MarktechPost. She is currently pursuing her B.Tech from the Indian Institute of Technology (IIT), Kharagpur. She is a tech enthusiast with a keen interest in software and data science applications, and is always reading about developments in different fields of AI and ML.


