Toto-2.0-313m

Toto (Time Series Optimized Transformer for Observability) is a family of time series foundation models for multivariate forecasting developed by Datadog. Toto 2.0 is the current generation, featuring u-ΞΌP-scaled transformers ranging from 4M to 2.5B parameters.


✨ Key Features

  • Zero-Shot Forecasting: Forecast without fine-tuning on your specific time series.
  • Multivariate Support: Efficiently process multiple variables using alternating time/variate attention.
  • Probabilistic Predictions: Generate point forecasts and uncertainty estimates via a quantile output head.
  • Decoder-Only Architecture: Support for variable prediction horizons and context lengths.
  • u-ΞΌP Scaling: Hyperparameters transfer stably across all model sizes.
Figure: overview of the Toto 2.0 architecture.
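The alternating attention mentioned above can be pictured as interleaved mixing along the two input axes. A toy sketch (an assumption-laden illustration, not the actual architecture: real attention layers are replaced here by a simple per-row mean, and only the axis flip between blocks is shown):

```python
def transpose(grid):
    # Swap the (n_variates, time_steps) axes of a nested list.
    return [list(col) for col in zip(*grid)]

def smooth(rows):
    # Hypothetical stand-in for an attention layer: mixes values along each row.
    return [[sum(r) / len(r)] * len(r) for r in rows]

# x laid out as (n_variates, time_steps)
x = [[1.0, 3.0], [5.0, 7.0]]
x = smooth(x)                         # "time attention": mixes within each variate
x = transpose(smooth(transpose(x)))   # "variate attention": mixes across variates
# x == [[4.0, 4.0], [4.0, 4.0]]
```

Alternating the axes lets each block stay a standard sequence operation while information still flows both along time and across variates.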

⚑ Quick Start

Inference code is available on GitHub.

Installation

pip install "toto-2 @ git+https://github.com/DataDog/toto.git#subdirectory=toto2"

Inference Example

import torch
from toto2 import Toto2Model

model = Toto2Model.from_pretrained("Datadog/Toto-2.0-313m")
model = model.to("cuda").eval()

# (batch, n_variates, time_steps)
target = torch.randn(1, 1, 512, device="cuda")
target_mask = torch.ones_like(target, dtype=torch.bool)
series_ids = torch.zeros(1, 1, dtype=torch.long, device="cuda")

# Returns quantiles of shape (9, batch, n_variates, horizon)
# Quantile levels: [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
quantiles = model.forecast(
    {"target": target, "target_mask": target_mask, "series_ids": series_ids},
    horizon=96,
)
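Since the returned quantiles cover levels 0.1 through 0.9, a median point forecast and an 80% central prediction interval fall out by indexing the quantile axis. A plain-Python sketch of that post-processing (it ignores the batch and variate axes and uses a toy list in place of the real output tensor; `summarize` is a hypothetical helper, not part of the toto2 API):

```python
# Quantile levels as documented for the forecast output above.
QUANTILE_LEVELS = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]

def summarize(quantiles):
    """quantiles: one per-level forecast list per quantile level.
    Returns (median, lower, upper) for an 80% central prediction interval."""
    median = quantiles[QUANTILE_LEVELS.index(0.5)]
    lower = quantiles[QUANTILE_LEVELS.index(0.1)]
    upper = quantiles[QUANTILE_LEVELS.index(0.9)]
    return median, lower, upper

# Toy stand-in: 9 quantile tracks over a 3-step horizon.
toy = [[q * 10 + t for t in range(3)] for q in QUANTILE_LEVELS]
median, lo, hi = summarize(toy)
# median == [5.0, 6.0, 7.0]; lo == [1.0, 2.0, 3.0]; hi == [9.0, 10.0, 11.0]
```

With the real output, the same indexing applies along the first tensor dimension, e.g. `quantiles[4]` for the median.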

For more examples, see the Quick Start notebook and GluonTS integration notebook.
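When batching series of unequal lengths into the `(batch, n_variates, time_steps)` layout, the `target_mask` marks which positions hold observed values. A minimal pure-Python sketch of one way to do this (`pad_batch` is a hypothetical helper, and left-padding is an assumption; check the notebooks above for the library's actual conventions):

```python
def pad_batch(series, context_len, pad_value=0.0):
    """series: list of per-variate value lists of varying length.
    Returns (target, mask), each shaped (n_variates, context_len);
    mask is True where a value was actually observed."""
    target, mask = [], []
    for s in series:
        n_pad = context_len - len(s)
        target.append([pad_value] * n_pad + list(s))
        mask.append([False] * n_pad + [True] * len(s))
    return target, mask

target, mask = pad_batch([[1.0, 2.0, 3.0], [4.0, 5.0]], context_len=4)
# target == [[0.0, 1.0, 2.0, 3.0], [0.0, 0.0, 4.0, 5.0]]
# mask   == [[False, True, True, True], [False, False, True, True]]
```

The nested lists map directly onto the `target` and `target_mask` tensors in the inference example, via `torch.tensor(target).unsqueeze(0)` for the batch axis.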


πŸ’Ύ Available Checkpoints

Checkpoint       Parameters
Toto-2.0-4m      4M
Toto-2.0-22m     22M
Toto-2.0-313m    313M
Toto-2.0-1B      1B
Toto-2.0-2.5B    2.5B

πŸ“– Citation

(citation coming soon)