# Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain
Datasets accompanying the paper "Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain".
## Quick Start

```shell
pip install datasets==2.12.0 fsspec==2023.5.0
```

The loading script also depends on `gluonts`, so install it as well (e.g. `pip install gluonts`).
### azure_vm_traces_2017

```python
from datasets import load_dataset

dataset = load_dataset('Salesforce/cloudops_tsf', 'azure_vm_traces_2017')
print(dataset)
```
```
DatasetDict({
    train_test: Dataset({
        features: ['start', 'target', 'item_id', 'feat_static_cat', 'feat_static_real', 'past_feat_dynamic_real'],
        num_rows: 17568
    })
    pretrain: Dataset({
        features: ['start', 'target', 'item_id', 'feat_static_cat', 'feat_static_real', 'past_feat_dynamic_real'],
        num_rows: 159472
    })
})
```
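Each row is one time series. Below is a minimal sketch of inspecting a single entry; the field names come from the printed features above, while the exact value types (e.g. a 1-d list of floats for `target`) are an assumption of this sketch based on the univariate config shown further down.

```python
# Peek at one series from the train_test split.
entry = dataset['train_test'][0]
print(entry['item_id'])          # series identifier
print(entry['start'])            # timestamp of the first observation
print(len(entry['target']))      # number of time steps in this series
print(entry['feat_static_cat'])  # static categorical features for this series
```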
### borg_cluster_data_2011

```python
dataset = load_dataset('Salesforce/cloudops_tsf', 'borg_cluster_data_2011')
print(dataset)
```
```
DatasetDict({
    train_test: Dataset({
        features: ['start', 'target', 'item_id', 'feat_static_cat', 'past_feat_dynamic_real'],
        num_rows: 11117
    })
    pretrain: Dataset({
        features: ['start', 'target', 'item_id', 'feat_static_cat', 'past_feat_dynamic_real'],
        num_rows: 143386
    })
})
```
### alibaba_cluster_trace_2018

```python
dataset = load_dataset('Salesforce/cloudops_tsf', 'alibaba_cluster_trace_2018')
print(dataset)
```
```
DatasetDict({
    train_test: Dataset({
        features: ['start', 'target', 'item_id', 'feat_static_cat', 'past_feat_dynamic_real'],
        num_rows: 6048
    })
    pretrain: Dataset({
        features: ['start', 'target', 'item_id', 'feat_static_cat', 'past_feat_dynamic_real'],
        num_rows: 58409
    })
})
```
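The feature names (`start`, `target`, `feat_static_cat`, ...) follow GluonTS conventions, and the loading script itself depends on `gluonts`. As a rough sketch (not an official API of this card), entries can therefore be wrapped into a GluonTS `ListDataset`:

```python
from datasets import load_dataset
from gluonts.dataset.common import ListDataset

dataset = load_dataset('Salesforce/cloudops_tsf', 'azure_vm_traces_2017')

# Wrap the rows in a GluonTS ListDataset; freq='5T' comes from the
# azure_vm_traces_2017 config shown in the next section.
gluonts_ds = ListDataset(
    [{'start': entry['start'], 'target': entry['target']} for entry in dataset['train_test']],
    freq='5T',
)
```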
## Dataset Config

```python
from datasets import load_dataset_builder

config = load_dataset_builder('Salesforce/cloudops_tsf', 'azure_vm_traces_2017').config
print(config)
```
```
CloudOpsTSFConfig(
    name='azure_vm_traces_2017',
    version=1.0.0,
    data_dir=None,
    data_files=None,
    description='',
    prediction_length=48,
    freq='5T',
    stride=48,
    univariate=True,
    multivariate=False,
    optional_fields=(
        'feat_static_cat',
        'feat_static_real',
        'past_feat_dynamic_real'
    ),
    rolling_evaluations=12,
    test_split_date=Period('2016-12-13 15:55', '5T'),
    _feat_static_cat_cardinalities={
        'pretrain': (
            ('vm_id', 177040),
            ('subscription_id', 5514),
            ('deployment_id', 15208),
            ('vm_category', 3)
        ),
        'train_test': (
            ('vm_id', 17568),
            ('subscription_id', 2713),
            ('deployment_id', 3255),
            ('vm_category', 3)
        )
    },
    target_dim=1,
    feat_static_real_dim=3,
    past_feat_dynamic_real_dim=2
)
```
`test_split_date` is provided to reproduce the train-test split used in the paper. It is the date/time `rolling_evaluations * prediction_length` time steps before the last time step in the dataset. Note that the pre-training split includes the test region, so it should likewise be filtered before use (see the sketch below).
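A minimal sketch of that filtering, assuming every series ends at the dataset's last time step (so the test region is exactly the final `rolling_evaluations * prediction_length` steps) and that the target is univariate:

```python
from datasets import load_dataset, load_dataset_builder

config = load_dataset_builder('Salesforce/cloudops_tsf', 'azure_vm_traces_2017').config
dataset = load_dataset('Salesforce/cloudops_tsf', 'azure_vm_traces_2017')

# Number of time steps covered by the rolling test region.
test_length = config.rolling_evaluations * config.prediction_length

def drop_test_region(entry):
    # Truncate the series so pre-training never sees the test region.
    # Assumes a univariate target ending at the dataset's last time step.
    entry['target'] = entry['target'][:-test_length]
    return entry

pretrain = dataset['pretrain'].map(drop_test_region)
```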
## Acknowledgements

The datasets were processed from the following original sources; please cite them if you use these datasets.
### Azure VM Traces 2017

- Eli Cortez, Anand Bonde, Alexandre Muzio, Mark Russinovich, Marcus Fontoura, and Ricardo Bianchini. Resource central: Understanding and predicting workloads for improved resource management in large cloud platforms. In Proceedings of the 26th Symposium on Operating Systems Principles, pp. 153–167, 2017.
- https://github.com/Azure/AzurePublicDataset
### Borg Cluster Data 2011
- John Wilkes. More Google cluster data. Google research blog, November 2011. Posted at http://googleresearch.blogspot.com/2011/11/more-google-cluster-data.html.
- https://github.com/google/cluster-data
### Alibaba Cluster Trace 2018
- Jing Guo, Zihao Chang, Sa Wang, Haiyang Ding, Yihui Feng, Liang Mao, and Yungang Bao. Who limits the resource efficiency of my datacenter: An analysis of alibaba datacenter traces. In Proceedings of the International Symposium on Quality of Service, pp. 1–10, 2019.
- https://github.com/alibaba/clusterdata
## Citation

```bibtex
@article{woo2023pushing,
  title={Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain},
  author={Woo, Gerald and Liu, Chenghao and Kumar, Akshat and Sahoo, Doyen},
  journal={arXiv preprint arXiv:2310.05063},
  year={2023}
}
```