World of Central Banks Model

Model Name: WCB Uncertainty Estimation Model

Model Type: Text Classification

Language: English

License: CC-BY-NC-SA 4.0

Base Model: RoBERTa

Dataset Used for Training: gtfintechlab/WCB_380k_sentences

Model Overview

The WCB Uncertainty Estimation Model is a fine-tuned RoBERTa-based model designed to classify text for Uncertainty Estimation. This label is annotated in the model_WCB_certainty_label dataset, which covers meeting minutes from all 25 central banks listed in the paper Words That Unite The World: A Unified Framework for Deciphering Global Central Bank Communications.

Intended Use

This model is intended for researchers and practitioners working on subjective text classification, particularly within financial and economic contexts. It is specifically designed to assess the Uncertainty Estimation label, aiding in the analysis of subjective content in central bank communications.

How to Use

To use this model, load it with the Hugging Face transformers library:

from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification, AutoConfig

# Load tokenizer, model, and configuration
tokenizer = AutoTokenizer.from_pretrained("gtfintechlab/model_WCB_certainty_label", do_lower_case=True, do_basic_tokenize=True)
model = AutoModelForSequenceClassification.from_pretrained("gtfintechlab/model_WCB_certainty_label", num_labels=2)
config = AutoConfig.from_pretrained("gtfintechlab/model_WCB_certainty_label")

# Initialize text classification pipeline
classifier = pipeline('text-classification', model=model, tokenizer=tokenizer, config=config, framework="pt")

# Classify sentences for Uncertainty Estimation
sentences = [
    "[Sentence 1]",
    "[Sentence 2]"
]
results = classifier(sentences, batch_size=128, truncation="only_first")

print(results)

In this script:

  • Tokenizer and Model Loading:
    Loads the pre-trained tokenizer and model from gtfintechlab/model_WCB_certainty_label.

  • Configuration:
    Loads model configuration parameters, including the number of labels.

  • Pipeline Initialization:
    Initializes a text classification pipeline with the model, tokenizer, and configuration.

  • Classification:
    Labels each sentence for Uncertainty Estimation.

Ensure your environment has the necessary dependencies installed (for example, transformers and torch).
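
As a quick check, the snippet below (an illustrative addition, not from the original card) verifies that the core dependencies are importable and prints their installed versions:

# Sanity check: confirm the required libraries are importable and print their versions
import torch
import transformers

print("transformers:", transformers.__version__)
print("torch:", torch.__version__)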

Label Interpretation

  • LABEL_0: Certain; indicates that the sentence presents information definitively.
  • LABEL_1: Uncertain; indicates that the sentence presents information with speculation, possibility, or doubt.
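
Continuing from the classifier defined above, the sketch below shows one way to map the raw pipeline labels to these interpretations; the example sentence is illustrative and not taken from the dataset:

# Map raw pipeline labels to the human-readable interpretation above
label_names = {"LABEL_0": "Certain", "LABEL_1": "Uncertain"}

example = ["Inflation may rise further if supply disruptions persist."]
for result in classifier(example, truncation="only_first"):
    # Each pipeline result is a dict with 'label' and 'score' keys
    print(f"{label_names[result['label']]} (score={result['score']:.3f})")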

Training Data

The model was trained on the model_WCB_certainty_label dataset, which comprises annotated sentences from the meeting minutes of 25 central banks, labeled for Uncertainty Estimation. The dataset includes training, validation, and test splits.
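
To inspect the data, the dataset can be loaded with the Hugging Face datasets library. The sketch below is only illustrative: it assumes the standard "train"/"validation"/"test" split names and that no configuration name is required; consult the dataset card if loading fails.

from datasets import load_dataset

# Load the dataset referenced in this card; split names are assumed to follow
# the standard "train"/"validation"/"test" convention
dataset = load_dataset("gtfintechlab/WCB_380k_sentences")
print(dataset)              # lists available splits and their sizes
print(dataset["train"][0])  # inspect one annotated example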

Citation

If you use this model in your research, please cite the associated paper:

@article{WCBShahSukhaniPardawala,
  title={Words That Unite The World: A Unified Framework for Deciphering Global Central Bank Communications},
  author={Shah, Agam and Sukhani, Siddhant and Pardawala, Huzaifa and others},
  year={2025}
}

For more details, refer to the model_WCB_certainty_label dataset documentation.

Contact

For any issues or questions related to model_WCB_certainty_label, please contact:

  • Huzaifa Pardawala: huzaifahp7[at]gatech[dot]edu

  • Siddhant Sukhani: ssukhani3[at]gatech[dot]edu

  • Agam Shah: ashah482[at]gatech[dot]edu
