Text Classification
Transformers
Safetensors
English
roberta

World of Central Banks Model

Model Name: Bank of Mexico (Banco de México) Stance Detection Model

Model Type: Text Classification

Language: English

License: CC-BY-NC-SA 4.0

Base Model: roberta-base

Dataset Used for Training: gtfintechlab/bank_of_mexico

Model Overview

Bank of Mexico (Banco de México) Stance Detection Model is a fine-tuned roberta-base model designed to classify sentences by their Stance Detection label. This label is annotated in the bank_of_mexico dataset, which is built from Bank of Mexico (Banco de México) meeting minutes.

Intended Use

This model is intended for researchers and practitioners working on subjective text classification within financial and economic contexts, particularly for the Bank of Mexico (Banco de México). It is specifically designed to assess the Stance Detection label, aiding in the analysis of subjective content in central bank communications.

How to Use

To utilize this model, load it using the Hugging Face transformers library:

from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification, AutoConfig

# Load tokenizer, model, and configuration
tokenizer = AutoTokenizer.from_pretrained("gtfintechlab/model_bank_of_mexico_stance_label", do_lower_case=True, do_basic_tokenize=True)
model = AutoModelForSequenceClassification.from_pretrained("gtfintechlab/model_bank_of_mexico_stance_label", num_labels=4)
config = AutoConfig.from_pretrained("gtfintechlab/model_bank_of_mexico_stance_label")

# Initialize text classification pipeline
classifier = pipeline('text-classification', model=model, tokenizer=tokenizer, config=config, framework="pt")

# Classify Stance Detection
sentences = [
    "[Sentence 1]",
    "[Sentence 2]"
]
results = classifier(sentences, batch_size=128, truncation="only_first")

print(results)

In this script:

  • Tokenizer and Model Loading:
    Loads the pre-trained tokenizer and model from gtfintechlab/model_bank_of_mexico_stance_label.

  • Configuration:
    Loads model configuration parameters, including the number of labels.

  • Pipeline Initialization:
    Initializes a text classification pipeline with the model, tokenizer, and configuration.

  • Classification:
    Labels sentences based on Stance Detection.

Ensure your environment has the necessary dependencies installed.
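
For example, assuming the PyTorch backend, the core dependencies can be installed with:

pip install transformers torch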

Label Interpretation

  • LABEL_0: Hawkish; the sentence supports contractionary monetary policy.
  • LABEL_1: Dovish; the sentence supports expansionary monetary policy.
  • LABEL_2: Neutral; the sentence contains neither hawkish nor dovish sentiment, or contains both hawkish and dovish sentiment.
  • LABEL_3: Irrelevant; the sentence is not related to monetary policy.
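
As an illustrative sketch (not part of the official usage), the pipeline output from the How to Use snippet can be mapped to readable stance names, assuming the default LABEL_* identifiers follow the order listed above:

# Hypothetical post-processing; `classifier` and `sentences` come from the How to Use snippet.
label_map = {
    "LABEL_0": "Hawkish",
    "LABEL_1": "Dovish",
    "LABEL_2": "Neutral",
    "LABEL_3": "Irrelevant",
}

results = classifier(sentences, batch_size=128, truncation="only_first")
for sentence, result in zip(sentences, results):
    stance = label_map.get(result["label"], result["label"])
    print(f"{stance} ({result['score']:.3f}): {sentence}")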

Training Data

The model was trained on the bank_of_mexico dataset, comprising annotated sentences from the Bank of Mexico (Banco de México) meeting minutes, labeled by Stance Detection. The dataset includes training, validation, and test splits.
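
As a minimal sketch, the training data can be inspected with the Hugging Face datasets library; the split name below is an assumption, and a configuration name may be required depending on how the dataset repository is organized (see the dataset card):

from datasets import load_dataset

# Load the annotated Bank of Mexico sentences (a configuration name may be required)
dataset = load_dataset("gtfintechlab/bank_of_mexico")

print(dataset)               # available splits and their sizes
print(dataset["train"][0])   # one annotated example, assuming a "train" split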

Citation

If you use this model in your research, please cite the paper associated with the bank_of_mexico dataset:

@article{WCBShahSukhaniPardawala,
  title={Words That Unite The World: A Unified Framework for Deciphering Global Central Bank Communications},
  author={Shah, Agam and Sukhani, Siddhant and Pardawala, Huzaifa and others},
  year={2025}
}

For more details, refer to the bank_of_mexico dataset documentation.

Contact

For any issues or questions related to this model or the bank_of_mexico dataset, please contact:

  • Huzaifa Pardawala: huzaifahp7[at]gatech[dot]edu

  • Siddhant Sukhani: ssukhani3[at]gatech[dot]edu

  • Agam Shah: ashah482[at]gatech[dot]edu
