
World of Central Banks Model

Model Name: Bank of Russia Temporal Classification Model

Model Type: Text Classification

Language: English

License: CC-BY-NC-SA 4.0

Base Model: bert-base-uncased

Dataset Used for Training: gtfintechlab/central_bank_of_the_russian_federation

Model Overview

The Bank of Russia Temporal Classification Model is a fine-tuned bert-base-uncased model designed to classify sentences by their temporal orientation (the Temporal Classification label). This label is annotated in the central_bank_of_the_russian_federation dataset, which comprises meeting minutes of the Bank of Russia.

Intended Use

This model is intended for researchers and practitioners working on text classification of Bank of Russia communications, particularly within financial and economic contexts. It is specifically designed to assess the Temporal Classification label, distinguishing forward-looking statements from discussion of past or current events in financial and economic communications.

How to Use

To utilize this model, load it using the Hugging Face transformers library:

from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification, AutoConfig

# Load tokenizer, model, and configuration
tokenizer = AutoTokenizer.from_pretrained("gtfintechlab/model_central_bank_of_the_russian_federation_time_label", do_lower_case=True, do_basic_tokenize=True)
model = AutoModelForSequenceClassification.from_pretrained("gtfintechlab/model_central_bank_of_the_russian_federation_time_label", num_labels=2)
config = AutoConfig.from_pretrained("gtfintechlab/model_central_bank_of_the_russian_federation_time_label")

# Initialize text classification pipeline
classifier = pipeline('text-classification', model=model, tokenizer=tokenizer, config=config, framework="pt")

# Classify Temporal Classification
sentences = [
    "[Sentence 1]",
    "[Sentence 2]"
]
results = classifier(sentences, batch_size=128, truncation="only_first")

print(results)

In this script:

  • Tokenizer and Model Loading:
    Loads the pre-trained tokenizer and model from gtfintechlab/model_central_bank_of_the_russian_federation_time_label.

  • Configuration:
    Loads model configuration parameters, including the number of labels.

  • Pipeline Initialization:
    Initializes a text classification pipeline with the model, tokenizer, and configuration.

  • Classification:
    Labels sentences based on Temporal Classification.

Ensure your environment has the necessary dependencies installed (e.g., pip install transformers torch).

Label Interpretation

  • LABEL_0: Forward-looking; the sentence discusses future economic events or decisions.
  • LABEL_1: Not forward-looking; the sentence discusses past or current economic events or decisions.
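In post-processing, the raw label IDs above can be mapped to readable tags. A minimal sketch (the `interpret` helper and the mock output are illustrative, not part of the model or library):

```python
# Map the pipeline's raw label IDs to human-readable temporal tags.
LABEL_MEANINGS = {
    "LABEL_0": "forward-looking",
    "LABEL_1": "not forward-looking",
}

def interpret(results):
    """Attach a readable meaning to each pipeline result dict."""
    return [{**r, "meaning": LABEL_MEANINGS[r["label"]]} for r in results]

# Example with a mocked pipeline output:
mock = [{"label": "LABEL_0", "score": 0.97}]
print(interpret(mock))
# → [{'label': 'LABEL_0', 'score': 0.97, 'meaning': 'forward-looking'}]
```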

Training Data

The model was trained on the central_bank_of_the_russian_federation dataset, comprising annotated sentences from the Bank of Russia meeting minutes, labeled by Temporal Classification. The dataset includes training, validation, and test splits.

Citation

If you use this model in your research, please cite the associated paper:

@article{WCBShahSukhaniPardawala,
  title={Words That Unite The World: A Unified Framework for Deciphering Global Central Bank Communications},
  author={Shah, Agam and Sukhani, Siddhant and Pardawala, Huzaifa and others},
  year={2025}
}

For more details, refer to the gtfintechlab/central_bank_of_the_russian_federation dataset documentation.

Contact

For any issues or questions related to this model or the central_bank_of_the_russian_federation dataset, please contact:

  • Huzaifa Pardawala: huzaifahp7[at]gatech[dot]edu

  • Siddhant Sukhani: ssukhani3[at]gatech[dot]edu

  • Agam Shah: ashah482[at]gatech[dot]edu

Model Details

  • Model size: 109M parameters
  • Tensor type: F32 (Safetensors)