---
license: cc-by-4.0
datasets:
- gtfintechlab/WCB
language:
- en
metrics:
- accuracy
- f1
- precision
- recall
base_model:
- roberta-base
pipeline_tag: text-classification
library_name: transformers
---
# World of Central Banks Model
**Model Name:** WCB Temporal Classification Model
**Model Type:** Text Classification
**Language:** English
**License:** [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
**Base Model:** [RoBERTa](https://huggingface.co/FacebookAI/roberta-base)
**Dataset Used for Training:** [gtfintechlab/all_annotated_sentences_25000](https://huggingface.co/datasets/gtfintechlab/all_annotated_sentences_25000)
## Model Overview
The WCB Temporal Classification Model is a fine-tuned RoBERTa-based model that classifies text on the **Temporal Classification** label. This label is annotated in the model_WCB_time_label dataset, which covers meeting minutes from all 25 central banks listed in the paper _Words That Unite The World: A Unified Framework for Deciphering Global Central Bank Communications_.
## Intended Use
This model is intended for researchers and practitioners working on subjective text classification, particularly within financial and economic contexts. It is specifically designed to assess the **Temporal Classification** label, i.e., whether a sentence in a central bank communication is forward-looking or not.
## How to Use
To utilize this model, load it using the Hugging Face `transformers` library:
```python
from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification, AutoConfig
# Load tokenizer, model, and configuration
tokenizer = AutoTokenizer.from_pretrained("gtfintechlab/model_WCB_time_label", do_lower_case=True, do_basic_tokenize=True)
model = AutoModelForSequenceClassification.from_pretrained("gtfintechlab/model_WCB_time_label", num_labels=2)
config = AutoConfig.from_pretrained("gtfintechlab/model_WCB_time_label")
# Initialize text classification pipeline
classifier = pipeline('text-classification', model=model, tokenizer=tokenizer, config=config, framework="pt")
# Classify Temporal Classification
sentences = [
"[Sentence 1]",
"[Sentence 2]"
]
results = classifier(sentences, batch_size=128, truncation="only_first")
print(results)
```
In this script:
- **Tokenizer and Model Loading:**
Loads the pre-trained tokenizer and model from `gtfintechlab/model_WCB_time_label`.
- **Configuration:**
Loads model configuration parameters, including the number of labels.
- **Pipeline Initialization:**
Initializes a text classification pipeline with the model, tokenizer, and configuration.
- **Classification:**
Labels sentences based on **Temporal Classification**.
Ensure your environment has the necessary dependencies installed (e.g., `pip install transformers torch`).
## Label Interpretation
- **LABEL_0:** Forward-looking; the sentence discusses future economic events or decisions.
- **LABEL_1:** Not forward-looking; the sentence discusses past or current economic events or decisions.
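The pipeline returns the generic label names `LABEL_0` and `LABEL_1`. A small post-processing helper (hypothetical, not part of the released model) can map them to the meanings listed above:

```python
# Hypothetical helper: map the pipeline's generic label names to the
# human-readable meanings documented in this model card.
LABEL_MEANINGS = {
    "LABEL_0": "forward-looking",
    "LABEL_1": "not forward-looking",
}

def interpret(results):
    """Attach a human-readable `meaning` field to each pipeline prediction."""
    return [{**r, "meaning": LABEL_MEANINGS[r["label"]]} for r in results]

# Example with mock pipeline output (scores are illustrative):
mock = [{"label": "LABEL_0", "score": 0.91}, {"label": "LABEL_1", "score": 0.87}]
print(interpret(mock))
```

This keeps the raw pipeline output intact while adding a field that is easier to read in downstream analysis.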
## Training Data
The model was trained on the model_WCB_time_label dataset, comprising annotated sentences from 25 central banks, labeled by Temporal Classification. The dataset includes training, validation, and test splits.
## Citation
If you use this model in your research, please cite the associated paper:
```bibtex
@article{WCBShahSukhaniPardawala,
  title={Words That Unite The World: A Unified Framework for Deciphering Global Central Bank Communications},
  author={Shah, Agam and Sukhani, Siddhant and Pardawala, Huzaifa and others},
  year={2025}
}
```
For more details, refer to the [model_WCB_time_label dataset documentation](https://huggingface.co/gtfintechlab/model_WCB_time_label).
## Contact
For any issues or questions related to model_WCB_time_label, please contact:
- Huzaifa Pardawala: huzaifahp7[at]gatech[dot]edu
- Siddhant Sukhani: ssukhani3[at]gatech[dot]edu
- Agam Shah: ashah482[at]gatech[dot]edu