---
library_name: transformers
license: apache-2.0
datasets:
- gtfintechlab/fomc_communication
- Sorour/fomc
language:
- en
metrics:
- accuracy
base_model:
- distilbert/distilbert-base-uncased
pipeline_tag: text-classification
---
# Fine-Tuned Transformer for FOMC Sentiment Classification
## Model Details

### Model Description

This model is a fine-tuned version of DistilBERT for FOMC meeting sentiment classification. It predicts whether a sentence from a U.S. Federal Open Market Committee (FOMC) statement is Dovish, Hawkish, or Neutral.
- **Developed by:** Ao Chen
- **Model type:** Encoder-only Transformer (DistilBERT)
- **Language(s) (NLP):** en
- **License:** Apache 2.0
- **Finetuned from model:** distilbert-base-uncased
## Uses

### Direct Use

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load the fine-tuned classifier and its tokenizer from the Hub
model_name = "achen0525/DistilBERT_FOMC_Classifier"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

text = "The Committee decided to maintain the target range for the federal funds rate."
inputs = tokenizer(text, return_tensors="pt")

# Inference only: disable gradient tracking
with torch.no_grad():
    outputs = model(**inputs)
pred = torch.argmax(outputs.logits, dim=-1)

labels = ["Dovish", "Hawkish", "Neutral"]
print(labels[pred.item()])
```
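If you also want a confidence score per class rather than just the top label, apply a softmax to the logits. A minimal sketch below uses hypothetical logits in place of a real `outputs.logits` tensor, and assumes the label order matches the model's `id2label` mapping:

```python
import torch

# Hypothetical logits for one sentence; in practice take them from
# outputs.logits as in the snippet above.
logits = torch.tensor([[2.0, 0.5, 0.1]])

# Softmax normalizes the logits into a probability distribution
probs = torch.softmax(logits, dim=-1)

labels = ["Dovish", "Hawkish", "Neutral"]
scores = {label: probs[0, i].item() for i, label in enumerate(labels)}
print(scores)
```

For a production setup, prefer reading the label names from `model.config.id2label` instead of hardcoding the list, so the mapping always matches the checkpoint.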
## Model Card Contact

For questions or feedback, reach out to: [email protected]