Model Card: Climate-TwitterBERT-step2

Overview:

Using Climate-TwitterBERT-step-1 (https://huggingface.co/Climate-TwitterBERT/Climate-TwitterBERT-step1) as the starting model, we fine-tuned it on the downstream task of classifying a given climate tweet as a hard, soft, or promotion climate tweet.

The model returns a label and a probability score, indicating whether a given tweet is a hard (label = 0), soft (label = 1), or promotion (label = 2) climate tweet.
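
For reference, the fine-tuning step described above can be initialized roughly as follows. This is a minimal sketch assuming a standard Hugging Face sequence-classification setup with three labels; the actual training data, loop, and hyperparameters are not documented in this card.

from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Start from the step-1 checkpoint and attach a fresh 3-class classification head.
base_checkpoint = 'Climate-TwitterBERT/Climate-TwitterBERT-step1'
tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    base_checkpoint,
    num_labels=3,  # 0 = hard, 1 = soft, 2 = promotion
)
# Fine-tuning itself (e.g., with transformers.Trainer on the labeled tweets)
# is omitted here because the training configuration is not part of this card.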

Performance metrics:

Based on the test set, the model achieves the following results:

• Loss: 0.2613
• F1-weighted: 0.8008
• F1: 0.7798
• Accuracy: 0.8050
• Precision: 0.8034
• Recall: 0.8050
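
How these metrics were computed is not stated in this card; as an illustration only, weighted-average scores of this kind can be obtained from predictions with scikit-learn as sketched below (the evaluation script itself is an assumption, and the label lists are placeholders).

from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Placeholder predictions purely for illustration; 0 = hard, 1 = soft, 2 = promotion.
y_true = [0, 1, 2, 1, 0, 2]
y_pred = [0, 1, 2, 0, 0, 2]

print('Accuracy   :', accuracy_score(y_true, y_pred))
print('F1-weighted:', f1_score(y_true, y_pred, average='weighted'))
print('Precision  :', precision_score(y_true, y_pred, average='weighted'))
print('Recall     :', recall_score(y_true, y_pred, average='weighted'))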

Example usage:

from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification

task_name = 'text-classification'
model_name = 'Climate-TwitterBERT/Climate-TwitterBERT-step2'

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

pipe = pipeline(task=task_name, model=model, tokenizer=tokenizer)

tweet = "We are committed to significantly cutting our carbon emissions by 30% before 2030."
result = pipe(tweet)
# The 'result' variable contains the classification output: 0 = hard climate tweet, 1 = soft climate tweet, 2 = promotion tweet.
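
The snippet below is a minimal sketch of reading the result. A text-classification pipeline returns a list with one dict per input containing a 'label' and a 'score'; the exact label strings depend on the model's id2label configuration, so the LABEL_<id> parsing here is an assumption.

# Map numeric label ids back to the category names described in this card.
label_names = {0: 'hard', 1: 'soft', 2: 'promotion'}

pred = result[0]  # e.g. {'label': 'LABEL_0', 'score': 0.97} (example values only)
label_id = int(pred['label'].split('_')[-1])
print(f"{label_names[label_id]} climate tweet (score = {pred['score']:.4f})")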

Citation:

@article{fzz2025climatetwitter,
  title={Responding to Climate Change Crisis: Firms' Tradeoffs},
  author={Fritsch, Felix and Zhang, Qi and Zheng, Xiang},
  journal={Journal of Accounting Research},
  year={2025},
  doi={10.1111/1475-679X.12625}
}

Fritsch, F., Zhang, Q., & Zheng, X. (2025). Responding to Climate Change Crisis: Firms' Tradeoffs. Journal of Accounting Research. https://doi.org/10.1111/1475-679X.12625

Framework versions:

• Transformers 4.28.1
• Pytorch 2.0.1+cu118
• Datasets 2.14.1
• Tokenizers 0.13.3
