DLA_LLMSANALYSIS Collection
This collection groups four language models fine-tuned on the IMDb dataset for binary sentiment analysis: BERT, BART, GPT-Neo, and an ensemble model.
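As a rough illustration of how such an ensemble can combine the three classifiers, the sketch below averages per-class probabilities (soft voting). This is not necessarily the strategy used in this collection, and the BERT and GPT-Neo repo ids, as well as the NEGATIVE/POSITIVE label set, are assumptions based on the collection's naming scheme.

```python
from transformers import pipeline

# Illustrative soft-voting ensemble: average each label's probability across
# the individual classifiers. Two of the repo ids below are assumptions.
repo_ids = [
    "wakaflocka17/bert-imdb-finetuned",    # assumed repo id
    "wakaflocka17/bart-imdb-finetuned",
    "wakaflocka17/gptneo-imdb-finetuned",  # assumed repo id
]
pipes = [pipeline("text-classification", model=r, top_k=None) for r in repo_ids]

def ensemble_predict(text):
    """Return the average score per label across all models (soft voting)."""
    totals = {}
    for pipe in pipes:
        for entry in pipe(text, truncation=True):
            totals[entry["label"]] = totals.get(entry["label"], 0.0) + entry["score"]
    return {label: score / len(pipes) for label, score in totals.items()}
```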
The wakaflocka17/bart-imdb-finetuned model is a fine-tuned version of facebook/bart-base for sentiment classification on the IMDb dataset. Trained on movie reviews, it distinguishes positive from negative sentiment with roughly 88% accuracy (see the metrics below). Below you will find its evaluation metrics, training parameters, and a practical example of its use in Google Colab.
| Metric    | Value   |
|-----------|---------|
| Accuracy  | 0.87968 |
| Precision | 0.8839  |
| Recall    | 0.8742  |
| F1-score  | 0.8790  |
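These are standard binary-classification metrics; for reference, they can be computed roughly as in the sketch below, which uses scikit-learn (the project's own evaluation code may differ):

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(y_true, y_pred):
    """Accuracy/precision/recall/F1 for binary labels (0 = NEGATIVE, 1 = POSITIVE)."""
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="binary"
    )
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }

# Toy example; in practice y_pred comes from running the model on the IMDb test split
print(compute_metrics([1, 0, 1, 1], [1, 0, 0, 1]))
```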
| Parameter          | Value                        |
|--------------------|------------------------------|
| Base model         | facebook/bart-base           |
| Pretrained repo    | facebook/bart-base           |
| Fine-tuned repo    | models/bart_base             |
| Downloaded repo    | models/downloaded/bart_base  |
| Epochs             | 3                            |
| Batch size (train) | 8                            |
| Batch size (eval)  | 16                           |
| Number of labels   | 2                            |
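For orientation, a fine-tuning run with these hyperparameters could look roughly like the sketch below, using the stock `imdb` dataset and the Trainer API. Everything beyond the parameters in the table (tokenization details, evaluation cadence) is an illustrative assumption, not the project's actual training script.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Tokenize the standard IMDb split from the Hugging Face Hub
dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True), batched=True
)

# Base model with a 2-label classification head, as in the table above
model = AutoModelForSequenceClassification.from_pretrained(
    "facebook/bart-base", num_labels=2
)

# Hyperparameters from the table; output_dir mirrors the repo layout
args = TrainingArguments(
    output_dir="models/bart_base",
    num_train_epochs=3,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```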
```python
# Install/upgrade the required libraries
!pip install --upgrade transformers huggingface_hub
```

```python
from huggingface_hub import login

# Authenticate with your Hugging Face access token
login(token="hf_yourhftoken")
```

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline

repo_id = "wakaflocka17/bart-imdb-finetuned"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

# Override the default LABEL_0/LABEL_1 names with human-readable labels
model.config.id2label = {0: 'NEGATIVE', 1: 'POSITIVE'}
model.config.label2id = {'NEGATIVE': 0, 'POSITIVE': 1}

# Create the classification pipeline; top_k=None returns the scores for all
# labels (it replaces the deprecated return_all_scores=True)
pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, top_k=None)

testo = "This movie was absolutely fantastic: wonderful performances and a gripping story!"
risultati = pipe(testo)
print(risultati)
# Example output:
# [{'label': 'POSITIVE', 'score': 0.95}, {'label': 'NEGATIVE', 'score': 0.05}]
```
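IMDb reviews are often long; if an input exceeds the model's maximum sequence length, you can pass `truncation=True` when calling the pipeline (extra call-time arguments are forwarded to the tokenizer):

```python
# Long reviews can exceed the model's maximum input length;
# truncating tokenizer-side avoids length errors
long_review = "An unforgettable film with stunning photography. " * 200
print(pipe(long_review, truncation=True))
```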
## How to cite

If you use this model in your work, you can cite it as:
```bibtex
@misc{Sentiment-Project,
  author       = {Francesco Congiu},
  title        = {Sentiment Analysis with Pretrained, Fine-tuned and Ensemble Transformer Models},
  howpublished = {\url{https://github.com/wakaflocka17/DLA_LLMSANALYSIS}},
  year         = {2025}
}
```
## Reference Repository
The full project structure and example scripts can be found at: https://github.com/wakaflocka17/DLA_LLMSANALYSIS/tree/main