πŸ“ Model Card: bart-imdb-finetuned

πŸ” Introduction

The wakaflocka17/bart-imdb-finetuned model is a fine-tuned version of facebook/bart-base for binary sentiment classification on the IMDb dataset. Trained on movie reviews, it distinguishes positive from negative sentiment with roughly 88% accuracy. Below you will find its evaluation metrics, training parameters, and a practical example of its use in Google Colab.

πŸ“Š Evaluation Metrics

| Metric    | Value   |
|-----------|---------|
| Accuracy  | 0.87968 |
| Precision | 0.8839  |
| Recall    | 0.8742  |
| F1-score  | 0.8790  |
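For reference, the four metrics above can be computed from a classifier's predictions as follows. This is a minimal pure-Python sketch with toy labels, not the actual IMDb evaluation script; `binary_metrics` and the label lists are illustrative.

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, F1 for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Toy example: 5 reviews, one false negative
print(binary_metrics([1, 0, 1, 1, 0], [1, 0, 1, 0, 0]))
```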

βš™οΈ Training Parameters

| Parameter         | Value                          |
|-------------------|--------------------------------|
| Base model        | facebook/bart-base             |
| Repo pretrained   | facebook/bart-base             |
| Repo finetuned    | models/bart_base               |
| Repo downloaded   | models/downloaded/bart_base    |
| Epochs            | 3                              |
| Batch size (train)| 8                              |
| Batch size (eval) | 16                             |
| Labels number     | 2                              |
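The hyperparameters above map onto a standard Hugging Face `Trainer` setup. The following is a hedged sketch of what such a configuration could look like, not the project's actual training script; the dataset placeholders are hypothetical.

```python
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

args = TrainingArguments(
    output_dir="models/bart_base",       # "Repo finetuned" above
    num_train_epochs=3,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
)

model = AutoModelForSequenceClassification.from_pretrained(
    "facebook/bart-base", num_labels=2
)

# train_dataset / eval_dataset are placeholders for tokenized IMDb splits
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()
```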

πŸš€ Example of use in Colab

Installing dependencies

!pip install --upgrade transformers huggingface_hub

(Optional) Authentication for private models.

from huggingface_hub import login
login(token="hf_yourhftoken")

Loading tokenizer and model

from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline

repo_id   = "wakaflocka17/bart-imdb-finetuned"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model     = AutoModelForSequenceClassification.from_pretrained(repo_id)

# Override default labels
model.config.id2label = {0: 'NEGATIVE', 1: 'POSITIVE'}
model.config.label2id = {'NEGATIVE': 0, 'POSITIVE': 1}

# Create the classification pipeline (top_k=None returns scores for all
# labels; it replaces the deprecated return_all_scores=True)
pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, top_k=None)

Inference on a text example

text    = "This movie was absolutely fantasticβ€”wonderful performances and a gripping story!"
results = pipe(text)
print(results)
# Example output:
# [{'label': 'POSITIVE', 'score': 0.95}, {'label': 'NEGATIVE', 'score': 0.05}]
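If you only need the predicted class rather than the full score distribution, you can reduce the all-scores output to the highest-scoring label. A small sketch, using the example output above as a stand-in for a live pipeline call:

```python
# Example pipeline output (all labels with scores), as shown above
scores = [
    {"label": "POSITIVE", "score": 0.95},
    {"label": "NEGATIVE", "score": 0.05},
]

# Pick the label with the highest score
best = max(scores, key=lambda s: s["score"])
print(best["label"])  # POSITIVE
```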

πŸ“– How to cite

If you use this model in your work, you can cite it as:

@misc{Sentiment-Project,
  author       = {Francesco Congiu},
  title        = {Sentiment Analysis with Pretrained, Fine-tuned and Ensemble Transformer Models},
  howpublished = {\url{https://github.com/wakaflocka17/DLA_LLMSANALYSIS}},
  year         = {2025}
}

πŸ”— Reference Repository

All the file structure and script examples can be found at: https://github.com/wakaflocka17/DLA_LLMSANALYSIS/tree/main
