
---
library_name: transformers
license: mit
tags:
- roberta
- text-classification
- political-bias
- transformers
- nlp
- fine-tuned
datasets:
- pranjali97/Bias-detection-combined
- peekayitachi/allsides
- custom-political-bias-data
---

# 🧠 RoBERTa Political Bias Classifier

This is a fine-tuned RoBERTa model for political bias detection in text. It classifies a sentence or article snippet into one of the following three categories:

- 🔴 Right
- 🟡 Center
- 🔵 Left

Trained on a combination of public and custom-labeled datasets, the model classifies political leaning in both Indian and general English news and opinion text.


## 📥 Example Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load the fine-tuned classifier and its tokenizer from the Hub
model = AutoModelForSequenceClassification.from_pretrained("peekayitachi/roberta-political-bias")
tokenizer = AutoTokenizer.from_pretrained("peekayitachi/roberta-political-bias")

text = "Our nation's sovereignty must be protected, and we should prioritize national interests."
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)

# Run inference without tracking gradients and take the highest-scoring class
with torch.no_grad():
    logits = model(**inputs).logits
    predicted = torch.argmax(logits, dim=1).item()

label_map = {0: "Left", 1: "Center", 2: "Right"}
print("Predicted Bias:", label_map[predicted])
```




## Model Details

### Model Description

- **Base model:** roberta-base
- **Architecture:** Transformer encoder with a sequence-classification head (see the initialization sketch below)
- **Parameters:** ~125M (F32 weights, stored as Safetensors)
- **Fine-tuned on:** multi-source labeled data (~38k samples)
- **Languages:** English (Indian and global political context)
- **License:** MIT
- **Author:** peekayitachi (Pranav)
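The head configuration corresponds to loading `roberta-base` with a three-way sequence-classification head. A sketch of the equivalent (pre-fine-tuning) initialization, with the label order assumed from the usage example above:

```python
from transformers import AutoModelForSequenceClassification

# roberta-base encoder plus a freshly initialized 3-class classification head
base = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base",
    num_labels=3,
    id2label={0: "Left", 1: "Center", 2: "Right"},   # assumed label order
    label2id={"Left": 0, "Center": 1, "Right": 2},
)

print(sum(p.numel() for p in base.parameters()))  # roughly 125M parameters
```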

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

This model reflects the labeling choices and distribution of the training data. It may:

Overfit to news-style text and miss subtle bias in blogs/social media

Be less accurate on texts that are neutral in tone or multi-opinionated

Reflect U.S./Indian-centric definitions of political categories



### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
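One practical mitigation for the limitations above is to expose the softmax probabilities and abstain (or route to human review) when the top class is not confident. A minimal sketch, using a hypothetical threshold of 0.6:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("peekayitachi/roberta-political-bias")
tokenizer = AutoTokenizer.from_pretrained("peekayitachi/roberta-political-bias")
label_map = {0: "Left", 1: "Center", 2: "Right"}

def classify_with_abstain(text, threshold=0.6):
    """Return (label, score); abstain when the model is not confident enough."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=1)[0]
    score, idx = probs.max(dim=0)
    if score.item() < threshold:
        return "Uncertain", score.item()
    return label_map[idx.item()], score.item()

print(classify_with_abstain("The committee published its quarterly budget report."))
```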

## How to Get Started with the Model

See the 📥 Example Usage section above for a minimal inference snippet.

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]


#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary



## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]



## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]