How to use oeg/RoBERTa-Repository-Proposal with the Transformers library:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="oeg/RoBERTa-Repository-Proposal")

# Or load the model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("oeg/RoBERTa-Repository-Proposal")
model = AutoModelForSequenceClassification.from_pretrained("oeg/RoBERTa-Repository-Proposal")
```
RoBERTa base Fine-Tuned for Proposal Sentence Classification
Overview
- Language: English
- Model Name: oeg/RoBERTa-Repository-Proposal
Description
This model is a fine-tuned RoBERTa-base model that classifies sentences into two classes: proposal and non-proposal. Proposal sentences are those that introduce or announce a software or data repository, and the model was trained on such sentences to recognize them accurately.
How to use
To use this model in Python:
```python
from transformers import RobertaForSequenceClassification, RobertaTokenizer
import torch

# Load the fine-tuned model and its tokenizer from the model repository
tokenizer = RobertaTokenizer.from_pretrained("oeg/RoBERTa-Repository-Proposal")
model = RobertaForSequenceClassification.from_pretrained("oeg/RoBERTa-Repository-Proposal")

sentence = "Your input sentence here."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Class probabilities over the two labels (proposal vs. non-proposal)
probabilities = torch.nn.functional.softmax(outputs.logits, dim=1)
```
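The softmax step turns the model's two raw logits into probabilities that sum to 1, and the predicted class is the index of the larger one. A minimal sketch with made-up logits (placeholder values, not actual model output; which index corresponds to "proposal" is defined by the model's config):

```python
import math

# Hypothetical logits for one sentence (placeholder values, not model output)
logits = [-1.2, 2.3]

# Softmax: exponentiate each logit and normalize so the scores sum to 1
exps = [math.exp(x) for x in logits]
total = sum(exps)
probabilities = [e / total for e in exps]

# Predicted class = index of the highest probability
predicted_class = max(range(len(probabilities)), key=probabilities.__getitem__)
```

This is the same computation `torch.nn.functional.softmax` performs on the logits tensor above, written out elementwise for a single sentence.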