---
tags:
- autotrain
- text-classification
- cognitive distortion
- psychology
language:
- unk
widget:
- text: I love AutoTrain
datasets:
- halilbabacan/autotrain-data-cognitive_distortion_gpt_roberta
co2_eq_emissions:
  emissions: 1.5120249278420834
---
The accompanying article is currently under publication. For correspondence, please e-mail [email protected].
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 73173139143
- CO2 Emissions (in grams): 1.5120
## Validation Metrics
- Loss: 0.000
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/halilbabacan/autotrain-cognitive_distortion_gpt_roberta-73173139143
```
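The same Inference API call can be made from Python with the `requests` library. This is a minimal sketch mirroring the cURL command above; `YOUR_API_KEY` is a placeholder for your own Hugging Face access token:
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/halilbabacan/autotrain-cognitive_distortion_gpt_roberta-73173139143"
headers = {"Authorization": "Bearer YOUR_API_KEY"}  # replace with your Hugging Face API token

# Send the input text to the hosted Inference API and print the returned label scores
response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoTrain"})
print(response.json())
```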
Or load the model directly with the 🤗 Transformers Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned model and tokenizer (requires access to the repository with a Hugging Face token)
model = AutoModelForSequenceClassification.from_pretrained("halilbabacan/autotrain-cognitive_distortion_gpt_roberta-73173139143", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("halilbabacan/autotrain-cognitive_distortion_gpt_roberta-73173139143", use_auth_token=True)

# Tokenize the input text and run it through the classifier
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
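The model returns raw logits. One common way to turn them into a predicted class is sketched below; the human-readable label names are taken from the model's `id2label` config rather than assumed:
```python
import torch

# Convert logits to probabilities and pick the most likely class
probs = torch.softmax(outputs.logits, dim=-1)
predicted_id = probs.argmax(dim=-1).item()
print(model.config.id2label[predicted_id], probs[0, predicted_id].item())
```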