Flattery Prediction from Text

This model was fine-tuned to predict flattery in transcripts of English earnings calls. It was introduced in This Paper Had the Smartest Reviewers -- Flattery Detection Utilising an Audio-Textual Transformer-Based Approach, published at INTERSPEECH 2024.

Model Details

Model Description

This is a fine-tuned variant of RoBERTa-base. It was trained on a dataset of single sentences uttered in earnings calls, each labeled for flattery in a binary manner. The training set comprises 7167 sentences; a further 1878 sentences were used as the development set. For more details, please refer to the paper, in particular Section 2 for the dataset, Section 3.1 for the training procedure, and Section 4.1 for the results. The checkpoint provided here was trained on human gold-standard transcripts. It achieves Unweighted Average Recall (UAR) values of .8512 and .8865 on the development and test partitions, respectively.
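
For reference, UAR is the unweighted mean of the per-class recalls, i.e., the average of the recall on the flattery class and the recall on the non-flattery class. A minimal sketch of how it can be computed with scikit-learn (the labels below are illustrative, not from the dataset):

from sklearn.metrics import recall_score

# toy gold labels and predictions (illustrative only)
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1]

# UAR = unweighted (macro) mean of the per-class recalls
uar = recall_score(y_true, y_pred, average='macro')
print(f'UAR: {uar:.4f}')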

Uses

The following snippet illustrates the usage of the model.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# initialize model and tokenizer
checkpoint = "chrlukas/flattery_prediction_text"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
model.eval()

# predict flattery in a sentence
example = 'This is a great example!'    # should predict flattery
tokenized = tokenizer(example, return_tensors='pt')
with torch.no_grad():
    logits = model(**tokenized).logits
prediction = torch.sigmoid(logits).item()    # probability of flattery
flattery = prediction >= 0.5                 # binary decision at threshold 0.5
print(f'Flattery detected? {flattery}')
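
To score several sentences at once, the tokenizer can pad a whole batch in a single call. A minimal sketch along the same lines, assuming (as in the snippet above) that the checkpoint outputs a single logit per sentence; the example sentences are illustrative:

# predict flattery for a batch of sentences
examples = [
    'This is a great example!',            # should predict flattery
    'The quarterly figures are attached.', # should not
]
batch = tokenizer(examples, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    logits = model(**batch).logits
probs = torch.sigmoid(logits).squeeze(-1)   # one probability per sentence
for sentence, p in zip(examples, probs):
    print(f'{sentence!r}: flattery={p.item() >= 0.5} (p={p.item():.3f})')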

Bias, Risks, and Limitations

The model is trained on a highly domain-specific dataset sourced from earnings calls, i.e., typically conversations between financial analysts and executives of US companies. Hence, it cannot be expected to generalize well to other domains and contexts.

Citation

BibTeX:

@inproceedings{christ24_interspeech,
  title     = {{This Paper Had the Smartest Reviewers - Flattery Detection Utilising an Audio-Textual Transformer-Based Approach}},
  author    = {Lukas Christ and Shahin Amiriparian and Friederike Hawighorst and Ann-Kathrin Schill and Angelo Boutalikakis and Lorenz Graf-Vlachy and Andreas König and Björn Schuller},
  year      = {2024},
  booktitle = {{Interspeech 2024}},
  pages     = {3530--3534},
  doi       = {10.21437/Interspeech.2024-87},
  issn      = {2958-1796},
}