---
license: apache-2.0
language:
- en
pipeline_tag: text-classification
library_name: transformers
tags:
- emotions
- multi-task-learning
- text-classification
- sentiment-analysis
- natural-language-processing
- psychological-modeling
- emotion-recognition
- bert
- deep-learning
- emotion-embeddings
- valence-arousal
- emotion-ontology
---
# 🌌 EmotionVerse-2: Galactic Emotional Intelligence
EmotionVerse-2 unifies psychologically grounded labeling (Plutchik) with a valence-arousal manifold, neural plasticity, and memory consolidation. It doesn't just tag text; it models why a passage feels the way it does, across interlocking tasks that share a single encoder for maximal transfer.
A single BERT encoder feeds six specialized heads (Primary, Secondary, Meta, Sentiment, Interaction, and Context), trained end-to-end with dynamic loss weighting. The result is emergent cross-task awareness: the model picks up subtle shifts and mixed feelings while keeping its outputs psychologically consistent.
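A minimal sketch of that shared-encoder, multi-head layout (head widths and class counts other than the Plutchik eight are assumptions for illustration; the shipped checkpoint defines its own):

```python
import torch.nn as nn
from transformers import AutoModel

class MultiHeadEmotionModel(nn.Module):
    """One shared encoder feeding six task heads, as described above."""
    def __init__(self, encoder_name: str = "bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        h = self.encoder.config.hidden_size
        self.heads = nn.ModuleDict({
            "primary":     nn.Linear(h, 8),   # Plutchik's eight core emotions
            "secondary":   nn.Linear(h, 24),  # assumed count of secondary blends
            "meta":        nn.Linear(h, 8),   # assumed count of meta-emotions
            "sentiment":   nn.Linear(h, 3),   # negative / neutral / positive
            "interaction": nn.Linear(h, 4),   # assumed interaction types
            "context":     nn.Linear(h, 6),   # assumed context categories
        })

    def forward(self, **batch):
        pooled = self.encoder(**batch).last_hidden_state[:, 0]  # [CLS] token
        return {name: head(pooled) for name, head in self.heads.items()}
```

Because every head backpropagates into the same encoder, improvements on one task reshape the representation all six share.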
The evaluation deliberately pits generalist GoEmotions models against a specialist trained on Plutchik’s eight core emotions. The “unfairness” is instructive: it exposes the cost of vocabulary mismatch and the power of a psychologically aligned label space.
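The mismatch is easy to see concretely: GoEmotions' 28-label taxonomy does not line up one-to-one with Plutchik's eight, so a baseline's raw predictions rarely land in the target vocabulary at all. A coarse collapse like the illustrative sketch below would be needed for a fairer comparison (it is not part of the benchmark that produced the table that follows):

```python
# Partial, illustrative GoEmotions -> Plutchik collapse; several labels
# (e.g. "realization", "neutral") have no clean Plutchik counterpart.
GOEMOTIONS_TO_PLUTCHIK = {
    "joy": "joy", "amusement": "joy", "excitement": "joy",
    "admiration": "trust", "gratitude": "trust", "approval": "trust",
    "optimism": "anticipation", "curiosity": "anticipation", "desire": "anticipation",
    "surprise": "surprise",
    "anger": "anger", "annoyance": "anger",
    "sadness": "sadness", "grief": "sadness", "disappointment": "sadness",
    "fear": "fear", "nervousness": "fear",
    "disgust": "disgust", "disapproval": "disgust",
}
```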
| Model | F1 Macro |
|---|---|
| EmotionVerse-2 | 0.951 🌟 |
| GoEmotions-RoBERTa | 0.024 🔻 |
| GoEmotions-BERT | 0.012 📉 |
| DistilBERT-Emotion | 0.007 👻 |
Diagonal dominance in the confusion matrix indicates clean separation of the Plutchik classes; the rare off-diagonal errors fall on psychologically adjacent emotions.
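To inspect diagonal dominance yourself, a minimal sketch (tiny stand-in label lists so the snippet runs standalone; in practice use the `all_y`/`all_p` lists from the evaluation loop further down):

```python
from sklearn.metrics import confusion_matrix

# Stand-in true/predicted Plutchik indices; real values come from the eval loop.
y_true = [0, 1, 2, 3, 4, 5, 6, 7, 0, 1]
y_pred = [0, 1, 2, 3, 4, 5, 6, 7, 0, 2]

cm = confusion_matrix(y_true, y_pred, labels=list(range(8)))
diag_share = cm.trace() / cm.sum()  # close to 1.0 means near-perfect separation
print(f"diagonal share: {diag_share:.3f}")
```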
Emotion labels are embedded vectors, not bare integer IDs.
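A minimal sketch of such a label-embedding table (the dimension is an assumption; the shipped checkpoint defines its own):

```python
import torch.nn as nn
import torch.nn.functional as F

NUM_EMOTIONS, EMB_DIM = 8, 64  # EMB_DIM is illustrative, not the checkpoint's value
label_embeddings = nn.Embedding(NUM_EMOTIONS, EMB_DIM)

# Relations between emotions become geometric, e.g. joy (index 0) vs. trust (index 1):
joy, trust = label_embeddings.weight[0], label_embeddings.weight[1]
print(F.cosine_similarity(joy, trust, dim=0).item())
```

For inference with the published checkpoint, the quickstart below scores Plutchik's eight primary emotions: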
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import torch.nn.functional as F

name = "ayjays132/EmotionVerse-2"
tok = AutoTokenizer.from_pretrained(name)
mdl = AutoModelForSequenceClassification.from_pretrained(name)

text = "I’m thrilled yet anxious about tomorrow’s launch."
batch = tok(text, return_tensors="pt", truncation=True, padding=True)
with torch.no_grad():
    out = mdl(**batch)

plutchik_labels = ["joy", "trust", "anticipation", "surprise",
                   "anger", "sadness", "fear", "disgust"]
plutchik_logits = out.logits[:, :len(plutchik_labels)]  # slice depends on export
probs = F.softmax(plutchik_logits, dim=-1)[0].tolist()
for lbl, p in sorted(zip(plutchik_labels, probs), key=lambda x: -x[1]):
    print(f"{lbl:12s} {p:.3f}")
```
Cross-head agreement can be sanity-checked by comparing valence-arousal vectors from two heads with cosine similarity:

```python
def coherence(valence_arousal_primary, valence_arousal_sentiment):
    # cosine similarity across valence/arousal heads
    a, b = valence_arousal_primary, valence_arousal_sentiment
    return (a @ b) / ((a.norm() + 1e-8) * (b.norm() + 1e-8))
```
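For example, with 2-D (valence, arousal) vectors pulled from two heads (the values here are hypothetical stand-ins):

```python
import torch

va_primary = torch.tensor([0.7, 0.4])    # hypothetical (valence, arousal) from the primary head
va_sentiment = torch.tensor([0.6, 0.5])  # hypothetical (valence, arousal) from the sentiment head
print(f"coherence: {coherence(va_primary, va_sentiment).item():.3f}")
```

Values near 1.0 mean the heads agree on where the text sits in the valence-arousal plane.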
The EmotionVerse dataset anchors this model with 3K+ entries annotated across six dimensions, including meta-emotional and contextual narratives.
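A sketch of what one record might look like, given the six heads named above (field names and values are illustrative, not the dataset's literal schema):

```python
example_record = {
    "text": "I’m thrilled yet anxious about tomorrow’s launch.",
    "primary": "joy",                   # Plutchik core emotion
    "secondary": "anticipation",        # illustrative secondary blend
    "meta": "ambivalence",              # illustrative meta-emotional label
    "sentiment": "positive",
    "interaction": "self-disclosure",   # illustrative interaction type
    "context": "work, upcoming event",  # illustrative contextual narrative tag
}
```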
A mixed-precision fine-tuning loop (sketch; assumes a train_loader yielding tokenized batches with labels):

```python
from torch.optim import AdamW
from torch.cuda.amp import autocast, GradScaler

scaler = GradScaler()
optim = AdamW(mdl.parameters(), lr=3e-5, weight_decay=0.01)

for step, batch in enumerate(train_loader):
    with autocast():
        out = mdl(**{k: v.to(mdl.device) for k, v in batch.items()})
        # combine task losses (weights dynamically updated)
        loss = out.loss  # if the model returns the weighted sum
    scaler.scale(loss).backward()
    scaler.step(optim)
    scaler.update()
    optim.zero_grad()
```
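The card doesn't pin down the dynamic weighting scheme; one common choice consistent with the description is homoscedastic uncertainty weighting (Kendall et al., 2018), sketched here:

```python
import torch
import torch.nn as nn

class UncertaintyWeighting(nn.Module):
    """Learns one log-variance per task; uncertain tasks are down-weighted,
    while the additive log-variance term keeps weights from collapsing to zero."""
    def __init__(self, num_tasks: int = 6):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        # task_losses: iterable of scalar loss tensors, one per head
        total = torch.zeros((), device=self.log_vars.device)
        for i, loss in enumerate(task_losses):
            precision = torch.exp(-self.log_vars[i])
            total = total + precision * loss + self.log_vars[i]
        return total
```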
Validation mirrors the benchmark: macro F1 over the primary head (assumes a val_loader shaped like the training loader):

```python
from sklearn.metrics import f1_score

all_y, all_p = [], []
for batch in val_loader:
    with torch.no_grad():
        out = mdl(**{k: v.to(mdl.device) for k, v in batch.items()})
    logits = out.logits[:, :8]  # primary head slice (Plutchik's eight classes)
    preds = logits.argmax(-1).cpu().tolist()
    all_p += preds
    all_y += batch["labels"].tolist()

print("F1 Macro:", f1_score(all_y, all_p, average="macro"))
```
Released under the Apache 2.0 License. Use, modify, and ship with confidence.
Thanks to Hugging Face (`transformers`, `datasets`) and the research community advancing affective computing, psychological modeling, and trustworthy NLP.