---
license: apache-2.0
language:
- en
metrics:
- precision
- recall
- f1
- accuracy
new_version: v1.0
datasets:
- custom
- chatgpt
pipeline_tag: text-classification
library_name: transformers
tags:
- emotion
- classification
- text-classification
- neurobert
- emojis
- emotions
- v1.0
- sentiment-analysis
- nlp
- lightweight
- chatbot
- social-media
- mental-health
- short-text
- emotion-detection
- transformers
- real-time
- expressive
- ai
- machine-learning
- english
- inference
- edge-ai
- smart-replies
- tone-analysis
- contextual-ai
- wearable-ai
base_model:
- neurobert
---
# NeuroFeel: Lightweight NeuroBERT for Real-Time Emotion Detection
## Table of Contents
- Overview
- Key Features
- Supported Emotions
- Model Architecture
- Installation
- Download Instructions
- Quickstart: Emotion Detection
- Evaluation Metrics
- Use Cases
- Hardware Requirements
- Training Details
- Fine-Tuning Guide
- Comparison to Other Models
- Tags
- License
- Credits
- Support & Community
- Contact
## Model Training Tutorial Video
Watch the step-by-step video guide to training your own machine learning model.
## Overview
NeuroFeel is a lightweight NLP model built on NeuroBERT, fine-tuned for short-text emotion detection on edge and IoT devices. With a quantized size of ~25MB and ~7M parameters, it classifies text into 13 nuanced emotional categories (e.g., Happiness, Sadness, Anger, Love) with high precision. Optimized for low-latency, offline operation, NeuroFeel is perfect for privacy-focused applications like chatbots, social media sentiment analysis, mental health monitoring, and contextual AI in resource-constrained environments such as wearables, smart home devices, and mobile apps.
- Model Name: NeuroFeel
- Size: ~25MB (quantized)
- Parameters: ~7M
- Architecture: Lightweight NeuroBERT (4 layers, hidden size 256, 8 attention heads)
- Description: Compact 4-layer, 256-hidden model for emotion detection
- License: Apache-2.0, free for commercial and personal use
## Key Features
- Ultra-Compact Design: ~25MB footprint for devices with limited storage.
- Rich Emotion Detection: Classifies 13 emotions with expressive emoji mappings.
- Offline Capability: Fully functional without internet connectivity.
- Real-Time Inference: Optimized for CPUs, mobile NPUs, and microcontrollers.
- Versatile Applications: Supports emotion detection, sentiment analysis, and tone analysis for short texts.
- Privacy-First: On-device processing ensures user data stays local.
## Supported Emotions
NeuroFeel classifies text into one of 13 emotional categories, each paired with an emoji for enhanced interpretability:

| Emotion | Emoji |
|---|---|
| Sadness | 😢 |
| Anger | 😠 |
| Love | ❤️ |
| Surprise | 😲 |
| Fear | 😱 |
| Happiness | 😄 |
| Neutral | 😐 |
| Disgust | 🤢 |
| Shame | 🙈 |
| Guilt | 😔 |
| Confusion | 😕 |
| Desire | 🔥 |
| Sarcasm | 😏 |
## Model Architecture
NeuroFeel is derived from NeuroBERT, a lightweight transformer model optimized for edge computing. Key architectural details:
- Layers: 4 transformer layers for reduced computational complexity.
- Hidden Size: 256, balancing expressiveness and efficiency.
- Attention Heads: 8, enabling robust contextual understanding.
- Parameters: ~7M, significantly fewer than standard BERT models.
- Quantization: INT8 quantization for minimal memory usage and fast inference.
- Vocabulary Size: 30,522 tokens, compatible with NeuroBERT's tokenizer.
- Max Sequence Length: 64 tokens, ideal for short-text inputs like social media posts or chatbot messages.
This architecture ensures NeuroFeel delivers high accuracy for emotion detection while maintaining compatibility with resource-constrained devices like Raspberry Pi, ESP32, or mobile NPUs.
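As a quick sanity check, these dimensions can be read back from the published configuration (a minimal sketch; it assumes the checkpoint exposes the standard BERT config fields):

```python
from transformers import AutoConfig

# Load the hosted configuration and print the architecture parameters
# described above; the "expected" values are taken from this card.
config = AutoConfig.from_pretrained("boltuix/NeuroFeel")
print(config.num_hidden_layers)    # expected: 4
print(config.hidden_size)          # expected: 256
print(config.num_attention_heads)  # expected: 8
print(config.vocab_size)           # expected: 30522
```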
## Installation
Install the required dependencies:

```bash
pip install transformers torch
```
Ensure your environment supports Python 3.6+ and has ~25MB of storage for model weights.
## Download Instructions
Via Hugging Face:
- Access the model at boltuix/NeuroFeel.
- Download the model files (~25MB) or clone the repository:

```bash
git clone https://huggingface.co/boltuix/NeuroFeel
```
Via Transformers Library:
- Load the model directly in Python:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("boltuix/NeuroFeel")
tokenizer = AutoTokenizer.from_pretrained("boltuix/NeuroFeel")
```
Manual Download:
- Download quantized model weights (Safetensors format) from the Hugging Face model hub.
- Extract and integrate into your edge/IoT application.
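For fully offline deployment, the downloaded weights can also be loaded from a local directory rather than the Hub (a minimal sketch; `./NeuroFeel` is an assumed path to the cloned or extracted files):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Point from_pretrained at the directory holding config.json,
# the Safetensors weights, and the tokenizer files.
local_path = "./NeuroFeel"  # assumed location of the downloaded files
model = AutoModelForSequenceClassification.from_pretrained(local_path)
tokenizer = AutoTokenizer.from_pretrained(local_path)
```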
Emotions Dataset: Infuse Your AI with Human Feelings!
## Quickstart: Emotion Detection

### Basic Inference Example
Classify emotions in short text inputs using the Hugging Face pipeline:

```python
from transformers import pipeline

# Load the fine-tuned NeuroFeel model
sentiment_analysis = pipeline("text-classification", model="boltuix/NeuroFeel")

# Analyze emotion
result = sentiment_analysis("i love you")
print(result)
```
Output:

```
[{'label': 'Love', 'score': 0.8563215732574463}]
```

This indicates the emotion is Love ❤️ with 85.63% confidence.
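If you need the scores for all 13 labels rather than just the top prediction, recent `transformers` releases accept `top_k=None` on the pipeline (a minimal sketch):

```python
from transformers import pipeline

# top_k=None returns a score for every label instead of only the best one
classifier = pipeline("text-classification", model="boltuix/NeuroFeel", top_k=None)
scores = classifier("i love you")[0]

# Show the three highest-scoring emotions
for entry in sorted(scores, key=lambda s: s["score"], reverse=True)[:3]:
    print(f"{entry['label']}: {entry['score']:.4f}")
```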
### Extended Example with Emoji Mapping
Enhance the output with human-readable emotions and emojis:

```python
from transformers import pipeline

# Load the fine-tuned NeuroFeel model
sentiment_analysis = pipeline("text-classification", model="boltuix/NeuroFeel")

# Define label-to-emoji mapping
label_to_emoji = {
    "Sadness": "😢",
    "Anger": "😠",
    "Love": "❤️",
    "Surprise": "😲",
    "Fear": "😱",
    "Happiness": "😄",
    "Neutral": "😐",
    "Disgust": "🤢",
    "Shame": "🙈",
    "Guilt": "😔",
    "Confusion": "😕",
    "Desire": "🔥",
    "Sarcasm": "😏"
}

# Input text
text = "i love you"

# Analyze emotion
result = sentiment_analysis(text)[0]
label = result["label"].capitalize()
emoji = label_to_emoji.get(label, "❓")

# Output
print(f"Text: {text}")
print(f"Predicted Emotion: {label} {emoji}")
print(f"Confidence: {result['score']:.2%}")
```

Output:

```
Text: i love you
Predicted Emotion: Love ❤️
Confidence: 85.63%
```
Note: Fine-tune the model for domain-specific tasks to boost accuracy.
NeuroFeel excels in classifying a wide range of emotions in short texts, particularly in IoT, social media, and mental health contexts. Fine-tuning enhances performance on subtle emotions like Sarcasm or Shame.
## Evaluation Metrics

| Metric | Value (Approx.) |
|---|---|
| Accuracy | ~92–96% on 13-class emotion tasks |
| F1 Score | Balanced for multi-class classification |
| Latency | <40ms on Raspberry Pi 4 |
| Recall | Competitive for lightweight models |

Note: Metrics depend on hardware and fine-tuning. Test on your target device for precise results.
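To check the latency claim on your own hardware, a simple wall-clock benchmark is enough (a minimal sketch; numbers will vary with device, runtime, and input length):

```python
import time
from transformers import pipeline

classifier = pipeline("text-classification", model="boltuix/NeuroFeel")
classifier("warm up")  # exclude one-time loading overhead from the timing

# Average single-sentence latency over 100 runs
start = time.perf_counter()
for _ in range(100):
    classifier("i love you")
elapsed_ms = (time.perf_counter() - start) / 100 * 1000
print(f"Average latency: {elapsed_ms:.1f} ms per inference")
```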
## Use Cases
NeuroFeel is tailored for edge and IoT scenarios requiring real-time emotion detection for short texts. Key applications include (a routing sketch follows the list):
- Chatbot Emotion Understanding: Detect user emotions, e.g., "I love you" (predicts "Love ❤️") to tailor responses.
- Social Media Sentiment Tagging: Analyze posts, e.g., "This is disgusting!" (predicts "Disgust 🤢") for moderation or trend analysis.
- Mental Health Context Detection: Monitor mood, e.g., "I feel so alone" (predicts "Sadness 😢") for wellness apps or crisis alerts.
- Smart Replies and Reactions: Suggest replies, e.g., "I'm so happy!" (predicts "Happiness 😄") for positive emojis or animations.
- Emotional Tone Analysis: Adjust IoT settings, e.g., "I'm terrified!" (predicts "Fear 😱") to dim lights or play calming music.
- Voice Assistants: Local emotion-aware parsing, e.g., "Why does it break?" (predicts "Anger 😠") to prioritize fixes.
- Toy Robotics: Emotion-driven interactions, e.g., "I really want that!" (predicts "Desire 🔥") for engaging animations.
- Fitness Trackers: Analyze feedback, e.g., "Wait, what?" (predicts "Confusion 😕") to clarify instructions.
- Wearable Devices: Real-time mood tracking, e.g., "I'm stressed out" (predicts "Fear 😱") to suggest breathing exercises.
- Smart Home Automation: Contextual responses, e.g., "I'm so tired" (predicts "Sadness 😢") to adjust lighting or music.
- Customer Support Bots: Detect frustration, e.g., "This is ridiculous!" (predicts "Anger 😠") to escalate to human agents.
- Educational Tools: Analyze student feedback, e.g., "I don't get it" (predicts "Confusion 😕") to offer tailored explanations.
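As a sketch of how several of these use cases can be wired together, the snippet below routes incoming messages on the predicted label and confidence; the 0.75 threshold and the action names are hypothetical, not part of the model:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="boltuix/NeuroFeel")

def route_message(text: str, threshold: float = 0.75) -> str:
    """Map a predicted emotion to a hypothetical downstream action."""
    result = classifier(text)[0]
    label, score = result["label"], result["score"]
    if label == "Anger" and score >= threshold:
        return "escalate_to_human"           # customer-support use case
    if label == "Sadness" and score >= threshold:
        return "suggest_wellness_resources"  # mental-health use case
    return "standard_reply"

print(route_message("This is ridiculous!"))  # likely: escalate_to_human
```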
## Hardware Requirements
- Processors: CPUs, mobile NPUs, or microcontrollers (e.g., ESP32-S3, Raspberry Pi 4, Snapdragon NPUs)
- Storage: ~25MB for model weights (quantized, Safetensors format)
- Memory: ~70MB RAM for inference
- Environment: Offline or low-connectivity settings
Quantization ensures efficient memory usage, making NeuroFeel ideal for resource-constrained devices.
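If you are starting from unquantized weights, PyTorch's post-training dynamic quantization reproduces the kind of INT8 footprint described here (a minimal sketch; the exact recipe used for the published checkpoint is not documented in this card):

```python
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("boltuix/NeuroFeel")

# Quantize the linear layers to INT8; activations remain in float.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
torch.save(quantized_model.state_dict(), "neurofeel_int8.pt")
```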
## Training Details
NeuroFeel was fine-tuned on a custom emotion dataset augmented with ChatGPT-generated data to enhance diversity and robustness. Key training details:
- Dataset:
- Custom Emotion Dataset: ~10,000 labeled short-text samples covering 13 emotions (e.g., Happiness, Sadness, Love). Sourced from social media posts, IoT user feedback, and chatbot interactions.
- ChatGPT-Augmented Data: Synthetic samples generated to balance underrepresented emotions (e.g., Sarcasm, Shame) and improve generalization.
- Preprocessing: Lowercasing, emoji removal, and tokenization with NeuroBERT's tokenizer (max length: 64 tokens); see the sketch after this list.
- Training Process:
- Base Model: NeuroBERT, pre-trained on general English text for masked language modeling.
- Fine-Tuning: Supervised training for 13-class emotion classification using cross-entropy loss.
- Hyperparameters:
- Epochs: 5
- Batch Size: 16
- Learning Rate: 2e-5
- Optimizer: AdamW
- Scheduler: Linear warmup (10% of steps)
- Hardware: Fine-tuned on a single NVIDIA A100 GPU; inference is optimized for edge devices.
- Quantization: Post-training INT8 quantization to reduce model size to ~25MB and improve inference speed.
- Data Augmentation:
- Synonym replacement and back-translation to enhance robustness.
- Synthetic negative sampling to improve detection of nuanced emotions like Guilt or Confusion.
- Validation:
- Split: 80% train, 10% validation, 10% test.
- Validation F1 score: ~0.93 across 13 classes.
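The preprocessing described above can be approximated as follows (a minimal sketch; beyond lowercasing, emoji removal, and the 64-token limit, the exact cleaning rules are assumptions):

```python
import re
from transformers import AutoTokenizer

def preprocess(text: str) -> str:
    # Lowercase and strip emoji / other non-ASCII symbols
    text = text.lower()
    return re.sub(r"[^\x00-\x7f]", "", text).strip()

tokenizer = AutoTokenizer.from_pretrained("boltuix/NeuroFeel")
encoded = tokenizer(
    preprocess("I LOVE you 😍"),        # -> "i love you"
    truncation=True,
    padding="max_length",
    max_length=64,                      # the model's sequence limit
    return_tensors="pt",
)
print(encoded["input_ids"].shape)       # torch.Size([1, 64])
```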
Fine-tuning on domain-specific data is recommended to optimize performance for specific use cases (e.g., mental health apps or smart home devices).
## Fine-Tuning Guide
To adapt NeuroFeel for custom emotion detection tasks:
- Prepare Dataset: Collect labeled data with 13 emotion categories.
- Fine-Tune with Hugging Face:
```python
import pandas as pd
from transformers import BertTokenizer, BertForSequenceClassification, Trainer, TrainingArguments
from sklearn.model_selection import train_test_split
import torch
from torch.utils.data import Dataset

# === 1. Load and preprocess data ===
dataset_path = '/content/dataset.csv'
df = pd.read_csv(dataset_path)

# Use the correct original column name 'Label' in dropna
df = df.dropna(subset=['Label'])  # Ensure no missing labels
df.columns = ['text', 'label']    # Normalize column names

# === 2. Encode labels ===
labels = sorted(df["label"].unique())
label_to_id = {label: idx for idx, label in enumerate(labels)}
id_to_label = {idx: label for label, idx in label_to_id.items()}
df['label'] = df['label'].map(label_to_id)

# === 3. Train/val split ===
train_texts, val_texts, train_labels, val_labels = train_test_split(
    df['text'].tolist(), df['label'].tolist(), test_size=0.2, random_state=42
)

# === 4. Tokenizer ===
tokenizer = BertTokenizer.from_pretrained("boltuix/NeuroBERT-Pro")

# === 5. Dataset class ===
class SentimentDataset(Dataset):
    def __init__(self, texts, labels, tokenizer, max_length=128):
        self.texts = texts
        self.labels = labels
        self.tokenizer = tokenizer
        self.max_length = max_length

    def __len__(self):
        return len(self.texts)

    def __getitem__(self, idx):
        encoding = self.tokenizer(
            self.texts[idx],
            padding='max_length',
            truncation=True,
            max_length=self.max_length,
            return_tensors='pt'
        )
        return {
            'input_ids': encoding['input_ids'].squeeze(0),
            'attention_mask': encoding['attention_mask'].squeeze(0),
            'labels': torch.tensor(self.labels[idx], dtype=torch.long)
        }

# === 6. Load datasets ===
train_dataset = SentimentDataset(train_texts, train_labels, tokenizer)
val_dataset = SentimentDataset(val_texts, val_labels, tokenizer)

# === 7. Load model ===
model = BertForSequenceClassification.from_pretrained(
    "boltuix/NeuroBERT-Pro",
    num_labels=len(label_to_id)
)

# Optional: Ensure tensor layout is contiguous
for param in model.parameters():
    param.data = param.data.contiguous()

# === 8. Training arguments ===
training_args = TrainingArguments(
    output_dir='./results',
    run_name="NeuroFeel",
    num_train_epochs=5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    warmup_steps=500,
    weight_decay=0.01,
    logging_dir='./logs',
    logging_steps=10,
    eval_strategy="epoch",
    report_to="none"
)

# === 9. Trainer setup ===
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=val_dataset
)

# === 10. Train and evaluate ===
trainer.train()
trainer.evaluate()

# === 11. Save model and label mappings ===
model.config.label2id = label_to_id
model.config.id2label = id_to_label
model.config.num_labels = len(label_to_id)
model.save_pretrained("./neuro-feel")
tokenizer.save_pretrained("./neuro-feel")

print("✅ Training complete. Model and tokenizer saved to ./neuro-feel")
```
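The `Trainer` above reports only the loss during evaluation; to track the accuracy, precision, recall, and F1 quoted in this card, a metrics hook can be added and passed as `compute_metrics=compute_metrics` to the `Trainer` (a minimal sketch):

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) pair supplied by the Trainer
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```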
- Deploy: Export to ONNX or TensorFlow Lite for edge devices.
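For the ONNX route, a basic export of the fine-tuned checkpoint could look like this (a minimal sketch; the `./neuro-feel` path matches the save step above, and Hugging Face Optimum offers a more robust exporter):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("./neuro-feel")
tokenizer = AutoTokenizer.from_pretrained("./neuro-feel")
model.eval()
model.config.return_dict = False  # return tuples, which the ONNX tracer handles cleanly

# Trace a dummy batch padded to the model's 64-token limit
dummy = tokenizer("example input", return_tensors="pt",
                  padding="max_length", max_length=64)
torch.onnx.export(
    model,
    (dummy["input_ids"], dummy["attention_mask"]),
    "neurofeel.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],
    dynamic_axes={"input_ids": {0: "batch"}, "attention_mask": {0: "batch"}},
    opset_version=14,
)
```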
## Comparison to Other Models

| Model | Parameters | Size | Edge/IoT Focus | Tasks Supported |
|---|---|---|---|---|
| NeuroFeel | ~7M | ~25MB | High | Emotion Detection, Classification |
| NeuroBERT | ~7M | ~30MB | High | MLM, NER, Classification |
| BERT-Lite | ~2M | ~10MB | High | MLM, NER, Classification |
| DistilBERT | ~66M | ~200MB | Moderate | MLM, NER, Classification, Sentiment |
NeuroFeel is specialized for 13-class emotion detection, offering superior performance for short-text sentiment analysis on edge devices compared to general-purpose models like NeuroBERT, while being far more efficient than DistilBERT.
## Tags
#NeuroFeel
#edge-nlp
#emotion-detection
#on-device-ai
#offline-nlp
#mobile-ai
#sentiment-analysis
#text-classification
#emojis
#emotions
#lightweight-transformers
#embedded-nlp
#smart-device-ai
#low-latency-models
#ai-for-iot
#efficient-neurobert
#nlp2025
#context-aware
#edge-ml
#smart-home-ai
#emotion-aware
#voice-ai
#eco-ai
#chatbot
#social-media
#mental-health
#short-text
#smart-replies
#tone-analysis
#wearable-ai
## License
Apache-2.0 License: Free to use, modify, and distribute for personal and commercial purposes. See LICENSE for details.
## Credits
- Base Model: neurobert
- Optimized By: Boltuix, fine-tuned and quantized for edge AI applications
- Library: Hugging Face transformers team for model hosting and tools
## Support & Community
For issues, questions, or contributions:
- Visit the Hugging Face model page
- Open an issue on the repository
- Join discussions on Hugging Face or contribute via pull requests
- Check the Transformers documentation for guidance
We welcome community feedback to enhance NeuroFeel for IoT and edge applications!
## Contact
- Email: [email protected]