# Model Card for Llama-3-8B-Distil-MetaHate
Llama-3-8B-Distil-MetaHate is a distilled Llama 3 model designed specifically for hate speech classification and explanation. It leverages Chain-of-Thought methodologies to improve both the interpretability and the operational efficiency of hate speech detection.
## Model Details

### Model Description
- Developed by: IRLab
- Model type: text-generation
- Language(s) (NLP): English
- License: Llama3
- Finetuned from model: meta-llama/Meta-Llama-3-8B-Instruct
### Model Sources
- Repository: https://github.com/palomapiot/distil-metahate
- Paper (preprint): https://arxiv.org/abs/2412.13698
## Uses
This model is intended for research and practical applications in detecting and explaining hate speech. It aims to enhance the understanding of the model's predictions, providing users with insights into why a particular text is classified as hate speech.
## Bias, Risks, and Limitations
While the model is designed to improve interpretability, it may still produce biased outputs that reflect biases present in the training data. Users should exercise caution and perform due diligence before deploying the model.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "irlab-udc/Llama-3-8B-Distil-MetaHate"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage
input_text = "Your input text here"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
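The generation contains free-form text (explanation plus verdict). If you need a binary label downstream, you will typically add a small post-processing step. The sketch below is a hypothetical helper: the function name and the exact verdict wording ("hate speech" / "not hate speech") are assumptions for illustration, not a documented output format of this model.

```python
def parse_verdict(generated: str) -> str:
    """Map a free-form model generation to a coarse label.

    Assumption: the explanation mentions a verdict phrased around
    "hate speech"; negated forms are checked first, since the string
    "not hate speech" also contains "hate speech".
    """
    text = generated.lower()
    if "not hate speech" in text or "no hate speech" in text:
        return "not hate"
    if "hate speech" in text:
        return "hate"
    return "unknown"


print(parse_verdict("... therefore this post is hate speech."))  # hate
print(parse_verdict("... this text is not hate speech."))        # not hate
```

Checking the negated phrasing before the positive one is the important detail; a naive substring match would misclassify every negated verdict.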
## Training Details

Training details are described in the paper: https://arxiv.org/abs/2412.13698.
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: RTX A6000 (TDP of 300 W)
- Hours used: 15
- Carbon Emitted: ~1.94 kg CO₂eq (4.5 kWh at a grid carbon intensity of 0.432 kgCO₂eq/kWh)
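Assuming the 0.432 figure is the per-kWh grid carbon intensity (as used by the ML Impact calculator) rather than the total, the estimate works out to about 1.94 kg CO₂eq:

```python
# Back-of-the-envelope emissions estimate from the figures above.
tdp_kw = 0.300      # RTX A6000 TDP: 300 W
hours = 15          # reported training time
intensity = 0.432   # assumed grid carbon intensity, kgCO2eq/kWh

energy_kwh = tdp_kw * hours           # 4.5 kWh
emissions_kg = energy_kwh * intensity
print(f"{emissions_kg:.2f} kg CO2eq")  # 1.94 kg CO2eq
```

This is an upper bound in one sense (the GPU rarely runs at full TDP throughout) and a lower bound in another (it ignores CPU, RAM, and cooling overhead).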
## Citation

```bibtex
@misc{piot2024efficientexplainablehatespeech,
  title={Towards Efficient and Explainable Hate Speech Detection via Model Distillation},
  author={Paloma Piot and Javier Parapar},
  year={2024},
  eprint={2412.13698},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2412.13698},
}
```
## Model Card Contact
For questions, inquiries, or discussions related to this model, please contact:
- Email: [email protected]
## Framework versions
- PEFT 0.11.1
## Acknowledgements
The authors thank the Horizon Europe research and innovation programme for funding under the Marie Skłodowska-Curie Grant Agreement No. 101073351. The authors also thank the financial support supplied by the Consellería de Cultura, Educación, Formación Profesional e Universidades (accreditation 2019-2022 ED431G/01, ED431B 2022/33) and the European Regional Development Fund, which acknowledges the CITIC Research Center in ICT of the University of A Coruña as a Research Center of the Galician University System, as well as project PID2022-137061OB-C21 (Ministerio de Ciencia e Innovación, Agencia Estatal de Investigación, Proyectos de Generación de Conocimiento; supported by the European Regional Development Fund). The authors also thank the funding of project PLEC2021-007662 (MCIN/AEI/10.13039/501100011033, Ministerio de Ciencia e Innovación, Agencia Estatal de Investigación, Plan de Recuperación, Transformación y Resiliencia, Unión Europea-Next Generation EU).
## Model tree for irlab-udc/Llama-3-8B-Distil-MetaHate

Base model: unsloth/llama-3-8b-Instruct-bnb-4bit