batch 2 - llama 3.1 70b lora Collection Models trained using batch 2 (some with a mixture of xLAM function-calling data) as LoRAs on Llama 3.1 70B. • 8 items
Post 2584 https://huggingface.co/organizations/nerdyface/share/xvWxWxYmYpCLqZlvNJEZbJHFsDITAicJAT
BigBIO: A Framework for Data-Centric Biomedical Natural Language Processing Paper • 2206.15076 • Published Jun 30, 2022 • 4
Aurora-M: The First Open Source Multilingual Language Model Red-teamed according to the U.S. Executive Order Paper • 2404.00399 • Published Mar 30, 2024 • 43
Stateful Memory-Augmented Transformers for Dialogue Modeling Paper • 2209.07634 • Published Sep 15, 2022 • 1
TextGAIL: Generative Adversarial Imitation Learning for Text Generation Paper • 2004.13796 • Published Apr 7, 2020
Memformer: A Memory-Augmented Transformer for Sequence Modeling Paper • 2010.06891 • Published Oct 14, 2020
Alternating Recurrent Dialog Model with Large-scale Pre-trained Language Models Paper • 1910.03756 • Published Oct 9, 2019
BLOOM: A 176B-Parameter Open-Access Multilingual Language Model Paper • 2211.05100 • Published Nov 9, 2022 • 31