Model Description
Quantization: EXL2, 4.0 bits per weight
max_seq_len: 4096

Model Sources
- Base repository: https://huggingface.co/common-pile/comma-v0.1-2t
Comma v0.1-2T is a 7 billion parameter language model trained on 2 trillion tokens from the Comma v0.1 dataset, which comprises openly licensed text from the Common Pile. Comma v0.1-2T is a "base model" that can be used as a starting point for finetuning and post-training.
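The EXL2 weights can be loaded with the exllamav2 library. Below is a minimal sketch, assuming the weights are fetched with huggingface_hub and that a CUDA GPU is available; the prompt is arbitrary and the exact generator API may differ between exllamav2 versions:

```python
# Sketch: load the 4.0 bpw EXL2 quant and run a short completion.
# Assumes exllamav2 and huggingface_hub are installed; API details may vary by version.
from huggingface_hub import snapshot_download
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Download the quantized weights to a local directory.
model_dir = snapshot_download("Abstract4700/comma-v0.1-2t-4.0bpw-exl2")

config = ExLlamaV2Config(model_dir)
config.max_seq_len = 4096            # matches the context length noted above

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # KV cache allocated as layers load
model.load_autosplit(cache)                # split across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

# Arbitrary example prompt; this is a base model, so expect raw text continuation.
print(generator.generate_simple("The Common Pile is", settings, num_tokens=64))
```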
Citation
@article{kandpal2025common,
title={{The Common Pile v0.1: An 8TB Dataset of Public Domain and Openly Licensed Text}},
author={Nikhil Kandpal and Brian Lester and Colin Raffel and Sebastian Majstorovic and Stella Biderman and Baber Abbasi and Luca Soldaini and Enrico Shippole and A. Feder Cooper and Aviya Skowron and Shayne Longpre and Lintang Sutawika and Alon Albalak and Zhenlin Xu and Guilherme Penedo and Loubna Ben Allal and Elie Bakouch and John David Pressman and Honglu Fan and Dashiell Stander and Guangyu Song and Aaron Gokaslan and John Kirchenbauer and Tom Goldstein and Brian R. Bartoldson and Bhavya Kailkhura and Tyler Murray},
journal={arXiv preprint},
year={2025}
}