Original model link: https://huggingface.co/common-pile/comma-v0.1-2t
name: comma-v0.1-2t-MLX-Q8
base_model: common-pile/comma-v0.1-2t
license: apache-2.0
tags:
- Comma
- MLX
- EleutherAI
- Llama
- Meta
pipeline_tag: text-generation
size:
- 7440657364
- 7.35GB
datasets: common-pile/comma_v0.1_training_dataset
tasks:
- text-generation
- text-to-text
- text2text-generation
language: en
library_name: mlx
funded_by:
- Mozilla Foundation
- Sutter Hill Ventures
- Natural Sciences and Engineering Research Council of Canada (NSERC-CSE)
hardware_type: 512 AMD MI300A GPUs
get_started_code: uvx --from mlx-lm mlx_lm.generate --model "darkshapes/comma-v0.1-2t-MLX-Q8" --prompt 'Test prompt'
comma-v0.1-2t-MLX-Q8
Comma v0.1-2T is a 7-billion-parameter transformer model trained on openly licensed text from the Common Pile. This repo contains an 8-bit MLX quantization that lowers the memory requirement from 14 GB to 7 GB, improving speed and accessibility at the cost of some accuracy.
MLX is a machine-learning framework for Apple silicon (ARM M-series processors: M1/M2/M3/M4) that runs on the Metal graphics API.
Generation using uv (https://docs.astral.sh/uv/):
uvx --from mlx-lm mlx_lm.generate --model "darkshapes/comma-v0.1-2t-MLX-Q8" --prompt 'Test prompt'
Generation using pip:
pip install mlx-lm
mlx_lm.generate --model "darkshapes/comma-v0.1-2t-MLX-Q8" --prompt 'Test prompt'
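The same model can also be driven from Python through mlx-lm's `load`/`generate` API. A minimal sketch (assumes Apple silicon, `mlx-lm` installed, and that the quantized weights download from this repo on first use):

```python
# Minimal sketch of programmatic generation with mlx-lm.
# Requires Apple silicon and `pip install mlx-lm`; the first call
# downloads the quantized weights and tokenizer from the Hub.
from mlx_lm import load, generate

model, tokenizer = load("darkshapes/comma-v0.1-2t-MLX-Q8")

# max_tokens bounds the length of the completion.
text = generate(model, tokenizer, prompt="Test prompt", max_tokens=128)
print(text)
```

This mirrors the CLI invocations above and is convenient when embedding generation in a larger script.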