# Orpheus 3B 8-bit GPTQ

This is an 8-bit GPTQ quantized version of `baseten/orpheus-3b-0.1-ft`.
## Model Details
- **Base Model:** `baseten/orpheus-3b-0.1-ft`
- **Quantization:** 8-bit GPTQ
- **Group Size:** 128
- **Calibration Dataset:** `canopylabs/zac-sample-dataset` (TTS-specific)
- **Library:** `auto-gptq`
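For reference, a quantization run with the settings above could look like the following. This is a minimal sketch, not the exact script used to produce this checkpoint; in particular, the `"text"` column name for the calibration dataset is an assumption.

```python
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from transformers import AutoTokenizer
from datasets import load_dataset

base_id = "baseten/orpheus-3b-0.1-ft"
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)

# 8-bit GPTQ with group size 128, matching the settings listed above
quantize_config = BaseQuantizeConfig(bits=8, group_size=128, desc_act=False)

# Tokenize a batch of calibration samples from the TTS dataset
# (the "text" column name is an assumption; adjust to the dataset's schema)
dataset = load_dataset("canopylabs/zac-sample-dataset", split="train")
examples = [
    tokenizer(sample["text"], return_tensors="pt")
    for sample in dataset.select(range(min(128, len(dataset))))
]

# Load the full-precision model, quantize against the calibration set, and save
model = AutoGPTQForCausalLM.from_pretrained(
    base_id, quantize_config, trust_remote_code=True
)
model.quantize(examples)
model.save_quantized("orpheus-3b-8bit-gptq", use_safetensors=True)
```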
## Usage
```python
import torch
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

# Load the quantized model
model = AutoGPTQForCausalLM.from_quantized(
    "Hariprasath28/orpheus-3b-8bit-gptq",
    device="cuda:0",  # or "cpu"
    use_triton=False,
    trust_remote_code=True,
)

tokenizer = AutoTokenizer.from_pretrained(
    "Hariprasath28/orpheus-3b-8bit-gptq", trust_remote_code=True
)

# Generate TTS tokens ("tara" is the speaker name in the prompt)
text = "tara: Hello, this is a test of the quantized Orpheus model."
inputs = tokenizer(text, return_tensors="pt").to("cuda:0")

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=100,
        temperature=0.7,
        do_sample=True,
    )

generated = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated)
```
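Note that `generate` returns a sequence of Orpheus's TTS tokens rather than plain text; converting them into an audio waveform requires the audio-codec decoding step from the base model's inference pipeline (the upstream Orpheus release uses the SNAC codec), which is outside the scope of this card.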
## Model Tree

Lineage for `Hariprasath28/orpheus-3b-8bit-gptq`:

- Base model: `meta-llama/Llama-3.2-3B-Instruct`
- Finetuned: `canopylabs/orpheus-3b-0.1-pretrained`
- Finetuned: `baseten/orpheus-3b-0.1-ft`
- Quantized: `Hariprasath28/orpheus-3b-8bit-gptq` (this model)