Sentient Simulations Plumbob

[🏠Sentient Simulations] | [Discord] | [Patreon]


Legion-V2.1-LLaMa-70B-GPTQ

This repository contains a 4-bit GPTQ-quantized version of the Tarek07/Legion-V2.1-LLaMa-70B model, produced with llm-compressor.

Quantization Settings

| Attribute | Value |
| --- | --- |
| Algorithm | GPTQ |
| Layers | Linear |
| Weight Scheme | W4A16 |
| Group Size | 128 |
| Calibration Dataset | openerotica/erotiquant3 |
| Calibration Sequence Length | 4096 |
| Calibration Samples | 512 |
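W4A16 with group size 128 means the weights are stored as 4-bit integers with one higher-precision scale per group of 128 weights, while activations stay at 16 bits. A minimal sketch of that storage scheme in pure Python (symmetric round-to-nearest for illustration only; GPTQ itself additionally minimizes layer output error during quantization):

```python
import random

def quantize_w4_groupwise(weights, group_size=128):
    """Quantize a flat list of weights to signed 4-bit ints with one
    scale per group (symmetric round-to-nearest, for illustration)."""
    qweights, scales = [], []
    for start in range(0, len(weights), group_size):
        group = weights[start:start + group_size]
        # int4 range is -8..7; use the symmetric part, +/-7.
        scale = max(abs(w) for w in group) / 7 or 1.0
        scales.append(scale)
        qweights.append([max(-8, min(7, round(w / scale))) for w in group])
    return qweights, scales

def dequantize(qweights, scales):
    """Recover approximate weights from 4-bit ints and per-group scales."""
    return [q * s for qgroup, s in zip(qweights, scales) for q in qgroup]

# Round-trip a toy 256-weight row (two groups of 128); the per-weight
# error is bounded by half the group's scale.
random.seed(0)
row = [random.uniform(-0.1, 0.1) for _ in range(256)]
q, s = quantize_w4_groupwise(row)
restored = dequantize(q, s)
max_err = max(abs(a - b) for a, b in zip(row, restored))
```

Smaller group sizes give tighter scales (lower error) at the cost of storing more scale values.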

Dataset Preprocessing

The dataset was preprocessed with the following steps:

  1. Extract and structure the conversation data using role-based templates (SYSTEM, USER, ASSISTANT).
  2. Convert the structured conversations into a tokenized format using the model's tokenizer.
  3. Filter out sequences shorter than 4096 tokens.
  4. Shuffle and select 512 samples for calibration.
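The steps above can be sketched as follows. The helper names are hypothetical, and the whitespace tokenizer is a stand-in: a real run would use the model's actual tokenizer from `transformers`.

```python
import random

ROLE_TEMPLATES = {"system": "SYSTEM: {}", "user": "USER: {}", "assistant": "ASSISTANT: {}"}

def format_conversation(turns):
    """Step 1: structure a conversation with role-based templates."""
    return "\n".join(ROLE_TEMPLATES[role].format(text) for role, text in turns)

def tokenize(text):
    """Step 2: whitespace stand-in; a real run would call the model's
    tokenizer, e.g. tokenizer(text)["input_ids"]."""
    return text.split()

def build_calibration_set(conversations, min_len=4096, num_samples=512, seed=42):
    """Steps 3-4: drop sequences shorter than min_len tokens,
    then shuffle and take num_samples of the rest."""
    tokenized = [tokenize(format_conversation(c)) for c in conversations]
    long_enough = [t for t in tokenized if len(t) >= min_len]
    random.Random(seed).shuffle(long_enough)
    return long_enough[:num_samples]

# Demo with toy thresholds: only the long conversation survives the filter.
convs = [[("user", "hi " * n), ("assistant", "ok")] for n in (2, 50)]
calib = build_calibration_set(convs, min_len=20, num_samples=1)
```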

Quantization Process

View the shell and Python scripts used to quantize this model.
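For reference, the core of such a run with llm-compressor typically looks like the sketch below, using the settings from the table above. This is a sketch under the assumption of the current `llmcompressor` API, not the exact script used, and it requires multi-GPU hardware to actually run.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

MODEL_ID = "Tarek07/Legion-V2.1-LLaMa-70B"

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# W4A16 on all Linear layers except the output head; the W4A16 preset
# uses per-group scales (group size 128 by default).
recipe = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"])

oneshot(
    model=model,
    dataset=calibration_dataset,  # the 512 preprocessed samples from above
    recipe=recipe,
    max_seq_length=4096,
    num_calibration_samples=512,
)

model.save_pretrained("Legion-V2.1-LLaMa-70B-GPTQ", save_compressed=True)
tokenizer.save_pretrained("Legion-V2.1-LLaMa-70B-GPTQ")
```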

Two RTX PRO 6000 GPUs with 565 GB of RAM and 300 GB of disk space were rented on RunPod.

Quantization took approximately 3.5 hours, for a total of $14.32 in compute costs.


