---
base_model:
- TareksLab/Anathema-V2-LLaMA-70B
library_name: transformers
base_model_relation: quantized
tags:
- exl2
- 4-bit
---

# Quantization

EXL2 Quants by [ArtusDev](https://huggingface.co/ArtusDev).
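
These quants target the ExLlamaV2 backend. As a minimal sketch, assuming the `exllamav2` Python package is installed and the quant has been downloaded locally (the directory name below is a placeholder), loading and prompting could look like:

```python
# Illustrative ExLlamaV2 loading sketch -- adjust the model directory to
# wherever the EXL2 quant was downloaded; this is not an official recipe.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

model_dir = "./Anathema-V2-LLaMA-70B-exl2"  # hypothetical local path

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # allocate the cache as layers load
model.load_autosplit(cache)                # split the 70B across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Hello,", max_new_tokens=64))
```

`load_autosplit` streams the weights in and splits layers across whatever GPUs are visible, which is the practical way to fit a 70B quant.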

# Merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method with [nbeerbower/Llama-3.1-Nemotron-lorablated-70B](https://huggingface.co/nbeerbower/Llama-3.1-Nemotron-lorablated-70B) as the base model.
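
For intuition, SCE forms a task vector (delta from the base) for each donor model, keeps only the highest-variance fraction of its entries, derives per-model fusion weights from what survives, drops sign-conflicting entries, and adds the weighted result back onto the base. The snippet below is a rough, illustrative sketch of that idea for a single weight tensor; it is not mergekit's implementation, and the exact weighting details may differ:

```python
# Simplified sketch of the SCE (Select / Calculate / Erase) merge step for one
# weight tensor; hypothetical helper, not mergekit's actual code.
import torch

def sce_merge_tensor(base: torch.Tensor, models: list[torch.Tensor],
                     select_topk: float = 0.15) -> torch.Tensor:
    deltas = torch.stack([m - base for m in models])          # per-model task vectors
    # Select: keep only the top-k fraction of positions by variance across models
    var = deltas.var(dim=0)
    k = max(1, int(select_topk * var.numel()))
    thresh = var.flatten().topk(k).values.min()
    deltas = deltas * (var >= thresh).to(deltas.dtype)
    # Calculate: per-model fusion weights from the squared magnitude of kept entries
    weights = (deltas ** 2).sum(dim=tuple(range(1, deltas.dim())))
    weights = weights / weights.sum().clamp_min(1e-12)
    # Erase: zero out entries whose sign conflicts with the majority direction
    majority_sign = torch.sign(deltas.sum(dim=0))
    deltas = deltas * (torch.sign(deltas) == majority_sign).to(deltas.dtype)
    # Merge the weighted, filtered task vectors back into the base weights
    merged_delta = (weights.view(-1, *[1] * (deltas.dim() - 1)) * deltas).sum(dim=0)
    return base + merged_delta
```

In the configuration further down, every donor model uses `select_topk: 0.15`, so only roughly the top 15% of delta entries by variance are considered from each model.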

### Models Merged

The following models were included in the merge:

* [TheDrummer/Fallen-Llama-3.3-R1-70B-v1](https://huggingface.co/TheDrummer/Fallen-Llama-3.3-R1-70B-v1)
* [ReadyArt/Forgotten-Safeword-70B-3.6](https://huggingface.co/ReadyArt/Forgotten-Safeword-70B-3.6)
* [ReadyArt/Fallen-Safeword-70B-R1-v4.1](https://huggingface.co/ReadyArt/Fallen-Safeword-70B-R1-v4.1)
* [allura-org/Bigger-Body-70b](https://huggingface.co/allura-org/Bigger-Body-70b)
* [ReadyArt/Fallen-Abomination-70B-R1-v4.1](https://huggingface.co/ReadyArt/Fallen-Abomination-70B-R1-v4.1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: ReadyArt/Fallen-Abomination-70B-R1-v4.1
    parameters:
      select_topk: 0.15
  - model: ReadyArt/Fallen-Safeword-70B-R1-v4.1
    parameters:
      select_topk: 0.15
  - model: TheDrummer/Fallen-Llama-3.3-R1-70B-v1
    parameters:
      select_topk: 0.15
  - model: ReadyArt/Forgotten-Safeword-70B-3.6
    parameters:
      select_topk: 0.15
  - model: allura-org/Bigger-Body-70b
    parameters:
      select_topk: 0.15
merge_method: sce
base_model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
dtype: bfloat16
tokenizer:
  source: base
```
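
To reproduce the merge, the usual route is mergekit's `mergekit-yaml` CLI pointed at a copy of this configuration. The following is a rough sketch of the equivalent Python route, assuming mergekit's documented `MergeConfiguration`/`run_merge` API; the paths are placeholders:

```python
# Illustrative reproduction sketch using mergekit's Python API; the
# "mergekit-yaml config.yaml ./merged-model" CLI is the more common path.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:   # the YAML shown above
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged-model",               # hypothetical output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),      # use a GPU if one is present
        copy_tokenizer=True,                 # place a tokenizer in the output folder
    ),
)
```

Re-running this requires local or cached copies of all six source models (five donors plus the base) and enough disk space for the bfloat16 output.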