---
base_model:
- unsloth/Mistral-Nemo-Instruct-2407
- yamatazen/LorablatedStock-12B
library_name: peft
tags:
- mergekit
- peft
---
# LorablatedStock-12B-LoRA-Rank128

This is a LoRA adapter extracted from a language model using [mergekit](https://github.com/arcee-ai/mergekit).

## LoRA Details

This LoRA adapter was extracted from [yamatazen/LorablatedStock-12B](https://huggingface.co/yamatazen/LorablatedStock-12B) and uses [unsloth/Mistral-Nemo-Instruct-2407](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407) as its base model.

### Parameters

The following command was used to extract this LoRA adapter:
```sh
C:\Users\yamat\AppData\Local\Programs\Python\Python312\Scripts\mergekit-extract-lora --model yamatazen/LorablatedStock-12B --base-model unsloth/Mistral-Nemo-Instruct-2407 --out-path LorablatedStock-12B-LoRA-Rank128 --max-rank 128
```
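Conceptually, LoRA extraction approximates the weight delta between the fine-tuned model and the base model as a low-rank product, truncated at `--max-rank`. A minimal NumPy sketch of that idea (toy matrix sizes, not the actual model dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for one base weight matrix and its fine-tuned counterpart.
d_out, d_in, max_rank = 64, 48, 8
base_w = rng.standard_normal((d_out, d_in))
# Make the fine-tuned weight differ by a genuinely low-rank update,
# so truncated-SVD extraction recovers it exactly.
true_b = rng.standard_normal((d_out, max_rank))
true_a = rng.standard_normal((max_rank, d_in))
tuned_w = base_w + true_b @ true_a

# Extraction: SVD of the weight delta, keeping only the top max_rank components.
delta = tuned_w - base_w
u, s, vt = np.linalg.svd(delta, full_matrices=False)
lora_b = u[:, :max_rank] * s[:max_rank]   # shape (d_out, max_rank)
lora_a = vt[:max_rank, :]                 # shape (max_rank, d_in)

# Applying the adapter reconstructs the fine-tuned weight
# (exactly here, since the true delta's rank does not exceed max_rank).
recon = base_w + lora_b @ lora_a
print(np.allclose(recon, tuned_w))  # True
```

With a real model the delta is generally full-rank, so `--max-rank 128` yields the best rank-128 approximation rather than an exact reconstruction.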