---
base_model: Nekochu/Luminia-13B-v3
datasets:
- Nekochu/discord-unstable-diffusion-SD-prompts
- glaiveai/glaive-function-calling-v2
- TIGER-Lab/MathInstruct
- Open-Orca/SlimOrca
- GAIR/lima
- sahil2801/CodeAlpaca-20k
- garage-bAInd/Open-Platypus
language:
- en
library_name: transformers
license: apache-2.0
model_creator: Nekochu
model_name: Luminia 13B v3
model_type: llama2
prompt_template: >-
Below is an instruction that describes a task. Write a response that
appropriately completes the request. ### Instruction: {Instruction} {summary}
### input: {category} ### Response: {prompt}
quantized_by: mradermacher
tags:
- llama-factory
- lora
- generated_from_trainer
- llama2
- llama
- instruct
- finetune
- gpt4
- synthetic data
- stable diffusion
- alpaca
- llm
---

## About
static quants of https://huggingface.co/Nekochu/Luminia-13B-v3
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including how to concatenate multi-part files.
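As a concrete starting point, here is a minimal sketch that downloads one of the GGUF files with `huggingface_hub` and runs it through `llama-cpp-python`, using the Alpaca-style prompt template from the metadata above. The repository id, quant filename, and the filled-in instruction are assumptions for illustration; substitute whichever quant you actually pick.

```python
# Minimal sketch, assuming the quants live in a repo named
# "mradermacher/Luminia-13B-v3-GGUF" and that a Q4_K_M file exists
# under the filename below -- check the repo's file list for the real names.
# If a quant is split into *.part1ofN files, concatenate the parts in order
# into a single .gguf before loading.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

REPO_ID = "mradermacher/Luminia-13B-v3-GGUF"   # assumed repo id
FILENAME = "Luminia-13B-v3.Q4_K_M.gguf"        # assumed quant filename

# Fetch the quant into the local Hugging Face cache and load it.
model_path = hf_hub_download(repo_id=REPO_ID, filename=FILENAME)
llm = Llama(model_path=model_path, n_ctx=4096)

# Alpaca-style template from the model card metadata, filled with an
# example instruction; the placeholders {Instruction}/{summary}/{category}
# map to your own task description, summary, and category.
prompt = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request. "
    "### Instruction: Create a stable diffusion prompt for a tranquil "
    "mountain lake at dawn. "
    "### input: landscape ### Response:"
)

out = llm(prompt, max_tokens=200, stop=["###"])
print(out["choices"][0]["text"])
```

Any other llama.cpp-based runtime can load the same file; the Python binding is only one option.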
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):
And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
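To compare the provided quants by actual file size before downloading, you can query the repository's file metadata. The sketch below is illustrative only: the repo id is an assumption based on the usual naming for these uploads, and IQ quants can be spotted by the "IQ" in their filenames.

```python
# Minimal sketch: list the GGUF quants in the (assumed) quant repository,
# sorted by file size, so you can pick one that fits your hardware.
from huggingface_hub import HfApi

info = HfApi().model_info("mradermacher/Luminia-13B-v3-GGUF",  # assumed repo id
                          files_metadata=True)

gguf = [s for s in info.siblings if s.rfilename.endswith(".gguf")]
for s in sorted(gguf, key=lambda f: f.size or 0):
    print(f"{(s.size or 0) / 1e9:6.2f} GB  {s.rfilename}")
```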
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.