Uploaded model
- Developed by: Disya
- License: apache-2.0
- Finetuned from model: Disya/Mistral-qwq-12b-merge-4bit
This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
Model tree for Disya/mistral-qwq-12b-sft-test4-lora
- Base model: Disya/Mistral-qwq-12b-merge
- Quantized: Disya/Mistral-qwq-12b-merge-4bit