Description
This model further fine-tunes Locutusque/Hyperion-2.0-Mistral-7B at a higher learning rate to see whether performance would improve; a slight performance gain was observed. See Locutusque/Hyperion-2.0-Mistral-7B's model card for more information. More checkpoints will be released in the future.
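For reference, a minimal inference sketch with Hugging Face Transformers. It assumes the repo id for this checkpoint is Locutusque/Hyperion-2.1-Mistral-7B (inferred from the quant links below) and that the tokenizer ships a chat template; adjust both to match the actual repository.

```python
# Minimal inference sketch. Assumes the repo id Locutusque/Hyperion-2.1-Mistral-7B
# (inferred from the quant links below) and a chat template in the tokenizer config.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Locutusque/Hyperion-2.1-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # use float16 if bf16 is unsupported on your GPU
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain the difference between a list and a tuple in Python."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```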
Disclaimer
This model is very compliant and will respond to almost any request without refusal. If you intend to deploy it at an enterprise level, I would recommend first aligning it with DPO.
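As a starting point, a minimal sketch of DPO alignment with TRL's DPOTrainer. The dataset name is a placeholder for your own preference data (with "prompt", "chosen", and "rejected" columns), the repo id is the same assumption as above, and argument names can differ slightly between TRL versions.

```python
# Minimal DPO alignment sketch using TRL. The dataset path is a placeholder for
# your own preference data with "prompt", "chosen", and "rejected" columns;
# exact argument names may differ slightly between TRL versions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "Locutusque/Hyperion-2.1-Mistral-7B"  # assumed repo id for this checkpoint
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Placeholder: swap in a preference dataset that reflects your refusal policy.
train_dataset = load_dataset("your-org/your-preference-dataset", split="train")

training_args = DPOConfig(
    output_dir="hyperion-2.1-dpo",
    beta=0.1,                      # strength of the KL penalty against the reference model
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,    # named `tokenizer` in older TRL releases
)
trainer.train()
```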
Quants
ExLlamaV2: https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-exl2
GGUF: https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-GGUF
AWQ: https://huggingface.co/solidrust/Hyperion-2.1-Mistral-7B-AWQ
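For local inference from the GGUF quants above, a minimal sketch with llama-cpp-python; the filename is a placeholder for whichever quantization level you download from bartowski's GGUF repo.

```python
# Minimal local-inference sketch with llama-cpp-python against one of the GGUF
# quants linked above. The model_path filename is a placeholder; substitute the
# quantization level you actually downloaded (e.g. Q4_K_M, Q5_K_M, ...).
from llama_cpp import Llama

llm = Llama(
    model_path="Hyperion-2.1-Mistral-7B-Q4_K_M.gguf",  # placeholder filename
    n_ctx=4096,          # context window
    n_gpu_layers=-1,     # offload all layers to GPU; set 0 for CPU-only
)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short docstring for a binary search function."}],
    max_tokens=256,
    temperature=0.7,
)
print(output["choices"][0]["message"]["content"])
```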