Astra-v1-12B

Astra-v1-12B is a fine-tuned version of the base model Mistral-Nemo-Base-2407, developed for general-purpose natural language processing tasks. It was fine-tuned to replicate the quality and style of the Claude 3 Sonnet and Opus models.

Model Description

Astra-v1-12B is a general-purpose transformer-based language model fine-tuned for instruction-following tasks. The fine-tuning was designed to match the high-quality generation of the Claude 3 Sonnet and Opus models, and the model is optimized for tasks such as text generation, summarization, and question answering.

Uses

Direct Use

Astra-v1-12B can be used directly for a wide range of NLP tasks, including:

  • Text generation
  • Summarization
  • Question answering
  • Dialogue systems

Out-of-Scope Use

Astra-v1-12B is not intended for real-time decision-making in critical applications, nor for generating harmful or biased content.

How to Get Started with the Quantized Model

To run the quantized version of the model, you can use KoboldCPP, which allows you to run quantized GGUF models locally.
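As a minimal sketch, a downloaded GGUF file can be launched with the KoboldCPP binary from the command line. The filename below is illustrative; substitute the quantization you actually downloaded (e.g. 4-bit, 5-bit, 6-bit, or 8-bit):

```shell
# Launch KoboldCPP with a local GGUF file (filename is illustrative).
# --contextsize sets the context window; --port selects the local server port.
./koboldcpp --model Astra-v1-12B.Q4_K_M.gguf --contextsize 8192 --port 5001
```

KoboldCPP then serves a local web UI (by default at http://localhost:5001) where you can interact with the model.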

I encourage you to provide feedback on the model's performance. If you'd like to create your own quantizations, feel free to do so and let me know how it works for you!

Quantization Details

  • Format: GGUF
  • Model size: 12.2B params
  • Architecture: llama
  • Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit

Model tree for P0x0/Astra-v1-12B-GGUF

  • Fine-tuned model: P0x0/Astra-v1-12B
  • Quantized: this model