Agente-Director-Qwen2-7B-Instruct_CODE_Python-GGUF_Spanish_English_8bit

  • Developed by: Agnuxo
  • License: apache-2.0
  • Finetuned from model: Qwen/Qwen2-7B-Instruct

This model was fine-tuned using Unsloth and Hugging Face's TRL library.
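
Below is a minimal sketch of that fine-tuning setup, assuming Unsloth's `FastLanguageModel` loader together with TRL's `SFTTrainer`. The dataset file, LoRA settings, and training hyperparameters are placeholders, not the actual recipe used for this model.

```python
# Sketch of the Unsloth + TRL fine-tuning flow (hyperparameters are placeholders).
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model (Qwen/Qwen2-7B-Instruct) with Unsloth's fast loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen2-7B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,  # 4-bit for training; the released GGUF is quantized to 8-bit afterwards
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)

# Placeholder dataset with a "text" column of formatted training examples.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()
```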

Model Details

  • Model Parameters: 7070.63 million
  • Model Size: 13.61 GB
  • Quantization: 8-bit quantized
  • Estimated GPU Memory Required: ~13 GB

Note: The actual memory usage may vary depending on the specific hardware and runtime environment.
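
For local inference with the 8-bit GGUF file, a minimal sketch using llama-cpp-python is shown below. The GGUF filename is an assumption; replace it with the actual file in this repository.

```python
# Sketch of running the 8-bit GGUF locally with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Agnuxo/Agente-Director-Qwen2-7B-Instruct_CODE_Python-GGUF_Spanish_English_8bit",
    filename="model-Q8_0.gguf",  # placeholder: use the exact .gguf filename from this repo
    n_ctx=4096,
    n_gpu_layers=-1,             # offload all layers to GPU (~13 GB VRAM as noted above)
)

output = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
    max_tokens=256,
)
print(output["choices"][0]["message"]["content"])
```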

Benchmark Results

This model has been fine-tuned and evaluated on the GLUE MRPC task:

  • Accuracy: 0.6078
  • F1 Score: 0.6981

Figure: GLUE MRPC metrics (accuracy and F1).
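
The sketch below shows how such MRPC metrics can be computed with the `datasets` and `evaluate` libraries, assuming you already have 0/1 paraphrase predictions from the model. `predict_paraphrase` is a hypothetical helper, not part of this repository.

```python
# Sketch of computing accuracy and F1 on the GLUE MRPC validation split.
from datasets import load_dataset
import evaluate

mrpc = load_dataset("glue", "mrpc", split="validation")
metric = evaluate.load("glue", "mrpc")

# `predict_paraphrase` is a hypothetical helper that prompts the model with the
# two sentences and maps its answer to 0 (not paraphrase) or 1 (paraphrase).
predictions = [predict_paraphrase(ex["sentence1"], ex["sentence2"]) for ex in mrpc]

results = metric.compute(predictions=predictions, references=mrpc["label"])
print(results)  # {'accuracy': ..., 'f1': ...}
```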

For more details, visit my GitHub.

Thanks for your interest in this model!
