All future releases of the MFANN experiment will now use llama-3 as the base model; I may continue fine-tuning mistral-7b every other release.

This model uses Meta's llama-3 as its base; benchmarks are pending.


Changed the model name to MFANNv0.6 due to a failed benchmark and the need to resubmit.

Edit: due to repeated benchmark failures, I am renaming the model back to MFANNver0.6. The 3b model is also failing benchmarks for some reason, despite the fact that both models run fine on my machine :(


Model tree for netcat420/MFANNv0.6
