GGUF

#3
by enelpe - opened

Could you convert and upload a GGUF version of the model?

I assume this won't easily be possible, since the model is set up to work with AA's scaling library for inference (https://github.com/Aleph-Alpha/scaling). I don't know if the config.yml can be converted to a config.json that works with llama.cpp's GGUF conversion pipeline.
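For reference, a minimal sketch of what that mapping might look like, assuming the scaling config.yml uses fairly standard transformer hyperparameter names (the key names and the Llama-like architecture below are guesses, not the actual scaling schema). Note that a config.json alone wouldn't be enough anyway; the weights would also have to be laid out in an HF-compatible format for llama.cpp's converter to pick them up.

```python
# Hypothetical sketch: map a scaling-style config.yml to an HF-style config.json.
# The source keys ("hidden_size", "num_layers", ...) and the architecture name
# are assumptions, not the actual scaling schema.
import json
import yaml  # pip install pyyaml

with open("config.yml") as f:
    scaling_cfg = yaml.safe_load(f)

hf_cfg = {
    "architectures": ["LlamaForCausalLM"],  # only valid if the model is Llama-like
    "hidden_size": scaling_cfg["hidden_size"],
    "num_hidden_layers": scaling_cfg["num_layers"],
    "num_attention_heads": scaling_cfg["num_attention_heads"],
    "vocab_size": scaling_cfg["vocab_size"],
    "max_position_embeddings": scaling_cfg["sequence_length"],
}

with open("config.json", "w") as f:
    json.dump(hf_cfg, f, indent=2)
```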

Apparently they don't want the whole world to be able to test the model. Okay, there is now an HF version.
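If that HF version uses an architecture llama.cpp already knows about, the conversion itself should just be the standard llama.cpp script. A rough sketch with placeholder paths (it will fail if the architecture isn't registered in convert_hf_to_gguf.py):

```python
# Rough sketch: run llama.cpp's convert_hf_to_gguf.py on a local HF checkpoint.
# Paths are placeholders; this only works if the model architecture is one the
# converter already supports.
import subprocess

subprocess.run(
    [
        "python", "llama.cpp/convert_hf_to_gguf.py",
        "path/to/hf-model",              # local dir with config.json + safetensors
        "--outfile", "model-q8_0.gguf",
        "--outtype", "q8_0",             # or f16/bf16, then quantize with llama-quantize
    ],
    check=True,
)
```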

Yes, 4 months and still the same ... no GGUF ... and no other base model that is compatible with llama.cpp

or the Germans don't know how it works ^^
