https://huggingface.co/authormist/authormist-originality

#1163
by dylrob1 - opened

Model: https://huggingface.co/authormist/authormist-originality
License: MIT (conversion allowed)

Since the model is so small, I am really only interested in testing F32, F16, and Q8_0; mainly I just need the GGUF. I am doing extensive research into AI detection and evasion, the constant cat-and-mouse game that seems to be going on between the two fields, and what we might expect in the coming years.

Many thanks!

It's queued! :D
We unfortunately don't provide any F32 quants at the moment, but you should get F16 and Q8_0 and all the other quants, both as static and weighted/imatrix quants.
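
For anyone curious, the two flavors are produced roughly like this with llama.cpp's tools (a sketch, not our exact pipeline; the filenames and calibration text are placeholders):

# static quant straight from the F16 GGUF
./llama-quantize authormist-originality.f16.gguf authormist-originality.Q4_K_M.gguf Q4_K_M

# weighted/imatrix quant: first compute an importance matrix over calibration text,
# then feed it to the quantizer
./llama-imatrix -m authormist-originality.f16.gguf -f calibration.txt -o imatrix.dat
./llama-quantize --imatrix imatrix.dat authormist-originality.f16.gguf authormist-originality.i1-Q4_K_M.gguf Q4_K_M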

You can check for progress at http://hf.tst.eu/status.html or regularly check the model
summary page at https://hf.tst.eu/model#authormist-originality-GGUF for quants to appear.

@nicoboss It is possible to create jobs that include f32 already, but I'd be hard pressed to see its usefulness compared to the effort of manually or automatically deciding when to provide one (as most source models are 16-bit).

Interesting, so everything in the source model is in F32.
convert_hf_to_gguf.py without specifying --outtype creates a mixture of F16 and F32.
convert_hf_to_gguf.py with --outtype f32 converts everything to F32.
convert_hf_to_gguf.py with --outtype auto creates a mixture of BF16 and F32.
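
For reference, the invocations look like this (run from a llama.cpp checkout; ./authormist-originality is a local download of the source model):

python convert_hf_to_gguf.py ./authormist-originality --outtype f32 --outfile authormist-originality.f32.gguf
python convert_hf_to_gguf.py ./authormist-originality --outtype f16 --outfile authormist-originality.f16.gguf
# with no --outtype, the script keeps the mixed F16/F32 layout described above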

If possible, could I get the F32 GGUF put onto my summary page for download? This is one of the very few use cases where there might be a statistical difference between F32, F16, and Q8_0. I am curious to see whether AI detection scores have different averages across the different precision levels. I'd be more than happy to keep you guys updated on my findings!
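
For context, the comparison itself would look something like this sketch (assuming llama.cpp's llama-cli and a fixed prompt; scoring the saved outputs with the detectors is a separate step):

# hypothetical loop: generate with each precision level, save outputs for detector scoring
for q in f32 f16 Q8_0; do
  ./llama-cli -m authormist-originality.$q.gguf -p "$PROMPT" -n 256 > out.$q.txt
done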

I will try to upload it. I forgot the exact process for manually uploading a file to mradermacher. If I can't figure it out, I will just upload it to my account.

I figured out how to upload it to mradermacher. It took me a while to realize that the current working directory matters and only relative paths work, but it makes sense in hindsight.

nico1 /tmp/quant# /llmjob/share/bin/hfu authormist-originality-GGUF/authormist-originality.f32.gguf
using hugggingface-cli
nice: cannot set niceness: Permission denied
/llmjob/share/python/lib/python3.11/site-packages/huggingface_hub/commands/upload.py:215: UserWarning: Ignoring `--exclude` since a single file is uploaded.
  warnings.warn("Ignoring `--exclude` since a single file is uploaded.")
Uploading...: 100%|████████████████████████████| 12.3G/12.3G [01:08<00:00, 181MB/s]
https://huggingface.co/mradermacher/authormist-originality-GGUF/blob/main/authormist-originality.f32.gguf

@dylrob1 You should be able to download the F32 GGUF from https://huggingface.co/mradermacher/authormist-originality-GGUF/blob/main/authormist-originality.f32.gguf
I don't think it will appear on the download page or README.md, but we will see.
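
One way to fetch it from the command line (assuming the huggingface_hub CLI is installed):

# downloads just the F32 file into the current directory
huggingface-cli download mradermacher/authormist-originality-GGUF authormist-originality.f32.gguf --local-dir .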

Edit: Oh wow, nice, the download page shows my F32 quant.

THANK YOU!!!!
