---
license: apache-2.0
base_model: Qwen/Qwen2-7B-Instruct
---

These are a number of conversions of Qwen2-7B-Instruct, made in an attempt to fix the reduced performance that shows up when quantizing. The bf16 versions will NOT work with Apple GPUs, but will work with most CPUs and newer NVIDIA cards (older ones like the 1080 series don't support bf16 inference well). Perplexity benchmarks will come later once an automated suite is written by me or whoever; sorry, I have just been too busy, and doing those properly takes all day.
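If you are unsure whether your hardware can run the bf16 builds, a minimal sketch like the following can help check before downloading (this assumes PyTorch is installed; the helper name is illustrative, not part of this repo):

```python
# Quick hardware check before grabbing a bf16 build.
# Assumes PyTorch is installed; the function name is just an example.
import torch


def bf16_supported() -> bool:
    """Return True if this machine is likely to run bf16 inference natively."""
    if torch.cuda.is_available():
        # Ampere (RTX 30xx) and newer NVIDIA GPUs support bf16 well;
        # Pascal cards such as the GTX 1080 do not.
        return torch.cuda.is_bf16_supported()
    # On CPU, bf16 tensors work in PyTorch (possibly slowly via upcasting).
    return True


if __name__ == "__main__":
    print("bf16 inference supported:", bf16_supported())
```

If this prints `False`, a standard fp16 or GGUF quantization is probably a better fit for your hardware than the bf16 conversions here.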