Active filters: quark
fxmarty/llama-tiny-testing-quark-indev
fxmarty/llama-tiny-int4-per-group-sym
fxmarty/llama-tiny-w-fp8-a-fp8
fxmarty/llama-tiny-w-fp8-a-fp8-o-fp8
fxmarty/llama-tiny-w-int8-per-tensor
fxmarty/llama-small-int4-per-group-sym-awq
fxmarty/quark-legacy-int8
fxmarty/llama-tiny-w-int8-b-int8-per-tensor
fxmarty/llama-small-int4-per-group-sym-awq-old
amd-quark/llama-tiny-w-int8-per-tensor
amd-quark/llama-tiny-w-int8-b-int8-per-tensor
amd-quark/llama-tiny-w-fp8-a-fp8
amd-quark/llama-tiny-w-fp8-a-fp8-o-fp8
amd-quark/llama-tiny-int4-per-group-sym
amd-quark/llama-small-int4-per-group-sym-awq
amd-quark/quark-legacy-int8
amd/Llama-3.1-8B-Instruct-FP8-KV-Quark-test
amd/Llama-3.1-8B-Instruct-w-int8-a-int8-sym-test
EmbeddedLLM/Llama-3.1-8B-Instruct-w_fp8_per_channel_sym
amd-quark/llama-tiny-fp8-quark-quant-method
aigdat/Qwen2.5-Coder-7B-quantized-ppl-14
aigdat/Qwen2-7B-Instruct_quantized_int4_bfloat16
aigdat/Qwen2.5-1.5B-Instruct_quantized_int4_bfloat16
Davidqian123/granite-3.2-2b-instruct-amd-npu
aigdat/Qwen2.5-0.5B-Instruct-awq-int4-asym-g128-fp16
Davidqian123/llama-2-7b-chat-hf-amd-npu
Davidqian123/small_llama_npu
Davidqian123/small_llama3_npu
Davidqian123/small_granite_npu
superbigtree/Mistral-Nemo-Instruct-2407-FP8