medmekk (HF Staff) committed
Commit c068d63 · verified · 1 Parent(s): 98a6390

Update README.md

Files changed (1): README.md (+0, -2)
README.md CHANGED

@@ -111,8 +111,6 @@ For convenience and performance, we have provided `fp8`-quantized model checkpoi
 
 You can use the Qwen3-32B-FP8 model with several inference frameworks, including `transformers`, `sglang`, and `vllm`, as the original bfloat16 model.
 However, please pay attention to the following known issues:
-- `transformers`:
-  - there are currently issues with the "fine-grained fp8" method in `transformers` for distributed inference. You may need to set the environment variable `CUDA_LAUNCH_BLOCKING=1` if multiple devices are used in inference.
 
 ## Switching Between Thinking and Non-Thinking Mode
 
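For context, the lines removed by this commit described a workaround for distributed inference with the "fine-grained fp8" method in `transformers`. Below is a minimal sketch of how that workaround would be applied, assuming the `Qwen/Qwen3-32B-FP8` checkpoint and a multi-GPU setup; the prompt and generation parameters are illustrative and not taken from this README.

```python
import os

# Workaround from the removed README note: serialize CUDA kernel launches
# when running fine-grained fp8 inference across multiple devices.
# This must be set before any CUDA context is created.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-32B-FP8"  # FP8 checkpoint referenced in the README

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",  # picks up the fp8 quantization config from the checkpoint
    device_map="auto",   # shards the model across all visible GPUs
)

# Illustrative prompt; not from this README.
prompt = "Give me a short introduction to large language models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:],
                       skip_special_tokens=True))
```

Setting `CUDA_LAUNCH_BLOCKING=1` trades throughput for stability, since every kernel launch becomes synchronous; the removed note recommended it only when multiple devices are used for inference.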