---
base_model:
- RoadToNowhere/Qwen2.5-QwQ-35B-Eureka-Cubed-abliterated-uncensored-w8a8
---

# RoadToNowhere/Qwen2.5-QwQ-35B-Eureka-Cubed-abliterated-uncensored-w8a8 (Quantized)

## Description

This model is a quantized version of `RoadToNowhere/Qwen2.5-QwQ-35B-Eureka-Cubed-abliterated-uncensored-w8a8`, produced with torchao's int8_weight_only quantization.

## Quantization Details

- **Quantization Type**: int8_weight_only
- **Group Size**: None

## Usage

You can load this model directly from the Hugging Face Hub:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# AutoModelForCausalLM (rather than the headless AutoModel) is the
# appropriate class for a Qwen2.5 text-generation model.
model = AutoModelForCausalLM.from_pretrained("RoadToNowhere/Qwen2.5-QwQ-35B-Eureka-Cubed-abliterated-uncensored-w8a8")
tokenizer = AutoTokenizer.from_pretrained("RoadToNowhere/Qwen2.5-QwQ-35B-Eureka-Cubed-abliterated-uncensored-w8a8")
```