Xonaz81 committed 93dde67 (verified) · Parent: db587b9 · Update README.md

# Xonaz81/XortronCriminalComputingConfig-mlx-6Bit

Because this model seems promising and no 6-bit version was available, I created one from the full model weights. This is a standard 6-bit MLX quant; no advanced DWQ quants for now, but they are coming in the future. This model, [Xonaz81/XortronCriminalComputingConfig-mlx-6Bit](https://huggingface.co/Xonaz81/XortronCriminalComputingConfig-mlx-6Bit), was converted to MLX format from [darkc0de/XortronCriminalComputingConfig](https://huggingface.co/darkc0de/XortronCriminalComputingConfig) using mlx-lm version **0.22.3**.

## Use with mlx or LM-studio

```bash
pip install mlx-lm