auto-patch README.md
README.md (CHANGED)
@@ -3,6 +3,8 @@ base_model: praeclarumjj3/Qwen-2.5-7B-deepscale-RL
 language:
 - en
 library_name: transformers
+mradermacher:
+  readme_rev: 1
 quantized_by: mradermacher
 ---
 ## About
@@ -15,6 +17,9 @@ quantized_by: mradermacher
 static quants of https://huggingface.co/praeclarumjj3/Qwen-2.5-7B-deepscale-RL

 <!-- provided-files -->
+
+***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen-2.5-7B-deepscale-RL-GGUF).***
+
 weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
 ## Usage

@@ -58,6 +63,6 @@ questions you might have and/or if you want some other model quantized.

 I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
 me use its servers and providing upgrades to my workstation to enable
-this work in my free time.
+this work in my free time.

 <!-- end -->