Transformers
GGUF
English
code
hpc
parallel
axonn
mradermacher committed on
Commit 109daf4 · verified · 1 Parent(s): 05268d4

auto-patch README.md

Files changed (1)
README.md +6 -1
README.md CHANGED
@@ -7,6 +7,8 @@ datasets:
 language:
 - en
 library_name: transformers
+mradermacher:
+  readme_rev: 1
 quantized_by: mradermacher
 tags:
 - code
@@ -24,6 +26,9 @@ tags:
 static quants of https://huggingface.co/hpcgroup/hpc-coder-v2-6.7b
 
 <!-- provided-files -->
+
+***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#hpc-coder-v2-6.7b-GGUF).***
+
 weighted/imatrix quants are available at https://huggingface.co/mradermacher/hpc-coder-v2-6.7b-i1-GGUF
 ## Usage
 
@@ -67,6 +72,6 @@ questions you might have and/or if you want some other model quantized.
 
 I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
 me use its servers and providing upgrades to my workstation to enable
-this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
+this work in my free time.
 
 <!-- end -->