mradermacher committed on
Commit ecd1e4c · verified · 1 Parent(s): 268912b

auto-patch README.md

Files changed (1)
  1. README.md +6 -1
README.md CHANGED
@@ -5,6 +5,8 @@ language:
 - ko
 library_name: transformers
 license: llama3.1
+mradermacher:
+  readme_rev: 1
 quantized_by: mradermacher
 tags:
 - llama-3.1
@@ -21,6 +23,9 @@ tags:
 static quants of https://huggingface.co/lemon-mint/Llama-VARCO-8B-Instruct-LLaMAfied
 
 <!-- provided-files -->
+
+***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama-VARCO-8B-Instruct-LLaMAfied-GGUF).***
+
 weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-VARCO-8B-Instruct-LLaMAfied-i1-GGUF
 ## Usage
 
@@ -64,6 +69,6 @@ questions you might have and/or if you want some other model quantized.
 
 I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
 me use its servers and providing upgrades to my workstation to enable
-this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
+this work in my free time.
 
 <!-- end -->