avemio-digital committed
Commit 982fc71 (verified)
1 Parent(s): 7272494

Update README.md

Files changed (1)
  1. README.md +11 -11
README.md CHANGED
````diff
@@ -1,15 +1,15 @@
 ---
 license: llama3.1
 datasets:
-- avemio/GRAG-CPT-HESSIAN-AI
-- avemio/GRAG-SFT-ShareGPT-HESSIAN-AI
-- avemio/GRAG-ORPO-ShareGPT-HESSIAN-AI
+- avemio/German-RAG-CPT-HESSIAN-AI
+- avemio/German-RAG-SFT-ShareGPT-HESSIAN-AI
+- avemio/German-RAG-ORPO-ShareGPT-HESSIAN-AI
 - VAGOsolutions/SauerkrautLM-Fermented-GER-DPO
 - VAGOsolutions/SauerkrautLM-Fermented-Irrelevance-GER-DPO
 language:
 - en
 - de
-base_model: avemio/GRAG-LLAMA-3.1-8B-MERGED-HESSIAN-AI
+base_model: avemio/German-RAG-LLAMA-3.1-8B-MERGED-HESSIAN-AI
 base_model_relation: merge
 pipeline_tag: question-answering
 tags:
@@ -23,9 +23,9 @@ tags:
 - gguf-my-repo
 ---
 
-# avemio-digital/GRAG-LLAMA-3.1-8B-MERGED-HESSIAN-AI-Q8_0-GGUF
-This model was converted to GGUF format from [`avemio/GRAG-LLAMA-3.1-8B-MERGED-HESSIAN-AI`](https://huggingface.co/avemio/GRAG-LLAMA-3.1-8B-MERGED-HESSIAN-AI) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
-Refer to the [original model card](https://huggingface.co/avemio/GRAG-LLAMA-3.1-8B-MERGED-HESSIAN-AI) for more details on the model.
+# avemio-digital/German-RAG-LLAMA-3.1-8B-MERGED-HESSIAN-AI-Q8_0-GGUF
+This model was converted to GGUF format from [`avemio/German-RAG-LLAMA-3.1-8B-MERGED-HESSIAN-AI`](https://huggingface.co/avemio/German-RAG-LLAMA-3.1-8B-MERGED-HESSIAN-AI) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
+Refer to the [original model card](https://huggingface.co/avemio/German-RAG-LLAMA-3.1-8B-MERGED-HESSIAN-AI) for more details on the model.
 
 ## Use with llama.cpp
 Install llama.cpp through brew (works on Mac and Linux)
@@ -38,12 +38,12 @@ Invoke the llama.cpp server or the CLI.
 
 ### CLI:
 ```bash
-llama-cli --hf-repo avemio-digital/GRAG-LLAMA-3.1-8B-MERGED-HESSIAN-AI-Q8_0-GGUF --hf-file grag-llama-3.1-8b-merged-hessian-ai-q8_0.gguf -p "The meaning to life and the universe is"
+llama-cli --hf-repo avemio-digital/German-RAG-LLAMA-3.1-8B-MERGED-HESSIAN-AI-Q8_0-GGUF --hf-file German-RAG-llama-3.1-8b-merged-hessian-ai-q8_0.gguf -p "The meaning to life and the universe is"
 ```
 
 ### Server:
 ```bash
-llama-server --hf-repo avemio-digital/GRAG-LLAMA-3.1-8B-MERGED-HESSIAN-AI-Q8_0-GGUF --hf-file grag-llama-3.1-8b-merged-hessian-ai-q8_0.gguf -c 2048
+llama-server --hf-repo avemio-digital/German-RAG-LLAMA-3.1-8B-MERGED-HESSIAN-AI-Q8_0-GGUF --hf-file German-RAG-llama-3.1-8b-merged-hessian-ai-q8_0.gguf -c 2048
 ```
 
 Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
@@ -60,9 +60,9 @@ cd llama.cpp && LLAMA_CURL=1 make
 
 Step 3: Run inference through the main binary.
 ```
-./llama-cli --hf-repo avemio-digital/GRAG-LLAMA-3.1-8B-MERGED-HESSIAN-AI-Q8_0-GGUF --hf-file grag-llama-3.1-8b-merged-hessian-ai-q8_0.gguf -p "The meaning to life and the universe is"
+./llama-cli --hf-repo avemio-digital/German-RAG-LLAMA-3.1-8B-MERGED-HESSIAN-AI-Q8_0-GGUF --hf-file German-RAG-llama-3.1-8b-merged-hessian-ai-q8_0.gguf -p "The meaning to life and the universe is"
 ```
 or
 ```
-./llama-server --hf-repo avemio-digital/GRAG-LLAMA-3.1-8B-MERGED-HESSIAN-AI-Q8_0-GGUF --hf-file grag-llama-3.1-8b-merged-hessian-ai-q8_0.gguf -c 2048
+./llama-server --hf-repo avemio-digital/German-RAG-LLAMA-3.1-8B-MERGED-HESSIAN-AI-Q8_0-GGUF --hf-file German-RAG-llama-3.1-8b-merged-hessian-ai-q8_0.gguf -c 2048
 ```
````
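The whole commit is a mechanical identifier rename, GRAG → German-RAG, applied to repo slugs in both their upper-case form (`GRAG-LLAMA-…`) and the lower-case GGUF file-name form (`grag-llama-…`). A minimal Python sketch of that substitution (illustrative only; the actual change was made by editing README.md directly, and `rename_ids` is a hypothetical helper):

```python
# Sketch of the rename this commit performs: map both case variants of the
# old "GRAG" prefix to the new "German-RAG" prefix.
OLD, NEW = "GRAG", "German-RAG"

def rename_ids(text: str) -> str:
    # Upper-case repo slugs (GRAG-LLAMA-...) and lower-case GGUF file
    # names (grag-llama-...) both map to the "German-RAG" prefix.
    return text.replace(OLD, NEW).replace(OLD.lower(), NEW)

before = ("llama-cli --hf-repo avemio-digital/GRAG-LLAMA-3.1-8B-MERGED-HESSIAN-AI-Q8_0-GGUF "
          "--hf-file grag-llama-3.1-8b-merged-hessian-ai-q8_0.gguf")
print(rename_ids(before))
# llama-cli --hf-repo avemio-digital/German-RAG-LLAMA-3.1-8B-MERGED-HESSIAN-AI-Q8_0-GGUF --hf-file German-RAG-llama-3.1-8b-merged-hessian-ai-q8_0.gguf
```

Applied to each `-` line of the diff, this reproduces the corresponding `+` line exactly.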