Cebtenzzre committed
Commit 0188c9b · 1 Parent(s): 393a6bc
Files changed (1): README.md (+2 -10)
README.md CHANGED
@@ -14,19 +14,13 @@ tags:
 - sentence-similarity
 ---
 
-***
-**Note**: For compatiblity with current llama.cpp, please download the files published on 2/15/2024. The files originally published here will fail to load.
-***
-
-<br/>
-
 # nomic-embed-text-v1.5 - GGUF
 
 Original model: [nomic-embed-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5)
 
 ## Usage
 
-Embedding text with `nomic-embed-text` requires task instruction prefixes at the beginning of each string.
+Embedding text with `nomic-embed-text` requires task instruction prefixes at the beginning of each string.
 
 For example, the code below shows how to use the `search_query` prefix to embed user questions, e.g. in a RAG application.
 
@@ -36,9 +30,7 @@ To see the full set of task instructions available & how they are designed to be
 
 This repo contains llama.cpp-compatible files for [nomic-embed-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5) in GGUF format.
 
-llama.cpp will default to 2048 tokens of context with these files. To use the full 8192 tokens that Nomic Embed is benchmarked on, you will have to choose a context extension method. The original model uses Dynamic NTK-Aware RoPE scaling, but that is not currently available in llama.cpp. A combination of YaRN and linear scaling is an acceptable substitute.
-
-These files were converted and quantized with llama.cpp [PR 5500](https://github.com/ggerganov/llama.cpp/pull/5500), commit [34aa045de](https://github.com/ggerganov/llama.cpp/pull/5500/commits/34aa045de44271ff7ad42858c75739303b8dc6eb).
+llama.cpp will default to 2048 tokens of context with these files. For the full 8192 token context length, you will have to choose a context extension method. The 🤗 Transformers model uses Dynamic NTK-Aware RoPE scaling, but that is not currently available in llama.cpp.
 
 ## Example `llama.cpp` Command
 
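The README text in this diff says every string embedded with `nomic-embed-text` needs a task instruction prefix (e.g. `search_query` for user questions in a RAG application). As a rough sketch of what that prefixing looks like before the strings are handed to a llama.cpp embedding call — the exact `prefix: text` spelling and the set of prefix names here are assumptions taken from the nomic-embed model card, not from this diff:

```python
# Sketch: prepend a nomic-embed task instruction prefix to each string
# before embedding. The "task: text" format and the prefix names below
# are assumptions; consult the nomic-embed-text-v1.5 model card.

TASK_PREFIXES = {"search_query", "search_document", "classification", "clustering"}

def with_prefix(task: str, text: str) -> str:
    """Return `text` with a task instruction prefix prepended."""
    if task not in TASK_PREFIXES:
        raise ValueError(f"unknown task prefix: {task}")
    return f"{task}: {text}"

# In a RAG application: queries get one prefix, indexed documents another.
query = with_prefix("search_query", "What is GGUF?")
doc = with_prefix("search_document", "GGUF is a file format used by llama.cpp.")
```

The resulting strings would then be passed as-is to whatever llama.cpp embedding interface you use; the prefix is part of the input text, not a separate parameter.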