Cebtenzzre committed · Commit 8a7e7cb · 1 Parent(s): 064e143

update README

Files changed (1): README.md (+8 -2)
README.md CHANGED
@@ -127,8 +127,14 @@ This model can be used with the [llama.cpp server](https://github.com/ggml-org/l
  
  Embedding text with `nomic-embed-text` requires task instruction prefixes at the beginning of each string.
  
- For example, the code below shows how to use the `search_query` prefix to embed user questions, e.g. in a RAG application:
+ For example, the code below shows how to use the `search_query` prefix to embed user questions, e.g. in a RAG application.
  
+ Start a llama.cpp server:
+ ```
+ llama-server -m nomic-embed-text-v2-moe.bf16.gguf --embeddings
+ ```
+
+ And run this code:
  ```python
  import requests
  
@@ -149,7 +155,7 @@ for d, e in zip(docs, docs_embed):
  print(f'similarity {dot(query_embed, e):.2f}: {d!r}')
  ```
  
- Output:
+ You should see output similar to this:
  ```
  query: '跟我讲讲嵌入'
  similarity 0.48: '嵌入很酷'
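
For context beyond the truncated diff hunks, here is a minimal, self-contained sketch of the workflow the changed README section describes: start `llama-server` with `--embeddings` as shown above, then embed a prefixed query and prefixed documents over HTTP. The endpoint (`/v1/embeddings`, llama.cpp's OpenAI-compatible route), the port `8080`, and the `embed` helper are assumptions for illustration; only `dot`, `query_embed`, `docs_embed`, and the example strings appear in the diff itself.

```python
import requests

SERVER = "http://localhost:8080"  # assumed default llama-server address/port


def embed(texts):
    # Assumes llama.cpp's OpenAI-compatible embeddings route; adjust the URL
    # if your build exposes a different endpoint.
    resp = requests.post(f"{SERVER}/v1/embeddings", json={"input": texts})
    resp.raise_for_status()
    return [item["embedding"] for item in resp.json()["data"]]


def dot(a, b):
    # Plain dot product, matching the similarity call in the README snippet.
    return sum(x * y for x, y in zip(a, b))


# Task instruction prefixes: `search_query: ` for questions,
# `search_document: ` for the passages being searched.
query = "跟我讲讲嵌入"  # "tell me about embeddings"
docs = ["嵌入很酷"]      # "embeddings are cool"

query_embed = embed([f"search_query: {query}"])[0]
docs_embed = embed([f"search_document: {d}" for d in docs])

print(f"query: {query!r}")
for d, e in zip(docs, docs_embed):
    print(f"similarity {dot(query_embed, e):.2f}: {d!r}")
```

The prefixes are not optional decoration: nomic-embed models are trained with task instructions, so embedding a question without `search_query: ` (or a passage without its document prefix) tends to degrade retrieval quality.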