---
license: apache-2.0
datasets:
- neural-bridge/rag-full-20000
language:
- en
pipeline_tag: question-answering
tags:
- retrieval-augmented-generation
---

# **Rago v2 13B**

**Rago v2 13B is a [Llama 2 13B](https://huggingface.co/meta-llama/Llama-2-13b-hf)-based retrieval-augmented-generation-optimized model built by [Neural Bridge AI](https://www.neuralbridge.ai/) and trained on the [RAG Full Dataset 20000](https://huggingface.co/datasets/neural-bridge/rag-full-20000). It is available under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0.html).**

## **Model Details**

Rago v2 13B is a retrieval-augmented-generation-optimized (RAGO) model: it augments a large language model with an external, authoritative knowledge base (context) at generation time. Grounding responses in retrieved context significantly improves the model's ability to produce relevant, accurate, and context-specific output for specialized domains or internal data without retraining. This addresses key weaknesses of large language models (LLMs), such as unpredictable output, reliance on potentially outdated training data, and the propagation of incorrect information, thereby improving user trust in AI applications. Rago v2 13B builds on the [Llama 2 13B](https://huggingface.co/meta-llama/Llama-2-13b-hf) model and is optimized for retrieval-augmented generation, making it particularly effective at contextually aware response generation.

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "neural-bridge/Rago-v2-13b"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Girafatron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
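Since the model is optimized for retrieval-augmented generation, a retrieved context passage would normally be embedded in the prompt ahead of the question. The exact prompt template the model was trained on is not documented on this card, so the context/question layout below (and the `build_rag_prompt` helper name) is an assumption; adapt it to the format used in the RAG Full Dataset 20000 if it differs.

```python
# Sketch: composing a retrieval-augmented prompt. The context/question
# layout here is an assumption, not the documented training template.

def build_rag_prompt(context: str, question: str) -> str:
    """Place a retrieved context passage ahead of the user question."""
    return f"Context: {context}\n\nQuestion: {question}\nAnswer:"

prompt = build_rag_prompt(
    context="Giraffes are the tallest living terrestrial animals.",
    question="What is notable about the height of giraffes?",
)
print(prompt)
```

The resulting string can then be passed to the `pipeline` call from the snippet above in place of the plain prompt.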