---
datasets:
- cerebras/SlimPajama-627B
- HuggingFaceH4/ultrachat_200k
- bigcode/starcoderdata
- HuggingFaceH4/ultrafeedback_binarized
- OEvortex/vortex-mini
- Open-Orca/OpenOrca
language:
- en
metrics:
- speed
library_name: transformers
tags:
- Text-Generation
- Transformers
- HelpingAI
license: other
license_name: hsul
license_link: https://huggingface.co/OEvortex/vortex-3b/raw/main/LICENSE.md
widget:
- text: |
    <|system|>
    You are a chatbot who can be a teacher!</s>
    <|user|>
    Explain the working of AI.</s>
    <|assistant|>
---
🌟 **HelpingAI-Lite-1.5T Model Card** 🌟

πŸ“Š **Datasets used:**
- cerebras/SlimPajama-627B
- HuggingFaceH4/ultrachat_200k
- bigcode/starcoderdata
- HuggingFaceH4/ultrafeedback_binarized
- OEvortex/vortex-mini
- Open-Orca/OpenOrca

πŸ—£οΈ **Language:**
- English (en)


πŸ”’ **License:**


HelpingAI Simplified Universal License (HSUL)


🧠 **Model Overview:**
HelpingAI-Lite-1.5T is an advanced version of the HelpingAI-Lite model, trained on a corpus of 1.5 trillion tokens. This extended training enables the model to produce more precise and insightful responses, particularly for coding tasks.

πŸ”§ **Usage Example:**
```python
from transformers import pipeline
from accelerate import Accelerator

# Initialize the accelerator
accelerator = Accelerator()

# Initialize the pipeline
pipe = pipeline("text-generation", model="OEvortex/HelpingAI-Lite-1.5T", device=accelerator.device)

# Define the messages
messages = [
    {
        "role": "system",
        "content": "You are interacting with a sophisticated chatbot model optimized for coding tasks!",
    },
    {
        "role": "user",
        "content": "Please generate a Python function that calculates the factorial of a given number.",
    },
]

# Prepare the prompt
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Generate predictions
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)

# Print the generated text
print(outputs[0]["generated_text"])
```
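
The widget example above shows the model's chat template (`<|system|>` / `<|user|>` / `<|assistant|>` markers with `</s>` separators). As a minimal sketch, assuming that template, a prompt can also be assembled by hand when `apply_chat_template` is not available; the helper name below is illustrative, not part of any library:

```python
# Hypothetical helper that mirrors the chat template shown in the
# widget example: each turn is tagged and terminated with </s>, and
# the prompt ends with the assistant marker to cue generation.
def build_prompt(system: str, user: str) -> str:
    return (
        f"<|system|>\n{system}</s>\n"
        f"<|user|>\n{user}</s>\n"
        f"<|assistant|>\n"
    )

prompt = build_prompt(
    "You are a chatbot who can be a teacher!",
    "Explain the working of AI.",
)
print(prompt)
```

The resulting string can be passed directly to `pipe(prompt, ...)` in place of the `apply_chat_template` call shown above.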