---
datasets:
- cerebras/SlimPajama-627B
- HuggingFaceH4/ultrachat_200k
- bigcode/starcoderdata
- HuggingFaceH4/ultrafeedback_binarized
- OEvortex/vortex-mini
- Open-Orca/OpenOrca
language:
- en
metrics:
- speed
library_name: transformers
tags:
- Text-Generation
- Transformers
- HelpingAI
license: other
license_name: hsul
license_link: https://huggingface.co/OEvortex/vortex-3b/raw/main/LICENSE.md
widget:
- text: |
    <|system|>
    You are a chatbot who can be a teacher!</s>
    <|user|>
    Explain the working of AI.</s>
    <|assistant|>
---

**HelpingAI-Lite-1.5T Model Card**

**Datasets used:**
- cerebras/SlimPajama-627B
- HuggingFaceH4/ultrachat_200k
- bigcode/starcoderdata
- HuggingFaceH4/ultrafeedback_binarized
- OEvortex/vortex-mini
- Open-Orca/OpenOrca

**Language:**
- English (en)

**License:**

[HelpingAI Simplified Universal License (HSUL)](https://huggingface.co/OEvortex/vortex-3b/raw/main/LICENSE.md)

**Model Overview:**

HelpingAI-Lite-1.5T is an advanced version of the HelpingAI-Lite model, trained on 1.5 trillion tokens. This extensive training enables the model to provide precise and insightful responses, particularly for coding tasks.

**Usage Example:**

```python
from transformers import pipeline
from accelerate import Accelerator

# Initialize the accelerator to select the best available device
accelerator = Accelerator()

# Initialize the text-generation pipeline
pipe = pipeline("text-generation", model="OEvortex/HelpingAI-Lite-1.5T", device=accelerator.device)

# Define the chat messages
messages = [
    {
        "role": "system",
        "content": "You are interacting with a sophisticated chatbot model optimized for coding tasks!",
    },
    {
        "role": "user",
        "content": "Please generate a Python function that calculates the factorial of a given number.",
    },
]

# Render the messages into a single prompt string using the model's chat template
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Generate a completion
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)

# Print the generated text
print(outputs[0]["generated_text"])
```
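
If you prefer not to rely on `apply_chat_template`, you can construct the prompt string by hand. The sketch below is a minimal example, assuming the `<|system|>` / `<|user|>` / `<|assistant|>` role markers and `</s>` turn terminators shown in the widget prompt at the top of this card:

```python
from transformers import pipeline

# Load the text-generation pipeline (pass device=0 to run on the first GPU)
pipe = pipeline("text-generation", model="OEvortex/HelpingAI-Lite-1.5T")

# Build the prompt manually, mirroring the chat format from the widget example:
# each turn starts with a role marker and ends with the </s> terminator, and the
# trailing <|assistant|> marker cues the model to respond as the assistant.
prompt = (
    "<|system|>\n"
    "You are a chatbot who can be a teacher!</s>\n"
    "<|user|>\n"
    "Explain the working of AI.</s>\n"
    "<|assistant|>\n"
)

# Generate a completion for the hand-built prompt
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```

Both approaches should yield the same behavior as long as the hand-built format matches the tokenizer's chat template; when in doubt, prefer `apply_chat_template`.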