# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit the [AutoTrain documentation](https://huggingface.co/docs/autotrain).
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "mrcuddle/Tiny-DarkLlama-Chat"

# Load the tokenizer and model; "auto" picks the available device and precision.
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype="auto",
).eval()

# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"}
]

# Build the prompt with the model's chat template and generate a reply.
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
output_ids = model.generate(input_ids.to(model.device), max_new_tokens=256)

# Decode only the newly generated tokens.
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "Hello! How can I assist you today?"
print(response)
```
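For multi-turn chat, the full conversation history can be passed through the same chat template. A minimal sketch; the follow-up question and sampling settings below are illustrative, not values from this card:

```python
# Continue the conversation by appending the model's reply and a new user turn.
messages += [
    {"role": "assistant", "content": response},
    {"role": "user", "content": "Can you introduce yourself?"},  # hypothetical follow-up
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
output_ids = model.generate(
    input_ids.to(model.device),
    max_new_tokens=256,
    do_sample=True,      # sampling settings are illustrative defaults, not tuned values
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```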
Datasets used in training (a loading sketch follows the list):
- ChaoticNeutrals/Synthetic-Dark-RP
- ChaoticNeutrals/Synthetic-RP
- ChaoticNeutrals/Luminous_Opus
- NobodyExistsOnTheInternet/ToxicQAFinal
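Assuming these datasets are publicly available on the Hugging Face Hub, they can be pulled and inspected with the `datasets` library; the `"train"` split name below is an assumption and may differ per dataset:

```python
from datasets import load_dataset

# Pull one of the training datasets for inspection; the "train" split is assumed.
ds = load_dataset("ChaoticNeutrals/Synthetic-Dark-RP", split="train")
print(ds)      # dataset size and column names
print(ds[0])   # one example record
```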
## Eval
Evaluation run configuration: `huggingface (pretrained=mrcuddle/tiny-darkllama-chat), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 16`
| Tasks          | Version | Filter | n-shot | Metric     |   | Value  |   | Stderr |
|----------------|--------:|--------|-------:|------------|---|-------:|---|-------:|
| hellaswag      |       1 | none   |      0 | acc        | ↑ | 0.4659 | ± | 0.0050 |
|                |         | none   |      0 | acc_norm   | ↑ | 0.6044 | ± | 0.0049 |
| lambada_openai |       1 | none   |      0 | acc        | ↑ | 0.6101 | ± | 0.0068 |
|                |         | none   |      0 | perplexity | ↓ | 5.9720 | ± | 0.1591 |
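The table matches the output format of the EleutherAI lm-evaluation-harness; a minimal sketch for re-running the same tasks through its Python API, assuming lm-eval v0.4+ is installed:

```python
import lm_eval

# Zero-shot evaluation on the same tasks; batch size matches the run above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=mrcuddle/Tiny-DarkLlama-Chat",
    tasks=["hellaswag", "lambada_openai"],
    batch_size=16,
)

# Print the per-task metric dictionaries.
for task, metrics in results["results"].items():
    print(task, metrics)
```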
## Base model
TinyLlama/TinyLlama-1.1B-Chat-v1.0