Probably don't use this model; I'm just tinkering. It's an attempt at a multi-turn, multi-speaker model trained on /r/wallstreetbets data, which you can find here: https://huggingface.co/datasets/Sentdex/WSB-003.005

# Usage via PEFT: https://huggingface.co/docs/peft/quicktour

from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
import torch

# Loads the LoRA adapter and automatically pulls in the base model it was trained against.
model = AutoPeftModelForCausalLM.from_pretrained("Sentdex/Walls1337bot-Llama2-7B-003.005.5000")
# The adapter ships no tokenizer of its own, so use the base model's.
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-chat-hf")

model = model.to("cuda")
model.eval()

prompt = "Your text here."
# Conversations are formatted with speaker headers; the trailing bot header
# cues the model to generate its reply.
formatted_prompt = f"### BEGIN CONVERSATION ###\n\n## Speaker_0: ##\n{prompt}\n\n## Walls1337bot: ##\n"

inputs = tokenizer(formatted_prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(input_ids=inputs["input_ids"].to("cuda"), max_new_tokens=128)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
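
Since the model is multi-turn and multi-speaker, you can extend the same header format to carry a whole conversation. Here's a minimal sketch, assuming the `## Speaker_N: ##` / `## Walls1337bot: ##` headers generalize the way the single-turn example above suggests (check the linked dataset for the exact training format); `format_conversation` is just an illustrative helper, not part of any library:

# ASSUMPTION: the speaker-header format generalizes from the single-turn
# example above; see the linked WSB dataset for the exact training layout.
def format_conversation(turns):
    """turns: list of (speaker, text) tuples, e.g. [("Speaker_0", "hi")]."""
    body = "\n\n".join(f"## {speaker}: ##\n{text}" for speaker, text in turns)
    # End with the bot's header so generation continues as its reply.
    return f"### BEGIN CONVERSATION ###\n\n{body}\n\n## Walls1337bot: ##\n"

history = [("Speaker_0", "What do you think of my puts?")]
inputs = tokenizer(format_conversation(history), return_tensors="pt").to("cuda")
with torch.no_grad():
    outputs = model.generate(input_ids=inputs["input_ids"], max_new_tokens=128)

# Slice off the prompt tokens so we keep only the newly generated reply,
# which can then be appended to the history for the next round.
reply = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
history.append(("Walls1337bot", reply))
print(reply)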