A fine-tuned version of the LLaMA-2-7b model, trained specifically to generate humorous responses. It was optimized with Supervised Fine-Tuning (SFT) on a dataset of prompts and completions curated for humor, scraped from a subreddit known for its comedic content. The model is designed to understand and produce witty, contextually relevant, and engaging responses, making it suitable for applications that require humor generation.
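
Since the model is a standard LLaMA-2-7b checkpoint after SFT, it can be loaded and prompted like any other causal language model on the Hub. The sketch below is illustrative, not part of the original card: the repo id ALEXIOSTER/Humorous_SFT_LLama2_7b is taken from this page, while the prompt and generation settings (sampling temperature, top-p, token budget) are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ALEXIOSTER/Humorous_SFT_LLama2_7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so the 7B model fits on a single GPU
    device_map="auto",
)

# Hypothetical prompt; the card does not specify a prompt template.
prompt = "Why did the neural network cross the road?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sample a short, witty continuation.
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```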

