A fine-tuned version of the LLaMA-2-7b model, trained specifically to generate humorous responses. It was optimized with Supervised Fine-Tuning (SFT) on a curated dataset of prompts and completions scraped from a subreddit known for its comedic content. The model is designed to produce witty, contextually relevant, and engaging responses, making it suitable for applications that require humor generation.
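As a minimal usage sketch, the checkpoint should load like any standard Llama-2 causal-LM checkpoint in Hugging Face transformers; the generation settings below are illustrative defaults, not the values used during fine-tuning.

```python
# Hypothetical usage sketch: load the checkpoint with transformers and
# sample a humorous completion. The model ID is taken from this card;
# generation parameters are illustrative, not tuned values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ALEXIOSTER/Humorous_SFT_LLama2_7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so the 7B model fits on a single GPU
    device_map="auto",
)

prompt = "Why did the neural network cross the road?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.9,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```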
Model tree for ALEXIOSTER/Humorous_SFT_LLama2_7b
Base model: meta-llama/Llama-2-7b-hf