# WizardLM Fine-Tuned on the Pilgrims Dataset
This model is a fine-tuned version of TheBloke/wizardLM-7B-HF using QLoRA on a custom dataset designed around spiritual, philosophical, and existential questions.
## Model Description
- Base Model: WizardLM 7B (HF format)
- Fine-tuning Method: QLoRA (Quantized Low-Rank Adaptation)
- Training Data: Custom pilgrims dataset (examples follow the format `Vibe: Atheist\nQuestion: How can I...`)
- Intended Use: Conversational assistant for users exploring personal meaning, spiritual identity, or philosophical reflection.
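The training examples pair a "Vibe" with a "Question", and the usage example below shows the model expects a `#### Human:` prefix at inference time. A minimal helper for assembling that prompt might look like the following (the function name is illustrative, not part of the published card):

```python
def build_prompt(vibe: str, question: str) -> str:
    """Format a pilgrims-style prompt: a '#### Human:' prefix,
    a Vibe line, and a Question line, matching the dataset format."""
    return f"#### Human: Vibe: {vibe}\nQuestion: {question}"

prompt = build_prompt(
    "Atheist",
    "How can I really get to know who I am beyond all the labels and roles I've taken on?",
)
print(prompt)
```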
## Usage Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("chaima01/wizard-pilgrims-finetuned")
tokenizer = AutoTokenizer.from_pretrained("chaima01/wizard-pilgrims-finetuned")

input_text = "#### Human: Vibe: Atheist\nQuestion: How can I really get to know who I am beyond all the labels and roles I’ve taken on?"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)

outputs = model.generate(
    inputs.input_ids,
    max_new_tokens=256,
    temperature=0.7,
    do_sample=True,  # temperature only takes effect when sampling is enabled
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
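Because the model was fine-tuned with QLoRA, it can optionally be loaded in 4-bit to reduce GPU memory, mirroring the training-time quantization. This is a sketch assuming the `bitsandbytes` package and a CUDA GPU are available; it is not part of the published card:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Optional: NF4 4-bit quantization config, as commonly used for QLoRA models.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "chaima01/wizard-pilgrims-finetuned",
    quantization_config=bnb_config,
    device_map="auto",  # place layers automatically across available devices
)
```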