
LoRA Prabhupada Model

This directory contains the LoRA (Low-Rank Adaptation) adapter files for a model fine-tuned on the Prabhupada Dataset. This adapter is designed to be merged with a base Llama 2 model to enhance its ability to generate text and answer questions based on the teachings and writings of A.C. Bhaktivedanta Swami Prabhupada.

Model Description

The model is a LoRA adapter, meaning it contains only the necessary weights to adapt a pre-trained large language model (like Llama 2) to a specific domain (in this case, the works of Srila Prabhupada). This approach significantly reduces the size of the fine-tuned model and makes it more efficient to store and deploy.
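As a rough illustration of the idea (a toy sketch, not the actual training code; all dimensions and values below are made up), a LoRA update keeps the pretrained weight frozen and trains only two small low-rank factors, which is why the adapter file is a small fraction of the full model:

import torch

# Toy LoRA illustration: the full weight W stays frozen; only the low-rank
# factors A and B (rank r) are trained and stored in the adapter.
d_in, d_out, r = 4096, 4096, 16
alpha = 32                         # LoRA scaling factor

W = torch.randn(d_out, d_in)       # frozen pretrained weight (not saved in the adapter)
A = torch.randn(r, d_in) * 0.01    # trainable low-rank factor
B = torch.zeros(d_out, r)          # trainable low-rank factor, initialised to zero

x = torch.randn(d_in)
y = W @ x + (alpha / r) * (B @ (A @ x))  # adapted forward pass: base output + low-rank update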

Training Details

This LoRA model was fine-tuned using the prabhupada_dataset.jsonl dataset, which consists of texts from various works by A.C. Bhaktivedanta Swami Prabhupada. The training process aimed to teach the base model to generate responses consistent with the style and content of these texts.

Key training parameters included (a configuration sketch follows the list):

  • Base Model: Llama 2 (or a compatible variant)
  • PEFT Method: LoRA
  • Quantization: 4-bit (int4)
  • Epochs: 3
  • Learning Rate: 0.00003
  • LoRA R: 16
  • LoRA Alpha: 32
  • Gradient Accumulation Steps: 4
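
The exact AutoTrain configuration is not included in this repository. As a hedged sketch, roughly equivalent settings expressed with the transformers and peft APIs might look like the following (the target modules are an assumption, not taken from this repo):

from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# 4-bit (int4) quantization of the frozen base model
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")

# LoRA hyperparameters matching the list above; target_modules is an assumption
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="./huggingface_autotrain/model",
    num_train_epochs=3,
    learning_rate=3e-5,
    gradient_accumulation_steps=4,
)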

Usage

To use this model, you will need to load a compatible base Llama 2 model and then merge these LoRA adapter weights with it. The resulting merged model can then be used for inference, such as generating text, summarizing documents, or answering questions related to the Prabhupada texts.

Example (conceptual):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer
model_name = "meta-llama/Llama-2-7b-hf"  # Example base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # half precision keeps memory usage manageable
    device_map="auto",          # requires the accelerate package
)

# Load the LoRA adapter and merge its weights into the base model
lora_model_path = "./huggingface_autotrain/model"
model = PeftModel.from_pretrained(model, lora_model_path)
model = model.merge_and_unload()

# Now you can use the 'model' for inference
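
For example, once the adapter is merged as above, a question can be answered with a standard generate call. The prompt below is purely illustrative; the best results will come from matching whatever prompt template was used during fine-tuning:

# Hypothetical prompt; adjust to match the format used during fine-tuning
prompt = "What does Srila Prabhupada say about the practice of bhakti-yoga?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))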

Files in this Directory

  • adapter_config.json: Configuration for the LoRA adapter (an illustrative example follows this list).
  • adapter_model.safetensors: The LoRA adapter weights.
  • special_tokens_map.json, tokenizer.json, tokenizer_config.json, vocab.json, merges.txt: Tokenizer files necessary for processing text with the model.
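
For reference, a PEFT adapter_config.json records the adapter hyperparameters. A simplified example consistent with the training settings above might look like the following (the field values, base model path, and target modules here are illustrative, not copied from the actual file):

{
  "peft_type": "LORA",
  "base_model_name_or_path": "meta-llama/Llama-2-7b-hf",
  "task_type": "CAUSAL_LM",
  "r": 16,
  "lora_alpha": 32,
  "target_modules": ["q_proj", "v_proj"]
}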

License

This LoRA model is derived from a base Llama 2 model and fine-tuned on the Prabhupada Dataset. Please refer to the licenses of the base model and the dataset for comprehensive licensing information. The use of this model for research and non-commercial purposes is generally permissible, but commercial use may require further consideration.
