dolly-3b-lora (Fine-tuned)
This model is a fine-tuned version of the Dolly V2 3B language model, adapted with Parameter-Efficient Fine-Tuning (PEFT) using Low-Rank Adaptation (LoRA). It was trained on the LaMini-instruction dataset to improve its ability to follow instructions and generate coherent responses across a range of tasks.
Model Details
Model Description
This is a fine-tuned version of the databricks/dolly-v2-3b model, adapted using LoRA on the LaMini-instruction dataset. The model is designed for instruction-following tasks, leveraging LoRA's efficiency to fine-tune approximately 2.93% of the total parameters while maintaining performance. It supports text generation and has been optimized for GPU inference with 8-bit quantization, with a fallback to CPU if needed.
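As a rough sketch of that loading strategy (not necessarily the exact configuration used for this release), the base model could be loaded in 8-bit on a GPU via bitsandbytes, falling back to full-precision CPU loading when no GPU is available; the quantization settings below are assumptions.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

base_model_name = "databricks/dolly-v2-3b"
if torch.cuda.is_available():
    # Assumed 8-bit setup; requires the bitsandbytes package.
    base_model = AutoModelForCausalLM.from_pretrained(
        base_model_name,
        quantization_config=BitsAndBytesConfig(load_in_8bit=True),
        device_map="auto",
    )
else:
    # CPU fallback: 8-bit kernels need a GPU, so load in full precision instead.
    base_model = AutoModelForCausalLM.from_pretrained(base_model_name, torch_dtype=torch.float32)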
- Developed by: avinashhm
- Shared by: avinashhm
- Model type: Causal Language Model
- Language(s) (NLP): English
- License: Apache-2.0
- Fine-tuned from model: databricks/dolly-v2-3b
Model Sources
- Repository: https://huggingface.co/avinashhm/dolly-3b-lora
Uses
Direct Use
The model is intended for direct use in text generation tasks, particularly for instruction-following scenarios such as answering questions, generating lists, or writing short narratives. It can be used by developers, researchers, or hobbyists working on natural language processing applications.
Downstream Use
The model can be further fine-tuned for specific tasks, such as chatbots, virtual assistants, or specialized text generation applications. It can be integrated into larger ecosystems requiring instruction-based text generation.
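As an illustration only, the sketch below shows one way such further fine-tuning might be set up with the Hugging Face Trainer: the published adapter is reloaded with trainable weights and updated on a stand-in dataset. The dataset contents, output path, and hyperparameters are placeholders, not values used for this model.

from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import PeftModel

base_model_name = "databricks/dolly-v2-3b"
peft_model_name = "avinashhm/dolly-3b-lora"

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
tokenizer.pad_token = tokenizer.eos_token

base_model = AutoModelForCausalLM.from_pretrained(base_model_name)
# Reload the published adapter with trainable weights so training can continue from it.
model = PeftModel.from_pretrained(base_model, peft_model_name, is_trainable=True)

# Tiny in-memory dataset standing in for a real instruction/response corpus.
template = ("Below is an instruction that describes a task. "
            "Write a response that appropriately completes the request. "
            "Instruction: {instruction}\n Response: {response}")
raw = Dataset.from_list([
    {"instruction": "Name two primary colors.", "response": "Red and blue."},
])
tokenized = raw.map(
    lambda ex: tokenizer(template.format(**ex), truncation=True, max_length=512),
    remove_columns=raw.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="dolly-3b-lora-continued",  # placeholder path
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("dolly-3b-lora-continued")

In practice the base model would typically be loaded in reduced precision or 8-bit (as in the getting-started example below) and trained on far more data than this single stand-in record.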
Out-of-Scope Use
The model is not designed for real-time, safety-critical applications or tasks requiring factual accuracy without verification, as it may generate incorrect or biased responses. It should not be used for malicious purposes, such as generating harmful content or misinformation.
Bias, Risks, and Limitations
The model inherits biases from the LaMini-instruction dataset and the base Dolly V2 3B model. It may produce biased, incomplete, or factually incorrect responses, particularly on sensitive topics. Performance is constrained by the small fine-tuning dataset (200 samples) and the LoRA configuration, so the model may not generalize well to all instruction types, and responses to complex tasks may lack depth or coherence given the limited training data and epochs.
Recommendations
Users should verify outputs for accuracy and appropriateness, especially in sensitive applications. Further fine-tuning with a larger, more diverse dataset could improve performance and reduce biases. Caution is advised when deploying in public-facing applications to avoid unintended consequences from biased or harmful outputs.
How to Get Started with the Model
Use the code below to get started with the model:
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
from peft import PeftModel
# Model names
base_model_name = "databricks/dolly-v2-3b"
peft_model_name = "avinashhm/dolly-3b-lora"
# Load Tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
# Load Base Model
base_model = AutoModelForCausalLM.from_pretrained(
base_model_name,
torch_dtype=torch.float16,
device_map="auto"
)
# Load PEFT (LoRA) Adapter
model = PeftModel.from_pretrained(
base_model,
peft_model_name,
torch_dtype=torch.float16
)
# Merge adapter weights into base model (optional, improves speed)
model = model.merge_and_unload()
# Define prompt template
prompt_template = """Below is an instruction that describes a task. Write a response that appropriately completes the request. Instruction: {instruction}\n Response:"""
# Create Text Generation Pipeline
inf_pipeline = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=256,
pad_token_id=tokenizer.eos_token_id,
truncation=True,
do_sample=True,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.1
)
# Example prompt
prompt = "List 5 reasons why someone should learn to cook."
formatted_prompt = prompt_template.format(instruction=prompt)
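# Keep only the text generated after the "Response:" marker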
response = inf_pipeline(formatted_prompt)[0]['generated_text'].split(" Response:")[-1].strip()
print(response)
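Because the adapter was merged into the base weights above, the combined model can also be saved and later reloaded with plain transformers, without the peft dependency; the output path below is a placeholder.

# Persist the merged model and tokenizer (placeholder path).
merged_dir = "dolly-3b-lora-merged"
model.save_pretrained(merged_dir)
tokenizer.save_pretrained(merged_dir)
# Reload later without peft.
reloaded = AutoModelForCausalLM.from_pretrained(merged_dir, torch_dtype=torch.float16, device_map="auto")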