---
language:
  - en
  - hi
  - gu
license:
  - apache-2.0
  - cc-by-sa-4.0
tags:
  - gguf
  - assistant
  - AI
  - Mirror
  - mirror_code
  - LLM
  - LoRA
  - ollama
  - llama.cpp
library_name: llama.cpp
model_creator: Dipesh Majithia
model_name: Mirror Dolly (GGUF)
datasets:
  - databricks/databricks-dolly-15k
base_model:
  - dipeshmajithia/MirrorCode
---

# 🪞 Mirror Dolly (GGUF) – Model Card

## 🧠 Summary

Mirror Dolly is a fine-tuned assistant-style language model built on top of dipeshmajithia/MirrorCode. It was fine-tuned for 1000 iterations on the Dolly 15k dataset using LoRA; the adapter was then merged into the base model and converted to GGUF for local inference.

Mirror Dolly is designed for structured and emotionally aware assistant conversations and supports lightweight deployment with llama.cpp, ollama, or text-generation-webui.


## 📦 Model Overview

- Base model: dipeshmajithia/MirrorCode
- LoRA fine-tuning:
  - Dataset: Dolly 15k
  - Iterations: 1000
  - Layers: 4
  - Rank: 8
- Merge and conversion: LoRA weights merged into the base model and converted to GGUF via transformers + `convert_hf_to_gguf.py` (see the sketch below)
- Quantization options: f16, q8_0, q4_0
- Use cases:
  - Personal assistant
  - Structured explanations
  - Lightweight offline inference
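
The conversion and quantization steps are not shipped in this repository. As a rough sketch, assuming the LoRA adapter has already been merged into the base model (for example with PEFT's `merge_and_unload()`) and saved as a Hugging Face checkpoint, the GGUF conversion and quantization with llama.cpp might look like this (paths and output names are illustrative):

```bash
# Clone llama.cpp for its conversion script and quantization tool
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# Convert the merged Hugging Face checkpoint (illustrative path) to an f16 GGUF
python convert_hf_to_gguf.py /path/to/mirror_dolly_merged \
  --outfile mirror_dolly.f16.gguf \
  --outtype f16

# Optionally produce the smaller q8_0 / q4_0 variants
# (the quantization binary must be built first; it is called llama-quantize
#  in recent llama.cpp builds and quantize in older ones)
./llama-quantize mirror_dolly.f16.gguf mirror_dolly.q8_0.gguf q8_0
./llama-quantize mirror_dolly.f16.gguf mirror_dolly.q4_0.gguf q4_0
```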

## 🛠 How to Use

โ–ถ๏ธ With llama.cpp

```bash
./main -m mirror_dolly.gguf -p "Who are you?"
```
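
### ▶️ With ollama

ollama is listed above as a supported runtime. As a minimal, untested sketch (the GGUF file name and model name are illustrative), the weights could be wrapped in an ollama model like this:

```bash
# Create a Modelfile that points at the local GGUF weights
cat > Modelfile <<'EOF'
FROM ./mirror_dolly.gguf
EOF

# Register the model with ollama and start a conversation
ollama create mirror_dolly -f Modelfile
ollama run mirror_dolly "Who are you?"
```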