Qwen3 0.6B - Real Estate Fine-Tuned Adapter (LoRA)

This repository contains a LoRA adapter for Qwen3-0.6B, fine-tuned on a real estate dataset for tasks such as property description generation and value estimation. Fine-tuning was performed with LLaMA Factory.

Base Model

  • Qwen/Qwen3-0.6B
  • Fine-tuned using LoRA with lora_rank=64 targeting all transformer layers.

Fine-Tuning Details

| Setting              | Value                          |
|----------------------|--------------------------------|
| Framework            | LLaMA Factory                  |
| Fine-tuning type     | LoRA                           |
| LoRA rank            | 64                             |
| Dataset              | Custom real estate dataset     |
| Cutoff length        | 3500 tokens                    |
| Epochs               | 3                              |
| Batch size           | 1 (accumulated over 8 steps)   |
| Learning rate        | 1e-4                           |
| Scheduler            | Cosine                         |
| Evaluation metric    | eval_loss                      |
| Best model criterion | Lowest validation loss         |
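Assuming LLaMA Factory's YAML config format, the settings above correspond roughly to a training config like the one below. This is a sketch for orientation only; the dataset key is a placeholder and was not published with the card.

```yaml
model_name_or_path: Qwen/Qwen3-0.6B

stage: sft
finetuning_type: lora
lora_rank: 64
lora_target: all

dataset: real_estate          # placeholder: actual dataset key not published
cutoff_len: 3500

num_train_epochs: 3
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
lr_scheduler_type: cosine

metric_for_best_model: eval_loss
load_best_model_at_end: true
```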

Dataset

The adapter was trained on a custom real estate dataset covering property description generation and value-estimation tasks.

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B", device_map="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B", trust_remote_code=True)

# Attach the LoRA adapter to the base model
model = PeftModel.from_pretrained(base_model, "heba1998/Qwen-LoRA-Estate")

# Example: generate a property description
prompt = "Write a listing description for a 3-bedroom apartment with a garden."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=200)[0], skip_special_tokens=True))