1B-Building-Engineering-LLM

Fine-tuned EleutherAI/pythia-1b-deduped for building-engineering tasks using 4-bit quantization + LoRA.


A Birthday Gift
"For my father - who taught me that strong foundations matter in both buildings and life."
— Happy Birthday, Dad! (June 2025)


🔗 Quick Links

GitHub HuggingFace License


πŸ› οΈ Technical Specifications

Architecture

| Component | Implementation Details |
|---|---|
| Base Model | EleutherAI/pythia-1b-deduped |
| Parameters | ~622M |
| Quantization | 4-bit via BitsAndBytes |
| Adapter | LoRA (r=8, alpha=16) |
| Training Framework | PyTorch + HuggingFace Transformers |
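The quantization and adapter settings in the table can be sketched with `transformers` and `peft`. The r=8 / alpha=16 values come from the table; the compute dtype and the `target_modules` choice (Pythia's GPT-NeoX layout uses a fused `query_key_value` projection) are assumptions, as they are not stated in the card:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit quantization via BitsAndBytes (compute dtype is an assumption)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# LoRA adapter: r=8, alpha=16 as stated above;
# target module name assumed from the GPT-NeoX architecture
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["query_key_value"],
    task_type="CAUSAL_LM",
)
```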

Training Data

  • Curated building-engineering corpus (4 months of collection)
  • Key domains covered:
    • Structural design principles
    • Material specifications (concrete, insulation)
    • Building code compliance
    • Thermal performance metrics

Basic usage


```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and its tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "Irfanuruchi/1B-building-engineering-llm",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("Irfanuruchi/1B-building-engineering-llm")

# Domain prompt in Q/A format
prompt = """You are an experienced building engineer. Answer concisely:
Q: What factors affect concrete curing time?
A:"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
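The card lists 4-bit BitsAndBytes quantization; on a CUDA GPU the checkpoint can be loaded the same way to cut memory use. This is a sketch assuming `bitsandbytes` and `accelerate` are installed:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Load in 4-bit to match the training setup (requires a CUDA GPU)
model = AutoModelForCausalLM.from_pretrained(
    "Irfanuruchi/1B-building-engineering-llm",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",  # let accelerate place layers automatically
    trust_remote_code=True,
)
```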

License and compliance

This project is released under the Apache 2.0 License, covering both:

  • Model weights (inherited from the base model)
  • Training code and recipes

**Key requirements**

  • Include the original copyright notice
  • Document modifications if redistributing
  • No additional restrictions may be applied

Disclaimer

While trained on quality engineering data:

  • Not certified for safety-critical applications
  • Always verify critical advice with human experts
  • Knowledge cutoff: June 2025 (may not reflect the latest codes)
