---
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloomz-560m
tags:
  - text-generation
  - bloomz
  - fine-tuned
  - hf-inference-endpoint
model-index:
  - name: unh-academic-integrity-policy-560m
    results: []
language:
  - en
---

# unh-academic-integrity-policy-560m

This model is a fine-tuned version of bigscience/bloomz-560m designed to generate academic policy-aligned text. It was trained with Parameter-Efficient Fine-Tuning (PEFT) to reduce memory usage during training, and is intended to be used in a Retrieval-Augmented Generation (RAG) pipeline to improve response relevance.

## Intended Use

- Academic policy Q&A text generation
- Text generation with contextual grounding
- Research use in NLP and LLM alignment
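A minimal generation sketch using the `transformers` pipeline API. Note that the Hub repo id below is a placeholder, not a confirmed path — substitute the repository this model is actually published under:

```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual Hub path of the
# fine-tuned weights before running.
model_id = "SaiSaketh/unh-academic-integrity-policy-560m"

generator = pipeline("text-generation", model=model_id)

prompt = "What does the academic integrity policy say about plagiarism?"
outputs = generator(prompt, max_new_tokens=100, do_sample=False)
print(outputs[0]["generated_text"])
```

In a RAG setup, retrieved policy passages would be prepended to the prompt before generation.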

## Training Details

- Learning rate: 2e-05
- Batch size: 2 (train), 8 (eval)
- Epochs: 2
- Optimizer: Adam
- Gradient accumulation steps: 4
- Precision: mixed (AMP)
- LR scheduler: linear decay
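Because gradients are accumulated over 4 steps before each optimizer update, the effective train batch size is larger than the per-device setting — a detail worth keeping in mind when comparing these hyperparameters against other runs:

```python
# Effective train batch size = per-device batch size * accumulation steps
per_device_train_batch = 2
grad_accumulation_steps = 4

effective_batch = per_device_train_batch * grad_accumulation_steps
print(effective_batch)  # -> 8, matching the eval batch size above
```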

## Deployment

Optimized for deployment with Hugging Face Inference Endpoints. Also supports:

- Amazon SageMaker
- Azure ML
- Friendli Inference

## Deployment Status

This model is not currently deployed on a public inference provider. You can deploy it using Hugging Face Inference Endpoints or export it to services such as Amazon SageMaker or Azure ML.
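Once an Inference Endpoint exists, the model is queried over HTTP with a JSON body in the standard text-generation shape. A sketch of the request construction — the endpoint URL and token are placeholders you supply after deployment:

```python
import json

# Placeholder values -- filled in after you create an endpoint.
ENDPOINT_URL = "https://<your-endpoint>.endpoints.huggingface.cloud"
HF_TOKEN = "hf_..."  # your Hugging Face access token

def build_request(prompt, max_new_tokens=100):
    """Build headers and a JSON payload for a text-generation endpoint."""
    headers = {
        "Authorization": f"Bearer {HF_TOKEN}",
        "Content-Type": "application/json",
    }
    payload = {
        "inputs": prompt,
        "parameters": {"max_new_tokens": max_new_tokens},
    }
    return headers, json.dumps(payload)

headers, body = build_request("Summarize the late-submission policy.")
# To send: requests.post(ENDPOINT_URL, headers=headers, data=body)
```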

## Framework Versions

- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.15.0
- Tokenizers: 0.15.0