Fine-Tuned Gemma3 1B-IT - Web Accessibility Q&A

This model is a fine-tuned version of Google's Gemma 3 1B-IT, adapted for answering questions about web accessibility, WCAG (Web Content Accessibility Guidelines) principles, and general digital accessibility best practices. It was fine-tuned using the LoRA (Low-Rank Adaptation) method.
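
For readers unfamiliar with LoRA, the snippet below is a minimal sketch of how an adapter of this kind is typically attached to google/gemma-3-1b-it with the peft library. The rank, target modules, and other hyperparameters are illustrative assumptions, not the exact configuration used for this model.

# Illustrative sketch only: attaching a LoRA adapter to the base model with peft.
# Hyperparameters below are assumptions, not the values used for this fine-tune.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "google/gemma-3-1b-it"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)

# LoRA freezes the base weights and injects small trainable low-rank matrices
# into selected projection layers, which keeps fine-tuning cheap.
lora_config = LoraConfig(
    r=8,                    # rank of the low-rank update (assumed)
    lora_alpha=16,          # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # typical attention projections (assumed)
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only a small fraction of parameters are trainable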

Model Details

Model Description

This model is a Causal Language Model (AutoModelForCausalLM) fine-tuned from google/gemma-3-1b-it. Its primary function is to serve as a conversational assistant for queries concerning web accessibility. It processes user questions about accessibility guidelines and concepts (e.g., alt text, color contrast, keyboard navigation) and generates informative answers based on the patterns learned from its fine-tuning dataset.

  • Developed by: omark807
  • Shared by: omark807
  • Model type: Causal Language Model (Fine-tuned via PEFT/LoRA)
  • Language(s) (NLP): English
  • License: GPL
  • Finetuned from model: google/gemma-3-1b-it

Uses

Direct Use

This model is intended for direct use as a conversational agent or a knowledge base query system for specific questions about digital accessibility. Potential users include web developers, designers, accessibility auditors, or anyone seeking quick information about WCAG guidelines and their implementation.

Downstream Use

This model could potentially be integrated into the following (a brief integration sketch follows this list):

  • Accessibility testing tools to provide explanations for detected issues.
  • Chatbots on accessibility resource websites.
  • Educational platforms teaching web accessibility.
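
As a sketch of what such an integration might look like, the helper below wraps the fine-tuned model behind a single question-answering function that a chatbot or testing tool could call. The function name, prompt construction, and generation settings are illustrative assumptions.

# Hypothetical integration helper; names and generation settings are illustrative.
def answer_accessibility_question(question: str, model, tokenizer, max_new_tokens: int = 256) -> str:
    # Build a prompt in Gemma's turn format, generate, and return only the model's reply.
    prompt = f"<start_of_turn>user\n{question}<end_of_turn>\n<start_of_turn>model\n"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs, max_new_tokens=max_new_tokens, do_sample=True, top_p=0.9, temperature=0.7
    )
    new_tokens = outputs[0][inputs["input_ids"].shape[-1]:]  # drop the echoed prompt
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

# Example call, assuming `model` and `tokenizer` are loaded as shown in
# "How to Get Started with the Model" below:
# print(answer_accessibility_question("Why does color contrast matter?", model, tokenizer))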

Out-of-Scope Use

This model is not intended for:

  • Providing legal advice or certifications regarding accessibility compliance.
  • Diagnosing user disabilities.
  • Generating content unrelated to web accessibility.
  • Generating harmful, biased, or unethical content.
  • Substituting professional accessibility audits or human expertise.

Bias, Risks, and Limitations

  • Data Limitations: The model's knowledge is limited to the omark807/web_a11y_dataset (73 training samples, 15 evaluation samples). It may not cover all aspects of web accessibility comprehensively, and it may not respond accurately to questions outside the scope of its training data. Its responses are highly dependent on the quality and breadth of this specific dataset (see the loading sketch after this list).
  • Generalization: Due to the relatively small dataset size, the model's ability to generalize to new or complex accessibility questions beyond the training examples may be limited.
  • Bias Inheritance: The model inherits potential biases present in its base model (google/gemma-3-1b-it) and any biases inadvertently introduced through the fine-tuning dataset.
  • Factual Accuracy: While fine-tuned on factual data, LLMs can still "hallucinate" or provide inaccurate information. Users should cross-reference critical information.
  • No Human Evaluation: The evaluation performed is quantitative (loss-based), not qualitative or human-based; human evaluation is especially important for judging the response quality of conversational models.
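
For context on the dataset mentioned above, the snippet below is a small sketch of how it could be loaded and inspected with the datasets library, assuming it is hosted on the Hugging Face Hub with standard splits; the split names and column layout are assumptions, so check the dataset page for the actual schema.

# Sketch: inspect the fine-tuning data (73 training / 15 evaluation samples per this card).
# Split names and columns are assumptions; see the dataset page for the real schema.
from datasets import load_dataset

ds = load_dataset("omark807/web_a11y_dataset")
print(ds)              # available splits and their sizes
print(ds["train"][0])  # first training example (question/answer pair, assumed layout)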

Recommendations

Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations. For critical applications or definitive answers, always consult the official WCAG documentation or an accessibility expert, or perform a thorough manual audit.

How to Get Started with the Model

Use the code below to get started with the model.

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "omark807/web_a11y_model"  # repository ID on the Hugging Face Hub

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load the model. Adjust torch_dtype (or enable 4-bit loading below) based on your GPU and desired precision.
# The model was fine-tuned in float16 because bitsandbytes could not be imported during training.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto") 
# Or, if you want to try 4-bit inference (requires bitsandbytes to work):
# from transformers import BitsAndBytesConfig
# bnb_config_inference = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16)
# model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config_inference, device_map="auto")

# Example prompt following Gemma's instruction format
prompt = "<start_of_turn>user\nWhat is the purpose of alt text in images?<end_of_turn>\n<start_of_turn>model\n"

# Tokenize input
input_ids = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate response
outputs = model.generate(
    **input_ids, 
    max_new_tokens=256, 
    num_return_sequences=1,
    do_sample=True, # Recommended for more varied responses
    top_p=0.9,      # Top-p sampling
    temperature=0.7 # Sampling temperature
)

# Decode and print response
response = tokenizer.decode(outputs[0], skip_special_tokens=False)
print(response)
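
Alternatively, if you prefer not to hand-write Gemma's turn markers, recent transformers releases let the tokenizer build the prompt via its chat template; the variant below also decodes only the newly generated tokens so the prompt is not echoed back. This is a sketch and assumes the tokenizer shipped with the model defines a chat template.

# Variant: build the prompt with the tokenizer's chat template (assumes one is defined),
# then decode only the new tokens so the prompt is not repeated in the output.
messages = [{"role": "user", "content": "What is the purpose of alt text in images?"}]
chat_inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

chat_outputs = model.generate(
    chat_inputs, max_new_tokens=256, do_sample=True, top_p=0.9, temperature=0.7
)
answer = tokenizer.decode(chat_outputs[0][chat_inputs.shape[-1]:], skip_special_tokens=True)
print(answer)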