Veriforge-Gemma-2B-IT πŸ”§

veriforge-gemma-2b-it is a QLoRA fine-tune of google/gemma-2b-it that specializes in prompt-based synthesis of digital logic circuits in Verilog HDL.

πŸš€ Model Description

  • Base Model: google/gemma-2b-it
  • Fine-tuned By: louijiec
  • Method: QLoRA using PEFT and bitsandbytes
  • Data: 500 simulated Verilog gate examples (AND, OR, NAND, etc.); an assumed example format is sketched below
  • Platform: Google Colab
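
The exact dataset format is not published with this card. Below is a minimal sketch of what one training example might look like, assuming the data reuses the "### Prompt: / ### Response:" template from the inference example further down; the field names and the sample gate are purely illustrative.

# Hypothetical training example, assuming the inference prompt template.
example = {
    "prompt": "Write Verilog code for a 2-input AND gate.",  # assumed wording
    "response": (
        "module and_2_input (output y, input a0, a1);\n"
        "  assign y = a0 & a1;\n"
        "endmodule"
    ),
}

# Flatten into the single training string the model is assumed to see.
text = f"### Prompt:\n{example['prompt']}\n\n### Response:\n{example['response']}"
print(text)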

🧐 Example Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "louijiec/veriforge-gemma-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "### Prompt:\nWrite Verilog code for a 3-input XOR gate.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
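
If GPU memory is tight, the model can also be loaded in 4-bit with bitsandbytes, mirroring the QLoRA setup. The snippet below is a sketch rather than part of the original card; it additionally requires the bitsandbytes and accelerate packages, and the compute dtype and device map are assumptions.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "louijiec/veriforge-gemma-2b-it"

# 4-bit quantized loading (sketch); adjust dtype/device_map for your hardware.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)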

πŸ§ͺ Sample Output

module nand_3_input (output y, input a0, a1, a2);
  assign y = ~(a0 & a1 & a2);
endmodule

πŸ“š Training Details

  • LoRA rank: 8
  • Quantization: 4-bit (QLoRA)
  • Max tokens: 512
  • Optimizer: AdamW with FP16 mixed precision
  • Epochs: 10
  • Batch Size: 2
  • Gradient Accumulation: 4
  • Logging Steps: 10
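
A PEFT/Transformers configuration consistent with the hyperparameters above is sketched below. The LoRA alpha, dropout, and target modules are assumptions; the original Colab notebook remains the authoritative source.

from peft import LoraConfig
from transformers import TrainingArguments

# LoRA settings; the rank matches the card, the rest are assumed defaults.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,          # assumed
    lora_dropout=0.05,      # assumed
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)

# Trainer settings matching the listed epochs, batch size, accumulation, and logging.
training_args = TrainingArguments(
    output_dir="veriforge-gemma-2b-it",
    num_train_epochs=10,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    logging_steps=10,
    fp16=True,
    optim="adamw_torch",
)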

⚠️ Limitations

  • Trained only on simple gates
  • No memory/state logic (flip-flops, FSMs, etc.)
  • No formal verification or testbench evaluation

πŸ’ͺ Future Work

  • Add support for more circuit components (MUX, ALU)
  • Formal testbench generation
  • Build EDA pipeline integrations