# Veriforge-Gemma-2B-IT
`veriforge-gemma-2b-it` is a QLoRA fine-tuned version of [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) that specializes in prompt-based circuit synthesis for digital logic design in Verilog HDL.
## Model Description
- Base Model: [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it)
- Fine-tuned By: louijiec
- Method: QLoRA using PEFT and bitsandbytes
- Data: 500 simulated Verilog gate examples (AND, OR, NAND, etc.)
- Platform: Google Colab
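The 500 simulated gate examples follow the prompt/response template used in the usage example below. The helper here is a hypothetical reconstruction of how such an example could be generated for non-inverting gates (the actual data generation script is not published; the function name and structure are assumptions):

```python
# Sketch of how one of the ~500 simulated gate examples might look.
# The "### Prompt:" / "### Response:" template matches the usage example
# in this card; the generation code itself is an assumption.

def make_example(gate: str, op: str, n_inputs: int = 2) -> str:
    """Build a single training string for an n-input non-inverting gate."""
    ports = ", ".join(f"a{i}" for i in range(n_inputs))
    expr = f" {op} ".join(f"a{i}" for i in range(n_inputs))
    verilog = (
        f"module {gate.lower()}_{n_inputs}_input (output y, input {ports});\n"
        f"  assign y = {expr};\n"
        f"endmodule"
    )
    return (
        f"### Prompt:\nWrite Verilog code for a {n_inputs}-input "
        f"{gate.upper()} gate.\n\n### Response:\n{verilog}"
    )

print(make_example("and", "&", 2))
```

Inverting gates (NAND, NOR) would additionally wrap the expression in `~( ... )`, as in the sample output below.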
## Example Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "louijiec/veriforge-gemma-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Use the same prompt template the model was fine-tuned on
prompt = "### Prompt:\nWrite Verilog code for a 3-input NAND gate.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
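Since the adapter was trained with 4-bit QLoRA, the model can also be loaded in 4-bit to reduce GPU memory. A sketch (assumes `bitsandbytes` is installed and a CUDA GPU is available; the NF4 quantization type and FP16 compute dtype are assumptions mirroring common QLoRA defaults, not settings confirmed by this card):

```python
# Optional: 4-bit loading to mirror the QLoRA training setup.
# Requires bitsandbytes and a CUDA GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "louijiec/veriforge-gemma-2b-it"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # assumption: NF4 is the QLoRA default
    bnb_4bit_compute_dtype=torch.float16,  # assumption
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```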
## Sample Output
```verilog
module nand_3_input (output y, input a0, a1, a2);
  assign y = ~(a0 & a1 & a2);
endmodule
```
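The generated module can be sanity-checked without a Verilog simulator by evaluating its truth table in plain Python (a minimal sketch modeling `assign y = ~(a0 & a1 & a2);` over all eight input combinations):

```python
from itertools import product

def nand3(a0: int, a1: int, a2: int) -> int:
    """Python model of the Verilog line: assign y = ~(a0 & a1 & a2);"""
    return 0 if (a0 and a1 and a2) else 1

# A 3-input NAND outputs 0 only when all three inputs are 1
for a0, a1, a2 in product((0, 1), repeat=3):
    expected = 0 if (a0, a1, a2) == (1, 1, 1) else 1
    assert nand3(a0, a1, a2) == expected

print("3-input NAND truth table verified")
```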
## Training Details
- LoRA rank: 8
- Quantization: 4-bit (QLoRA)
- Max sequence length: 512 tokens
- Optimizer: AdamW (FP16 mixed precision)
- Epochs: 10
- Batch size: 2
- Gradient accumulation steps: 4
- Logging steps: 10
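The hyperparameters above can be assembled into a PEFT/QLoRA configuration sketch. Only the rank, epochs, batch size, gradient accumulation, FP16, and logging steps come from this card; the alpha, dropout, target modules, and output directory are assumptions:

```python
# Sketch of a training configuration matching the hyperparameters above.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=8,                                  # LoRA rank (from this card)
    lora_alpha=16,                        # assumption: common 2x-rank default
    lora_dropout=0.05,                    # assumption
    target_modules=["q_proj", "v_proj"],  # assumption: typical attention targets
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="veriforge-qlora",         # assumption
    num_train_epochs=10,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,        # effective batch size = 2 * 4 = 8
    fp16=True,
    logging_steps=10,
)
```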
## Citations
- Gemma by Google: https://huggingface.co/google/gemma-2b-it
- QLoRA: https://arxiv.org/abs/2305.14314
- PEFT: https://github.com/huggingface/peft
## Limitations
- Trained only on simple gates
- No memory/state logic (flip-flops, FSMs, etc.)
- No formal verification or testbench evaluation
## Future Work
- Add support for more circuit components (MUX, ALU)
- Formal testbench generation
- Build EDA pipeline integrations