VulnZap Faraday Mini 1

VulnZap Faraday Mini 1 is a 4‑bit GGUF export of a LoRA fine‑tune of unsloth/Qwen2.5‑Coder‑7B‑Instruct, trained on ~363 security‑oriented code snippets and patches.

intended use

  • Patch suggestion – given a vulnerable code block, return a fixed version.
  • Risk explanation – describe why the snippet is vulnerable and how the patch mitigates it.
  • CWE classification – identify vulnerability class from raw code.

Not designed for general chat; it excels at short, code‑focused prompts.
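Prompts should follow the template shown in the example section below. A minimal sketch of assembling such a prompt (the `buildPrompt` helper and the `PatchRequest` shape are illustrative names, not part of the model's API):

```typescript
// Illustrative helper that assembles a patch-request prompt in the
// "### task / ### language / ### cwe / ### code / ### response"
// template this card's example uses.
interface PatchRequest {
  task: string;      // e.g. 'patch the vulnerability'
  language: string;  // e.g. 'typescript'
  cwe: string;       // e.g. 'CWE-434 - unrestricted file upload'
  code: string;      // the vulnerable snippet
}

function buildPrompt(req: PatchRequest): string {
  return [
    `### task: ${req.task}`,
    `### language: ${req.language}`,
    `### cwe: ${req.cwe}`,
    '### code',
    req.code,
    '### response',
  ].join('\n');
}
```

The trailing `### response` marker leaves the completion point where the model was trained to emit the fixed code.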

training details

  • Base model: Qwen2.5‑Coder‑7B (instruction‑tuned)
  • Fine‑tuning: LoRA (r = 64, α = 128) with Unsloth on A100 40 GB
  • Sequence length: 4,096 tokens
  • Epochs: 3 (effective batch = 8)
  • Quantisation: Q4_K_M via Unsloth’s GGUF exporter
  • Date: 2025-07-19

example

### task: patch the vulnerability
### language: typescript
### cwe: CWE‑434 – unrestricted file upload
### code
import express from 'express';
import multer from 'multer';
const app = express();
const upload = multer();
app.post('/upload', upload.single('file'), (req, res) => {
  /* vulnerable: accepts any file type without checks */
});
### response
<assistant outputs fixed code here>
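One plausible shape for the patched handler is an allow‑list check wired into multer's fileFilter. A sketch, assuming a PNG/JPEG allow‑list (the `isAllowedUpload` helper and the specific allowed types are illustrative, not the model's actual output):

```typescript
// Illustrative CWE-434 mitigation: allow-list both the declared MIME
// type and the file extension. The specific lists are an example;
// tailor them to your application.
const ALLOWED_MIME = new Set(['image/png', 'image/jpeg']);
const ALLOWED_EXT = new Set(['.png', '.jpg', '.jpeg']);

function isAllowedUpload(mimetype: string, originalname: string): boolean {
  const dot = originalname.lastIndexOf('.');
  const ext = dot >= 0 ? originalname.slice(dot).toLowerCase() : '';
  return ALLOWED_MIME.has(mimetype) && ALLOWED_EXT.has(ext);
}

// The check plugs into multer's fileFilter option, with a size cap:
//
// const upload = multer({
//   limits: { fileSize: 5 * 1024 * 1024 },
//   fileFilter: (_req, file, cb) =>
//     cb(null, isAllowedUpload(file.mimetype, file.originalname)),
// });
```

Note that client‑supplied MIME types and extensions are only a first line of defence; server‑side content inspection is still advisable.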

evaluation

  • manual spot‑checks on 50 held‑out snippets show

    • 100 % of patches compile,
    • 76 % fully resolve the vulnerability.

(Automatic benchmarks will be added soon.)

limitations & bias

  • trained on open‑source repos → may under‑perform on proprietary or uncommon frameworks
  • no guarantee the patch is production‑ready; always review before deploying
  • doesn’t reason about business‑logic flaws beyond the CWE classes seen in fine‑tuning data

license

Apache 2.0 for both the adapter and the exported GGUF.


(c) 2025 PlawLabs – questions: yaz [at] plawlabs.com
