Model Card for Wazuh Alert Classifier

Model Details

Model Description

This model classifies Wazuh alerts as either true positive or false positive. It helps SOC analysts cut through alert noise by filtering out false positives so they can focus on actionable alerts. The model is fine-tuned from LLaMA 3.1 8B on security logs and Wazuh alerts using instruction-based learning.

  • Developed by: holil
  • Funded by: holil
  • Shared by: holil
  • Model type: Causal language model (decoder-only transformer) fine-tuned for binary alert classification
  • Language(s) (NLP): English
  • License: MIT
  • Finetuned from model: LLaMA 3.1 8B

Model Sources

  • Repository: https://huggingface.co/kholil-lil/wazuh-model

Uses

Direct Use

This model is intended for classifying Wazuh alerts as true positive or false positive. It assists SOC analysts in focusing on actionable alerts and reducing noise from false positives.

Downstream Use

This model can be integrated into SIEM systems, security automation platforms, and SOC dashboards to enhance alert classification.
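As a minimal illustration, the sketch below wraps the inference call from the "How to Get Started" section in a helper that a SIEM pipeline or dashboard could call with a parsed alert. The helper name and the JSON hand-off are illustrative assumptions; any product-specific wiring (webhooks, queues, Wazuh integration scripts) is left out.

import json
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the classifier once at service start-up
MODEL_NAME = "kholil-lil/wazuh-model"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

PROMPT = (
    "Classify the following Wazuh alert as a true positive or false positive. "
    "Respond only with 'True Positive' or 'False Positive'.\n\n{alert}"
)

def classify_alert(alert: dict) -> str:
    """Return 'True Positive' or 'False Positive' for one parsed Wazuh alert."""
    text = PROMPT.format(alert=json.dumps(alert, indent=2))
    inputs = tokenizer(text, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=5)
    # Decode only the newly generated tokens, not the prompt
    new_tokens = outputs[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()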

Out-of-Scope Use

The model is not designed for general cybersecurity analysis outside of Wazuh alerts. It should not be used as a standalone security solution but as an aid for SOC analysts.

Bias, Risks, and Limitations

  • False Classifications: The model may misclassify alerts, requiring human verification.
  • Security Data Bias: Training data may not cover all attack patterns, leading to gaps in detection.
  • Limited Scope: The model is optimized for Wazuh alerts and may not perform well on other security logs.

Recommendations

  • Always validate classifications with security experts.
  • Regularly update training data to adapt to evolving attack patterns.

How to Get Started with the Model

To use the model, follow these steps:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model_name = "kholil-lil/wazuh-model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Define the Wazuh alert input
input_text = """Classify the following Wazuh alert as a true positive or false positive. Respond only with 'True Positive' or 'False Positive'.

{
  "timestamp": "2025-03-05T00:03:50.341+0000",
  "rule": {
    "level": 5,
    "description": "Web server 400 error code.",
    "id": "31101",
    "firedtimes": 1,
    "mail": false,
    "groups": [
      "web",
      "accesslog",
      "attack"
    ],
    "pci_dss": [
      "6.5",
      "11.4"
    ],
    "gdpr": [
      "IV_35.7.d"
    ],
    "nist_800_53": [
      "SA.11",
      "SI.4"
    ],
    "tsc": [
      "CC6.6",
      "CC7.1",
      "CC8.1",
      "CC6.1",
      "CC6.8",
      "CC7.2",
      "CC7.3"
    ]
  },
  "agent": {
    "id": "xxx",
    "name": "xxx-environment",
    "ip": "xxx.xxx.xxx.xxx"
  },
  "manager": {
    "name": "wazuh-server"
  },
  "id": "1741133030.18922",
  "full_log": "8.220.202.246 - - [05/Mar/2025:00:03:49 +0000] \"GET /Kb8w HTTP/1.1\" 400 264 \"-\" \"Go-http-client/1.1\"",
  "decoder": {
    "name": "web-accesslog"
  },
  "data": {
    "protocol": "GET",
    "srcip": "8.220.202.246",
    "id": "400",
    "url": "/Kb8w"
  },
  "location": "/var/log/nginx/apidev-access.log"
}"""

# Tokenize the input
inputs = tokenizer(input_text, return_tensors="pt")

# Generate a response
outputs = model.generate(**inputs, max_new_tokens=5)

# Decode only the newly generated tokens (skip the prompt)
response = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
).strip()

print("Model Response:", response)

# Example output:
# Model Response: True Positive

Training Details

Training Data

The model is trained on a dataset of Wazuh alerts labeled as true positive or false positive, collected from real-world SOC operations. The dataset was preprocessed and formatted using a custom Alpaca-style instruction-tuning template.
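The exact template has not been published; the snippet below is a minimal sketch of what an Alpaca-style record for this task could look like. The field layout and the formatting function are assumptions for illustration only.

import json

# Hypothetical Alpaca-style template for one labeled alert
ALPACA_TEMPLATE = (
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

def format_example(alert: dict, label: str) -> str:
    """Render one training example; label is 'True Positive' or 'False Positive'."""
    return ALPACA_TEMPLATE.format(
        instruction=(
            "Classify the following Wazuh alert as a true positive or false positive. "
            "Respond only with 'True Positive' or 'False Positive'."
        ),
        input=json.dumps(alert, indent=2),
        output=label,
    )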

Training Procedure

  • Preprocessing: Log normalization, tokenization, and label encoding.
  • Training Regime: Mixed-precision FP16 training for efficiency.
  • Batch Size: 2
  • Gradient Accumulation Steps: 4
  • Learning Rate: 2e-4
  • Optimizer: AdamW 8-bit
  • Epochs: 3
  • LoRA Config: r=16, lora_alpha=16, dropout=0
  • Memory Optimization: 4-bit quantization using Unsloth (a training sketch consistent with these settings follows this list)
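The snippet below is a minimal sketch of a training setup consistent with the hyperparameters above, using Unsloth with 4-bit loading and LoRA plus a TRL SFTTrainer. The base checkpoint name, target modules, dataset path, and column name are assumptions, and exact argument names vary across trl/Unsloth versions; the actual training script is not published.

from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load LLaMA 3.1 8B in 4-bit (base checkpoint name is an assumption)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters with the configuration listed above
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Alpaca-formatted alerts (hypothetical file name and column)
dataset = load_dataset("json", data_files="wazuh_alerts_alpaca.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=3,
        fp16=True,
        optim="adamw_8bit",
        output_dir="outputs",
    ),
)
trainer.train()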

Evaluation

Testing Data, Factors & Metrics

Testing Data

A held-out set of labeled Wazuh alerts was used for evaluation.

Factors

  • Alert severity levels
  • Variations in log format

Metrics

  • Accuracy: 92%
  • Precision: 91%
  • Recall: 90%
  • F1 Score: 90.5%
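These are standard binary-classification metrics with 'True Positive' treated as the positive class. For reference, a minimal sketch of how they can be recomputed from predictions with scikit-learn (illustrative labels only):

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Gold and predicted labels for the held-out alerts (illustrative values)
y_true = ["True Positive", "False Positive", "True Positive", "False Positive"]
y_pred = ["True Positive", "False Positive", "False Positive", "False Positive"]

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred, pos_label="True Positive"))
print("Recall   :", recall_score(y_true, y_pred, pos_label="True Positive"))
print("F1 Score :", f1_score(y_true, y_pred, pos_label="True Positive"))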

Results

The model demonstrates high accuracy in classifying Wazuh alerts but may require periodic retraining to adapt to new threats.

Environmental Impact

  • Hardware Type: T4 GPU
  • Hours Used: 5
  • Cloud Provider: GCP
  • Carbon Emitted: Estimated using ML Impact Calculator

Technical Specifications

Model Architecture and Objective

  • Causal language model fine-tuned from LLaMA 3.1 8B, prompted to emit a binary label ('True Positive' or 'False Positive')
  • Optimized for classifying security logs and Wazuh alerts
  • LoRA-based fine-tuning with Unsloth

Compute Infrastructure

Hardware

  • GPU: NVIDIA Tesla T4 (15 GB VRAM)
  • System RAM: 12.7 GB

Software

  • Framework: PyTorch
  • Libraries: Transformers, Hugging Face Datasets, Unsloth
  • Training Platform: Google Cloud Platform (Compute Engine)

Citation

BibTeX:

@article{kholil2025wazuh,
  author = {Kholil Haq Alim Hakim},
  title = {Wazuh Alert Classification using LLaMA 3.1 8B},
  year = {2025},
  publisher = {Hugging Face}
}

Model Card Contact

For inquiries, contact holil.
