Model Card for LLaMA 3.1 8B Instruct - YARA Rule Generation Fine-tuned

This model is a fine-tuned version of the LLaMA 3.1 8B Instruct model, specifically adapted for YARA rule generation and cybersecurity-related tasks.

Model Details

Model Description

This model is based on the LLaMA 3.1 8B Instruct model and has been fine-tuned on a custom dataset of YARA rules and cybersecurity-related content. It is designed to assist in generating YARA rules and provide more accurate and relevant responses to queries in the cybersecurity domain, with a focus on malware detection and threat hunting.

  • Developed by: Wyatt Roersma (No organization affiliation)
  • Model type: Instruct-tuned Large Language Model
  • Language(s) (NLP): English (primary), with potential for limited multilingual capabilities
  • License: Llama 3.1 Community License (inherited from the base model; see the meta-llama/Meta-Llama-3.1-8B-Instruct license for terms)
  • Finetuned from model: meta-llama/Meta-Llama-3.1-8B-Instruct

Uses

Direct Use

This model can be used for a variety of cybersecurity-related tasks, including:

  • Generating YARA rules for malware detection
  • Assisting in the interpretation and improvement of existing YARA rules
  • Answering questions about YARA syntax and best practices
  • Providing explanations of cybersecurity threats and vulnerabilities
  • Offering guidance on malware analysis and threat hunting techniques

Out-of-Scope Use

This model should not be used for:

  • Generating or assisting in the creation of malicious code
  • Providing legal or professional security advice without expert oversight
  • Making critical security decisions without human verification
  • Replacing professional malware analysis or threat intelligence processes

Bias, Risks, and Limitations

  • The model may reflect biases present in its training data and the original LLaMA 3.1 model.
  • It may occasionally generate incorrect or inconsistent YARA rules, especially for very specific or novel malware families.
  • The model's knowledge is limited to its training data cutoff and does not include real-time threat intelligence.
  • Generated YARA rules should always be reviewed and tested by security professionals before deployment.

Recommendations

Users should verify and test all generated YARA rules before implementation. The model should be used as an assistant tool to aid in rule creation and cybersecurity tasks, not as a replacement for expert knowledge or up-to-date threat intelligence. Always consult with cybersecurity professionals for critical security decisions and rule deployments.
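As recommended above, generated rules should be reviewed before deployment. A minimal, hypothetical pre-screen sketch in plain Python is shown below (assumed structural checks only; real validation should compile the rule with the official YARA engine, e.g. via yara-python):

```python
import re

def basic_yara_sanity_check(rule_text):
    """Return a list of structural problems found in a candidate YARA rule.

    This is a lightweight pre-screen only; actual validation requires
    compiling the rule with the YARA engine itself.
    """
    problems = []
    # Every rule needs a `rule <identifier> {` header.
    if not re.search(r"\brule\s+[A-Za-z_][A-Za-z0-9_]*\s*\{", rule_text):
        problems.append("missing 'rule <name> {' header")
    # Braces must balance.
    if rule_text.count("{") != rule_text.count("}"):
        problems.append("unbalanced braces")
    # A condition section is mandatory in YARA.
    if "condition:" not in rule_text:
        problems.append("missing 'condition:' section")
    return problems

candidate = """
rule powershell_keylogger_indicators {
    strings:
        $a = "GetAsyncKeyState" ascii wide
        $b = "Start-KeyLogger" nocase
    condition:
        any of them
}
"""
print(basic_yara_sanity_check(candidate))  # -> []
```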

How to Get Started with the Model

Use the following code to get started with the model:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel, PeftConfig

# Load the LoRA adapter config, then the base model it points to
adapter_name = "vtriple/Llama-3.1-8B-yara"
config = PeftConfig.from_pretrained(adapter_name)
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_name)
model.eval()

# The tokenizer comes from the base model
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Example usage
prompt = "Generate a YARA rule to detect a PowerShell-based keylogger"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=500)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
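Because this is an instruct-tuned model, results are generally better when the prompt is wrapped in the Llama 3.1 chat format; in practice, prefer tokenizer.apply_chat_template(..., add_generation_prompt=True). The pure-Python sketch below only illustrates the wire format (the special-token names are the published Llama 3 tokens; verify them against the actual tokenizer):

```python
def llama3_chat_prompt(user_message, system_message="You are a YARA rule authoring assistant."):
    """Build a single-turn Llama 3-style chat prompt string.

    Illustrative only -- prefer tokenizer.apply_chat_template in real code.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_message}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = llama3_chat_prompt("Generate a YARA rule to detect a PowerShell-based keylogger")
```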

Training Details

Training Data

The model was fine-tuned on a custom dataset of YARA rules, cybersecurity-related questions and answers, and malware analysis reports.

Training Procedure

Training Hyperparameters

  • Training regime: bf16 mixed precision
  • Optimizer: AdamW
  • Learning rate: 5e-5
  • Batch size: 4
  • Gradient accumulation steps: 4
  • Epochs: 5
  • Max steps: 4000
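The hyperparameters above imply an effective batch size and, assuming the max-steps limit is reached, a rough count of training examples processed. This can be sanity-checked with simple arithmetic:

```python
batch_size = 4
grad_accum_steps = 4
max_steps = 4000

# Each optimizer step consumes batch_size * grad_accum_steps examples
effective_batch = batch_size * grad_accum_steps   # 16
# Upper bound on examples processed if training runs to max_steps
examples_seen = effective_batch * max_steps       # 64,000

print(effective_batch, examples_seen)
```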

Evaluation

A custom YARA evaluation dataset was used to assess the model's ability to generate accurate and effective YARA rules.

Environmental Impact

  • Hardware Type: NVIDIA A100
  • Hours used: 12
  • Cloud Provider: Vast.ai

Technical Specifications

Model Architecture and Objective

This model uses the LLaMA 3.1 8B architecture with additional LoRA adapters for fine-tuning. It was trained using a causal language modeling objective on YARA rules and cybersecurity-specific data.
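LoRA keeps the 8B base weights frozen and trains small low-rank matrices alongside them: for a weight matrix of shape (d_out, d_in), an adapter of rank r adds r * (d_in + d_out) trainable parameters. The sketch below estimates the adapter size under purely illustrative assumptions (rank 16 on two 4096x4096 projections per layer across 32 layers; these are not the actual training settings):

```python
def lora_param_count(d_in, d_out, rank):
    """Trainable parameters added by one LoRA pair (A: rank x d_in, B: d_out x rank)."""
    return rank * (d_in + d_out)

# Illustrative numbers only: Llama 3.1 8B hidden size is 4096;
# assume rank 16 on the q and o projections of all 32 layers.
hidden = 4096
rank = 16
per_matrix = lora_param_count(hidden, hidden, rank)   # 131,072
total = per_matrix * 2 * 32                           # two matrices per layer, 32 layers
print(total)  # 8,388,608 trainable parameters -- roughly 0.1% of the 8B base
```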

Compute Infrastructure

Hardware

Single NVIDIA A100 GPU

Software

  • Python 3.8+
  • PyTorch 2.0+
  • Transformers 4.28+
  • PEFT 0.12.0

Model Card Author

Wyatt Roersma

Model Card Contact

For questions about this model, please email Wyatt Roersma at [email protected].
