---
base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
pipeline_tag: text-generation
---
# Oracle-LLM-8B-GGUF
This is a fine-tuned version of deepseek-ai/DeepSeek-R1-Distill-Llama-8B, converted to GGUF format (Q4_K_M quantization) for use with LM Studio and llama.cpp. It is designed for cybersecurity tasks, specializing in network security, Blue Team/Red Team strategies, and securing operating systems.
## Model Details

- Base Model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
- Fine-Tuning: Trained on the Cybersecurity Bundle and Kali Linux Bundle, with a focus on adversarial tradecraft, penetration testing, and defensive strategies.
- Quantization: Q4_K_M (4-bit) to fit on GPUs with 12GB VRAM (e.g., RTX 4070 Super).
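As a rough sanity check on why the 4-bit file fits in 12 GB of VRAM, you can estimate the weight footprint from the parameter count and the effective bits per weight (the ~4.5 bits/weight figure for Q4_K_M is an assumption; actual GGUF file sizes vary with the per-tensor quantization mix, and KV cache and activations need additional memory):

```python
def gguf_weight_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate on-disk/in-memory size of the model weights in GB."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

q4_km = gguf_weight_size_gb(8.0, 4.5)   # ~4.5 GB, fits a 12 GB card with room for KV cache
fp16 = gguf_weight_size_gb(8.0, 16.0)   # ~16 GB, too large for a 12 GB card
print(f"Q4_K_M: ~{q4_km:.1f} GB, FP16: ~{fp16:.1f} GB")
```

This is why the full-precision `oracle_llm_8b.gguf` generally needs to run partly on CPU, while the Q4_K_M file can be fully GPU-offloaded.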
## Usage with LM Studio
- Install LM Studio from https://lmstudio.ai/.
- Download `oracle_llm_8b_q4km.gguf` from this repository.
- Load the model with 4-bit quantization (Q4_K_M).
- Set the system prompt.
- Repeat with `oracle_llm_8b.gguf` (the full-precision model) if you want unquantized weights.
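The same GGUF file can also be run directly with llama.cpp. A minimal sketch (the `llama-cli` binary name and flags match recent llama.cpp builds; adjust the model path to wherever you downloaded the file):

```shell
# Offload all layers to the GPU (-ngl 99) and open a 4096-token context (-c).
./llama-cli -m ./oracle_llm_8b_q4km.gguf -ngl 99 -c 4096 \
  -p "Explain the difference between Blue Team and Red Team exercises."
```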
## Disclaimer
This model does not generate malware, whether for educational or any other purpose; it can, however, analyze files. It is intended as a lighter, faster reasoning model. Oracle-LLM-8B-GGUF is provided for educational and research purposes only. The creator is not responsible for any misuse, including the creation, distribution, or use of malware, or any other illegal or harmful activity. Users assume full responsibility for their actions and should use this model at their own discretion, in compliance with all applicable laws and ethical standards.