arxiv:2504.21039

Llama-3.1-FoundationAI-SecurityLLM-Base-8B Technical Report

Published on Apr 28 · Submitted by AmanPriyanshu on May 1
Abstract

As transformer-based large language models (LLMs) increasingly permeate society, they have revolutionized domains such as software engineering, creative writing, and digital arts. However, their adoption in cybersecurity remains limited due to challenges like scarcity of specialized training data and complexity of representing cybersecurity-specific knowledge. To address these gaps, we present Foundation-Sec-8B, a cybersecurity-focused LLM built on the Llama 3.1 architecture and enhanced through continued pretraining on a carefully curated cybersecurity corpus. We evaluate Foundation-Sec-8B across both established and new cybersecurity benchmarks, showing that it matches Llama 3.1-70B and GPT-4o-mini in certain cybersecurity-specific tasks. By releasing our model to the public, we aim to accelerate progress and adoption of AI-driven tools in both public and private cybersecurity contexts.

Community

Comment from the paper author and submitter:

This paper introduces Foundation-Sec-8B, a cybersecurity-focused LLM based on the Llama 3.1 architecture with continued pretraining on a specialized security corpus. Evaluation demonstrates performance comparable to larger models on security-specific tasks. The model is publicly released with open weights to support broader adoption of AI within cybersecurity contexts (https://huggingface.co/fdtn-ai/Foundation-Sec-8B).
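Since Foundation-Sec-8B is released as a base (non-instruct) model on the Hugging Face Hub, it can be used for raw text completion via the `transformers` library. Below is a minimal sketch; the model ID comes from the model card linked above, while the dtype, device placement, and generation settings are illustrative assumptions rather than the authors' recommended recipe.

```python
# Sketch: text completion with Foundation-Sec-8B via Hugging Face transformers.
# Requires `pip install transformers torch`, substantial memory for an
# 8B-parameter model, and ideally a GPU.

MODEL_ID = "fdtn-ai/Foundation-Sec-8B"  # from the model card linked above


def complete(prompt: str, max_new_tokens: int = 64) -> str:
    """Return the model's continuation of `prompt` (imports are local so the
    module can be inspected without transformers/torch installed)."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # assumed; halves memory vs. fp32
        device_map="auto",           # place weights on available GPU(s)/CPU
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)


# Example usage (not executed here; downloads the model weights):
# print(complete("CVE-2021-44228 is a critical vulnerability in"))
```

Because this is a base model, prompts should be phrased as text to continue rather than as chat-style instructions.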


Models citing this paper 2

Datasets citing this paper 0


Spaces citing this paper 1

Collections including this paper 1