OLMo-2-0425-1B GGUF Models

Model Generation Details

This model was generated using llama.cpp at commit 8c83449.

Choosing the Right Model Format

Selecting the correct model format depends on your hardware capabilities and memory constraints.

BF16 (Brain Float 16) – Use if BF16 acceleration is available

  • A 16-bit floating-point format designed for faster computation while retaining good precision.
  • Provides a dynamic range similar to FP32 with lower memory usage.
  • Recommended if your hardware supports BF16 acceleration (check your device's specs; a quick check appears below).
  • Ideal for high-performance inference with a reduced memory footprint compared to FP32.

📌 Use BF16 if:
✔ Your hardware has native BF16 support (e.g., newer GPUs, TPUs).
✔ You want higher precision while saving memory.
✔ You plan to requantize the model into another format.

📌 Avoid BF16 if:
❌ Your hardware does not support BF16 (it may fall back to FP32 and run slower).
❌ You need compatibility with older devices that lack BF16 optimization.
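
A quick way to probe for native BF16 support before downloading is a minimal sketch like the one below, assuming a PyTorch install with a CUDA GPU (this is only a rough proxy for hardware capability; llama.cpp itself does not require PyTorch):

import torch

# Probe for native BF16 support on the current GPU.
if torch.cuda.is_available() and torch.cuda.is_bf16_supported():
    print("Native BF16 available: the BF16 GGUF is a good fit")
else:
    print("No native BF16: prefer F16 or a quantized format")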


F16 (Float 16) – More widely supported than BF16

  • A 16-bit floating-point format with high precision, but a smaller range of values than BF16.
  • Works on most devices with FP16 acceleration support (including many GPUs and some CPUs).
  • Slightly lower numerical precision than BF16 but generally sufficient for inference.

📌 Use F16 if:
✔ Your hardware supports FP16 but not BF16.
✔ You need a balance between speed, memory usage, and accuracy.
✔ You are running on a GPU or another device optimized for FP16 computations.

📌 Avoid F16 if:
❌ Your device lacks native FP16 support (it may run slower than expected).
❌ You have memory limitations.


Quantized Models (Q4_K, Q6_K, Q8, etc.) – For CPU & Low-VRAM Inference

Quantization reduces model size and memory usage while maintaining as much accuracy as possible.

  • Lower-bit models (Q4_K) → Best for minimal memory usage, but with lower precision.
  • Higher-bit models (Q6_K, Q8_0) → Better accuracy, but require more memory.

📌 Use Quantized Models if:
✔ You are running inference on a CPU and need an optimized model.
✔ Your device has low VRAM and cannot load full-precision models.
✔ You want to reduce memory footprint while keeping reasonable accuracy.

📌 Avoid Quantized Models if:
❌ You need maximum accuracy (full-precision models are better for this).
❌ Your hardware has enough VRAM for higher-precision formats (BF16/F16).
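
To make these memory trade-offs concrete, a back-of-envelope file-size estimate is parameters × bits-per-weight / 8. The sketch below uses assumed approximate effective bits per weight for each format; actual GGUF sizes differ somewhat because of block scales and mixed-precision layers:

# Back-of-envelope GGUF size estimate for a 1.48B-parameter model.
PARAMS = 1.48e9

# Approximate effective bits per weight (assumed, not exact).
FORMATS = {"BF16/F16": 16.0, "Q8_0": 8.5, "Q6_K": 6.6, "Q4_K": 4.5, "IQ3_XS": 3.3}

for name, bpw in FORMATS.items():
    print(f"{name:>8}: ~{PARAMS * bpw / 8 / 1e9:.2f} GB")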


Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)

These models are optimized for extreme memory efficiency, making them ideal for low-power devices or large-scale deployments where memory is a critical constraint.

  • IQ3_XS: Ultra-low-bit quantization (3-bit) with extreme memory efficiency.

    • Use case: Best for ultra-low-memory devices where even Q4_K is too large.
    • Trade-off: Lower accuracy compared to higher-bit quantizations.
  • IQ3_S: Small block size for maximum memory efficiency.

    • Use case: Best for low-memory devices where IQ3_XS is too aggressive.
  • IQ3_M: Medium block size for better accuracy than IQ3_S.

    • Use case: Suitable for low-memory devices where IQ3_S is too limiting.
  • Q4_K: 4-bit quantization with block-wise optimization for better accuracy.

    • Use case: Best for low-memory devices where Q6_K is too large.
  • Q4_0: Pure 4-bit quantization, optimized for ARM devices.

    • Use case: Best for ARM-based devices or low-memory environments.

Summary Table: Model Format Selection

| Model Format | Precision | Memory Usage | Device Requirements | Best Use Case |
|--------------|-----------|--------------|---------------------|---------------|
| BF16 | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory |
| F16 | High | High | FP16-supported devices | GPU inference when BF16 isn't available |
| Q4_K | Medium-Low | Low | CPU or low-VRAM devices | Best for memory-constrained environments |
| Q6_K | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized |
| Q8_0 | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models |
| IQ3_XS | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency, lower accuracy |
| Q4_0 | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices |
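
As a concrete starting point, here is a minimal sketch of loading one of these files for CPU inference with the llama-cpp-python bindings (the file path is illustrative; adjust n_ctx and n_threads to your hardware):

from llama_cpp import Llama

# Load a quantized GGUF for CPU inference (path is illustrative).
llm = Llama(
    model_path="OLMo-2-0425-1B-q4_k.gguf",
    n_ctx=4096,   # OLMo 2 supports a 4096-token context
    n_threads=4,  # match your physical core count
)

out = llm("Language modeling is ", max_tokens=64)
print(out["choices"][0]["text"])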

Included Files & Details

OLMo-2-0425-1B-bf16.gguf

  • Model weights preserved in BF16.
  • Use this if you want to requantize the model into a different format.
  • Best if your device supports BF16 acceleration.

OLMo-2-0425-1B-f16.gguf

  • Model weights stored in F16.
  • Use if your device supports FP16, especially if BF16 is not available.

OLMo-2-0425-1B-bf16-q8_0.gguf

  • Output & embeddings remain in BF16.
  • All other layers quantized to Q8_0.
  • Use if your device supports BF16 and you want a quantized version.

OLMo-2-0425-1B-f16-q8_0.gguf

  • Output & embeddings remain in F16.
  • All other layers quantized to Q8_0.

OLMo-2-0425-1B-q4_k.gguf

  • Output & embeddings quantized to Q8_0.
  • All other layers quantized to Q4_K.
  • Good for CPU inference with limited memory.

OLMo-2-0425-1B-q4_k_s.gguf

  • Smallest Q4_K variant, using less memory at the cost of accuracy.
  • Best for very low-memory setups.

OLMo-2-0425-1B-q6_k.gguf

  • Output & embeddings quantized to Q8_0.
  • All other layers quantized to Q6_K.

OLMo-2-0425-1B-q8_0.gguf

  • Fully Q8 quantized model for better accuracy.
  • Requires more memory but offers higher precision.

OLMo-2-0425-1B-iq3_xs.gguf

  • IQ3_XS quantization, optimized for extreme memory efficiency.
  • Best for ultra-low-memory devices.

OLMo-2-0425-1B-iq3_m.gguf

  • IQ3_M quantization, offering a medium block size for better accuracy.
  • Suitable for low-memory devices.

OLMo-2-0425-1B-q4_0.gguf

  • Pure Q4_0 quantization, optimized for ARM devices.
  • Best for low-memory environments.
  • Prefer IQ4_NL for better accuracy.
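
To fetch a single file rather than cloning the whole repository, a minimal sketch using huggingface_hub (the repo_id below is a placeholder for this GGUF repository; substitute the real id):

from huggingface_hub import hf_hub_download

# Download one GGUF file from the Hub; repo_id is a placeholder assumption.
path = hf_hub_download(
    repo_id="<user>/OLMo-2-0425-1B-GGUF",
    filename="OLMo-2-0425-1B-q4_k.gguf",
)
print(path)  # local cached path, ready to pass to llama.cpp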

🚀 If you find these models useful

❤ Please click "Like" if you find this useful!
Help me test my AI-Powered Network Monitor Assistant with quantum-ready security checks:
👉 Free Network Monitor

💬 How to test:
Choose an AI assistant type:

  • TurboLLM (GPT-4o-mini)
  • HugLLM (Hugging Face open-source)
  • TestLLM (Experimental CPU-only)

What I'm Testing

I'm pushing the limits of small open-source models for AI network monitoring, specifically:

  • Function calling against live network services
  • How small can a model go while still handling:
    • Automated Nmap scans
    • Quantum-readiness checks
    • Network Monitoring tasks

🟡 TestLLM – Current experimental model (llama.cpp on 2 CPU threads):

  • ✅ Zero-configuration setup
  • ⏳ 30s load time (slow inference but no API costs)
  • 🔧 Help wanted! If you're into edge-device AI, let's collaborate!

Other Assistants

🟢 TurboLLM – Uses gpt-4o-mini.

🔵 HugLLM – Latest open-source models:

  • 🌐 Runs on Hugging Face Inference API

💡 Example commands you could test:

  1. "Give me info on my website's SSL certificate"
  2. "Check if my server is using quantum-safe encryption for communication"
  3. "Run a comprehensive security audit on my server"
  4. "Create a cmd processor to .. (whatever you want)" Note: you need to install a Free Network Monitor Agent to run the .NET code. This is a very flexible and powerful feature. Use with caution!

Model Details


Model Card for OLMo 2 1B

We introduce OLMo 2 1B, the smallest model in the OLMo 2 family. OLMo 2 was pre-trained on OLMo-mix-1124 and uses Dolmino-mix-1124 for mid-training.

OLMo 2 is the latest in a series of Open Language Models designed to enable the science of language models. We have released all code, checkpoints, logs, and associated training details on GitHub.

| Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
|------|-----------------|--------|-------------|-----------------|----------------|
| OLMo 2 1B | 4 Trillion | 16 | 2048 | 16 | 4096 |
| OLMo 2 7B | 4 Trillion | 32 | 4096 | 32 | 4096 |
| OLMo 2 13B | 5 Trillion | 40 | 5120 | 40 | 4096 |
| OLMo 2 32B | 6 Trillion | 64 | 5120 | 40 | 4096 |

The core models released in this batch correspond to the sizes listed above.

Installation

OLMo 2 1B is supported in transformers v4.48 or higher:

pip install 'transformers>=4.48'

If using vLLM, you will need to install it from the main branch until v0.7.4 is released.

Inference

You can use OLMo with the standard HuggingFace transformers library:

from transformers import AutoModelForCausalLM, AutoTokenizer
olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-0425-1B")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-0425-1B")
message = ["Language modeling is "]
inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
# optional: move inputs and model to CUDA
# inputs = {k: v.to('cuda') for k,v in inputs.items()}
# olmo = olmo.to('cuda')
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
>> 'Language modeling is  a key component of any text-based application, but its effectiveness...'

For faster performance, you can quantize the model using the following method:

import torch

olmo = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-2-0425-1B",
    torch_dtype=torch.float16,
    load_in_8bit=True)  # requires bitsandbytes

The quantized model is more sensitive to data types and CUDA operations. To avoid potential issues, it's recommended to pass the inputs directly to CUDA using:

inputs.input_ids.to('cuda')
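
Putting these pieces together, a minimal end-to-end sketch of the 8-bit path (assumes bitsandbytes is installed and a CUDA GPU is available):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# 8-bit load places the weights on the GPU automatically.
olmo = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-2-0425-1B",
    torch_dtype=torch.float16,
    load_in_8bit=True)  # requires bitsandbytes
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-0425-1B")

inputs = tokenizer(["Language modeling is "], return_tensors='pt', return_token_type_ids=False)
inputs = {k: v.to('cuda') for k, v in inputs.items()}  # pass inputs directly to CUDA

response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])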

We have released checkpoints for these models. For pretraining, the naming convention is stage1-stepXXX-tokensYYYB. For checkpoints with ingredients of the model soup, the naming convention is stage2-ingredientN-stepXXX-tokensYYYB.

To load a specific model revision with HuggingFace, simply add the argument revision:

olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-0425-1B", revision="stage1-step140000-tokens294B")

Or, you can access all the revisions for the models via the following code snippet:

from huggingface_hub import list_repo_refs
out = list_repo_refs("allenai/OLMo-2-0425-1B")
branches = [b.name for b in out.branches]
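
For example, to list only the stage-1 pretraining checkpoints by the naming convention above:

from huggingface_hub import list_repo_refs

out = list_repo_refs("allenai/OLMo-2-0425-1B")
# Keep branches that follow the stage1-stepXXX-tokensYYYB convention.
stage1 = sorted(b.name for b in out.branches if b.name.startswith("stage1-"))
print(stage1[:5])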

Fine-tuning

Model fine-tuning can be done from the final checkpoint (the main revision of this model) or many intermediate checkpoints. Two recipes for tuning are available.

  1. Fine-tune with the OLMo repository:
torchrun --nproc_per_node=8 scripts/train.py {path_to_train_config} \
    --data.paths=[{path_to_data}/input_ids.npy] \
    --data.label_mask_paths=[{path_to_data}/label_mask.npy] \
    --load_path={path_to_checkpoint} \
    --reset_trainer_state

For more documentation, see the GitHub README.

  2. Further fine-tuning support is being developed in AI2's Open Instruct repository. Details are here.

Model Description

  • Developed by: Allen Institute for AI (Ai2)
  • Model type: a Transformer-style autoregressive language model.
  • Language(s) (NLP): English
  • License: The code and model are released under Apache 2.0.
  • Contact: Technical inquiries: [email protected]. Press: [email protected]
  • Date cutoff: Dec. 2023.

Model Sources

  • Paper: https://arxiv.org/abs/2501.00656

Evaluation

Core model results for OLMo 2 1B are found below.

| Instruct Model | Avg | FLOP×10²³ | AE2 | BBH | DROP | GSM8K | IFE | MATH | MMLU | Safety | PQA | TQA |
|----------------|-----|-----------|-----|-----|------|-------|-----|------|------|--------|-----|-----|
| Closed API models | | | | | | | | | | | | |
| GPT-3.5 Turbo 0125 | 60.5 | n/a | 38.7 | 66.6 | 70.2 | 74.3 | 66.9 | 41.2 | 70.2 | 69.1 | 45.0 | 62.9 |
| GPT 4o Mini 0724 | 65.7 | n/a | 49.7 | 65.9 | 36.3 | 83.0 | 83.5 | 67.9 | 82.2 | 84.9 | 39.0 | 64.8 |
| Open weights models 1-1.7B Parameters | | | | | | | | | | | | |
| SmolLM2 1.7B | 34.2 | 1.1 | 5.8 | 39.8 | 30.9 | 45.3 | 51.6 | 20.3 | 34.3 | 52.4 | 16.4 | 45.3 |
| Gemma 3 1B | 38.3 | 1.2 | 20.4 | 39.4 | 25.1 | 35.0 | 60.6 | 40.3 | 38.9 | 70.2 | 9.6 | 43.8 |
| Llama 3.1 1B | 39.3 | 6.7 | 10.1 | 40.2 | 32.2 | 45.4 | 54.0 | 21.6 | 46.7 | 87.2 | 13.8 | 41.5 |
| Qwen 2.5 1.5B | 41.7 | 1.7 | 7.4 | 45.8 | 13.4 | 66.2 | 44.2 | 40.6 | 59.7 | 77.6 | 15.5 | 46.5 |
| Fully-open models | | | | | | | | | | | | |
| OLMo 1B 0724 | 24.4 | 0.22 | 2.4 | 29.9 | 27.9 | 10.8 | 25.3 | 2.2 | 36.6 | 52.0 | 12.1 | 44.3 |
| OLMo 2 1B | 42.7 | 0.35 | 9.1 | 35.0 | 34.6 | 68.3 | 70.1 | 20.7 | 40.0 | 87.6 | 12.9 | 48.7 |

Model Details

Training

| | OLMo 2 1B | OLMo 2 7B | OLMo 2 13B | OLMo 2 32B |
|---|---|---|---|---|
| Pretraining Stage 1 | 4 trillion tokens (1 epoch) | 4 trillion tokens (1 epoch) | 5 trillion tokens (1.2 epochs) | 6 trillion tokens (1.5 epochs) |
| Pretraining Stage 2 | 50B tokens | 50B tokens (3 runs), merged | 100B tokens (3 runs) + 300B tokens (1 run), merged | 100B tokens (3 runs) + 300B tokens (1 run), merged |
| Post-training | SFT + DPO + GRPO (preference mix) | SFT + DPO + PPO (preference mix) | SFT + DPO + PPO (preference mix) | SFT + DPO + GRPO (preference mix) |

Stage 1: Initial Pretraining

  • Dataset: OLMo-mix-1124 (3.9T tokens)
  • Coverage: 95%+ of total pretraining budget
  • 1B Model: ~1 epoch

Stage 2: Mid-training

  • Dataset: Dolmino-Mix-1124
  • One training mix:
    • 50B tokens
  • Mix composition: 50% high-quality web data + academic/Q&A/instruction/math content

Model Merging

  • 1B Model: only one version was trained, on the 50B-token mix; no merging was performed.

Bias, Risks, and Limitations

Like any base or fine-tuned language model, OLMo can be prompted by users to generate harmful or sensitive content. Such content may also be produced unintentionally, especially in cases involving bias, so we recommend that users consider the risks when applying this technology. Additionally, many statements from OLMo, as from any LLM, can be inaccurate, so facts should be verified.

Citation

@misc{olmo20242olmo2furious,
      title={{2 OLMo 2 Furious}},
      author={Team OLMo and Pete Walsh and Luca Soldaini and Dirk Groeneveld and Kyle Lo and Shane Arora and Akshita Bhagia and Yuling Gu and Shengyi Huang and Matt Jordan and Nathan Lambert and Dustin Schwenk and Oyvind Tafjord and Taira Anderson and David Atkinson and Faeze Brahman and Christopher Clark and Pradeep Dasigi and Nouha Dziri and Michal Guerquin and Hamish Ivison and Pang Wei Koh and Jiacheng Liu and Saumya Malik and William Merrill and Lester James V. Miranda and Jacob Morrison and Tyler Murray and Crystal Nam and Valentina Pyatkin and Aman Rangapur and Michael Schmitz and Sam Skjonsberg and David Wadden and Christopher Wilhelm and Michael Wilson and Luke Zettlemoyer and Ali Farhadi and Noah A. Smith and Hannaneh Hajishirzi},
      year={2024},
      eprint={2501.00656},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.00656},
}

Model Card Contact

For errors in this model card, contact [email protected].
