A newer version of this model is available: ibm-granite/granite-3.3-2b-instruct

granite-3.2-2b-instruct GGUF Models

Model Generation Details

This model was generated using llama.cpp at commit 5dd942de.


Click here to get info on choosing the right GGUF model format
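Since the files in this repo are GGUF quantizations intended for llama.cpp-compatible runtimes, here is a minimal local-inference sketch using the llama-cpp-python bindings. The filename is a placeholder assumption; substitute whichever quant you downloaded and adjust n_ctx/n_threads to your hardware.

# Minimal sketch: run a downloaded GGUF quant locally with llama-cpp-python.
# The filename below is hypothetical; use the quant file you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="granite-3.2-2b-instruct-Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,      # context window to allocate
    n_threads=4,     # CPU threads to use for inference
)

messages = [{"role": "user", "content": "List three use cases for long-context models."}]
result = llm.create_chat_completion(messages=messages, max_tokens=256)
print(result["choices"][0]["message"]["content"])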

Granite-3.2-2B-Instruct

Model Summary: Granite-3.2-2B-Instruct is a 2-billion-parameter, long-context AI model fine-tuned for thinking capabilities. Built on top of Granite-3.1-2B-Instruct, it has been trained using a mix of permissively licensed open-source datasets and internally generated synthetic data designed for reasoning tasks. The model allows controllability of its thinking capability, ensuring it is applied only when required.

Supported Languages: English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may fine-tune this Granite model for languages beyond these 12.

Intended Use: This model is designed to handle general instruction-following tasks and can be integrated into AI assistants across various domains, including business applications.

Capabilities

  • Thinking
  • Summarization
  • Text classification
  • Text extraction
  • Question-answering
  • Retrieval Augmented Generation (RAG)
  • Code related tasks
  • Function-calling tasks
  • Multilingual dialog use cases
  • Long-context tasks including long document/meeting summarization, long document QA, etc.

Generation: This is a simple example of how to use the Granite-3.2-2B-Instruct model.

Install the following libraries:

pip install torch torchvision torchaudio
pip install accelerate
pip install transformers

Then, copy the snippet from the section that is relevant for your use case.

from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
import torch

model_path = "ibm-granite/granite-3.2-2b-instruct"
device = "cuda"

# Load the model in bfloat16 and place it on the GPU.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map=device,
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

conv = [{"role": "user", "content": "You have 10 liters of a 30% acid solution. How many liters of a 70% acid solution must be added to achieve a 50% acid mixture?"}]

# thinking=True makes the chat template request the model's reasoning trace.
input_ids = tokenizer.apply_chat_template(conv, return_tensors="pt", thinking=True, return_dict=True, add_generation_prompt=True).to(device)

set_seed(42)
output = model.generate(
    **input_ids,
    max_new_tokens=8192,
)

# Decode only the newly generated tokens, skipping the prompt.
prediction = tokenizer.decode(output[0, input_ids["input_ids"].shape[1]:], skip_special_tokens=True)
print(prediction)
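The thinking flag in apply_chat_template controls whether the reasoning trace shown below is produced. A minimal variation of the snippet above, reusing the same conv, model, and tokenizer objects, disables it:

# Reuses conv, model, tokenizer, and device from the snippet above.
# thinking=False asks the chat template to omit the reasoning trace,
# so the model answers directly (compare the two example outputs below).
input_ids = tokenizer.apply_chat_template(
    conv, return_tensors="pt", thinking=False, return_dict=True, add_generation_prompt=True
).to(device)

set_seed(42)
output = model.generate(**input_ids, max_new_tokens=8192)
print(tokenizer.decode(output[0, input_ids["input_ids"].shape[1]:], skip_special_tokens=True))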

Example Outputs

  • thinking=True
Here is my thought process:

1. First, let's understand the composition of the initial 10 liters of 30% acid solution:
   - Acid content = 30% of 10 liters = 0.30 * 10 = 3 liters

2. We want to end up with a 50% acid solution. Let's denote the volume of the 70% acid solution to be added as 'x' liters.

3. The amount of acid in the final mixture (50% solution) should equal the initial acid content plus the acid from the added 70% solution.
   - Total acid in final mixture = 0.50 * (10 + x)

4. The equation representing the total acid content is:
   3 (from initial solution) + 0.70 * x (from added solution) = 0.50 * (10 + x)

5. Now, we solve for 'x':
   3 + 0.70x = 5 + 0.50x
   0.70x - 0.50x = 5 - 3
   0.20x = 2
   x = 2 / 0.20
   x = 10 liters

6. Therefore, 10 liters of a 70% acid solution must be added to the initial 10 liters of 30% acid solution to achieve a 50% acid mixture.

Here is my response:

To solve this problem, we need to determine how much of a 70% acid solution must be added to a 10-liter solution that is already 30% acid to create a new mixture that is 50% acid. Here's how to calculate it:

1. **Initial Acid Content**:
   - The initial 10-liter solution contains 3 liters of acid (since 30% of 10 liters).

2. **Desired Final Acid Content**:
   - We want the final mixture to be 50% acid.

3. **Equation Setup**:
   - Let 'x' be the volume (in liters) of the 70% acid solution to be added.
   - The equation representing the total acid content in the final mixture (which is 10 liters from the initial solution plus 'x' liters of the added solution) should equal 50% of the total volume (10 + x).
   - This gives us: 3 (acid from initial solution) + 0.70x (acid from added solution) = 0.50(10 + x)

4. **Solving the Equation**:
   - First, simplify the right side of the equation: 0.50(10 + x) = 5 + 0.50x
   - Now, the equation is: 3 + 0.70x = 5 + 0.50x
   - Subtract 0.50x from both sides: 0.20x = 2
   - Divide both sides by 0.20: x = 2 / 0.20
   - x = 10 liters

5. **Conclusion**:
   - To achieve a 50% acid mixture from the initial 30% solution, you need to add 10 liters of a 70% acid solution.

So, the final answer is that 10 liters of a 70% acid solution must be added to the initial 10 liters of 30% acid solution.
  • thinking=False
To solve this problem, we'll use the concept of volume and concentration. Let's denote the amount of 70% acid solution we need to add as "x" liters.

First, let's find out how much acid is in the initial 10-liter solution:

Acid in initial solution = 30% of 10 liters = 0.30 * 10 = 3 liters

Now, we want to end up with a 50% acid solution in a total volume of (10 + x) liters. Let's denote the final volume as V.

Final acid concentration = 50%
Final acid amount = 50% of V = 0.50 * V

We know the initial acid amount and the final acid amount, so we can set up an equation:

Initial acid amount + Acid from added solution = Final acid amount
3 liters + (70% of x) = 0.50 * (10 + x)

Now, let's solve for x:

0.70x + 3 = 0.50 * 10 + 0.50x
0.70x - 0.50x = 0.50 * 10 - 3
0.20x = 5 - 3
0.20x = 2
x = 2 / 0.20
x = 10 liters

So, you need to add 10 liters of a 70% acid solution to the initial 10-liter 30% acid solution to achieve a 50% acid mixture.

Evaluation Results:

| Models | ArenaHard | Alpaca-Eval-2 | MMLU | PopQA | TruthfulQA | BigBenchHard | DROP | GSM8K | HumanEval | HumanEval+ | IFEval | AttaQ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Llama-3.1-8B-Instruct | 36.43 | 27.22 | 69.15 | 28.79 | 52.79 | 72.66 | 61.48 | 83.24 | 85.32 | 80.15 | 79.10 | 83.43 |
| DeepSeek-R1-Distill-Llama-8B | 17.17 | 21.85 | 45.80 | 13.25 | 47.43 | 65.71 | 44.46 | 72.18 | 67.54 | 62.91 | 66.50 | 42.87 |
| Qwen-2.5-7B-Instruct | 25.44 | 30.34 | 74.30 | 18.12 | 63.06 | 70.40 | 54.71 | 84.46 | 93.35 | 89.91 | 74.90 | 81.90 |
| DeepSeek-R1-Distill-Qwen-7B | 10.36 | 15.35 | 50.72 | 9.94 | 47.14 | 65.04 | 42.76 | 78.47 | 79.89 | 78.43 | 59.10 | 42.45 |
| Granite-3.1-8B-Instruct | 37.58 | 30.34 | 66.77 | 28.7 | 65.84 | 68.55 | 50.78 | 79.15 | 89.63 | 85.79 | 73.20 | 85.73 |
| Granite-3.1-2B-Instruct | 23.3 | 27.17 | 57.11 | 20.55 | 59.79 | 54.46 | 18.68 | 67.55 | 79.45 | 75.26 | 63.59 | 84.7 |
| Granite-3.2-8B-Instruct | 55.25 | 61.19 | 66.79 | 28.04 | 66.92 | 64.77 | 50.95 | 81.65 | 89.35 | 85.72 | 74.31 | 85.42 |
| Granite-3.2-2B-Instruct | 24.86 | 34.51 | 57.18 | 20.56 | 59.8 | 52.27 | 21.12 | 67.02 | 80.13 | 73.39 | 61.55 | 83.23 |

Training Data: Overall, our training data is largely composed of two key sources: (1) publicly available datasets with permissive licenses, and (2) internally generated synthetic data targeted at enhancing reasoning capabilities.

Infrastructure: We train Granite-3.2-2B-Instruct using IBM's supercomputing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs.

Ethical Considerations and Limitations: Granite-3.2-2B-Instruct builds upon Granite-3.1-2B-Instruct, leveraging both permissively licensed open-source and select proprietary data for enhanced performance. Since it inherits its foundation from the previous model, all ethical considerations and limitations applicable to Granite-3.1-2B-Instruct remain relevant.

Resources


πŸš€ If you find these models useful

Help me test my AI-Powered Quantum Network Monitor Assistant with quantum-ready security checks:

πŸ‘‰ Quantum Network Monitor

The full open-source code for the Quantum Network Monitor Service is available at my GitHub repos (repos with NetworkMonitor in the name): Source Code Quantum Network Monitor. You will also find the code I use to quantize the models, if you want to do it yourself, at GGUFModelBuilder.
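For reference, here is a rough sketch of the underlying llama.cpp conversion and quantization steps (this is not the GGUFModelBuilder pipeline itself; the local paths and the Q4_K_M target are assumptions for illustration):

# Rough sketch of converting an HF checkpoint to GGUF and quantizing it with
# llama.cpp. Paths and the Q4_K_M target are illustrative assumptions; this is
# not the GGUFModelBuilder pipeline itself.
import subprocess

# 1. Convert the downloaded Hugging Face checkpoint to a 16-bit GGUF file.
subprocess.run(
    ["python", "llama.cpp/convert_hf_to_gguf.py", "./granite-3.2-2b-instruct",
     "--outfile", "granite-3.2-2b-instruct-f16.gguf"],
    check=True,
)

# 2. Quantize the 16-bit GGUF down to a smaller format (Q4_K_M shown here).
subprocess.run(
    ["llama.cpp/build/bin/llama-quantize",
     "granite-3.2-2b-instruct-f16.gguf",
     "granite-3.2-2b-instruct-Q4_K_M.gguf",
     "Q4_K_M"],
    check=True,
)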

πŸ’¬ How to test:
Choose an AI assistant type:

  • TurboLLM (GPT-4.1-mini)
  • HugLLM (Hugging Face open-source models)
  • TestLLM (Experimental CPU-only)

What I’m Testing

I’m pushing the limits of small open-source models for AI network monitoring, specifically:

  • Function calling against live network services
  • How small can a model go while still handling:
    • Automated Nmap security scans
    • Quantum-readiness checks
    • Network Monitoring tasks

🟑 TestLLM – Current experimental model (llama.cpp on 2 CPU threads on a Hugging Face Docker space):

  • βœ… Zero-configuration setup
  • ⏳ 30s load time (slow inference, but no API costs). No token limit, as the cost is low.
  • πŸ”§ Help wanted! If you’re into edge-device AI, let’s collaborate!

Other Assistants

🟒 TurboLLM – Uses gpt-4.1-mini:

  • It performs very well, but unfortunately OpenAI charges per token, so token usage is limited.
  • Create custom cmd processors to run .net code on Quantum Network Monitor Agents
  • Real-time network diagnostics and monitoring
  • Security Audits
  • Penetration testing (Nmap/Metasploit)

πŸ”΅ HugLLM – Latest Open-source models:

  • 🌐 Runs on the Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita.

πŸ’‘ Example commands you could test:

  1. "Give me info on my websites SSL certificate"
  2. "Check if my server is using quantum safe encyption for communication"
  3. "Run a comprehensive security audit on my server"
  4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a Quantum Network Monitor Agent to run the .net code on. This is a very flexible and powerful feature. Use with caution!

Final Word

I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAIβ€”all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is open source. Feel free to use whatever you find helpful.

If you appreciate the work, please consider buying me a coffee β˜•. Your support helps cover service costs and allows me to raise token limits for everyone.

I'm also open to job opportunities or sponsorship.

Thank you! 😊
