---
license: apache-2.0
base_model:
- Zyphra/Zamba2-7B
library_name: transformers
pipeline_tag: text-generation
---

# Model Card for Zamba2-7B-Instruct-v2

Zamba2-7B-Instruct-v2 is obtained from [Zamba2-7B](https://huggingface.co/Zyphra/Zamba2-7B) by fine-tuning on instruction-following and chat datasets.

Zamba2-7B-Instruct-v2 is a hybrid model composed of state-space ([Mamba2](https://github.com/state-spaces/mamba)) and transformer blocks. The context window can be extended from 4k to 16k tokens by adjusting the rope frequency in the attention blocks (as described below).

## Quick start

### Prerequisites

To use Zamba2-7B-Instruct-v2, install `transformers`:

`pip install transformers -U`

To install the dependencies necessary to run the optimized Mamba2 kernels, install `mamba-ssm` from source (due to compatibility issues with PyTorch) as well as `causal-conv1d`:

1. `git clone https://github.com/state-spaces/mamba.git`
2. `cd mamba && git checkout v2.1.0 && pip install .`
3. `pip install causal-conv1d`

You can run the model without the optimized Mamba2 kernels, but this is **not** recommended, as it results in significantly higher latency and memory usage. (A quick check that the kernels are importable is sketched at the end of this section.)

### Inference

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Instantiate model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba2-7B-Instruct-v2")
model = AutoModelForCausalLM.from_pretrained("Zyphra/Zamba2-7B-Instruct-v2", device_map="cuda", torch_dtype=torch.bfloat16)

# Format the input as a chat template
user_turn_1 = "In one season a flower blooms three times. In one year, there is one blooming season. How many times do two flowers bloom in two years? Please include your logic."
assistant_turn_1 = "In one season, a flower blooms three times. In one year, there is one blooming season. Therefore, in two years, there are two blooming seasons. Since each flower blooms three times in one season, in two blooming seasons, each flower will bloom six times. Since there are two flowers, the total number of times they will bloom in two years is 12."
user_turn_2 = "How many times do the two flowers blossom in three years?"
sample = [{'role': 'user', 'content': user_turn_1}, {'role': 'assistant', 'content': assistant_turn_1}, {'role': 'user', 'content': user_turn_2}]
chat_sample = tokenizer.apply_chat_template(sample, tokenize=False)

# Tokenize input and generate output
input_ids = tokenizer(chat_sample, return_tensors='pt', add_special_tokens=False).to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=150, return_dict_in_generate=False, output_scores=False, use_cache=True, num_beams=1, do_sample=False)
print(tokenizer.decode(outputs[0]))
```

To use the context-extended version of Zamba, load the model with `use_long_context=True`, i.e.:

```python
model = AutoModelForCausalLM.from_pretrained("Zyphra/Zamba2-7B-Instruct-v2", device_map="cuda", torch_dtype=torch.bfloat16, use_long_context=True)
```
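If you want to confirm that the optimized kernels installed in the prerequisites are actually importable before loading the model, a minimal check along the following lines can help. This snippet is not part of the official instructions; it only tests that the `mamba_ssm` and `causal_conv1d` packages import cleanly.

```python
# Convenience check (not from the original card): verify that the
# optimized kernel packages installed above can be imported.
try:
    import mamba_ssm        # built from the state-spaces/mamba repository
    import causal_conv1d    # installed via `pip install causal-conv1d`
    print("Optimized Mamba2 kernels are available.")
except ImportError as err:
    print(f"Optimized kernels not found; expect slower inference: {err}")
```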
## Performance

Zamba2-7B-Instruct-v2 punches above its weight, achieving extremely strong instruction-following benchmark scores.

| Model | Size (B) | IFEval | BBH | GPQA | MATH (Hard) | MMLU Pro | MUSR | Aggregate |
|:------|:--------:|:------:|:---:|:----:|:-----------:|:--------:|:----:|:---------:|
| Zamba2-7B-Instruct-v2 | 7.36 | 81.63 | 36.72 | 8.60 | 17.76 | 34.51 | 11.94 | 31.78 |
| Zamba2-7B-Instruct | 7.36 | 69.89 | 36.18 | 8.81 | 13.02 | 32.81 | 9.20 | 28.32 |
| Granite-3.1-8B-Instruct | 8.17 | 72.20 | 38.68 | 8.23 | 19.91 | 35.22 | 17.36 | 31.93 |
| Llama-3.1-8B-Instruct | 8.03 | 78.07 | 34.68 | 2.74 | 17.10 | 37.83 | 8.13 | 29.76 |
| Mistral-NeMo-Minitron-8B-Instruct | 8.00 | 58.51 | 31.50 | 3.91 | 5.81 | 32.87 | 10.93 | 23.92 |
| Gemma2-9B-it | 9.24 | 74.35 | 46.46 | 13.38 | 0.12 | 38.73 | 9.66 | 30.45 |
| Ministral-8B-Instruct-2410 | 8.02 | 52.02 | 38.45 | 6.12 | 11.15 | 39.87 | 8.06 | 25.95 |
| Qwen2.5-7B-Instruct | 7.62 | 75.30 | 39.82 | 6.02 | 48.91 | 42.95 | 8.77 | 36.96 |

Moreover, due to its unique hybrid SSM architecture, Zamba2-7B-Instruct-v2 achieves extremely low inference latency and rapid generation with a significantly smaller memory footprint than comparable transformer-based models.

Time to First Token (TTFT) | Output Generation
:-------------------------:|:-------------------------:
![](https://cdn-uploads.huggingface.co/production/uploads/65bc13717c6ad1994b6619e9/BmE8X6tDNVw5OJcbZt8sZ.png) | ![](https://cdn-uploads.huggingface.co/production/uploads/65bc13717c6ad1994b6619e9/wECFotH6dgnyZH7qgD4H8.png)

And memory overhead:
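The latency and memory figures above were measured by Zyphra. To get a rough sense of these numbers on your own hardware, a minimal timing sketch such as the one below can be used. This script is not part of the original card; the prompt and token counts are arbitrary, and results depend on your GPU, batch size, and whether the optimized kernels are installed.

```python
import time
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the model as in the Quick start section.
tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba2-7B-Instruct-v2")
model = AutoModelForCausalLM.from_pretrained("Zyphra/Zamba2-7B-Instruct-v2", device_map="cuda", torch_dtype=torch.bfloat16)

# Arbitrary single-turn prompt, formatted with the chat template.
chat = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Summarize the benefits of hybrid SSM-transformer models."}],
    tokenize=False,
)
inputs = tokenizer(chat, return_tensors="pt", add_special_tokens=False).to("cuda")

# Warm up once so one-time setup costs do not skew the timings.
model.generate(**inputs, max_new_tokens=8, do_sample=False, use_cache=True)

# Time to first token: generate a single new token.
torch.cuda.synchronize()
start = time.perf_counter()
model.generate(**inputs, max_new_tokens=1, do_sample=False, use_cache=True)
torch.cuda.synchronize()
ttft = time.perf_counter() - start

# Generation throughput: amortize over a longer completion.
torch.cuda.synchronize()
start = time.perf_counter()
out = model.generate(**inputs, max_new_tokens=128, do_sample=False, use_cache=True)
torch.cuda.synchronize()
elapsed = time.perf_counter() - start
new_tokens = out.shape[1] - inputs["input_ids"].shape[1]

print(f"TTFT: {ttft:.3f} s | throughput: {new_tokens / elapsed:.1f} tokens/s | "
      f"peak GPU memory: {torch.cuda.max_memory_allocated() / 2**30:.1f} GiB")
```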