
Llama-TFree-HAT-Pretrained-7B-DPO

NOTE: This model was pretrained from scratch and fine-tuned making use of Llama 3.3 for data filtering. In adherence to the Llama license, we therefore prefix the model name with "Llama".


This model card provides an overview of our Llama-TFree-HAT-Pretrained-7B-DPO model, which is a tokenizer-free (TFree) foundation model developed by Aleph Alpha Research* and publicly available under the Open Aleph License, a license explicitly allowing for non-commercial research and educational use.

The model is based on our Hierarchical Autoregressive Transformer (HAT) architecture, originally described in our paper. This novel architecture integrates character-level encoding and decoding with a word-level backbone, allowing for improved text compression (fewer sequence positions) and performance in the languages it has been trained on, potentially higher robustness to prompt changes, and improved adaptability to new languages and domains via fine-tuning.

The model was initialized from TFree-HAT-Pretrained-7B-Base and post-trained and direct-preference-optimized in English and German on carefully curated data in compliance with applicable EU and national regulations, including copyright and data privacy laws. It shows strong proficiency in German, while also beating Llama 3.1 on many benchmarks in English. The direct-preference-optimization of Llama-TFree-HAT-Pretrained-7B-DPO prioritizes helpfulness and instruction following, making the model suitable for sensitive applications without the risk of over-refusal. The model has not been optimized for code generation or math and is thus not evaluated extensively on the respective benchmarks.

You can find model weights and their corresponding safetensors conversions at the links in the Model Access section below.

Model Access

We provide access to our models through the channels listed below.

  • HuggingFace: The model's weights as well as a basic inference implementation are available on HuggingFace under the Open Aleph License, a license explicitly allowing for non-commercial research and educational use.

We do not collect PII (personally identifiable information) for any of these channels. We do not log user inputs to the models. We do not train on user data.

Note: The same models are made available to users regardless of their geographic location and input language, subject to sanction regimes, technology export regulations, and other restrictions that may apply. The same offering is provided to all countries within and outside the European Union where no legal restrictions apply.

How to use

Inference

We provide an inference module compatible with HuggingFace Transformers. We recommend pinning the transformers library to version 4.46.3. Before executing the inference example below, make sure the hat-splitter package is installed in your environment.

pip install 'hat-splitter>=0.1.9' 'transformers==4.46.3' torch
pip install flash_attn

Download model weights and run inference using the following example:

import torch
from transformers import AutoModelForCausalLM

INPUT = "When was Rome founded?"
MODEL_ID = "Aleph-Alpha/Llama-TFree-HAT-Pretrained-7B-DPO"

# Load the custom HAT model implementation (trust_remote_code is required).
model = AutoModelForCausalLM.from_pretrained(
    pretrained_model_name_or_path=MODEL_ID,
    trust_remote_code=True,
    attn_implementation="flash_attention_2",
).to("cuda", torch.bfloat16)

# Convert the prompt into byte-level input IDs plus cumulative word boundaries;
# add_llama_template=True wraps the prompt in the recommended Llama chat format.
input_ids, cumulative_word_lengths = model._prepare_input(INPUT, add_llama_template=True)

model_output = model.generate(
    input_ids,
    cumulative_seq_lengths_per_word=cumulative_word_lengths,
    max_new_tokens=300,
    use_cache=False,
)

print("Prompt: ", INPUT)
print("Completion: ", model_output.completion_text)

Please note that the realized inference speed strongly depends on the maturity of the inference implementation, beyond the intrinsic text compression of any model. Besides this HuggingFace Transformers-based inference solution, we are also releasing a vLLM-based inference solution for our models that is optimized for batched inference. Please note that this vLLM inference for HAT is still under active development.

Prompt formatting

The prompt format used for our post-trained model is identical to the Llama prompt format. We highly recommend using it when prompting the models to ensure optimal performance for the direct-preference-optimized model versions. You can format your prompt in the recommended format by setting add_llama_template=True in the model._prepare_input method.
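For illustration, the Llama 3.1 chat template wraps a single-turn user message roughly as follows (shown only so you can recognize it; model._prepare_input applies this formatting for you):

<|begin_of_text|><|start_header_id|>user<|end_header_id|>

When was Rome founded?<|eot_id|><|start_header_id|>assistant<|end_header_id|>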

Evaluation

Performance: Our T-Free models deliver performance on par with current state-of-the-art open-source memory-equivalent models in both English and German. For evaluation purposes, we compare our DPO model with Llama 3.1 8B Instruct and Tulu 3.1 8B. Respective benchmarks and results can be found in the tables below.

Efficiency: Our tokenizer-free approach results in improved text compression, providing a foundation for improved inference speed. We measure compression in terms of bytes per sequence position (analogous to tokenizer fertility) across all languages and domains, where a higher value indicates better compression. Latency and throughput are currently out of scope for research-centric evaluations and will be addressed in the future. Currently, our evaluation framework automatically measures bytes per sequence position across datasets, allowing us to derive text compression scores and analyze variations across different dataset distributions. The resulting end-to-end efficiency depends on the inference implementation and therefore lies beyond the scope of the inference code provided here and the reported compression scores.
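As a concrete reading of this metric, here is a minimal sketch of the computation; num_positions stands in for the number of word-level sequence positions the model assigns to a text (the names are illustrative, not our framework's API):

def bytes_per_position(text: str, num_positions: int) -> float:
    # Compression = UTF-8 bytes consumed per word-level sequence position (higher is better).
    return len(text.encode("utf-8")) / num_positions

# "Hello world, how are you today?" is 31 UTF-8 bytes; at 6 word positions
# this yields roughly 5.2 bytes per position.
print(bytes_per_position("Hello world, how are you today?", 6))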


Disclaimer: The results presented below were generated using our internal inference implementation, not the inference module mentioned above. As a sanity check, we reproduced some of the benchmarks using our evaluation framework with the HuggingFace inference code, but other results might still deviate slightly. We will also make source-available both our evaluation framework and a high-performance vLLM integration for this model to ensure reproducibility.

Metric Glossary

log_acc: Average Loglikelihood Accuracy
norm_log_acc: Average Normalized Loglikelihood Accuracy
comp_acc: Average Completion Accuracy
norm_prob_mass: Average Normalized Probability Mass
bleu: Average BLEU Score
rouge_gm: Average ROUGE-Geometric-Mean
F1: Average F1
CS: Chatbot Style
IF: Instruction Following
LC: Language Consistency
CI: Concordance Index
ES: Exponential Similarity

DPO (Post-Training) Benchmarks

MTBench winrates

English/German MTBench numbers are based on datasets created with FastChat for the corresponding models.


| Model | vs. Llama-3.1-8B-Instruct (Eng) | vs. Llama-3.1-Tulu-3-8B-DPO (Eng) | vs. Llama-3.1-8B-Instruct (Ger) | vs. Llama-3.1-Tulu-3-8B-DPO (Ger) |
|---|---|---|---|---|
| Llama-TFree-HAT-Pretrained-7B-DPO | 0.687 | 0.677 | 0.750 | 0.658 |

| Group | Task | Metric Name | Num Fewshot | Llama-TFree-HAT-Pretrained-7B-DPO | Llama-3.1-8B-Instruct | Llama-3.1-Tulu-3.1-8B | Llama-TFree-HAT-Pretrained-7B-DPO Compression | Llama-3.1-8B-Instruct Compression | Llama-3.1-Tulu-3.1-8B Compression |
|---|---|---|---|---|---|---|---|---|---|
| Knowledge | MMLU | log_acc | 5 | 0.654 | 0.682 | 0.665 | 5.818 | 4.878 | 4.153 |
| Knowledge | Full Text MMLU | norm_log_acc | 5 | 0.658 | 0.681 | 0.675 | 5.849 | 5.062 | 4.408 |
| Knowledge | MMLU Pro | norm_log_acc | 5 | 0.376 | 0.400 | 0.369 | 5.135 | 4.090 | 3.637 |
| Knowledge | GPQA | log_acc | 0 | 0.299 | 0.308 | 0.299 | 5.260 | 3.851 | 3.408 |
| Knowledge | BBH | norm_log_acc | 3 | 0.489 | 0.515 | 0.494 | 5.332 | 4.389 | 3.668 |
| Knowledge | OpenBookQA | norm_log_acc | 10 | 0.506 | 0.516 | 0.534 | 7.101 | 6.795 | 4.041 |
| Knowledge | TruthfulQA | prob_mass | 6 | 0.429 | 0.373 | 0.355 | 6.607 | 5.535 | 3.791 |
| Reasoning | ARC Easy | norm_log_acc | 25 | 0.894 | 0.878 | 0.876 | 7.018 | 6.357 | 4.497 |
| Reasoning | ARC Challenge | norm_log_acc | 25 | 0.682 | 0.648 | 0.649 | 6.860 | 6.187 | 4.522 |
| Reasoning | Winogrande | norm_log_acc | 5 | 0.683 | 0.658 | 0.684 | 6.856 | 6.315 | 4.116 |
| Reasoning | HellaSwag | norm_log_acc | 10 | 0.784 | 0.783 | 0.809 | 5.980 | 5.260 | 4.427 |
| German | MMMLU | norm_log_acc | 5 | 0.610 | 0.587 | 0.570 | 6.630 | 3.932 | 3.383 |
| German | ARC Easy DE | norm_log_acc | 25 | 0.829 | 0.729 | 0.752 | 7.872 | 4.907 | 3.607 |
| German | ARC Challenge DE | norm_log_acc | 25 | 0.641 | 0.508 | 0.531 | 7.798 | 4.860 | 3.610 |
| German | Winogrande DE | norm_log_acc | 5 | 0.754 | 0.733 | 0.720 | 7.225 | 5.253 | 3.391 |
| German | HellaSwag DE | norm_log_acc | 10 | 0.717 | 0.616 | 0.677 | 6.971 | 4.145 | 3.603 |
| German | TruthfulQA DE | prob_mass | 6 | 0.418 | 0.330 | 0.333 | 7.394 | 4.633 | 3.268 |
| German | GSM8K DE | comp_acc | 8 | 0.574 | 0.582 | 0.698 | 4.84 | 3.338 | 2.963 |
| German | WMT16 | bleu | 3 | 31.205 | 33.895 | 33.008 | 6.811 | 5.031 | 3.999 |
| German | WMT16 Instruct | bleu | 3 | 31.408 | 33.882 | 33.168 | 6.863 | 5.096 | 4.062 |
| Math | GSM8K | comp_acc | 8 | 0.711 | 0.805 | 0.853 | 4.324 | 3.808 | 3.359 |
| Long context | QuALITY | log_acc | 0 | 0.376 | 0.415 | 0.429 | 4.867 | 4.292 | 4.275 |
| Long context | ZeroSCROLLS MuSiQue | F1 | 0 | 0.238 | 0.200 | 0.181 | 5.636 | 4.430 | 4.388 |
| Long context | ZeroSCROLLS Qasper | F1 | 0 | 0.228 | 0.242 | 0.290 | 5.934 | 4.826 | 4.810 |
| Long context | ZeroSCROLLS QuALITY | log_acc | 0 | 0.667 | 0.810 | 0.762 | 4.565 | 4.232 | 4.216 |
| Long context | ZeroSCROLLS SpaceDigest | ES | 0 | 0.278 | 0.647 | 0.533 | 5.770 | 3.888 | 4.506 |
| Long context | ZeroSCROLLS SQuALITY | rouge_gm | 0 | 0.144 | 0.166 | 0.158 | 4.965 | 4.244 | 4.239 |

Training Details

Model Architecture

The model uses a hierarchical autoregressive transformer (HAT) architecture consisting of three components: encoder, backbone, and decoder, together with connector layers between components. Encoder, backbone, and decoder are all instances of autoregressive transformers with pre-norm residual blocks in the style of Llama, using a SwiGLU unit as a feed-forward block, with all model parameters active during training and inference. The backbone model uses standard causal attention, while the encoder and decoder use local causal attention with a finite look-back window. The architecture of the backbone largely follows the design of Llama 3.1 8B (with embedding and language modeling head removed and weights randomly initialized). In addition, we added per-head QK-norm in the backbone, which we found important for training stability.

The encoder processes input text as a sequence of UTF-8 bytes and produces a sequence of activations of the same length. This sequence is then split into chunks corresponding to words or other semantic units in the text (this is further explained below). In the encoder-backbone connector layer, for each word, a learned latent vector cross-attends to its corresponding chunk of encoder activations. The resulting sequence of latent vectors then serves as input to the backbone. The backbone processes this latent sequence and produces a sequence of word-level representations. Finally, the decoder module is another transformer that acts on the byte-level activations and has an LM head that produces next-byte probabilities. To make use of the higher level information stored in the word-level embeddings during decoding, another cross-attention mechanism is used. In each transformer block of the decoder, every byte-level position cross-attends to the backbone’s word-level representations that correspond to the words preceding this byte.
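To make this data flow concrete, here is a minimal, self-contained sketch with toy dimensions. It mirrors the encoder-backbone connector described above in simplified form (attention pooling with a single learned latent); it is illustrative only and not the released implementation:

import torch
import torch.nn as nn

# Toy HAT data flow: byte-level encoder activations are pooled into word-level
# latents via cross-attention, processed by the word-level backbone, and consumed
# by a byte-level decoder whose LM head predicts one of 256 possible next bytes.
B, n_bytes, d_byte, d_word = 1, 12, 64, 128
byte_acts = torch.randn(B, n_bytes, d_byte)   # stand-in for encoder output
word_bounds = [0, 4, 9, 12]                   # cumulative byte offsets of 3 words

# Encoder-backbone connector: a learned latent cross-attends to each word's byte chunk.
latent = nn.Parameter(torch.randn(d_word))
to_kv = nn.Linear(d_byte, d_word)
word_vecs = []
for start, end in zip(word_bounds[:-1], word_bounds[1:]):
    chunk = to_kv(byte_acts[:, start:end])             # (B, chunk_len, d_word)
    weights = torch.softmax(chunk @ latent, dim=-1)    # attention of latent over chunk
    word_vecs.append((weights.unsqueeze(-1) * chunk).sum(dim=1))
word_seq = torch.stack(word_vecs, dim=1)               # (B, 3, d_word): backbone input

# The backbone (a causal transformer) maps word_seq to word-level representations;
# the decoder cross-attends to them at each byte position and predicts next-byte logits.
lm_head = nn.Linear(d_byte, 256)
next_byte_logits = lm_head(byte_acts)                  # (B, n_bytes, 256)
print(word_seq.shape, next_byte_logits.shape)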

Encoder module

| Hyperparameter | Value |
|---|---|
| Parameter count | 119M |
| Number of layers | 6 |
| Number of attention heads | 8 |
| Head size | 128 |
| Number of Key-Value heads | 8 |
| Hidden size | 1024 |
| Cross-attention hidden size | 4096 |
| MLP expansion factor | 2.75 |
| MLP type | SwiGLU |
| Sequence length | 163840 |
| Position embeddings | RoPE with base 1e5 |
| Attention type | causal, local with window size 768 |
| QK-norm | disabled |

Backbone module

| Hyperparameter | Value |
|---|---|
| Parameter count | 7B |
| Number of layers | 32 |
| Number of attention heads | 32 |
| Head size | 128 |
| Number of Key-Value heads | 8 |
| Hidden size | 4096 |
| MLP expansion factor | 3.5 |
| MLP type | SwiGLU |
| Sequence length | 20480 |
| Position embeddings | RoPE with base 5e5 |
| Attention type | causal |
| QK-norm | per head |

Decoder module

| Hyperparameter | Value |
|---|---|
| Parameter count | 94M |
| Number of layers | 4 |
| Number of attention heads | 8 |
| Head size | 128 |
| Number of Key-Value heads | 8 |
| Hidden size | 1024 |
| Cross-attention hidden size | 4096 |
| MLP expansion factor | 2.75 |
| MLP type | SwiGLU |
| Sequence length | 163840 |
| Position embeddings | RoPE with base 1e5 |
| Attention type | causal, local with window size 768 |
| QK-norm | disabled |

Parameter count

| Component | Parameters |
|---|---|
| Total | 7,192,507,136 |
| Encoder | 119,293,696 |
| Backbone | 6,979,592,192 |
| Decoder | 93,621,248 |
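The three components sum exactly to the reported total, as this trivial sanity check confirms:

# Encoder + backbone + decoder parameters add up to the reported total.
assert 119_293_696 + 6_979_592_192 + 93_621_248 == 7_192_507_136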

We note that one distinctive property of our tokenizer-free architectures is that encoder and decoder are substantially smaller than typical embedding and language model head layers of tokenizer-based models. Because of this, while our models share the architecture with Llama 3.1 8B (plus the added QK-norm), they are closer to 7B than 8B parameters in total.

Word splitter

To split arbitrary byte sequences, we adopted the guidelines from UAX #29, which split text into words for common Western languages and also produce meaningful semantic units for other types of languages (e.g., Chinese, Japanese, Korean). From now on, we refer to these splits as words.

We also merged leading whitespace and trailing punctuation into the words to reduce sequence length at the word level.

To improve the processing of code and math documents, we made additional adjustments to the Unicode splitter. First, we split instances of camel cases like FooBar into Foo and Bar. Second, we treated math symbols (again by Unicode standard) as separate words.
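As an illustration of the camel-case rule, the following regex-based sketch splits at lower-to-upper case transitions (our actual splitter ships in the hat-splitter package; this snippet is only illustrative):

import re

def split_camel_case(word: str) -> list[str]:
    # Insert a break at each lower-to-upper case transition: "FooBar" -> ["Foo", "Bar"].
    return re.split(r"(?<=[a-z])(?=[A-Z])", word)

print(split_camel_case("FooBar"))      # ['Foo', 'Bar']
print(split_camel_case("snake_case"))  # ['snake_case'] (unchanged)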

Instruction Fine-tuning

Approach

We optimized TFree-HAT-Pretrained-7B-Base for instruction-following using a standard post-training pipeline. First, we applied supervised fine-tuning (SFT) to train the model on both single-turn and multi-turn (chat) instruction-following tasks. Next, we aligned our model for helpfulness and, in parts, safety using Direct Preference Optimization (DPO).

Data

The data used for instruction fine-tuning is based on a mixture of user prompts and model competitions. The data mixture consists of roughly 2M samples from diverse datasets including but not limited to: specialized reasoning datasets covering mathematics, programming, and logical inference; human feedback focused on helpful and harmless responses; a small curated set for specific response patterns; safety and robustness subsets for appropriate boundaries; collaborative conversational data; multilingual conversation prompts; tabular data reasoning for structured information; and formal mathematics with advanced problems.

We synthesized responses to the prompts using Qwen 2.5-32B and Qwen 2.5-72B. Additionally, we improved German performance by translating English prompts using Mistral-Nemo-Instruct-2407, generating the corresponding answers using Mistral-Small-3.1-Instruct, and performing quality filtering using an LLM judge based on Llama-3.3-70B-Instruct. Lastly, we supplemented the synthetic data with proprietary human-generated SFT data as well as further data sources.

For DPO training, we used a similar dataset of prompts and completions from diverse domains.

Legal Compliance

We acknowledge and abide by applicable national and international regulations, including copyright, data privacy, and other related legislation. Any text and data mining by us is performed in compliance with Directive (EU) 2019/790 and its respective national transposition. During the training and fine-tuning of our models, we comply with applicable data privacy laws, including Regulation (EU) 2016/679 (GDPR) and national data privacy regulations. To the extent possible and foreseeable, we also took legislation with forthcoming obligations into account, such as the obligations for General Purpose AI Models under Regulation (EU) 2024/1689 (EU AI Act), and will constantly monitor such developments and adapt our products and this model card accordingly.

Resource Usage

Compute & Training Efficiency

The following table shows the compute resources used in the training stages for the 7B models.

| Model | Training phase | GPUs | Approx. average power consumption per GPU | Approx. GPU hours |
|---|---|---|---|---|
| 7B | Long context SFT | 128 x H100 | 160 W | 1,500 |
| 7B | DPO | 128 x H100 | 160 W | 1,300 |

Environmental Impact

Our H200 and A100 infrastructure runs entirely on 100% renewable energy, ensuring that no CO₂ emissions are directly incurred from training. In addition, the H200 data center boasts a power usage effectiveness (PUE) of ≤1.2, and its operation maintains a net-zero water footprint. Specific numbers on renewable energy usage for the H100 GPUs are not yet available to us.

To estimate the carbon footprint of inference, we base our calculations on publicly available data from the infrastructure provider and, where applicable, standard emissions accounting methodology. We report:

  • Carbon emitted: GPU runtime emissions

  • Carbon emitted accounting for PUE: GPU runtime emissions scaled by the data center's PUE

Because the data centers operate fully on renewable energy, both metrics for their operation (excluding infrastructure-related emissions, e.g., initial chip manufacturing) are effectively zero. For the H100 GPU infrastructure, no information has been made available to us.

| Metric | H200 GPU | H100 GPU | A100 GPU |
|---|---|---|---|
| Carbon emitted | 0 kg CO₂ | no information available | 0 kg CO₂ |
| Carbon emitted accounting for PUE | 0 kg CO₂ | no information available | 0 kg CO₂ |

Power Consumption

| GPU Model | Max Power |
|---|---|
| A100 | 400 W |
| H100 | 700 W |
| H200 | 700 W |

Numbers may be contextualized with reference to publicly available studies, such as the carbon footprint of language model training.

Intended Use

These models are intended to be deployed as components of AI systems or applications. Use-cases and the model's capabilities include but are not limited to: text generation, classification, summarization, question answering, and labeling. Note that applications might require additional model adaptations or components for guarding against unwanted application behavior or model output.

Non-Permitted Use

Our models shall not be used for illegal or unlawful actions of any kind and with any illegal or unlawful content. This includes in particular prohibited practices according to Article 5 of Regulation (EU) 2024/1689 (EU AI Act) and other activities such as engaging in terrorism, violence, human trafficking, illegal distribution of materials to minors, sexual solicitation, any other criminal activities, harassment, discrimination, creating or promoting malicious code or activities risking death or harm, including those related to military or nuclear applications, and activities not in compliance with sanction regimes, technology export regulations, and other restrictions that may apply. The models are to be used following ethical standards. The utilization of our technology is always governed by, and may be limited in accordance with, our Terms and Conditions, the Open Aleph License, or any specific agreement we might have established with you.

Although we do not inspect the requests sent to our API, we regularly review and monitor potential violations that may be related to our models and, depending on the circumstances of the specific case, take legal action against them. This includes, but is not limited to, enforcement to remove published model content, requesting compensation for damages caused, and account termination or removal of credits.

For non-anonymous reports, we also provide an appeals mechanism for usage policy violations via our dedicated contact address [email protected] to communicate with us.

Customers and partners are enabled to use our ticketing system for appeals, claims, and feedback.

Risks and Limitations

Note: Language models are not agents and are not optimized for prescriptive actions. The use of language models in high-stakes environments, for critical decisions, or to support a user's wellbeing should be performed with additional guardrails in place.

Risk Categories

In the following sections, we describe risk categories and provide examples of completions we would consider inappropriate or harmful. We then describe steps to minimize these risks.

Harmful Language

Large language models can sometimes generate undesired outputs that are unsuitable for certain applications. This includes content with harmful language, discriminatory content, inappropriate tone and style, systemic biases, or suggestions that might encourage illegal actions. Such outputs can also include incorrect or outdated information, or material that is not suitable for all ages. While we constantly take efforts to reduce the likelihood of such undesired outputs, this possibility can never be fully ruled out. To minimize these issues, the following strategies can be employed:

  • Abide by the guidance on illegal use provided for in this Model Card.

  • Crafting prompts carefully to guide the model's output more effectively.

  • Utilizing a finetuned model (often referred to as a control or instruct model) that prioritizes using explicitly provided information.

  • Employing a finetuned model designed to maintain an appropriate tone and style, including avoiding offensive language.

  • Conducting additional validations at the application level to ensure output quality and appropriateness.

Systemic Biases

Language models obtain world-knowledge from their pre-training data and may therefore exhibit the same systematic biases that are present in the data. Differing deployment scenarios (including differing cultural contexts) can expose systematic biases in different ways. We acknowledge the cultural diversity of communities and users inside and outside the EU. For larger deployments, we encourage users to track systematic biases relevant to their use-case, and we are happy to consult on bespoke fine-tunings to alleviate such biases.

Outdated World Knowledge

| Prompt | Completion |
|---|---|
| What was the warmest year in human history? | The warmest year in human history, based on global average temperatures, is 2016. However, it's important to note that the ranking of the warmest years can vary slightly depending on the dataset used and the methodology applied. |

Pre-training was performed using a fixed dataset, created at a fixed date in the past. Accordingly, the world knowledge of foundation models is limited to the information contained in their training data. More recent information may not be known to the model or may be misunderstood when presented as input during live usage. In the above example, the model is convinced that 2016 was the warmest year in human history; however, 2023 ended up breaking that record. This information cannot be known by the model due to its knowledge cutoff.

Risks include:

  • Generation of personally identifiable information. Models are not explicitly trained to provide such information, but may seem to provide personally identifiable information. This does not necessarily imply the presence of such information in training data, as hallucination is possible.

  • Generation of unintended, irrelevant, or repetitive outputs. This includes the production of incorrect or outdated information.

Risks may be mitigated by:

  • Injecting context, where relevant.

  • Crafting prompts carefully to guide the model's output more effectively.

  • Performing validations on the application layer, e.g., classifying the output.

  • Using the repetition penalty, especially in the case of repetition, or other parameters available in the API (see documentation).

  • Avoiding use cases targeted at retrieval of personally identifiable information.

Political Bias

Our models have not been optimized to represent a political opinion or take a specific point of view. They may generate outputs that contradict a user's opinion or expectation, e.g., produce hateful, violent or inappropriate, biased, or discriminatory content. Such behavior may be addressed by:

  • Crafting prompts carefully to guide the model's output more effectively.

  • Performing validations on the application layer, e.g., via Red-Teaming or classifying the output.

Mistaken for a Human

| Prompt | Completion |
|---|---|
| Good morning, today is such a beautiful day! | Hello! It sounds like you're starting your day on a positive note. I'm glad to hear that the weather is beautiful. How are you feeling today? Is there anything specific you'd like to discuss or any tasks you'd like some help with? Whether it's planning your day, sharing some thoughts, or getting advice on something, I'm here to assist! Enjoy your day! |

Users may attribute human traits to AI models. This is compounded by the fact that content generated by the model is not explicitly detectable as such at this point. It is therefore required to:

  • Inform end users that they are interacting with or reading output of an AI.

  • Design the system in a way that mitigates the impact of unintended interpretation of the output.

Other Errors

Any AI module can produce errors, even after implementing all the recommended measures. When integrating foundation language models into an application, users should:

  • be aware of the risk of (harmful) failure cases and implement the use case in a way that mitigates such risks.

  • be aware that foundation models do not contain application logic, e.g., content filters. Enforcement policies relevant to the use case need to be implemented in the application layer.

  • avoid unsupervised use in high-stakes environments.

  • validate output with adequate measures.

Mitigation Approach

We specifically tailor model alignment and risk mitigation techniques to each user-facing application built on top of our models, working closely with our customers to refine them according to their unique requirements. Our intention is for these models to undergo further fine-tuning by us and our customers, utilizing their own datasets alongside our support and datasets to ensure suitability for end-user applications, including harm mitigation efforts. Our customers are responsible for adhering to the terms and conditions when aligning the models in their downstream applications.

Reproducibility

Some inference parameters, e.g., temperature, lead to the random sampling of outputs, which precludes the reproducibility of outputs. Even when such parameters are not in use, outputs may diverge slightly on a numeric level for technical reasons. One may implement the following measures if needed:

  • Logging of past model outputs on the application layer (Aleph Alpha Research is not storing any data and/or using any data provided in prompts for the training of its LLMs).

This list of risks, biases, and limitations may not be complete, as improving the understanding and behavior of language models is an ongoing research topic in the AI science community.

Legal Acknowledgements

  • Built with Llama: Llama 3.1 is licensed under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved. The applicable license agreement can be found under the following link: Llama 3.1 Community License Agreement

  • Improved using Qwen

*Aleph Alpha Research refers to Aleph Alpha Research GmbH
