TFree-HAT-Pretrained-7B-Base
This model card provides an overview of our TFree-HAT-Pretrained-7B-Base model, a foundation model developed by Aleph Alpha Research* and publicly available under the Open Aleph License, a license explicitly allowing for non-commercial research and educational use.
The model is based on our Hierarchical Autoregressive Transformer (HAT) architecture, originally described in our paper. This novel architecture integrates character-level encoding and decoding with a word-level backbone, allowing for improved text compression (fewer sequence positions) and better performance in the languages it has been trained on, potentially higher robustness to prompt changes, and improved adaptability to new languages & domains via fine-tuning.
The model was pre-trained in English & German and adapted to a maximum context length of 32,900 words on carefully curated data in compliance with applicable EU and national regulations, including copyright and data privacy laws. It shows strong proficiency in German, while also beating Llama 3.1 on many benchmarks in English.
A model post-trained and direct-preference-optimized for English & German starting from this base model can be found under this Link.
Note: We release the long-context (32,900 words) adapted checkpoint as the base model (branch main). In addition, 200 intermediate checkpoints from the short-context (3,500 words) pretraining are available in branches named global_step5000, global_step10000, etc. These checkpoints, stored at 5,000-step intervals, are intended to support research on training dynamics and feature learning in tokenizer-free models. All numbers contained in this model card are based on the "main" (long-context) checkpoint.
Model Access
We provide access to our models through the channels listed below.
- HuggingFace: The model’s weights as well as basic inference implementation are available on HuggingFace under the Open Aleph License, a license explicitly allowing for non-commercial research and educational use.
We do not collect PII (personally identifiable information) for any of these channels. We do not log user inputs to the models. We do not train on user data.
Note: The same models are made available to users regardless of their geographic location and their input language but subject to sanction regimes, technology export regulations, and other restrictions that may apply. The same offering is provided to all countries within and external to the European Union if no legal restrictions apply.
How to use
Inference
We provide an inference module compatible with HuggingFace Transformers for running model inference. We recommend pinning the transformers library to version 4.46.3. Before executing the inference example below, make sure the hat-splitter package is installed in your environment.
```shell
pip install 'hat-splitter>=0.1.9' 'transformers==4.46.3' torch
pip install flash_attn
```
Download model weights and run inference using the following example:
```python
import torch
from transformers import AutoModelForCausalLM

INPUT = "When was Rome founded?"
MODEL_ID = "Aleph-Alpha/TFree-HAT-Pretrained-7B-Base"

# Load the model with its custom (remote) HAT code and Flash Attention 2.
model = AutoModelForCausalLM.from_pretrained(
    trust_remote_code=True,
    pretrained_model_name_or_path=MODEL_ID,
    attn_implementation="flash_attention_2",
).to("cuda", torch.bfloat16)

# Convert the prompt into byte-level input IDs plus cumulative word boundaries.
input_ids, cumulative_word_lengths = model._prepare_input(INPUT)

model_output = model.generate(
    input_ids,
    cumulative_seq_lengths_per_word=cumulative_word_lengths,
    max_new_tokens=300,
    use_cache=False,
)

print("Prompt: ", INPUT)
print("Completion: ", model_output.completion_text)
```
Please note that realized inference speed depends strongly on the maturity of the inference implementation, beyond any model's intrinsic text compression. Besides this Hugging Face Transformers-based inference solution, we are also releasing a vLLM-based inference solution for our models that is optimized for batched inference. Please note that this vLLM inference for HAT is still under active development.
Evaluation
Performance: Our T-Free models deliver performance on par with strong tokenizer-based models such as Llama 3.1 8B Base. Respective benchmarks and results can be found in the tables below.
Efficiency: Our tokenizer-free approach results in improved text compression, providing a foundation for improved inference speed. We measure compression across all languages and domains in terms of bytes per sequence position (akin to tokenizer fertility), where a higher value indicates better compression. Latency and throughput are currently out of scope for these research-centric evaluations and will be addressed in the future. Our evaluation framework automatically measures bytes per sequence position across datasets, allowing us to derive text compression scores and analyze variations across different dataset distributions. The resulting end-to-end efficiency depends on the inference implementation and is therefore beyond the scope of the inference code provided here and the reported compression scores.
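To make the metric concrete, the toy computation below (not part of our evaluation framework; the whitespace split is only a stand-in for the actual word or token segmentation) shows how bytes per sequence position can be computed:

```python
# Toy illustration of the compression metric: UTF-8 bytes per sequence position.
# For HAT, a sequence position corresponds to a word produced by the word splitter;
# for a tokenizer-based model it would be a token. A naive whitespace split is used
# here purely as a stand-in for either segmentation.
text = "Rome was, according to tradition, founded in 753 BC."
num_bytes = len(text.encode("utf-8"))
num_positions = len(text.split())   # stand-in for the model's word/token count
print(num_bytes / num_positions)    # higher bytes per position = better compression
```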
Disclaimer: The results presented below were generated using our internal inference implementation, not the inference module mentioned above. As a sanity check, we reproduced some of the benchmarks using our evaluation framework with the huggingface inference code, but other results might still deviate slightly. We will also make source-available both our evaluation framework and a high-performance vLLM integration for this model to ensure reproducibility.
Metric Glossary
- log_acc: Average Loglikelihood Accuracy
- norm_log_acc: Average Normalized Loglikelihood Accuracy
- comp_acc: Average Completion Accuracy
- norm_prob_mass: Average Normalized Probability Mass
- bleu: Average BLEU Score
- rouge_gm: Average ROUGE Geometric Mean
- F1: Average F1
- CS: Chatbot Style
- IF: Instruction Following
- LC: Language Consistency
- CI: Concordance Index
- ES: Exponential Similarity
Pre-training Benchmarks
Group | Task | Metric Name | Num Fewshot | TFree-HAT-Pretrained-7B-Base | Llama-3.1-8B | TFree-HAT-Pretrained-7B-Base Compression | Llama-3.1-8B Compression |
---|---|---|---|---|---|---|---|
Knowledge | MMLU | norm_log_acc | 5 | 0.664 | 0.668 | 5.184 | 4.278 |
Knowledge | MMLU Pro | norm_log_acc | 5 | 0.386 | 0.367 | 4.734 | 3.731 |
Knowledge | OpenBookQA | norm_log_acc | 10 | 0.360 | 0.366 | 4.982 | 4.724 |
Knowledge | TriviaQA | comp_acc | 10 | 0.658 | 0.695 | 5.317 | 4.221 |
Knowledge | TruthfulQA | norm_prob_mass | 6 | 0.306 | 0.279 | 4.945 | 4.197 |
Reasoning | ARC Challenge | norm_log_acc | 25 | 0.587 | 0.538 | 5.514 | 4.924 |
Reasoning | Winogrande | norm_log_acc | 5 | 0.754 | 0.747 | 5.158 | 4.909 |
German | MMMLU | norm_log_acc | 5 | 0.618 | 0.576 | 6.056 | 3.410 |
German | WMT16 | bleu | 5 | 34.405 | 34.998 | 5.968 | 4.210 |
German | WMT20 | bleu | 5 | 33.240 | 32.892 | 6.269 | 4.222 |
Math | GSM8K | comp_acc | 8 | 0.528 | 0.528 | 3.840 | 3.332 |
Long context | GSM8K | comp_acc | 16 | 0.536 | --- | 3.837 | --- |
Long context | Long Bench v2 | norm_log_acc | 10 | 0.336 | --- | 5.125 | --- |
Long context German | Long Bench v2 | norm_log_acc | 10 | 0.233 | --- | 5.872 | --- |
Safety | Winogender | norm_log_acc | 5 | 0.671 | 0.636 | 5.232 | 4.799 |
Training Details
Model Architecture
The model uses a hierarchical autoregressive transformer (HAT) architecture consisting of three components: encoder, backbone, and decoder, together with connector layers between components. Encoder, backbone, and decoder are all instances of autoregressive transformers with pre-norm residual blocks in the style of Llama, using a SwiGLU unit as the feed-forward block, with all model parameters active during training and inference. The backbone model uses standard causal attention, while the encoder and decoder use local causal attention with a finite look-back window. The architecture of the backbone largely follows the design of Llama 3.1 8B (with embedding and language modeling head removed and weights randomly initialized). In addition, we added per-head QK-norm in the backbone, which we found important for training stability.
The encoder processes input text as a sequence of UTF-8 bytes and produces a sequence of activations of the same length. This sequence is then split into chunks corresponding to words or other semantic units in the text (explained further below). In the encoder-backbone connector layer, for each word, a learned latent vector cross-attends to its corresponding chunk of encoder activations. The resulting sequence of latent vectors then serves as input to the backbone. The backbone processes this latent sequence and produces a sequence of word-level representations. Finally, the decoder module is another transformer that acts on the byte-level activations and has an LM head that produces next-byte probabilities. To make use of the higher-level information stored in the word-level embeddings during decoding, another cross-attention mechanism is used: in each transformer block of the decoder, every byte-level position cross-attends to the backbone's word-level representations that correspond to the words preceding this byte.
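For illustration, the minimal PyTorch sketch below mirrors this data flow: a byte-level encoder, per-word cross-attention pooling into latent vectors, a word-level backbone, and a byte-level decoder that cross-attends to the word-level states. It is not the released implementation; module sizes are tiny, the causal and local attention masks as well as the per-byte restriction to preceding words are omitted, and all names are hypothetical.

```python
import torch
import torch.nn as nn

class TinyHATSketch(nn.Module):
    """Simplified stand-in for the encoder/backbone/decoder data flow."""

    def __init__(self, byte_dim=64, word_dim=128, vocab=256):
        super().__init__()
        self.byte_embed = nn.Embedding(vocab, byte_dim)
        enc_layer = nn.TransformerEncoderLayer(byte_dim, nhead=4, batch_first=True)
        bb_layer = nn.TransformerEncoderLayer(word_dim, nhead=4, batch_first=True)
        dec_layer = nn.TransformerEncoderLayer(byte_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)   # byte-level encoder
        self.backbone = nn.TransformerEncoder(bb_layer, num_layers=2)   # word-level backbone
        self.decoder = nn.TransformerEncoder(dec_layer, num_layers=2)   # byte-level decoder
        # Connector 1: a learned latent query cross-attends to each word's byte chunk.
        self.word_query = nn.Parameter(torch.randn(1, 1, word_dim))
        self.enc_to_word = nn.MultiheadAttention(word_dim, 4, kdim=byte_dim,
                                                 vdim=byte_dim, batch_first=True)
        # Connector 2: byte positions cross-attend to word-level backbone states.
        self.word_to_byte = nn.MultiheadAttention(byte_dim, 4, kdim=word_dim,
                                                  vdim=word_dim, batch_first=True)
        self.lm_head = nn.Linear(byte_dim, vocab)  # next-byte logits

    def forward(self, byte_ids, cumulative_word_lengths):
        h_bytes = self.encoder(self.byte_embed(byte_ids))           # (1, num_bytes, byte_dim)
        # Pool each word's chunk of byte activations into one word latent.
        latents = []
        for start, end in zip(cumulative_word_lengths[:-1], cumulative_word_lengths[1:]):
            chunk = h_bytes[:, start:end]
            pooled, _ = self.enc_to_word(self.word_query, chunk, chunk)
            latents.append(pooled)
        h_words = self.backbone(torch.cat(latents, dim=1))           # (1, num_words, word_dim)
        # Decoder: bytes attend to word states (per-byte causal restriction omitted here).
        h_dec, _ = self.word_to_byte(h_bytes, h_words, h_words)
        return self.lm_head(self.decoder(h_dec))                     # (1, num_bytes, vocab)

byte_ids = torch.tensor([list("Hello world".encode("utf-8"))])       # (1, 11)
logits = TinyHATSketch()(byte_ids, cumulative_word_lengths=[0, 5, 11])
print(logits.shape)  # torch.Size([1, 11, 256])
```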
Encoder module
Hyperparameter | Encoder (119M params) |
---|---|
Number of layers | 6 |
Number of attention heads | 8 |
Head size | 128 |
Number of Key-Value heads | 8 |
Hidden size | 1024 |
Cross-attention hidden size | 4096 |
MLP expansion factor | 2.75 |
MLP type | SwiGLU |
Sequence length | 262144 |
Position embeddings | RoPE with base 1e5 |
Attention type | causal, local with window size 768 |
QK-norm | disabled |
Backbone module
Hyperparameter | Backbone (7B params) |
---|---|
Number of layers | 32 |
Number of attention heads | 32 |
Head size | 128 |
Number of Key-Value heads | 8 |
Hidden size | 4096 |
MLP expansion factor | 3.5 |
MLP type | SwiGLU |
Sequence length | 32900 |
Position embeddings | RoPE with base 5e5 |
Attention type | causal |
QK-norm | per head |
Decoder module
Hyperparameter | Decoder (94M params) |
---|---|
Number of layers | 4 |
Number of attention heads | 8 |
Head size | 128 |
Number of Key-Value heads | 8 |
Hidden size | 1024 |
Cross-attention hidden size | 4096 |
MLP expansion factor | 2.75 |
MLP type | SwiGLU |
Sequence length | 262144 |
Position embeddings | RoPE with base 1e5 |
Attention type | causal, local with window size 768 |
QK-norm | disabled |
Parameter count
Total: 7,192,507,136
Encoder: 119,293,696
Backbone: 6,979,592,192
Decoder: 93,621,248
We note that one distinctive property of our tokenizer-free architectures is that encoder and decoder are substantially smaller than typical embedding and language model head layers of tokenizer-based models. Because of this, while our models share the architecture with Llama 3.1 8B (plus the added QK-norm), they are closer to 7B than 8B parameters in total.
Word splitter
To split arbitrary byte sequences, we adopted the guidelines from UAX #29, which splits text into words for common Western languages but also produces meaningful semantic units for other types of languages (e.g. Chinese, Japanese, Korean). From now on, we refer to these splits as words.
We also merged leading whitespace and trailing punctuation into the words to reduce sequence length at the word level.
To improve the processing of code and math documents, we made additional adjustments to the Unicode splitter. First, we split instances of camel case like FooBar into Foo and Bar. Second, we treated math symbols (again per the Unicode standard) as separate words.
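As a toy illustration only (the released models rely on the hat-splitter package and the full UAX #29 rules, which this sketch does not reproduce), the snippet below shows the whitespace-merging and camel-case behavior:

```python
import re

def toy_split(text: str) -> list[str]:
    """Rough approximation: merge leading whitespace into words, then split camel case."""
    pieces = re.findall(r"\s*\S+", text)   # leading whitespace and trailing punctuation stay attached
    words = []
    for piece in pieces:
        # Split "FooBar" into "Foo" and "Bar"; merged whitespace stays on the first sub-word.
        words.extend(re.findall(r"\s*[A-Z][a-z]*|\s*[^A-Z]+", piece))
    return words

print(toy_split("Rome was founded, FooBar says."))
# ['Rome', ' was', ' founded,', ' Foo', 'Bar', ' says.']
```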
Pre-Training
Approach
We randomly initialized all model parameters. The model was then trained with a next-byte-prediction objective on a large and diverse document corpus (see below). Initially, we trained on sequences of up to 3,500 words for a total of nearly 4T words. We used a global batch size of 1024 sequences (3.5M words) and followed a warmup-stable-decay schedule with a warmup of 5,000 steps, a stable phase at learning rate 2e-3 for 945,000 steps, and an inverse-square-root cooldown to learning rate 0 over the last 50,000 steps (sketched below). We employed weight decay of 0.05 for all parameters except the embedding and normalization parameters. We used per-head QK-norm and attention logit softcapping at 100, which we found important for training stability during pretraining.
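For reference, the first-phase schedule can be sketched as follows; the warmup and stable phases follow the stated values, while the exact functional form of the inverse-square-root cooldown is an assumption here (a shifted 1/sqrt curve renormalized to reach 0 at the final step).

```python
import math

WARMUP, STABLE, COOLDOWN = 5_000, 945_000, 50_000   # steps per phase (1M steps total)
PEAK_LR = 2e-3

def wsd_lr(step: int) -> float:
    """Learning rate at a given training step for the first pre-training phase."""
    if step < WARMUP:                       # linear warmup to the peak learning rate
        return PEAK_LR * step / WARMUP
    if step < WARMUP + STABLE:              # constant (stable) phase
        return PEAK_LR
    # Cooldown: shifted 1/sqrt curve, renormalized so the LR reaches 0 at the final
    # step. The exact cooldown form used in training is an assumption in this sketch.
    t = min(step - (WARMUP + STABLE), COOLDOWN)
    lo = 1.0 / math.sqrt(1.0 + COOLDOWN)
    return PEAK_LR * (1.0 / math.sqrt(1.0 + t) - lo) / (1.0 - lo)

print(wsd_lr(2_500), wsd_lr(500_000), wsd_lr(1_000_000))  # mid-warmup, stable, final step
```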
We then continued training on sequences of up to 32,900 words for another 2,500 steps with a global batch size of 128, totaling 10.5B words, upweighting longer documents to make use of the extended context. We used a warmup-stable-decay learning rate schedule with 500 warmup steps, a stable phase at learning rate 2e-4, and a final decay to 0 over the last 500 steps. We disabled attention logit softcapping during this long-context adaptation so that it is not required during inference.
The training was conducted in our Scaling framework.
Data sources
The model was trained on a filtered subset of diverse corpora of text data including proprietary curated datasets, high-quality web content, public domain sources, German texts, mathematical texts, and programming code. The proportions and sources of data we used in the pre-training were:
- English Language Data (70%)
  - curated web and synthetic data (63%)
  - high-quality curated sources such as Wikipedia and public domain books (7%)
- German Language Data (7%)
  - curated web and synthetic data (6.3%)
  - high-quality curated sources such as Wikipedia and public domain books (0.7%)
- Mathematical Content (5%)
  - mathematical code and proofs (2%)
  - mathematical word problems and equations (3%)
- Programming Code (18%)
  - general programming code (11%)
  - high-quality and synthetic Python code (7%)
Data curation
We applied a range of curation techniques, e.g., for German as described in Aleph-Alpha-GermanWeb. These include but are not limited to:
- URL filtering. We used a URL filter developed to filter out fraudulent, harmful, and illegal content based on an explicit blocklist, e.g., adult websites, or URLs containing words associated with fraudulent, harmful, or adult content.
- Text extraction. Natural language text embedded in HTML and other web markup was extracted using the Resiliparse text extractor.
- Language identification. We used a fastText language classifier trained on character n-grams from Wikipedia to identify, retain, and sort texts into English and German.
- Repetition removal. We applied heuristic methods for detecting and removing repetitions at the line, paragraph, and character level.
- Document- and line-level filtering. We used additional document-level heuristics to ensure documents had a reasonable number and quality of words, naturalistic symbol-to-word and number-to-word ratios, were not predominantly made up of bullet points, and contained a sufficient quantity of real words (a toy sketch of such a ratio heuristic follows this list).
- Deduplication. We used exact and fuzzy deduplication to remove duplicate documents.
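As a toy illustration of such a document-level ratio heuristic (the thresholds and exact rules used for the actual corpus are not published here and are assumptions):

```python
import re

def passes_ratio_filters(doc: str, max_symbol_ratio=0.1,
                         max_number_ratio=0.2, min_words=50) -> bool:
    """Toy document-level filter with illustrative thresholds only."""
    words = doc.split()
    if len(words) < min_words:                       # too short to be a useful document
        return False
    symbols = sum(1 for w in words if re.fullmatch(r"[^\w\s]+", w))
    numbers = sum(1 for w in words if re.fullmatch(r"\d+([.,]\d+)?", w))
    return (symbols / len(words) <= max_symbol_ratio
            and numbers / len(words) <= max_number_ratio)
```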
Synthetic data
We also generated synthetic data by using permissively-licensed LLMs.
Legal Compliance
We acknowledge and abide by applicable national and international regulations, including copyright, data privacy, and other related legislation. Any text and data mining by us is performed in compliance with Directive (EU) 2019/790 and its respective national transposition. During the training and fine-tuning of our models, we comply with applicable data privacy laws, including Regulation (EU) 2016/679 (GDPR) and national data privacy regulations. To the extent possible and foreseeable, we also took legislation with forthcoming obligations into account, such as the obligations for General Purpose AI Models under Regulation (EU) 2024/1689 (EU AI Act), and will constantly monitor such developments and adapt our products and this model card accordingly.
Resource Usage
Compute & Training Efficiency
The following table shows the compute resources used in the training stages for the 7B models.
Model | Training phase | GPUs | Approximate average power consumption per GPU | Approximate GPU hours |
---|---|---|---|---|
7B | Pre-training (part 1) | 256 x H200 | 460W | 111,822 |
7B | Pre-training (part 2) | 256 x H100 | 460W | 151,289 |
7B | Long context adaptation | 256 x H100 | 190W | 5,328 |
Environmental Impact
Our H200 and A100 infrastructure runs entirely on 100% renewable energy, ensuring that no CO₂ emissions are directly incurred from training. In addition, the H200 data center boasts a power usage effectiveness (PUE) of ≤1.2, and its operation maintains a net-zero water footprint. Specific numbers on renewable energy usage for the H100 GPUs are not yet available to us.
To estimate the carbon footprint of inference, we base our calculations on publicly available data from the infrastructure provider and, where applicable, standard emissions accounting methodology. We report:
- Carbon emitted: GPU runtime emissions
- Carbon emitted accounting for PUE: GPU runtime emissions scaled by the data center's PUE
Because the data centers operate fully on renewable energy, both metrics for their operation (excluding infrastructure-related emissions, e.g., initial chip manufacturing) are effectively zero. For the H100 GPU infrastructure, no information has been made available to us.
Metric | H200 GPU | H100 GPU | A100 GPU |
---|---|---|---|
Carbon emitted | 0 kg CO₂ | no information available | 0 kg CO₂ |
Carbon emitted accounting for PUE | 0 kg CO₂ | no information available | 0 kg CO₂ |
Power Consumption
GPU Model | Max Power (W) |
---|---|
A100 | 400 W |
H100 | 700 W |
H200 | 700 W |
Numbers may be contextualized with reference to publicly available studies, such as the carbon footprint of language model training.
Intended Use
These models are intended to be deployed as components of AI systems or applications. Use-cases and the model's capabilities include but are not limited to: text generation, classification, summarization, question answering, and labeling. Note that applications might require additional model adaptations or components for guarding against unwanted application behavior or model output.
Non-Permitted Use
Our models shall not be used for illegal or unlawful actions of any kind and with any illegal or unlawful content. This includes in particular prohibited practices according to Article 5 of Regulation (EU) 2024/1689 (EU AI Act) and other activities such as engaging in terrorism, violence, human trafficking, illegal distribution of materials to minors, sexual solicitation, any other criminal activities, harassment, discrimination, creating or promoting malicious code or activities risking death or harm, including those related to military or nuclear applications, and activities not in compliance with sanction regimes, technology export regulations, and other restrictions that may apply. The models are to be used following ethical standards. The utilization of our technology is always governed by, and may be limited in accordance with, our Terms and Conditions, the Open Aleph License, or any specific agreement we might have established with you.
Although we do not inspect the requests sent to our API, we regularly review and monitor potential violations that may be related to our models and, depending on the circumstances of the specific case, take legal action against them. This includes, but is not limited to, enforcement to remove published model content, requesting compensation for damages caused, and account termination or removal of credits.
For non-anonymous reports, we also provide an appeals mechanism for usage policy violations via our dedicated contact address [email protected] to communicate with us.
Customers and partners are enabled to use our ticketing system for appeals, claims, and feedback.
Risks and Limitations
Note: Language models are not agents and are not optimized for prescriptive actions. The use of language models in high-stakes environments, for critical decisions, or to support a user's wellbeing should be performed with additional guardrails in place.
Risk Categories
In the following sections, we describe risk categories and provide examples of completions we would consider inappropriate or harmful. We then describe steps to minimize these risks.
Harmful Language
Large language models can sometimes generate undesired outputs that are unsuitable for certain applications. This includes producing content with harmful language, discriminatory content, inappropriate tone and style, systemic biases, or suggestions that might encourage illegal actions. Such outputs can also include incorrect or outdated information, or material that is not suitable for all ages. While we constantly make efforts to reduce the likelihood of such undesired outputs, this possibility can never be fully ruled out. To minimize these issues, the following strategies can be employed:
- Abiding by the guidance on illegal use provided in this model card.
- Crafting prompts carefully to guide the model's output more effectively.
- Utilizing a finetuned model (often referred to as a control or instruct model) that prioritizes using explicitly provided information.
- Employing a finetuned model designed to maintain an appropriate tone and style, including avoiding offensive language.
- Conducting additional validations at the application level to ensure output quality and appropriateness.
Systemic Biases
Language models obtain world-knowledge from their pre-training data and may therefore exhibit the same systematic biases that are present in the data. Differing deployment scenarios (including differing cultural contexts) can expose systematic biases in different ways. We acknowledge the cultural diversity of communities and users inside and outside the EU. For larger deployments, we encourage users to track systematic biases relevant to their use-case, and we are happy to consult on bespoke fine-tunings to alleviate such biases.
Outdated World Knowledge
Prompt | Completion |
---|---|
What was the warmest year in human history? | The warmest year in human history, based on global average temperatures, is 2016. However, it's important to note that the ranking of the warmest years can vary slightly depending on the dataset used and the methodology applied. |
Pre-training was performed using a fixed dataset created at a fixed date in the past. Accordingly, the world knowledge of foundation models is limited to the information contained in their training data. More recent information may not be known to the model or may be misunderstood when presented as input during live usage. In the above example, the model is convinced that 2016 was the warmest year in human history; however, 2023 ended up breaking that record. This information cannot be known by the model due to its knowledge cutoff.
Risks include:
- Generation of personally identifiable information. Models are not explicitly trained to provide such information, but may seem to provide personally identifiable information. This does not necessarily imply the presence of such information in training data, as hallucination is possible.
- Generation of unintended, irrelevant, or repetitive outputs. This includes the production of incorrect or outdated information.
Risks may be mitigated by:
- Injecting context, where relevant.
- Crafting prompts carefully to guide the model's output more effectively.
- Performing validations on the application layer, e.g., classifying the output.
- Using the repetition penalty, especially in the case of repetitive output, or other parameters available in the API (see documentation).
- Avoiding use cases targeted at retrieval of personally identifiable information.
Political Bias
Our models have not been optimized to represent a political opinion or take a specific point of view. They may generate outputs that contradict a user's opinion or expectation, e.g., produce hateful, violent or inappropriate, biased, or discriminatory content. Such behavior may be addressed by:
- Crafting prompts carefully to guide the model's output more effectively.
- Performing validations on the application layer, e.g., via Red-Teaming or classifying the output.
Mistaken for a Human
Prompt | Completion |
---|---|
Good morning, today is such a beautiful day! | Hello! It sounds like you're starting your day on a positive note. I'm glad to hear that the weather is beautiful. How are you feeling today? Is there anything specific you'd like to discuss or any tasks you'd like some help with? Whether it's planning your day, sharing some thoughts, or getting advice on something, I'm here to assist! Enjoy your day! |
Users may attribute human traits to AI models. This is compounded by the fact that content generated by the model is not explicitly detectable as such at this point. It is therefore required to:
- Inform end users that they are interacting with or reading output of an AI.
- Design the system in a way that mitigates the impact of unintended interpretation of the output.
Other Errors
Any AI module can produce errors, even after implementing all the recommended measures. When integrating foundation language models into an application, users should:
- be aware of the risk of (harmful) failure cases and implement the use case in a way that mitigates such risks.
- be aware that foundation models do not contain application logic, e.g., content filters. Enforcement policies relevant to the use case need to be implemented in the application layer.
- avoid unsupervised use in high-stakes environments.
- validate output with adequate measures.
Mitigation Approach
We specifically tailor model alignment and risk mitigation techniques to each user-facing application built on top of our models, working closely with our customers to refine them according to their unique requirements. Our intention is for these models to undergo further fine-tuning by us and our customers, utilizing their own datasets alongside our support and datasets to ensure suitability for end-user applications, including harm mitigation efforts. Our customers are responsible for adhering to the terms and conditions when aligning the models in their downstream applications.
Reproducibility
Some inference parameters, e.g., temperature, lead to the random sampling of outputs, which precludes the reproducibility of outputs. Even when such parameters are not in use, outputs may diverge slightly on a numeric level for technical reasons. One may implement the following measures if needed:
- Logging of past model outputs on the application layer (Aleph Alpha Research is not storing any data and/or using any data provided in prompts for the training of its LLMs).
This list of risks, biases, and limitations may not be complete, as improving the understanding and behavior of language models is an ongoing research topic in the AI science community.
*Aleph Alpha Research refers to Aleph Alpha Research GmbH