RiccardoDav committed · Commit 6583be2 · verified · 1 Parent(s): 344d48f

Dear model owner(s),
We are a group of researchers investigating the usefulness of sharing AIBOMs (Artificial Intelligence Bills of Materials) to document AI models. AIBOMs are machine-readable, structured lists of the components (e.g., datasets and models) used to build an AI model, intended to enhance transparency in AI-model supply chains.

To pursue the above-mentioned objective, we identified popular models on HuggingFace and, based on your model card (and some configuration information available on HuggingFace), we generated your AIBOM according to the CycloneDX v1.6 standard (see https://cyclonedx.org/docs/1.6/json/). AIBOMs are generated as JSON files using the following open-source supporting tool: https://github.com/MSR4SBOM/ALOHA (technical details are available in the research paper: https://github.com/MSR4SBOM/ALOHA/blob/main/ALOHA.pdf).

The JSON file in this pull request is your AIBOM (see https://github.com/MSR4SBOM/ALOHA/blob/main/documentation.json for details on its structure).
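Since the AIBOM is plain CycloneDX JSON, it can be consumed with standard tooling. Below is a minimal sketch (plain Python, standard library only, assuming the file from this pull request has been saved locally as allenai_OLMo-1B.json) of how a few key fields can be read programmatically:

```python
import json

# Load the AIBOM added in this pull request (assumed to be saved locally).
with open("allenai_OLMo-1B.json") as f:
    bom = json.load(f)

# The documented model lives under metadata.component.
model = bom["metadata"]["component"]
print(model["name"])                                  # allenai/OLMo-1B
print(model["licenses"][0]["license"]["id"])          # Apache-2.0
print(model["modelCard"]["modelParameters"]["task"])  # text-generation
```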

The submitted AIBOM matches the current model information; if the model evolves, it can easily be regenerated with the aforementioned AIBOM generator tool.

We are opening this pull request containing an AIBOM of your AI model and hope you will consider it. We would also like to hear your opinion on the usefulness (or not) of AIBOMs by answering a 3-minute anonymous survey: https://forms.gle/WGffSQD5dLoWttEe7.

Thanks in advance, and regards,
Riccardo D’Avino, Fatima Ahmed, Sabato Nocera, Simone Romano, Giuseppe Scanniello (University of Salerno, Italy),
Massimiliano Di Penta (University of Sannio, Italy),
The MSR4SBOM team

Files changed (1)
  1. allenai_OLMo-1B.json +120 -0
allenai_OLMo-1B.json ADDED
@@ -0,0 +1,120 @@
+ {
+   "bomFormat": "CycloneDX",
+   "specVersion": "1.6",
+   "serialNumber": "urn:uuid:c7a7cd4d-b598-4466-8f02-1ce70492fd11",
+   "version": 1,
+   "metadata": {
+     "timestamp": "2025-06-05T09:39:19.611048+00:00",
+     "component": {
+       "type": "machine-learning-model",
+       "bom-ref": "allenai/OLMo-1B-9441fd26-4ca0-5e30-91b5-8045c73a1f7c",
+       "name": "allenai/OLMo-1B",
+       "externalReferences": [
+         {
+           "url": "https://huggingface.co/allenai/OLMo-1B",
+           "type": "documentation"
+         }
+       ],
+       "modelCard": {
+         "modelParameters": {
+           "task": "text-generation",
+           "architectureFamily": "hf_olmo",
+           "modelArchitecture": "OLMoForCausalLM",
+           "datasets": [
+             {
+               "ref": "allenai/dolma-1717a952-0159-5eb7-85fb-7ccf03c0a520"
+             }
+           ]
+         },
+         "properties": [
+           {
+             "name": "library_name",
+             "value": "transformers"
+           }
+         ],
+         "consideration": {
+           "useCases": "<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->"
+         }
+       },
+       "authors": [
+         {
+           "name": "allenai"
+         }
+       ],
+       "licenses": [
+         {
+           "license": {
+             "id": "Apache-2.0",
+             "url": "https://spdx.org/licenses/Apache-2.0.html"
+           }
+         }
+       ],
+       "description": "The core models released in this batch are the following:| Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length ||------|--------|---------|-------------|-----------------|----------------|| [OLMo 1B](https://huggingface.co/allenai/OLMo-1B) | 3 Trillion |16 | 2048 | 16 | 2048 || [OLMo 7B](https://huggingface.co/allenai/OLMo-7B) | 2.5 Trillion | 32 | 4096 | 32 | 2048 || [OLMo 7B Twin 2T](https://huggingface.co/allenai/OLMo-7B-Twin-2T) | 2 Trillion | 32 | 4096 | 32 | 2048 |We are releasing many checkpoints for these models, for every 1000 traing steps.The naming convention is `step1000-tokens4B`.In particular, we focus on four revisions of the 7B models:| Name | HF Repo | Model Revision | Tokens | Note ||------------|---------|----------------|-------------------|------||OLMo 7B| [allenai/OLMo-7B](https://huggingface.co/allenai/OLMo-7B)|`main`| 2.5T|The base OLMo 7B model||OLMo 7B (not annealed)|[allenai/OLMo-7B](https://huggingface.co/allenai/OLMo-7B)|step556000-tokens2460B|2.5T| learning rate not annealed to 0||OLMo 7B-2T|[allenai/OLMo-7B](https://huggingface.co/allenai/OLMo-7B)| step452000-tokens2000B |2T| OLMo checkpoint at 2T tokens||OLMo-7B-Twin-2T|[allenai/OLMo-7B-Twin-2T](https://huggingface.co/allenai/OLMo-7B-Twin-2T)|`main`|2T| Twin version on different hardware|To load a specific model revision with HuggingFace, simply add the argument `revision`:```bashfrom hf_olmo import OLMoForCausalLM # pip install ai2-olmoolmo = OLMoForCausalLM.from_pretrained(\"allenai/OLMo-1B\", revision=\"step20000-tokens84B\")```All revisions/branches are listed in the file `revisions.txt`.Or, you can access all the revisions for the models via the following code snippet:```pythonfrom huggingface_hub import list_repo_refsout = list_repo_refs(\"allenai/OLMo-1B\")branches = [b.name for b in out.branches]```A few revisions were lost due to an error, but the vast majority are present.",
+       "tags": [
+         "transformers",
+         "pytorch",
+         "safetensors",
+         "hf_olmo",
+         "text-generation",
+         "custom_code",
+         "en",
+         "dataset:allenai/dolma",
+         "arxiv:2402.00838",
+         "arxiv:2302.13971",
+         "license:apache-2.0",
+         "autotrain_compatible",
+         "region:us"
+       ]
+     }
+   },
+   "components": [
+     {
+       "type": "data",
+       "bom-ref": "allenai/dolma-1717a952-0159-5eb7-85fb-7ccf03c0a520",
+       "name": "allenai/dolma",
+       "data": [
+         {
+           "type": "dataset",
+           "bom-ref": "allenai/dolma-1717a952-0159-5eb7-85fb-7ccf03c0a520",
+           "name": "allenai/dolma",
+           "contents": {
+             "url": "https://huggingface.co/datasets/allenai/dolma",
+             "properties": [
+               {
+                 "name": "task_categories",
+                 "value": "text-generation"
+               },
+               {
+                 "name": "language",
+                 "value": "en"
+               },
+               {
+                 "name": "size_categories",
+                 "value": "n>1T"
+               },
+               {
+                 "name": "pretty_name",
+                 "value": "Dolma"
+               },
+               {
+                 "name": "license",
+                 "value": "odc-by"
+               }
+             ]
+           },
+           "governance": {
+             "owners": [
+               {
+                 "organization": {
+                   "name": "allenai",
+                   "url": "https://huggingface.co/allenai"
+                 }
+               }
+             ]
+           },
+           "description": "Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research"
+         }
+       ]
+     }
+   ]
+ }
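As a final illustration of the supply-chain transparency an AIBOM provides, the sketch below (same assumptions as the earlier snippet) walks the components array and lists each dataset dependency together with its owner:

```python
import json

with open("allenai_OLMo-1B.json") as f:
    bom = json.load(f)

# Top-level components of type "data" document dataset dependencies.
for comp in bom.get("components", []):
    if comp["type"] != "data":
        continue
    for dataset in comp["data"]:
        owners = dataset.get("governance", {}).get("owners", [])
        names = ", ".join(o["organization"]["name"] for o in owners)
        print(f'{dataset["name"]} (owner: {names}) -> {dataset["contents"]["url"]}')
```

For this file, the loop prints the single dataset dependency allenai/dolma, owned by allenai.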