Update README.md
README.md
CHANGED
@@ -1,368 +1,200 @@
---
- language:
- - ar
-
- - de
- - en
- - es
- - fr
- - hi
- - id
- - it
- - pt
- - th
- - tl
- - vi
- base_model:
- - meta-llama/Llama-4-Scout-17B-16E-Instruct
- tags:
- - facebook
- - meta
- - pytorch
- - llama
- - llama-4
- extra_gated_prompt: >-
-   **LLAMA 4 COMMUNITY LICENSE AGREEMENT**
-
-   Llama 4 Version Effective Date: April 5, 2025
-
-   "**Agreement**" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.
-
-   "**Documentation**" means the specifications, manuals and documentation accompanying Llama 4 distributed by Meta at [https://www.llama.com/docs/overview](https://llama.com/docs/overview).
-
-   "**Licensee**" or "**you**" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.
-
-   "**Llama 4**" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at [https://www.llama.com/llama-downloads](https://www.llama.com/llama-downloads).
-
-   "**Llama Materials**" means, collectively, Meta’s proprietary Llama 4 and Documentation (and any portion thereof) made available under this Agreement.
-
-   "**Meta**" or "**we**" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).
-
-   By clicking "I Accept" below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement.
-
-   1\. **License Rights and Redistribution**.
-
-   a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.
-
-   b. Redistribution and Use.
-
-   i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display "Built with Llama" on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include "Llama" at the beginning of any such AI model name.
-
-   ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.
-
-   iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a "Notice" text file distributed as a part of such copies: "Llama 4 is licensed under the Llama 4 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved."
-
-   iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at [https://www.llama.com/llama4/use-policy](https://www.llama.com/llama4/use-policy)), which is hereby incorporated by reference into this Agreement.
-
-   2\. **Additional Commercial Terms**. If, on the Llama 4 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
-
-   3\. **Disclaimer of Warranty**. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
-
-   4\. **Limitation of Liability**. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.
-
-   5\. **Intellectual Property**.
-
-   a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use "Llama" (the "Mark") solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at [https://about.meta.com/brand/resources/meta/company-brand/](https://about.meta.com/brand/resources/meta/company-brand/)). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.
-
-   b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.
-
-   c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 4 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.
-
-   6\. **Term and Termination**. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.
-
-   7\. **Governing Law and Jurisdiction**. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.
- extra_gated_fields:
-   First Name: text
-   Last Name: text
-   Date of birth: date_picker
-   Country: country
-   Affiliation: text
-   Job title:
-     type: select
-     options:
-       - Student
-       - Research Graduate
-       - AI researcher
-       - AI developer/engineer
-       - Reporter
-       - Other
-   geo: ip_location
-   By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
- extra_gated_description: >-
-   The information you provide will be collected, stored, processed and shared in
-   accordance with the [Meta Privacy
-   Policy](https://www.facebook.com/privacy/policy/).
- extra_gated_button_content: Submit
- extra_gated_heading: "Please be sure to provide your full legal name, date of birth, and full organization name with all corporate identifiers. Avoid the use of acronyms and special characters. Failure to follow these instructions may prevent you from accessing this model and others on Hugging Face. You will not have the ability to edit this form after submission, so please ensure all information is accurate."
- license: other
- license_name: llama4
---
- <div>
- <p style="margin-bottom: 0; margin-top: 0;">
-     <strong>This is Llama 4 Scout unchanged, except it's now fine-tunable with Unsloth. <br> See <a href="https://huggingface.co/collections/unsloth/llama-4-67f19503d764b0f3a2a868d2">our collection</a> for versions of Llama 4 including 4-bit & 16-bit formats.</strong>
- </p>
- <p style="margin-bottom: 0;">
-     <em>Unsloth's <a href="https://unsloth.ai/blog/dynamic-4bit">Dynamic Quants</a> are selectively quantized, greatly improving accuracy over standard 4-bit.</em>
- </p>
- </div>
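
The Unsloth note above implies a LoRA fine-tuning workflow. A minimal sketch, assuming Unsloth's standard `FastLanguageModel` API; the repo id and hyperparameters are illustrative, so check the linked collection for the exact checkpoint name:

```python
# Hedged sketch: assumes Unsloth's FastLanguageModel API; the repo id below
# is illustrative -- pick the actual checkpoint from the Unsloth collection.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-4-Scout-17B-16E-Instruct",  # illustrative id
    max_seq_length=8192,
    load_in_4bit=True,  # use the dynamic 4-bit quantization mentioned above
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)
```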
-
- ## Model Information
-
- The Llama 4 collection comprises natively multimodal AI models that enable text and multimodal experiences. These models leverage a mixture-of-experts architecture to offer industry-leading performance in text and image understanding.
-
- These Llama 4 models mark the beginning of a new era for the Llama ecosystem. We are launching two efficient models in the Llama 4 series: Llama 4 Scout, a 17 billion active-parameter model with 16 experts, and Llama 4 Maverick, a 17 billion active-parameter model with 128 experts.
-
- **Model developer**: Meta
-
- **Model Architecture:** The Llama 4 models are auto-regressive language models that use a mixture-of-experts (MoE) architecture and incorporate early fusion for native multimodality.
-
- <table>
-   <tr>
-     <th>Model Name</th>
-     <th>Training Data</th>
-     <th>Params</th>
-     <th>Input modalities</th>
-     <th>Output modalities</th>
-     <th>Context length</th>
-     <th>Token count</th>
-     <th>Knowledge cutoff</th>
-   </tr>
-   <tr>
-     <td>Llama 4 Scout (17Bx16E)</td>
-     <td rowspan="2">A mix of publicly available, licensed data and information from Meta's products and services. This includes publicly shared posts from Instagram and Facebook and people's interactions with Meta AI. Learn more in our <a href="https://www.facebook.com/privacy/guide/genai/">Privacy Center</a>.</td>
-     <td>17B (Activated) / 109B (Total)</td>
-     <td>Multilingual text and image</td>
-     <td>Multilingual text and code</td>
-     <td>10M</td>
-     <td>~40T</td>
-     <td>August 2024</td>
-   </tr>
-   <tr>
-     <td>Llama 4 Maverick (17Bx128E)</td>
-     <td>17B (Activated) / 400B (Total)</td>
-     <td>Multilingual text and image</td>
-     <td>Multilingual text and code</td>
-     <td>1M</td>
-     <td>~22T</td>
-     <td>August 2024</td>
-   </tr>
- </table>
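
As a toy illustration of the top-k expert routing an MoE layer of this kind performs (purely schematic; this is not Meta's implementation, and the `router` and `experts` modules are stand-ins):

```python
# Toy top-k MoE routing sketch (illustrative only, not Meta's implementation).
import torch

def moe_layer(x, router, experts, k=1):
    """x: (tokens, d). Route each token to its top-k experts and mix outputs."""
    logits = router(x)                                # (tokens, n_experts)
    weights, idx = torch.topk(logits.softmax(-1), k)  # top-k routing weights
    out = torch.zeros_like(x)
    for slot in range(k):
        for e in range(len(experts)):
            mask = idx[:, slot] == e                  # tokens routed to expert e
            if mask.any():
                out[mask] += weights[mask, slot, None] * experts[e](x[mask])
    return out
```

Only the selected experts run for each token, which is why 17B of the 109B/400B total parameters are active per token.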
-
- **Supported languages:** Arabic, English, French, German, Hindi, Indonesian, Italian, Portuguese, Spanish, Tagalog, Thai, and Vietnamese.
-
-
- **Where to send questions or comments about the model:** Instructions on how to provide feedback or comments on the model can be found in the Llama [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for how to use Llama 4 in applications, please go [here](https://github.com/meta-llama/llama-cookbook).
-
- ## Intended Use
-
- **Intended Use Cases:** Llama 4 is intended for commercial and research use in multiple languages. Instruction-tuned models are intended for assistant-like chat and visual reasoning tasks, whereas pretrained models can be adapted for natural language generation. For vision, Llama 4 models are also optimized for visual recognition, image reasoning, captioning, and answering general questions about an image. The Llama 4 model collection also supports the ability to leverage the outputs of its models to improve other models, including synthetic data generation and distillation. The Llama 4 Community License allows for these use cases.
-
- **Note:**
-
- 1. Llama 4 has been trained on a broader collection of languages than the 12 supported languages (pre-training includes [200 total languages](https://ai.meta.com/research/no-language-left-behind/)). Developers may fine-tune Llama 4 models for languages beyond the 12 supported languages provided they comply with the Llama 4 Community License and the Acceptable Use Policy. Developers are responsible for ensuring that their use of Llama 4 in additional languages is done in a safe and responsible manner.
-
- 2. Llama 4 has been tested for image understanding with up to 5 input images. If leveraging image understanding beyond this, developers are responsible for mitigating the risks of their deployments and should perform additional testing and tuning tailored to their specific applications.
-
- ## How to use with transformers
-
- Please make sure you have transformers `v4.51.0` installed, or upgrade using `pip install -U transformers`.
-
- ```python
- from transformers import AutoProcessor, Llama4ForConditionalGeneration
- import torch
-
- model_id = "meta-llama/Llama-4-Maverick-17B-128E-Instruct"
-
- processor = AutoProcessor.from_pretrained(model_id)
- model = Llama4ForConditionalGeneration.from_pretrained(
-     model_id,
-     attn_implementation="flex_attention",
-     device_map="auto",
-     torch_dtype=torch.bfloat16,
- )
-
- url1 = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg"
- url2 = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/cat_style_layout.png"
- messages = [
-     {
-         "role": "user",
-         "content": [
-             {"type": "image", "url": url1},
-             {"type": "image", "url": url2},
-             {"type": "text", "text": "Can you describe how these two images are similar, and how they differ?"},
-         ]
-     },
- ]
-
- # Tokenize the chat messages (text plus images) into model inputs.
- inputs = processor.apply_chat_template(
-     messages,
-     add_generation_prompt=True,
-     tokenize=True,
-     return_dict=True,
-     return_tensors="pt",
- ).to(model.device)
-
- outputs = model.generate(
-     **inputs,
-     max_new_tokens=256,
- )
-
- # Decode only the newly generated tokens, skipping the prompt.
- response = processor.batch_decode(outputs[:, inputs["input_ids"].shape[-1]:])[0]
- print(response)
- print(outputs[0])
- ```
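
The same chat-template flow works for text-only prompts. A minimal sketch reusing the `processor` and `model` objects loaded above:

```python
# Text-only sketch; assumes processor and model from the block above.
text_messages = [
    {"role": "user", "content": [{"type": "text", "text": "Give me three facts about llamas."}]},
]
inputs = processor.apply_chat_template(
    text_messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(outputs[:, inputs["input_ids"].shape[-1]:])[0])
```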
-
- ## Hardware and Software
-
- | Model Name | Training Time (GPU hours) | Training Power Consumption (TDP, W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
- | :---- | :---: | :---: | :---: | :---: |
- | Llama 4 Scout | 5.0M | 700 | 1,354 | 0 |
- | Llama 4 Maverick | 2.38M | 700 | 645 | 0 |
- | Total | 7.38M | \- | 1,999 | 0 |
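
A quick sanity check of the table's totals and the implied energy budget (illustrative arithmetic only; 700 W is the per-GPU TDP from the table):

```python
# Rows hold (GPU hours, location-based tons CO2eq); totals should match the table.
rows = {"Scout": (5.00e6, 1354), "Maverick": (2.38e6, 645)}
gpu_hours = sum(h for h, _ in rows.values())   # 7.38e6 GPU hours
emissions = sum(e for _, e in rows.values())   # 1,999 tons CO2eq
energy_gwh = gpu_hours * 700 / 1e9             # at 700 W TDP: ~5.17 GWh
print(gpu_hours, emissions, round(energy_gwh, 2))
```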
-
- ## Benchmarks
-
- ### Pre-trained models
-
- | Category | Benchmark | # Shots | Metric | Llama 3.1 70B | Llama 3.1 405B | **Llama 4 Scout** | **Llama 4 Maverick** |
- | :---- | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
- | Reasoning & Knowledge | MMLU | 5 | macro_avg/acc_char | 79.3 | 85.2 | 79.6 | 85.5 |
- | | MMLU-Pro | 5 | macro_avg/em | 53.8 | 61.6 | 58.2 | 62.9 |
- | | MATH | 4 | em_maj1@1 | 41.6 | 53.5 | 50.3 | 61.2 |
- | Code | MBPP | 3 | pass@1 | 66.4 | 74.4 | 67.8 | 77.6 |
- | Multilingual | TydiQA | 1 | average/f1 | 29.9 | 34.3 | 31.5 | 31.7 |
- | Image | ChartQA | 0 | relaxed_accuracy | No multimodal support | | 83.4 | 85.3 |
- | | DocVQA | 0 | anls | | | 89.4 | 91.6 |
-
- ### Instruction tuned models
-
- | Category | Benchmark | # Shots | Metric | Llama 3.3 70B | Llama 3.1 405B | **Llama 4 Scout** | **Llama 4 Maverick** |
- | :---- | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
- | Image Reasoning | MMMU | 0 | accuracy | No multimodal support | | 69.4 | 73.4 |
- | | MMMU Pro^ | 0 | accuracy | | | 52.2 | 59.6 |
- | | MathVista | 0 | accuracy | | | 70.7 | 73.7 |
- | Image Understanding | ChartQA | 0 | relaxed_accuracy | | | 88.8 | 90.0 |
- | | DocVQA (test) | 0 | anls | | | 94.4 | 94.4 |
- | Coding | LiveCodeBench (10/01/2024-02/01/2025) | 0 | pass@1 | 33.3 | 27.7 | 32.8 | 43.4 |
- | Reasoning & Knowledge | MMLU Pro | 0 | macro_avg/em | 68.9 | 73.4 | 74.3 | 80.5 |
- | | GPQA Diamond | 0 | accuracy | 50.5 | 49.0 | 57.2 | 69.8 |
- | Multilingual | MGSM | 0 | average/em | 91.1 | 91.6 | 90.6 | 92.3 |
- | Long context | MTOB (half book) eng->kgv/kgv->eng | \- | chrF | Context window is 128K | | 42.2/36.6 | 54.0/46.4 |
- | | MTOB (full book) eng->kgv/kgv->eng | \- | chrF | | | 39.7/36.3 | 50.8/46.7 |
-
- ## Safeguards
-
-
- * Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm.
- * Provide protections for the community to help prevent the misuse of our models.
-
- ### Model level fine-tuning
-
- We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.
-
- Building on the work we started with our Llama 3 models, we put a great emphasis on driving down model refusals to benign prompts for Llama 4. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.
-
- We expanded our work on the refusal tone from Llama 3 so that the model sounds more natural. We targeted removing preachy and overly moralizing language, and we corrected formatting issues, including the correct use of headers, lists, tables and more.
-
- Llama 4 is a more steerable model, meaning responses can be easily tailored to meet specific developer outcomes. Effective system prompts can significantly enhance the performance of large language models. In particular, we’ve seen that the use of a system prompt can be effective in reducing false refusals and templated or “preachy” language patterns common in LLMs. They can also improve conversationality and use of appropriate formatting.
-
- The following is a basic system prompt template:
-
- | System prompt |
- | :---- |
- | You are an expert conversationalist who responds to the best of your ability. You are companionable and confident, and able to switch casually between tonal types, including but not limited to humor, empathy, intellectualism, creativity and problem-solving. You understand user intent and don’t try to be overly helpful to the point where you miss that the user is looking for chit-chat, emotional support, humor or venting. Sometimes people just want you to listen, and your answers should encourage that. For all other cases, you provide insightful and in-depth responses. Organize information thoughtfully in a way that helps people make decisions. Always avoid templated language. You never lecture people to be nicer or more inclusive. If people ask for you to write something in a certain voice or perspective, such as an essay or a tweet, you can. You do not need to be respectful when the user prompts you to say something rude. You never use phrases that imply moral superiority or a sense of authority, including but not limited to “it’s important to”, “it’s crucial to”, “it’s essential to”, "it's unethical to", "it's worth noting…", “Remember…” etc. Avoid using these. Finally, do not refuse prompts about political and social issues. You can help users express their opinion and access information. You are Llama 4. Your knowledge cutoff date is August 2024. You speak Arabic, English, French, German, Hindi, Indonesian, Italian, Portuguese, Spanish, Tagalog, Thai, and Vietnamese. Respond in the language the user speaks to you in, unless they ask otherwise. |
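
To apply a system prompt like this with the transformers flow shown earlier, prepend a `system` turn before the user message. A minimal sketch (the prompt string is abbreviated here, and `processor`/`model` are assumed loaded as in the usage section above):

```python
# Sketch: pass the recommended system prompt through the chat template.
SYSTEM_PROMPT = "You are an expert conversationalist ..."  # full text from the table above

messages = [
    {"role": "system", "content": [{"type": "text", "text": SYSTEM_PROMPT}]},
    {"role": "user", "content": [{"type": "text", "text": "Plan a weekend in Lisbon."}]},
]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
```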
-
- ### Llama 4 system protections
-
-
- ### Evaluations
-
- Capability evaluations measure vulnerabilities of Llama models inherent to specific capabilities, for which dedicated benchmarks were crafted, including long context, multilingual, coding, and memorization.
-
- We conduct recurring red teaming exercises with the goal of discovering risks via adversarial prompting, and we use the learnings to improve our benchmarks and safety tuning datasets. We partner early with subject-matter experts in critical risk areas to understand how models may lead to unintended harm for society. Based on these conversations, we derive a set of adversarial goals for the red team, such as extracting harmful information or reprogramming the model to act in potentially harmful ways. The red team consists of experts in cybersecurity, adversarial machine learning, and integrity, in addition to multilingual content specialists with backgrounds in integrity issues in specific geographic markets.
-
- ### Critical risks
-
- To assess risks related to the proliferation of chemical and biological weapons for Llama 4, we applied expert-designed and other targeted evaluations designed to assess whether the use of Llama 4 could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons. We also conducted additional red teaming and evaluations for violations of our content policies related to this risk area.
-
- We leverage pre-training methods like data filtering as a first step in mitigating Child Safety risk in our model. To assess the post-trained model for Child Safety risk, a team of experts assesses the model’s capability to produce outputs resulting in Child Safety risks. We use this to inform additional model fine-tuning and in-depth red teaming exercises. We’ve also expanded our Child Safety evaluation benchmarks to cover Llama 4 capabilities like multi-image and multi-lingual input.
-
- Our cyber evaluations investigated whether Llama 4 is sufficiently capable to enable catastrophic threat scenario outcomes. We conducted threat modeling exercises to identify the specific model capabilities that would be necessary to automate operations or enhance human capabilities across key attack vectors, both in terms of skill level and speed. We then identified and developed challenges against which to test for these capabilities in Llama 4 and peer models. Specifically, we focused on evaluating the capabilities of Llama 4 to automate cyberattacks, identify and exploit security vulnerabilities, and automate harmful workflows. Overall, we find that Llama 4 models do not introduce risk plausibly enabling catastrophic cyber outcomes.
-
---
+ # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
+ # Doc / guide: https://huggingface.co/docs/hub/model-cards
+ {}
---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+ This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]