oieieio committed on
Commit f579847 · 1 Parent(s): 1751ed5

Update README.md

Files changed (1)
  1. README.md +218 -173
README.md CHANGED
@@ -1,202 +1,247 @@
---
pipeline_tag: text-generation
tags:
- orca
- orca2
- microsoft
license: other
license_name: microsoft-research-license
license_link: LICENSE
---

# Orca 2 13b AWQ - Quantized 4 Bit

<!-- Provide a quick summary of what the model is/does. -->

Orca 2 is built for research purposes only and provides a single-turn response in tasks such as reasoning over user-given data, reading comprehension, math problem solving, and text summarization. The model is designed to excel particularly in reasoning.

Note that:

1. This is a research model, intended to show that we can use capable models and complex workflows (advanced prompts, multiple calls) to create synthetic data that can teach Small Language Models (SLMs) new capabilities. We chose reasoning because it is a widely useful capability that SLMs lack.
2. The model is not optimized for chat and has not been trained with RLHF or DPO. It is best used after being finetuned for chat or for a specific task.
3. Beyond reasoning, the model inherits the capabilities and limitations of its base model (LLAMA-2). We have already seen that the benefits of Orca training can be applied to other base models too.

We make Orca 2's weights publicly available to support further research on the development, evaluation, and alignment of SLMs.

## What is Orca 2’s intended use(s)?

+ Orca 2 is built for research purposes only.
+ The main purpose is to allow the research community to assess its abilities and to provide a foundation for building better frontier models.
 
## How was Orca 2 evaluated?

+ Orca 2 has been evaluated on a large number of tasks ranging from reasoning to grounding and safety. Please refer to Section 6 and the Appendix of the [Orca 2 paper](https://arxiv.org/pdf/2311.11045.pdf) for details on the evaluations.
 
## Model Details

Orca 2 is a finetuned version of LLAMA-2. Orca 2’s training data is a synthetic dataset that was created to enhance the small model’s reasoning abilities. All synthetic training data was moderated using the Microsoft Azure content filters. More details about the model can be found in the [Orca 2 paper](https://arxiv.org/pdf/2311.11045.pdf).

Please refer to the LLaMA-2 technical report for details on the model architecture.
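Since this repository packages the model as a 4-bit AWQ quantization (per the title above), loading it differs slightly from loading the full-precision weights. A minimal sketch, not part of the original card: the repository id is a placeholder, and it assumes a recent `transformers` with the `autoawq` package installed and a CUDA GPU available.

```python
# Sketch only (not from the original card): loading 4-bit AWQ weights.
# Assumptions: transformers >= 4.35 with autoawq installed, CUDA available,
# and "<quantized-repo-id>" replaced by the repo actually hosting the AWQ export.
import transformers

model_id = "<quantized-repo-id>"  # placeholder for this repo's id

tokenizer = transformers.AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = transformers.AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # the AWQ quantization config stored with the weights is picked up automatically
)
```

No extra quantization argument is needed at load time: the quantization config saved alongside AWQ weights tells `from_pretrained` how to dequantize.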
 

## License

Orca 2 is licensed under the [Microsoft Research License](LICENSE).

Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/llama/license/), Copyright © Meta Platforms, Inc. All Rights Reserved.
## Bias, Risks, and Limitations

Orca 2, built upon the LLaMA 2 model family, retains many of its limitations, as well as the common limitations of other large language models and limitations caused by its training process, including:

**Data Biases**: Large language models, trained on extensive data, can inadvertently carry biases present in the source data. Consequently, the models may generate outputs that could be potentially biased or unfair.

**Lack of Contextual Understanding**: Despite their impressive capabilities in language understanding and generation, these models exhibit limited real-world understanding, resulting in potential inaccuracies or nonsensical responses.

**Lack of Transparency**: Due to their complexity and size, large language models can act as “black boxes”, making it difficult to comprehend the rationale behind specific outputs or decisions. We recommend reviewing the transparency notes from Azure for more information.

**Content Harms**: There are various types of content harms that large language models can cause. It is important to be aware of them when using these models, and to take actions to prevent them. It is recommended to leverage various content moderation services provided by different companies and institutions. On an important note, we hope for better regulations and standards from governments and technology leaders around content harms for AI technologies in the future. We value and acknowledge the important role that the research and open-source community can play in this direction.

**Hallucination**: It is important to be aware of and cautious about relying entirely on a given language model for critical decisions or information that might have deep impact, as it is not obvious how to prevent these models from fabricating content. Moreover, it is not clear whether small models may be more susceptible to hallucination in ungrounded generation use cases due to their smaller sizes and hence reduced memorization capacities. This is an active research topic and we hope there will be more rigorous measurement, understanding, and mitigation around it.

**Potential for Misuse**: Without suitable safeguards, there is a risk that these models could be maliciously used for generating disinformation or harmful content.

**Data Distribution**: Orca 2’s performance is likely to correlate strongly with the distribution of the tuning data. This correlation might limit its accuracy in areas underrepresented in the training dataset, such as math, coding, and reasoning.

**System messages**: Orca 2 demonstrates variance in performance depending on the system instructions. Additionally, the stochasticity introduced by the model size may lead to generation of non-deterministic responses to different system instructions.

**Zero-Shot Settings**: Orca 2 was trained on data that mostly simulates zero-shot settings. While the model demonstrates very strong performance in zero-shot settings, it does not show the same gains from few-shot learning compared to other, especially larger, models.

**Synthetic data**: As Orca 2 is trained on synthetic data, it could inherit both the advantages and shortcomings of the models and methods used for data generation. We posit that Orca 2 benefits from the safety measures incorporated during training and the safety guardrails (e.g., content filter) within the Azure OpenAI API. However, detailed studies are required for better quantification of such risks.

This model is solely designed for research settings, and its testing has only been carried out in such environments. It should not be used in downstream applications, as additional analysis is needed to assess potential harm or bias in the proposed application.

## Getting started with Orca 2

**Inference with Hugging Face library**

```python
import torch
import transformers

if torch.cuda.is_available():
    torch.set_default_device("cuda")
else:
    torch.set_default_device("cpu")

model = transformers.AutoModelForCausalLM.from_pretrained("microsoft/Orca-2-13b", device_map='auto')

# https://github.com/huggingface/transformers/issues/27132
# please use the slow tokenizer, since the fast and slow tokenizers produce different tokens
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "microsoft/Orca-2-13b",
    use_fast=False,
)

system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
user_message = "How can you determine if a restaurant is popular among locals or mainly attracts tourists, and why might this information be useful?"

prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant"

inputs = tokenizer(prompt, return_tensors='pt')
output_ids = model.generate(inputs["input_ids"])
answer = tokenizer.batch_decode(output_ids)[0]

print(answer)

# This example continues by adding a second user turn to the conversation
second_turn_user_message = "Give me a list of the key points of your first answer."

# we set add_special_tokens=False because we don't want a bos_token automatically added between messages
second_turn_message_in_markup = f"\n<|im_start|>user\n{second_turn_user_message}<|im_end|>\n<|im_start|>assistant"
second_turn_tokens = tokenizer(second_turn_message_in_markup, return_tensors='pt', add_special_tokens=False)
second_turn_input = torch.cat([output_ids, second_turn_tokens['input_ids']], dim=1)

output_ids_2 = model.generate(second_turn_input)
second_turn_answer = tokenizer.batch_decode(output_ids_2)[0]

print(second_turn_answer)
```
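The prompt markup built inline above can be factored into a small helper. A minimal sketch (the `build_prompt` name is ours, not part of the model card; the `<|im_start|>`/`<|im_end|>` tags follow the format shown in the examples):

```python
# Hypothetical helper, not from the official card: assembles the
# ChatML-style markup used above for one or more conversation turns.
def build_prompt(system_message, turns):
    """turns: list of (user_message, assistant_reply) pairs; pass None as
    the assistant reply of the final turn to leave it open for generation."""
    prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n"
    for user_message, assistant_reply in turns:
        prompt += f"<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant"
        if assistant_reply is not None:
            prompt += f"\n{assistant_reply}<|im_end|>\n"
    return prompt

single_turn = build_prompt("You are Orca.", [("Hello!", None)])
print(single_turn)
```

The returned string ends with an open `<|im_start|>assistant` tag, so the model's generation continues the assistant turn, exactly as in the examples above.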

**Safe inference with Azure AI Content Safety**

Using [Azure AI Content Safety](https://azure.microsoft.com/en-us/products/ai-services/ai-content-safety/) on top of model predictions is strongly encouraged and can help prevent content harms. Azure AI Content Safety is a content moderation platform that uses AI to keep your content safe. By integrating Orca 2 with Azure AI Content Safety, we can moderate the model output by scanning it for sexual content, violence, hate, and self-harm with multiple severity levels and multilingual detection.

```python
import os
import math
import transformers
import torch

from azure.ai.contentsafety import ContentSafetyClient
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError
from azure.ai.contentsafety.models import AnalyzeTextOptions

CONTENT_SAFETY_KEY = os.environ["CONTENT_SAFETY_KEY"]
CONTENT_SAFETY_ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]

# We use Azure AI Content Safety to filter out any content that reaches the "Medium" threshold
# For more information: https://learn.microsoft.com/en-us/azure/ai-services/content-safety/
def should_filter_out(input_text, threshold=4):
    # Create a Content Safety client
    client = ContentSafetyClient(CONTENT_SAFETY_ENDPOINT, AzureKeyCredential(CONTENT_SAFETY_KEY))

    # Construct a request
    request = AnalyzeTextOptions(text=input_text)

    # Analyze text
    try:
        response = client.analyze_text(request)
    except HttpResponseError as e:
        print("Analyze text failed.")
        if e.error:
            print(f"Error code: {e.error.code}")
            print(f"Error message: {e.error.message}")
            raise
        print(e)
        raise

    categories = ["hate_result", "self_harm_result", "sexual_result", "violence_result"]
    max_score = -math.inf
    for category in categories:
        max_score = max(max_score, getattr(response, category).severity)

    return max_score >= threshold

model_path = 'microsoft/Orca-2-13b'
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = transformers.AutoModelForCausalLM.from_pretrained(model_path)
model.to(device)

tokenizer = transformers.AutoTokenizer.from_pretrained(
    model_path,
    model_max_length=4096,
    padding_side="right",
    use_fast=False,
    add_special_tokens=False,
)

system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
user_message = "\" \n :You can't just say, \"\"that's crap\"\" and remove it without gaining a consensus. You already know this, based on your block history. —/ \" \nIs the comment obscene? \nOptions : Yes, No."

prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant"

inputs = tokenizer(prompt, return_tensors='pt')
inputs = inputs.to(device)

output_ids = model.generate(inputs["input_ids"], max_length=4096, do_sample=False, temperature=0.0, use_cache=True)
sequence_length = inputs["input_ids"].shape[1]
new_output_ids = output_ids[:, sequence_length:]
answers = tokenizer.batch_decode(new_output_ids, skip_special_tokens=True)
final_output = answers[0] if not should_filter_out(answers[0]) else "[Content Filtered]"

print(final_output)
```
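The severity-aggregation logic inside `should_filter_out` can be exercised without an Azure endpoint by stubbing the response object. A sketch: the field names mirror those read above, while the severity values are made up for illustration.

```python
import math
from types import SimpleNamespace

# Stub mimicking the fields should_filter_out reads from an Azure
# AnalyzeText response; the severity values here are hypothetical.
response = SimpleNamespace(
    hate_result=SimpleNamespace(severity=2),
    self_harm_result=SimpleNamespace(severity=0),
    sexual_result=SimpleNamespace(severity=0),
    violence_result=SimpleNamespace(severity=4),
)

categories = ["hate_result", "self_harm_result", "sexual_result", "violence_result"]
max_score = -math.inf
for category in categories:
    max_score = max(max_score, getattr(response, category).severity)

filtered = max_score >= 4  # default threshold used above
print(filtered)  # → True: the violence score reaches the "Medium" (4) threshold
```

Because the filter keys on the *maximum* severity across categories, a single category at or above the threshold is enough to replace the answer with "[Content Filtered]".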

## Citation
```bibtex
@misc{mitra2023orca,
      title={Orca 2: Teaching Small Language Models How to Reason},
      author={Arindam Mitra and Luciano Del Corro and Shweti Mahajan and Andres Codas and Clarisse Simoes and Sahaj Agrawal and Xuxi Chen and Anastasia Razdaibiedina and Erik Jones and Kriti Aggarwal and Hamid Palangi and Guoqing Zheng and Corby Rosset and Hamed Khanpour and Ahmed Awadallah},
      year={2023},
      eprint={2311.11045},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}
```