beomi committed on
Commit 7e208ab
1 Parent(s): 4d1698a

Create README.md

Files changed (1)
  1. README.md +200 -0

README.md ADDED
---
library_name: transformers
tags: []
extra_gated_prompt: "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately."
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---

# Gemma-Ko

**Original Gemma Model Page**: [Gemma](https://ai.google.dev/gemma/docs)

This model card corresponds to the 7B base version of the **Gemma-Ko** model.

**Resources and Technical Documentation**:

* [Original Google's Gemma-7B](https://huggingface.co/google/gemma-7b)
* [Training Code @ Github: Gemma-EasyLM](https://github.com/Beomi/Gemma-EasyLM)

**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)

**Model Developers**: Junbum Lee (Beomi) & Taekyoon Choi (Taekyoon)

## Model Information

Summary description and brief definition of inputs and outputs.

### Description

Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state-of-the-art AI models and helping foster innovation for everyone.

### Usage

Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.

#### Running the model on a CPU

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("beomi/gemma-ko-7b")
model = AutoModelForCausalLM.from_pretrained("beomi/gemma-ko-7b")

input_text = "머신러닝과 딥러닝의 차이는"
input_ids = tokenizer(input_text, return_tensors="pt")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
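
By default, `generate` produces only a short continuation. If you want longer or more varied output, you can pass standard `transformers` generation arguments. A minimal sketch, continuing from the snippet above (the parameter values are illustrative, not tuned recommendations from the model authors):

```python
# Continuing from the snippet above: tokenizer, model, and input_ids are already defined.
outputs = model.generate(
    **input_ids,
    max_new_tokens=64,   # cap the number of newly generated tokens
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```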

#### Running the model on a single / multi GPU

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("beomi/gemma-ko-7b")
model = AutoModelForCausalLM.from_pretrained("beomi/gemma-ko-7b", device_map="auto")

input_text = "머신러닝과 딥러닝의 차이는"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
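
The snippet above loads the weights in the default float32, which for a 7B model needs roughly 28 GB of GPU memory. Loading in half precision roughly halves that, as the Flash Attention example below already does with `torch.float16`. A minimal sketch of the same GPU setup with half-precision weights (this variant is not part of the original card; `torch.bfloat16` is shown, and `torch.float16` works the same way):

```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("beomi/gemma-ko-7b")
model = AutoModelForCausalLM.from_pretrained(
    "beomi/gemma-ko-7b",
    device_map="auto",
    torch_dtype=torch.bfloat16,  # load weights in half precision to reduce GPU memory
)

input_text = "머신러닝과 딥러닝의 차이는"
input_ids = tokenizer(input_text, return_tensors="pt").to(model.device)

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```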

#### Other optimizations

* _Flash Attention 2_

First, make sure to install `flash-attn` in your environment: `pip install flash-attn`.

```diff
model = AutoModelForCausalLM.from_pretrained(
    "beomi/gemma-ko-7b",
    torch_dtype=torch.float16,
+   attn_implementation="flash_attention_2"
).to(0)
```
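
For reference, a complete load call corresponding to the diff above might look like the following. This is a sketch assuming `flash-attn` is installed and a single CUDA GPU is available at index 0; Flash Attention 2 generally requires a reasonably recent NVIDIA GPU.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("beomi/gemma-ko-7b")
model = AutoModelForCausalLM.from_pretrained(
    "beomi/gemma-ko-7b",
    torch_dtype=torch.float16,                # half-precision weights, as in the diff above
    attn_implementation="flash_attention_2",  # use flash-attn kernels for attention
).to(0)                                       # move the model to GPU 0
```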

### Inputs and outputs

* **Input:** Text string, such as a question, a prompt, or a document to be
  summarized.
* **Output:** Generated Korean/English-language text in response to the input, such
  as an answer to a question, or a summary of a document.

## Implementation Information

Details about the model internals.

### Software

Training was done using [beomi/Gemma-EasyLM](https://github.com/Beomi/Gemma-EasyLM).

## Evaluation

Model evaluation metrics and results.

### Benchmark Results

TBD

## Usage and Limitations

These models have certain limitations that users should be aware of.

### Intended Usage

Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.

* Content Creation and Communication
  * Text Generation: These models can be used to generate creative text formats
    such as poems, scripts, code, marketing copy, and email drafts.
* Research and Education
  * Natural Language Processing (NLP) Research: These models can serve as a
    foundation for researchers to experiment with NLP techniques, develop
    algorithms, and contribute to the advancement of the field.
  * Language Learning Tools: Support interactive language learning experiences,
    aiding in grammar correction or providing writing practice.
  * Knowledge Exploration: Assist researchers in exploring large bodies of text
    by generating summaries or answering questions about specific topics.

### Limitations

* Training Data
  * The quality and diversity of the training data significantly influence the
    model's capabilities. Biases or gaps in the training data can lead to
    limitations in the model's responses.
  * The scope of the training dataset determines the subject areas the model can
    handle effectively.
* Context and Task Complexity
  * LLMs are better at tasks that can be framed with clear prompts and
    instructions. Open-ended or highly complex tasks might be challenging.
  * A model's performance can be influenced by the amount of context provided
    (longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
  * Natural language is inherently complex. LLMs might struggle to grasp subtle
    nuances, sarcasm, or figurative language.
* Factual Accuracy
  * LLMs generate responses based on information they learned from their
    training datasets, but they are not knowledge bases. They may generate
    incorrect or outdated factual statements.
* Common Sense
  * LLMs rely on statistical patterns in language. They might lack the ability
    to apply common sense reasoning in certain situations.

### Ethical Considerations and Risks

The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:

* Bias and Fairness
  * LLMs trained on large-scale, real-world text data can reflect socio-cultural
    biases embedded in the training material. These models underwent careful
    scrutiny; input data pre-processing is described and posterior evaluations
    are reported in this card.
* Misinformation and Misuse
  * LLMs can be misused to generate text that is false, misleading, or harmful.
  * Guidelines are provided for responsible use with the model; see the
    [Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability
  * This model card summarizes details on the models' architecture,
    capabilities, limitations, and evaluation processes.
  * A responsibly developed open model offers the opportunity to share
    innovation by making LLM technology accessible to developers and researchers
    across the AI ecosystem.

Risks identified and mitigations:

* Perpetuation of biases: It's encouraged to perform continuous monitoring
  (using evaluation metrics, human review) and the exploration of de-biasing
  techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
  are essential. Developers are encouraged to exercise caution and implement
  appropriate content safety safeguards based on their specific product policies
  and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
  end-user education can help mitigate malicious applications of LLMs.
  Educational resources and reporting mechanisms for users to flag misuse are
  provided. Prohibited uses of Gemma models are outlined in the
  [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
  (Personally Identifiable Information). Developers are encouraged to adhere to
  privacy regulations with privacy-preserving techniques.