ThivyanRR committed on
Commit 833426d · verified · parent: 7740206

Upload model

Files changed (4)
  1. README.md +199 -0
  2. config.json +115 -0
  3. generation_config.json +290 -0
  4. model.safetensors +3 -0
README.md ADDED
@@ -0,0 +1,199 @@
+ ---
+ library_name: transformers
+ tags: []
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
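The getting-started code in the card is still a placeholder. Based on this commit's config.json (`"architectures": ["SeamlessM4TModel"]`, `"_name_or_path": "facebook/hf-seamless-m4t-medium"`), a minimal text-to-speech sketch might look like the following. The checkpoint id is an assumption taken from `_name_or_path` (this repo's own id may differ), and the language codes are examples; `transformers` (the config was written by 4.44.2) and `torch` are required. The heavy work is kept inside a function so nothing is downloaded at import time.

```python
def text_to_speech(text, src_lang="eng", tgt_lang="fra"):
    """Sketch: translate `text` and synthesize speech with SeamlessM4T.

    Assumes the checkpoint behaves like facebook/hf-seamless-m4t-medium,
    the `_name_or_path` recorded in this commit's config.json.
    """
    from transformers import AutoProcessor, SeamlessM4TModel

    processor = AutoProcessor.from_pretrained("facebook/hf-seamless-m4t-medium")
    model = SeamlessM4TModel.from_pretrained("facebook/hf-seamless-m4t-medium")

    inputs = processor(text=text, src_lang=src_lang, return_tensors="pt")
    # With generate_speech left at its default, generate() returns a waveform
    # at the config's 16 kHz sampling rate.
    waveform = model.generate(**inputs, tgt_lang=tgt_lang)[0]
    return waveform.cpu().numpy().squeeze()
```

Speech-to-text and speech-to-speech follow the same pattern, passing `audios=` to the processor instead of `text=`.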
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
config.json ADDED
@@ -0,0 +1,115 @@
+ {
+   "_name_or_path": "facebook/hf-seamless-m4t-medium",
+   "activation_dropout": 0.0,
+   "activation_function": "relu",
+   "adaptor_dropout": 0.1,
+   "adaptor_kernel_size": 8,
+   "adaptor_stride": 8,
+   "add_adapter": true,
+   "architectures": [
+     "SeamlessM4TModel"
+   ],
+   "attention_dropout": 0.1,
+   "bos_token_id": 2,
+   "conv_depthwise_kernel_size": 31,
+   "decoder_attention_heads": 16,
+   "decoder_ffn_dim": 4096,
+   "decoder_layerdrop": 0.05,
+   "decoder_layers": 12,
+   "decoder_start_token_id": 3,
+   "dropout": 0.1,
+   "encoder_attention_heads": 16,
+   "encoder_ffn_dim": 4096,
+   "encoder_layerdrop": 0.05,
+   "encoder_layers": 12,
+   "eos_token_id": 3,
+   "feature_projection_input_dim": 160,
+   "hidden_size": 1024,
+   "initializer_range": 0.02,
+   "is_encoder_decoder": true,
+   "lang_embed_dim": 256,
+   "layer_norm_eps": 1e-05,
+   "leaky_relu_slope": 0.1,
+   "max_new_tokens": 256,
+   "max_position_embeddings": 4096,
+   "max_source_positions": 4096,
+   "model_type": "seamless_m4t",
+   "num_adapter_layers": 1,
+   "num_attention_heads": 16,
+   "num_conv_pos_embedding_groups": 16,
+   "num_conv_pos_embeddings": 128,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embeddings_type": "relative",
+   "resblock_dilation_sizes": [
+     [
+       1,
+       3,
+       5
+     ],
+     [
+       1,
+       3,
+       5
+     ],
+     [
+       1,
+       3,
+       5
+     ]
+   ],
+   "resblock_kernel_sizes": [
+     3,
+     7,
+     11
+   ],
+   "rotary_embedding_base": 10000,
+   "sampling_rate": 16000,
+   "scale_embedding": true,
+   "speech_encoder_attention_heads": 16,
+   "speech_encoder_dropout": 0.0,
+   "speech_encoder_hidden_act": "swish",
+   "speech_encoder_intermediate_size": 4096,
+   "speech_encoder_layerdrop": 0.1,
+   "speech_encoder_layers": 12,
+   "spkr_embed_dim": 256,
+   "t2u_bos_token_id": 0,
+   "t2u_decoder_attention_heads": 16,
+   "t2u_decoder_ffn_dim": 8192,
+   "t2u_decoder_layers": 4,
+   "t2u_decoder_start_token_id": 2,
+   "t2u_encoder_attention_heads": 16,
+   "t2u_encoder_ffn_dim": 8192,
+   "t2u_encoder_layers": 4,
+   "t2u_eos_token_id": 2,
+   "t2u_max_new_tokens": 1024,
+   "t2u_max_position_embeddings": 2048,
+   "t2u_pad_token_id": 1,
+   "t2u_vocab_size": 10082,
+   "torch_dtype": "float32",
+   "transformers_version": "4.44.2",
+   "unit_embed_dim": 1280,
+   "unit_hifi_gan_vocab_size": 10000,
+   "upsample_initial_channel": 512,
+   "upsample_kernel_sizes": [
+     11,
+     8,
+     8,
+     4,
+     4
+   ],
+   "upsample_rates": [
+     5,
+     4,
+     4,
+     2,
+     2
+   ],
+   "use_cache": true,
+   "var_pred_dropout": 0.5,
+   "variance_predictor_kernel_size": 3,
+   "vocab_size": 256206,
+   "vocoder_num_langs": 36,
+   "vocoder_num_spkrs": 200,
+   "vocoder_offset": 4
+ }
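The vocoder settings in this config determine how many waveform samples each speech-unit frame becomes: a HiFi-GAN-style vocoder upsamples by each entry of `upsample_rates` in turn, so the total factor is their product. A quick sanity check on the values above (pure arithmetic, no model needed):

```python
from math import prod

# Values copied from the config.json above.
sampling_rate = 16_000
upsample_rates = [5, 4, 4, 2, 2]

# One unit frame expands to prod(upsample_rates) waveform samples,
# so the unit-frame rate is sampling_rate / that product.
total_upsampling = prod(upsample_rates)             # 5*4*4*2*2 = 320
unit_frame_rate = sampling_rate / total_upsampling  # 16000 / 320 = 50.0 frames/s
```

A 50 Hz unit rate at 16 kHz output is consistent across the checkpoint's vocoder parameters.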
generation_config.json ADDED
@@ -0,0 +1,290 @@
+ {
+   "bos_token_id": 2,
+   "decoder_start_token_id": 3,
+   "eos_token_id": 3,
+   "max_new_tokens": 256,
+   "pad_token_id": 0,
+   "t2u_lang_code_to_id": {
+     "arb": 10043,
+     "ben": 10044,
+     "cat": 10045,
+     "ces": 10046,
+     "cmn": 10047,
+     "cym": 10048,
+     "dan": 10049,
+     "deu": 10050,
+     "eng": 10051,
+     "est": 10052,
+     "fin": 10053,
+     "fra": 10054,
+     "hin": 10055,
+     "ind": 10056,
+     "ita": 10057,
+     "jpn": 10058,
+     "kan": 10059,
+     "kor": 10060,
+     "mlt": 10061,
+     "nld": 10062,
+     "pes": 10063,
+     "pol": 10064,
+     "por": 10065,
+     "ron": 10066,
+     "rus": 10067,
+     "slk": 10068,
+     "spa": 10069,
+     "swe": 10070,
+     "swh": 10071,
+     "tam": 10072,
+     "tel": 10073,
+     "tgl": 10074,
+     "tha": 10075,
+     "tur": 10076,
+     "ukr": 10077,
+     "urd": 10078,
+     "uzn": 10079,
+     "vie": 10080
+   },
+   "text_decoder_lang_to_code_id": {
+     "ace": 256001,
+     "ace_Latn": 256002,
+     "acm": 256003,
+     "acq": 256004,
+     "aeb": 256005,
+     "afr": 256006,
+     "ajp": 256007,
+     "aka": 256008,
+     "als": 256162,
+     "amh": 256009,
+     "apc": 256010,
+     "arb": 256011,
+     "ars": 256012,
+     "ary": 256013,
+     "arz": 256014,
+     "asm": 256015,
+     "ast": 256016,
+     "awa": 256017,
+     "ayr": 256018,
+     "azb": 256019,
+     "azj": 256020,
+     "bak": 256021,
+     "bam": 256022,
+     "ban": 256023,
+     "bel": 256024,
+     "bem": 256025,
+     "ben": 256026,
+     "bho": 256027,
+     "bjn": 256028,
+     "bjn_Latn": 256029,
+     "bod": 256030,
+     "bos": 256031,
+     "bug": 256032,
+     "bul": 256033,
+     "cat": 256034,
+     "ceb": 256035,
+     "ces": 256036,
+     "cjk": 256037,
+     "ckb": 256038,
+     "cmn": 256200,
+     "cmn_Hant": 256201,
+     "crh": 256039,
+     "cym": 256040,
+     "dan": 256041,
+     "deu": 256042,
+     "dik": 256043,
+     "dyu": 256044,
+     "dzo": 256045,
+     "ell": 256046,
+     "eng": 256047,
+     "epo": 256048,
+     "est": 256049,
+     "eus": 256050,
+     "ewe": 256051,
+     "fao": 256052,
+     "fij": 256054,
+     "fin": 256055,
+     "fon": 256056,
+     "fra": 256057,
+     "fur": 256058,
+     "fuv": 256059,
+     "gaz": 256135,
+     "gla": 256060,
+     "gle": 256061,
+     "glg": 256062,
+     "grn": 256063,
+     "guj": 256064,
+     "hat": 256065,
+     "hau": 256066,
+     "heb": 256067,
+     "hin": 256068,
+     "hne": 256069,
+     "hrv": 256070,
+     "hun": 256071,
+     "hye": 256072,
+     "ibo": 256073,
+     "ilo": 256074,
+     "ind": 256075,
+     "isl": 256076,
+     "ita": 256077,
+     "jav": 256078,
+     "jpn": 256079,
+     "kab": 256080,
+     "kac": 256081,
+     "kam": 256082,
+     "kan": 256083,
+     "kas": 256084,
+     "kas_Deva": 256085,
+     "kat": 256086,
+     "kaz": 256089,
+     "kbp": 256090,
+     "kea": 256091,
+     "khk": 256122,
+     "khm": 256092,
+     "kik": 256093,
+     "kin": 256094,
+     "kir": 256095,
+     "kmb": 256096,
+     "kmr": 256099,
+     "knc": 256087,
+     "knc_Latn": 256088,
+     "kon": 256097,
+     "kor": 256098,
+     "lao": 256100,
+     "lij": 256102,
+     "lim": 256103,
+     "lin": 256104,
+     "lit": 256105,
+     "lmo": 256106,
+     "ltg": 256107,
+     "ltz": 256108,
+     "lua": 256109,
+     "lug": 256110,
+     "luo": 256111,
+     "lus": 256112,
+     "lvs": 256101,
+     "mag": 256113,
+     "mai": 256114,
+     "mal": 256115,
+     "mar": 256116,
+     "min": 256117,
+     "mkd": 256118,
+     "mlt": 256120,
+     "mni": 256121,
+     "mos": 256123,
+     "mri": 256124,
+     "mya": 256126,
+     "nld": 256127,
+     "nno": 256128,
+     "nob": 256129,
+     "npi": 256130,
+     "nso": 256131,
+     "nus": 256132,
+     "nya": 256133,
+     "oci": 256134,
+     "ory": 256136,
+     "pag": 256137,
+     "pan": 256138,
+     "pap": 256139,
+     "pbt": 256143,
+     "pes": 256053,
+     "plt": 256119,
+     "pol": 256140,
+     "por": 256141,
+     "prs": 256142,
+     "quy": 256144,
+     "ron": 256145,
+     "run": 256146,
+     "rus": 256147,
+     "sag": 256148,
+     "san": 256149,
+     "sat": 256150,
+     "scn": 256151,
+     "shn": 256152,
+     "sin": 256153,
+     "slk": 256154,
+     "slv": 256155,
+     "smo": 256156,
+     "sna": 256157,
+     "snd": 256158,
+     "som": 256159,
+     "sot": 256160,
+     "spa": 256161,
+     "srd": 256163,
+     "srp": 256164,
+     "ssw": 256165,
+     "sun": 256166,
+     "swe": 256167,
+     "swh": 256168,
+     "szl": 256169,
+     "tam": 256170,
+     "taq": 256177,
+     "taq_Tfng": 256178,
+     "tat": 256171,
+     "tel": 256172,
+     "tgk": 256173,
+     "tgl": 256174,
+     "tha": 256175,
+     "tir": 256176,
+     "tpi": 256179,
+     "tsn": 256180,
+     "tso": 256181,
+     "tuk": 256182,
+     "tum": 256183,
+     "tur": 256184,
+     "twi": 256185,
+     "tzm": 256186,
+     "uig": 256187,
+     "ukr": 256188,
+     "umb": 256189,
+     "urd": 256190,
+     "uzn": 256191,
+     "vec": 256192,
+     "vie": 256193,
+     "war": 256194,
+     "wol": 256195,
+     "xho": 256196,
+     "ydd": 256197,
+     "yor": 256198,
+     "yue": 256199,
+     "zsm": 256125,
+     "zul": 256202
+   },
+   "transformers_version": "4.44.2",
+   "vocoder_lang_code_to_id": {
+     "arb": 0,
+     "ben": 1,
+     "cat": 2,
+     "ces": 3,
+     "cmn": 4,
+     "cym": 5,
+     "dan": 6,
+     "deu": 7,
+     "eng": 8,
+     "est": 9,
+     "fin": 10,
+     "fra": 11,
+     "hin": 12,
+     "ind": 13,
+     "ita": 14,
+     "jpn": 15,
+     "kor": 16,
+     "mlt": 17,
+     "nld": 18,
+     "pes": 19,
+     "pol": 20,
+     "por": 21,
+     "ron": 22,
+     "rus": 23,
+     "slk": 24,
+     "spa": 25,
+     "swe": 26,
+     "swh": 27,
+     "tel": 28,
+     "tgl": 29,
+     "tha": 30,
+     "tur": 31,
+     "ukr": 32,
+     "urd": 33,
+     "uzn": 34,
+     "vie": 35
+   }
+ }
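The three maps in this file route one target-language code into three separate ID spaces: the text decoder's language tokens (all ~200 languages), the text-to-unit model's tokens, and the vocoder's language index (both restricted to the 36 speech-capable languages). A small illustrative sketch, with the dict contents trimmed to three languages but the values copied verbatim from the JSON above (the helper function is ours, not part of transformers):

```python
# Per-language IDs copied from generation_config.json above (three examples).
T2U_LANG_CODE_TO_ID = {"eng": 10051, "fra": 10054, "jpn": 10058}
TEXT_DECODER_LANG_TO_CODE_ID = {"eng": 256047, "fra": 256057, "jpn": 256079}
VOCODER_LANG_CODE_TO_ID = {"eng": 8, "fra": 11, "jpn": 15}

def lang_ids(tgt_lang):
    """Return (text_decoder_id, t2u_id, vocoder_id) for a target language.

    Only the 36 speech-capable languages appear in the t2u and vocoder maps,
    so those two lookups return None for text-only targets.
    """
    return (
        TEXT_DECODER_LANG_TO_CODE_ID[tgt_lang],
        T2U_LANG_CODE_TO_ID.get(tgt_lang),
        VOCODER_LANG_CODE_TO_ID.get(tgt_lang),
    )
```

This is why a `tgt_lang` outside the 36-language set can still be used for text translation but not for speech synthesis.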
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:acc11dbd8127b83b840965c86c56137dd523fd5da88f8a73f39603849e72016e
+ size 4838113416