saheedniyi committed (verified) · commit ce3dad4 · parent: eb99f00

Update README.md

Files changed (1): README.md (+349, -170)

README.md CHANGED
---
library_name: transformers
license: apache-2.0
language:
- en
base_model:
- HuggingFaceTB/SmolLM2-360M
pipeline_tag: text-to-speech
---

# YarnGPT

![image/png](https://huggingface.co/saheedniyi/YarnGPT/resolve/main/audio/logo.webp)

## Table of Contents

1. [Model Summary](#model-summary)
2. [Model Description](#model-description)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
   - [Recommendations](#recommendations)
4. [Speech Samples](#speech-samples)
5. [Training](#training)
6. [Future Improvements](#future-improvements)
7. [Citation](#citation)
8. [Credits & References](#credits--references)

## Model Summary

YarnGPT is a text-to-speech (TTS) model that synthesizes Nigerian-accented English through pure language modelling, without external adapters or complex architectures, offering high-quality, natural, and culturally relevant speech for diverse applications.

<video controls width="600">
  <source src="https://huggingface.co/saheedniyi/YarnGPT/resolve/main/audio/YearnGPT.mp4" type="video/mp4">
  Your browser does not support the video tag.
</video>

### How to use (Colab)

The model can generate audio on its own, but it is better to prompt it with one of the 11 voices supported by default (6 male and 5 female):
- zainab
- jude
- tayo
- remi
- idera (default and best voice)
- regina
- chinenye
- umar
- osagie
- joke
- emma

(The names do not correlate to any tribe or accent.)

### Prompt YarnGPT
```python
# clone the YarnGPT repo to get access to the `audiotokenizer` module
!git clone https://github.com/saheedniyi02/yarngpt.git

# install the necessary libraries
!pip install outetts==0.2.3 uroman

# import the required packages
import os
import re
import json
import torch
import inflect
import random
import uroman as ur
import numpy as np
import torchaudio
import IPython
from transformers import AutoModelForCausalLM, AutoTokenizer
from outetts.wav_tokenizer.decoder import WavTokenizer
from yarngpt.audiotokenizer import AudioTokenizer

# download the WavTokenizer weights and config (to encode and decode the audio)
!wget https://huggingface.co/novateur/WavTokenizer-medium-speech-75token/resolve/main/wavtokenizer_mediumdata_frame75_3s_nq1_code4096_dim512_kmeans200_attn.yaml
!wget https://huggingface.co/novateur/WavTokenizer-large-speech-75token/resolve/main/wavtokenizer_large_speech_320_24k.ckpt

# model path and WavTokenizer weight paths (the paths assume Google Colab;
# a different environment might save the weights to a different location)
hf_path = "saheedniyi/YarnGPT"
wav_tokenizer_config_path = "/content/wavtokenizer_mediumdata_frame75_3s_nq1_code4096_dim512_kmeans200_attn.yaml"
wav_tokenizer_model_path = "/content/wavtokenizer_large_speech_320_24k.ckpt"

# create the AudioTokenizer object
audio_tokenizer = AudioTokenizer(
    hf_path, wav_tokenizer_model_path, wav_tokenizer_config_path
)

# load the model weights
model = AutoModelForCausalLM.from_pretrained(hf_path, torch_dtype="auto").to(audio_tokenizer.device)

# your input text
text = "Uhm, so, what was the inspiration behind your latest project? Like, was there a specific moment where you were like, 'Yeah, this is it!' Or, you know, did it just kind of, uh, come together naturally over time?"

# create a prompt; the optional `speaker_name` parameter can be any of
# "idera", "emma", "jude", "osagie", "tayo", "zainab", "joke", "regina",
# "remi", "umar", "chinenye" (a speaker is chosen at random if none is given)
prompt = audio_tokenizer.create_prompt(text, "idera")

# tokenize the prompt
input_ids = audio_tokenizer.tokenize_prompt(prompt)

# generate output from the model; tune the `.generate` parameters as you wish
output = model.generate(
    input_ids=input_ids,
    temperature=0.1,
    repetition_penalty=1.1,
    max_length=4000,
)

# convert the output to "audio codes"
codes = audio_tokenizer.get_codes(output)

# convert the codes to audio
audio = audio_tokenizer.get_audio(codes)

# play the audio
IPython.display.Audio(audio, rate=24000)

# save the audio
torchaudio.save("audio.wav", audio, sample_rate=24000)
```
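
For repeated generation it can help to wrap the steps above in a small helper. The sketch below simply reuses the `model` and `audio_tokenizer` objects created in the previous cell; the `generate_speech` function and its defaults are illustrative, not part of the YarnGPT API.

```python
# hypothetical convenience wrapper around the steps shown above;
# assumes `model` and `audio_tokenizer` already exist as created earlier
def generate_speech(text, speaker="idera", out_path=None):
    prompt = audio_tokenizer.create_prompt(text, speaker)
    input_ids = audio_tokenizer.tokenize_prompt(prompt)
    output = model.generate(
        input_ids=input_ids,
        temperature=0.1,
        repetition_penalty=1.1,
        max_length=4000,
    )
    codes = audio_tokenizer.get_codes(output)
    audio = audio_tokenizer.get_audio(codes)
    if out_path is not None:
        torchaudio.save(out_path, audio, sample_rate=24000)
    return audio

# compare a few of the default voices on the same sentence
for voice in ["idera", "zainab", "jude"]:
    generate_speech("Scientists have discovered a new planet that may be capable of supporting life!", voice, f"{voice}.wav")
```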

### Simple Nigerian-Accented News Reader
```python
# clone the YarnGPT repo
!git clone https://github.com/saheedniyi02/yarngpt.git

# install the necessary libraries
!pip install outetts uroman trafilatura pydub

import os
import re
import json
import torch
import inflect
import random
import requests
import trafilatura
import uroman as ur
import numpy as np
import torchaudio
import IPython
from pydub import AudioSegment
from pydub.effects import normalize
from transformers import AutoModelForCausalLM, AutoTokenizer
from outetts.wav_tokenizer.decoder import WavTokenizer

# download the WavTokenizer weights and config
!wget https://huggingface.co/novateur/WavTokenizer-medium-speech-75token/resolve/main/wavtokenizer_mediumdata_frame75_3s_nq1_code4096_dim512_kmeans200_attn.yaml
!wget https://huggingface.co/novateur/WavTokenizer-large-speech-75token/resolve/main/wavtokenizer_large_speech_320_24k.ckpt

from yarngpt.audiotokenizer import AudioTokenizer

tokenizer_path = "saheedniyi/YarnGPT"
wav_tokenizer_config_path = "/content/wavtokenizer_mediumdata_frame75_3s_nq1_code4096_dim512_kmeans200_attn.yaml"
wav_tokenizer_model_path = "/content/wavtokenizer_large_speech_320_24k.ckpt"

audio_tokenizer = AudioTokenizer(
    tokenizer_path, wav_tokenizer_model_path, wav_tokenizer_config_path
)

model = AutoModelForCausalLM.from_pretrained(tokenizer_path, torch_dtype="auto").to(audio_tokenizer.device)


def split_text_into_chunks(text, word_limit=25):
    """Split a long web page into chunks of at most `word_limit` words."""
    sentences = [sentence.strip() for sentence in text.split(".") if sentence.strip()]
    chunks = []
    for sentence in sentences:
        chunks.append(".")  # sentence-boundary marker, turned into silence below
        words = sentence.split(" ")
        num_words = len(words)
        start_index = 0
        if num_words > word_limit:
            while start_index < num_words:
                end_index = min(num_words, start_index + word_limit)
                chunks.append(" ".join(words[start_index:end_index]))
                start_index = end_index
        else:
            chunks.append(sentence)
    return chunks


# extract the content of a web page
page = requests.get("https://punchng.com/expensive-feud-how-burna-boy-cubana-chief-priests-fight-led-to-dollar-rain/")
content = trafilatura.extract(page.text)
chunks = split_text_into_chunks(content)

# loop over the chunks, collecting the generated codes in one large `all_codes` list
all_codes = []
for i, chunk in enumerate(chunks):
    print(i)
    print("\n")
    print(chunk)
    if chunk == ".":
        # add roughly 0.25 seconds of silence whenever we hit a full stop
        all_codes.extend([453] * 20)
    else:
        prompt = audio_tokenizer.create_prompt(chunk, "chinenye")
        input_ids = audio_tokenizer.tokenize_prompt(prompt)
        output = model.generate(
            input_ids=input_ids,
            temperature=0.1,
            repetition_penalty=1.1,
            max_length=4000,
        )
        codes = audio_tokenizer.get_codes(output)
        all_codes.extend(codes)

# convert the collected codes to audio
audio = audio_tokenizer.get_audio(all_codes)
IPython.display.Audio(audio, rate=24000)
torchaudio.save("news1.wav", audio, sample_rate=24000)
```
+
220
+ ## Model Description
221
+
222
+ - **Developed by:** [Saheedniyi](https://linkedin.com/in/azeez-saheed)
223
+ - **Model type:** Text-to-Speech
224
+ - **Language(s) (NLP):** English--> Nigerian Accented English
225
+ - **Finetuned from:** [HuggingFaceTB/SmolLM2-360M](https://huggingface.co/HuggingFaceTB/SmolLM2-360M)
226
+ - **Repository:** [YarnGPT Github Repository](https://github.com/saheedniyi02/yarngpt)
227
+ - **Paper:** IN PROGRESS.
228
+ - **Demo:** 1) [Prompt YarnGPT notebook](https://colab.research.google.com/drive/11zMUrfBiLa1gEflAKp8lliSOTNQ-X_nU?usp=sharing)
229
+ 2) [Simple news reader](https://colab.research.google.com/drive/1SsXV08kly1TUJVM_NFpKqQWOZ1gUZpGe?usp=sharing)
230
+
231
+
232
+
233
+ #### Uses
234
+
235
+ Generate Nigerian-accented English speech for experimental purposes.
236
+
237
+
238
+ #### Out-of-Scope Use
239
+
240
+ The model is not suitable for generating speech in languages other than English or other accents.
241
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
242
 
243
## Bias, Risks, and Limitations

The model may not capture the full diversity of Nigerian accents and could exhibit biases based on the training dataset. In addition, much of the text the model was trained on was automatically generated, which could impact performance.

#### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. Feedback and contributions of diverse training data are encouraged.

## Speech Samples

Listen to samples generated by YarnGPT:

<div style="margin-top: 20px;">
  <table style="width: 100%; border-collapse: collapse;">
    <thead>
      <tr>
        <th style="border: 1px solid #ddd; padding: 8px; text-align: left; width: 40%;">Input</th>
        <th style="border: 1px solid #ddd; padding: 8px; text-align: left; width: 40%;">Audio</th>
        <th style="border: 1px solid #ddd; padding: 8px; text-align: left; width: 10%;">Notes</th>
      </tr>
    </thead>
    <tbody>
      <tr>
        <td style="border: 1px solid #ddd; padding: 8px;">Hello world! I am Saheed Azeez and I am excited to announce the release of his project, I have been gathering data and learning how to build Audio-based models over the last two months, but thanks to God, I have been able to come up with something</td>
        <td style="border: 1px solid #ddd; padding: 8px;">
          <audio controls style="width: 100%;">
            <source src="https://huggingface.co/saheedniyi/YarnGPT/resolve/main/audio/Sample_1.wav" type="audio/wav">
            Your browser does not support the audio element.
          </audio>
        </td>
        <td style="border: 1px solid #ddd; padding: 8px;">(temperature=0.1, repetition_penalty=1.1), voice: idera</td>
      </tr>
      <tr>
        <td style="border: 1px solid #ddd; padding: 8px;">Wizkid, Davido, Burna Boy perform at same event in Lagos. This event has sparked many reactions across social media, with fans and critics alike praising the artistes' performances and the rare opportunity to see the three music giants on the same stage.</td>
        <td style="border: 1px solid #ddd; padding: 8px;">
          <audio controls style="width: 100%;">
            <source src="https://huggingface.co/saheedniyi/YarnGPT/resolve/main/audio/Sample_2.wav" type="audio/wav">
            Your browser does not support the audio element.
          </audio>
        </td>
        <td style="border: 1px solid #ddd; padding: 8px;">(temperature=0.1, repetition_penalty=1.1), voice: jude</td>
      </tr>
      <tr>
        <td style="border: 1px solid #ddd; padding: 8px;">Since Nigeria became a republic in 1963, 14 individuals have served as head of state of Nigeria under different titles. The incumbent president Bola Tinubu is the nation's 16th head of state.</td>
        <td style="border: 1px solid #ddd; padding: 8px;">
          <audio controls style="width: 100%;">
            <source src="https://huggingface.co/saheedniyi/YarnGPT/resolve/main/audio/Sample_3.wav" type="audio/wav">
            Your browser does not support the audio element.
          </audio>
        </td>
        <td style="border: 1px solid #ddd; padding: 8px;">(temperature=0.1, repetition_penalty=1.1), voice: zainab; the model struggled to pronounce "in 1963"</td>
      </tr>
      <tr>
        <td style="border: 1px solid #ddd; padding: 8px;">I visited the President, who has shown great concern for the security of Plateau State, especially considering that just a year ago, our state was in mourning. The President's commitment to addressing these challenges has been steadfast.</td>
        <td style="border: 1px solid #ddd; padding: 8px;">
          <audio controls style="width: 100%;">
            <source src="https://huggingface.co/saheedniyi/YarnGPT/resolve/main/audio/Sample_4.wav" type="audio/wav">
            Your browser does not support the audio element.
          </audio>
        </td>
        <td style="border: 1px solid #ddd; padding: 8px;">(temperature=0.1, repetition_penalty=1.1), voice: emma</td>
      </tr>
      <tr>
        <td style="border: 1px solid #ddd; padding: 8px;">Scientists have discovered a new planet that may be capable of supporting life!</td>
        <td style="border: 1px solid #ddd; padding: 8px;">
          <audio controls style="width: 100%;">
            <source src="https://huggingface.co/saheedniyi/YarnGPT/resolve/main/audio/Sample_5.wav" type="audio/wav">
            Your browser does not support the audio element.
          </audio>
        </td>
        <td style="border: 1px solid #ddd; padding: 8px;">(temperature=0.1, repetition_penalty=1.1)</td>
      </tr>
    </tbody>
  </table>
</div>
+
321
+
322
+ ## Training
323
+
324
+ #### Data
325
+ Trained on a dataset of publicly available Nigerian movies, podcasts ( using the subtitle-audio pairs) and open source Nigerian-related audio data on Huggingface,
326
+
327
+ #### Preprocessing
328
+
329
+ Audio files were preprocessed and resampled to 24Khz and tokenized using [wavtokenizer](https://huggingface.co/novateur/WavTokenizer).
330
 
331
  #### Training Hyperparameters
332
+ - **Number of epochs:** 5
333
+ - **batch_size:** 4
334
+ - **Scheduler:** linear schedule with warmup for 4 epochs, then linear decay to zero for the last epoch
335
+ - **Optimizer:** AdamW (betas=(0.9, 0.95),weight_decay=0.01)
336
+ - **Learning rate:** 1*10^-3
 

#### Hardware

- **GPUs:** 1× A100 (Google Colab, ~50 hours)

#### Software

- **Training framework:** PyTorch

## Future Improvements
- Scale up the model size and the amount of human-annotated/reviewed training data
- Wrap the model in an API endpoint
- Add support for local Nigerian languages
- Voice cloning
- Potential expansion into speech-to-speech assistant models

## Citation

#### BibTeX:

```bibtex
@misc{yarngpt2025,
  author = {Saheed Azeez},
  title = {YarnGPT: Nigerian-Accented English Text-to-Speech Model},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/saheedniyi/YarnGPT}
}
```

#### APA:

Saheed Azeez. (2025). YarnGPT: Nigerian-Accented English Text-to-Speech Model. Hugging Face. Available at: https://huggingface.co/saheedniyi/YarnGPT

## Credits & References
- [OuteAI/OuteTTS-0.2-500M](https://huggingface.co/OuteAI/OuteTTS-0.2-500M/)
- [WavTokenizer](https://github.com/jishengpeng/WavTokenizer)
- [CTC Forced Alignment](https://pytorch.org/audio/stable/tutorials/ctc_forced_alignment_api_tutorial.html)
- [Voicera](https://huggingface.co/Lwasinam/voicera)