Update README.md
README.md CHANGED

@@ -1,5 +1,10 @@
 ---
 license: apache-2.0
+widget:
+- text: "<baked beans =T2R="
+  example_title: "Generate Rhyme Words"
+- text: "<playing baseball: say \\ play \\ home \\ roam \\ day =R2L="
+  example_title: "Generate First Line"
 ---
 This model is currently being fine-tuned with deepspeed+bf16 weights using the dataset from Robert A. Gonsalves' article "I Once Trained an AI to Rhyme, and It Took GPT-J a Long Time. Since the Colab was slow, I upgraded to Pro. Each limerick cost me a dime."<br>
 https://towardsdatascience.com/i-once-trained-an-ai-to-rhyme-and-it-took-gpt-j-a-long-time-de1f98925e17
@@ -54,4 +59,4 @@ I have no idea what that means, but it's basically a limerick.
 
 Possible improvements to implement:
 * Use IPA (or, as R. Gonsalves suggests, use eSpeak) instead of Festival phonetic tokens to incorporate syllable stress.
-* Better align the task formatting with the model's tokenization system.
+* Better align the task formatting with the model's tokenization system.
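As a rough sketch of how the two new widget prompts could be exercised outside the hosted widget, here is a `transformers` text-generation call. The repo id `your-username/gptj-limerick` is a placeholder for this model's actual id, and the generation settings are illustrative guesses, not values from the article.

```python
# Minimal sketch: querying the fine-tuned GPT-J with the two widget prompt
# formats from the README metadata. "your-username/gptj-limerick" is a
# placeholder repo id; swap in the real checkpoint before running.
from transformers import pipeline

generator = pipeline("text-generation", model="your-username/gptj-limerick")

# Topic -> rhyme words ("=T2R=" task, as in the first widget example).
rhymes = generator("<baked beans =T2R=", max_new_tokens=20, do_sample=True)
print(rhymes[0]["generated_text"])

# Topic + rhyme words -> first line ("=R2L=" task, second widget example).
first_line = generator(
    "<playing baseball: say \\ play \\ home \\ roam \\ day =R2L=",
    max_new_tokens=40,
    do_sample=True,
)
print(first_line[0]["generated_text"])
```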
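The first improvement bullet could be prototyped with the third-party `phonemizer` package, which wraps eSpeak and can keep stress marks in its IPA output. The sketch below assumes `pip install phonemizer` plus the espeak-ng system library; it is an assumed toolchain, not the one the repo currently uses.

```python
# Minimal sketch of stress-aware phonetic tokens via eSpeak, as the first
# improvement bullet suggests. Assumes `phonemizer` and espeak-ng are
# installed; the current pipeline uses Festival phonetic tokens instead.
from phonemizer import phonemize

line = "There once was a man from Nantucket"

# with_stress=True keeps primary (ˈ) and secondary (ˌ) stress marks,
# which the Festival tokens do not incorporate.
ipa = phonemize(line, language="en-us", backend="espeak", with_stress=True)
print(ipa)
```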