CallmeKaito committed on
Commit e6a7f92 · verified · 1 Parent(s): bdcf7fa

Update README.md

Files changed (1)
  1. README.md +12 -12
README.md CHANGED
@@ -22,18 +22,6 @@ datasets:
  - **Task**: Text generation with a focus on "brainrot" content (humorous, absurd, or nonsensical text).
  - **Fine-Tuning Dataset Size**: 32 rows (small dataset for experimental purposes).
 
- ### Intended Use
- This model is intended for experimental and entertainment purposes. It is fine-tuned on a small dataset of "brainrot" content. Use cases include:
- - Generating funny or absurd text for entertainment.
- - Exploring the effects of fine-tuning on small, niche datasets.
- - Testing the limits of language models with minimal data.
-
- ### Limitations
- - **Overfitting**: Due to the extremely small dataset (32 rows), the model may have overfitted to the training data, leading to poor generalization on unseen data.
- - **Validation Loss**: The validation loss increased during training, indicating potential overfitting or lack of generalization.
- - **Niche Use Case**: The model is specialized for "brainrot" content and may not perform well on general text generation tasks.
- - **Ethical Considerations**: The model may generate nonsensical or inappropriate content. Use with caution and ensure outputs are reviewed before sharing.
-
  ## Quick start
 
  ```python
@@ -76,6 +64,18 @@ response = full_response.split("assistant\n")[-1].strip()
  print(response)
  ```
 
+ ### Intended Use
+ This model is intended for experimental and entertainment purposes. It is fine-tuned on a small dataset of "brainrot" content. Use cases include:
+ - Generating funny or absurd text for entertainment.
+ - Exploring the effects of fine-tuning on small, niche datasets.
+ - Testing the limits of language models with minimal data.
+
+ ### Limitations
+ - **Overfitting**: Due to the extremely small dataset (32 rows), the model may have overfitted to the training data, leading to poor generalization on unseen data.
+ - **Validation Loss**: The validation loss increased during training, indicating potential overfitting or lack of generalization.
+ - **Niche Use Case**: The model is specialized for "brainrot" content and may not perform well on general text generation tasks.
+ - **Ethical Considerations**: The model may generate nonsensical or inappropriate content. Use with caution and ensure outputs are reviewed before sharing.
+
  ## Training procedure
 
  This model was trained with SFT.
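The README's Quick start block is elided in the diff; only its last two lines appear as hunk context. For orientation, here is a minimal sketch of what such a quick start typically looks like with transformers, assuming a chat-style checkpoint; the repo id, prompt, and generation settings below are placeholders, not values taken from the card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CallmeKaito/brainrot-sft"  # placeholder repo id, not the actual model name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a chat-formatted prompt and generate a short completion.
messages = [{"role": "user", "content": "Tell me something absurd."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output_ids = model.generate(input_ids, max_new_tokens=128)

# Decode the full transcript, then keep only the assistant turn,
# mirroring the split("assistant\n") line visible in the hunk context.
full_response = tokenizer.decode(output_ids[0], skip_special_tokens=True)
response = full_response.split("assistant\n")[-1].strip()
print(response)
```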
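The trailing context notes the model "was trained with SFT." A minimal sketch of such a run with trl's SFTTrainer, assuming the trl library was used; the base model, dataset file, and hyperparameters below are placeholders:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical 32-row "brainrot" dataset in JSONL chat format.
dataset = load_dataset("json", data_files="brainrot.jsonl", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",  # placeholder base checkpoint
    train_dataset=dataset,
    args=SFTConfig(output_dir="brainrot-sft", num_train_epochs=3),
)
trainer.train()

# With only 32 examples, validation loss deserves close attention; the card
# itself reports it increasing during training, a sign of overfitting.
```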