Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


danbooruTagAutocomplete - GGUF
- Model creator: https://huggingface.co/0Tick/
- Original model: https://huggingface.co/0Tick/danbooruTagAutocomplete/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [danbooruTagAutocomplete.Q2_K.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q2_K.gguf) | Q2_K | 0.06GB |
| [danbooruTagAutocomplete.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.IQ3_XS.gguf) | IQ3_XS | 0.07GB |
| [danbooruTagAutocomplete.IQ3_S.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.IQ3_S.gguf) | IQ3_S | 0.07GB |
| [danbooruTagAutocomplete.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q3_K_S.gguf) | Q3_K_S | 0.07GB |
| [danbooruTagAutocomplete.IQ3_M.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.IQ3_M.gguf) | IQ3_M | 0.07GB |
| [danbooruTagAutocomplete.Q3_K.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q3_K.gguf) | Q3_K | 0.07GB |
| [danbooruTagAutocomplete.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q3_K_M.gguf) | Q3_K_M | 0.07GB |
| [danbooruTagAutocomplete.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q3_K_L.gguf) | Q3_K_L | 0.07GB |
| [danbooruTagAutocomplete.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.IQ4_XS.gguf) | IQ4_XS | 0.07GB |
| [danbooruTagAutocomplete.Q4_0.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q4_0.gguf) | Q4_0 | 0.08GB |
| [danbooruTagAutocomplete.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.IQ4_NL.gguf) | IQ4_NL | 0.08GB |
| [danbooruTagAutocomplete.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q4_K_S.gguf) | Q4_K_S | 0.08GB |
| [danbooruTagAutocomplete.Q4_K.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q4_K.gguf) | Q4_K | 0.08GB |
| [danbooruTagAutocomplete.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q4_K_M.gguf) | Q4_K_M | 0.08GB |
| [danbooruTagAutocomplete.Q4_1.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q4_1.gguf) | Q4_1 | 0.08GB |
| [danbooruTagAutocomplete.Q5_0.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q5_0.gguf) | Q5_0 | 0.09GB |
| [danbooruTagAutocomplete.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q5_K_S.gguf) | Q5_K_S | 0.09GB |
| [danbooruTagAutocomplete.Q5_K.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q5_K.gguf) | Q5_K | 0.09GB |
| [danbooruTagAutocomplete.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q5_K_M.gguf) | Q5_K_M | 0.09GB |
| [danbooruTagAutocomplete.Q5_1.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q5_1.gguf) | Q5_1 | 0.09GB |
| [danbooruTagAutocomplete.Q6_K.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q6_K.gguf) | Q6_K | 0.1GB |
| [danbooruTagAutocomplete.Q8_0.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q8_0.gguf) | Q8_0 | 0.12GB |
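
To try one of these quantizations locally, a minimal sketch using `huggingface_hub` and `llama-cpp-python` is shown below. The Q4_K_M file and the tag prompt are arbitrary example choices, not a recommendation from this card:

```python
# Sketch: download one GGUF quant and autocomplete a danbooru-style tag prompt.
# Assumes `pip install huggingface-hub llama-cpp-python`; the chosen file is arbitrary.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf",
    filename="danbooruTagAutocomplete.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=512)
out = llm("1girl, blue_hair, ", max_tokens=32, temperature=1.0)  # example prompt
print(out["choices"][0]["text"])
```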

Original model description:
---
language:
- en
license: mit
library_name: transformers
tags:
- generated_from_trainer
datasets:
- 0Tick/Danbooru-Random-Posts-Scrape
metrics:
- accuracy
co2_eq_emissions: 100
pipeline_tag: text-generation
base_model: distilgpt2
model-index:
- name: danbooruTagAutocomplete
  results: []
---

## Model description

This is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) intended to be used with the [promptgen](https://github.com/AUTOMATIC1111/stable-diffusion-webui-promptgen) extension inside the AUTOMATIC1111 WebUI.
It is trained on raw Danbooru tags, with underscores and spaces preserved. Only posts with a rating higher than "General" were included in the dataset.
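
Outside the WebUI, the original checkpoint can also be loaded directly with `transformers`. A minimal sketch follows; the prompt is an invented example:

```python
# Sketch: tag autocompletion with the original (unquantized) checkpoint.
# Assumes `pip install transformers torch`; the prompt is an arbitrary example.
from transformers import pipeline

generator = pipeline("text-generation", model="0Tick/danbooruTagAutocomplete")
completions = generator(
    "1girl, solo, long_hair, ",
    max_new_tokens=32,
    do_sample=True,
    num_return_sequences=3,
)
for c in completions:
    print(c["generated_text"])
```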

## Training

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on a dataset of the tags of 118k random posts from [Danbooru](https://danbooru.donmai.us).
It achieves the following results on the evaluation set:
- Loss: 3.6934
- Accuracy: 0.4650


## Training and evaluation data

Use this Colab notebook to train your own model; it is the same notebook that was used to train this one:
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/0Tick/stable-diffusion-tools/blob/main/distilgpt2train.ipynb)

### Training hyperparameters

The following hyperparameters were used during training (an illustrative `TrainingArguments` mapping is sketched after the list):
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
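
The card does not include the training script itself (it lives in the Colab notebook linked above); the following is only a hedged sketch of how the listed hyperparameters would map onto `transformers.TrainingArguments`:

```python
# Sketch only: mapping the listed hyperparameters onto TrainingArguments.
# The output_dir is a hypothetical placeholder; the actual script is in the Colab notebook.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="danbooruTagAutocomplete",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```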

## Intended uses & limitations

Since DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases, with the added benefit of being smaller and easier to run than the base model.

The developers of GPT-2 state in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including:

> - *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)*
> - *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.*
> - *Entertainment: Creation of games, chat bots, and amusing generations.*

Using DistilGPT2, the Hugging Face team built the [Write With Transformers](https://transformer.huggingface.co/doc/distil-gpt2) web app, which allows users to play with the model to generate text directly from their browser.

#### Out-of-scope Uses

OpenAI states in the GPT-2 [model card](https://github.com/openai/gpt-2/blob/master/model_card.md):

> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.

### Framework versions

- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2