Update README.md
README.md
CHANGED
tags:
- trl
- sft
---

# Konjac-0.6B-exp Model Description

## Overview

Konjac-0.6B-exp is an experimental creative-writing model designed for uncensored roleplay and narrative generation. It produces short stories with a high degree of creative freedom and fluidity, and is tuned to generate engaging, imaginative content across genres, featuring diverse characters and scenarios. The name "Konjac" reflects the goal of being small yet effective for creative applications.

This model is not designed for reasoning or structured logic, and it does not perform traditional inference steps. Instead, it generates output purely from patterns in its training data, focusing on creativity and narrative development.

**Note**: The model's uncensored output can be inconsistent depending on the prompt, as it is still being refined to handle such cases reliably. Expect updates in future iterations.

## Intended Use

- **Creative Writing**: Ideal for generating short-form stories, dialogue, and roleplay scenarios.
- **Roleplay**: Designed for interactive fiction and creative text-based roleplay experiences.
- **Uncensored Content**: Allows the generation of uncensored content, though results may vary depending on the prompt used.

## Key Features

- **Size**: 0.6 billion parameters, balancing capability and footprint, making it suitable for devices such as phones.
- **Uncensored**: Allows freedom in output generation, though this can be inconsistent at times.
- **Roleplay Focused**: Built to generate creative, dynamic storytelling for roleplay and creative writing.
- **Short Stories**: Primarily focused on short stories that are coherent, engaging, and sometimes experimental.
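
The "suitable for phones" claim can be sanity-checked with simple arithmetic: weight memory is roughly parameter count times bytes per parameter. This is an illustrative estimate only (it ignores activations and KV cache), not an official benchmark:

```python
def weight_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Approximate memory needed just for the model weights, in GiB."""
    return num_params * bytes_per_param / 1024**3

params = 0.6e9  # 0.6 billion parameters

# fp16/bf16 weights use 2 bytes per parameter
fp16_gb = weight_memory_gb(params, 2)
# 4-bit quantized weights use roughly 0.5 bytes per parameter
q4_gb = weight_memory_gb(params, 0.5)

print(f"fp16 weights: ~{fp16_gb:.2f} GiB")   # ~1.12 GiB
print(f"4-bit weights: ~{q4_gb:.2f} GiB")    # ~0.28 GiB
```

At roughly 1 GiB in half precision, the weights fit comfortably in the memory budget of a modern phone.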

## Model Limitations

- **No Reasoning Capabilities**: The model was fine-tuned to avoid reasoning, which limits its ability to draw logical conclusions or produce long, structured outputs. This may change in future versions.
- **Uncensored Output**: The model's ability to generate uncensored text is currently imperfect; certain prompts may not result in uncensored outputs.
- **Limited Contextual Understanding**: Because the model was trained on responses only (without user or system prompts), its behavior can vary depending on the provided input.
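
To illustrate what "trained on responses only" typically means in practice, the sketch below shows the common completion-only loss-masking technique, where prompt token positions are assigned the ignore index -100 so they contribute nothing to the training loss. This is a hypothetical illustration of the general technique, not the authors' actual training code, and the token ids are made up:

```python
def mask_prompt_tokens(input_ids, prompt_len):
    """Labels for completion-only training: prompt positions get the
    ignore index -100, response positions keep their token ids."""
    return [-100] * prompt_len + input_ids[prompt_len:]

# Hypothetical sequence: 4 prompt tokens followed by 4 response tokens
token_ids = [101, 7592, 2088, 102, 2023, 2003, 1037, 3160]
labels = mask_prompt_tokens(token_ids, prompt_len=4)
print(labels)  # [-100, -100, -100, -100, 2023, 2003, 1037, 3160]
```

With this masking, the model only learns to produce the response, which is why its behavior can shift noticeably with different prompt formats.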

## Recommendations for Usage

Here is an example of how to use this model with the `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer
import threading

model_name = "marcuscedricridia/Konjac-0.6B"

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# Prepare input
prompt = """
Please write a story using the following writing prompt: Demons have to do at least one evil thing every day to survive. This one comes to your bakery every day to buy bread for the homeless kids and steal exactly one cookie.

The title of this story should be: The Baker's Demon

It should feature the following genres: Fantasy, Drama
"""
messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Stream tokens as they are generated; skip_prompt avoids echoing the input
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Generation parameters
generation_kwargs = dict(
    **inputs,
    streamer=streamer,
    max_new_tokens=8000,
    temperature=0.8,         # controls randomness (higher = more random)
    top_k=50,                # limits sampling to the 50 most likely tokens
    top_p=0.95,              # nucleus sampling: keep tokens within 0.95 cumulative probability
    repetition_penalty=1.1,  # penalizes repeated tokens
    do_sample=True           # required for the sampling parameters to take effect
)

# Run generation in a background thread so the main thread can stream output
thread = threading.Thread(target=model.generate, kwargs=generation_kwargs)
thread.start()

# Read streamed output
print("Streaming output:")
for token in streamer:
    print(token, end="", flush=True)
thread.join()
```
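
For readers tuning the sampler settings above, here is a minimal sketch of how nucleus (top-p) sampling restricts the candidate pool: tokens are ranked by probability and the smallest prefix whose cumulative probability reaches `top_p` is kept. This is a simplified illustration; real implementations operate on batched logit tensors:

```python
def top_p_filter(probs, top_p):
    """Return the indices of the smallest set of highest-probability
    tokens whose cumulative probability reaches top_p."""
    ranked = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for idx, p in ranked:
        kept.append(idx)
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

probs = [0.5, 0.3, 0.15, 0.05]
print(top_p_filter(probs, 0.95))  # [0, 1, 2] — the 0.05 tail is dropped
```

Lowering `top_p` prunes more of the low-probability tail, trading creative variety for safer, more predictable text.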

## Future Developments

- **Model Enhancements**: Future versions will aim to fix the inconsistent uncensored output and may reintroduce reasoning capabilities.
- **Larger Outputs**: We plan to refine the model to generate longer, more complex narratives, in the style of well-known models such as GLM, Gemma, O3, and O4, with improved formatting and creative titles.
- **Exploration of Parameters**: Future training will focus on increasing creative and thematic variety while maintaining short-form coherence.

## Known Issues

- **Inconsistent Uncensored Output**: The uncensored functionality is still being refined; the model may sometimes refuse to generate uncensored content depending on the prompt.
- **Size Limitation**: The current version will likely remain the smallest in the Konjac family, with future models focusing on improved variations, iterations, and fixes.