Konjac-0.6B-exp Model Description

Overview

Konjac-0.6B-exp is an experimental creative-writing model designed for uncensored roleplaying and narrative generation. It generates short stories with a high degree of creative freedom and fluidity, and is tuned to produce engaging, imaginative content spanning various genres, with diverse characters and scenarios. The name "Konjac" reflects its goal of being small yet effective for creative applications.

This model is not designed for reasoning or structured logic; it does not perform any explicit inference. Instead, it generates output purely from patterns in its training data, focusing on creativity and narrative development.

Note: The model's uncensored output can be inconsistent depending on the prompt, as its handling of such cases is still being refined. Expect updates in future iterations.

Intended Use

  • Creative Writing: Ideal for generating short-form stories, dialogues, and roleplay scenarios.
  • Roleplay: Designed to facilitate interactive fiction or creative text-based roleplay experiences.
  • Uncensored Content: Generation of uncensored content is supported, though results may vary depending on the prompt used.

Key Features

  • Size: 0.6 billion parameters, small enough to run on resource-constrained devices such as phones while remaining capable.
  • Uncensored: Allows freedom in output generation, though it may be inconsistent at times.
  • Roleplay Focused: Built with a focus on generating creative and dynamic storytelling for roleplay and creative writing.
  • Short Stories: Primarily focused on generating short stories that are coherent, engaging, and sometimes experimental.
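As a rough check on the on-device claim, parameter count times bytes per parameter gives the approximate weight footprint. A minimal back-of-the-envelope sketch (the function name is illustrative, not part of any API; it counts weights only, not activations or the KV cache):

```python
def approx_weight_mb(num_params: int, bytes_per_param: float) -> float:
    """Approximate size of the model weights in MiB (weights only)."""
    return num_params * bytes_per_param / (1024 ** 2)

# 596M parameters (the published Safetensors size) at different precisions
print(f"FP16: {approx_weight_mb(596_000_000, 2):.0f} MiB")    # roughly 1.1 GiB
print(f"INT4: {approx_weight_mb(596_000_000, 0.5):.0f} MiB")  # roughly 284 MiB
```

At FP16 the weights alone fit comfortably in a modern phone's memory, and a 4-bit quantization shrinks that by another factor of four.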

Model Limitations

  • No Reasoning Capabilities: This model was fine-tuned to avoid reasoning, which limits its ability to generate logical conclusions or long, structured outputs. This may change in future versions.
  • Uncensored Output: The model's ability to generate uncensored text is currently imperfect, and certain prompts may not result in uncensored outputs.
  • Limited Contextual Understanding: Since the model was trained on responses only (without user or system prompts), its behavior can vary noticeably with how the input is phrased.
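Because behavior depends heavily on how the input is formatted, it helps to know what the chat template actually produces. A minimal sketch, assuming a Qwen-style ChatML template (an assumption; inspect tokenizer.apply_chat_template for the authoritative format):

```python
def to_chatml(messages: list) -> str:
    """Render messages in ChatML, the format assumed for this tokenizer."""
    out = ""
    for msg in messages:
        out += f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n"
    out += "<|im_start|>assistant\n"  # generation prompt: model continues as assistant
    return out

print(to_chatml([{"role": "user", "content": "Write a short story."}]))
```

Since the model saw no user or system turns during training, the text inside the user turn is what carries all the steering; keep instructions explicit and self-contained there.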

Recommendations for Usage

Here is an example of how to use this model with the transformers library:

from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer
import torch
import threading

model_name = "marcuscedricridia/Konjac-0.6B"

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# Prepare input
prompt = """
Please write a story using the following writing prompt: Demons have to do at least one evil thing every day to survive. This one comes to your bakery every day to buy bread for the homeless kids and steal exactly one cookie.

The title of this story should be: The Baker's Demon

It should feature the following genres: Fantasy, Drama
"""
messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Use streamer
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)  # skip_prompt avoids re-echoing the input

# Generation parameters
generation_kwargs = dict(
    **inputs,
    streamer=streamer,
    max_new_tokens=8000,
    temperature=0.8,         # controls randomness (higher = more random)
    top_k=50,                # limits token sampling to top-k tokens
    top_p=0.95,              # nucleus sampling, considers top tokens with p cumulative prob
    repetition_penalty=1.1,  # penalizes repeated tokens
    do_sample=True           # required for sampling to take effect
)

# Run generation in a thread to allow streaming
thread = threading.Thread(target=model.generate, kwargs=generation_kwargs)
thread.start()

# Read streamed output (the streamer yields decoded text chunks, not single tokens)
print("Streaming output:")
for chunk in streamer:
    print(chunk, end="", flush=True)
print()
thread.join()
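To make the top_p setting above concrete: nucleus sampling keeps only the smallest set of highest-probability tokens whose cumulative probability reaches p, then samples from that set. A minimal pure-Python sketch of the filtering step (illustrative only, not how transformers implements it internally):

```python
def nucleus_filter(probs: dict, p: float) -> list:
    """Return the tokens kept by top-p (nucleus) filtering."""
    items = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for token, prob in items:
        kept.append(token)
        cumulative += prob
        if cumulative >= p:  # stop once cumulative probability reaches p
            break
    return kept

# Toy next-token distribution: the long-tail token is dropped at p=0.90
probs = {"the": 0.50, "a": 0.30, "bread": 0.15, "xylophone": 0.05}
print(nucleus_filter(probs, p=0.90))  # ['the', 'a', 'bread']
```

A lower p tightens the nucleus and makes output more conservative; for creative writing, values around 0.9 to 0.95 keep some long-tail variety without letting very unlikely tokens through.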

Future Developments

  • Model Enhancements: Future versions of the model will aim to fix the issues around inconsistent uncensored output and potentially reintroduce reasoning capabilities.

  • Larger Outputs: We plan to refine the model to generate longer and more complex narratives, similar to the styles of well-known models like GLM, Gemma, O3, and O4, with improved formatting and creative titles.

  • Exploration of Parameters: New training will focus on increasing the creative and thematic variety while maintaining short-form coherence.

Known Issues

  • Inconsistent Uncensored Output: The uncensored functionality is still being refined. Sometimes, the model may refuse to generate uncensored content depending on the prompt.

  • Size Limitation: The current version will likely remain the smallest in the Konjac family, with future releases focusing on refinements, iterations, and fixes rather than smaller sizes.
