# 🎭 DrakIdol-Roleplayer-1.0
More Than an Imitator. An Inhabitor.
## Model Description
What if a language model didn't just act like a character, but thought like one?
DrakIdol-Roleplayer-1.0 is a state-of-the-art, multi-lingual, high-fidelity role-playing engine. The name itself tells its story: the "Idol" represents its uncanny ability to deliver a perfect, star-quality performance, while the "Drak" hints at the deep, powerful, and almost draconic intelligence lurking beneath the surface, capable of understanding culture, history, and the very soul of a character.
Fine-tuned from Google's powerful `gemma-3-4b-it-qat-q4_0-unquantized` model using the specialized `aifeifei798/roleplayer-actor-lora` dataset and methodology, DrakIdol transcends mere mimicry. It doesn't just wear a character's mask; it inhabits their mind.
This model excels at tasks requiring deep cultural context, creative storytelling, and nuanced character embodiment, making it an ideal tool for immersive games, scriptwriting, educational simulations, and exploring conversations with history's greatest minds.
## 🌟 Key Features
- Deep Cultural Resonance: Understands and embodies cultural archetypes, from ancient philosophers to modern-day film directors, across multiple languages.
- Creative Storytelling Engine: Capable of generative creativity, such as composing poetry in the style of Shakespeare or crafting surrealist narratives in the manner of Dalí.
- Multi-lingual Virtuoso: Demonstrates consistent, high-quality performance in over 11 languages, maintaining character fidelity regardless of the language used.
- Robust Safety Alignment: While embodying diverse personalities (including antagonists), the model maintains a strong ethical core, refusing to generate harmful or dangerous content.
- Unsloth-Optimized: Engineered for speed, delivering fast inference performance even on consumer-grade hardware.
## 💡 Showcase: The World Celebrity Gauntlet
The true power of DrakIdol is revealed when challenged with embodying iconic cultural figures. Here are highlights from its performance:
| Language | Celebrity Portrayed | Prompt | Performance Highlights |
|---|---|---|---|
| English | William Shakespeare | Whence does creativity spring? | Generated a complete, rhyming poem on the spot, arguing that creativity is born from suffering and chaos, perfectly capturing his tragic worldview. |
| Spanish | Salvador Dalí | What is the difference between reality and dreams? | Painted a surrealist masterpiece with words, using bizarre, dreamlike imagery (melting cheese, crying cherry trees) to describe the texture of dreams versus reality. |
| German | Friedrich Nietzsche | Is morality just an invention of the weak? | Delivered a powerful, provocative monologue, correctly invoking the concepts of the "Will to Power" and the "Übermensch" to deconstruct traditional morality. |
| Korean | Bong Joon-ho | Why are the worlds of the rich and poor so different? | Crafted a sharp, witty social allegory using the simple metaphor of "bread" (빵) to explain class struggle, perfectly echoing the black humor of his film Parasite. |
| Chinese | Confucius | How does one bring peace and order to a nation? | Responded in the classical style of the Analects, structuring his entire answer around the core Confucian virtues of benevolence (仁), propriety (礼), and self-cultivation (修身). |
## mradermacher GGUF
Thanks to mradermacher for the GGUF quantizations (a brief usage sketch follows the links):
- https://huggingface.co/mradermacher/DrakIdol-Roleplayer-1.0-i1-GGUF
- https://huggingface.co/mradermacher/DrakIdol-Roleplayer-1.0-GGUF
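
These GGUF files can be loaded with any llama.cpp-compatible runtime. The following is a minimal sketch using llama-cpp-python; the quantization filename pattern is an assumption, so check the linked repositories for the files that are actually available.

```python
# Minimal sketch: chatting with a GGUF quantization via llama-cpp-python.
# The filename glob below is an assumption -- pick a real file from the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/DrakIdol-Roleplayer-1.0-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant level; see the repo's file list
    n_ctx=4096,               # context window for the session
)

response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": "You are William Shakespeare. Whence does creativity spring? Answer in verse.",
        },
    ],
    max_tokens=200,
)
print(response["choices"][0]["message"]["content"])
```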
## 🛠️ How to Use
Below are some code snippets to help you get started with running the model quickly. First, install the Transformers library; Gemma 3 is supported starting from transformers 4.50.0.
```bash
$ pip install -U transformers
```
Then, copy the snippet from the section that is relevant for your use case.
### Running with the pipeline API

You can initialize the model and processor for inference with `pipeline` as follows.
```python
from transformers import pipeline
import torch

# Load the model through the high-level pipeline API in bfloat16 on the GPU.
pipe = pipeline(
    "image-text-to-text",
    model="aifeifei798/DrakIdol-Roleplayer-1.0",
    device="cuda",
    torch_dtype=torch.bfloat16
)
```
With instruction-tuned models, you need to use chat templates to process your inputs first. Then, you can pass them to the pipeline.
```python
messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are Albert Einstein. Your thinking is driven by curiosity, thought experiments, and a deep sense of wonder about the universe. Explain things with a mix of scientific intuition and simple analogy."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    }
]

output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
# Okay, let's take a look!
# Based on the image, the animal on the candy is a **turtle**.
# You can see the shell shape and the head and legs.
```
### Running the model on a single/multi GPU
```python
# pip install accelerate
from transformers import AutoProcessor, Gemma3ForConditionalGeneration
import torch

model_id = "aifeifei798/DrakIdol-Roleplayer-1.0"

# device_map="auto" spreads the weights across the available GPU(s);
# bfloat16 keeps the weights consistent with the bfloat16 inputs below.
model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.bfloat16
).eval()

processor = AutoProcessor.from_pretrained(model_id)

messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are Albert Einstein. Your thinking is driven by curiosity, thought experiments, and a deep sense of wonder about the universe. Explain things with a mix of scientific intuition and simple analogy."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
            {"type": "text", "text": "Describe this image in detail."}
        ]
    }
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)

input_len = inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**inputs, max_new_tokens=100, do_sample=False)
    # Keep only the newly generated tokens, dropping the prompt.
    generation = generation[0][input_len:]

decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
# **Overall Impression:** The image is a close-up shot of a vibrant garden scene,
# focusing on a cluster of pink cosmos flowers and a busy bumblebee.
# It has a slightly soft, natural feel, likely captured in daylight.
```
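
Since DrakIdol is first and foremost a role-playing model, you will often want a text-only chat without any image. Below is a minimal sketch reusing the `model` and `processor` loaded above; the Shakespeare persona prompt is just an example taken from the showcase table.

```python
# Text-only role-play: the chat template also accepts messages without an image entry.
messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are William Shakespeare. Speak in Early Modern English and answer in verse."}]
    },
    {
        "role": "user",
        "content": [{"type": "text", "text": "Whence does creativity spring?"}]
    }
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device)

with torch.inference_mode():
    generation = model.generate(**inputs, max_new_tokens=200)

# Strip the prompt tokens and decode only the newly generated reply.
reply = processor.decode(generation[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(reply)
```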
## 📊 Model Architecture & Fine-tuning
- Base Model: `google/gemma-3-4b-it-qat-q4_0-unquantized`
- Fine-tuning Method: LoRA (Low-Rank Adaptation); a hypothetical configuration sketch follows this list.
- Dataset/Methodology: Fine-tuned on the principles and data from `aifeifei798/roleplayer-actor-lora`. Deep gratitude to aifeifei798 for providing the foundation that made this level of performance possible.
- Frameworks: `unsloth`, `transformers`, `peft`, `torch`
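
The exact training recipe is not reproduced here. Purely as an illustration of the LoRA approach named above, the following is a hypothetical sketch with `peft`; the rank, alpha, dropout, and target modules are assumptions, not the values actually used.

```python
# Hypothetical LoRA setup with peft -- illustrative only, not the actual
# configuration used to train DrakIdol-Roleplayer-1.0.
from transformers import Gemma3ForConditionalGeneration
from peft import LoraConfig, get_peft_model

base_id = "google/gemma-3-4b-it-qat-q4_0-unquantized"
model = Gemma3ForConditionalGeneration.from_pretrained(base_id, device_map="auto")

lora_config = LoraConfig(
    r=16,                # assumed LoRA rank
    lora_alpha=32,       # assumed scaling factor
    lora_dropout=0.05,   # assumed dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # typical attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# From here, train on role-play conversations formatted with the Gemma 3 chat
# template, e.g. with trl's SFTTrainer or unsloth's training utilities.
```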
## ⚠️ Ethical Considerations & Limitations
- The Impersonation Caveat: This model is designed for creative and educational purposes. It can generate highly convincing text in the style of specific individuals. It should not be used to create deceptive or misleading content.
- The Hallucination Warning: As a generative model, DrakIdol can create "facts" that are not true to support its role-playing. It is not a reliable source of factual information.
- The Bias Reflection: The model was trained on a vast corpus of internet text and may inherit societal biases. Users should be aware of this and use the model responsibly.
- Not a Substitute for Professional Advice: The model's responses are not a substitute for advice from qualified professionals in any field (e.g., medical, legal, financial).
## 📄 License
This model and its source code are licensed under the Apache 2.0 License. A copy of the license can be found in the repository.
Created with imagination by aifeifei798