---
license: apache-2.0
base_model: Mitchins/t5-base-artgen-multi-instruct
tags:
- text2text-generation
- prompt-enhancement
- ai-art
- onnx
- t5
- art-generation
- stable-diffusion
language:
- en
library_name: optimum
pipeline_tag: text2text-generation
model-index:
- name: t5-base-artgen-multi-instruct-ONNX
results: []
datasets:
- art-prompts
widget:
- text: 'Enhance this prompt: robot in space'
example_title: Standard Enhancement
- text: 'Enhance this prompt (no lora): beautiful landscape'
example_title: Clean Enhancement
- text: 'Enhance this prompt (with lora): anime girl'
example_title: Technical Enhancement
- text: 'Simplify this prompt: A stunning, highly detailed masterpiece'
example_title: Simplification
---

# T5 Base Art Generation Multi-Instruct ONNX
ONNX export of [Mitchins/t5-base-artgen-multi-instruct](https://huggingface.co/Mitchins/t5-base-artgen-multi-instruct), packaged for optimized CPU inference.
## Model Details

- **Base Model:** T5-base (Google)
- **Training Samples:** 297,282
- **Parameters:** 222M
- **Format:** ONNX (FP32)
- **Optimization:** CPU inference optimized
## Quad-Instruction Capabilities

- **Standard Enhancement:** `Enhance this prompt: {text}`
- **Clean Enhancement:** `Enhance this prompt (no lora): {text}`
- **Technical Enhancement:** `Enhance this prompt (with lora): {text}`
- **Simplification:** `Simplify this prompt: {text}`
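The four instruction modes above differ only in their prompt prefix, so they can be wrapped in a small helper. This is a minimal sketch; the `TEMPLATES` mapping and `build_prompt` name are illustrative, not part of the model's API — only the template strings themselves come from this card.

```python
# Templates taken verbatim from the capabilities list above.
# The mode names and helper function are hypothetical conveniences.
TEMPLATES = {
    "standard": "Enhance this prompt: {text}",
    "clean": "Enhance this prompt (no lora): {text}",
    "technical": "Enhance this prompt (with lora): {text}",
    "simplify": "Simplify this prompt: {text}",
}

def build_prompt(text: str, mode: str = "standard") -> str:
    """Format text with one of the four instruction prefixes."""
    if mode not in TEMPLATES:
        raise ValueError(f"Unknown mode {mode!r}; expected one of {sorted(TEMPLATES)}")
    return TEMPLATES[mode].format(text=text)

print(build_prompt("robot in space", "clean"))
# Enhance this prompt (no lora): robot in space
```

The resulting string is passed to the tokenizer exactly as shown in the Usage section.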
## Usage

```python
from optimum.onnxruntime import ORTModelForSeq2SeqLM
from transformers import T5Tokenizer

# Load the ONNX model and tokenizer
model = ORTModelForSeq2SeqLM.from_pretrained("Mitchins/t5-base-artgen-multi-instruct-ONNX")
tokenizer = T5Tokenizer.from_pretrained("Mitchins/t5-base-artgen-multi-instruct-ONNX")

# Example: clean enhancement (no LoRA trigger tokens)
text = "woman in red dress"
prompt = f"Enhance this prompt (no lora): {text}"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=80)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## Performance

Exported to ONNX for CPU inference. ONNX Runtime typically runs noticeably faster than PyTorch eager mode on CPU, though the exact speedup depends on hardware, batch size, and sequence length.
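The speedup on your own hardware can be checked with a simple timing harness. The sketch below is illustrative: `benchmark` is a hypothetical helper, and the commented comparison assumes the `model`, `inputs`, and a PyTorch baseline (`torch_model`) have been loaded as in the Usage section.

```python
import time
from statistics import mean

def benchmark(generate_fn, n_runs=5):
    """Call generate_fn n_runs times and return the mean latency in seconds."""
    times = []
    for _ in range(n_runs):
        start = time.perf_counter()
        generate_fn()
        times.append(time.perf_counter() - start)
    return mean(times)

# Hypothetical comparison, assuming the objects from the Usage section exist:
# onnx_latency = benchmark(lambda: model.generate(**inputs, max_length=80))
# torch_latency = benchmark(lambda: torch_model.generate(**inputs, max_length=80))
# print(f"ONNX speedup: {torch_latency / onnx_latency:.2f}x")
```

Warm up with one untimed call first, since the initial run includes session initialization.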