---
license: apache-2.0
base_model: Mitchins/t5-base-artgen-multi-instruct
tags:
- text2text-generation
- prompt-enhancement
- ai-art
- onnx
- t5
- art-generation
- stable-diffusion
language:
- en
library_name: optimum
pipeline_tag: text2text-generation
model-index:
- name: t5-base-artgen-multi-instruct-ONNX
results: []
datasets:
- art-prompts
widget:
- text: "Enhance this prompt: robot in space"
example_title: "Standard Enhancement"
- text: "Enhance this prompt (no lora): beautiful landscape"
example_title: "Clean Enhancement"
- text: "Enhance this prompt (with lora): anime girl"
example_title: "Technical Enhancement"
- text: "Simplify this prompt: A stunning, highly detailed masterpiece"
example_title: "Simplification"
---
# T5 Base Art Generation Multi-Instruct ONNX
ONNX version of [Mitchins/t5-base-artgen-multi-instruct](https://huggingface.co/Mitchins/t5-base-artgen-multi-instruct) for optimized CPU inference.
## Model Details
- **Base Model**: T5-base (Google)
- **Training Samples**: 297,282
- **Parameters**: 222M
- **Format**: ONNX (FP32)
- **Optimization**: CPU inference optimized
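
An FP32 ONNX export like this one can be reproduced from the base checkpoint with Optimum. A minimal sketch (the output directory name is illustrative, and this is not necessarily how the published files were generated):

```python
from optimum.onnxruntime import ORTModelForSeq2SeqLM

# Convert the PyTorch checkpoint to ONNX on the fly and save it locally.
model = ORTModelForSeq2SeqLM.from_pretrained(
    "Mitchins/t5-base-artgen-multi-instruct", export=True
)
model.save_pretrained("./t5-base-artgen-multi-instruct-onnx")
```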
## Quad-Instruction Capabilities
1. **Standard Enhancement**: `Enhance this prompt: {text}`
2. **Clean Enhancement**: `Enhance this prompt (no lora): {text}`
3. **Technical Enhancement**: `Enhance this prompt (with lora): {text}`
4. **Simplification**: `Simplify this prompt: {text}`
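
A small helper makes it easy to wrap raw text in any of these templates (the helper and mode names are illustrative, not part of the model's API):

```python
# Instruction templates; {text} is replaced with the raw prompt.
TEMPLATES = {
    "standard": "Enhance this prompt: {text}",
    "clean": "Enhance this prompt (no lora): {text}",
    "technical": "Enhance this prompt (with lora): {text}",
    "simplify": "Simplify this prompt: {text}",
}

def build_prompt(mode: str, text: str) -> str:
    """Wrap raw text in the instruction template for the chosen mode."""
    return TEMPLATES[mode].format(text=text)

print(build_prompt("clean", "robot in space"))
# -> Enhance this prompt (no lora): robot in space
```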
## Usage
```python
from optimum.onnxruntime import ORTModelForSeq2SeqLM
from transformers import T5Tokenizer

# Load the ONNX model and tokenizer
model = ORTModelForSeq2SeqLM.from_pretrained("Mitchins/t5-base-artgen-multi-instruct-ONNX")
tokenizer = T5Tokenizer.from_pretrained("Mitchins/t5-base-artgen-multi-instruct-ONNX")

# Wrap the raw prompt in one of the instruction templates
text = "woman in red dress"
prompt = f"Enhance this prompt (no lora): {text}"

# Generate and decode the enhanced prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=80)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
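
As a quick illustration of the fourth instruction, the enhanced result can be fed back through the simplification template (this continues the snippet above; `max_length=40` is an arbitrary choice):

```python
# Round-trip: simplify the enhanced prompt produced above.
simplify_inputs = tokenizer(f"Simplify this prompt: {result}", return_tensors="pt")
simplified = model.generate(**simplify_inputs, max_length=40)
print(tokenizer.decode(simplified[0], skip_special_tokens=True))
```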
## Performance
The ONNX export is optimized for CPU inference and generates significantly faster than the equivalent PyTorch checkpoint running on CPU.
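
The speedup can be checked locally with a simple wall-clock comparison. A minimal timing sketch (iteration count and prompt are arbitrary; results depend on hardware):

```python
import time

from optimum.onnxruntime import ORTModelForSeq2SeqLM
from transformers import T5ForConditionalGeneration, T5Tokenizer

repo_onnx = "Mitchins/t5-base-artgen-multi-instruct-ONNX"
repo_torch = "Mitchins/t5-base-artgen-multi-instruct"

tokenizer = T5Tokenizer.from_pretrained(repo_onnx)
inputs = tokenizer("Enhance this prompt: robot in space", return_tensors="pt")

def time_generate(model, runs=5):
    # Warm-up run, then average wall-clock time over `runs` generations.
    model.generate(**inputs, max_length=80)
    start = time.perf_counter()
    for _ in range(runs):
        model.generate(**inputs, max_length=80)
    return (time.perf_counter() - start) / runs

onnx_model = ORTModelForSeq2SeqLM.from_pretrained(repo_onnx)
torch_model = T5ForConditionalGeneration.from_pretrained(repo_torch)
print(f"ONNX:    {time_generate(onnx_model):.2f} s/generation")
print(f"PyTorch: {time_generate(torch_model):.2f} s/generation")
```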