---
inference: false
language:
- "en"
thumbnail: "https://drive.google.com/uc?export=view&id=1_n2kT6lBBs8C3rf8xfNURr_N2Ccx-A1S"
tags:
- text-to-image
- dalle-mini
license: "apache-2.0"
datasets:
- "succinctly/medium-titles-and-images"
---
This is the [dalle-mini/dalle-mini](https://huggingface.co/dalle-mini/dalle-mini) text-to-image model fine-tuned on 120k
title/image pairs from the [Medium](https://medium.com) blogging platform. The full dataset is available on Kaggle: [Medium Articles Dataset (128k): Metadata + Images](https://www.kaggle.com/datasets/succinctlyai/medium-data).
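
The companion Hugging Face dataset listed in this card's metadata can be inspected with the `datasets` library. This is a minimal sketch: the split and column layout are not hard-coded here, since they may differ from what the published dataset exposes.

```python
from datasets import load_dataset

# Companion dataset referenced in this card's metadata (Medium title/image pairs).
ds = load_dataset("succinctly/medium-titles-and-images")

# Print the available splits and their columns rather than assuming a schema.
print(ds)
```
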
The goal of this model is to probe the ability of text-to-image models to operate on abstract text prompts (as Medium titles typically are), as opposed to concrete descriptions of the envisioned visual scene.
[More context here](https://medium.com/@turc.raluca/fine-tuning-dall-e-mini-craiyon-to-generate-blogpost-images-32903cc7aa52).
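
Since the hosted inference widget is disabled for this repository (`inference: false`), generation has to be run locally with the dalle-mini codebase. Below is a minimal, single-device sketch adapted from the usual dalle-mini inference flow; it assumes the `dalle-mini` and `vqgan-jax` packages are installed, and the repository ids (`DALLE_REPO`, `VQGAN_REPO`) are assumptions that may need adjusting. The official dalle-mini inference notebook (with sharding, super conditioning, and CLIP ranking) remains the reference.

```python
# pip install dalle-mini vqgan-jax  (plus a jax/flax build for your accelerator)
import jax
import jax.numpy as jnp
import numpy as np
from PIL import Image

from dalle_mini import DalleBart, DalleBartProcessor
from vqgan_jax.modeling_flax_vqgan import VQModel

# Assumed identifiers: this model card's repository and the VQGAN decoder
# used by dalle-mini. Adjust them if they differ.
DALLE_REPO = "succinctly/text2image"
VQGAN_REPO = "dalle-mini/vqgan_imagenet_f16_16384"

# _do_init=False returns (module, params) without materializing weights twice.
model, params = DalleBart.from_pretrained(DALLE_REPO, dtype=jnp.float16, _do_init=False)
vqgan, vqgan_params = VQModel.from_pretrained(VQGAN_REPO, _do_init=False)
processor = DalleBartProcessor.from_pretrained(DALLE_REPO)

# An abstract, Medium-style title rather than a literal scene description.
prompts = ["Why Remote Work Is the Future of Software Teams"]
tokenized = processor(prompts)

# Sample image tokens, then drop the BOS token before decoding.
key = jax.random.PRNGKey(0)
encoded = model.generate(**tokenized, prng_key=key, params=params)
image_tokens = encoded.sequences[..., 1:]

# Decode the tokens to pixels with the VQGAN and save the first sample.
decoded = vqgan.decode_code(image_tokens, params=vqgan_params)
decoded = np.asarray(decoded.clip(0.0, 1.0) * 255, dtype=np.uint8)
Image.fromarray(decoded[0]).save("generated.png")
```
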