Rapidata

Enterprise company

Activity Feed

AI & ML interests

RLHF, Model Evaluation, Benchmarks, Data Labeling, Human Feedback, Computer Vision, Image Generation, Video Generation, LLMs, Translations

Rapidata's activity

jasoncorkill posted an update 10 days ago
Benchmark Update: @google Veo3 (Text-to-Video)

Two months ago, we benchmarked @google's Veo2 model. It fell short, struggling with style consistency and temporal coherence, trailing behind Runway, Pika, @tencent, and even @alibaba-pai.

That’s changed.

We just wrapped up benchmarking Veo3, and the improvements are substantial. It outperformed every other model by a wide margin across all key metrics: not just better, but dominant in style, coherence, and prompt adherence alike. It's rare to see such a clear lead in today's hyper-competitive T2V landscape.

Dataset coming soon. Stay tuned.
jasoncorkill posted an update 22 days ago
🔥 Hidream I1 is online! 🔥

We just added Hidream I1 to our T2I leaderboard (https://www.rapidata.ai/leaderboard/image-models) benchmarked using 195k+ human responses from 38k+ annotators, all collected in under 24 hours.

It landed #3 overall, right behind:
- @openai 4o
- @black-forest-labs Flux 1 Pro
...and just ahead of @black-forest-labs Flux 1.1 Pro, @xai-org Aurora and @google Imagen3.

Want to dig into the data? Check out our dataset here:
Rapidata/Hidream_t2i_human_preference
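For anyone curious how pairwise human responses can be turned into a leaderboard ordering, here is a minimal win-rate sketch. The model names and votes below are made up for illustration; this is not our actual data or exact ranking methodology:

```python
from collections import defaultdict

def win_rates(votes):
    """Compute each model's win rate from pairwise preference votes.

    `votes` is a list of (winner, loser) pairs -- a simplified stand-in
    for the kind of pairwise human responses behind a T2I leaderboard.
    """
    wins = defaultdict(int)
    total = defaultdict(int)
    for winner, loser in votes:
        wins[winner] += 1
        total[winner] += 1
        total[loser] += 1
    return {m: wins[m] / total[m] for m in total}

# Hypothetical votes: each tuple is (preferred model, other model).
votes = [
    ("hidream-i1", "flux-1.1-pro"),
    ("gpt-4o", "hidream-i1"),
    ("hidream-i1", "aurora"),
]
rates = win_rates(votes)
ranking = sorted(rates, key=rates.get, reverse=True)
```

Real leaderboards typically use more robust aggregation (e.g. Elo- or Bradley-Terry-style models), but the input shape is the same: many pairwise judgments from many annotators.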

What model should we benchmark next?
jasoncorkill posted an update about 1 month ago
🚀 Building Better Evaluations: 32K Image Annotations Now Available

Today, we're releasing an expanded version: 32K images annotated with 3.7M responses from over 300K individuals, completed in under two weeks using the Rapidata Python API.

Rapidata/text-2-image-Rich-Human-Feedback-32k

A few months ago, we published one of our most-liked datasets: 13K images based on the @data-is-better-together dataset, following Google's research on "Rich Human Feedback for Text-to-Image Generation" (https://arxiv.org/abs/2312.10240). That release collected over 1.5M responses from 150K+ participants.

Rapidata/text-2-image-Rich-Human-Feedback

In the examples below, users highlighted words from prompts that were not correctly depicted in the generated images. Higher word scores indicate more frequent issues. If an image captured the prompt accurately, users could select [No_mistakes].
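The word-score idea can be sketched in a few lines: given each annotator's flagged words (or [No_mistakes] when nothing was wrong), a word's score is the fraction of annotators who flagged it. The record layout below is illustrative only, not the dataset's exact schema:

```python
def word_scores(responses, prompt_words):
    """Fraction of annotators who flagged each prompt word as not depicted.

    `responses` is one list per annotator: either the words they flagged,
    or ["[No_mistakes]"] when the image matched the prompt.
    (Illustrative field layout, not the dataset's exact schema.)
    """
    counts = {w: 0 for w in prompt_words}
    for selection in responses:
        if selection == ["[No_mistakes]"]:
            continue  # this annotator found no mistakes
        for w in selection:
            if w in counts:
                counts[w] += 1
    n = len(responses)
    return {w: counts[w] / n for w in counts}

# Three hypothetical annotators for the prompt "a red cat on a skateboard".
responses = [["red"], ["red", "skateboard"], ["[No_mistakes]"]]
scores = word_scores(responses, ["a", "red", "cat", "on", "skateboard"])
```

Here "red" would score 2/3 (flagged by two of three annotators), so a higher score flags a more frequent depiction issue.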

We're continuing to work on large-scale human feedback and model evaluation. If you're working on related research and need large, high-quality annotations, feel free to get in touch: [email protected].
jasoncorkill posted an update about 2 months ago
🚀 We tried something new!

We just published a dataset using a new (for us) preference modality: direct ranking based on aesthetic preference. We ranked a couple of thousand images from most to least preferred, all sampled from the Open Image Preferences v1 dataset by the amazing @data-is-better-together team.

📊 Check it out here:
Rapidata/2k-ranked-images-open-image-preferences-v1
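If you want to experiment with ranked data like this, here is a minimal sketch of one simple way to merge several annotators' rankings into a single ordering (a mean-position, Borda-style aggregation; illustrative only, not our production pipeline):

```python
def mean_rank(rankings):
    """Merge several annotators' rankings into one ordering by mean position.

    Each ranking lists image ids from most to least preferred; images with
    a lower average position come out earlier in the merged ordering.
    (A simple Borda-style aggregation for illustration.)
    """
    positions = {}
    for ranking in rankings:
        for pos, img in enumerate(ranking):
            positions.setdefault(img, []).append(pos)
    avg = {img: sum(p) / len(p) for img, p in positions.items()}
    return sorted(avg, key=avg.get)

# Three hypothetical annotators ranking the same three images.
rankings = [
    ["img_a", "img_b", "img_c"],
    ["img_b", "img_a", "img_c"],
    ["img_a", "img_c", "img_b"],
]
aggregate = mean_rank(rankings)
```

Direct ranking gives you a full ordering per annotator rather than isolated pairwise wins, which is what makes this modality interesting to aggregate.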

We're really curious to hear your thoughts!
Is this kind of ranking interesting or useful to you? Let us know! 💬

If it is, please consider leaving a ❤️ and if we hit 30 ❤️s, we’ll go ahead and rank the full 17k image dataset!