Timar's picture

Timar

Timar
·

AI & ML interests

GANs

Recent Activity

updated a model 8 months ago
Timar/test
reacted to merve's post with ❤️ over 1 year ago
View all activity

Organizations

lora concepts library's profile picture

Timar's activity

reacted to hexgrad's post with 🔥 3 days ago
To Meta AI Research: I would like to fold ylacombe/expresso into the training mix of an Apache TTS model series. Can you relax the Expresso dataset license to CC-BY or more permissive?

Barring that, can I have an individual exception to train on the materials and distribute trained Apache models, without direct redistribution of the original files? Thanks!

CC (Expresso paper authors whose handles I could find on HF) @wnhsu @adavirro @bowenshi @itaigat @TalRemez @JadeCopet @hassid @felixkreuk @adiyoss @edupoux
updated a model 8 months ago
reacted to merve's post with โค๏ธ over 1 year ago
Posting about a very underrated model that tops paperswithcode across different segmentation benchmarks: OneFormer 👑

OneFormer is a "truly universal" model for semantic, instance and panoptic segmentation tasks ⚔️
What makes it truly universal is that it's a single model that is trained only once and can be used across all tasks.
The enabler here is text conditioning: the model is given a text query stating the task type along with the appropriate input, and through a contrastive loss it learns to distinguish the different task types 👇 (see the image below)
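The contrastive idea can be sketched in plain Python. This is a toy illustration with made-up 2-D embeddings, not OneFormer's actual training code: each task-text embedding should score highest against its own matching query, and lower against the queries of the other tasks.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors given as lists of floats."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def contrastive_loss(task_embs, query_embs, temperature=0.1):
    """InfoNCE-style loss: task i's text embedding is the positive
    for query i; all other queries act as negatives."""
    loss = 0.0
    for i, t in enumerate(task_embs):
        logits = [cosine(t, q) / temperature for q in query_embs]
        m = max(logits)  # subtract max for numerical stability
        denom = sum(math.exp(l - m) for l in logits)
        loss += -(logits[i] - m - math.log(denom))
    return loss / len(task_embs)

# Three toy "task" embeddings (semantic, instance, panoptic) and
# roughly aligned queries: matched pairs give a low loss.
tasks = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
queries = [[0.9, 0.1], [0.1, 0.9], [1.1, 0.9]]
print(contrastive_loss(tasks, queries))
```

Minimizing this pushes each task embedding toward its own queries and away from the others, which is how the single model learns to tell the task types apart.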

It's also super easy to use with transformers.

```python
from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation

processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_large")
model = OneFormerForUniversalSegmentation.from_pretrained("shi-labs/oneformer_ade20k_swin_large")

# `image` is a PIL image loaded beforehand, e.g. Image.open("photo.jpg")
# swap the postprocessing and task_inputs for different types of segmentation
semantic_inputs = processor(images=image, task_inputs=["semantic"], return_tensors="pt")
semantic_outputs = model(**semantic_inputs)
predicted_semantic_map = processor.post_process_semantic_segmentation(semantic_outputs, target_sizes=[image.size[::-1]])[0]
```
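One detail in the snippet worth unpacking: `target_sizes=[image.size[::-1]]`. Pillow reports `image.size` as `(width, height)`, while the segmentation post-processors expect `(height, width)`, so the tuple is reversed. A minimal stand-in (no model or image needed, the numbers are made up):

```python
# Pillow's image.size is (width, height); post-processing wants (height, width).
image_size = (640, 480)          # stand-in for image.size on a 640x480 image
target_size = image_size[::-1]   # reversed slice flips the tuple
print(target_size)               # (480, 640)
```

For the other tasks, you would pass `task_inputs=["instance"]` or `task_inputs=["panoptic"]` and use the matching `post_process_instance_segmentation` / `post_process_panoptic_segmentation` methods instead.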

I have drafted a notebook for you to try right away ✨ https://colab.research.google.com/drive/1wfJhoTFqUqcTAYAOUc6TXUubBTmOYaVa?usp=sharing
You can also check out the Space without checking out the code itself 👉 https://huggingface.co/spaces/shi-labs/OneFormer
·
New activity in prompthero/openjourney-v4 about 2 years ago

Where to find yaml file for WebUI?

3
#29 opened about 2 years ago by
pocketlim