
video-p2p-library
AI & ML interests: None defined yet.
video-p2p-library's activity
Post
There seem to be multiple paid apps shared here that are based on models from HF, but some people sell their wrappers as "products" and promote them here. For a long time, HF was the best (and only) platform for open-source model work, but with the recent AI website builders anyone can create a product (usually a really crappy one) and try to sell it without contributing anything back to open source. Please don't do this, or at least try fine-tuning the models you use...
Sorry for filling everyone's feed with this rant, but you know...
Post
Gemma 3 seems to be really good at human preference evaluations. Just waiting for people to notice.

ehristoforu posted an update 2 months ago
Post
Introducing our first standalone model: FluentlyLM Prinum
Introducing the first standalone model from Project Fluently LM! We worked on it for several months, tried different approaches, and eventually found the optimal one.
General characteristics:
- Model type: Causal language model (QwenForCausalLM, LM Transformer)
- Number of parameters: 32.5B
- Number of parameters (non-embedding): 31.0B
- Number of layers: 64
- Context: 131,072 tokens
- Language(s) (NLP): English, French, Spanish, Russian, Chinese, Japanese, Persian (officially supported)
- License: MIT
Creation strategy:
The basis of the strategy is shown in Pic. 2.
We used Axolotl & Unsloth for SFT fine-tuning with PEFT LoRA (rank=64, alpha=64) and Mergekit for SLERP and TIES merges; a minimal configuration sketch follows below.
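As a rough illustration, here is a minimal sketch of what a PEFT LoRA setup with rank=64 and alpha=64 could look like. Only those two hyperparameters come from the post; the base checkpoint name and target modules are assumptions for the example.

```python
# Minimal PEFT LoRA sketch; only rank/alpha are taken from the post above.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Assumed Qwen-family base; the post only says QwenForCausalLM, 32.5B params.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-32B")

lora = LoraConfig(
    r=64,                                 # rank=64, as stated in the post
    lora_alpha=64,                        # alpha=64, as stated in the post
    target_modules=["q_proj", "v_proj"],  # illustrative choice of modules
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora)
model.print_trainable_parameters()  # the adapters are a tiny fraction of 32.5B
```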
Evaluation:
12th place on the Open LLM Leaderboard (open-llm-leaderboard/open_llm_leaderboard), as of 21.02.2025
Detailed results and comparisons are presented in Pic. 3.
Links:
- Model: fluently-lm/FluentlyLM-Prinum (usage sketch below)
- GGUF version: mradermacher/FluentlyLM-Prinum-GGUF
- Demo on ZeroGPU: ehristoforu/FluentlyLM-Prinum-demo
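For reference, a minimal usage sketch for the checkpoint linked above via the standard transformers API; the prompt and generation settings are illustrative, not from the post.

```python
# Load and sample from the released checkpoint (repo id from the links above).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fluently-lm/FluentlyLM-Prinum"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain the difference between SLERP and TIES model merging."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)  # illustrative settings
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```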

ameerazam08 posted an update 3 months ago
Post
R1 is out! And with it, a lot of other R1-related models...

Shaldon authored 6 papers 3 months ago
- Video-P2P: Video Editing with Cross-attention Control (arXiv:2303.04761)
- Direct Inversion: Boosting Diffusion-based Editing with 3 Lines of Code (arXiv:2310.01506)
- RL-GPT: Integrating Reinforcement Learning and Code-as-policy (arXiv:2402.19299)
- Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models (arXiv:2403.18814)
- Multi-modal Cooking Workflow Construction for Food Recipes (arXiv:2008.09151)
- Generative Video Propagation (arXiv:2412.19761)

ehristoforu posted an update 4 months ago
Post
Ultraset - an all-in-one dataset for SFT training in Alpaca format.
fluently-sets/ultraset
Ultraset is a comprehensive dataset for training large language models (LLMs) with supervised fine-tuning (SFT). It consists of over 785 thousand entries in eight languages: English, Russian, French, Italian, Spanish, German, Chinese, and Korean.
Ultraset addresses the problem of choosing an appropriate dataset for LLM training: it combines the various types of data needed to strengthen a model's skills in areas such as text writing and editing, mathematics, coding, biology, medicine, finance, and multilingualism.
For effective use, it is recommended to take only the "instruction", "input", and "output" columns and train the model for 1-3 epochs (a formatting sketch follows below). The dataset does not include DPO or Instruct data, making it suitable for training various types of LLMs.
Ultraset is an excellent tool for improving your language model's skills across diverse knowledge areas.
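As a rough sketch of that recommended usage, here is how the three columns could be rendered into a standard Alpaca-style prompt with the datasets library. The column names come from the post itself; the exact template wording is an assumption.

```python
# Load Ultraset and render the three recommended columns Alpaca-style.
# Column names are from the post; the prompt template is illustrative.
from datasets import load_dataset

ds = load_dataset("fluently-sets/ultraset", split="train")

def to_alpaca(example):
    # Include the "Input" section only when the input column is non-empty.
    if example.get("input"):
        prompt = (f"### Instruction:\n{example['instruction']}\n\n"
                  f"### Input:\n{example['input']}\n\n### Response:\n")
    else:
        prompt = (f"### Instruction:\n{example['instruction']}\n\n"
                  "### Response:\n")
    return {"text": prompt + example["output"]}

ds = ds.map(to_alpaca)
print(ds[0]["text"][:300])
```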
Post
Google drops Gemini 2.0 Flash Thinking
a new experimental model that unlocks stronger reasoning capabilities and shows its thoughts. The model plans (with its thoughts visible), can solve complex problems at Flash speeds, and more.
now available in anychat, try it out: https://huggingface.co/spaces/akhaliq/anychat
Post
QwQ-32B-Preview is now available in anychat
A reasoning model that is competitive with OpenAI o1-mini and o1-preview
try it out: https://huggingface.co/spaces/akhaliq/anychat
Post
New model drop in anychat
allenai/Llama-3.1-Tulu-3-8B is now available
try it here: https://huggingface.co/spaces/akhaliq/anychat