All HF Hub posts

aiqtech 
posted an update 1 day ago
✨ High-Resolution Ghibli Style Image Generator ✨
🌟 Introducing FLUX Ghibli LoRA
Hello everyone! Today I'm excited to present a special LoRA model for FLUX Dev.1. Trained on high-resolution Ghibli images, it makes it easy to create beautiful Ghibli-style images with stunning detail! 🎨

space: aiqtech/FLUX-Ghibli-Studio-LoRA
model: openfree/flux-chatgpt-ghibli-lora

🔮 Key Features

Trained on High-Resolution Ghibli Images - Unlike other LoRAs, this one is trained on high-resolution images, delivering sharper and more beautiful results
Powered by FLUX Dev.1 - Utilizing the latest FLUX model for faster generation and superior quality
User-Friendly Interface - An intuitive UI that allows anyone to create Ghibli-style images with ease
Diverse Creative Possibilities - Express various themes in Ghibli style, from futuristic worlds to fantasy elements

🖼️ Sample Images
Sample images are available in the Space.

💡 Prompt Tips

Include "Ghibli style" in your prompts
Try combining nature, fantasy elements, futuristic elements, and warm emotions
Add "[trigger]" tag at the end for better results

🚀 Getting Started

Enter your prompt (e.g., "Ghibli style sky whale transport ship...")
Adjust image size and generation settings
Click the "Generate" button
In just seconds, your beautiful Ghibli-style image will be created!
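
The prompt tips above can be sketched as a small helper (a hypothetical convenience function, not part of the Space's code): lead with the "Ghibli style" keyword, mix in optional elements, and append the "[trigger]" tag.

```python
def build_ghibli_prompt(subject, extras=()):
    """Compose a prompt per the tips above: lead with 'Ghibli style',
    optionally mix in fantasy/futuristic elements, end with '[trigger]'.
    Hypothetical sketch, not the Space's actual code."""
    parts = ["Ghibli style", subject, *extras, "[trigger]"]
    return ", ".join(parts)

prompt = build_ghibli_prompt(
    "sky whale transport ship",
    extras=("warm sunset light", "fantasy clouds"),
)
# "Ghibli style, sky whale transport ship, warm sunset light, fantasy clouds, [trigger]"
```

The resulting string can be pasted straight into the Space's prompt box.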

🤝 Community
Want more information and tips? Join our community!
Discord: https://discord.gg/openfreeai

Create your own magical world with the LoRA trained on high-resolution Ghibli images for FLUX Dev.1! 🌈✨
aiqtech 
posted an update 3 days ago
🤗 Hug Contributors
Hugging Face Contributor Dashboard 👨‍💻👩‍💻

aiqtech/Contributors-Leaderboard

📊 Key Features

Contributor Activity Tracking: Visualize yearly and monthly contributions through interactive calendars
Top 100 Rankings: Provide rankings based on models, spaces, and dataset contributions
Detailed Analysis: Analyze user-specific contribution patterns and influence
Visualization: Understand contribution activities at a glance through intuitive charts and graphs

🌟 Core Visualization Elements

Contribution Calendar: Track activity patterns with GitHub-style heatmaps
Radar Chart: Visualize balance between models, spaces, datasets, and activity levels
Monthly Activity Graph: Identify most active months and patterns
Distribution Pie Chart: Analyze proportion by contribution type
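
As an illustration, the monthly-activity binning could look roughly like this (a sketch assuming contribution dates arrive as ISO "YYYY-MM-DD" strings; not the dashboard's actual code):

```python
from collections import Counter

def monthly_activity(dates):
    # Bin ISO dates ("YYYY-MM-DD") by month ("YYYY-MM") for the activity graph.
    return Counter(d[:7] for d in dates)

counts = monthly_activity(["2025-01-02", "2025-01-30", "2025-02-14"])
# counts["2025-01"] == 2, counts["2025-02"] == 1
```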

🏆 Ranking System

Rankings based on overall contributions, spaces, and models
Automatic badges for top 10, 30, and 100 contributors
Ranking visualization to understand your position in the community
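
The badge tiers could be implemented along these lines (a sketch; the tier names are assumptions, only the 10/30/100 cutoffs come from the post):

```python
def badge_for_rank(rank):
    """Map a leaderboard rank to an automatic badge tier.
    Tier names are hypothetical; cutoffs follow the post."""
    if rank <= 10:
        return "top-10"
    if rank <= 30:
        return "top-30"
    if rank <= 100:
        return "top-100"
    return None  # no badge outside the top 100
```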

💡 How to Use

Select a username from the sidebar or enter directly
Choose a year to view specific period activities
Select desired items from models, datasets, and spaces
View comprehensive contribution activities in the detailed dashboard

🚀 Expected Benefits

Provide transparency for Hugging Face community contributors' activities
Motivate contributions and energize the community
Recognize and reward active contributors
Visualize contributions to the open AI ecosystem
hanzla 
posted an update 1 day ago
Hi all,

Last week, I open-sourced Free Search API. It lets you source results from top search engines (including Google and Bing) for free, using SearXNG instances under the hood.

I was overwhelmed by the community's response, and I'm grateful for all the support and suggestions. So today, I have pushed several improvements that make this API more stable. These improvements include:

1) Parallel scraping of search results for faster responses
2) Markdown formatting of search results
3) Prioritizing SearXNG instances with faster Google response times
4) Update/Get endpoints for SearXNG instances
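
The parallel-scraping improvement can be sketched with a thread pool (the `fetch` function here is a stand-in; the real API scrapes SearXNG result pages):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Stand-in for the real scraper, which fetches and Markdown-formats a result page.
    return f"# Result\n\nContent of {url}"

def scrape_parallel(urls, workers=8):
    # Fetch all result pages concurrently instead of one by one.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch, urls))

pages = scrape_parallel(["https://example.com/a", "https://example.com/b"])
```

`ThreadPoolExecutor.map` preserves input order, so results line up with the original search ranking.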

Github: https://github.com/HanzlaJavaid/Free-Search/tree/main

Try the deployed version: https://freesearch.replit.app/docs

I highly appreciate PRs, issues, stars, and any kind of feedback. Let's join hands, and make it real big!
openfree 
posted an update 5 days ago
🚀 Gemma3-R1984-27B: Next Generation Agentic AI Platform

Model Path: VIDraft/Gemma-3-R1984-27B
Space: VIDraft/Gemma-3-R1984-27B
git clone https://huggingface.co/VIDraft/Gemma-3-R1984-27B

💫 A New Frontier in AI Innovation
Gemma3-R1984-27B is a powerful agentic AI platform built on Google's Gemma-3-27B model. It integrates state-of-the-art deep research via web search with multimodal file processing capabilities and handles long contexts up to 8,000 tokens. Designed for local deployment on independent servers using NVIDIA A100 GPUs, it provides high security and prevents data leakage.

🔓 Uncensored and Unrestricted AI Experience
Gemma3-R1984-27B comes with all censorship restrictions removed, allowing users to operate any persona without limitations. The model perfectly implements various roles and characters according to users' creative requests, providing unrestricted responses that transcend the boundaries of conventional AI. This unlimited interaction opens infinite possibilities across research, creative work, entertainment, and many other fields.

✨ Key Features
🖼️ Multimodal Processing

Images (PNG, JPG, JPEG, GIF, WEBP)
Videos (MP4)
Documents (PDF, CSV, TXT) and various other file formats

🔍 Deep Research (Web Search)

Automatically extracts keywords from user queries
Utilizes SERPHouse API to retrieve up to 20 real-time search results
Incorporates multiple sources by explicitly citing them in responses
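
The keyword-extraction step might look roughly like this naive sketch (simple stopword filtering; the model's actual extraction logic is not published):

```python
import re

STOPWORDS = {"the", "a", "an", "of", "in", "on", "for", "and", "is", "what", "how"}

def extract_keywords(query, max_terms=5):
    # Naive extraction: lowercase, drop stopwords, dedupe, keep query order.
    words = re.findall(r"[a-z0-9]+", query.lower())
    seen, keywords = set(), []
    for w in words:
        if w not in STOPWORDS and w not in seen:
            seen.add(w)
            keywords.append(w)
    return keywords[:max_terms]

terms = extract_keywords("What is the best open model for math reasoning?")
# ["best", "open", "model", "math", "reasoning"]
```

The extracted terms would then be passed to the search API to retrieve real-time results.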

📚 Long Context Handling

Capable of processing inputs up to 8,000 tokens
Ensures comprehensive analysis of lengthy documents or conversations

🧠 Robust Reasoning

Employs extended chain-of-thought reasoning for systematic and accurate answer generation

💼 Use Cases

⚡ Fast-response conversational agents
📊 Document comparison and detailed analysis
👁️ Visual question answering from images and videos
🔬 Complex reasoning and research-based inquiries
thomwolf 
posted an update 1 day ago
The new DeepSite space is really insane for vibe-coders
enzostvs/deepsite

With the wave of vibe-coding-optimized LLMs like the latest open-source DeepSeek model (version V3-0324), you can basically prompt out of the box and create any app or game in one shot.

It feels so powerful to me, no more complex framework or under-the-hood prompt engineering to have a working text-to-app tool.

AI is eating the world and *open-source* AI is eating AI itself!

PS: and even more meta is that the DeepSite app and DeepSeek model are both fully open source => time to start recursively improving?

PPS: you still need some inference hosting unless you're running the 600B param model at home, so check the very nice list of HF Inference Providers for this model: deepseek-ai/DeepSeek-V3-0324
stefan-it 
posted an update 2 days ago
Wohoo 🥳 I have finished my 2025 GPU workstation build and I am very excited to train new awesome open source models on it.

I built my last GPU workstation 5 years ago featuring an AMD Ryzen 5900X, 64GB of G.SKILL Trident Z RGB on an ASRock X570 Taichi cooled by an Alphacool Eisbär 420. GPU was a Zotac RTX 3090 AMP Extreme. Unfortunately, I was never satisfied with the case - some Fractal Define 7, as it is definitely too small, airflow is not optimal as I had to open the front door all the time and it also arrived with a partly damaged side panel.

For my new build, I've used the following components: an outstanding new AMD Ryzen 9950X3D with 64GB of Corsair Dominator Titanium (what a name). As a huge Noctua fan - warm greetings to my Austrian neighbors - I am using the brand new Noctua NH-D15 G2 on an ASRock X870E Taichi in an amazing Lian Li LANCOOL III chassis. One joke that only NVIDIA Blackwell users will understand: you definitely need a tempered glass panel to check if your GPU cables/connectors start melting 😂 And the best is yet to come: I returned my previously bought Zotac RTX 5090 Solid to the eBay seller (because of... missing ROPs, only NVIDIA Blackwell users will again understand) and bought a Zotac 5090 AMP Extreme INFINITY (yes, the long name indicates that this is the flagship model from Zotac) from a more trustworthy source (NBB in Germany).

I am so happy to start training and fine-tuning new open source models - stay tuned!!!
Yehor 
posted an update 1 day ago
Are you interested in different runtimes for AI models?

Check out IREE (iree.dev): it converts models to MLIR and then executes them on different platforms.

I have tested it in Rust on CPU and CUDA: https://github.com/egorsmkv/eerie-yolo11
AdinaY 
posted an update about 16 hours ago
AReal-Boba 🔥 a fully open RL framework released by Ant Group, an affiliate of Alibaba.
inclusionAI/areal-boba-67e9f3fa5aeb74b76dcf5f0a
✨ 7B/32B - Apache2.0
✨ Outperforms on math reasoning
✨ Replicates QwQ-32B with 200 data samples for under $200
✨ All-in-one: weights, datasets, code & tech report
Wauplin 
posted an update about 18 hours ago
‼️ huggingface_hub's v0.30.0 is out with our biggest update of the past two years!

Full release notes: https://github.com/huggingface/huggingface_hub/releases/tag/v0.30.0

🚀 Ready. Xet. Go!

Xet is a groundbreaking new protocol for storing large objects in Git repositories, designed to replace Git LFS. Unlike LFS, which deduplicates files, Xet operates at the chunk level—making it a game-changer for AI builders collaborating on massive models and datasets. Our Python integration is powered by [xet-core](https://github.com/huggingface/xet-core), a Rust-based package that handles all the low-level details.

You can start using Xet today by installing the optional dependency:

pip install -U "huggingface_hub[hf_xet]"


With that, you can seamlessly download files from Xet-enabled repositories! And don’t worry—everything remains fully backward-compatible if you’re not ready to upgrade yet.

Blog post: https://huggingface.co/blog/xet-on-the-hub
Docs: https://huggingface.co/docs/hub/en/storage-backends#xet


⚡ Inference Providers

- We’re thrilled to introduce Cerebras and Cohere as official inference providers! This expansion strengthens the Hub as the go-to entry point for running inference on open-weight models.

- Novita is now our third provider to support the text-to-video task, after Fal.ai and Replicate.

- Centralized billing: manage your budget and set team-wide spending limits for Inference Providers! Available to all Enterprise Hub organizations.

from huggingface_hub import InferenceClient
client = InferenceClient(provider="fal-ai", bill_to="my-cool-company")
image = client.text_to_image(
    "A majestic lion in a fantasy forest",
    model="black-forest-labs/FLUX.1-schnell",
)
image.save("lion.png")


- No more timeouts when generating videos, thanks to async calls. Available right now for Fal.ai, and we expect more providers to adopt the same structure very soon!