CycleNavigator: Visualizing Economic and Political Cycles Through AI at a Glance!
Strategic Intelligence Tool for Navigating Historical Waves and Forecasting the Future
Hello there! CycleNavigator brings you an innovative fusion of economic history, data visualization, and generative AI. This open-source project revolutionizes decision-making by displaying four major economic and political cycles through interactive visualizations!
Experience Four Major Cycles in One View:
- Business Cycle (~9 years) - the 'heartbeat' of investment and inventory
- Kondratiev Wave (~50 years) - long waves of technological innovation
- Finance Cycle (~80 years) - the rhythm of debt and financial crises
- Hegemony Cycle (~250 years) - transitions in global order
Cutting-Edge Features:
- Interactive Wave Visualization - intuitive graphs powered by Plotly
- AI-Powered Historical Similarity Mapping - connecting past events via SBERT embeddings
- Real-time News Integration - linking current issues to long cycles with the Brave API
- GPT-Enhanced Analysis - delivering structured insights through optimized prompting
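As a hedged sketch of how historical similarity mapping like this can work: embed each historical event (in the real app, with an SBERT model) and rank candidates by cosine similarity to a query. The 3-d vectors below are arbitrary toy stand-ins for real embeddings:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar(query_vec, event_vecs, event_names, top_k=2):
    """Rank historical events by embedding similarity to a query."""
    scores = [(name, cosine_similarity(query_vec, vec))
              for name, vec in zip(event_names, event_vecs)]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_k]

# Toy 3-d "embeddings"; a real pipeline would use ~384-d vectors
# from an SBERT model (e.g. via the sentence-transformers library).
events = ["1929 crash", "1971 Nixon shock", "2008 crisis"]
vecs = [np.array([0.9, 0.1, 0.0]),
        np.array([0.1, 0.9, 0.2]),
        np.array([0.2, 0.3, 0.9])]
query = np.array([0.85, 0.15, 0.05])  # e.g. the embedding of a news headline
print(most_similar(query, vecs, events))
```

Swapping the toy vectors for real SBERT embeddings leaves the ranking logic unchanged, which is what makes cosine similarity such a convenient backbone for this kind of mapping.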
Practical Applications:
- Improve decision accuracy by instantly grasping economic trends
- Identify connections between breaking news and long-term cycles
- Gain reliable insights through verifiable data and transparent methodology
- Extend to multiple domains: education, research, asset management, policy institutes
A New Intelligence Paradigm: When slow cycles (9-50-80-250 years) and fast headlines (Brave API) meet on a single canvas, experience an innovative decision-making environment where you can reconstruct the past, interpret the present, and design future scenarios!
7 Free resources to master Multi-Agent Systems (MAS)
Collective intelligence is the future of AI. Sometimes, a single agent isn't enough: a team of simpler, specialized agents working together to solve a task can be a much better option. Building Multi-Agent Systems (MAS) isn't easy, so today we're offering you a list of sources that may help you master MAS:
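The idea of a team of specialized agents cooperating on one task can be illustrated with a framework-free toy pipeline (the agent names and state fields here are purely illustrative, not from any library below):

```python
# Toy multi-agent pipeline: each "agent" is a specialist that
# transforms a shared task state, mimicking a research -> analyze
# -> report crew. No framework involved.

def researcher(state):
    """Gathers raw material for the task."""
    state["facts"] = [f"fact about {state['topic']}"]
    return state

def analyst(state):
    """Digests what the researcher found."""
    state["analysis"] = f"{len(state['facts'])} fact(s) analyzed"
    return state

def reporter(state):
    """Writes the final deliverable."""
    state["report"] = f"Report on {state['topic']}: {state['analysis']}"
    return state

def run_crew(topic, agents):
    """Pass the task state through each agent in turn."""
    state = {"topic": topic}
    for agent in agents:
        state = agent(state)
    return state

result = run_crew("multi-agent systems", [researcher, analyst, reporter])
print(result["report"])
```

Real frameworks like CrewAI or CAMEL add LLM-backed reasoning, tool use, and richer communication patterns on top of this basic hand-off structure.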
1. CrewAI tutorials -> https://docs.crewai.com/introduction#ready-to-start-building%3F
At the end of the page you'll find a guide on how to build a crew of agents that researches and analyzes a topic and creates a report. There are also useful guides on how to build a single CrewAI agent and a workflow.
2. Building with the CAMEL multi-agent framework -> https://github.com/camel-ai/camel
Offers guides, cookbooks, and other useful information for building even million-agent societies and for exploring and working with MAS.
4. "Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations" by Yoav Shoham and Kevin Leyton-Brown -> https://www.masfoundations.org/download.html
This book explains learning between agents and how multiple agents solve shared problems and communicate, with a focus on theory, practical examples, and algorithms, diving into game-theoretic and logical approaches.
Also, check out The Turing Post article about MAS -> https://www.turingpost.com/p/mas
Our article can be a good starting guide for exploring what MAS is, its components, architectures, types, top recent developments, and current trends.
I've made an open version of Google's NotebookLM, and it shows the strength of the open-source tech stack!
The app's workflow is simple. Given a source PDF or URL, it extracts the content, then tasks Meta's Llama 3.3-70B with writing the podcast script, using a good prompt crafted by @gabrielchua ("two hosts, with lively discussion, fun notes, insightful questions, etc."). Then it hands off the text-to-speech conversion to Kokoro-82M, and there you go: you have two hosts discussing any article.
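A minimal sketch of the hand-off from script to TTS: split the LLM-written script into per-speaker segments so each one can be streamed to the TTS model with the right voice. The "Host 1:"/"Host 2:" tagging convention here is an assumption for illustration, not necessarily the app's actual script format:

```python
import re

def split_script(script: str):
    """Split a two-host podcast script into (speaker, line) segments
    so each line can be streamed to a TTS model one at a time.
    Assumes the LLM was prompted to tag lines as 'Host 1:' / 'Host 2:'."""
    segments = []
    for line in script.strip().splitlines():
        m = re.match(r"(Host [12]):\s*(.+)", line.strip())
        if m:
            segments.append((m.group(1), m.group(2)))
    return segments

script = """Host 1: Welcome to the show! Today we unpack a fun article.
Host 2: I loved this one - let's start with the key idea.
Host 1: Great place to start."""

for speaker, text in split_script(script):
    # In the real app, each segment would be sent to Kokoro-82M here,
    # with a different voice assigned per host.
    print(f"[{speaker}] {text}")
```

Streaming one segment at a time is what lets audio playback begin before the whole episode is synthesized.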
The generation is nearly instant, because:
> Llama 3.3 70B runs at 1,000 tokens/second with Cerebras inference
> The audio is generated in streaming mode by the tiny (yet powerful) Kokoro, which generates voices faster than real-time.
And the audio generation runs for free on ZeroGPU, hosted by HF on H200s.
Overall, open source solutions rival the quality of closed-source solutions at close to no cost!
I am fascinated by models learning from prompts and rewards alone - no example answers needed, unlike in Supervised Fine-Tuning.
After the DeepSeek boom, everyone is trying GRPO with GSM8K or the Countdown Game...
I wanted a different challenge, like teaching a model to create a schedule from a list of events and priorities.
Choosing an original problem forced me to:
- Think about the problem setting
- Generate data
- Choose the right base model
- Design reward functions (and experience reward hacking)
- Run multiple rounds of training, hoping that my model would learn something.
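As one illustration of the reward-design step, here is a toy reward for the schedule task. The event format, priority weights, and overlap penalty are my own assumptions, not the actual training setup:

```python
def overlap(a, b):
    """Two (start, end) intervals overlap if each starts before the other ends."""
    return a[0] < b[1] and b[0] < a[1]

def schedule_reward(events, schedule):
    """Toy reward for a proposed schedule: credit each scheduled event
    by its priority, minus a penalty for every overlapping pair.
    events: {name: (start, end, priority)}; schedule: list of event names."""
    reward = 0.0
    chosen = [(name, events[name][:2]) for name in schedule if name in events]
    for name, _ in chosen:
        reward += events[name][2]          # priority-weighted credit
    for i in range(len(chosen)):
        for j in range(i + 1, len(chosen)):
            if overlap(chosen[i][1], chosen[j][1]):
                reward -= 2.0              # overlaps are heavily penalized
    return reward

events = {
    "standup":   (9, 10, 1.0),
    "deep_work": (9, 12, 3.0),   # clashes with standup
    "review":    (13, 14, 2.0),
}
print(schedule_reward(events, ["deep_work", "review"]))   # 5.0 (no clash)
print(schedule_reward(events, ["standup", "deep_work"]))  # 4.0 - 2.0 = 2.0
```

A reward like this is also easy to hack: a policy could learn to schedule only the single highest-priority event, which is exactly the kind of behavior reward shaping has to guard against.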
For Inference Providers who have built support for our Billing API (currently: Fal, Novita, HF-Inference, with more coming soon), we've started enabling Pay as you go (=PAYG).
What this means is that you can use those Inference Providers beyond the free included credits, and they're charged to your HF account.
You can see it on this view: any provider that does not have a "Billing disabled" badge is PAYG-compatible.
reacted to as-cle-bert's post about 1 month ago
Finding a job that matches our resume shouldn't be difficult, especially now that we have AI... And still, we're drowning in unclear announcements, jobs whose skill requirements might not really fit us, and tons of material. That's why I decided to build Resume Matcher (https://github.com/AstraBert/resume-matcher), a fully open-source application that scans your resume and searches the web for jobs that match it!

The workflow is very simple:
- A LlamaExtract agent parses the resume and extracts valuable data that represents your profile
- The structured data is passed on to a Job Matching Agent (built with LlamaIndex) that uses it to build a web search query based on your resume
- The web search is handled by Linkup, which finds the top matches and returns them to the Agent
- The agent evaluates the match between your profile and the jobs, and then returns a final answer to you

So, are you ready to find a job suitable for you? You can spin up the application completely locally and with Docker, starting from the GitHub repo: https://github.com/AstraBert/resume-matcher
Feel free to leave your feedback and let me know in the comments if you want an online version of Resume Matcher as well!
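As a rough illustration of the final evaluation step, here is a crude skill-overlap score - a simple stand-in for the agent's LLM-based judgment, with made-up skill lists:

```python
def skill_match(resume_skills, job_skills):
    """Crude match score: Jaccard overlap between skill sets.
    A stand-in for the LLM-based evaluation the real agent performs."""
    r = set(s.lower() for s in resume_skills)
    j = set(s.lower() for s in job_skills)
    if not r | j:
        return 0.0
    return len(r & j) / len(r | j)

resume = ["Python", "LlamaIndex", "Docker"]
jobs = {
    "ML Engineer":  ["python", "pytorch", "docker"],
    "Frontend Dev": ["typescript", "react"],
}
# Rank job postings by how well they overlap with the resume.
ranked = sorted(jobs, key=lambda name: skill_match(resume, jobs[name]), reverse=True)
print(ranked[0])  # ML Engineer
```

An LLM judge can go further than set overlap (e.g. recognizing that PyTorch experience transfers to JAX roles), which is why the real app delegates this step to an agent.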
reacted to yjernite's post about 1 month ago
Today in Privacy & AI Tooling - introducing a nifty new tool to examine where data goes in open-source apps on Hugging Face
HF Spaces have tons (100Ks!) of cool demos leveraging or examining AI systems - and because most of them are OSS, we can see exactly how they handle user data.
That requires actually reading the code though, which isn't always easy or quick! Good news: code LMs have gotten pretty good at automatic review, so we can offload some of the work - here I'm using Qwen/Qwen2.5-Coder-32B-Instruct to generate reports, and it works pretty OK.
The app works in four stages:
1. Download all code files
2. Use the code LM to generate a detailed report pointing to code where data is transferred/(AI-)processed (screen 1)
3. Summarize the app's main functionality and data journeys (screen 2)
4. Build a Privacy TLDR with those inputs
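Stage 2 can be sketched as simple prompt assembly over the downloaded files; the prompt wording and file markers below are illustrative only, not the app's actual prompt:

```python
def build_review_prompt(files: dict) -> str:
    """Assemble a single code-review prompt from an app's source files.
    Contents are concatenated with file markers so the code LM can
    point to specific files in its report."""
    parts = ["Review the following app code. Report every place where "
             "user data is transferred over the network or processed by "
             "an AI model, citing the file name.\n"]
    for path, content in sorted(files.items()):
        parts.append(f"--- FILE: {path} ---\n{content}\n")
    return "\n".join(parts)

# Hypothetical downloaded files for one Space.
files = {
    "app.py": "import requests\nrequests.post('https://api.example.com', json=payload)",
    "utils.py": "def clean(text):\n    return text.strip()",
}
prompt = build_review_prompt(files)
# `prompt` would then be sent to a code LM such as Qwen2.5-Coder-32B-Instruct.
```

Keeping the per-file markers in the prompt is what lets the generated report cite exact locations rather than vague descriptions.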
It comes with a bunch of pre-reviewed apps/Spaces - great to see how many process data locally or through (private) HF endpoints.
As xet-team infrastructure begins backing hundreds of repositories on the Hugging Face Hub, we're getting to put on our researcher hats and peer into the bytes.
IMO, one of the most interesting ideas Xet storage introduces is a globally shared store of data.
When you upload a file through Xet, the contents are split into ~64KB chunks and deduplicated, but what if those same chunks already exist in another repo on the Hub?
Because of this, different repositories can share the bytes we store. That opens up something cool: we can draw a graph of which repos actually share data at the chunk level, where:
- Nodes = repositories
- Edges = shared chunks
- Edge thickness = how much they overlap
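Under the simplifying assumption of fixed-size chunking (Xet actually uses content-defined chunk boundaries), the graph construction can be sketched as:

```python
import hashlib
from itertools import combinations

CHUNK_SIZE = 64 * 1024  # fixed-size stand-in for Xet's ~64KB content-defined chunks

def chunk_hashes(data: bytes):
    """Hash every chunk of a file's bytes; equal chunks deduplicate to one hash."""
    return {hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
            for i in range(0, len(data), CHUNK_SIZE)}

def shared_chunk_graph(repos: dict):
    """Build edges between repos; edge weight = number of chunks in common.
    repos: {repo_name: [file_bytes, ...]}"""
    hashes = {name: set() for name in repos}
    for name, files in repos.items():
        for data in files:
            hashes[name] |= chunk_hashes(data)
    edges = {}
    for a, b in combinations(repos, 2):
        w = len(hashes[a] & hashes[b])
        if w:
            edges[(a, b)] = w
    return edges

blob = b"x" * CHUNK_SIZE  # one identical chunk stored by two repos
repos = {
    "repo-a": [blob + b"unique-a"],
    "repo-b": [blob + b"unique-b"],
    "repo-c": [b"other"],
}
print(shared_chunk_graph(repos))  # {('repo-a', 'repo-b'): 1}
```

Edge thickness in the visualization corresponds to these weights; repos with no shared chunks simply get no edge.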
Come find the many BERT islands. Or see how datasets relate in practice, not just in theory. See how libraries or tasks can tie repositories together. You can play around with node size using storage/likes/downloads too.
The result is a super fun visualization from @saba9 and @znation that I've already lost way too much time to. I'm excited to see how the networks grow as we add more repositories!