Thijs (ThijsL202)
5 followers · 23 following
AI & ML interests
None yet
Recent Activity
Reacted to codelion's post with 🔥 · 9 days ago
I recently worked on a LoRA that improves tool use in LLMs. Thought the approach might interest folks here.

The issue I have had when trying to use some of the local LLMs with coding agents is this:

Me: "Find all API endpoints with authentication in this codebase"
LLM: "You should look for @app.route decorators and check if they have auth middleware..."

But I often want it to actually search the files and show me; the LLM just doesn't trigger a tool call.

To fine-tune it for tool use I combined two data sources:
1. Magpie scenarios - 5000+ diverse tasks (bug hunting, refactoring, security audits)
2. Real execution - ran these on actual repos (FastAPI, Django, React) to get authentic tool responses

This ensures the model learns both breadth (many scenarios) and depth (real tool behavior).

Tools we taught:
- `read_file` - actually read file contents
- `search_files` - regex/pattern search across codebases
- `find_definition` - locate classes/functions
- `analyze_imports` - dependency tracking
- `list_directory` - explore structure
- `run_tests` - execute test suites

Improvements:
- Tool calling accuracy: 12% → 80%
- Correct parameters: 8% → 87%
- Multi-step tasks: 3% → 78%
- End-to-end completion: 5% → 80%
- Tools per task: 0.2 → 3.8

The LoRA really improves intentional tool calling. As an example, consider the query "Find ValueError in payment module". The response proceeds as follows:
1. Calls `search_files` with pattern "ValueError"
2. Gets 4 matches across 3 files
3. Calls `read_file` on each match
4. Analyzes the context
5. Reports: "Found 3 ValueError instances: payment/processor.py:47 for invalid amount, payment/validator.py:23 for unsupported currency..."

Resources:
- Colab notebook: https://colab.research.google.com/github/codelion/ellora/blob/main/Ellora_Recipe_3_Enhanced_Tool_Calling_and_Code_Understanding.ipynb
- Model: https://huggingface.co/codelion/Llama-3.2-1B-Instruct-tool-calling-lora
- GitHub: https://github.com/codelion/ellora
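To make this concrete, here is a minimal sketch of trying the published adapter locally with transformers + peft. The base checkpoint name, the system prompt, and the way the tool list is described to the model are my assumptions, not taken from the post; the linked Colab notebook has the exact recipe.

```python
# Minimal sketch (assumptions noted in comments): load the LoRA from the post on
# top of its base model and prompt it with a query that should trigger a tool call.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "meta-llama/Llama-3.2-1B-Instruct"  # assumed base model for this adapter
ADAPTER = "codelion/Llama-3.2-1B-Instruct-tool-calling-lora"

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, device_map="auto")
model = PeftModel.from_pretrained(model, ADAPTER)  # attach the tool-calling LoRA

# Hypothetical tool-description prompt; the real recipe defines the schema for
# read_file, search_files, find_definition, etc. in the notebook.
messages = [
    {"role": "system", "content": "You can call these tools: read_file, search_files, "
                                  "find_definition, analyze_imports, list_directory, run_tests."},
    {"role": "user", "content": "Find ValueError in payment module"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```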
Liked a model Lexa-B/LexaLCM_Pre3 · 10 days ago
Reacted to dhruv3006's post with 🔥 · 10 days ago
Pair a vision grounding model with a reasoning LLM with Cua

Cua just shipped v0.4 of the Cua Agent framework with Composite Agents - you can now pair a vision/grounding model with a reasoning LLM using a simple modelA+modelB syntax. Best clicks + best plans.

The problem: every GUI model speaks a different dialect.
- some want pixel coordinates
- others want percentages
- a few spit out cursed tokens like <|loc095|>

We built a universal interface that works the same across Anthropic, OpenAI, Hugging Face, etc.:

```python
agent = ComputerAgent(
    model="anthropic/claude-3-5-sonnet-20241022",
    tools=[computer]
)
```

But here's the fun part: you can combine models by specialization. Grounding model (sees + clicks) + planning model (reasons + decides):

```python
agent = ComputerAgent(
    model="huggingface-local/HelloKKMe/GTA1-7B+openai/gpt-4o",
    tools=[computer]
)
```

This gives GUI skills to models that were never built for computer use. One handles the eyes/hands, the other the brain. Think driver + navigator working together. Two specialists beat one generalist.

We've got a ready-to-run notebook demo - curious what combos you all will try.

GitHub: https://github.com/trycua/cua
Blog: https://www.trycua.com/blog/composite-agents
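For context, a rough end-to-end sketch of what driving a composite agent could look like. The import paths, the `Computer()` setup, and the async `run()` loop below are my assumptions rather than anything stated in the post; only the `ComputerAgent(model=..., tools=[computer])` construction and the grounding+planning model string come from it. The project's notebook demo and README are the authoritative reference.

```python
# Hedged sketch: imports, Computer() setup, and the run() loop are assumed, not
# confirmed by the post. The composite model string syntax is from the post.
import asyncio
from computer import Computer          # assumed import path from the cua project
from agent import ComputerAgent        # assumed import path from the cua project

async def main():
    async with Computer() as computer:  # assumed context-manager style setup
        agent = ComputerAgent(
            # grounding model (eyes/hands) + planning model (brain), per the post
            model="huggingface-local/HelloKKMe/GTA1-7B+openai/gpt-4o",
            tools=[computer],
        )
        # assumed streaming interface; adapt to whatever agent.run() actually returns
        async for update in agent.run("Open the settings app and enable dark mode"):
            print(update)

asyncio.run(main())
```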
Organizations
None yet
ThijsL202's datasets
None public yet