pretty_name: ROBOCUP24
size_categories:
- 10K<n<100K
---
# ROBOCUP24@Home

## Abstract

Large Language Models (LLMs) have been widely utilized to perform complex robotic tasks. However, handling task-specific complex planning based on robot knowledge and abilities remains a significant challenge. This paper proposes a novel method for dataset generation that integrates hand-annotated data with automated scripts to create a diverse, task-specific training dataset for fine-tuning LLMs. This approach not only enhances the diversity and volume of training data but also ensures that LLMs can autonomously answer questions and formulate plans within their designated domain expertise. Additionally, the model can provide nuanced responses to queries about personal details, demonstrating its ability to reason and engage intelligently. In contrast to other LLM-based methods for complex robotic tasks, our approach can achieve both reasonable planning and QnA capabilities through small-scale fine-tuning, enabling fast on-device local inference. Our method is validated through simulation and real-world testing across various practical scenarios.

## Key Features

- **Task-Specific Dataset Generation:** Combines hand-annotated data and automated scripts to generate datasets suited for fine-tuning LLMs to perform complex robotic tasks.
- **QnA Capabilities:** Enables the model to autonomously respond to user queries about tasks and personal details, showcasing reasoning and interaction.
- **Small-Scale Fine-Tuning:** Achieves both planning and QnA capabilities through small-scale fine-tuning, which allows for fast on-device local inference.
- **Real-World Validation:** The method is validated through simulations and real-world testing, ensuring its effectiveness across practical scenarios.
## How It Works

1. **Dataset Generation:**
   - We combine manually annotated data with automated scripts that simulate diverse, task-specific scenarios (see the generation sketch after this list).
   - This hybrid approach ensures a rich and varied dataset, tailored to fine-tune LLMs for specific robotic tasks and domains.

2. **Model Fine-Tuning:**
   - LLMs are fine-tuned on the generated dataset, allowing them to handle complex tasks and provide intelligent, context-aware responses (see the fine-tuning sketch after this list).
   - The fine-tuning process includes both planning capabilities and nuanced QnA, supporting efficient interaction and decision-making in real-world environments.

3. **Simulation and Testing:**
   - The fine-tuned model is tested both in simulations and through real-world robotic tasks, ensuring robust performance across different scenarios.
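As a concrete illustration of step 1, here is a minimal sketch of how hand-annotated seed pairs could be mixed with script-generated variations. It is a sketch only: the field names (`instruction`, `plan`), the object/attribute/location lists, and the output file name are hypothetical, while the command format follows the plan strings shown in the Example section below.

```python
# Hypothetical sketch: mix hand-annotated seed pairs with script-generated variations.
import json
import random

# A few hand-annotated (instruction, plan) pairs serve as seeds.
hand_annotated = [
    {
        "instruction": "give me the biggest cup from the dishwasher",
        "plan": "Respond('Got it i will bring it to you!') | Move_to('dishwasher') | "
                "Search_object('cup', 'biggest') | Pickup('cup', 'biggest') | "
                "Search_person('user') | Give_to('user')",
    },
]

# Scripted generation: fill a fetch-style template with objects, attributes, and locations.
objects = ["cup", "bottle", "plate"]
attributes = ["biggest", "smallest", "red"]
locations = ["dishwasher", "kitchen table", "shelf"]

def make_sample(obj: str, attr: str, loc: str) -> dict:
    instruction = f"give me the {attr} {obj} from the {loc}"
    plan = (
        "Respond('Got it i will bring it to you!') | "
        f"Move_to('{loc}') | Search_object('{obj}', '{attr}') | "
        f"Pickup('{obj}', '{attr}') | Search_person('user') | Give_to('user')"
    )
    return {"instruction": instruction, "plan": plan}

generated = [make_sample(o, a, l) for o in objects for a in attributes for l in locations]

# Combine, shuffle, and write one JSON object per line (file name is hypothetical).
dataset = hand_annotated + generated
random.shuffle(dataset)
with open("robocup24_train.jsonl", "w") as f:
    for sample in dataset:
        f.write(json.dumps(sample) + "\n")
```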
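For step 2, a minimal small-scale fine-tuning sketch is shown below, assuming Hugging Face `transformers`, `datasets`, and `peft` with LoRA adapters. The base model, hyperparameters, prompt format, and file name are placeholders, not necessarily what was used for ROBOCUP24.

```python
# Hypothetical sketch: parameter-efficient (LoRA) fine-tuning on the generated JSONL file.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # placeholder; any small causal LM works
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA adapters keep the number of trained parameters small, which is what makes
# a quick, small-scale fine-tune (and later on-device inference) practical.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

# Load the file written by the generation sketch above (hypothetical name and fields).
raw = load_dataset("json", data_files="robocup24_train.jsonl", split="train")

def tokenize(example):
    # Placeholder prompt format: user instruction followed by the target plan.
    text = f"User: {example['instruction']}\nModel: {example['plan']}"
    return tokenizer(text, truncation=True, max_length=512)

tokenized = raw.map(tokenize, remove_columns=raw.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="robocup24-lora", num_train_epochs=3,
                           per_device_train_batch_size=4, learning_rate=2e-4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

After training, the LoRA adapter can be loaded alongside (or merged into) the base model for local inference; the small adapter size is what keeps the approach practical on-device.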
## Advantages

- **Enhanced Training Data:** The use of hand-annotated data combined with automated scripts results in a more diverse and comprehensive dataset.
- **Fast On-Device Inference:** The small-scale fine-tuning approach enables rapid, local inference, making it ideal for use in real-time robotic systems.
- **Versatile Applications:** The model’s ability to reason, answer questions, and plan tasks makes it suitable for a wide range of applications in robotics, from household robots to industrial systems.
## Example

Here’s an example of the model’s QnA and planning capabilities (a sketch of how the plan string can be executed follows the examples):

1. **Task Planning:**
   - **User:** "give me the biggest cup from the dishwasher"
   - **Model:** `Respond('Got it i will bring it to you!') | Move_to('dishwasher') | Search_object('cup', 'biggest') | Pickup('cup', 'biggest') | Search_person('user') | Give_to('user')`

2. **QnA about Personal Details:**
   - **User:** "please say something about yourself"
   - **Model:** `Respond('Well, my name is lucio i am a house hold robot and very close to AGI Artificial general intelligence!')`
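The plan string is a sequence of commands separated by `|`. Below is a minimal, hypothetical sketch of how a robot-side executor might split such a string and dispatch each command to a skill. The command names are taken from the example above; the skill functions are print-only stubs standing in for real robot behaviors.

```python
# Hypothetical sketch: parse a pipe-separated plan string and dispatch each command.
import ast
import re

PLAN = ("Respond('Got it i will bring it to you!') | Move_to('dishwasher') | "
        "Search_object('cup', 'biggest') | Pickup('cup', 'biggest') | "
        "Search_person('user') | Give_to('user')")

# Stub skills; a real robot would map these to speech, navigation, and manipulation.
def respond(text): print(f"[speech] {text}")
def move_to(place): print(f"[nav] moving to {place}")
def search_object(name, attr): print(f"[vision] searching for the {attr} {name}")
def pickup(name, attr): print(f"[arm] picking up the {attr} {name}")
def search_person(who): print(f"[vision] searching for {who}")
def give_to(who): print(f"[arm] handing object to {who}")

SKILLS = {
    "Respond": respond, "Move_to": move_to, "Search_object": search_object,
    "Pickup": pickup, "Search_person": search_person, "Give_to": give_to,
}

def execute(plan: str) -> None:
    """Split the plan on '|' and call the matching skill for each command."""
    # Note: a naive split breaks if an argument ever contains '|'; a real executor
    # would need a more robust parser.
    for step in plan.split("|"):
        match = re.match(r"\s*(\w+)\((.*)\)\s*$", step)
        if not match:
            raise ValueError(f"Malformed step: {step!r}")
        name, raw_args = match.groups()
        # ast.literal_eval safely parses the quoted argument list into a tuple.
        args = ast.literal_eval(f"({raw_args},)") if raw_args.strip() else ()
        SKILLS[name](*args)

execute(PLAN)
```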
## Validation

The model was validated in a variety of test scenarios, both in simulated environments and real-world settings, including:

- Household cleaning tasks
- Personal assistant queries
- Complex task planning (e.g., meal preparation, room organization)

## License

This project is licensed under the MIT License.