Hi all!
I was hoping somebody would be willing to check out this thought experiment of mine, which aims to reduce token usage in inter-agent communication.
How It Works:
1. You provide a task in natural language (NL)
2. NL-to-CCL Agent: Converts your request into a structured Compressed Communication Language (CCL) task.
3. Inter-agent communication occurs in CCL
4. CCL is translated back to NL before being presented to the user.
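To make the flow concrete, here is a minimal runnable sketch of the four steps. The real agents in the notebook are LLM calls; the functions below are hypothetical stubs I made up purely to show the structure, not the notebook's actual CCL format or prompts.

```python
# Hypothetical sketch of the 4-step NL -> CCL -> NL pipeline.
# In the real notebook each function would be an LLM-backed agent;
# here they are plain stubs so the control flow is runnable.

def nl_to_ccl(nl_task: str) -> str:
    # Step 2 (stub): a real NL-to-CCL agent would prompt an LLM to
    # emit a structured CCL task; we fake a terse encoding instead.
    return "TASK:" + "_".join(nl_task.lower().split()[:4])

def run_agents(ccl_task: str) -> str:
    # Step 3 (stub): inter-agent communication happens entirely in CCL.
    return ccl_task + "|RESULT:ok"

def ccl_to_nl(ccl_result: str) -> str:
    # Step 4 (stub): translate the final CCL result back to NL.
    return f"Finished: {ccl_result.split('|')[-1]}"

def pipeline(nl_task: str) -> str:
    ccl = nl_to_ccl(nl_task)       # step 2
    ccl_result = run_agents(ccl)   # step 3
    return ccl_to_nl(ccl_result)   # step 4

print(pipeline("Write a sorting function in Python"))
```

The point of the shape above is that NL only appears at the two edges (user input, final output); everything between the agents stays in the compressed form.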
I have a notebook with an example that claims to achieve these results:
--- Token Usage Summary ---
Total NL Tokens (User Input & Final Output): 364
Total CCL Tokens (for NL/CCL Conversions): 159
Total CCL Tokens (Internal Agent Communication): 194
Overall token savings on NL-to-CCL conversion portions: 56.32%
------------------------
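If I'm reading the summary right, the 56.32% figure compares the NL token count (364) against the CCL token count for the converted messages (159). That assumption checks out arithmetically:

```python
# Reproducing the reported savings figure from the summary above.
# Assumption (mine, not stated in the notebook): savings is computed
# as 1 - (CCL conversion tokens / NL tokens).
nl_tokens = 364   # total NL tokens (user input & final output)
ccl_tokens = 159  # total CCL tokens for the NL/CCL conversions

savings = (1 - ccl_tokens / nl_tokens) * 100
print(f"{savings:.2f}%")  # 56.32%
```

Note this ratio excludes the 194 tokens of internal CCL traffic and the token cost of running the conversion agents themselves, so the end-to-end savings would be lower than 56.32%.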
When asking Gemini it concludes:
"Yes, the methods used in this notebook are sensible. The multi-agent architecture is logical, and the introduction of a Compressed Communication Language (CCL) is a clever and practical solution to the real-world problems of token cost and ambiguity in LLM-based systems. While it's a proof-of-concept that would need more robust error handling and potentially more complex feedback loops for a production environment, it successfully demonstrates a viable and efficient strategy for automating a software development lifecycle."
However, I have no idea if it's actually working or if I'm just crazy.
I'd really appreciate it if someone would be willing to share their thoughts on this!
The notebook:
https://gist.github.com/CultriX-Github/7f9895bc5e4d99d2d4a3eb17d079f08b#file-token-reduction-ipynb
Thank you for taking the time! :)