
Clelia Astra Bertelli

as-cle-bert

AI & ML interests

Biology + Artificial Intelligence = ❤️ | AI for sustainable development, sustainable development for AI | Researching machine learning enhancement | I love automating everyday things | Blogger | Open Source


Organizations

Social Post Explorers · Hugging Face Discord Community · GreenFit AI

as-cle-bert's activity

posted an update 1 day ago
Let's pipe some **data from the web** into our vector database, shall we? 🤠

With ๐ข๐ง๐ ๐ž๐ฌ๐ญ-๐š๐ง๐ฒ๐ญ๐ก๐ข๐ง๐  ๐ฏ๐Ÿ.๐Ÿ‘.๐ŸŽ (https://github.com/AstraBert/ingest-anything) you can now scrape content simply starting from URLs, extract the text from it, chunk it and put it into your favorite LlamaIndex-compatible database!๐Ÿ•ธ๏ธ

This is possible thanks to **crawlee** by Apify, an open-source crawling library for Python and JavaScript that handles the whole data flow from the web: ingest-anything then combines it with **BeautifulSoup**, **PdfItDown** and **PyMuPDF** to scrape the HTML files, convert them to PDF and extract the text, hassle-free! 😸

Check the attached code snippet if you're curious to know how to get started 🎬

PS: Don't tell anybody, but this release also has another gem... It supports OpenAI models for agentic chunking, following the latest releases of Chonkie 🦛✨

If you don't want to miss out on new features, leave us a little star on GitHub ➡️ https://github.com/AstraBert/ingest-anything
And join our Discord community! ➡️ https://discord.gg/kDqHNjks
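The scrape-then-extract stage of this pipeline can be sketched with the standard library alone. Note this is an illustrative sketch, not the ingest-anything API: the real flow uses crawlee to fetch pages and PdfItDown/PyMuPDF for the PDF round-trip, while here a hand-written parser pulls text straight from an inline HTML string.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the text content of an HTML page, skipping script/style blocks."""
    def __init__(self):
        super().__init__()
        self._skip = False
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def extract_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.parts)

# In the real pipeline this page would come from the crawler; here it's inlined.
html = "<html><body><h1>Docs</h1><p>Hello world.</p><script>x=1</script></body></html>"
text = extract_text(html)
print(text)  # Docs Hello world.
```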
posted an update 10 days ago
Hey there, **ingest-anything v1.0.0** just dropped with major changes:

✅ Embeddings: now works with Sentence Transformers, Jina AI, Cohere, OpenAI, and Model2Vec
All powered via **Chonkie's AutoEmbeddings**.
No more local-only limitations 🙌
✅ Vector DBs: now supports **all LlamaIndex-compatible backends**
Think: Qdrant, Pinecone, Weaviate, Milvus, etc.
No more bottlenecks 🔓
✅ File parsing: now plugs into any **LlamaIndex-compatible data loader**
Using LlamaParse, Docling or your own setup? You're covered.
Curious to know more? Try it out! 👉 https://github.com/AstraBert/ingest-anything
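The "any backend" design above boils down to dependency injection behind small interfaces. A minimal sketch of the idea, assuming nothing about the library's real classes (the `Embedder`/`VectorStore` protocols and toy implementations below are illustrative stand-ins for Chonkie's AutoEmbeddings and a LlamaIndex-compatible store):

```python
from typing import Protocol

class Embedder(Protocol):
    def embed(self, texts: list[str]) -> list[list[float]]: ...

class VectorStore(Protocol):
    def add(self, vectors: list[list[float]], payloads: list[str]) -> None: ...

class HashEmbedder:
    """Toy embedder: maps the first characters to float codes."""
    def embed(self, texts):
        return [[float(ord(c)) for c in t[:4]] for t in texts]

class MemoryStore:
    """Toy in-memory store standing in for Qdrant, Pinecone, etc."""
    def __init__(self):
        self.rows = []
    def add(self, vectors, payloads):
        self.rows.extend(zip(vectors, payloads))

def ingest(chunks: list[str], embedder: Embedder, store: VectorStore) -> int:
    """Any embedder + any store that satisfy the protocols plug in unchanged."""
    store.add(embedder.embed(chunks), chunks)
    return len(chunks)

store = MemoryStore()
n = ingest(["alpha", "beta"], HashEmbedder(), store)
print(n, len(store.rows))  # 2 2
```

Swapping the backend then means passing a different object, not rewriting the pipeline.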
posted an update 11 days ago
One of the biggest challenges I've faced since I started developing [**PdfItDown**](https://github.com/AstraBert/PdfItDown) was correctly handling the conversion of files like Excel sheets and CSVs: table conversion was bad and messy, almost unusable for downstream tasks 🫣

That's why today I'm excited to introduce **readers**, the new feature of PdfItDown v1.4.0! 🎉

With ๐˜ณ๐˜ฆ๐˜ข๐˜ฅ๐˜ฆ๐˜ณ๐˜ด, you can choose among three (for now๐Ÿ‘€) flavors of text extraction and conversion to PDF:

- ๐——๐—ผ๐—ฐ๐—น๐—ถ๐—ป๐—ด, which does a fantastic work with presentations, spreadsheets and word documents๐Ÿฆ†

- **LlamaParse** by LlamaIndex, suited to more complex and articulated documents with a mixture of text, images and tables 🦙

- ๐— ๐—ฎ๐—ฟ๐—ธ๐—œ๐˜๐——๐—ผ๐˜„๐—ป by Microsoft, not the best at handling highly structured documents, by extremly flexible in terms of input file format (it can even convert XML, JSON and ZIP files!)โœ’๏ธ

You can use this new feature in your Python scripts (check the attached code snippet! 😉) and from the command-line interface as well! 🐍

Have fun, and don't forget to star the repo on GitHub ➡️ https://github.com/AstraBert/PdfItDown
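Selecting among several *readers* is, at heart, a registry-dispatch pattern. A stdlib sketch of that mechanism (the stub converters and the `reader=` keyword are hypothetical; PdfItDown's actual API may differ, and the real readers are Docling, LlamaParse and MarkItDown):

```python
from typing import Callable

# Hypothetical reader registry: each name maps to a converter callable.
READERS: dict[str, Callable[[str], str]] = {
    "docling":    lambda path: f"[docling] parsed {path}",
    "llamaparse": lambda path: f"[llamaparse] parsed {path}",
    "markitdown": lambda path: f"[markitdown] parsed {path}",
}

def convert(path: str, reader: str = "markitdown") -> str:
    """Dispatch the file to the chosen reader, failing loudly on unknown names."""
    try:
        return READERS[reader](path)
    except KeyError:
        raise ValueError(f"unknown reader: {reader!r}") from None

print(convert("sheet.xlsx", reader="docling"))  # [docling] parsed sheet.xlsx
```

Adding a fourth flavor later is just one more registry entry, with no change to the call sites.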
replied to their post 15 days ago

I am working on supporting other embedding models, and we will have that soon; for now I had to restrict compatibility to Sentence Transformers only.
As for page numbers, I am also working toward better and more extensive metadata: everything is a big work in progress and will come in future releases!

replied to their post 17 days ago

So, there are two possibilities:

  • If you mean customizing the embedder among the ones available within Sentence Transformers, that is very possible: you just have to change the embedding_model parameter when calling the ingest method
  • If you mean that you have your own embedding model (e.g. saved on your PC), that is a tad more difficult. Sentence Transformers might allow loading the model from your PC as long as it is compatible with the package; I think this guide might be useful in that regard

For now the package only supports Sentence Transformers models; in the future it will probably extend its support to other embedding models as well :)

posted an update 18 days ago
Ever dreamt of ingesting into a vector DB that pile of CSVs, Word documents and presentations lying in some remote folders on your PC? 🗂️
What if I told you that you can do it within three to six lines of code? 🤯
Well, with my latest open-source project, **ingest-anything** (https://github.com/AstraBert/ingest-anything), you can take all your non-PDF files, convert them to PDF, extract their text, chunk, embed and load them into a vector database, all in one go! 🚀
How? It's pretty simple!
📁 The input files are converted into PDF by PdfItDown (https://github.com/AstraBert/PdfItDown)
📑 The PDF text is extracted using LlamaIndex readers
🦛 The text is chunked exploiting Chonkie
🧮 The chunks are embedded thanks to Sentence Transformers models
🗄️ The embeddings are loaded into a Qdrant vector database

And you're done! ✅
Curious to try it? Install it by running:

๐˜ฑ๐˜ช๐˜ฑ ๐˜ช๐˜ฏ๐˜ด๐˜ต๐˜ข๐˜ญ๐˜ญ ๐˜ช๐˜ฏ๐˜จ๐˜ฆ๐˜ด๐˜ต-๐˜ข๐˜ฏ๐˜บ๐˜ต๐˜ฉ๐˜ช๐˜ฏ๐˜จ

And you can start using it in your Python scripts! 🐍
Don't forget to star it on GitHub and let me know if you have any feedback! ➡️ https://github.com/AstraBert/ingest-anything
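The chunking step in the pipeline above deserves a closer look. A minimal sketch of fixed-size chunking with overlap, using characters for simplicity (Chonkie itself chunks on tokens and offers semantic/agentic strategies, so this is only the simplest analogue of what it does):

```python
def chunk(text: str, size: int = 20, overlap: int = 5) -> list[str]:
    """Split text into fixed-size character windows, each sharing `overlap`
    characters with the previous one so context isn't cut mid-thought."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

pieces = chunk("a" * 50, size=20, overlap=5)
print(len(pieces))  # windows start at 0, 15, 30 → 3 chunks
```

Overlap trades a little storage for better retrieval: a sentence split across two chunks still appears whole in at least one of them.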
replied to their post 20 days ago
Hey @T-2000, you're absolutely right! I'm in the process of putting the application online, so for now the repo got a bit messy; tomorrow it will be clean and ready to be spun up locally as well. Sorry for the inconvenience!

posted an update 22 days ago
Finding a job that matches our resume shouldn't be difficult, especially now that we have AI... And still, we're drowning in unclear announcements, jobs whose skill requirements might not really fit us, and tons of material 😵‍💫
That's why I decided to build **Resume Matcher** (https://github.com/AstraBert/resume-matcher), a fully open-source application that scans your resume and searches the web for jobs that match it! 🎉
The workflow is very simple:
🦙 A LlamaExtract agent parses the resume and extracts valuable data that represents your profile
🗄️ The structured data is passed on to a Job Matching Agent (built with LlamaIndex 😉) that uses it to build a web search query based on your resume
🌐 The web search is handled by Linkup, which finds the top matches and returns them to the agent
🔎 The agent evaluates the match between your profile and the jobs, and then returns a final answer to you

So, are you ready to find a job that suits you? 💼 You can spin up the application completely locally and with Docker, starting from the GitHub repo ➡️ https://github.com/AstraBert/resume-matcher
Feel free to leave your feedback, and let me know in the comments if you want an online version of Resume Matcher as well! ✨
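The "structured data → web search query" step of this workflow can be sketched in a few lines. Purely illustrative: the field names and the query shape below are assumptions, since the real project derives the query with an LLM-powered agent rather than string templating:

```python
def build_job_query(profile: dict) -> str:
    """Fold hypothetical resume fields into a single web-search query."""
    parts = [profile.get("title", "")]
    parts += profile.get("skills", [])[:3]          # top skills keep the query focused
    if profile.get("location"):
        parts.append(f"jobs in {profile['location']}")
    return " ".join(p for p in parts if p)

profile = {"title": "ML engineer", "skills": ["Python", "RAG", "Docker"], "location": "Milan"}
print(build_job_query(profile))  # ML engineer Python RAG Docker jobs in Milan
```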
posted an update about 1 month ago
Llama-4 is out, and I couldn't resist cooking something with it... So I came up with **LlamaResearcher** (https://llamaresearcher.com), your deep-research AI companion! 🔎

The workflow behind **LlamaResearcher** is simple:
💬 You submit a query
🛡️ Your query is evaluated by a Llama 3 guard model, which deems it safe or unsafe
🧠 If your query is safe, it is routed to the Researcher Agent
⚙️ The Researcher Agent expands the query into three sub-queries to search the web with
🌐 The web is searched for each of the sub-queries
📊 The retrieved information is evaluated for relevancy against your original query
✍️ The Researcher Agent produces an essay based on the information it gathered, taking care to reference its sources

The agent itself is also built with easy-to-use and intuitive blocks:
🦙 LlamaIndex provides the agentic architecture and the integrations with the language models
⚡ Groq makes Llama-4 available with its lightning-fast inference
🔎 Linkup allows the agent to deep-search the web and provides sourced answers
💪 FastAPI does the heavy lifting, wrapping everything within an elegant API interface
⏱️ Redis is used for API rate limiting
🎨 Gradio creates a simple but powerful user interface

Special mention also to Lovable, which helped me build the first draft of the landing page for LlamaResearcher! 💖

If you're curious and want to try LlamaResearcher, you can, completely free and with no subscription, for 30 days from now ➡️ https://llamaresearcher.com
And if you're like me and like getting your hands in code and building stuff on your own machine, I have good news: this is all open-source, fully reproducible locally, and Docker-ready 🐋
Just head to the GitHub repo: https://github.com/AstraBert/llama-4-researcher and don't forget to star it if you find it useful! ⭐

As always, have fun and feel free to leave your feedback ✨
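Two steps of the workflow above, query expansion and relevancy filtering, can be sketched with plain Python. In LlamaResearcher both steps are done by the LLM; here template-based sub-queries and word-overlap scoring stand in for it, so treat the functions below as toy approximations rather than the agent's actual logic:

```python
def expand(query: str) -> list[str]:
    """Template-based stand-in for LLM query expansion into three sub-queries."""
    return [f"{query} overview", f"{query} recent developments", f"{query} criticisms"]

def relevancy(query: str, snippet: str) -> float:
    """Fraction of query words that appear in the snippet (crude relevancy proxy)."""
    q, s = set(query.lower().split()), set(snippet.lower().split())
    return len(q & s) / len(q) if q else 0.0

subs = expand("mixture of experts")
hits = ["Mixture of experts scales transformers", "Unrelated cooking recipe"]
# Keep only snippets that are sufficiently relevant to the ORIGINAL query,
# mirroring the evaluation step before essay writing.
kept = [h for h in hits if relevancy("mixture of experts", h) >= 0.5]
print(len(subs), kept)
```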
posted an update about 1 month ago
I heard someone saying **voice** assistants are the future, and someone else that **MCP** will rule the AI world... So I decided to combine both! 🚀

Meet ๐“๐ฒ๐’๐•๐€ (๐—ง๐˜†pe๐—ฆcript ๐—ฉoice ๐—”ssistant, https://github.com/AstraBert/TySVA), your (speaking) AI companion for everyday TypeScript programming tasks!๐ŸŽ™๏ธ

TySVA is a skilled TypeScript expert and, to provide accurate and up-to-date responses, she leverages the following workflow:
🗣️ If you talk to her, she converts the audio into a textual prompt and uses it as the starting point to answer your questions (if you send a text message, she'll use that directly 💬)
🧠 She can solve your questions by (deep-)searching the web and/or by retrieving relevant information from a vector database containing TypeScript documentation. If the answer is simple, she can also reply directly (no tools needed!)
🛜 To make her life easier, TySVA has all the tools she needs available through the Model Context Protocol (MCP)
🔊 Once she's done, she returns her answer to you, along with a voice summary of what she did and what solution she found

But how does she do that? What are her components? 🤨

📖 Qdrant + Hugging Face give her the documentation knowledge, providing the vector database and the embeddings
🌐 Linkup provides her with up-to-date, grounded answers, connecting her to the web
🦙 LlamaIndex makes up her brain, with the whole agentic architecture
🎤 ElevenLabs gives her ears and mouth, transcribing voice inputs and producing voice outputs
📜 Groq provides her with speech, being the LLM provider behind TySVA
🎨 Gradio + FastAPI make up her face and fibers, providing a seamless backend-to-frontend integration

If you're now curious to try her, you can easily do so by spinning her up locally (and with Docker! 🐋) from the GitHub repo ➡️ https://github.com/AstraBert/TySVA

And feel free to leave any feedback! ✨
posted an update about 2 months ago
Drowning in handouts, documents and presentations from your professors and not knowing where to start? 🌊😵‍💫
Well, I might have a tool for you: **pdf2notes** (https://github.com/AstraBert/pdf2notes) is an **AI-powered, open-source** solution that lets you turn your unstructured, chaotic PDFs into nice, well-ordered notes in a matter of seconds! 📝

๐—›๐—ผ๐˜„ ๐—ฑ๐—ผ๐—ฒ๐˜€ ๐—ถ๐˜ ๐˜„๐—ผ๐—ฟ๐—ธ?
๐Ÿ“„ You first upload a document
โš™๏ธ LlamaParse by LlamaIndex extracts the text from the document, using DeepMind's Gemini 2 Flash to perform multi-modal parsing
๐Ÿง  Llama-3.3-70B by Groq turns the extracted text into notes!

Are the notes not perfect, or do you want more in-depth insights? No problem:
💬 Send a direct message to the chatbot
⚙️ The chatbot will retrieve the chat history from a Postgres database
🧠 Llama-3.3-70B will produce the answer you need

All of this is nicely wrapped within a seamless backend-to-frontend framework powered by Gradio and FastAPI 🎨

And you can even spin it up easily and locally, using Docker 🐋

So, what are you waiting for? Go turn your hundreds of pages of chaotic learning material into neat and elegant notes ➡️ https://github.com/AstraBert/pdf2notes

And if you would like an online demo, feel free to drop a comment and we'll see what we can build 🚀
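The follow-up step, answering with chat history as context, usually means rebuilding a prompt from stored turns before calling the LLM. A minimal sketch: pdf2notes keeps this history in Postgres, while a plain list of `(role, text)` tuples stands in for it here, and the prompt format is an invented example:

```python
def build_prompt(history: list[tuple[str, str]], question: str) -> str:
    """Prepend stored conversation turns to the new question, one line per turn."""
    lines = [f"{role}: {text}" for role, text in history]
    lines.append(f"user: {question}")
    return "\n".join(lines)

history = [("user", "Summarize chapter 1"), ("assistant", "Chapter 1 covers cells.")]
prompt = build_prompt(history, "Go deeper on mitochondria")
print(prompt)
```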
posted an update 2 months ago
๐‘๐€๐†๐œ๐จ๐จ๐ง๐Ÿฆ - ๐€๐ ๐ž๐ง๐ญ๐ข๐œ ๐‘๐€๐† ๐ญ๐จ ๐ก๐ž๐ฅ๐ฉ ๐ฒ๐จ๐ฎ ๐›๐ฎ๐ข๐ฅ๐ ๐ฒ๐จ๐ฎ๐ซ ๐ฌ๐ญ๐š๐ซ๐ญ๐ฎ๐ฉ

GitHub 👉 https://github.com/AstraBert/ragcoon

Are you building a startup and stuck in the process, trying to navigate hundreds of resources, suggestions and LinkedIn posts? 😶‍🌫️
Well, fear no more, because **RAGcoon** 🦝 is here to do some of the job for you:

📃 It's built on free resources written by successful founders
⚙️ It performs complex retrieval operations, exploiting "vanilla" hybrid search, query expansion with a **hypothetical document** approach, and **multi-step query decomposition**
📊 It evaluates the **reliability** of the retrieved context, and the **relevancy** and **faithfulness** of its own responses, in an auto-correction effort

RAGcoon๐Ÿฆ is ๐—ผ๐—ฝ๐—ฒ๐—ป-๐˜€๐—ผ๐˜‚๐—ฟ๐—ฐ๐—ฒ and relies on easy-to-use components:

🔹 LlamaIndex is at the core of the agent architecture, provisions the integrations with language models and vector database services, and performs evaluations
🔹 Qdrant is your go-to, versatile and scalable companion for vector database services
🔹 Groq provides lightning-fast LLM inference to support the agent, giving it the full power of **QwQ-32B** by Qwen
🔹 Hugging Face provides the embedding models used for dense and sparse retrieval
🔹 FastAPI wraps the whole backend into an API interface
🔹 **Mesop** by Google is used to serve the application frontend

RAGcoon๐Ÿฆ can be spinned up locally - it's ๐——๐—ผ๐—ฐ๐—ธ๐—ฒ๐—ฟ-๐—ฟ๐—ฒ๐—ฎ๐—ฑ๐˜†๐Ÿ‹, and you can find the whole code to reproduce it on GitHub ๐Ÿ‘‰ https://github.com/AstraBert/ragcoon

But there might be room for an online version of RAGcoon 🦝: let me know if you would use it, and we can connect and build it together! 🚀
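Hybrid search, as mentioned above, merges a dense (embedding) ranking with a sparse (keyword) ranking of the same corpus. Reciprocal rank fusion is one common way to combine the two lists; whether RAGcoon uses RRF specifically is an assumption, so the sketch below only illustrates the general technique:

```python
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal rank fusion: each list contributes 1/(k + rank + 1) per doc,
    so documents ranked well by BOTH retrievers float to the top."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

dense  = ["doc_a", "doc_b", "doc_c"]   # embedding-similarity order
sparse = ["doc_b", "doc_d", "doc_a"]   # keyword-match order
print(rrf([dense, sparse]))  # → ['doc_b', 'doc_a', 'doc_d', 'doc_c']
```

The constant `k` damps the influence of top ranks; 60 is the value from the original RRF paper and works well without tuning.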