sccastillo committed
Commit 88ca848 · verified · 1 Parent(s): f9c45d2

Update README.md

Files changed (1): README.md +1 -2

README.md CHANGED
@@ -15,8 +15,7 @@ base_model: unsloth/llama-3-8b-bnb-4bit
 
 While developing complex RAG applications, I found a common need for router functionality to map user queries to different system workflows (and APIs). The router acts as a dispatcher that can enhance responsiveness and accuracy by choosing the best workflow or API based on the query context. This implies that we need to produce structured output from unstructured input text.
 
-To this end, and to avoid external dependencies for this part of the system, I will undertake a simple exercise to fine-tune the new Llama 3 model to process text input and generate JSON-like output. My hope is that we can seamlessly integrate various models to reinforce complex applications in production settings, building a robust critical infraestructures for the semantical modules.
-
+To this end, I will undertake a simple exercise to fine-tune the new Llama 3 model to process text input and generate JSON-like output. My hope is that we can avoid some external dependencies for this part of the system by seamlessly integrating various models to reinforce complex applications in production settings. It is my belief that building a robust critical infrastructure for the semantic modules requires choosing the right LLM for a given task.
 For training, we will use structured data from [azizshaw](https://huggingface.co/azizshaw/text_to_json). The dataset has 485 rows and contains 'input', 'output' and 'instruction' columns.
 
 For a quick evaluation, let's use another dataset for text-to-JSON, the **Diverse Restricted JSON Data Extraction**, curated by the paraloq analytics team ([here](https://huggingface.co/datasets/paraloq/json_data_extraction)).
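
The training setup described in the README could be sketched as follows. The record below is illustrative (not taken from the dataset), but it uses the same 'instruction', 'input', and 'output' columns the README describes; the prompt template is an assumption in the common Alpaca style rather than the author's exact format, and the real dataset would be fetched with `datasets.load_dataset` (left commented out to keep the sketch self-contained):

```python
import json

# A real run would load the actual data, e.g.:
#   from datasets import load_dataset
#   ds = load_dataset("azizshaw/text_to_json", split="train")  # 485 rows
# Illustrative record with the same columns ('instruction', 'input', 'output'):
record = {
    "instruction": "Extract the person's details as JSON.",
    "input": "John Doe is 34 and lives in Madrid.",
    "output": '{"name": "John Doe", "age": 34, "city": "Madrid"}',
}

# Assumed Alpaca-style template, commonly used when fine-tuning Llama models
PROMPT = (
    "Below is an instruction that describes a task, paired with an input. "
    "Write a response that completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

def format_example(rec: dict) -> str:
    """Render one dataset row into a single training string."""
    return PROMPT.format(**rec)

text = format_example(record)
print(text)

# The target side of each row should itself be valid JSON:
parsed = json.loads(record["output"])
```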
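
For the quick evaluation against the paraloq extraction data, one simple metric is whether the generated text contains valid JSON and recovers the reference keys. A minimal checker along those lines (the function names and the toy generation/gold pair are hypothetical, not from the README) might look like:

```python
import json

def extract_json(text: str):
    """Return the first {...} span in the model output, parsed, or None."""
    start, end = text.find("{"), text.rfind("}")
    if start == -1 or end <= start:
        return None
    try:
        return json.loads(text[start : end + 1])
    except json.JSONDecodeError:
        return None

def score(prediction: str, gold: str) -> dict:
    """JSON validity plus the fraction of gold keys whose values match exactly."""
    pred, ref = extract_json(prediction), json.loads(gold)
    if pred is None:
        return {"valid": False, "key_match": 0.0}
    hits = sum(1 for k, v in ref.items() if pred.get(k) == v)
    return {"valid": True, "key_match": hits / len(ref)}

# Toy generation vs. gold reference, in the spirit of restricted JSON extraction
gen = 'Sure, here is the JSON: {"title": "Invoice", "total": 42}'
gold = '{"title": "Invoice", "total": 42}'
print(score(gen, gold))
```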