sccastillo committed on
Commit bb49ef2 · verified · 1 Parent(s): 739ee15

Update README.md

Files changed (1): README.md (+4 -3)

README.md CHANGED
@@ -15,10 +15,11 @@ base_model: unsloth/llama-3-8b-bnb-4bit

While developing complex RAG applications, I found a common need for router functionality to map user queries to different system workflows (and APIs). The router acts as a dispatcher that can enhance responsiveness and accuracy by choosing the best workflow or API based on the query context. This implies that we need to produce structured output from unstructured input text.
 
To this end, I undertook a simple exercise to fine-tune the new Llama 3 model to process text input and generate JSON-like output (here is the [colab](https://colab.research.google.com/drive/1Vj0LOjU_5N9VWLpY-AG91dgdGD88Vjwm?usp=sharing)). My hope was that we could avoid some external dependencies for this part of the system by seamlessly integrating various models to support complex applications in production settings. I believed that building robust critical infrastructure for the semantic modules required choosing the right LLM for a given task.
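
The colab follows the usual Unsloth QLoRA recipe, so a minimal sketch of that kind of setup looks like the block below. This is an illustration, not the notebook's exact code: the LoRA rank, step count, and learning rate are placeholder values, and `dataset` is assumed to hold the formatted training examples built in the next snippet.

```python
# Sketch of an Unsloth + TRL fine-tune; hyperparameters are illustrative.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the 4-bit base model named in the front matter above.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,  # formatted examples; see the next snippet
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,  # placeholder; tune against the 485-row dataset
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```
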
 
For training, we used structured data from [azizshaw](https://huggingface.co/azizshaw/text_to_json). The dataset contains 485 rows with 'input', 'output', and 'instruction' columns.
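
Each row has to be flattened into a single training string for the trainer above. The sketch below assumes an Alpaca-style template and the default train split; the dataset id and column names come from the dataset card, but the exact template the colab uses may differ.

```python
from datasets import load_dataset

# 485 text -> JSON examples with 'instruction', 'input', and 'output' columns.
dataset = load_dataset("azizshaw/text_to_json", split="train")

# Assumed Alpaca-style template; the colab may format prompts differently.
PROMPT = """### Instruction:
{instruction}

### Input:
{input}

### Response:
{output}"""

def to_text(row):
    # Append EOS so the model learns where the JSON response ends
    # (`tokenizer` comes from the fine-tuning sketch above).
    return {"text": PROMPT.format(**row) + tokenizer.eos_token}

dataset = dataset.map(to_text)
```
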
For a quick evaluation, we used another dataset for text-to-JSON, the **Diverse Restricted JSON Data Extraction**, curated by the paraloq analytics team ([here](https://huggingface.co/datasets/paraloq/json_data_extraction)).
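
A cheap first check on that held-out data is simply whether the generations parse as JSON at all. Here is a small sketch of that idea; the split name is an assumption, and `generate_json` is a hypothetical stand-in for whatever inference call is used (see below).

```python
import json
from datasets import load_dataset

# Split name assumed; check the dataset card for the actual configuration.
eval_ds = load_dataset("paraloq/json_data_extraction", split="train")

def json_parse_rate(generations):
    """Fraction of model outputs that are syntactically valid JSON."""
    def parses(s):
        try:
            json.loads(s)
            return True
        except json.JSONDecodeError:
            return False
    return sum(parses(g) for g in generations) / len(generations)

# Hypothetical usage once an inference helper exists:
# generations = [generate_json(row) for row in eval_ds]
# print(f"valid JSON: {json_parse_rate(generations):.1%}")
```
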
 
Run the model for inference:
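
The inference snippet itself is not shown in this diff, but with Unsloth it would look roughly like the sketch below, reusing `model`, `tokenizer`, and `PROMPT` from the snippets above; the example input is made up.

```python
from unsloth import FastLanguageModel

# Switch the model into Unsloth's fast inference mode.
FastLanguageModel.for_inference(model)

# Same template as training, with the response left empty for generation.
prompt = PROMPT.format(
    instruction="Convert the text into a JSON object.",
    input="John Doe, 34, lives in Madrid and works as a data engineer.",
    output="",
)

inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=256, use_cache=True)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```
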