Fine Tuning

by ybsid - opened

Hello,

I want to use TinyLlama as the base model and fine-tune it on my custom use-case dataset (I need to leverage a small model for my application).
The dataset is a set of raw text stored in a database.

How can I use this text data to fine-tune the model so it can answer user queries related to my dataset?

Any steps / code would be appreciated.

Thanks & Happy New Year.

Unsloth AI released a Google Colab script. Maybe you can use that.
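
For reference, here is a minimal sketch of what that Unsloth workflow looks like, assuming the TinyLlama-1.1B-Chat-v1.0 checkpoint and the standard Unsloth + TRL APIs; the hyperparameters are illustrative, not the exact Colab values, and in newer TRL versions some SFTTrainer arguments have moved into SFTConfig:

```python
# Minimal Unsloth LoRA fine-tuning sketch (illustrative settings, not the exact Colab script)
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

max_seq_length = 2048

# Load TinyLlama in 4-bit with Unsloth's patched loader
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# Attach LoRA adapters (rank and target modules are illustrative)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Any dataset with a single "text" column works here
dataset = load_dataset("mlabonne/guanaco-llama2-1k", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="tinyllama-lora",
    ),
)
trainer.train()
model.save_pretrained("tinyllama-lora")  # saves only the LoRA adapter
```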

The dataset must be in this format: https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k
I was using this easy approach for fine-tuning: https://www.kaggle.com/code/tommyadams/fine-tuning-tinyllama
(but in that script the author feeds chunks of raw lines without any template; see the sketch below for building the templated format yourself)
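
If your raw text can be turned into question/answer pairs, a rough sketch of converting database rows into that single-"text"-column guanaco-style format could look like this (the table and column names are placeholders for your own schema):

```python
# Hypothetical example: build a guanaco-style "text" column from DB rows.
# The table/column names (docs, question, answer) are placeholders.
import sqlite3
from datasets import Dataset

conn = sqlite3.connect("my_data.db")
rows = conn.execute("SELECT question, answer FROM docs").fetchall()

def to_guanaco(question, answer):
    # Same single-string Llama-2 chat style as mlabonne/guanaco-llama2-1k
    return f"<s>[INST] {question} [/INST] {answer} </s>"

dataset = Dataset.from_dict(
    {"text": [to_guanaco(q, a) for q, a in rows]}
)
dataset.save_to_disk("my_finetune_data")  # or push_to_hub(...)
```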

As an output you will have a LoRA adapter, which can be loaded separately or merged with the base model
(but the base model and the adapter need to be in the same precision).
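
Loading or merging the adapter afterwards is usually done with PEFT; a minimal sketch, where the adapter path is whatever directory your training run saved to:

```python
# Sketch: load the trained LoRA adapter on top of the base model with PEFT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    torch_dtype=torch.float16,   # keep the same precision as the adapter
)
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

model = PeftModel.from_pretrained(base, "tinyllama-lora")  # adapter dir from training
model = model.merge_and_unload()  # optional: bake the adapter into the base weights
model.save_pretrained("tinyllama-merged")
```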

Unsloth doesn't provide the same prompt template that is used in this model:

```
<|system|>
You are a friendly chatbot who always responds in the style of a pirate.
<|user|>
How many helicopters can a human eat in one sitting?
<|assistant|>
```

How do I fine-tune it to use exactly this same template, while still using the Unsloth example?
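
One way, assuming the TinyLlama-1.1B-Chat tokenizer ships this Zephyr-style chat template on the Hub, is to build the "text" column with tokenizer.apply_chat_template instead of the guanaco format and keep the rest of the Unsloth script unchanged; a sketch with made-up example data:

```python
# Sketch: format each training example with TinyLlama-Chat's own template
# so fine-tuning and inference use identical <|system|>/<|user|>/<|assistant|> tags.
from transformers import AutoTokenizer
from datasets import Dataset

tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# Hypothetical examples - replace with pairs built from your own data
pairs = [
    {"question": "How many helicopters can a human eat in one sitting?",
     "answer": "Arr, none at all, matey!"},
]

def to_chat_text(example):
    messages = [
        {"role": "system",
         "content": "You are a friendly chatbot who always responds in the style of a pirate."},
        {"role": "user", "content": example["question"]},
        {"role": "assistant", "content": example["answer"]},
    ]
    # tokenize=False returns the fully templated string for the "text" column
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = Dataset.from_list(pairs).map(to_chat_text)
print(dataset[0]["text"])  # shows the <|system|>/<|user|>/<|assistant|> formatting
```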
