Update lora_clm_with_additional_tokens.ipynb

#5
lora_clm_with_additional_tokens.ipynb CHANGED
@@ -10,7 +10,7 @@
  "In this example, we will learn how to train a LoRA model when adding new tokens to the tokenizer and model. \n",
  "This is a common usecase when doing the following:\n",
  "1. Instruction finetuning with new tokens beind added such as `<|user|>`, `<|assistant|>`, `<|system|>`, `</s>`, `<s>` to properly format the conversations\n",
- "2. Finetuning on a specific language wherein language spoecific tokens are added, e.g., korean tokens being added to vocabulary for finetuning LLM on Korean datasets.\n",
+ "2. Finetuning on a specific language wherein language specific tokens are added, e.g., korean tokens being added to vocabulary for finetuning LLM on Korean datasets.\n",
  "3. Instruction finetuning to return outputs in certain format to enable agent behaviour new tokens such as `<|FUNCTIONS|>`, `<|BROWSE|>`, `<|TEXT2IMAGE|>`, `<|ASR|>`, `<|TTS|>`, `<|GENERATECODE|>`, `<|RAG|>`.\n",
  "\n",
  "In such cases, you add the Embedding modules to the LORA `target_modules`. PEFT will take care of saving the embedding layers with the new added tokens along with the adapter weights that were trained on the specific initialization of the embeddings weights of the added tokens."
 
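For context, a minimal sketch of the workflow this cell describes: add the new special tokens, resize the embedding matrix, and list the embedding layers in the LoRA `target_modules` so PEFT saves them together with the adapter weights. The base checkpoint, token list, and hyperparameters below are illustrative assumptions, not values from the notebook.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Illustrative base model; the notebook may use a different checkpoint.
model_name = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Add the new chat-formatting tokens and grow the embedding matrix to match
# the enlarged vocabulary; the new rows are freshly initialized.
tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<|user|>", "<|assistant|>", "<|system|>"]}
)
model.resize_token_embeddings(len(tokenizer))

# Listing the embedding layers in target_modules makes PEFT adapt them and
# store them alongside the LoRA adapter weights on save, so the newly
# initialized rows for the added tokens are not lost.
config = LoraConfig(
    r=16,  # illustrative hyperparameters
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj", "embed_tokens", "lm_head"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
```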