theprint committed on
Commit
4c43c4f
1 Parent(s): 338d93e

Fine tune x2


The original version was fine-tuned on the 18k-entry Python dataset (iamtarun/python_code_instructions_18k_alpaca). The most recent update (as of 6/5/2024) was fine-tuned further on an additional 23k-entry dataset (ajibawa-2023/Python-Code-23k-ShareGPT).
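The two datasets use different schemas: the 18k set is Alpaca-style (instruction/input/output records), while the 23k set is ShareGPT-style (a list of conversation turns). Before a second SFT pass, ShareGPT records are typically normalized to one flat layout. A minimal sketch of that conversion, assuming the conventional ShareGPT field names ("conversations", "from", "value") rather than anything stated in this commit:

```python
def sharegpt_to_alpaca(record):
    """Map a ShareGPT-style conversation to an Alpaca-style example.

    Assumes the conventional ShareGPT schema: a "conversations" list of
    {"from": "human" | "gpt", "value": ...} turns. Keeps only the first
    human/gpt exchange, which is enough for single-turn SFT.
    """
    turns = record["conversations"]
    human = next(t["value"] for t in turns if t["from"] == "human")
    gpt = next(t["value"] for t in turns if t["from"] == "gpt")
    return {"instruction": human, "input": "", "output": gpt}


example = {
    "conversations": [
        {"from": "human", "value": "Write a function that reverses a string."},
        {"from": "gpt", "value": "def reverse(s):\n    return s[::-1]"},
    ]
}
flat = sharegpt_to_alpaca(example)
```

With both datasets in the same layout, the second fine-tuning round can reuse the same prompt template as the first.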

Files changed (1): README.md (+6 −1)
README.md CHANGED
@@ -10,6 +10,11 @@ tags:
 - trl
 - sft
 base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
+datasets:
+- iamtarun/python_code_instructions_18k_alpaca
+- ajibawa-2023/Python-Code-23k-ShareGPT
+library_name: adapter-transformers
+pipeline_tag: question-answering
 ---

 # Uploaded model
@@ -20,4 +25,4 @@ base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit

 This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

-[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)