LuuWee committed
Commit 3fcb522 · verified · 1 Parent(s): d414c0f

Update README.md

Files changed (1)
  1. README.md +21 -0
README.md CHANGED
@@ -20,3 +20,24 @@ language:
 This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

 [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+
+ Info:
+ I trained these models on Google Colab with a dataset I created from the official CPE Dictionary.
+ The dataset is formatted in the Alpaca format:
+
+ alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
+
+ ### Instruction:
+ {}
+
+ ### Input:
+ {}
+
+ ### Response:
+ {}"""
+
+ For the best results with this model, use this format when interacting with it:
+
+ prompt = alpaca_prompt.format(f"What is the CPE for {vendor} {productname}. Only return the CPE", "", "")
+
+ This is the exact wording I used in the dataset; Input and Response should be left blank.
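
A minimal sketch of how an entry from the CPE Dictionary might be mapped into this Alpaca template as a training record. The helper function and the example vendor/product/CPE values below are hypothetical; only the template and the instruction wording come from the README above.

```python
# Hypothetical sketch: turning one CPE Dictionary entry into an Alpaca-formatted
# training record. The vendor/product/CPE values are made-up examples; the
# template and instruction wording are taken from the README.
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

def to_training_example(vendor: str, productname: str, cpe_uri: str) -> str:
    # Instruction uses the exact wording from the README; Input stays empty,
    # and Response holds the CPE string the model should learn to return.
    # (In actual training an EOS token would typically be appended as well.)
    instruction = f"What is the CPE for {vendor} {productname}. Only return the CPE"
    return alpaca_prompt.format(instruction, "", cpe_uri)

print(to_training_example(
    "microsoft", "internet_explorer",
    "cpe:2.3:a:microsoft:internet_explorer:8.0:*:*:*:*:*:*:*",
))
```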
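
And a minimal inference sketch using the recommended prompt, assuming the standard Hugging Face transformers text-generation API; the repository id, example values, and generation settings are placeholders, not taken from this README.

```python
# Sketch of querying the model with the recommended prompt format.
# The model id and generation settings are assumptions, not from the README.
from transformers import AutoModelForCausalLM, AutoTokenizer

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

model_id = "LuuWee/your-model-name"  # placeholder: use the actual repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

vendor, productname = "microsoft", "internet_explorer"  # example query
prompt = alpaca_prompt.format(
    f"What is the CPE for {vendor} {productname}. Only return the CPE", "", ""
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)

# Decode only the newly generated tokens (the CPE), not the prompt itself.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```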