FPHam committed on
Commit 1747e35 · 1 Parent(s): e8ec141

Update README.md

Files changed (1): README.md +5 -3
README.md CHANGED
@@ -26,10 +26,10 @@ Karen, Version 2, uses a completely different data set and base model than the p
 # There are two versions of Karen V2
 
 1. Strict (this one), in which Karen will try not to make too many changes to your original text, mostly fixing grammar and spelling, assuming that you know what you are doing.
- 2. Creative, in which Karen may suggest contextual improvements or rephrasing where necessary. It's Karen, after a glass of wine.
+ 2. Creative (to be uploaded), in which Karen may suggest contextual improvements or rephrasing where necessary. It's Karen, after a glass of wine.
 
 # Goals
- Karen's main goal is to fix grammatical and spelling errors. She's tuned to catch most typical ESL errors.
+ Karen's main goal is to fix US grammatical and spelling errors. She's tuned to catch most typical ESL errors.
 
 Verb Tense Errors:
 Incorrect use of verb tenses, such as using present tense when past tense is required and vice versa.
@@ -72,7 +72,7 @@ Karen's main goal is to fix grammatical and spelling errors. She's tuned to catc
 Add more grammar error cases. Better datasets. Use larger datasets.
 
 # Training
- It was reverse-trained on fictional passages where errors were intentionally inserted by another Llama model and a Python script.
+ It was reverse-trained on fiction/non-fiction US text where errors were intentionally inserted by another Llama model (Darth Karen) and a Python script.
 
 # Usage
 It should be used by submitting a paragraph or block of text at a time.
@@ -128,3 +128,5 @@ Output CREATIVE:
 
 (Grammarly Score: 83)
 
+ # Conclusion
+ The model works reasonably well, with occasional (and often debatable) grammar misses. The limitations seem to be related to the 7B parameter size: it appears not to be sufficient for a fine-grained understanding of the various nuances of the text. This correlates with my other findings: the Mistral model performs quite well when generating its own text, but its comprehension is less than perfect, again related to it being only 7B parameters.
+ The model works reasonably well, with occasional (and often debatable) grammar misses. The limitations seem to be related to the 7B parameters. It appears that the size is not sufficient to have a fine-grained understanding of various nuances of the text. This correlates with my other findings - the Mistral model performs quite well when generating its own text, but its comprehension is less than perfect, again related to be onbly 7B parameters.