FPHam committed fb963d6 · 1 parent: 38f6dcc

Update README.md

Files changed (1)
  1. README.md +3 -2
README.md CHANGED
@@ -29,7 +29,8 @@ Karen, Version 2, uses a completely different data set and base model than the p
 2. Creative (to be uploaded), in which Karen may suggest contextual improvements or rephrasing where necessary. It's Karen, after a glass of wine.
 
 # Goals
-Karen's main goal is to fix US grammatical and spelling errors without changing the style of the text. She's tuned to catch most typical ESL errors.
+
+Karen's primary goal is to rectify grammatical and spelling errors in US English without altering the style of the text. She is adept at identifying and correcting common ESL errors.
 
 Verb Tense Errors:
 Incorrect use of verb tenses, such as using present tense when past tense is required and vice versa.
@@ -128,5 +129,5 @@ Output CREATIVE:
 
 (Grammarly Score: 83)
 
-#Conclusion
+# Conclusion
 The model works reasonably well, with occasional (and often debatable) grammar misses. The limitations seem to be related to the 7B parameters: the model's size appears insufficient for a fine-grained understanding of the text's nuances. This correlates with my other findings - the Mistral model performs quite well when generating its own text, but its comprehension is less than perfect, again related to it being only 7B parameters.