Norquinal committed (verified)
Commit c980a7c · 1 Parent(s): 13d4397

Update README.md

Files changed (1): README.md (+14 −6)

README.md CHANGED
@@ -23,13 +23,21 @@
  Some effort was made to remove OOC, links, and other miscellaneous fluff, but more work still needs to be done. This isn't a "completed" dataset so much as a test to see if the data gathered is conducive to training LLMs for roleplay purposes. If determined to be useful, I will continue to scrape more data.
 
  In here are several files:
- * `discord_rp_with_token_counts.json` - The original dataset in all its unprocessed glory. ~90k items. Total Average Token Length for all items: ~143 tokens.
- * `125_tokens_10_messages_discord_rp.json` (Strictest) - Original dataset filtered for an average token length of 125 and a minimum conversation length of 10 messages. Mostly unprocessed. Average Length: 205 tokens.
- * `80_tokens_6_messages_discord_rp.json` (Stricter) - Original dataset filtered for an average token length of 80 tokens and a minimum conversation length of 6 messages. Mostly unprocessed. Average Length: 181 tokens. The latter contains the former, so use one or the other, but not both.
- * `80_tokens_3_messages_discord_rp.json` (Light) - Original dataset filtered for an average token length of 80 tokens and a minimum conversation length of 3 messages. Mostly unprocessed. Average Length: 202 tokens. The latter contains the former, so use one or the other, but not both.
+ * `discord_logs_unsquashed.json` - The original dataset without squashing consecutive messages from the same author. All subsequent files are squashed.
+ * `discord_logs.json` - The original dataset and default option.
+ * `discord_logs_two_users.json` - The original dataset limited to conversations with only two users. **I recommend using this file.**
+ * `discord_logs_split_threads.json` - The original dataset with threads split by timestamp like channels.
+ * `discord_logs_anonymized.json` - The original dataset with usernames replaced with randomized substitutes.
+ * `125_tokens_6_messages.json` (Strictest) - Original dataset filtered for an average and median token length of 125 tokens and a minimum conversation length of 6 messages.
+ * `80_tokens_6_messages.json` (Stricter) - Original dataset filtered for an average and median token length of 80 tokens and a minimum conversation length of 6 messages. Each less strict file is a superset of the stricter ones, so use one or the other, but not both.
+ * `80_tokens_3_messages.json` (Light) - Original dataset filtered for an average and median token length of 80 tokens and a minimum conversation length of 3 messages. Each less strict file is a superset of the stricter ones, so use one or the other, but not both.
  * `opencai_rp.json` - Original dataset filtered for an average token length of 125 tokens and a minimum conversation length of 10 messages, then processed. Contains descriptions of characters, summary, scene, and genre tags provided by `gpt-3.5-turbo-16k`.
  * `opencai_rp_metharme.json` - Original dataset filtered for an average token length of 125 tokens and a minimum conversation length of 10 messages, then processed, filtered to 1229 samples, and converted to metharme format.
 
  Explanation of Properties:
- * `timestamp` - Date of the interaction in YYYY-MM-DD format
- * `conversations`: The conversation between the users in the chat. This is represented as a list of dictionaries, each dictionary representing a single utterance and containing three key-value pairs: `message`, referring to the utterance itself, `author` referring to their Discord username, and `is_bot`, which designates whether the message was sent by a human or a bot. `is_bot` was determined by checking if author still had a discriminator and, therefore, isn't 100% accurate.
+ * `timestamp`: Date of the interaction in YYYY-MM-DD format.
+ * `type`: Whether the interaction originated from a channel (`GuildTextChat`) or thread (`GuildPublicThread`). Threads were parsed differently than channels and use a static timestamp of `1776-07-04` to differentiate them.
+ * `token_length`: The total token length of all messages in the conversation, calculated using `tiktoken`.
+ * `average_token_length`: The average token length of all messages in the conversation.
+ * `median_token_length`: The median token length of all messages in the conversation.
+ * `conversations`: The conversation between the users in the chat. This is represented as a list of dictionaries, each dictionary representing a single utterance and containing two key-value pairs: `message`, referring to the utterance itself, and `author`, referring to their Discord username.
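The "squashing" that distinguishes `discord_logs_unsquashed.json` from the other files (merging consecutive messages from the same author into one utterance) can be sketched as follows. This is an illustration, not the dataset's actual build script; in particular, joining merged messages with a newline is an assumption.

```python
def squash_messages(conversation):
    """Merge consecutive messages from the same author into one utterance.

    `conversation` is a list of dicts with `author` and `message` keys,
    matching the `conversations` property described above. Joining with a
    newline is an assumption; the separator actually used is not documented.
    """
    squashed = []
    for msg in conversation:
        if squashed and squashed[-1]["author"] == msg["author"]:
            squashed[-1]["message"] += "\n" + msg["message"]
        else:
            # Copy the dict so the input list is left unmodified.
            squashed.append({"author": msg["author"], "message": msg["message"]})
    return squashed

convo = [
    {"author": "alice", "message": "Hi"},
    {"author": "alice", "message": "Anyone here?"},
    {"author": "bob", "message": "Hello"},
]
print(squash_messages(convo))
# → two entries: alice's consecutive messages merged, bob's unchanged
```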
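The `token_length`, `average_token_length`, and `median_token_length` properties were calculated with `tiktoken`. The sketch below computes the same statistics with a pluggable tokenizer (a whitespace split stands in so the example is self-contained) and applies a filter like the one described for `80_tokens_6_messages.json`. The exact threshold semantics (average *and* median must clear the cutoff) are an assumption based on the file descriptions.

```python
from statistics import mean, median

def token_stats(conversation, encode):
    """Compute total/average/median token length for one conversation.

    `encode` maps a message string to a list of tokens; the dataset used
    a tiktoken encoder, but any callable with that shape works here.
    """
    lengths = [len(encode(m["message"])) for m in conversation]
    return {
        "token_length": sum(lengths),
        "average_token_length": mean(lengths),
        "median_token_length": median(lengths),
    }

def keep(conversation, encode, min_tokens=80, min_messages=6):
    # Mirrors the "80 tokens / 6 messages" filter (assumed: both the
    # average and the median token length must meet the threshold).
    if len(conversation) < min_messages:
        return False
    stats = token_stats(conversation, encode)
    return (stats["average_token_length"] >= min_tokens
            and stats["median_token_length"] >= min_tokens)

# Whitespace tokenizer as a stand-in for tiktoken's encoder.
convo = [{"author": "a", "message": "word " * 100}] * 6
print(keep(convo, str.split))  # True: 100 tokens per message, 6 messages
```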
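For `discord_logs_anonymized.json`, usernames are replaced with randomized substitutes. A minimal sketch of that transformation, assuming each username maps consistently to the same substitute within a conversation; the `User<digits>` alias scheme is a hypothetical choice, not the dataset's documented format:

```python
import random

def anonymize(conversation, seed=None):
    """Replace each distinct author with a consistent randomized alias.

    The "User1234" alias scheme is an assumption for illustration; the
    dataset's actual substitute names are not documented in the README.
    """
    rng = random.Random(seed)
    aliases = {}
    out = []
    for msg in conversation:
        if msg["author"] not in aliases:
            alias = f"User{rng.randrange(1000, 10000)}"
            # Re-draw on the (unlikely) collision so aliases stay distinct.
            while alias in aliases.values():
                alias = f"User{rng.randrange(1000, 10000)}"
            aliases[msg["author"]] = alias
        out.append({"author": aliases[msg["author"]], "message": msg["message"]})
    return out

convo = [
    {"author": "alice", "message": "Hi"},
    {"author": "bob", "message": "Hey"},
    {"author": "alice", "message": "How are you?"},
]
anon = anonymize(convo, seed=0)
print(anon[0]["author"] == anon[2]["author"])  # True: same author, same alias
```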