Datasets:
Why is the processed English data relatively small in size?
Why does the processed English MedDialog contain only 603 (482 + 60 + 61) dialogues? In the paper (https://aclanthology.org/2020.emnlp-main.743.pdf), Table 2 reports 257,332 dialogues in the English dataset. What kind of processing was applied to produce the processed dataset?
Also, the link to the raw English dataset is invalid: https://drive.google.com/drive/folders/1g29ssimdZ6JzTST6Y8g6h-ogUNReBtJD?usp=sharing. Could the authors provide a working link to download the English dataset? Thanks!
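For reference, this is roughly how I counted the dialogues. The file names here are my guess at the processed-data layout (one JSON list of dialogues per split), so please correct me if I'm loading the wrong files:

```python
import json

# Hypothetical split file names; adjust to match the actual processed release.
splits = ["english-train.json", "english-dev.json", "english-test.json"]

total = 0
for path in splits:
    with open(path, encoding="utf-8") as f:
        dialogues = json.load(f)  # assumes each file is a JSON list of dialogues
    print(f"{path}: {len(dialogues)} dialogues")
    total += len(dialogues)

print(f"total: {total} dialogues")  # I get 482 + 60 + 61 = 603
```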
@Xueren I'm having the same issue! Did you manage to get access to all 0.26M dialogues?