Upload merged_deepseek_R1_all_configs_100000_samples.json.gz (1.35 GB)

#6
Etched org

This compressed dataset contains the following:

  • 100k samples for each unique number of users on each system configuration (based on expert parallelism (EP) and pipeline parallelism (PP)): ~5.1M configurations total
  • EP1 is run with PP2, since DeepSeek at 671B parameters in FP8 will not fit on a single system
  • EP 2, 4, 8, and 16 are all run with PP1, which is optimal: for an equivalent number of systems, it is strictly better than running with PP > 1
  • num_users and the maximum kv-cache scale with the total number of systems (EP * PP) to account for the greater total capacity of the configuration
  • Batch tokens per user and kv-cache per user are sampled from log-normal distributions
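
The sampling scheme described above can be sketched roughly as follows. This is a minimal illustration only: the log-normal parameters, the per-system user baseline, and the function name are assumptions for demonstration, not the dataset's actual values.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_config(ep, pp, mu=5.0, sigma=1.0, n=3):
    """Sketch of per-configuration sampling.

    mu, sigma, and the 128-users-per-system baseline are
    illustrative assumptions, not the dataset's parameters.
    """
    total_systems = ep * pp  # capacity scales with EP * PP
    # Per-user quantities drawn from log-normal distributions
    batch_tokens = rng.lognormal(mean=mu, sigma=sigma, size=n)
    kv_cache = rng.lognormal(mean=mu, sigma=sigma, size=n)
    # num_users ceiling scales with the total number of systems
    max_users = 128 * total_systems
    return batch_tokens, kv_cache, max_users
```

Note that under this sketch, EP2/PP1 and EP1/PP2 share the same user ceiling, since both span two systems.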
mfein changed pull request status to merged
