File size: 13,234 Bytes
2dafeb1
 
 
 
 
 
688f116
 
2dafeb1
15ae508
9a87acd
 
 
 
 
 
 
 
 
 
 
 
8dcd98f
3edbc93
 
22f82e7
8dcd98f
 
3edbc93
2dafeb1
3edbc93
fafd10e
8dcd98f
27f3da5
 
 
 
 
 
 
 
fafd10e
8dcd98f
2dafeb1
0de3557
27f3da5
 
 
15ae508
61fa714
27f3da5
 
21f87d6
 
22f82e7
21f87d6
8dcd98f
 
fafd10e
 
27f3da5
 
 
3edbc93
27f3da5
 
9a87acd
27f3da5
 
4965e60
203f3f3
 
688f116
203f3f3
 
 
 
 
 
 
 
27f3da5
 
8dcd98f
813ce52
9a87acd
21f87d6
203f3f3
 
58db0a0
688f116
3edbc93
27f3da5
 
22f82e7
 
3edbc93
10e69e7
 
 
 
 
 
 
0de3557
 
 
10e69e7
 
 
 
 
89d69bf
10e69e7
 
22f82e7
 
 
10e69e7
 
22f82e7
10e69e7
 
89d69bf
10e69e7
 
 
 
22f82e7
 
27f3da5
22f82e7
 
89d69bf
 
 
 
 
 
10e69e7
89d69bf
2dafeb1
89d69bf
10e69e7
 
 
27f3da5
10e69e7
 
89d69bf
2dafeb1
 
10e69e7
fafd10e
 
 
 
27f3da5
fafd10e
 
 
 
 
 
 
 
8f9985e
22f82e7
a897a54
22f82e7
 
 
 
 
672339b
 
 
 
27f3da5
9a87acd
22f82e7
9a87acd
22f82e7
 
 
672339b
688f116
27f3da5
4965e60
9a87acd
 
 
 
 
4965e60
61fa714
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
from constants import (
    ABOUT_TAB_NAME,
    ASSAY_LIST,
    SUBMIT_TAB_NAME,
    TERMS_URL,
    FAQ_TAB_NAME,
    SLACK_URL,
    TUTORIAL_URL,
)

WEBSITE_HEADER = f"""
    ## Welcome to the Ginkgo Antibody Developability Benchmark!

    Participants can submit their model to the leaderboards by simply uploading a CSV file (see the "βœ‰οΈ Submit" tab).

    You can **predict any or all of the 5 properties**, and you can filter the main leaderboard by property.
    See more details in the "{ABOUT_TAB_NAME}" tab.

    πŸ—“οΈ There will be a test set scoring on **October 13th** (which will score all the latest test set submissions at that point). 
    Use this to refine your models before the final submission deadline on **1 November 2025**.
"""

ABOUT_INTRO = f"""
## About this challenge

### Register [here](https://datapoints.ginkgo.bio/ai-competitions/2025-abdev-competition) on the Ginkgo website before submitting

#### What is antibody developability and why is it important?

Antibodies have to be manufacturable, stable in high concentrations, and have low off-target effects.
Properties such as these can often hinder the progression of an antibody to the clinic, and are collectively referred to as 'developability'.
Here we invite the community to submit and develop better predictors, which will be tested out on a heldout private set to assess model generalization.

#### 🧬 Developability properties in this competition

1. πŸ’§ Hydrophobicity
2. 🎯 Polyreactivity
3. 🧲 Self-association
4. 🌑️ Thermostability
5. πŸ§ͺ Titer

#### πŸ† Prizes

For each of the 5 properties in the competition, there is a prize for the model with the highest performance for that property on the private test set.
There is also an 'open-source' prize for the best model trained on the GDPa1 dataset of monoclonal antibodies (reporting cross-validation results) and assessed on the private test set where authors provide all training code and data.
For each of these 6 prizes, participants have the choice between
- **$10 000 in data generation credits** with [Ginkgo Datapoints](https://datapoints.ginkgo.bio/), or
- A **$2000 cash prize**.

See the "{FAQ_TAB_NAME}" tab above (you are currently on the "{ABOUT_TAB_NAME}" tab) or the [competition terms]({TERMS_URL}) for more details.

---
"""

ABOUT_TEXT = f"""

#### How to participate?

1. **Create a Hugging Face account** [here](https://huggingface.co/join) if you don't have one yet (this is used to track unique submissions and to access the GDPa1 dataset).
2. **Register your team** on the [Competition Registration](https://datapoints.ginkgo.bio/ai-competitions/2025-abdev-competition) page.
3. **Build a model** using cross-validation on the [GDPa1](https://huggingface.co/datasets/ginkgo-datapoints/GDPa1) dataset, using the `hierarchical_cluster_IgG_isotype_stratified_fold` column to split the dataset into folds, and write out all cross-validation predictions to a CSV file.
4. **Use your model to make predictions** on the private test set (download the 80 private test set sequences from the {SUBMIT_TAB_NAME} tab).
5. **Submit your training and test set predictions** on the {SUBMIT_TAB_NAME} tab by uploading both your cross-validation and private test set CSV files.

Check out our introductory tutorial on training an antibody developability prediction model with cross-validation [here]({TUTORIAL_URL}).

⏰ Submissions close on **1 November 2025**, but there will be an early test set scoring on **October 13th** (which will score all the latest test set submissions at that point, and then you can refine your model and resubmit).

---

#### Acknowledgements

We gratefully acknowledge [Tamarind Bio](https://www.tamarind.bio/)'s help in running the following models which are on the leaderboard:
- TAP (Therapeutic Antibody Profiler)
- SaProt
- DeepViscosity
- Aggrescan3D
- AntiFold

We're working on getting more public models added, so that participants have more precomputed features to use for modeling.

---

#### How to contribute?

We'd like to add more existing developability models to the leaderboard. Some examples of models we'd like to add:
- Absolute folding stability models (for Thermostability)
- PROPERMAB
- AbMelt (requires GROMACS for MD simulations)

If you would like to form a team or discuss ideas, join the [Slack community]({SLACK_URL}) co-hosted by Bits in Bio.
"""
# TODO(Lood): Add "πŸ“Š The first test set results will be released on October 13th, ahead of the final submission deadline on November 1st."


# Note(Lood): Significance: Add another note of "many models are trained on different datasets, and differing train/test splits, so this is a consistent way of comparing for a heldout set"
FAQS = {
    "Is there a fee to enter?": "No. Participation is free of charge.",
    "Who can participate?": "Anyone. We encourage academic labs, individuals, and especially industry teams who use developability models in production.",
    "Where can I find more information about the methods used to generate the data?": (
        "Our [PROPHET-Ab preprint](https://www.biorxiv.org/content/10.1101/2025.05.01.651684v1) described in detail the methods used to generate the training dataset. "
        "Note: These assays may differ from previously published methods, and these correlations between literature data and experimental data are also described in the preprint. "
        "These same methods are used to generate the heldout test data."
    ),
    "What do the datasets contain?": (
        "Both the GDPa1 and heldout test set contain the VH and VL sequences, as well as the full heavy chain sequence. The GDPa1 dataset is a mix of IgG1, IgG2, and IgG4 antibodies while the heldout test set only contains IgG1 antibodies. We also include the light chain subtype (lambda or kappa)."
    ),
    "How were the heldout sequences designed?": (
        "We sampled 80 paired antibody sequences from [OAS](https://opig.stats.ox.ac.uk/webapps/oas/). We tried to represent the range of germline variants, sequence identities to germline, and CDR3 lengths. "
        "The sequences in the dataset are quite diverse as measured by pairwise sequence identity."
    ),
    "Do I need to design new proteins?": (
        "No. This is just a predictive competition, which will be judged according to the correlation between predictions and experimental values. There may be a generative round in the future."
    ),
    "Can I participate anonymously?": (
        "Yes! Please still create an anonymous Hugging Face account so that we can uniquely associate submissions and add an email on the [registration page](https://datapoints.ginkgo.bio/ai-competitions/2025-abdev-competition) so that we can contact participants throughout the competition."
        "Note that top participants will need to identify themselves at the end of the tournament to receive prizes / recognition. "
        "If there are any concerns about anonymity, please contact us at [email protected] - you can even send us a CSV of submissions from a burner email if necessary! πŸ₯·"
    ),
    "How is intellectual property handled?": (
        f"Participants retain IP rights to the methods they use and develop during the tournament. Read more details in our terms [here]({TERMS_URL})."
    ),
    "Do I need to submit my code / methods in order to participate?": (
        "No, there are no requirements to submit code / methods and submitted predictions remain private. "
        "We also have an optional field for including a short model description. "
        "Top performing participants will be requested to identify themselves at the end of the tournament. "
        "There will be one prize for the best open-source model, which will require code / methods to be available."
    ),
    "How exactly can I evaluate my model?": (
        "You can easily calculate the Spearman correlation coefficient on the GDPa1 dataset yourself before uploading to the leaderboard. "
        "Simply use the `spearmanr(predictions, targets, nan_policy='omit')` function from `scipy.stats` to calculate the Spearman correlation coefficient for each of the 5 folds, and then take the average."
        "For the heldout private set, we will calculate these Spearman correlations privately at the end of the competition (and possibly at other points throughout the competition) - but there will not be 'rolling results' on the private test set to prevent test set leakage."
    ),
    "How often does the leaderboard update?": (
        "The leaderboard should reflect new submissions within a minute of submitting. Note that the leaderboard will not show the results on the private test set, these will be calculated once at the end of the tournament (and possibly at another occasion before that)."
    ),
    "How many submissions can I make?": (
        "You can currently make unlimited submissions, but we may choose to limit the number of possible submissions per user. For the private test set evaluation the latest submission will be used."
    ),
    "How are winners determined?": (
        'There will be 6 prizes (one for each of the assay properties plus an "open-source" prize). '
        "For the property-specific prizes, winners will be determined by the submission with the highest Spearman rank correlation coefficient on the private holdout set. "
        'For the "open-source" prize, this will be determined by the highest average Spearman across all properties. '
        "We reserve the right to award the open-source prize to a predictor with competitive results for a subset of properties (e.g. a top polyreactivity model)."
    ),
    "How does the open-source prize work?": (
        "Participants who open-source their training code and methods will be eligible for the open-source prize (as well as the other prizes)."
    ),
    "What do I need to submit?": (
        'There is a tab on the Hugging Face competition page to upload predictions for datasets - for each dataset participants need to submit a CSV containing a column for each property they would like to predict (e.g. called "HIC"), '
        "and a row with the sequence matching the sequence in the input file. These predictions are then evaluated in the backend using the Spearman rank correlation between predictions and experimental values, and these metrics are then added to the leaderboard. "
        "Predictions remain private and are not seen by other contestants."
    ),
    "Can I submit predictions for only one property?": (
        "Yes. You do not need to predict all 5 properties to participate. Each property has its own leaderboard and prize, so you may submit models for a subset of the assays if you wish."
    ),
    "Are participants required to use the provided cross-validation splits?": (
        "Yes, to ensure fair comparison between different trained models. The results will be calculated by taking the average Spearman correlation coefficient across all folds."
    ),
    "Are there any country restrictions for prize eligibility?": (
        "Yes. Due to applicable laws, prizes cannot be awarded to participants from countries under U.S. sanctions. See the competition terms for details."
    ),
    "How are private test set submissions handled?": (
        "We will use the private test set submission at the close of the competition to determine the winners. "
        "If there are any intermediate releases of private test set results, these will not affect the final ranking."
    ),
}

SUBMIT_INSTRUCTIONS = f"""
# Antibody Developability Submission

You do **not** need to predict all 5 properties β€” each property has its own leaderboard and prize.

## Instructions
1. **Upload both CSV files**:
   - **GDPa1 Cross-Validation predictions** (using cross-validation folds)
   - **Private Test Set predictions** (final test submission)
2. Each CSV should contain `antibody_name` + one column per property you are predicting (e.g. `"antibody_name,Titer,PR_CHO"` if your model predicts Titer and Polyreactivity).
   - List of valid property names: `{', '.join(ASSAY_LIST)}`.
3. Submit as many times as you like, and the latest submission will be used for the leaderboard (and test set scoring at the end of the competition).

The GDPa1 results should appear on the leaderboard within a minute, and can also be calculated manually using average Spearman rank correlation across the 5 folds. 

## Cross-validation

For the GDPa1 cross-validation predictions, use the `"hierarchical_cluster_IgG_isotype_stratified_fold"` column to split the dataset into folds and make predictions for each of the folds.
Submit a CSV file in the same format but also containing the `"hierarchical_cluster_IgG_isotype_stratified_fold"` column.
Check out our tutorial on training an antibody developability prediction model with cross-validation [here]({TUTORIAL_URL}).

## Test set

The **private test set results will not appear on the leaderboards at first**, and will be used to determine the winners at the close of the competition.
πŸ—“οΈ There will be a test set scoring on **October 13th** (which will score all the latest test set submissions at that point).

Submissions close on **1 November 2025**.
"""