# abdev-leaderboard / about.py
from constants import (
ABOUT_TAB_NAME,
ASSAY_LIST,
FAQ_TAB_NAME,
SLACK_URL,
SUBMIT_TAB_NAME,
TERMS_URL,
TUTORIAL_URL,
)
ABOUT_INTRO = f"""
## About this challenge
### Register [here](https://datapoints.ginkgo.bio/ai-competitions/2025-abdev-competition) on the Ginkgo website before submitting
#### What is antibody developability and why is it important?
Antibodies have to be manufacturable, stable at high concentrations, and have low off-target effects.
Shortcomings in properties such as these often hinder the progression of an antibody to the clinic; collectively, these properties are referred to as 'developability'.
Here we invite the community to develop and submit better predictors, which will be evaluated on a held-out private set to assess model generalization.
#### 🧬 Developability properties in this competition
1. πŸ’§ Hydrophobicity
2. 🎯 Polyreactivity
3. 🧲 Self-association
4. 🌑️ Thermostability
5. πŸ§ͺ Titer
#### πŸ† Prizes
For each of the 5 properties in the competition, there is a prize for the model with the highest performance for that property on the private test set.
There is also an 'open-source' prize for the best model trained on the GDPa1 dataset of monoclonal antibodies (with cross-validation results reported) and assessed on the private test set, where the authors provide all training code and data.
For each of these 6 prizes, participants have the choice between
- **$10,000 in data generation credits** with [Ginkgo Datapoints](https://datapoints.ginkgo.bio/), or
- a **$2,000 cash prize**.
See the "{FAQ_TAB_NAME}" tab above (you are currently on the "{ABOUT_TAB_NAME}" tab) or the [competition terms]({TERMS_URL}) for more details.
---
"""
ABOUT_TEXT = f"""
#### How to participate?
1. **Create a Hugging Face account** [here](https://huggingface.co/join) if you don't have one yet (this is used to track unique submissions and to access the GDPa1 dataset).
2. **Register your team** on the [Competition Registration](https://datapoints.ginkgo.bio/ai-competitions/2025-abdev-competition) page.
3. **Build a model** using cross-validation on the [GDPa1](https://huggingface.co/datasets/ginkgo-datapoints/GDPa1) dataset, using the `hierarchical_cluster_IgG_isotype_stratified_fold` column to split the dataset into folds, and write out all cross-validation predictions to a CSV file.
4. **Use your model to make predictions** on the private test set (download the 80 private test set sequences from the {SUBMIT_TAB_NAME} tab).
5. **Submit your training and test set predictions** on the {SUBMIT_TAB_NAME} tab by uploading both your cross-validation and private test set CSV files.
Check out our introductory tutorial on training an antibody developability prediction model with cross-validation [here]({TUTORIAL_URL}).
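As a sketch of steps 3 and 5 above, here is one way to generate out-of-fold predictions and write them to a CSV. The `write_cv_predictions` helper and `predict_fn` hook are illustrative names of our own; the column names follow the GDPa1 dataset:

```python
# Sketch: hold out one fold at a time, collect out-of-fold predictions,
# and write them to a submission CSV. predict_fn(train_rows, test_rows)
# stands in for your model and returns one float per test row.
import csv

FOLD_COL = "hierarchical_cluster_IgG_isotype_stratified_fold"

def write_cv_predictions(rows, predict_fn, out_path, prop="HIC"):
    # rows: GDPa1 records loaded as dicts (e.g. via csv.DictReader)
    folds = sorted(set(r[FOLD_COL] for r in rows))
    out = []
    for fold in folds:
        train = [r for r in rows if r[FOLD_COL] != fold]
        test = [r for r in rows if r[FOLD_COL] == fold]
        preds = predict_fn(train, test)
        for r, p in zip(test, preds):
            out.append((r["antibody_name"], r[FOLD_COL], p))
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["antibody_name", FOLD_COL, prop])
        writer.writerows(out)
```

The same loop extends to multiple properties by adding one output column per assay.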
⏰ Submissions close on **1 November 2025**.
---
#### Acknowledgements
We gratefully acknowledge [Tamarind Bio](https://www.tamarind.bio/)'s help in running the following models, which appear on the leaderboard:
- TAP (Therapeutic Antibody Profiler)
- SaProt
- DeepViscosity
- Aggrescan3D
- AntiFold
We're working on getting more public models added, so that participants have more precomputed features to use for modeling.
---
#### How to contribute?
We'd like to add some more existing developability models to the leaderboard. Some examples of models we'd like to add:
- Absolute folding stability models (for Thermostability)
- PROPERMAB
- AbMelt (requires GROMACS for MD simulations)
If you would like to form a team or discuss ideas, join the [Slack community]({SLACK_URL}) co-hosted by Bits in Bio.
"""
# TODO(Lood): Add "πŸ“Š The first test set results will be released on October 13th, ahead of the final submission deadline on November 1st."
# Note(Lood): Significance: Add another note of "many models are trained on different datasets, and differing train/test splits, so this is a consistent way of comparing for a heldout set"
FAQS = {
"Is there a fee to enter?": "No. Participation is free of charge.",
"Who can participate?": "Anyone. We encourage academic labs, individuals, and especially industry teams who use developability models in production.",
"Where can I find more information about the methods used to generate the data?": (
"Our [PROPHET-Ab preprint](https://www.biorxiv.org/content/10.1101/2025.05.01.651684v1) describes in detail the methods used to generate the training dataset. "
"Note: these assays may differ from previously published methods; correlations between literature data and our experimental data are also reported in the preprint. "
"The same methods are used to generate the held-out test data."
),
"What do the datasets contain?": (
"Both the GDPa1 and held-out test sets contain the VH and VL sequences, as well as the full heavy-chain sequence. The GDPa1 dataset is a mix of IgG1, IgG2, and IgG4 antibodies, while the held-out test set contains only IgG1 antibodies. We also include the light-chain subtype (lambda or kappa)."
),
"How were the held-out sequences designed?": (
"We sampled 80 paired antibody sequences from [OAS](https://opig.stats.ox.ac.uk/webapps/oas/), aiming to represent the range of germline variants, sequence identities to germline, and CDR3 lengths. "
"The resulting set is quite diverse as measured by pairwise sequence identity."
),
"Do I need to design new proteins?": (
"No. This is just a predictive competition, which will be judged according to the correlation between predictions and experimental values. There may be a generative round in the future."
),
"Can I participate anonymously?": (
"Yes! Please still create an anonymous Hugging Face account (we use it to uniquely associate submissions) and add an email address on the [registration page](https://datapoints.ginkgo.bio/ai-competitions/2025-abdev-competition) so that we can contact participants throughout the competition. "
"Note that top participants will need to identify themselves at the end of the tournament to receive prizes / recognition. "
"If you have any concerns about anonymity, please contact us at [email protected] - you can even send us a CSV of submissions from a burner email if necessary! πŸ₯·"
),
"How is intellectual property handled?": (
f"Participants retain IP rights to the methods they use and develop during the tournament. Read more details in our terms [here]({TERMS_URL})."
),
"Do I need to submit my code / methods in order to participate?": (
"No, there are no requirements to submit code / methods and submitted predictions remain private. "
"We also have an optional field for including a short model description. "
"Top performing participants will be requested to identify themselves at the end of the tournament. "
"There will be one prize for the best open-source model, which will require code / methods to be available."
),
"How exactly can I evaluate my model?": (
"You can calculate the Spearman correlation coefficient on the GDPa1 dataset yourself before uploading to the leaderboard. "
"Use `spearmanr(predictions, targets, nan_policy='omit')` from `scipy.stats` to compute the Spearman correlation coefficient for each of the 5 folds, then take the average. "
"For the held-out private set, we will calculate these Spearman correlations privately at the end of the competition (and possibly at other points throughout the competition) - but there will not be 'rolling results' on the private test set, to prevent test-set leakage."
),
"How often does the leaderboard update?": (
"The leaderboard should reflect new submissions within a minute of submitting. Note that the leaderboard will not show results on the private test set; these will be calculated once at the end of the tournament (and possibly on another occasion before that)."
),
"How many submissions can I make?": (
"You can currently make unlimited submissions, but we may choose to limit the number of submissions per user. For the private test set evaluation, your latest submission will be used."
),
"How are winners determined?": (
'There will be 6 prizes (one for each of the assay properties plus an "open-source" prize). '
"For the property-specific prizes, winners will be determined by the submission with the highest Spearman rank correlation coefficient on the private holdout set. "
'For the "open-source" prize, this will be determined by the highest average Spearman across all properties. '
"We reserve the right to award the open-source prize to a predictor with competitive results for a subset of properties (e.g. a top polyreactivity model)."
),
"How does the open-source prize work?": (
"Participants who open-source their training code and methods will be eligible for the open-source prize (as well as the other prizes)."
),
"What do I need to submit?": (
'There is a tab on the Hugging Face competition page to upload predictions - for each dataset, participants submit a CSV containing a column for each property they would like to predict (e.g. a column called "HIC"), '
"and one row per antibody, with sequences matching those in the input file. These predictions are evaluated in the backend using the Spearman rank correlation between predictions and experimental values, and the resulting metrics are added to the leaderboard. "
"Predictions remain private and are not seen by other contestants."
),
"Can I submit predictions for only one property?": (
"Yes. You do not need to predict all 5 properties to participate. Each property has its own leaderboard and prize, so you may submit models for a subset of the assays if you wish."
),
"Are participants required to use the provided cross-validation splits?": (
"Yes, to ensure fair comparison between different trained models. The results will be calculated by taking the average Spearman correlation coefficient across all folds."
),
"Are there any country restrictions for prize eligibility?": (
"Yes. Due to applicable laws, prizes cannot be awarded to participants from countries under U.S. sanctions. See the competition terms for details."
),
"How are private test set submissions handled?": (
"We will use the private test set submission at the close of the competition to determine the winners. "
"If there are any intermediate releases of private test set results, these will not affect the final ranking."
),
}
SUBMIT_INSTRUCTIONS = f"""
# Antibody Developability Submission
You do **not** need to predict all 5 properties β€” each property has its own leaderboard and prize.
## Instructions
1. **Upload both CSV files**:
- **GDPa1 Cross-Validation predictions** (using cross-validation folds)
- **Private Test Set predictions** (final test submission)
2. Each CSV should contain `antibody_name` + one column per property you are predicting (e.g. `"antibody_name,Titer,PR_CHO"` if your model predicts Titer and Polyreactivity).
- List of valid property names: `{', '.join(ASSAY_LIST)}`.
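For example, a submission predicting two properties might look like this (antibody names and values are illustrative only):

```csv
antibody_name,Titer,PR_CHO
example_antibody_1,1520.0,0.12
example_antibody_2,980.5,0.47
```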
The GDPa1 results should appear on the leaderboard within a minute of submission and can also be verified manually using the Spearman rank correlation. The **private test set results will not appear on the leaderboards at first**; they will be used to determine the winners at the close of the competition.
We may release private test set results at intermediate points during the competition.
## Cross-validation
For the GDPa1 cross-validation predictions, use the `"hierarchical_cluster_IgG_isotype_stratified_fold"` column to split the dataset into folds and make predictions for each of the folds.
Submit a CSV file in the same format but also containing the `"hierarchical_cluster_IgG_isotype_stratified_fold"` column.
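The per-fold scoring used for the leaderboard can be sketched as follows, assuming predictions, targets, and fold labels as NumPy arrays (`cv_spearman` is an illustrative name of our own):

```python
# Sketch: average Spearman correlation across CV folds, matching the
# leaderboard's scoring (per-fold spearmanr, then the mean).
import numpy as np
from scipy.stats import spearmanr

def cv_spearman(preds, targets, folds):
    scores = []
    for fold in np.unique(folds):
        mask = folds == fold
        # nan_policy="omit" skips antibodies with missing assay values
        rho, _ = spearmanr(preds[mask], targets[mask], nan_policy="omit")
        scores.append(rho)
    return float(np.mean(scores))
```

Running this on your own cross-validation predictions should closely match the score shown on the leaderboard.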
Check out our tutorial on training an antibody developability prediction model with cross-validation [here]({TUTORIAL_URL}).
Submissions close on **1 November 2025**.
"""