Update README.md
README.md CHANGED
@@ -24,7 +24,7 @@ configs:
      path: "complex_traits_all/test.parquet"
---
# 🧬 TraitGym
[Benchmarking DNA Sequence Models for Causal Regulatory Variant Prediction in Human Genetics](https://www.biorxiv.org/content/10.1101/2025.02.11.637758v1)

🏆 Leaderboard: https://huggingface.co/spaces/songlab/TraitGym-leaderboard

@@ -74,6 +74,50 @@ configs:
- Tries to follow [recommended Snakemake structure](https://snakemake.readthedocs.io/en/stable/snakefiles/deployment.html)
- GPN-Promoter code is in [the main GPN repo](https://github.com/songlab-cal/gpn)

### Installation
First, clone the repo and `cd` into it.
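For example (the repository URL below is an assumption; substitute the actual location of the TraitGym code):
```bash
# clone the code repository and enter it (URL is an assumption, not stated in this README)
git clone https://github.com/songlab-cal/TraitGym.git
cd TraitGym
```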
Second, install the dependencies:
```bash
conda env create -f workflow/envs/general.yaml
conda activate TraitGym
```
Optionally, download precomputed datasets and predictions (6.7G):
```bash
mkdir -p results/dataset
huggingface-cli download songlab/TraitGym --repo-type dataset --local-dir results/dataset/
```

### Running
To compute a specific result, specify its path:
```bash
snakemake --cores all <path>
```
Example paths (these are already computed):
```bash
# zero-shot LLR
results/dataset/complex_traits_matched_9/AUPRC_by_chrom_weighted_average/all/GPN-MSA_absLLR.plus.score.csv
# logistic regression/linear probing
results/dataset/complex_traits_matched_9/AUPRC_by_chrom_weighted_average/all/GPN-MSA.LogisticRegression.chrom.csv
```
We recommend the following:
```bash
# Snakemake sometimes gets confused about which files it needs to rerun; --touch marks
# existing output files as up to date so they are not regenerated
snakemake --cores all <path> --touch
# print the execution plan without running anything
snakemake --cores all <path> --dry-run
```
To evaluate your own set of model features, place a dataframe of shape `n_variants,n_features` in `results/dataset/{dataset}/features/{features}.parquet`.
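A minimal sketch of producing such a file (the dataset and feature-set names are only illustrative, and it assumes pandas plus a parquet engine such as pyarrow are available in the activated environment; each row presumably has to correspond to one variant, in the dataset's order):
```bash
# hypothetical example: write an (n_variants, n_features) dataframe to the expected location
mkdir -p results/dataset/complex_traits_matched_9/features
python - << 'EOF'
import pandas as pd

# toy values for illustration only; provide one row per variant of the dataset
df = pd.DataFrame({"feature_1": [0.12, 0.85], "feature_2": [0.33, 0.41]})
df.to_parquet("results/dataset/complex_traits_matched_9/features/my_model.parquet", index=False)
EOF
```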
For zero-shot evaluation of column `{feature}` and sign `{sign}` (`plus` or `minus`), you would invoke:
```bash
snakemake --cores all results/dataset/{dataset}/{metric}/all/{features}.{sign}.{feature}.csv
```
To train and evaluate a logistic regression model, you would invoke:
```bash
snakemake --cores all results/dataset/{dataset}/{metric}/all/{feature_set}.LogisticRegression.chrom.csv
```
where `{feature_set}` should first be defined in `feature_sets` in `config/config.yaml` (this allows combining features defined in different files).
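For instance, with a hypothetical feature set named `my_model_plus_CADD` declared under `feature_sets`, and the dataset and metric taken from the example paths above, the invocation would be:
```bash
# the feature-set name is hypothetical; dataset and metric follow the example paths above
snakemake --cores all results/dataset/complex_traits_matched_9/AUPRC_by_chrom_weighted_average/all/my_model_plus_CADD.LogisticRegression.chrom.csv
```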

## Citation
[Link to paper](https://www.biorxiv.org/content/10.1101/2025.02.11.637758v1)
```bibtex