Clement committed (verified)
Commit: 69c0fdb
Parent(s): 37cdcf4

Upload 15 files
CODE_OF_CONDUCT.md ADDED
@@ -0,0 +1,80 @@
1
+ # Code of Conduct
2
+
3
+ ## Our Pledge
4
+
5
+ In the interest of fostering an open and welcoming environment, we as
6
+ contributors and maintainers pledge to make participation in our project and
7
+ our community a harassment-free experience for everyone, regardless of age, body
8
+ size, disability, ethnicity, sex characteristics, gender identity and expression,
9
+ level of experience, education, socio-economic status, nationality, personal
10
+ appearance, race, religion, or sexual identity and orientation.
11
+
12
+ ## Our Standards
13
+
14
+ Examples of behavior that contributes to creating a positive environment
15
+ include:
16
+
17
+ * Using welcoming and inclusive language
18
+ * Being respectful of differing viewpoints and experiences
19
+ * Gracefully accepting constructive criticism
20
+ * Focusing on what is best for the community
21
+ * Showing empathy towards other community members
22
+
23
+ Examples of unacceptable behavior by participants include:
24
+
25
+ * The use of sexualized language or imagery and unwelcome sexual attention or
26
+ advances
27
+ * Trolling, insulting/derogatory comments, and personal or political attacks
28
+ * Public or private harassment
29
+ * Publishing others' private information, such as a physical or electronic
30
+ address, without explicit permission
31
+ * Other conduct which could reasonably be considered inappropriate in a
32
+ professional setting
33
+
34
+ ## Our Responsibilities
35
+
36
+ Project maintainers are responsible for clarifying the standards of acceptable
37
+ behavior and are expected to take appropriate and fair corrective action in
38
+ response to any instances of unacceptable behavior.
39
+
40
+ Project maintainers have the right and responsibility to remove, edit, or
41
+ reject comments, commits, code, wiki edits, issues, and other contributions
42
+ that are not aligned to this Code of Conduct, or to ban temporarily or
43
+ permanently any contributor for other behaviors that they deem inappropriate,
44
+ threatening, offensive, or harmful.
45
+
46
+ ## Scope
47
+
48
+ This Code of Conduct applies within all project spaces, and it also applies when
49
+ an individual is representing the project or its community in public spaces.
50
+ Examples of representing a project or community include using an official
51
+ project e-mail address, posting via an official social media account, or acting
52
+ as an appointed representative at an online or offline event. Representation of
53
+ a project may be further defined and clarified by project maintainers.
54
+
55
+ This Code of Conduct also applies outside the project spaces when there is a
56
+ reasonable belief that an individual's behavior may have a negative impact on
57
+ the project or its community.
58
+
59
+ ## Enforcement
60
+
61
+ Instances of abusive, harassing, or otherwise unacceptable behavior may be
62
+ reported by contacting the project team at <[email protected]>. All
63
+ complaints will be reviewed and investigated and will result in a response that
64
+ is deemed necessary and appropriate to the circumstances. The project team is
65
+ obligated to maintain confidentiality with regard to the reporter of an incident.
66
+ Further details of specific enforcement policies may be posted separately.
67
+
68
+ Project maintainers who do not follow or enforce the Code of Conduct in good
69
+ faith may face temporary or permanent repercussions as determined by other
70
+ members of the project's leadership.
71
+
72
+ ## Attribution
73
+
74
+ This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
75
+ available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
76
+
77
+ [homepage]: https://www.contributor-covenant.org
78
+
79
+ For answers to common questions about this code of conduct, see
80
+ https://www.contributor-covenant.org/faq
CONTRIBUTING.md ADDED
@@ -0,0 +1,31 @@
1
+ # Contributing to DINOv2
2
+ We want to make contributing to this project as easy and transparent as
3
+ possible.
4
+
5
+ ## Pull Requests
6
+ We actively welcome your pull requests.
7
+
8
+ 1. Fork the repo and create your branch from `main`.
9
+ 2. If you've added code that should be tested, add tests.
10
+ 3. If you've changed APIs, update the documentation.
11
+ 4. Ensure the test suite passes.
12
+ 5. Make sure your code lints.
13
+ 6. If you haven't already, complete the Contributor License Agreement ("CLA").
14
+
15
+ ## Contributor License Agreement ("CLA")
16
+ In order to accept your pull request, we need you to submit a CLA. You only need
17
+ to do this once to work on any of Meta's open source projects.
18
+
19
+ Complete your CLA here: <https://code.facebook.com/cla>
20
+
21
+ ## Issues
22
+ We use GitHub issues to track public bugs. Please ensure your description is
23
+ clear and has sufficient instructions to be able to reproduce the issue.
24
+
25
+ Meta has a [bounty program](https://www.facebook.com/whitehat/) for the safe
26
+ disclosure of security bugs. In those cases, please go through the process
27
+ outlined on that page and do not file a public issue.
28
+
29
+ ## License
30
+ By contributing to DINOv2, you agree that your contributions will be licensed
31
+ under the LICENSE file in the root directory of this source tree.
LICENSE ADDED
@@ -0,0 +1,203 @@
1
+
2
+
3
+ Apache License
4
+ Version 2.0, January 2004
5
+ http://www.apache.org/licenses/
6
+
7
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
8
+
9
+ 1. Definitions.
10
+
11
+ "License" shall mean the terms and conditions for use, reproduction,
12
+ and distribution as defined by Sections 1 through 9 of this document.
13
+
14
+ "Licensor" shall mean the copyright owner or entity authorized by
15
+ the copyright owner that is granting the License.
16
+
17
+ "Legal Entity" shall mean the union of the acting entity and all
18
+ other entities that control, are controlled by, or are under common
19
+ control with that entity. For the purposes of this definition,
20
+ "control" means (i) the power, direct or indirect, to cause the
21
+ direction or management of such entity, whether by contract or
22
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
23
+ outstanding shares, or (iii) beneficial ownership of such entity.
24
+
25
+ "You" (or "Your") shall mean an individual or Legal Entity
26
+ exercising permissions granted by this License.
27
+
28
+ "Source" form shall mean the preferred form for making modifications,
29
+ including but not limited to software source code, documentation
30
+ source, and configuration files.
31
+
32
+ "Object" form shall mean any form resulting from mechanical
33
+ transformation or translation of a Source form, including but
34
+ not limited to compiled object code, generated documentation,
35
+ and conversions to other media types.
36
+
37
+ "Work" shall mean the work of authorship, whether in Source or
38
+ Object form, made available under the License, as indicated by a
39
+ copyright notice that is included in or attached to the work
40
+ (an example is provided in the Appendix below).
41
+
42
+ "Derivative Works" shall mean any work, whether in Source or Object
43
+ form, that is based on (or derived from) the Work and for which the
44
+ editorial revisions, annotations, elaborations, or other modifications
45
+ represent, as a whole, an original work of authorship. For the purposes
46
+ of this License, Derivative Works shall not include works that remain
47
+ separable from, or merely link (or bind by name) to the interfaces of,
48
+ the Work and Derivative Works thereof.
49
+
50
+ "Contribution" shall mean any work of authorship, including
51
+ the original version of the Work and any modifications or additions
52
+ to that Work or Derivative Works thereof, that is intentionally
53
+ submitted to Licensor for inclusion in the Work by the copyright owner
54
+ or by an individual or Legal Entity authorized to submit on behalf of
55
+ the copyright owner. For the purposes of this definition, "submitted"
56
+ means any form of electronic, verbal, or written communication sent
57
+ to the Licensor or its representatives, including but not limited to
58
+ communication on electronic mailing lists, source code control systems,
59
+ and issue tracking systems that are managed by, or on behalf of, the
60
+ Licensor for the purpose of discussing and improving the Work, but
61
+ excluding communication that is conspicuously marked or otherwise
62
+ designated in writing by the copyright owner as "Not a Contribution."
63
+
64
+ "Contributor" shall mean Licensor and any individual or Legal Entity
65
+ on behalf of whom a Contribution has been received by Licensor and
66
+ subsequently incorporated within the Work.
67
+
68
+ 2. Grant of Copyright License. Subject to the terms and conditions of
69
+ this License, each Contributor hereby grants to You a perpetual,
70
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
71
+ copyright license to reproduce, prepare Derivative Works of,
72
+ publicly display, publicly perform, sublicense, and distribute the
73
+ Work and such Derivative Works in Source or Object form.
74
+
75
+ 3. Grant of Patent License. Subject to the terms and conditions of
76
+ this License, each Contributor hereby grants to You a perpetual,
77
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
78
+ (except as stated in this section) patent license to make, have made,
79
+ use, offer to sell, sell, import, and otherwise transfer the Work,
80
+ where such license applies only to those patent claims licensable
81
+ by such Contributor that are necessarily infringed by their
82
+ Contribution(s) alone or by combination of their Contribution(s)
83
+ with the Work to which such Contribution(s) was submitted. If You
84
+ institute patent litigation against any entity (including a
85
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
86
+ or a Contribution incorporated within the Work constitutes direct
87
+ or contributory patent infringement, then any patent licenses
88
+ granted to You under this License for that Work shall terminate
89
+ as of the date such litigation is filed.
90
+
91
+ 4. Redistribution. You may reproduce and distribute copies of the
92
+ Work or Derivative Works thereof in any medium, with or without
93
+ modifications, and in Source or Object form, provided that You
94
+ meet the following conditions:
95
+
96
+ (a) You must give any other recipients of the Work or
97
+ Derivative Works a copy of this License; and
98
+
99
+ (b) You must cause any modified files to carry prominent notices
100
+ stating that You changed the files; and
101
+
102
+ (c) You must retain, in the Source form of any Derivative Works
103
+ that You distribute, all copyright, patent, trademark, and
104
+ attribution notices from the Source form of the Work,
105
+ excluding those notices that do not pertain to any part of
106
+ the Derivative Works; and
107
+
108
+ (d) If the Work includes a "NOTICE" text file as part of its
109
+ distribution, then any Derivative Works that You distribute must
110
+ include a readable copy of the attribution notices contained
111
+ within such NOTICE file, excluding those notices that do not
112
+ pertain to any part of the Derivative Works, in at least one
113
+ of the following places: within a NOTICE text file distributed
114
+ as part of the Derivative Works; within the Source form or
115
+ documentation, if provided along with the Derivative Works; or,
116
+ within a display generated by the Derivative Works, if and
117
+ wherever such third-party notices normally appear. The contents
118
+ of the NOTICE file are for informational purposes only and
119
+ do not modify the License. You may add Your own attribution
120
+ notices within Derivative Works that You distribute, alongside
121
+ or as an addendum to the NOTICE text from the Work, provided
122
+ that such additional attribution notices cannot be construed
123
+ as modifying the License.
124
+
125
+ You may add Your own copyright statement to Your modifications and
126
+ may provide additional or different license terms and conditions
127
+ for use, reproduction, or distribution of Your modifications, or
128
+ for any such Derivative Works as a whole, provided Your use,
129
+ reproduction, and distribution of the Work otherwise complies with
130
+ the conditions stated in this License.
131
+
132
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
133
+ any Contribution intentionally submitted for inclusion in the Work
134
+ by You to the Licensor shall be under the terms and conditions of
135
+ this License, without any additional terms or conditions.
136
+ Notwithstanding the above, nothing herein shall supersede or modify
137
+ the terms of any separate license agreement you may have executed
138
+ with Licensor regarding such Contributions.
139
+
140
+ 6. Trademarks. This License does not grant permission to use the trade
141
+ names, trademarks, service marks, or product names of the Licensor,
142
+ except as required for reasonable and customary use in describing the
143
+ origin of the Work and reproducing the content of the NOTICE file.
144
+
145
+ 7. Disclaimer of Warranty. Unless required by applicable law or
146
+ agreed to in writing, Licensor provides the Work (and each
147
+ Contributor provides its Contributions) on an "AS IS" BASIS,
148
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
149
+ implied, including, without limitation, any warranties or conditions
150
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
151
+ PARTICULAR PURPOSE. You are solely responsible for determining the
152
+ appropriateness of using or redistributing the Work and assume any
153
+ risks associated with Your exercise of permissions under this License.
154
+
155
+ 8. Limitation of Liability. In no event and under no legal theory,
156
+ whether in tort (including negligence), contract, or otherwise,
157
+ unless required by applicable law (such as deliberate and grossly
158
+ negligent acts) or agreed to in writing, shall any Contributor be
159
+ liable to You for damages, including any direct, indirect, special,
160
+ incidental, or consequential damages of any character arising as a
161
+ result of this License or out of the use or inability to use the
162
+ Work (including but not limited to damages for loss of goodwill,
163
+ work stoppage, computer failure or malfunction, or any and all
164
+ other commercial damages or losses), even if such Contributor
165
+ has been advised of the possibility of such damages.
166
+
167
+ 9. Accepting Warranty or Additional Liability. While redistributing
168
+ the Work or Derivative Works thereof, You may choose to offer,
169
+ and charge a fee for, acceptance of support, warranty, indemnity,
170
+ or other liability obligations and/or rights consistent with this
171
+ License. However, in accepting such obligations, You may act only
172
+ on Your own behalf and on Your sole responsibility, not on behalf
173
+ of any other Contributor, and only if You agree to indemnify,
174
+ defend, and hold each Contributor harmless for any liability
175
+ incurred by, or claims asserted against, such Contributor by reason
176
+ of your accepting any such warranty or additional liability.
177
+
178
+ END OF TERMS AND CONDITIONS
179
+
180
+ APPENDIX: How to apply the Apache License to your work.
181
+
182
+ To apply the Apache License to your work, attach the following
183
+ boilerplate notice, with the fields enclosed by brackets "[]"
184
+ replaced with your own identifying information. (Don't include
185
+ the brackets!) The text should be enclosed in the appropriate
186
+ comment syntax for the file format. We also recommend that a
187
+ file or class name and description of purpose be included on the
188
+ same "printed page" as the copyright notice for easier
189
+ identification within third-party archives.
190
+
191
+ Copyright [yyyy] [name of copyright owner]
192
+
193
+ Licensed under the Apache License, Version 2.0 (the "License");
194
+ you may not use this file except in compliance with the License.
195
+ You may obtain a copy of the License at
196
+
197
+ http://www.apache.org/licenses/LICENSE-2.0
198
+
199
+ Unless required by applicable law or agreed to in writing, software
200
+ distributed under the License is distributed on an "AS IS" BASIS,
201
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
202
+ See the License for the specific language governing permissions and
203
+ limitations under the License.
MODEL_CARD.md ADDED
@@ -0,0 +1,272 @@
1
+ # Model Card for DINOv2-S/B/L/g
2
+
3
+ These are Vision Transformer models trained following the method described in the papers:
4
+ "DINOv2: Learning Robust Visual Features without Supervision"
5
+ and
6
+ "Vision Transformers Need Registers".
7
+
8
+ We provide 8 models:
9
+ - 1 ViT-g trained from scratch, plus 3 ViT-S/B/L models distilled from it, without registers.
10
+ - 1 ViT-g trained from scratch, plus 3 ViT-S/B/L models distilled from it, with registers.
11
+
12
+ ## Model Details
13
+ The model takes an image as input and returns a class token and patch tokens, and optionally 4 register tokens.
14
+
15
+ The embedding dimension is:
16
+ - 384 for ViT-S.
17
+ - 768 for ViT-B.
18
+ - 1024 for ViT-L.
19
+ - 1536 for ViT-g.
20
+
21
+ The models follow a Transformer architecture, with a patch size of 14. In the case of registers, we add 4 register tokens, learned during training, to the input sequence after the patch embedding.
22
+
23
+ For a 224x224 image, this results in 1 class token + 256 patch tokens, and optionally 4 register tokens.
24
+
25
+ The models can accept larger images provided the image shapes are multiples of the patch size (14).
26
+ If this condition is not met, the model crops the input to the closest smaller multiple of the patch size.
27
+
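+ As an illustration, the minimal sketch below loads the ViT-S/14 backbone through PyTorch Hub and inspects its outputs for a 224x224 input. The `forward_features` call and its output keys (`x_norm_clstoken`, `x_norm_patchtokens`) are assumptions based on the repository's ViT implementation and may differ between versions.
+
+ ```python
+ import torch
+
+ # Load the ViT-S/14 backbone (without registers) from PyTorch Hub.
+ model = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14')
+ model.eval()
+
+ # Dummy batch: 1 image, 3 channels, 224x224 pixels (224 is a multiple of the patch size 14).
+ x = torch.randn(1, 3, 224, 224)
+
+ with torch.no_grad():
+     # Calling the model directly returns the class-token embedding: (1, 384) for ViT-S.
+     cls_embedding = model(x)
+     # forward_features exposes the individual tokens (assumed interface, see note above).
+     features = model.forward_features(x)
+     cls_token = features["x_norm_clstoken"]        # (1, 384)
+     patch_tokens = features["x_norm_patchtokens"]  # (1, 256, 384): 16x16 patches
+
+ print(cls_embedding.shape, cls_token.shape, patch_tokens.shape)
+ ```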
28
+ ### Model Description
29
+
30
+ - **Developed by:** Meta AI
31
+ - **Model type:** Vision Transformer
32
+ - **License:** Apache License 2.0
33
+
34
+ - **Repository:** https://github.com/facebookresearch/dinov2
35
+ - **Paper:** https://arxiv.org/abs/2304.07193
36
+ - **Demo:** https://dinov2.metademolab.com/
37
+
38
+ ## Uses
39
+
40
+ The models are vision backbones providing multi-purpose features for downstream tasks.
41
+
42
+ ### Direct Use
43
+
44
+ The models can be used without fine-tuning, with downstream classifiers as simple as linear layers, to obtain competitive results:
45
+ - on depth estimation, semantic segmentation, using linear layers.
46
+ - on image classification, using k-NN classifiers on the class token (see the sketch after this list).
47
+ - on image classification, with logistic regression classifiers applied on the class token.
48
+ - on image classification, with a linear layer applied on the class token and the average of the patch tokens.
49
+ - on image retrieval using nearest neighbors.
50
+
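+ As a concrete illustration of the k-NN use case, the sketch below implements a simple cosine-similarity majority vote over precomputed class-token features in plain PyTorch. The support-set tensors (`train_features`, `train_labels`) are placeholders standing in for features extracted from a labeled training set, and the voting scheme is simpler than the similarity-weighted variant used in the repository's evaluation code.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def knn_predict(query, train_features, train_labels, k=20, num_classes=1000):
+     """Cosine-similarity k-NN majority vote over precomputed class-token features."""
+     query = F.normalize(query, dim=-1)
+     train_features = F.normalize(train_features, dim=-1)
+     sims = query @ train_features.T                # (B, N) cosine similarities
+     _, idx = sims.topk(k, dim=-1)                  # indices of the k nearest neighbors
+     votes = train_labels[idx]                      # (B, k) neighbor labels
+     counts = F.one_hot(votes, num_classes).sum(dim=1)
+     return counts.argmax(dim=-1)                   # majority label per query
+
+ # Placeholder support set: 10,000 class-token features of dimension 384 (ViT-S) with labels.
+ train_features = torch.randn(10_000, 384)
+ train_labels = torch.randint(0, 1000, (10_000,))
+ queries = torch.randn(8, 384)
+ print(knn_predict(queries, train_features, train_labels))
+ ```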
51
+ ### Downstream Use
52
+
53
+ It is technically possible to perform fine-tuning on the models, for small gains (we measured +2% on ImageNet-1k classification).
54
+ We recommend keeping this as a very last step and only when necessary, as the features already provide good performance out-of-the-box.
55
+
56
+ ## Bias, Risks, and Limitations
57
+
58
+ Despite improvements thanks to the training method not using annotations, we still observe significant biases in our models toward rich households from Western countries.
59
+
60
+ ### Recommendations
61
+
62
+ We expect fine-tuning will increase the biases in the features produced by the model as they will be tuned to the fine-tuning labels.
63
+
64
+ ## How to Get Started with the Model
65
+
66
+ Use the code below to get started with the model.
67
+
68
+ ```python
69
+ import torch
70
+
71
+ # DINOv2
72
+ dinov2_vits14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14')
73
+ dinov2_vitb14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitb14')
74
+ dinov2_vitl14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitl14')
75
+ dinov2_vitg14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitg14')
76
+
77
+ # DINOv2 with registers
78
+ dinov2_vits14_reg = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14_reg')
79
+ dinov2_vitb14_reg = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitb14_reg')
80
+ dinov2_vitl14_reg = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitl14_reg')
81
+ dinov2_vitg14_reg = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitg14_reg')
82
+ ```
83
+
84
+ ## Training Details
85
+
86
+ ### Training Data
87
+
88
+ - **Training data:** LVD-142M (see paper)
89
+ - **Training regime:** fp16 using PyTorch-FSDP mixed-precision.
90
+
91
+ ### Training Procedure
92
+
93
+ - **Training objective:**
94
+ - DINO self-distillation loss with multi-crop
95
+ - iBOT masked-image modeling loss
96
+ - KoLeo regularization on [CLS] tokens
97
+ - **Architectures:**
98
+ - ViT-S (21M params): Patch size 14, embedding dimension 384, 6 heads, MLP FFN
99
+ - ViT-B (86M params): Patch size 14, embedding dimension 768, 12 heads, MLP FFN
100
+ - ViT-L (0.3B params): Patch size 14, embedding dimension 1024, 16 heads, MLP FFN
101
+ - ViT-g (1.1B params): Patch size 14, embedding dimension 1536, 24 heads, SwiGLU FFN
102
+ - **Distillation:**
103
+ - Distillation follows the standard DINOv2 pretraining procedure, except that the teacher is a frozen, pretrained ViT-g.
104
+
105
+ ## Evaluation
106
+
107
+ We refer users to the associated papers for the evaluation protocols.
108
+
109
+ <table>
110
+ <tr>
111
+ <th colspan="2"></th>
112
+ <th colspan="3">ImageNet-1k</th>
113
+ <th>NYU-Depth v2</th>
114
+ <th>SUN-RGBD</th>
115
+ <th>ADE20k</th>
116
+ <th>iNaturalist 2018</th>
117
+ <th>Oxford-H</th>
118
+ </tr>
119
+ <tr>
120
+ <th rowspan="2">model</th>
121
+ <th rowspan="2">with <br /> registers</th>
122
+ <th>classif. (acc)</th>
123
+ <th>classif. (acc)</th>
124
+ <th>classif. V2 (acc)</th>
125
+ <th>depth (RMSE)</th>
126
+ <th>depth (RMSE)</th>
127
+ <th>segm. (mAP)</th>
128
+ <th>classif. (acc)</th>
129
+ <th>retrieval (mAP)</th>
130
+ </tr>
131
+ <tr>
132
+ <!-- <th>^</th> -->
133
+ <th>k-NN</th>
134
+ <th>linear</th>
135
+ <th>linear</th>
136
+ <th>linear<br />4 layers</th>
137
+ <th>NYU-D transfer</th>
138
+ <th>multiscale</th>
139
+ <th>linear</th>
140
+ <th>nearest neighbor</th>
141
+ </tr>
142
+ <tr>
143
+ <td>ViT-S/14</td>
144
+ <td align="center">:x:</td>
145
+ <td align="right">79.0%</td>
146
+ <td align="right">81.1%</td>
147
+ <td align="right">70.8%</td>
148
+ <td align="right">0.417</td>
149
+ <td align="right">0.431</td>
150
+ <td align="right">47.2</td>
151
+ <td align="right">69.5%</td>
152
+ <td align="right">43.2</td>
153
+ </tr>
154
+ <tr>
155
+ <td>ViT-S/14</td>
156
+ <td align="center">:white_check_mark:</td>
157
+ <td align="right">79.1%</td>
158
+ <td align="right">80.9%</td>
159
+ <td align="right">71.0%</td>
160
+ <td align="right">N/A</td>
161
+ <td align="right">N/A</td>
162
+ <td align="right">N/A</td>
163
+ <td align="right">67.6%</td>
164
+ <td align="right">39.5</td>
165
+ </tr>
166
+ <tr>
167
+ <td>ViT-B/14</td>
168
+ <td align="center">:x:</td>
169
+ <td align="right">82.1%</td>
170
+ <td align="right">84.5%</td>
171
+ <td align="right">74.9%</td>
172
+ <td align="right">0.362</td>
173
+ <td align="right">0.400</td>
174
+ <td align="right">51.3</td>
175
+ <td align="right">76.3%</td>
176
+ <td align="right">49.5</td>
177
+ </tr>
178
+ <tr>
+ <td>ViT-B/14</td>
179
+ <td align="center">:white_check_mark:</td>
180
+ <td align="right">82.0%</td>
181
+ <td align="right">84.6%</td>
182
+ <td align="right">75.6%</td>
183
+ <td align="right">N/A</td>
184
+ <td align="right">N/A</td>
185
+ <td align="right">N/A</td>
186
+ <td align="right">73.8%</td>
187
+ <td align="right">51.0</td>
188
+ </tr>
189
+ <tr>
190
+ <td>ViT-L/14</td>
191
+ <td align="center">:x:</td>
192
+ <td align="right">83.5%</td>
193
+ <td align="right">86.3%</td>
194
+ <td align="right">77.6%</td>
195
+ <td align="right">0.333</td>
196
+ <td align="right">0.396</td>
197
+ <td align="right">53.1</td>
198
+ <td align="right">79.8%</td>
199
+ <td align="right">54.0</td>
200
+ </tr>
201
+ <tr>
202
+ <td>ViT-L/14</td>
203
+ <td align="center">:white_check_mark:</td>
204
+ <td align="right">83.8%</td>
205
+ <td align="right">86.7%</td>
206
+ <td align="right">78.5%</td>
207
+ <td align="right">N/A</td>
208
+ <td align="right">N/A</td>
209
+ <td align="right">N/A</td>
210
+ <td align="right">80.9%</td>
211
+ <td align="right">55.7</td>
212
+ </tr>
213
+ <tr>
214
+ <td>ViT-g/14</td>
215
+ <td align="center">:x:</td>
216
+ <td align="right">83.5%</td>
217
+ <td align="right">86.5%</td>
218
+ <td align="right">78.4%</td>
219
+ <td align="right">0.298</td>
220
+ <td align="right">0.362</td>
221
+ <td align="right">53.0</td>
222
+ <td align="right">81.6%</td>
223
+ <td align="right">52.3</td>
224
+ </tr>
225
+ <tr>
227
+ <td>ViT-g/14</td>
228
+ <td align="center">:white_check_mark:</td>
229
+ <td align="right">83.7%</td>
230
+ <td align="right">87.1%</td>
231
+ <td align="right">78.8%</td>
232
+ <td align="right">N/A</td>
233
+ <td align="right">N/A</td>
234
+ <td align="right">N/A</td>
235
+ <td align="right">81.5%</td>
236
+ <td align="right">58.2</td>
237
+ </tr>
238
+ </table>
239
+
240
+ ## Environmental Impact
241
+
242
+ - **Hardware Type:** Nvidia A100
243
+ - **Hours used:** 22,000 for ViT-g, 4,500 for ViT-S distillation, 5,300 for ViT-B distillation, 8,000 for ViT-L distillation
244
+ - **Cloud Provider:** Private infra
245
+ - **Compute Region:** USA
246
+ - **Carbon Emitted:** 7t CO2eq
247
+
248
+ #### Hardware
249
+
250
+ Nvidia A100 GPUs
251
+
252
+ #### Software
253
+
254
+ PyTorch 2.0,
255
+ xFormers 0.0.18
256
+
257
+ **BibTeX**
258
+
259
+ ```
260
+ @misc{oquab2023dinov2,
261
+ title={DINOv2: Learning Robust Visual Features without Supervision},
262
+ author={Oquab, Maxime and Darcet, Timothée and Moutakanni, Theo and Vo, Huy and Szafraniec, Marc and Khalidov, Vasil and Fernandez, Pierre and Haziza, Daniel and Massa, Francisco and El-Nouby, Alaaeldin and Howes, Russell and Huang, Po-Yao and Xu, Hu and Sharma, Vasu and Li, Shang-Wen and Galuba, Wojciech and Rabbat, Mike and Assran, Mido and Ballas, Nicolas and Synnaeve, Gabriel and Misra, Ishan and Jegou, Herve and Mairal, Julien and Labatut, Patrick and Joulin, Armand and Bojanowski, Piotr},
263
+ journal={arXiv:2304.07193},
264
+ year={2023}
265
+ }
266
+ @misc{darcet2023vitneedreg,
267
+ title={Vision Transformers Need Registers},
268
+ author={Darcet, Timothée and Oquab, Maxime and Mairal, Julien and Bojanowski, Piotr},
269
+ journal={arXiv:2309.16588},
270
+ year={2023}
271
+ }
272
+ ```
README.md CHANGED
@@ -1 +1,620 @@
1
- Models from https://github.com/facebookresearch/dinov2/tree/main
1
+ :new: [2023-10-26] *Added DINOv2 backbones with registers, following [Vision Transformers Need Registers](https://arxiv.org/abs/2309.16588).*
2
+
3
+ # DINOv2: Learning Robust Visual Features without Supervision
4
+
5
+ **[Meta AI Research, FAIR](https://ai.facebook.com/research/)**
6
+
7
+ Maxime Oquab,
8
+ Timothée Darcet,
9
+ Théo Moutakanni,
10
+ Huy V. Vo,
11
+ Marc Szafraniec,
12
+ Vasil Khalidov,
13
+ Patrick Labatut,
14
+ Armand Joulin,
15
+ Piotr Bojanowski
16
+
17
+ [[`Paper #1`](https://arxiv.org/abs/2304.07193)] [[`Paper #2`](https://arxiv.org/abs/2309.16588)] [[`Blog`](https://ai.facebook.com/blog/dino-v2-computer-vision-self-supervised-learning/)] [[`Demo`](https://dinov2.metademolab.com)] [[`BibTeX`](#citing-dinov2)]
18
+
19
+ PyTorch implementation and pretrained models for DINOv2. For details, see the papers: **[DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193)** and **[Vision Transformers Need Registers](https://arxiv.org/abs/2309.16588)**.
20
+
21
+ DINOv2 models produce high-performance visual features that can be directly employed with classifiers as simple as linear layers on a variety of computer vision tasks; these visual features are robust and perform well across domains without any requirement for fine-tuning. The models were pretrained on a dataset of 142 M images without using any labels or annotations.
22
+
23
+ https://github.com/facebookresearch/dinov2/assets/60359573/f168823e-7922-415a-b429-578badf5c356
24
+
25
+ <div align="center">
26
+ Visualization of the first three principal components of the patch features of all frames, mapped to RGB values.
27
+ </div>
28
+
29
+ ## Pretrained models
30
+
31
+ <table style="margin: auto">
32
+ <thead>
33
+ <tr>
34
+ <th>model</th>
35
+ <th># of<br />params</th>
36
+ <th>with<br />registers</th>
37
+ <th>ImageNet<br />k-NN</th>
38
+ <th>ImageNet<br />linear</th>
39
+ <th>download</th>
40
+ </tr>
41
+ </thead>
42
+ <tbody>
43
+ <tr>
44
+ <td>ViT-S/14 distilled</td>
45
+ <td align="right">21 M</td>
46
+ <td align="center">:x:</td>
47
+ <td align="right">79.0%</td>
48
+ <td align="right">81.1%</td>
49
+ <td><a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vits14/dinov2_vits14_pretrain.pth">backbone only</a></td>
50
+ </tr>
51
+ <tr>
52
+ <td>ViT-S/14 distilled</td>
53
+ <td align="right">21 M</td>
54
+ <td align="center">:white_check_mark:</td>
55
+ <td align="right">79.1%</td>
56
+ <td align="right">80.9%</td>
57
+ <td><a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vits14/dinov2_vits14_reg4_pretrain.pth">backbone only</a></td>
58
+ </tr>
59
+ <tr>
60
+ <td>ViT-B/14 distilled</td>
61
+ <td align="right">86 M</td>
62
+ <td align="center">:x:</td>
63
+ <td align="right">82.1%</td>
64
+ <td align="right">84.5%</td>
65
+ <td><a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitb14/dinov2_vitb14_pretrain.pth">backbone only</a></td>
66
+ </tr>
67
+ <tr>
68
+ <td>ViT-B/14 distilled</td>
69
+ <td align="right">86 M</td>
70
+ <td align="center">:white_check_mark:</td>
71
+ <td align="right">82.0%</td>
72
+ <td align="right">84.6%</td>
73
+ <td><a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitb14/dinov2_vitb14_reg4_pretrain.pth">backbone only</a></td>
74
+ </tr>
75
+ <tr>
76
+ <td>ViT-L/14 distilled</td>
77
+ <td align="right">300 M</td>
78
+ <td align="center">:x:</td>
79
+ <td align="right">83.5%</td>
80
+ <td align="right">86.3%</td>
81
+ <td><a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitl14/dinov2_vitl14_pretrain.pth">backbone only</a></td>
82
+ </tr>
83
+ <tr>
84
+ <td>ViT-L/14 distilled</td>
85
+ <td align="right">300 M</td>
86
+ <td align="center">:white_check_mark:</td>
87
+ <td align="right">83.8%</td>
88
+ <td align="right">86.7%</td>
89
+ <td><a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitl14/dinov2_vitl14_reg4_pretrain.pth">backbone only</a></td>
90
+ </tr>
91
+ <tr>
92
+ <td>ViT-g/14</td>
93
+ <td align="right">1,100 M</td>
94
+ <td align="center">:x:</td>
95
+ <td align="right">83.5%</td>
96
+ <td align="right">86.5%</td>
97
+ <td><a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_pretrain.pth">backbone only</a></td>
98
+ </tr>
99
+ <tr>
100
+ <td>ViT-g/14</td>
101
+ <td align="right">1,100 M</td>
102
+ <td align="center">:white_check_mark:</td>
103
+ <td align="right">83.7%</td>
104
+ <td align="right">87.1%</td>
105
+ <td><a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_reg4_pretrain.pth">backbone only</a></td>
106
+ </tr>
107
+ </tbody>
108
+ </table>
109
+
110
+ ### Pretrained backbones (via PyTorch Hub)
111
+
112
+ Please follow the instructions [here](https://pytorch.org/get-started/locally/) to install PyTorch (the only required dependency for loading the model). Installing PyTorch with CUDA support is strongly recommended.
113
+
114
+ A corresponding [model card](MODEL_CARD.md) is included in the repository.
115
+
116
+ ```python
117
+ import torch
118
+
119
+ # DINOv2
120
+ dinov2_vits14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14')
121
+ dinov2_vitb14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitb14')
122
+ dinov2_vitl14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitl14')
123
+ dinov2_vitg14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitg14')
124
+
125
+ # DINOv2 with registers
126
+ dinov2_vits14_reg = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14_reg')
127
+ dinov2_vitb14_reg = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitb14_reg')
128
+ dinov2_vitl14_reg = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitl14_reg')
129
+ dinov2_vitg14_reg = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitg14_reg')
130
+ ```
131
+
132
+ ### Pretrained heads - Image classification
133
+
134
+ <table style="margin: auto">
135
+ <thead>
136
+ <tr>
137
+ <th rowspan="2">backbone</th>
138
+ <th rowspan="2">with<br />registers</th>
139
+ <th>download</th>
140
+ </tr>
141
+ <tr>
142
+ <th>ImageNet</th>
143
+ </tr>
144
+ </thead>
145
+ <tbody>
146
+ <tr>
147
+ <td>ViT-S/14 distilled</td>
148
+ <td align="center">:x:</td>
149
+ <td>
150
+ linear head (<a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vits14/dinov2_vits14_linear_head.pth">1 layer</a>,
151
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vits14/dinov2_vits14_linear4_head.pth">4 layers</a>)
152
+ </td>
153
+ </tr>
154
+ <tr>
155
+ <td>ViT-S/14 distilled</td>
156
+ <td align="center">:white_check_mark:</td>
157
+ <td>
158
+ linear head (<a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vits14/dinov2_vits14_reg4_linear_head.pth">1 layer</a>,
159
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vits14/dinov2_vits14_reg4_linear4_head.pth">4 layers</a>)
160
+ </td>
161
+ </tr>
162
+ <tr>
163
+ <td>ViT-B/14 distilled</td>
164
+ <td align="center">:x:</td>
165
+ <td>
166
+ linear head (<a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitb14/dinov2_vitb14_linear_head.pth">1 layer</a>,
167
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitb14/dinov2_vitb14_linear4_head.pth">4 layers</a>)
168
+ </tr>
169
+ <tr>
170
+ <td>ViT-B/14 distilled</td>
171
+ <td align="center">:white_check_mark:</td>
172
+ <td>
173
+ linear head (<a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitb14/dinov2_vitb14_reg4_linear_head.pth">1 layer</a>,
174
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitb14/dinov2_vitb14_reg4_linear4_head.pth">4 layers</a>)
175
+ </tr>
176
+ <tr>
177
+ <td>ViT-L/14 distilled</td>
178
+ <td align="center">:x:</td>
179
+ <td>
180
+ linear head (<a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitl14/dinov2_vitl14_linear_head.pth">1 layer</a>,
181
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitl14/dinov2_vitl14_linear4_head.pth">4 layers</a>)
182
+ </tr>
183
+ <tr>
184
+ <td>ViT-L/14 distilled</td>
185
+ <td align="center">:white_check_mark:</td>
186
+ <td>
187
+ linear head (<a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitl14/dinov2_vitl14_reg4_linear_head.pth">1 layer</a>,
188
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitl14/dinov2_vitl14_reg4_linear4_head.pth">4 layers</a>)
189
+ </tr>
190
+ <tr>
191
+ <td>ViT-g/14</td>
192
+ <td align="center">:x:</td>
193
+ <td>
194
+ linear head (<a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_linear_head.pth">1 layer</a>,
195
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_linear4_head.pth">4 layers</a>)
196
+ </tr>
197
+ <tr>
198
+ <td>ViT-g/14</td>
199
+ <td align="center">:white_check_mark:</td>
200
+ <td>
201
+ linear head (<a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_reg4_linear_head.pth">1 layer</a>,
202
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_reg4_linear4_head.pth">4 layers</a>)
203
+ </tr>
204
+ </tbody>
205
+ </table>
206
+
207
+ The (full) classifier models can be loaded via PyTorch Hub:
208
+
209
+ ```python
210
+ import torch
211
+
212
+ # DINOv2
213
+ dinov2_vits14_lc = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14_lc')
214
+ dinov2_vitb14_lc = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitb14_lc')
215
+ dinov2_vitl14_lc = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitl14_lc')
216
+ dinov2_vitg14_lc = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitg14_lc')
217
+
218
+ # DINOv2 with registers
219
+ dinov2_vits14_reg_lc = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14_reg_lc')
220
+ dinov2_vitb14_reg_lc = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitb14_reg_lc')
221
+ dinov2_vitl14_reg_lc = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitl14_reg_lc')
222
+ dinov2_vitg14_reg_lc = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitg14_reg_lc')
223
+ ```
224
+
225
+ ### Pretrained heads - Depth estimation
226
+
227
+ <table style="margin: auto">
228
+ <thead>
229
+ <tr>
230
+ <th rowspan="2">backbone</th>
231
+ <th colspan="2">download head</th>
232
+ </tr>
233
+ <tr>
234
+ <th>NYUd</th>
235
+ <th>KITTI</th>
236
+ </tr>
237
+ </thead>
238
+ <tbody>
239
+ <tr>
240
+ <td>ViT-S/14 distilled</td>
241
+ <td>
242
+ linear (<a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vits14/dinov2_vits14_nyu_linear_head.pth">1 layer</a>,
243
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vits14/dinov2_vits14_nyu_linear4_head.pth">4 layers</a>),
244
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vits14/dinov2_vits14_nyu_dpt_head.pth">DPT</a>
245
+ </td>
246
+ <td>
247
+ linear (<a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vits14/dinov2_vits14_kitti_linear_head.pth">1 layer</a>,
248
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vits14/dinov2_vits14_kitti_linear4_head.pth">4 layers</a>),
249
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vits14/dinov2_vits14_kitti_dpt_head.pth">DPT</a>
250
+ </td>
251
+ </tr>
252
+ <tr>
253
+ <td>ViT-B/14 distilled</td>
254
+ <td>
255
+ linear (<a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitb14/dinov2_vitb14_nyu_linear_head.pth">1 layer</a>,
256
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitb14/dinov2_vitb14_nyu_linear4_head.pth">4 layers</a>),
257
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitb14/dinov2_vitb14_nyu_dpt_head.pth">DPT</a>
258
+ </td>
259
+ <td>
260
+ linear (<a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitb14/dinov2_vitb14_kitti_linear_head.pth">1 layer</a>,
261
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitb14/dinov2_vitb14_kitti_linear4_head.pth">4 layers</a>),
262
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitb14/dinov2_vitb14_kitti_dpt_head.pth">DPT</a>
263
+ </td>
264
+ </tr>
265
+ <tr>
266
+ <td>ViT-L/14 distilled</td>
267
+ <td>
268
+ linear (<a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitl14/dinov2_vitl14_nyu_linear_head.pth">1 layer</a>,
269
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitl14/dinov2_vitl14_nyu_linear4_head.pth">4 layers</a>),
270
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitl14/dinov2_vitl14_nyu_dpt_head.pth">DPT</a>
271
+ </td>
272
+ <td>
273
+ linear (<a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitl14/dinov2_vitl14_kitti_linear_head.pth">1 layer</a>,
274
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitl14/dinov2_vitl14_kitti_linear4_head.pth">4 layers</a>),
275
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitl14/dinov2_vitl14_kitti_dpt_head.pth">DPT</a>
276
+ </td>
277
+ </tr>
278
+ <tr>
279
+ <td>ViT-g/14</td>
280
+ <td>
281
+ linear (<a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_nyu_linear_head.pth">1 layer</a>,
282
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_nyu_linear4_head.pth">4 layers</a>),
283
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_nyu_dpt_head.pth">DPT</a>
284
+ </td>
285
+ <td>
286
+ linear (<a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_kitti_linear_head.pth">1 layer</a>,
287
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_kitti_linear4_head.pth">4 layers</a>),
288
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_kitti_dpt_head.pth">DPT</a>
289
+ </td>
290
+ </tr>
291
+ </tbody>
292
+ </table>
293
+
294
+ ### Pretrained heads - Semantic segmentation
295
+
296
+ <table style="margin: auto">
297
+ <thead>
298
+ <tr>
299
+ <th rowspan="2">backbone</th>
300
+ <th>download model</th>
301
+ <th colspan="2">download head</th>
302
+ </tr>
303
+ <tr>
304
+ <th>ADE20K</th>
305
+ <th>ADE20K</th>
306
+ <th>VOC2012</th>
307
+ </tr>
308
+ </thead>
309
+ <tbody>
310
+ <tr>
311
+ <td>ViT-S/14 distilled</td>
312
+ <td></td>
313
+ <td>
314
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vits14/dinov2_vits14_ade20k_linear_head.pth">linear</a>,
315
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vits14/dinov2_vits14_ade20k_ms_head.pth">multi-scale</a>
316
+ </td>
317
+ <td>
318
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vits14/dinov2_vits14_voc2012_linear_head.pth">linear</a>,
319
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vits14/dinov2_vits14_voc2012_ms_head.pth">multi-scale</a>
320
+ </td>
321
+ </tr>
322
+ <tr>
323
+ <td>ViT-B/14 distilled</td>
324
+ <td></td>
325
+ <td>
326
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitb14/dinov2_vitb14_ade20k_linear_head.pth">linear</a>,
327
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitb14/dinov2_vitb14_ade20k_ms_head.pth">multi-scale</a>
328
+ </td>
329
+ <td>
330
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitb14/dinov2_vitb14_voc2012_linear_head.pth">linear</a>,
331
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitb14/dinov2_vitb14_voc2012_ms_head.pth">multi-scale</a>
332
+ </td>
333
+ </tr>
334
+ <tr>
335
+ <td>ViT-L/14 distilled</td>
336
+ <td></td>
337
+ <td>
338
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitl14/dinov2_vitl14_ade20k_linear_head.pth">linear</a>,
339
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitl14/dinov2_vitl14_ade20k_ms_head.pth">multi-scale</a>
340
+ </td>
341
+ <td>
342
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitl14/dinov2_vitl14_voc2012_linear_head.pth">linear</a>,
343
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitl14/dinov2_vitl14_voc2012_ms_head.pth">multi-scale</a>
344
+ </td>
345
+ </tr>
346
+ <tr>
347
+ <td>ViT-g/14</td>
348
+ <td>
349
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_ade20k_m2f.pth">Mask2Former</a>
350
+ </td>
351
+ <td>
352
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_ade20k_linear_head.pth">linear</a>,
353
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_ade20k_ms_head.pth">multi-scale</a>
354
+ </td>
355
+ <td>
356
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_voc2012_linear_head.pth">linear</a>,
357
+ <a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_voc2012_ms_head.pth">multi-scale</a>
358
+ </td>
359
+ </tr>
360
+ </tbody>
361
+ </table>
362
+
363
+ ## Installation
364
+
365
+ The training and evaluation code requires PyTorch 2.0 and [xFormers](https://github.com/facebookresearch/xformers) 0.0.18 as well as a number of other third-party packages. Note that the code has only been tested with the specified versions and expects a Linux environment. To set up all the required dependencies for training and evaluation, please follow the instructions below:
366
+
367
+ *[conda](https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html)* **(Recommended)** - Clone the repository and then create and activate a `dinov2` conda environment using the provided environment definition:
368
+
369
+ ```shell
370
+ conda env create -f conda.yaml
371
+ conda activate dinov2
372
+ ```
373
+
374
+ *[pip](https://pip.pypa.io/en/stable/getting-started/)* - Clone the repository and then use the provided `requirements.txt` to install the dependencies:
375
+
376
+ ```shell
377
+ pip install -r requirements.txt
378
+ ```
379
+
380
+ For dense tasks (depth estimation and semantic segmentation), there are additional dependencies (specific versions of `mmcv` and `mmsegmentation`) which are captured in the `extras` dependency specifications:
381
+
382
+ *[conda](https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html)* **(Recommended)**:
383
+
384
+ ```shell
385
+ conda env create -f conda-extras.yaml
386
+ conda activate dinov2-extras
387
+ ```
388
+
389
+ *[pip](https://pip.pypa.io/en/stable/getting-started/)*:
390
+
391
+ ```shell
392
+ pip install -r requirements.txt -r requirements-extras.txt
393
+ ```
394
+
395
+ ## Data preparation
396
+
397
+ ### ImageNet-1k
398
+
399
+ The root directory of the dataset should hold the following contents:
400
+
401
+ - `<ROOT>/test/ILSVRC2012_test_00000001.JPEG`
402
+ - `<ROOT>/test/[..]`
403
+ - `<ROOT>/test/ILSVRC2012_test_00100000.JPEG`
404
+ - `<ROOT>/train/n01440764/n01440764_10026.JPEG`
405
+ - `<ROOT>/train/[...]`
406
+ - `<ROOT>/train/n15075141/n15075141_9993.JPEG`
407
+ - `<ROOT>/val/n01440764/ILSVRC2012_val_00000293.JPEG`
408
+ - `<ROOT>/val/[...]`
409
+ - `<ROOT>/val/n15075141/ILSVRC2012_val_00049174.JPEG`
410
+ - `<ROOT>/labels.txt`
411
+
412
+ The provided dataset implementation expects a few additional metadata files to be present under the extra directory:
413
+
414
+ - `<EXTRA>/class-ids-TRAIN.npy`
415
+ - `<EXTRA>/class-ids-VAL.npy`
416
+ - `<EXTRA>/class-names-TRAIN.npy`
417
+ - `<EXTRA>/class-names-VAL.npy`
418
+ - `<EXTRA>/entries-TEST.npy`
419
+ - `<EXTRA>/entries-TRAIN.npy`
420
+ - `<EXTRA>/entries-VAL.npy`
421
+
422
+ These metadata files can be generated (once) with the following lines of Python code:
423
+
424
+ ```python
425
+ from dinov2.data.datasets import ImageNet
426
+
427
+ for split in ImageNet.Split:
428
+ dataset = ImageNet(split=split, root="<ROOT>", extra="<EXTRA>")
429
+ dataset.dump_extra()
430
+ ```
431
+
432
+ Note that the root and extra directories do not have to be distinct directories.
433
+
434
+ ### ImageNet-22k
435
+
436
+ Please adapt the [dataset class](dinov2/data/datasets/image_net_22k.py) to match your local setup.
437
+
438
+ <br />
439
+
440
+ :warning: To execute the commands provided in the next sections for training and evaluation, the `dinov2` package should be included in the Python module search path, i.e. simply prefix the command to run with `PYTHONPATH=.`.
441
+
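+ For example, assuming the repository root is the current working directory, a command from the next sections would be launched as:
+
+ ```shell
+ PYTHONPATH=. python dinov2/run/train/train.py --help
+ ```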
442
+ ## Training
443
+
444
+ ### Fast setup: training DINOv2 ViT-L/16 on ImageNet-1k
445
+
446
+ Run DINOv2 training on 4 A100-80GB nodes (32 GPUs) in a SLURM cluster environment with submitit:
447
+
448
+ ```shell
449
+ python dinov2/run/train/train.py \
450
+ --nodes 4 \
451
+ --config-file dinov2/configs/train/vitl16_short.yaml \
452
+ --output-dir <PATH/TO/OUTPUT/DIR> \
453
+ train.dataset_path=ImageNet:split=TRAIN:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET>
454
+ ```
455
+
456
+ Training time is approximately 1 day and the resulting checkpoint should reach 81.6% on k-NN eval and 82.9% on linear eval.
457
+
458
+ The training code saves the weights of the teacher in the `eval` folder every 12500 iterations for evaluation.
459
+
460
+ ### Long setup: training DINOv2 ViT-L/14 on ImageNet-22k
461
+
462
+ Run DINOv2 training on 12 A100-80GB nodes (96 GPUs) in a SLURM cluster environment with submitit:
463
+
464
+ ```shell
465
+ python dinov2/run/train/train.py \
466
+ --nodes 12 \
467
+ --config-file dinov2/configs/train/vitl14.yaml \
468
+ --output-dir <PATH/TO/OUTPUT/DIR> \
469
+ train.dataset_path=ImageNet22k:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET>
470
+ ```
471
+
472
+ Training time is approximately 3.3 days and the resulting checkpoint should reach 82.0% on k-NN eval and 84.5% on linear eval.
473
+
474
+ The training code saves the weights of the teacher in the `eval` folder every 12500 iterations for evaluation.
475
+
476
+
477
+ ## Evaluation
478
+
479
+ The training code regularly saves the teacher weights. In order to evaluate the model, run the following evaluation on a single node:
480
+
481
+ ### k-NN classification on ImageNet-1k
482
+
483
+ ```shell
484
+ python dinov2/run/eval/knn.py \
485
+ --config-file <PATH/TO/OUTPUT/DIR>/config.yaml \
486
+ --pretrained-weights <PATH/TO/OUTPUT/DIR>/eval/training_24999/teacher_checkpoint.pth \
487
+ --output-dir <PATH/TO/OUTPUT/DIR>/eval/training_24999/knn \
488
+ --train-dataset ImageNet:split=TRAIN:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET> \
489
+ --val-dataset ImageNet:split=VAL:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET>
490
+ ```
491
+
492
+ ### Logistic regression classification on ImageNet-1k
493
+
494
+ ```shell
495
+ python dinov2/run/eval/log_regression.py \
496
+ --config-file <PATH/TO/OUTPUT/DIR>/config.yaml \
497
+ --pretrained-weights <PATH/TO/OUTPUT/DIR>/eval/training_24999/teacher_checkpoint.pth \
498
+ --output-dir <PATH/TO/OUTPUT/DIR>/eval/training_24999/logreg \
499
+ --train-dataset ImageNet:split=TRAIN:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET> \
500
+ --val-dataset ImageNet:split=VAL:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET>
501
+ ```
502
+
503
+ ### Linear classification with data augmentation on ImageNet-1k
504
+
505
+ ```shell
506
+ python dinov2/run/eval/linear.py \
507
+ --config-file <PATH/TO/OUTPUT/DIR>/config.yaml \
508
+ --pretrained-weights <PATH/TO/OUTPUT/DIR>/eval/training_24999/teacher_checkpoint.pth \
509
+ --output-dir <PATH/TO/OUTPUT/DIR>/eval/training_24999/linear \
510
+ --train-dataset ImageNet:split=TRAIN:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET> \
511
+ --val-dataset ImageNet:split=VAL:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET>
512
+ ```
513
+
514
+ We release the weights from evaluating the different models:
515
+
516
+ <table style="margin: auto">
517
+ <tr>
518
+ <th>model</th>
519
+ <th>with<br />registers</th>
520
+ <th>ImageNet<br />top-1</th>
521
+ <th>linear evaluation</th>
522
+ </tr>
523
+ <tr>
524
+ <td>ViT-S/14 distilled</td>
525
+ <td align="center">:x:</td>
526
+ <td align="right">81.1%</td>
527
+ <td><a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vits14/dinov2_vits14_linear_head.pth">linear head weights</a></td>
528
+ </tr>
529
+ <tr>
530
+ <td>ViT-S/14 distilled</td>
531
+ <td align="center">:white_check_mark:</td>
532
+ <td align="right">80.8%</td>
533
+ <td><a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vits14/dinov2_vits14_reg4_linear_head.pth">linear head weights</a></td>
534
+ </tr>
535
+ <tr>
536
+ <td>ViT-B/14 distilled</td>
537
+ <td align="center">:x:</td>
538
+ <td align="right">84.5%</td>
539
+ <td><a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitb14/dinov2_vitb14_linear_head.pth">linear head weights</a></td>
540
+ </tr>
541
+ <tr>
542
+ <td>ViT-B/14 distilled</td>
543
+ <td align="center">:white_check_mark:</td>
544
+ <td align="right">84.4%</td>
545
+ <td><a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitb14/dinov2_vitb14_reg4_linear_head.pth">linear head weights</a></td>
546
+ </tr>
547
+ <tr>
548
+ <td>ViT-L/14 distilled</td>
549
+ <td align="center">:x:</td>
550
+ <td align="right">86.3%</td>
551
+ <td><a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitl14/dinov2_vitl14_linear_head.pth">linear head weights</a></td>
552
+ </tr>
553
+ <tr>
554
+ <td>ViT-L/14 distilled</td>
555
+ <td align="center">:white_check_mark:</td>
556
+ <td align="right">86.5%</td>
557
+ <td><a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitl14/dinov2_vitl14_reg4_linear_head.pth">linear head weights</a></td>
558
+ </tr>
559
+ <tr>
560
+ <td>ViT-g/14</td>
561
+ <td align="center">:x:</td>
562
+ <td align="right">86.5%</td>
563
+ <td><a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_linear_head.pth">linear head weights</a></td>
564
+ </tr>
565
+ <tr>
566
+ <td>ViT-g/14</td>
567
+ <td align="center">:white_check_mark:</td>
568
+ <td align="right">87.0%</td>
569
+ <td><a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_reg4_linear_head.pth">linear head weights</a></td>
570
+ </tr>
571
+ </table>
572
+
573
+ The performance of the provided pretrained model weights can be evaluated as follows on ImageNet-1k:
574
+
575
+ ```shell
576
+ python dinov2/run/eval/linear.py \
577
+ --config-file dinov2/configs/eval/vitg14_pretrain.yaml \
578
+ --pretrained-weights https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_pretrain.pth \
579
+ --train-dataset ImageNet:split=TRAIN:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET> \
580
+ --val-dataset ImageNet:split=VAL:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET>
581
+ ```
582
+
+ ## Notebooks
+
+ A few notebooks are provided to help the community leverage the models and code:
+
+ <ul>
+ <li><a href="https://github.com/facebookresearch/dinov2/blob/main/notebooks/depth_estimation.ipynb">Depth estimation</a> - How to load and use the depth heads in combination with a matching backbone via mmcv</li>
+ <li><a href="https://github.com/facebookresearch/dinov2/blob/main/notebooks/semantic_segmentation.ipynb">Semantic segmentation</a> - How to load and use the segmentation heads in combination with a matching backbone via mmcv, and also how to load and use the Mask2Former-based segmentation model trained on ADE20K</li>
+ </ul>
+
+ ## License
+
+ DINOv2 code and model weights are released under the Apache License 2.0. See [LICENSE](LICENSE) for additional details.
+
+ ## Contributing
+
+ See [contributing](CONTRIBUTING.md) and the [code of conduct](CODE_OF_CONDUCT.md).
+
+ ## Citing DINOv2
+
+ If you find this repository useful, please consider giving a star :star: and citation :t-rex::
+
+ ```
+ @misc{oquab2023dinov2,
+   title={DINOv2: Learning Robust Visual Features without Supervision},
+   author={Oquab, Maxime and Darcet, Timothée and Moutakanni, Theo and Vo, Huy V. and Szafraniec, Marc and Khalidov, Vasil and Fernandez, Pierre and Haziza, Daniel and Massa, Francisco and El-Nouby, Alaaeldin and Howes, Russell and Huang, Po-Yao and Xu, Hu and Sharma, Vasu and Li, Shang-Wen and Galuba, Wojciech and Rabbat, Mike and Assran, Mido and Ballas, Nicolas and Synnaeve, Gabriel and Misra, Ishan and Jegou, Herve and Mairal, Julien and Labatut, Patrick and Joulin, Armand and Bojanowski, Piotr},
+   journal={arXiv:2304.07193},
+   year={2023}
+ }
+ ```
+
+ ```
+ @misc{darcet2023vitneedreg,
+   title={Vision Transformers Need Registers},
+   author={Darcet, Timothée and Oquab, Maxime and Mairal, Julien and Bojanowski, Piotr},
+   journal={arXiv:2309.16588},
+   year={2023}
+ }
+ ```
conda-extras.yaml ADDED
@@ -0,0 +1,24 @@
+ name: dinov2-extras
+ channels:
+   - defaults
+   - pytorch
+   - nvidia
+   - xformers
+   - conda-forge
+ dependencies:
+   - python=3.9
+   - pytorch::pytorch=2.0.0
+   - pytorch::pytorch-cuda=11.7.0
+   - pytorch::torchvision=0.15.0
+   - omegaconf
+   - torchmetrics=0.10.3
+   - fvcore
+   - iopath
+   - xformers::xformers=0.0.18
+   - pip
+   - pip:
+     - git+https://github.com/facebookincubator/submitit
+     - --extra-index-url https://pypi.nvidia.com
+     - cuml-cu11
+     - mmcv-full==1.5.0
+     - mmsegmentation==0.27.0
conda.yaml ADDED
@@ -0,0 +1,22 @@
+ name: dinov2
+ channels:
+   - defaults
+   - pytorch
+   - nvidia
+   - xformers
+   - conda-forge
+ dependencies:
+   - python=3.9
+   - pytorch::pytorch=2.0.0
+   - pytorch::pytorch-cuda=11.7.0
+   - pytorch::torchvision=0.15.0
+   - omegaconf
+   - torchmetrics=0.10.3
+   - fvcore
+   - iopath
+   - xformers::xformers=0.0.18
+   - pip
+   - pip:
+     - git+https://github.com/facebookincubator/submitit
+     - --extra-index-url https://pypi.nvidia.com
+     - cuml-cu11
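Both environment files follow the standard conda format, so either one can be instantiated directly; a minimal sketch, using the `name:` fields defined above:

```shell
# base training / evaluation environment
conda env create -f conda.yaml
conda activate dinov2

# alternatively, the environment with dense-prediction extras (mmcv-full, mmsegmentation)
conda env create -f conda-extras.yaml
conda activate dinov2-extras
```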
dinov2_vitl14_reg4_pretrain.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:36e4deffbaef061a2576705b0c36f93621e2ae20bf6274694821b0b492551b51
+ size 1217607321
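This entry is a Git LFS pointer rather than the checkpoint itself; the `oid` and `size` fields describe a payload of roughly 1.2 GB. After cloning, the actual weights can be fetched with Git LFS, for example:

```shell
git lfs install
git lfs pull --include="dinov2_vitl14_reg4_pretrain.pth"
```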
hubconf.py ADDED
@@ -0,0 +1,15 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ #
+ # This source code is licensed under the Apache License, Version 2.0
+ # found in the LICENSE file in the root directory of this source tree.
+
+
+ from dinov2.hub.backbones import dinov2_vitb14, dinov2_vitg14, dinov2_vitl14, dinov2_vits14
+ from dinov2.hub.backbones import dinov2_vitb14_reg, dinov2_vitg14_reg, dinov2_vitl14_reg, dinov2_vits14_reg
+ from dinov2.hub.classifiers import dinov2_vitb14_lc, dinov2_vitg14_lc, dinov2_vitl14_lc, dinov2_vits14_lc
+ from dinov2.hub.classifiers import dinov2_vitb14_reg_lc, dinov2_vitg14_reg_lc, dinov2_vitl14_reg_lc, dinov2_vits14_reg_lc
+ from dinov2.hub.depthers import dinov2_vitb14_ld, dinov2_vitg14_ld, dinov2_vitl14_ld, dinov2_vits14_ld
+ from dinov2.hub.depthers import dinov2_vitb14_dd, dinov2_vitg14_dd, dinov2_vitl14_dd, dinov2_vits14_dd
+
+
+ dependencies = ["torch"]
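Because `hubconf.py` sits at the repository root, the entrypoints above can be loaded from a local checkout as well as from the GitHub repo via `torch.hub`. A minimal sketch; `source="local"` assumes the working directory is the repository root, so that the `dinov2` package is importable:

```python
import torch

# ViT-S/14 backbone with four register tokens, loaded from the local checkout
backbone = torch.hub.load(".", "dinov2_vits14_reg", source="local")
backbone.eval()

with torch.no_grad():
    feats = backbone(torch.randn(1, 3, 224, 224))  # 224 is a multiple of the 14-pixel patch size
print(feats.shape)  # e.g. torch.Size([1, 384]) for the ViT-S/14 image embedding
```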
pyproject.toml ADDED
@@ -0,0 +1,29 @@
+ [tool.black]
+ line-length = 120
+
+ [tool.pylint.master]
+ persistent = false
+ score = false
+
+ [tool.pylint.messages_control]
+ disable = "all"
+ enable = [
+     "miscellaneous",
+     "similarities",
+ ]
+
+ [tool.pylint.similarities]
+ ignore-comments = true
+ ignore-docstrings = true
+ ignore-imports = true
+ min-similarity-lines = 8
+
+ [tool.pylint.reports]
+ reports = false
+
+ [tool.pylint.miscellaneous]
+ notes = [
+     "FIXME",
+     "XXX",
+     "TODO",
+ ]
requirements-dev.txt ADDED
@@ -0,0 +1,3 @@
+ black==22.6.0
+ flake8==5.0.4
+ pylint==2.15.0
requirements-extras.txt ADDED
@@ -0,0 +1,2 @@
+ mmcv-full==1.5.0
+ mmsegmentation==0.27.0
requirements.txt ADDED
@@ -0,0 +1,11 @@
+ --extra-index-url https://download.pytorch.org/whl/cu117
+ torch==2.0.0
+ torchvision==0.15.0
+ omegaconf
+ torchmetrics==0.10.3
+ fvcore
+ iopath
+ xformers==0.0.18
+ submitit
+ --extra-index-url https://pypi.nvidia.com
+ cuml-cu11
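For a pip-based setup instead of conda, the pinned requirements can be installed directly; the sketch below assumes a CUDA 11.7 host and an activated virtual environment:

```shell
pip install -r requirements.txt
# optional: dense-prediction extras and dev/lint tooling
pip install -r requirements-extras.txt -r requirements-dev.txt
```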
setup.cfg ADDED
@@ -0,0 +1,8 @@
+ [flake8]
+ max-line-length = 120
+ ignore = E203,E501,W503
+ per-file-ignores =
+     __init__.py:F401
+     hubconf.py:F401
+ exclude =
+     venv
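Together with the `[tool.black]` and `[tool.pylint.*]` sections in pyproject.toml and the pins in requirements-dev.txt, this flake8 configuration drives the formatting and lint checks. A typical local run, assuming the dev requirements are installed:

```shell
black --check .   # formatting, 120-character lines per pyproject.toml
flake8            # style checks configured in setup.cfg
pylint dinov2     # duplicate-code and FIXME/TODO/XXX notes per pyproject.toml
```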
setup.py ADDED
@@ -0,0 +1,88 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ #
+ # This source code is licensed under the Apache License, Version 2.0
+ # found in the LICENSE file in the root directory of this source tree.
+
+ from pathlib import Path
+ import re
+ from typing import List, Tuple
+
+ from setuptools import setup, find_packages
+
+
+ NAME = "dinov2"
+ DESCRIPTION = "PyTorch code and models for the DINOv2 self-supervised learning method."
+
+ URL = "https://github.com/facebookresearch/dinov2"
+ AUTHOR = "FAIR"
+ REQUIRES_PYTHON = ">=3.9.0"
+ HERE = Path(__file__).parent
+
+
+ try:
+     with open(HERE / "README.md", encoding="utf-8") as f:
+         long_description = "\n" + f.read()
+ except FileNotFoundError:
+     long_description = DESCRIPTION
+
+
+ def get_requirements(path: str = HERE / "requirements.txt") -> Tuple[List[str], List[str]]:
+     requirements = []
+     extra_indices = []
+     with open(path) as f:
+         for line in f.readlines():
+             line = line.rstrip("\r\n")
+             if line.startswith("--extra-index-url "):
+                 extra_indices.append(line[18:])
+                 continue
+             requirements.append(line)
+     return requirements, extra_indices
+
+
+ def get_package_version() -> str:
+     with open(HERE / "dinov2/__init__.py") as f:
+         result = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]", f.read(), re.M)
+         if result:
+             return result.group(1)
+     raise RuntimeError("Can't get package version")
+
+
+ requirements, extra_indices = get_requirements()
+ version = get_package_version()
+ dev_requirements, _ = get_requirements(HERE / "requirements-dev.txt")
+ extras_requirements, _ = get_requirements(HERE / "requirements-extras.txt")
+
+
+ setup(
+     name=NAME,
+     version=version,
+     description=DESCRIPTION,
+     long_description=long_description,
+     long_description_content_type="text/markdown",
+     author=AUTHOR,
+     python_requires=REQUIRES_PYTHON,
+     url=URL,
+     packages=find_packages(),
+     package_data={
+         "": ["*.yaml"],
+     },
+     install_requires=requirements,
+     extras_require={
+         "dev": dev_requirements,
+         "extras": extras_requirements,
+     },
+     dependency_links=extra_indices,
+     include_package_data=True,
+     license="Apache",
+     license_files=("LICENSE",),
+     classifiers=[
+         # Trove classifiers: https://github.com/pypa/trove-classifiers/blob/main/src/trove_classifiers/__init__.py
+         "Development Status :: 3 - Alpha",
+         "Intended Audience :: Developers",
+         "Intended Audience :: Science/Research",
+         "License :: OSI Approved :: Apache Software License",
+         "Programming Language :: Python :: 3.9",
+         "Topic :: Scientific/Engineering :: Artificial Intelligence",
+         "Topic :: Software Development :: Libraries :: Python Modules",
+     ],
+ )
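With this setup.py, the package can be installed in editable mode from the repository root, optionally pulling in the `extras` and `dev` groups declared in `extras_require`:

```shell
pip install -e .
# or, with dense-prediction extras and lint/format tooling:
pip install -e ".[extras,dev]"
```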