harshil10 committed on
Commit
a416b2e
1 Parent(s): 08a05f7

Upload 5 files

Files changed (5)
  1. .gitattributes +6 -32
  2. README.md +60 -1
  3. config.json +1 -0
  4. pytorch_model.zip +3 -0
  5. vocab.txt +0 -0
.gitattributes CHANGED
@@ -1,35 +1,9 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
  *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ckpt filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
  *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.mlmodel filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- *.safetensors filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tar filter=lfs diff=lfs merge=lfs -text
  *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.tar.gz filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ pytorch_model.zip filter=lfs diff=lfs merge=lfs -text
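The `.gitattributes` file above decides which paths go through Git LFS. As a rough illustration (not part of the commit), the sketch below approximates which of the five uploaded files the post-commit patterns would route through LFS; real gitattributes matching applies richer rules to full paths, so plain `fnmatch` on file names is only an approximation.

```python
# Illustrative only: approximate which uploaded files the post-commit
# .gitattributes patterns would route through Git LFS.
from fnmatch import fnmatch

lfs_patterns = [
    "*.bin.*", "*.lfs.*", "*.bin", "*.h5", "*.tflite",
    "*.tar.gz", "*.ot", "*.onnx", "pytorch_model.zip",
]
uploaded = ["README.md", "config.json", "pytorch_model.zip", "vocab.txt"]

for name in uploaded:
    tracked = any(fnmatch(name, pattern) for pattern in lfs_patterns)
    print(f"{name}: {'LFS' if tracked else 'regular git'}")
```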
README.md CHANGED
@@ -1,3 +1,62 @@
  ---
- license: apache-2.0
+ language:
+ - en
+
+ license:
+ - mit
+
+ tags:
+ - BERT
+ - MNLI
+ - NLI
+ - transformer
+ - pre-training
+
  ---
+
+ This model is a PyTorch pre-trained model obtained by converting a TensorFlow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert).
+
+ This is one of the smaller pre-trained BERT variants, together with [bert-mini](https://huggingface.co/prajjwal1/bert-mini), [bert-small](https://huggingface.co/prajjwal1/bert-small) and [bert-medium](https://huggingface.co/prajjwal1/bert-medium). They were introduced in the study `Well-Read Students Learn Better: On the Importance of Pre-training Compact Models` ([arXiv](https://arxiv.org/abs/1908.08962)) and ported to HF for the study `Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics` ([arXiv](https://arxiv.org/abs/2110.01518)). These models are intended to be fine-tuned on a downstream task.
+
+ If you use the model, please consider citing both papers:
+ ```
+ @misc{bhargava2021generalization,
+       title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
+       author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
+       year={2021},
+       eprint={2110.01518},
+       archivePrefix={arXiv},
+       primaryClass={cs.CL}
+ }
+
+ @article{DBLP:journals/corr/abs-1908-08962,
+   author     = {Iulia Turc and
+                 Ming{-}Wei Chang and
+                 Kenton Lee and
+                 Kristina Toutanova},
+   title      = {Well-Read Students Learn Better: The Impact of Student Initialization
+                 on Knowledge Distillation},
+   journal    = {CoRR},
+   volume     = {abs/1908.08962},
+   year       = {2019},
+   url        = {http://arxiv.org/abs/1908.08962},
+   eprinttype = {arXiv},
+   eprint     = {1908.08962},
+   timestamp  = {Thu, 29 Aug 2019 16:32:34 +0200},
+   biburl     = {https://dblp.org/rec/journals/corr/abs-1908-08962.bib},
+   bibsource  = {dblp computer science bibliography, https://dblp.org}
+ }
+
+ ```
+ Config of this model:
+ - `prajjwal1/bert-tiny` (L=2, H=128) [Model Link](https://huggingface.co/prajjwal1/bert-tiny)
+
+
+ Other models to check out:
+ - `prajjwal1/bert-mini` (L=4, H=256) [Model Link](https://huggingface.co/prajjwal1/bert-mini)
+ - `prajjwal1/bert-small` (L=4, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-small)
+ - `prajjwal1/bert-medium` (L=8, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-medium)
+
+ The original implementation and more information can be found in [this GitHub repository](https://github.com/prajjwal1/generalize_lm_nli).
+
+ Twitter: [@prajjwal_1](https://twitter.com/prajjwal_1)
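The card above says these checkpoints are meant to be fine-tuned on a downstream task. As a usage note that is not part of the committed README, here is a minimal sketch, assuming the Hugging Face `transformers` and `torch` packages, of loading `prajjwal1/bert-tiny` (the model id referenced in the card) with a fresh classification head for an NLI-style task such as MNLI; the sentence pair is purely illustrative.

```python
# Minimal sketch: load the tiny BERT encoder with a new classification head.
# The head is randomly initialised and only becomes useful after fine-tuning.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "prajjwal1/bert-tiny"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=3)

# Encode a premise/hypothesis pair, as for MNLI.
inputs = tokenizer(
    "A soccer game with multiple males playing.",
    "Some men are playing a sport.",
    return_tensors="pt",
)
logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 3])
```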
config.json ADDED
@@ -0,0 +1 @@
+ {"hidden_size": 128, "hidden_act": "gelu", "initializer_range": 0.02, "vocab_size": 30522, "hidden_dropout_prob": 0.1, "num_attention_heads": 2, "type_vocab_size": 2, "max_position_embeddings": 512, "num_hidden_layers": 2, "intermediate_size": 512, "attention_probs_dropout_prob": 0.1}
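The config describes the bert-tiny architecture: 2 layers, hidden size 128, 2 attention heads, intermediate size 512. A small sketch, assuming the `transformers` library, of rebuilding that architecture from these values and counting its parameters:

```python
# Reconstruct the architecture described by config.json and count parameters.
from transformers import BertConfig, BertModel

config = BertConfig(
    hidden_size=128,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=512,
    hidden_act="gelu",
    vocab_size=30522,
    max_position_embeddings=512,
    type_vocab_size=2,
    hidden_dropout_prob=0.1,
    attention_probs_dropout_prob=0.1,
    initializer_range=0.02,
)
model = BertModel(config)  # randomly initialised; weights come from the checkpoint
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")  # roughly 4.4M
```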
pytorch_model.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2d83df03c5e9a8c35da825c8a286adced8da57525b222bfc632f70a5ff70167b
+ size 16438080
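The three lines above are a Git LFS pointer, not the archive itself: LFS stores the file's sha256 oid and byte size in the repository and keeps the payload elsewhere. A sketch (illustrative, with an assumed local path) of checking a downloaded copy of pytorch_model.zip against the values recorded in this pointer:

```python
# Verify a downloaded pytorch_model.zip against the LFS pointer's oid and size.
import hashlib
from pathlib import Path

EXPECTED_OID = "2d83df03c5e9a8c35da825c8a286adced8da57525b222bfc632f70a5ff70167b"
EXPECTED_SIZE = 16_438_080  # bytes, from the pointer

path = Path("pytorch_model.zip")  # assumed local download location
data = path.read_bytes()

assert len(data) == EXPECTED_SIZE, "size does not match the LFS pointer"
assert hashlib.sha256(data).hexdigest() == EXPECTED_OID, "sha256 does not match"
print("pytorch_model.zip matches the LFS pointer")
```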
vocab.txt ADDED
The diff for this file is too large to render.