maximuspowers committed
Commit 7d9a590 · verified · 1 Parent(s): 896ea71

Update README.md

Files changed (1)
  1. README.md +8 -14
README.md CHANGED
@@ -7,28 +7,22 @@ tags:
  model-index:
  - name: bert-philosophy-adapted
    results: []
+ datasets:
+ - AiresPucrs/stanford-encyclopedia-philosophy
+ language:
+ - en
+ pipeline_tag: text-classification
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
  # bert-philosophy-adapted

- This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
+ This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the [Stanford Encyclopedia of Philosophy](https://huggingface.co/datasets/AiresPucrs/stanford-encyclopedia-philosophy) dataset, using masked language modeling.
  It achieves the following results on the evaluation set:
  - Loss: 1.5044

  ## Model description

- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
+ This model was trained to adapt the BERT encoder to philosophical terminology, with the intention of further training on downstream tasks such as classifying texts by school of philosophy.

  ## Training procedure

@@ -87,4 +81,4 @@ The following hyperparameters were used during training:
  - Transformers 4.52.4
  - Pytorch 2.6.0+cu124
  - Datasets 3.6.0
- - Tokenizers 0.21.1
+ - Tokenizers 0.21.1
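
The updated card describes a masked-language-modeling adaptation, so the natural smoke test is the `fill-mask` pipeline. Below is a minimal sketch, assuming the checkpoint is published under the repo id `maximuspowers/bert-philosophy-adapted` (inferred from the commit author and model name, not stated in the card):

```python
from transformers import pipeline

# Hypothetical repo id, inferred from the commit author and model name;
# replace with the actual checkpoint path if it differs.
MODEL_ID = "maximuspowers/bert-philosophy-adapted"

# The card says the model was adapted with masked language modeling,
# so the fill-mask pipeline exercises exactly that head.
fill_mask = pipeline("fill-mask", model=MODEL_ID)

# BERT's mask token is [MASK]; print the top predicted fillers.
for pred in fill_mask("Epistemology is the study of [MASK]."):
    print(f"{pred['token_str']:>12}  score={pred['score']:.3f}")
```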
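The description also frames the checkpoint as a base encoder for downstream classification (for example, by school of philosophy). A hedged sketch of initialising such a fine-tune follows; the repo id and the label count `NUM_SCHOOLS` are placeholders, not values from the card:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical repo id and label count; neither is fixed by the model card.
MODEL_ID = "maximuspowers/bert-philosophy-adapted"
NUM_SCHOOLS = 5  # one label per school of philosophy in your downstream dataset

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Reuses the adapted encoder weights and attaches a freshly initialised
# classification head, which still needs supervised fine-tuning.
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_ID, num_labels=NUM_SCHOOLS
)

inputs = tokenizer(
    "The categorical imperative binds all rational agents.",
    return_tensors="pt",
    truncation=True,
)
logits = model(**inputs).logits  # shape (1, NUM_SCHOOLS); meaningless until trained
```

From there, the usual `Trainer` or custom training loop over a labelled philosophy corpus would apply.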