jq committed on
Commit 2dcff4d · verified · 1 Parent(s): bf91889

Push model using huggingface_hub.

Files changed (3)
  1. README.md +52 -72
  2. config.json +19 -0
  3. model.safetensors +2 -2
README.md CHANGED
@@ -1,75 +1,55 @@
  ---
- library_name: transformers
  tags:
- - generated_from_trainer
- model-index:
- - name: results
-   results: []
  ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # results
-
- This model was trained from scratch on the None dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.0027
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 0.003
- - train_batch_size: 16
- - eval_batch_size: 16
- - seed: 42
- - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: linear
- - num_epochs: 20
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-----:|:----:|:---------------:|
- | No log | 1.0 | 359 | 0.0041 |
- | 0.0048 | 2.0 | 718 | 0.0032 |
- | 0.0044 | 3.0 | 1077 | 0.0031 |
- | 0.0044 | 4.0 | 1436 | 0.0030 |
- | 0.0041 | 5.0 | 1795 | 0.0031 |
- | 0.004 | 6.0 | 2154 | 0.0029 |
- | 0.004 | 7.0 | 2513 | 0.0029 |
- | 0.004 | 8.0 | 2872 | 0.0037 |
- | 0.0039 | 9.0 | 3231 | 0.0029 |
- | 0.0039 | 10.0 | 3590 | 0.0029 |
- | 0.0039 | 11.0 | 3949 | 0.0028 |
- | 0.0037 | 12.0 | 4308 | 0.0029 |
- | 0.0037 | 13.0 | 4667 | 0.0028 |
- | 0.0037 | 14.0 | 5026 | 0.0028 |
- | 0.0037 | 15.0 | 5385 | 0.0028 |
- | 0.0036 | 16.0 | 5744 | 0.0028 |
- | 0.0036 | 17.0 | 6103 | 0.0027 |
- | 0.0036 | 18.0 | 6462 | 0.0027 |
- | 0.0036 | 19.0 | 6821 | 0.0027 |
- | 0.0035 | 20.0 | 7180 | 0.0027 |
-
-
- ### Framework versions
-
- - Transformers 4.48.2
- - Pytorch 2.6.0+cu124
- - Datasets 3.2.0
- - Tokenizers 0.21.0
  ---
+ library_name: segmentation-models-pytorch
+ license: mit
+ pipeline_tag: image-segmentation
  tags:
+ - model_hub_mixin
+ - pytorch_model_hub_mixin
+ - segmentation-models-pytorch
+ - semantic-segmentation
+ - pytorch
+ languages:
+ - python
  ---
+ # Unet Model Card
+
+ Table of Contents:
+ - [Load trained model](#load-trained-model)
+ - [Model init parameters](#model-init-parameters)
+ - [Model metrics](#model-metrics)
+ - [Dataset](#dataset)
+
+ ## Load trained model
+ ```python
+ import segmentation_models_pytorch as smp
+
+ model = smp.from_pretrained("<save-directory-or-this-repo>")
+ ```
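The loaded model is an ordinary PyTorch module that maps image tensors to per-class logits. A minimal inference sketch follows; the input size, the random placeholder tensor, and the lack of normalization are illustrative assumptions, since the card does not document the training preprocessing:

```python
import torch

model.eval()

# Placeholder input: one 3-channel image. Replace with a properly preprocessed
# tensor of shape (batch, in_channels, height, width) matching the training data.
image = torch.rand(1, 3, 256, 256)

with torch.inference_mode():
    logits = model(image)        # shape: (1, classes, 256, 256); classes = 2 here
    mask = logits.argmax(dim=1)  # per-pixel predicted class indices
```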
+
+ ## Model init parameters
+ ```python
+ model_init_params = {
+     "encoder_name": "resnet34",
+     "encoder_depth": 5,
+     "encoder_weights": "imagenet",
+     "decoder_use_batchnorm": True,
+     "decoder_channels": (256, 128, 64, 32, 16),
+     "decoder_attention_type": None,
+     "in_channels": 3,
+     "classes": 2,
+     "activation": None,
+     "aux_params": None
+ }
+ ```
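These keys mirror the constructor arguments of `smp.Unet`, so an architecturally identical model can be rebuilt from the dictionary above. Note that this sketch creates a freshly initialized decoder rather than loading the trained weights stored in this repo:

```python
import segmentation_models_pytorch as smp

# Rebuild the same architecture from the parameters listed in the card.
# This does not load the trained weights; use smp.from_pretrained for that.
model = smp.Unet(**model_init_params)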
+
+ ## Model metrics
+ [More Information Needed]
+
+ ## Dataset
+ Dataset name: [More Information Needed]
+
+ ## More Information
+ - Library: https://github.com/qubvel/segmentation_models.pytorch
+ - Docs: https://smp.readthedocs.io/en/latest/
+
+ This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin)
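For reference, a push like the one in this commit follows the standard mixin workflow that segmentation-models-pytorch models inherit from `PyTorchModelHubMixin`. The sketch below uses placeholder paths and repo ids rather than the actual values for this repo:

```python
# Rough sketch of the save-and-push flow; "<save-directory>" and "<repo-id>" are placeholders.
model.save_pretrained("<save-directory>")  # serializes config.json and model.safetensors locally
model.push_to_hub("<repo-id>")             # uploads the serialized model files to the Hub
```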
config.json ADDED
@@ -0,0 +1,19 @@
+ {
+   "_model_class": "Unet",
+   "activation": null,
+   "aux_params": null,
+   "classes": 2,
+   "decoder_attention_type": null,
+   "decoder_channels": [
+     256,
+     128,
+     64,
+     32,
+     16
+   ],
+   "decoder_use_batchnorm": true,
+   "encoder_depth": 5,
+   "encoder_name": "resnet34",
+   "encoder_weights": "imagenet",
+   "in_channels": 3
+ }
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:54b5d1222cac440295932d9d4d35d03557b7dfd528b8baf8af01a2701d7f93fc
- size 97849976
+ oid sha256:d9229d4645bea961aa04409070515563da3c50b9a03271ac0b97cfa2c8949520
+ size 97849944