End of training
- 20250510_111509.log +27 -0
- 20250510_111525.log +0 -0
- README.md +57 -0
- added_tokens.json +3 -0
- config.json +34 -0
- generation_config.json +13 -0
- model.safetensors +3 -0
- special_tokens_map.json +33 -0
- tokenizer.model +3 -0
- tokenizer_config.json +0 -0
- training_args.bin +3 -0
20250510_111509.log
ADDED
@@ -0,0 +1,27 @@
+[2025-05-10 11:15:11] Created output directory: train_results_pred_mask/google_gemma-3-1b-pt_ds1000_upsample1000_predict_mask
+[2025-05-10 11:15:11] Chat mode disabled
+[2025-05-10 11:15:11] Model size is 3B or smaller (1B). Using full fine-tuning.
+[2025-05-10 11:15:11] No QA format data will be used
+[2025-05-10 11:15:11] Limiting dataset size to: 1000 samples
+[2025-05-10 11:15:12] =======================================
+[2025-05-10 11:15:12] Starting training for model: google/gemma-3-1b-pt
+[2025-05-10 11:15:12] =======================================
+[2025-05-10 11:15:12] CUDA_VISIBLE_DEVICES: 0,1,2,3
+[2025-05-10 11:15:12] WANDB_PROJECT: wikidyk-ar
+[2025-05-10 11:15:12] DATA_PATH: data/wikidyk2022-2025_01082025_gpt-4o_evalv2_pages_formatted_combined_v2.json
+[2025-05-10 11:15:12] Global Batch Size: 128
+[2025-05-10 11:15:12] Data Size: 1000
+[2025-05-10 11:15:12] Executing command: torchrun --nproc_per_node "4" --master-port 29501 src/train.py --model_name_or_path "google/gemma-3-1b-pt" --data_path "data/wikidyk2022-2025_01082025_gpt-4o_evalv2_pages_formatted_combined_v2.json" --output_dir "train_results_pred_mask/google_gemma-3-1b-pt_ds1000_upsample1000_predict_mask" --num_upsample "1000" --per_device_train_batch_size "32" --gradient_accumulation_steps "1" --learning_rate "2e-5" --num_train_epochs "1" --model_max_length "32768" --report_to wandb --logging_steps 50 --save_strategy no --bf16 True --use_flash_attention_2 True --qa_data_ratio "-1" --predict_mask "true" --ds_size 1000
+[2025-05-10 11:15:12] Training started at Sat May 10 11:15:12 UTC 2025
+[2025-05-10 11:15:13] ERROR: Training failed for google/gemma-3-1b-pt with exit code 1
+[2025-05-10 11:15:13] ERROR: Training failed for google/gemma-3-1b-pt with exit code 1
+[2025-05-10 11:15:13] Check error log for details: train_results_pred_mask/google_gemma-3-1b-pt_ds1000_upsample1000_predict_mask/20250510_111509.log
+[2025-05-10 11:15:13] Resource usage after training google/gemma-3-1b-pt:
+[2025-05-10 11:15:13] GPU memory usage:
+3635 MiB, 40960 MiB
+3615 MiB, 40960 MiB
+3619 MiB, 40960 MiB
+3611 MiB, 40960 MiB
+[2025-05-10 11:15:13] Disk space usage for model outputs:
+4.0K	train_results_pred_mask/google_gemma-3-1b-pt_ds1000_upsample1000_predict_mask
+[2025-05-10 11:15:13]
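For reference, the "Global Batch Size: 128" reported above follows directly from the launch command: 4 torchrun workers (`--nproc_per_node "4"`) times a per-device batch of 32, with a single gradient-accumulation step. A minimal sanity check:

```python
# Effective global batch size for the torchrun launch logged above:
# nproc_per_node * per_device_train_batch_size * gradient_accumulation_steps
nproc_per_node = 4
per_device_train_batch_size = 32
gradient_accumulation_steps = 1

global_batch_size = (nproc_per_node
                     * per_device_train_batch_size
                     * gradient_accumulation_steps)
assert global_batch_size == 128  # matches "Global Batch Size: 128" in the log
```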
20250510_111525.log
ADDED
The diff for this file is too large to render; see the raw file.
README.md
ADDED
@@ -0,0 +1,57 @@
+---
+library_name: transformers
+license: gemma
+base_model: google/gemma-3-1b-pt
+tags:
+- generated_from_trainer
+model-index:
+- name: google_gemma-3-1b-pt_ds1000_upsample1000_predict_mask
+  results: []
+---
+
+<!-- This model card has been generated automatically according to the information the Trainer had access to. You
+should probably proofread and complete it, then remove this comment. -->
+
+# google_gemma-3-1b-pt_ds1000_upsample1000_predict_mask
+
+This model is a fine-tuned version of [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt) on an unknown dataset.
+
+## Model description
+
+More information needed
+
+## Intended uses & limitations
+
+More information needed
+
+## Training and evaluation data
+
+More information needed
+
+## Training procedure
+
+### Training hyperparameters
+
+The following hyperparameters were used during training:
+- learning_rate: 2e-05
+- train_batch_size: 32
+- eval_batch_size: 8
+- seed: 42
+- distributed_type: multi-GPU
+- num_devices: 4
+- total_train_batch_size: 128
+- total_eval_batch_size: 32
+- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments)
+- lr_scheduler_type: linear
+- num_epochs: 1.0
+
+### Training results
+
+
+
+### Framework versions
+
+- Transformers 4.51.3
+- Pytorch 2.6.0+cu124
+- Datasets 3.5.1
+- Tokenizers 0.21.1
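The card's usage sections are still placeholders, so here is a minimal loading sketch. The path below is the output directory from the training log and is an assumption; substitute the actual Hub repo id if the checkpoint is pushed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed local path of this checkpoint; replace with the Hub repo id if published.
model_path = "train_results_pred_mask/google_gemma-3-1b-pt_ds1000_upsample1000_predict_mask"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16)

inputs = tokenizer("Did you know that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```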
added_tokens.json
ADDED
@@ -0,0 +1,3 @@
+{
+  "<image_soft_token>": 262144
+}
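added_tokens.json contributes a single extra token on top of the base vocabulary. A small round-trip sketch, assuming the tokenizer is loaded from this output directory (the path is an assumption):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "train_results_pred_mask/google_gemma-3-1b-pt_ds1000_upsample1000_predict_mask"
)
# The single entry in added_tokens.json maps the token string to its id.
assert tokenizer.convert_tokens_to_ids("<image_soft_token>") == 262144
```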
config.json
ADDED
@@ -0,0 +1,34 @@
+{
+  "architectures": [
+    "Gemma3ForCausalLM"
+  ],
+  "attention_bias": false,
+  "attention_dropout": 0.0,
+  "attn_logit_softcapping": null,
+  "bos_token_id": 2,
+  "cache_implementation": "hybrid",
+  "eos_token_id": 1,
+  "final_logit_softcapping": null,
+  "head_dim": 256,
+  "hidden_activation": "gelu_pytorch_tanh",
+  "hidden_size": 1152,
+  "initializer_range": 0.02,
+  "intermediate_size": 6912,
+  "max_position_embeddings": 32768,
+  "model_type": "gemma3_text",
+  "num_attention_heads": 4,
+  "num_hidden_layers": 26,
+  "num_key_value_heads": 1,
+  "pad_token_id": 0,
+  "query_pre_attn_scalar": 256,
+  "rms_norm_eps": 1e-06,
+  "rope_local_base_freq": 10000,
+  "rope_scaling": null,
+  "rope_theta": 1000000,
+  "sliding_window": 512,
+  "sliding_window_pattern": 6,
+  "torch_dtype": "bfloat16",
+  "transformers_version": "4.51.3",
+  "use_cache": true,
+  "vocab_size": 262144
+}
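This config pins the text-only Gemma 3 architecture: 26 layers, hidden size 1152, and grouped-query attention with 4 query heads sharing a single key/value head. A sketch reading it back with `AutoConfig` (local path assumed, as above):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "train_results_pred_mask/google_gemma-3-1b-pt_ds1000_upsample1000_predict_mask"
)
print(config.model_type)         # gemma3_text
print(config.num_hidden_layers)  # 26
# Grouped-query attention: 4 query heads share 1 key/value head.
print(config.num_attention_heads, config.num_key_value_heads)  # 4 1
```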
generation_config.json
ADDED
@@ -0,0 +1,13 @@
+{
+  "bos_token_id": 2,
+  "cache_implementation": "hybrid",
+  "do_sample": true,
+  "eos_token_id": [
+    1,
+    106
+  ],
+  "pad_token_id": 0,
+  "top_k": 64,
+  "top_p": 0.95,
+  "transformers_version": "4.51.3"
+}
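generation_config.json supplies the default decoding settings that `model.generate()` picks up automatically: sampling enabled with top-k 64 and top-p 0.95, and two end-of-sequence ids (1 and 106), either of which stops generation. A sketch inspecting these defaults (path again assumed local):

```python
from transformers import GenerationConfig

# Load the defaults shipped in generation_config.json.
gen_config = GenerationConfig.from_pretrained(
    "train_results_pred_mask/google_gemma-3-1b-pt_ds1000_upsample1000_predict_mask"
)
print(gen_config.do_sample, gen_config.top_k, gen_config.top_p)  # True 64 0.95
print(gen_config.eos_token_id)  # [1, 106] -- generation stops on either id

# model.generate(**inputs) uses these defaults; pass e.g. top_k=... to override.
```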
model.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:193463484d2022a9d7dba34a9028853b637994417d9b17a44949d28dcabe431c
+size 1999811208
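Note that the diff shows a Git LFS pointer, not the weights themselves: the `oid` is the SHA-256 of the roughly 2.0 GB safetensors blob. A sketch verifying a downloaded file against the pointer (the local filename is an assumption):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so the 2 GB blob never sits in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the "oid sha256:..." line of the LFS pointer above.
assert sha256_of("model.safetensors") == (
    "193463484d2022a9d7dba34a9028853b637994417d9b17a44949d28dcabe431c"
)
```

The same check works for the tokenizer.model and training_args.bin pointers below.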
special_tokens_map.json
ADDED
@@ -0,0 +1,33 @@
+{
+  "boi_token": "<start_of_image>",
+  "bos_token": {
+    "content": "<bos>",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  },
+  "eoi_token": "<end_of_image>",
+  "eos_token": {
+    "content": "<eos>",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  },
+  "image_token": "<image_soft_token>",
+  "pad_token": {
+    "content": "<pad>",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  },
+  "unk_token": {
+    "content": "<unk>",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  }
+}
tokenizer.model
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1299c11d7cf632ef3b4e11937501358ada021bbdf7c47638d13c0ee982f2e79c
+size 4689074
tokenizer_config.json
ADDED
The diff for this file is too large to render; see the raw file.
training_args.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5d7a0ce79bb181a18ce66142bd7b3fcd8acddc8d0a118592bd81892b1dfeebd7
+size 5432
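training_args.bin is a pickled `transformers.TrainingArguments` object rather than a tensor file, so it has to be unpickled fully. A sketch inspecting it (only unpickle files you trust; the printed values reflect the hyperparameters in the README above):

```python
import torch

# training_args.bin holds a pickled TrainingArguments, not tensors, so
# weights_only=False is required under PyTorch 2.6's safer default.
args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate)                # 2e-05
print(args.per_device_train_batch_size)  # 32
print(args.num_train_epochs)             # 1.0
```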