Nomnoos committed
Commit 40c1009 · verified · 1 Parent(s): 9eb641b

End of training
.gitattributes CHANGED
@@ -33,3 +33,7 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ image_0.png filter=lfs diff=lfs merge=lfs -text
+ image_1.png filter=lfs diff=lfs merge=lfs -text
+ image_2.png filter=lfs diff=lfs merge=lfs -text
+ image_3.png filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,96 @@
+ ---
+ base_model: HiDream-ai/HiDream-I1-Dev
+ library_name: diffusers
+ license: mit
+ instance_prompt: a man
+ widget:
+ - text: a photo of a man in cafe
+   output:
+     url: image_0.png
+ - text: a photo of a man in cafe
+   output:
+     url: image_1.png
+ - text: a photo of a man in cafe
+   output:
+     url: image_2.png
+ - text: a photo of a man in cafe
+   output:
+     url: image_3.png
+ tags:
+ - text-to-image
+ - diffusers-training
+ - diffusers
+ - lora
+ - hidream
+ - hidream-diffusers
+ - template:sd-lora
+ ---
+
+ <!-- This model card has been generated automatically according to the information the training script had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+
+ # HiDream Image DreamBooth LoRA - Nomnoos/trained-hidream-lora-pickle
+
+ <Gallery />
+
+ ## Model description
+
+ These are Nomnoos/trained-hidream-lora-pickle DreamBooth LoRA weights for HiDream-ai/HiDream-I1-Dev.
+
+ The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [HiDream Image diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_hidream.md).
+
+ ## Trigger words
+
+ You should use `a man` to trigger the image generation.
+
+ ## Download model
+
+ [Download the *.safetensors LoRA](Nomnoos/trained-hidream-lora-pickle/tree/main) in the Files & versions tab.
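+
+ If you prefer to fetch the weights programmatically, a minimal sketch using `huggingface_hub` (assuming the weights keep the default file name `pytorch_lora_weights.safetensors`, as in this repository) could look like:
+
+ ```py
+ >>> from huggingface_hub import hf_hub_download
+
+ >>> # Download the LoRA weights file from the Hub and return its local cache path
+ >>> lora_path = hf_hub_download(
+ ...     repo_id="Nomnoos/trained-hidream-lora-pickle",
+ ...     filename="pytorch_lora_weights.safetensors",
+ ... )
+ >>> print(lora_path)
+ ```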
+
+ ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
+
+ ```py
+ >>> import torch
+ >>> from transformers import PreTrainedTokenizerFast, LlamaForCausalLM
+ >>> from diffusers import HiDreamImagePipeline
+
+ >>> # Load the Llama 3.1 tokenizer and text encoder used as HiDream's fourth text encoder
+ >>> tokenizer_4 = PreTrainedTokenizerFast.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
+ >>> text_encoder_4 = LlamaForCausalLM.from_pretrained(
+ ...     "meta-llama/Meta-Llama-3.1-8B-Instruct",
+ ...     output_hidden_states=True,
+ ...     output_attentions=True,
+ ...     torch_dtype=torch.bfloat16,
+ ... )
+
+ >>> pipe = HiDreamImagePipeline.from_pretrained(
+ ...     "HiDream-ai/HiDream-I1-Full",
+ ...     tokenizer_4=tokenizer_4,
+ ...     text_encoder_4=text_encoder_4,
+ ...     torch_dtype=torch.bfloat16,
+ ... )
+ >>> pipe.enable_model_cpu_offload()
+ >>> # Load the trained LoRA weights and generate with the trigger prompt
+ >>> pipe.load_lora_weights("Nomnoos/trained-hidream-lora-pickle")
+ >>> image = pipe("a man").images[0]
+ ```
+
+ For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
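+
+ As one concrete example of what that documentation covers, a minimal sketch of scaling the LoRA strength and optionally fusing it into the base weights (assuming the `pipe` object from the snippet above, and assuming `HiDreamImagePipeline` exposes the standard diffusers LoRA helpers) might look like:
+
+ ```py
+ >>> # Register the LoRA under an explicit adapter name so its strength can be scaled
+ >>> pipe.load_lora_weights("Nomnoos/trained-hidream-lora-pickle", adapter_name="pickle")
+ >>> # Dial the LoRA down to 80% of its trained effect
+ >>> pipe.set_adapters(["pickle"], adapter_weights=[0.8])
+ >>> image = pipe("a man").images[0]
+
+ >>> # Alternatively, fuse the LoRA into the base weights for slightly faster inference
+ >>> pipe.fuse_lora(lora_scale=0.8)
+ >>> image = pipe("a man").images[0]
+ >>> pipe.unfuse_lora()  # undo the fusion if the unmodified base weights are needed again
+ ```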
+
+
+ ## Intended uses & limitations
+
+ #### How to use
+
+ ```python
+ # TODO: add an example code snippet for running this diffusion pipeline
+ ```
+
+ #### Limitations and bias
+
+ [TODO: provide examples of latent issues and potential remediations]
+
+ ## Training details
+
+ [TODO: describe the data used to train the model]
image_0.png ADDED

Git LFS Details

  • SHA256: 4c74d0464c38bf7f4ab790e89d1984104d839abf3040e1ed04d30e94263f6e6f
  • Pointer size: 132 Bytes
  • Size of remote file: 1.58 MB
image_1.png ADDED

Git LFS Details

  • SHA256: 730d9fc74c076427ae990c53b8386fd08ef59959c4225b2487a1a0df461ba191
  • Pointer size: 132 Bytes
  • Size of remote file: 1.53 MB
image_2.png ADDED

Git LFS Details

  • SHA256: 425f8c3ffbab7855cacb992f49d8fbe84dc2bd94356496822cb720aafae81d84
  • Pointer size: 132 Bytes
  • Size of remote file: 1.57 MB
image_3.png ADDED

Git LFS Details

  • SHA256: 3269d8108cfd2ee3dfb0f8ae84a9baca46dd3d3e54f490db0d0bab9afdfadcaf
  • Pointer size: 132 Bytes
  • Size of remote file: 1.52 MB
logs/dreambooth-hidream-lora/1747827880.472402/events.out.tfevents.1747827880.hong-a100-80-1.42848.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ad0b1ff2d057533db414f25f0da0afba3da709400a1f2aca066c36f6da3e48de
+ size 3506
logs/dreambooth-hidream-lora/1747827880.474366/hparams.yml ADDED
@@ -0,0 +1,75 @@
+ adam_beta1: 0.9
+ adam_beta2: 0.999
+ adam_epsilon: 1.0e-08
+ adam_weight_decay: 0.0001
+ allow_tf32: false
+ bnb_quantization_config_path: null
+ cache_dir: null
+ cache_latents: true
+ caption_column: prompt
+ center_crop: false
+ checkpointing_steps: 500
+ checkpoints_total_limit: null
+ class_data_dir: null
+ class_prompt: null
+ dataloader_num_workers: 0
+ dataset_config_name: null
+ dataset_name: Nomnoos/pickle-test
+ final_validation_prompt: null
+ gradient_accumulation_steps: 1
+ gradient_checkpointing: true
+ hub_model_id: null
+ hub_token: null
+ image_column: image
+ instance_data_dir: null
+ instance_prompt: a man
+ learning_rate: 0.0002
+ local_rank: -1
+ logging_dir: logs
+ logit_mean: 0.0
+ logit_std: 1.0
+ lora_dropout: 0.0
+ lora_layers: null
+ lr_num_cycles: 1
+ lr_power: 1.0
+ lr_scheduler: constant_with_warmup
+ lr_warmup_steps: 100
+ max_grad_norm: 1.0
+ max_sequence_length: 128
+ max_train_steps: 200
+ mixed_precision: bf16
+ mode_scale: 1.29
+ num_class_images: 100
+ num_train_epochs: 13
+ num_validation_images: 4
+ offload: false
+ optimizer: AdamW
+ output_dir: trained-hidream-lora-pickle
+ pretrained_model_name_or_path: HiDream-ai/HiDream-I1-Dev
+ pretrained_text_encoder_4_name_or_path: meta-llama/Meta-Llama-3.1-8B-Instruct
+ pretrained_tokenizer_4_name_or_path: meta-llama/Meta-Llama-3.1-8B-Instruct
+ prior_loss_weight: 1.0
+ prodigy_beta3: null
+ prodigy_decouple: true
+ prodigy_safeguard_warmup: true
+ prodigy_use_bias_correction: true
+ push_to_hub: true
+ random_flip: false
+ rank: 8
+ repeats: 1
+ report_to: tensorboard
+ resolution: 1024
+ resume_from_checkpoint: null
+ revision: null
+ sample_batch_size: 4
+ scale_lr: false
+ seed: 0
+ skip_final_inference: false
+ train_batch_size: 1
+ upcast_before_saving: false
+ use_8bit_adam: true
+ validation_epochs: 25
+ validation_prompt: a photo of a man in cafe
+ variant: null
+ weighting_scheme: none
+ with_prior_preservation: false
logs/dreambooth-hidream-lora/events.out.tfevents.1747827880.hong-a100-80-1.42848.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:827a20d4b1b3d134af116e97f168937aaf3ef49f7ebc7835fe0c9297baa48a09
+ size 12723658
pytorch_lora_weights.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ab80c16eac39e1fe8b38ec34fd0d4b68b22868286bc07508bc85293d25e9845a
+ size 15781160