mjbuehler committed
Commit 2d4569a · verified · 1 Parent(s): 35dcdea

Update README.md

Files changed (1)
  1. README.md +48 -1
README.md CHANGED
@@ -253,7 +253,7 @@ This produces a JSON file in the ```instance_data_dir``` directory:
  {"file_name": "94.jpg", "prompt": "<leaf microstructure>, arafed image of a green plant with a yellow cross"}
  ```
 
- ```raw
+ ```python
  !accelerate launch train_dreambooth_lora_sdxl.py \
  --pretrained_model_name_or_path="{pretrained_model_name_or_path}" \
  --pretrained_vae_model_name_or_path="{pretrained_vae_model_name_or_path}" \
@@ -277,4 +277,51 @@ This produces a JSON file in the ```instance_data_dir``` directory:
  --max_train_steps=500 \
  --checkpointing_steps=500 \
  --seed="0"
  ```
+
+ ### With prior preservation
+
+ Set the `--with_prior_preservation` flag to enable prior preservation. In this case you must also specify `--class_data_dir` (a directory with class images) and `--class_prompt` (a prompt describing the class), and set `--num_class_images` to the number of class preservation images you want to use. Either place the class images in the directory given by `--class_data_dir`, or the code will auto-generate them from the base model; you can also provide a few yourself and let the code generate the remaining ones.
+
+ An example is provided below, commented out; the code that actually runs here does NOT use prior preservation.
+
+ Some other useful parameters that can be set include:
+
+ - `--rank`: LoRA adapter rank (LoRA alpha is set equal to the rank)
+ - `--use_dora`: set this flag if you want to use DoRA
+
+ Run ```python train_dreambooth_lora_sdxl.py --help``` to get a full list of parameters.
+
+ ```python
+ instance_data_dir = 'local_instance_data_dir'
+ class_prompt = 'a prompt that describes the images in the directory local_instance_data_dir'
+ num_class_images = 10  # how many images you want in this class
+ class_data_dir = 'local_class_data_dir'  # directory with class images, passed to --class_data_dir
+
+ !accelerate launch train_dreambooth_lora_sdxl.py \
+ --pretrained_model_name_or_path="{pretrained_model_name_or_path}" \
+ --pretrained_vae_model_name_or_path="{pretrained_vae_model_name_or_path}" \
+ --dataset_name="{instance_data_dir}" \
+ --class_prompt="{class_prompt}" \
+ --num_class_images={num_class_images} \
+ --with_prior_preservation \
+ --class_data_dir="{class_data_dir}" \
+ --output_dir="{instance_output_dir}" \
+ --caption_column="prompt" \
+ --mixed_precision="fp16" \
+ --instance_prompt="{instance_prompt}" \
+ --validation_prompt="{val_prompt}" \
+ --validation_epochs={val_epochs} \
+ --resolution=1024 \
+ --train_batch_size=1 \
+ --gradient_accumulation_steps=4 \
+ --gradient_checkpointing \
+ --learning_rate=1e-4 \
+ --snr_gamma=5.0 \
+ --lr_scheduler="constant" \
+ --lr_warmup_steps=0 \
+ --use_8bit_adam \
+ --max_train_steps=500 \
+ --checkpointing_steps=500 \
+ --seed="0"
+ ```
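
As a rough illustration of the prior-preservation setup described in the added section, the sketch below counts how many class images already exist in the class directory; the training script only needs to generate the remainder up to `--num_class_images` from the base model. The directory name, file extensions, and the helper itself are assumptions for illustration, not part of the training script or this commit.

```python
# Illustrative only: see how many class images are already in class_data_dir.
# Any shortfall up to num_class_images is generated by the training script
# itself, using the base model and the class prompt.
from pathlib import Path

class_data_dir = 'local_class_data_dir'  # assumed class image directory
num_class_images = 10                    # same value passed to --num_class_images

existing = []
if Path(class_data_dir).is_dir():
    existing = [p for p in Path(class_data_dir).iterdir()
                if p.suffix.lower() in {'.jpg', '.jpeg', '.png'}]

missing = max(0, num_class_images - len(existing))
print(f"Found {len(existing)} class images; {missing} more will be generated during training.")
```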