End of training
- README.md +46 -0
- a blue shirt.png +0 -0
- image_a group of people sitting on the ground.png +0 -0
- image_a man in a forest with a sword.png +0 -0
- image_a man in a green hoodie standing in front of a mountain.png +0 -0
- image_a man standing in front of a bridge.png +0 -0
- green shirt.png +0 -0
- a scarf.png +0 -0
- image_a man with a gun in his hand.png +0 -0
- image_a man with a sword in his hand.png +0 -0
- blue eyes.png +0 -0
- a white jacket.png +0 -0
- a shirt on.png +0 -0
- a beard.png +0 -0
- a cat on her head.png +0 -0
- tie.png +0 -0
- a suit.png +0 -0
- the other with a sword.png +0 -0
- image_two pokemons sitting on top of a cloud.png +0 -0
- pytorch_lora_weights.safetensors +3 -0
README.md
ADDED
@@ -0,0 +1,46 @@
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
base_model: runwayml/stable-diffusion-v1-5
inference: true
---

<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->

# LoRA text2image fine-tuning - PQlet/lora-narutoblip-debug-ablation-r64-a16

These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5, fine-tuned on the Naruto-BLIP dataset. Some example images are shown below.
![img_0](./image_a man with a sword in his hand.png)
![img_1](./image_a man in a forest with a sword.png)
![img_2](./image_a man in a green hoodie standing in front of a mountain.png)
![img_3](./image_a man with a gun in his hand.png)
![img_4](./image_a group of people sitting on the ground.png)
![img_5](./image_two pokemons sitting on top of a cloud.png)
![img_6](./image_a man standing in front of a bridge.png)
## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
a blue shirt.png
RENAMED
File without changes

image_a group of people sitting on the ground.png
ADDED

image_a man in a forest with a sword.png
ADDED

image_a man in a green hoodie standing in front of a mountain.png
ADDED

image_a man standing in front of a bridge.png
ADDED

green shirt.png
RENAMED
File without changes

a scarf.png
RENAMED
File without changes

image_a man with a gun in his hand.png
ADDED

image_a man with a sword in his hand.png
ADDED

blue eyes.png
RENAMED
File without changes

a white jacket.png
RENAMED
File without changes

a shirt on.png
RENAMED
File without changes

a beard.png
RENAMED
File without changes

a cat on her head.png
RENAMED
File without changes

tie.png
RENAMED
File without changes

a suit.png
RENAMED
File without changes

the other with a sword.png
RENAMED
File without changes

image_two pokemons sitting on top of a cloud.png
ADDED
pytorch_lora_weights.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:84cab81499a6cb7417b1032c114fc4e70299e25f6b732f67a749a2ff564e38aa
size 51058040
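The three lines above are a Git LFS pointer, not the weights themselves: the repository stores only the object id and byte size, and the roughly 51 MB `.safetensors` file is fetched from LFS storage on checkout. A minimal sketch of reading such a pointer (the parser below is illustrative, not part of this repo):

```python
# Parse a Git LFS pointer file into its key/value fields.
# Each line is "key value", e.g. "size 51058040".
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:84cab81499a6cb7417b1032c114fc4e70299e25f6b732f67a749a2ff564e38aa
size 51058040"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # → 51058040 (bytes of the real weights file)
```

If a clone shows this pointer text in place of the actual weights, `git lfs pull` (with Git LFS installed) replaces it with the real file.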