happyme531 committed
Commit 3b37ef5 · verified · 1 Parent(s): 1df274b

Update README.md

Files changed (1): README.md (+23, -0)
README.md CHANGED
@@ -1,3 +1,11 @@
+ ---
+ base_model:
+ - TheyCallMeHex/LCM-Dreamshaper-V7-ONNX
+ tags:
+ - rknn
+ - LCM
+ - stable-diffusion
+ ---
  # Stable Diffusion 1.5 Latent Consistency Model for RKNN2

  ## (English README see below)

@@ -52,6 +60,14 @@ python ./convert-onnx-to-rknn.py -m ./model -r 384x384

  1. As of now, models converted with the latest version of rknn-toolkit2 (2.2.0) still show extremely severe precision loss, even with the fp16 data type. In the images below, the upper result is from ONNX model inference and the lower one from RKNN model inference, with all parameters identical. The higher the resolution, the worse the precision loss. This is a bug in rknn-toolkit2.

+ - 384x384:
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6319d0860d7478ae0069cd92/yDmipD6zHHVyMVWqero-l.png)
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6319d0860d7478ae0069cd92/Ieq2m-4XnAThDnTgHWjvI.png)
+
+ - 256x256:
+ ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6319d0860d7478ae0069cd92/qoagtwDKij1WGkJwqa8bz.jpeg)
+
+
  2. The model conversion script can in fact accept multiple resolutions (e.g., "384x384,256x256"), but doing so causes the conversion to fail. This is a bug in rknn-toolkit2.

  ## References

@@ -112,6 +128,13 @@ Note that the higher the resolution, the larger the model and the longer the con
  ## Known Issues

  1. As of now, models converted using the latest version of rknn-toolkit2 (version 2.2.0) still suffer from severe precision loss, even when using the fp16 data type. As shown in the images below, the top result comes from inference with the ONNX model and the bottom from the RKNN model, with all parameters identical. Moreover, the higher the resolution, the more severe the precision loss. This is a bug in rknn-toolkit2.
+ - 384x384:
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6319d0860d7478ae0069cd92/yDmipD6zHHVyMVWqero-l.png)
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6319d0860d7478ae0069cd92/Ieq2m-4XnAThDnTgHWjvI.png)
+
+ - 256x256:
+ ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6319d0860d7478ae0069cd92/qoagtwDKij1WGkJwqa8bz.jpeg)
+

  2. Actually, the model conversion script can accept multiple resolutions (e.g., "384x384,256x256"), but this causes the conversion to fail. This is a bug in rknn-toolkit2.
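For readers following the conversion command in the hunk context above (`python ./convert-onnx-to-rknn.py -m ./model -r 384x384`) and Known Issue 2, here is a minimal sketch of what a single-resolution fp16 conversion looks like with the rknn-toolkit2 Python API. This is not the repository's convert-onnx-to-rknn.py; the model path, input names, shapes, and the rk3588 target are illustrative assumptions.

```python
# Minimal single-resolution conversion sketch (fp16, no quantization).
# Paths, input names, and the rk3588 target are illustrative assumptions;
# the repository's convert-onnx-to-rknn.py is the authoritative script.
from rknn.api import RKNN

rknn = RKNN(verbose=True)
rknn.config(target_platform="rk3588")

# One fixed latent resolution per converted model: 384x384 pixels -> 48x48 latent.
# Listing several resolutions (the "-r 384x384,256x256" case) currently makes
# the conversion fail, as described in Known Issue 2.
ret = rknn.load_onnx(
    model="./model/unet/model.onnx",                          # assumed path
    inputs=["sample", "timestep", "encoder_hidden_states"],   # assumed input names
    input_size_list=[[1, 4, 48, 48], [1], [1, 77, 768]],
)
assert ret == 0, "load_onnx failed"

assert rknn.build(do_quantization=False) == 0, "build failed"   # fp16 build, no INT8 dataset
assert rknn.export_rknn("./model/unet.rknn") == 0, "export_rknn failed"
rknn.release()
```

Until the multi-resolution bug is fixed upstream, converting once per target resolution and shipping one .rknn file per size appears to be the practical workaround.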
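Known Issue 1 compares ONNX and RKNN results visually. A rough way to quantify the same gap is to push one identical input through both backends and compare the raw UNet outputs, as in the sketch below. The file paths, input names, and dtypes are assumptions and may need adjusting to match what the conversion script actually produces; the LCM UNet may also expose additional inputs (for example a timestep-conditioning tensor).

```python
# Illustrative ONNX-vs-RKNN output comparison for one UNet forward pass.
# Paths, input names, and dtypes below are assumptions, not the repo's API.
import numpy as np
import onnxruntime as ort
from rknnlite.api import RKNNLite   # runtime API on the RK3588 board

H = W = 384 // 8   # 384x384 image -> 48x48 latent (SD VAE downsamples by 8)

rng = np.random.default_rng(0)
sample = rng.standard_normal((1, 4, H, W)).astype(np.float32)
timestep = np.array([999], dtype=np.int64)
text_emb = rng.standard_normal((1, 77, 768)).astype(np.float32)

# Reference: ONNX model on the CPU.
sess = ort.InferenceSession("./model/unet/model.onnx")      # assumed path
onnx_out = sess.run(None, {
    "sample": sample,
    "timestep": timestep,
    "encoder_hidden_states": text_emb,
})[0].astype(np.float32).ravel()

# Candidate: converted RKNN model on the NPU.
rknn = RKNNLite()
rknn.load_rknn("./model/unet.rknn")                         # assumed path
rknn.init_runtime()
rknn_out = rknn.inference(inputs=[sample, timestep, text_emb])[0]
rknn.release()
rknn_out = np.asarray(rknn_out, dtype=np.float32).ravel()

# Simple error metrics; a healthy fp16 conversion should stay very close.
diff = np.abs(onnx_out - rknn_out)
cos = float(np.dot(onnx_out, rknn_out) /
            (np.linalg.norm(onnx_out) * np.linalg.norm(rknn_out)))
print(f"max abs diff:  {diff.max():.4f}")
print(f"mean abs diff: {diff.mean():.4f}")
print(f"cosine sim:    {cos:.4f}")
```

Repeating the check with a 256x256 model (32x32 latent) would show whether the error really grows with resolution, as the issue reports.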