JamesL404 committed · Commit 60d825a · verified · 1 Parent(s): ab5b48c

Update README.md

Files changed (1)
  1. README.md +12 -60
README.md CHANGED
@@ -1,60 +1,12 @@
- # pytorch-AdaIN
-
- This is an unofficial PyTorch implementation of the paper "Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization" [Huang+, ICCV 2017].
- I'm really grateful for the authors' [original implementation](https://github.com/xunhuang1995/AdaIN-style) in Torch, which was very useful.
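The AdaIN operation the removed README refers to aligns the per-channel mean and standard deviation of the content features to those of the style features. A minimal NumPy sketch of that operation (the repository's actual implementation works on PyTorch tensors; the function name and `(C, H, W)` layout here are illustrative assumptions):

```python
import numpy as np

def adain(content_feat, style_feat, eps=1e-5):
    # Features are (C, H, W); statistics are per channel over spatial dims.
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True) + eps
    # Normalize away the content statistics, then impose the style's.
    return s_std * (content_feat - c_mean) / c_std + s_mean
```

By construction, the output has exactly the style features' per-channel mean, which is what makes the transfer "arbitrary": no style-specific parameters are learned.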
-
- ![Results](results.png)
-
- ## Requirements
- Install the requirements with `pip install -r requirements.txt`:
-
- - Python 3.5+
- - PyTorch 0.4+
- - TorchVision
- - Pillow
-
- (optional, for training)
- - tqdm
- - TensorboardX
-
- ## Usage
-
- ### Download models
- Download [decoder.pth](https://drive.google.com/file/d/1bMfhMMwPeXnYSQI6cDWElSZxOxc6aVyr/view?usp=sharing) / [vgg_normalized.pth](https://drive.google.com/file/d/1EpkBA2K2eYILDSyPTt0fztz59UjAIpZU/view?usp=sharing) and put them under `models/`.
-
- ### Test
- Use `--content` and `--style` to provide the paths to the content and style images.
- ```
- CUDA_VISIBLE_DEVICES=<gpu_id> python test.py --content input/content/cornell.jpg --style input/style/woman_with_hat_matisse.jpg
- ```
-
- You can also run the code on directories of content and style images using `--content_dir` and `--style_dir`. Every possible combination of content and style is saved to the output directory.
- ```
- CUDA_VISIBLE_DEVICES=<gpu_id> python test.py --content_dir input/content --style_dir input/style
- ```
-
- This is an example of mixing four styles by specifying the `--style` and `--style_interpolation_weights` options.
- ```
- CUDA_VISIBLE_DEVICES=<gpu_id> python test.py --content input/content/avril.jpg --style input/style/picasso_self_portrait.jpg,input/style/impronte_d_artista.jpg,input/style/trial.jpg,input/style/antimonocromatismo.jpg --style_interpolation_weights 1,1,1,1 --content_size 512 --style_size 512 --crop
- ```
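In the paper, style interpolation is a convex combination of the per-style AdaIN outputs, which is what the weights `1,1,1,1` above express (equal quarters). A minimal NumPy sketch under that assumption; the function names are hypothetical, not the repo's API:

```python
import numpy as np

def adain(c, s, eps=1e-5):
    # Per-channel mean/std alignment over spatial dims of (C, H, W) features.
    cm = c.mean(axis=(1, 2), keepdims=True)
    cs = c.std(axis=(1, 2), keepdims=True) + eps
    sm = s.mean(axis=(1, 2), keepdims=True)
    ss = s.std(axis=(1, 2), keepdims=True) + eps
    return ss * (c - cm) / cs + sm

def interpolate_styles(content_feat, style_feats, weights):
    # Convex combination of the per-style AdaIN outputs; the weights are
    # normalized to sum to 1, mirroring --style_interpolation_weights.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wk * adain(content_feat, sk) for wk, sk in zip(w, style_feats))
```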
-
- Some other options:
- * `--content_size`: new (minimum) size for the content image; set to 0 to keep the original size.
- * `--style_size`: new (minimum) size for the style image; set to 0 to keep the original size.
- * `--alpha`: degree of stylization, a value between 0.0 and 1.0 (default 1.0).
- * `--preserve_color`: preserve the color of the content image.
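As described in the paper, `--alpha` blends the AdaIN-stylized features back with the original content features before decoding. A one-line sketch of that trade-off (the function name is illustrative, not the repo's API):

```python
import numpy as np

def stylize(content_feat, adain_feat, alpha=1.0):
    # alpha=0.0 keeps the content features unchanged; alpha=1.0 (the
    # default) applies full stylization.
    return alpha * adain_feat + (1.0 - alpha) * content_feat
```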
-
-
- ### Train
- Use `--content_dir` and `--style_dir` to provide the directories of content and style images.
- ```
- CUDA_VISIBLE_DEVICES=<gpu_id> python train.py --content_dir <content_dir> --style_dir <style_dir>
- ```
-
- For more details and parameters, please refer to the `--help` option.
-
- A model trained with this code is shared [here](https://drive.google.com/file/d/1YIBRdgGBoVllLhmz_N7PwfeP5V9Vz2Nr/view?usp=sharing).
-
- ## References
- - [1]: X. Huang and S. Belongie. "Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization." In ICCV, 2017.
- - [2]: [Original implementation in Torch](https://github.com/xunhuang1995/AdaIN-style)
 
+ ---
+ title: Color Transfer
+ emoji: 🌍
+ colorFrom: pink
+ colorTo: indigo
+ sdk: gradio
+ sdk_version: 4.19.2
+ app_file: app.py
+ pinned: false
+ ---
+
+ Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference