---
tags:
- autoencoder
---

# Vector-Quantized Variational Autoencoders (VQ-VAE)

## Model description

Learning latent space representations of data remains an important task in machine learning. This model, the Vector-Quantized Variational Autoencoder (VQ-VAE), builds on traditional VAEs in two ways:

- The encoder network outputs discrete, rather than continuous, codes.
- The prior is learned rather than static.

To learn discrete latent representations, ideas from vector quantisation (VQ) are used. The VQ method allows the model to avoid issues of ["posterior collapse"](https://datascience.stackexchange.com/questions/48962/what-is-posterior-collapse-phenomenon). By pairing these representations with an autoregressive prior, VQ-VAE models can generate high-quality images, videos, and speech, as well as perform high-quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations.
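
The quantisation step described above can be sketched with a small NumPy example. The shapes and the random codebook here are illustrative assumptions for the sketch, not the trained model's weights:

```python
import numpy as np

# Illustrative shapes only: 4 flattened encoder outputs, a codebook of
# 8 embeddings, each of dimension 16 (the latent dimension used below).
rng = np.random.default_rng(0)
z_e = rng.normal(size=(4, 16))       # continuous encoder outputs
codebook = rng.normal(size=(8, 16))  # learnable embedding table

# Quantisation: replace each encoder output with its nearest codebook
# vector (argmin over squared Euclidean distance).
dists = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (4, 8)
codes = dists.argmin(axis=1)  # discrete latent codes
z_q = codebook[codes]         # quantised latents passed to the decoder
```

Because the argmin is not differentiable, the paper trains through this step with a straight-through estimator, copying gradients from the quantised latents back to the encoder outputs.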

### Further learning

This model was trained using code from this [example](https://keras.io/examples/generative/vq_vae/) and is a result of this [paper](https://arxiv.org/pdf/1711.00937.pdf).

## Model

Below is a graphic from the paper above, showing the VQ-VAE model architecture.

![VQ-VAE Model](

## Intended uses & limitations

This model is intended to be used for educational purposes. To train your own VQ-VAE model, follow along with this [example](https://keras.io/examples/generative/vq_vae/).

## Training and evaluation data

This model was trained on the popular MNIST dataset.

The dataset can be loaded with the following command:

```python
import keras
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
```
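
As a sketch of typical preprocessing, the scaling below mirrors what the linked Keras example does; the random array is a stand-in for the real digits so the snippet is self-contained:

```python
import numpy as np

# Stand-in for the uint8 arrays returned by keras.datasets.mnist.load_data():
# images in [0, 255] with shape (num_samples, 28, 28).
x_train = np.random.randint(0, 256, size=(6, 28, 28), dtype=np.uint8)

# Add a channel axis and scale pixel values to roughly [-0.5, 0.5],
# as done in the linked Keras example.
x_train = np.expand_dims(x_train, -1).astype("float32") / 255.0 - 0.5
```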

## Hyperparameters

The model was trained using the following hyperparameters:

- Latent Dimension = 16
- Number of Embeddings = 128
- Epochs = 30

The author of the example encourages experimenting with both the number and size of the embeddings to see how they affect the results.
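
One way to read the codebook size: each discrete latent position selects one of 128 embeddings, i.e. 7 bits of information per code:

```python
import math

num_embeddings = 128  # codebook size from the hyperparameters above
bits_per_code = math.log2(num_embeddings)  # each discrete code carries 7 bits
```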

## Reconstruction

Below are a few examples of MNIST digits reconstructed after passing through the model.