Interactive demo of two pretrained models (64px and 96px) exported to keras.js. Note that the same noise vector does not produce the same image in the two models.

what's this?

I created a program that lets my computer generate anime faces as a deep dive into deep learning. The program is a GAN that takes in a 128d Gaussian noise vector and outputs a 64x64 or 96x96 image (depending on the model) of a (hopefully cute) anime face.
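To make the generator's interface concrete, here is a minimal numpy sketch of sampling the 128d Gaussian noise input. The actual DCGAN generator is not reproduced here; `dummy_generator` below is a hypothetical stand-in that only mimics the input and output shapes (tanh-range 64x64 RGB images).

```python
import numpy as np

def sample_noise(batch_size, dim=128, rng=None):
    """Draw a batch of latent vectors from a standard Gaussian."""
    rng = rng or np.random.default_rng(0)
    return rng.standard_normal((batch_size, dim))

def dummy_generator(z):
    """Stand-in for the trained generator: maps 128d noise to a
    64x64 RGB image with values in [-1, 1] (tanh output range)."""
    rng = np.random.default_rng(42)
    w = rng.standard_normal((z.shape[1], 64 * 64 * 3)) * 0.01
    return np.tanh(z @ w).reshape(-1, 64, 64, 3)

z = sample_noise(4)         # 4 latent vectors, shape (4, 128)
faces = dummy_generator(z)  # 4 images, shape (4, 64, 64, 3)
```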

For the dataset, I scraped a subset of the 512px rsync directory of Danbooru2017 and used nagadomi/lbpcascade_animeface to extract faces (resized to 96x96px). I then manually removed dirty images (male faces, manga faces, etc.), though roughly 5% of the remaining images are still dirty. In total, 6013 images were used to train the GAN.
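The crop-and-resize step after detection can be sketched as follows. The bounding box `(x, y, w, h)` stands in for a detection returned by lbpcascade_animeface, and the nearest-neighbor resize is an assumption for illustration (any interpolation method would do):

```python
import numpy as np

def crop_and_resize(image, box, size=96):
    """Crop a detected face region (x, y, w, h) out of an H x W x 3
    image and resize it to size x size with nearest-neighbor sampling."""
    x, y, w, h = box
    face = image[y:y + h, x:x + w]
    rows = (np.arange(size) * h / size).astype(int)
    cols = (np.arange(size) * w / size).astype(int)
    return face[rows][:, cols]

img = np.zeros((512, 512, 3), dtype=np.uint8)   # placeholder image
face96 = crop_and_resize(img, (100, 120, 200, 200))
```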

A standard DCGAN architecture was used, trained with the Adam optimizer with beta1=0.5. Learning rates of 1e-4 and 4e-4 were used for the discriminator and the generator, respectively.
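To make the optimizer setup concrete, here is a minimal numpy sketch of a single Adam step with the hyperparameters above (beta1=0.5, discriminator lr=1e-4); beta2=0.999 and eps=1e-8 are assumed to be Adam's defaults, as the post does not state them:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-4, beta1=0.5, beta2=0.999, eps=1e-8):
    """One Adam update. beta1=0.5 (instead of the default 0.9) is a
    common GAN choice that shortens the gradient momentum memory."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta = np.zeros(3)
m, v = np.zeros(3), np.zeros(3)
grad = np.array([1.0, -2.0, 0.5])
theta, m, v = adam_step(theta, grad, m, v, t=1)
# at t=1 the bias-corrected step is lr * sign(grad), regardless of magnitude
```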

tips & tricks