Anime-style Image Generation using GAN

Zhen Cui, Yasuaki Ito, Koji Nakano, Akihiko Kasagi


With the popularity of social networking services and smartphones, photo processing applications have become widely used, and interest in more advanced photo processing techniques is growing. One such technique is converting photos into other styles. In recent years, many studies have addressed this problem by automatically generating images with machine learning. In the field of image generation, methods based on generative adversarial networks (GANs) have shown particularly good results, and many studies have applied them to photo-to-anime style transfer. However, most of these studies are limited to stylizing the face and the background. The goal of this study is to convert a full-body photograph of a person into an image whose style resembles that of an animated character. We prepared a dataset of full-body human photos and a dataset of full-body animated characters, and trained the unsupervised model CycleGAN on them. In addition, latent variables were introduced to obtain diversity in the generated images. As a result, images with different animation styles were obtained from different latent variables.
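The abstract does not give architectural details, so the following is only a minimal NumPy sketch of the two ideas it names: CycleGAN's cycle-consistency objective (mapping photo → anime → photo should recover the input) and conditioning the photo-to-anime generator on an extra latent code to obtain diverse outputs. The linear "generators", dimensions, and function names here are hypothetical stand-ins, not the authors' networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_generator(in_dim, out_dim, rng):
    # Hypothetical linear stand-in for a generator network.
    W = rng.normal(scale=0.1, size=(out_dim, in_dim))
    return lambda x: W @ x

D = 8  # flattened "image" size (illustrative assumption)
Z = 2  # latent code size added for output diversity (illustrative assumption)

# G maps photo (plus latent code z) -> anime style; F maps anime -> photo.
G = make_generator(D + Z, D, rng)
F = make_generator(D, D, rng)

def cycle_consistency_loss(x, z):
    """L1 cycle loss ||F(G(x, z)) - x||_1, the core CycleGAN constraint."""
    fake_anime = G(np.concatenate([x, z]))
    recon_photo = F(fake_anime)
    return np.abs(recon_photo - x).mean()

x = rng.normal(size=D)                       # a "photo"
z1, z2 = rng.normal(size=Z), rng.normal(size=Z)

# Different latent codes yield different stylized outputs for the same photo,
# which is the diversity effect the paper reports.
out1 = G(np.concatenate([x, z1]))
out2 = G(np.concatenate([x, z2]))
```

In a real training loop this cycle loss would be combined with adversarial losses for both directions and minimized by gradient descent; the sketch only shows where the latent code enters and what the cycle constraint measures.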


machine learning; style transfer; GANs; anime-style
