Comparison of Deep Generative Models for the Generation of Handwritten Character Images

  • Kırbıyık, Ö., Simsar, E., & Cemgil, A. T. (2019, April). Comparison of Deep Generative Models for the Generation of Handwritten Character Images. In 2019 27th Signal Processing and Communications Applications Conference (SIU) (pp. 1-4). IEEE.

Paper Link

[Figure: pipeline]

Abstract

In this study, we compare deep learning methods for generating images of handwritten characters. The problem can be thought of as a restricted Turing test: a human draws a character from any desired alphabet, and the system synthesizes images with a similar appearance. The goal is not merely to duplicate the input image but to add random perturbations so that the output gives the impression of being human-produced. To this end, the images produced by two generative models (a Generative Adversarial Network and a Variational Autoencoder), combined with a meta-learning training method (Reptile), are examined and their visual quality is assessed subjectively. The models' ability to transfer learned knowledge is also tested by using different datasets for training and testing. With the proposed model and meta-learning method, it is possible to produce not only images similar to those in the training set but also novel images belonging to a class seen for the first time.
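The paper itself does not publish code, but the Reptile update mentioned in the abstract follows a simple pattern, sketched below in PyTorch: adapt a copy of the model on a single task (here, one character class), then move the original weights a fraction of the way toward the adapted ones. All names and hyperparameters (`reptile_step`, `task_loader`, `inner_lr`, `meta_lr`, `inner_steps`, the `loss_fn` placeholder) are hypothetical illustrations, not taken from the paper.

```python
# Minimal sketch of a Reptile-style meta-update (illustrative, not the
# authors' implementation).
import copy
import torch
import torch.nn as nn


def reptile_step(model: nn.Module, task_loader, loss_fn,
                 inner_lr=0.01, meta_lr=0.1, inner_steps=5):
    """One Reptile meta-iteration on a single task.

    model       -- the generative model being meta-trained
    task_loader -- yields batches of images from one character class
    loss_fn     -- placeholder training loss, e.g. a VAE ELBO or a
                   reconstruction loss (assumption, not from the paper)
    """
    # Clone the model and take a few SGD steps on this task only.
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    batches = iter(task_loader)
    for _ in range(inner_steps):
        x, _ = next(batches)          # images of a single character class
        opt.zero_grad()
        loss_fn(adapted, x).backward()
        opt.step()

    # Reptile update: theta <- theta + meta_lr * (theta_adapted - theta)
    with torch.no_grad():
        for p, p_adapted in zip(model.parameters(), adapted.parameters()):
            p.add_(meta_lr * (p_adapted - p))
```

Repeating this step over tasks sampled from many character classes is what lets the meta-trained initialization adapt quickly to a class seen for the first time, which is the transfer setting the abstract describes.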
Bibtex

@inproceedings{kirbiyik2019comparison,
  title={Comparison of Deep Generative Models for the Generation of Handwritten Character Images},
  author={Kirbiyik, Omer and Simsar, Enis and Cemgil, A Taylan},
  booktitle={2019 27th Signal Processing and Communications Applications Conference (SIU)},
  pages={1--4},
  year={2019},
  organization={IEEE}
}