Journal Article

Academic Year 108
Semester 2
Publication Date 2020-07-05
Title Sketch-guided Deep Portrait Generation
Title (Other Languages)
Authors Ho, Trang-Thi; John Jethro Virtusio; Yung-Yao Chen; Chih-Ming Hsu; Kai-Lung Hua
Department/Unit
Publisher
Journal Title, Volume/Issue, Pages ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM) 16.3, pp. 1-18
Abstract Generating a realistic human-class image from a sketch is a unique and challenging problem, considering that the human body has a complex structure that must be preserved. Additionally, input sketches often lack important details that are crucial in the generation process, making the problem more complicated. In this article, we present an effective method for synthesizing realistic images from human sketches. Our framework incorporates human poses corresponding to locations of key semantic components (e.g., arm, eyes, nose), since pose is a strong prior for generating human-class images. Our sketch-image synthesis framework consists of three stages: semantic keypoint extraction, coarse image generation, and image refinement. First, we extract the semantic keypoints using Part Affinity Fields (PAFs) and a convolutional autoencoder. Then, we integrate the sketch with semantic keypoints to generate a coarse image of a human. Finally, in the image refinement stage, the coarse image is enhanced by a Generative Adversarial Network (GAN) that adopts an architecture carefully designed to avoid checkerboard artifacts and to generate photo-realistic results. We evaluate our method on 6,300 sketch-image pairs and show that our proposed method generates realistic images and compares favorably against state-of-the-art image synthesis methods.
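The abstract notes that the refinement GAN's architecture is designed to avoid checkerboard artifacts. A common way to achieve this (not necessarily the authors' exact design) is to replace strided transposed convolutions with resize-convolution: upsample first, then apply an ordinary convolution. The NumPy sketch below illustrates the idea on a single-channel feature map; the function names and the averaging kernel are illustrative assumptions, not from the paper.

```python
import numpy as np

def nn_upsample(x, factor=2):
    # Nearest-neighbor upsampling: repeat each pixel along both spatial axes.
    return x.repeat(factor, axis=0).repeat(factor, axis=1)

def conv2d(x, kernel):
    # 'Same' 2-D convolution for a single channel, with zero padding.
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    h, w = x.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * kernel)
    return out

def resize_conv(x, kernel, factor=2):
    # Resize-convolution: upsample, then convolve. Unlike a strided
    # transposed convolution, every output pixel receives the same number
    # of kernel contributions, so there is no uneven-overlap pattern --
    # the usual source of checkerboard artifacts.
    return conv2d(nn_upsample(x, factor), kernel)

feat = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 feature map
smooth = np.full((3, 3), 1 / 9)                  # simple averaging kernel
up = resize_conv(feat, smooth)
print(up.shape)  # (8, 8)
```

With a constant input, the interior of the upsampled output stays constant, which is exactly the uniformity that strided transposed convolutions fail to guarantee.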
Keywords
Language en
ISSN 1551-6857; 1551-6865
Journal Type International
Indexed in SSCI, Scopus
Industry-Academia Collaboration
Corresponding Author
Peer Review
Country USA
Open Call for Papers
Publication Format Electronic version
Related Links

Institutional Repository Link ( http://tkuir.lib.tku.edu.tw:8080/dspace/handle/987654321/122970 )