Residual Learning Based Convolutional Neural Network for Super Resolution
Academic Year: 107
Semester: 2
Date of Publication: 2019-07-07
Title: Residual Learning Based Convolutional Neural Network for Super Resolution
Title (Other Language):
Authors: Hwei Jen Lin; Yoshimasa Tokuyama; Zi-Jun Lin
Affiliated Unit:
Publisher:
Conference Name: 2019 International Electronics Communication Conference
Conference Venue: Okinawa, Japan
Abstract: Many super resolution methods have recently been proposed in the literature, among which convolutional neural networks have been confirmed to achieve good results. C. Dong et al. proposed a convolutional neural network structure (SRCNN) that effectively addresses the super resolution problem. J. Kim et al. proposed a much deeper convolutional neural network (VDSR) to improve on C. Dong et al.'s method. However, unlike VDSR, which is trained on residue images, SRCNN is trained directly on high-resolution images. Consequently, we surmise that the improvement of VDSR is due not only to the depth of the network structure but also to the training on residue images. This paper studies and compares the performance of training on high-resolution images and training on residue images with the two neural network structures, SRCNN and VDSR. Some deep CNNs apply zero padding, which pads the input to each convolutional layer with zeros around the border so that the feature maps keep the same size. SRCNN does not carry out padding, so the resulting high-resolution images are smaller than expected. The study also proposes two revised versions of SRCNN that keep the output the same size as the input image.
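The abstract contrasts two design choices: zero ("same") padding, so that each convolutional layer preserves the spatial size of its input, and residual learning, where the network predicts only the residue between the bicubic-interpolated input and the high-resolution target. The following PyTorch sketch illustrates both ideas; the layer widths and kernel sizes (64/32 filters, 9-5-5 kernels) are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch of an SRCNN-like network with "same" padding and an
# optional residual connection; layer sizes are assumed, not the paper's.
import torch
import torch.nn as nn

class SRCNNLike(nn.Module):
    """Three-layer SRCNN-style network; padding keeps output size equal to input."""
    def __init__(self, residual=True):
        super().__init__()
        self.residual = residual
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),   # patch extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2),  # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),   # reconstruction
        )

    def forward(self, x):
        # x: bicubic-interpolated low-resolution image, same size as the HR target
        out = self.body(x)
        # Residual learning: add the interpolated input back, so the network
        # only needs to learn the (mostly high-frequency) residue image.
        return x + out if self.residual else out

# Usage sketch: train on (bicubic-upscaled LR, HR) patch pairs with an L2 loss.
if __name__ == "__main__":
    model = SRCNNLike(residual=True)
    lr_up = torch.rand(1, 1, 64, 64)   # bicubic-upscaled luminance patch
    hr = torch.rand(1, 1, 64, 64)      # high-resolution target patch
    loss = nn.functional.mse_loss(model(lr_up), hr)
    loss.backward()
    print(loss.item())
```

Setting `residual=False` corresponds to training the network to output the high-resolution image directly, as in the original SRCNN; with the padding shown above, the output stays the same size as the input in either case.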
Keywords: super resolution;convolutional networks;bicubic interpolation;deep learning;underdetermined inverse problem
Language: en
Indexed In:
Conference Type: International
On-Campus Conference Venue:
Conference Dates: 20190707~20190709
Corresponding Author: Hwei Jen Lin
Country: TWN
Open Call for Papers:
Publication Format:
Source: Proceedings of 2019 International Electronics Communication Conference
Related Links:

Institutional Repository Link: ( http://tkuir.lib.tku.edu.tw:8080/dspace/handle/987654321/117397 )