Moderate Resolution Imaging Spectroradiometer (MODIS) imagery has various applications in ground monitoring, cloud classification, and meteorological research. However, sensor limitations and external disturbances still limit the resolution of the images to a certain level. The goal of this paper is to use a single-image super-resolution (SISR) method to predict a high-resolution (HR) MODIS image from a single low-resolution (LR) input. Recently, although methods based on sparse representation have tackled this ill-posed problem effectively, two critical issues have been ignored. First, many methods ignore the relationships among patches, resulting in unfaithful output. Second, sparse coding with the l_1 norm incurs high computational complexity in the reconstruction stage. In this work, we discover the semantic relationships among LR patches and the corresponding HR patches and group documents with similar semantics into topics by probabilistic Latent Semantic Analysis (pLSA). Then we learn dual dictionaries for each topic in the LR patch space and the HR patch space, and pre-compute the corresponding regression matrices for the dictionary pairs. Finally, for a test image, we infer locally which topic each patch corresponds to and adaptively select the regression matrix to reconstruct the HR image according to the semantic relationships. Because our method discovers the relationships among patches and pre-computes the regression matrices for topics, it can greatly reduce artifacts and achieve a speed-up in the reconstruction phase. Experiments show that our method performs MODIS image super-resolution effectively, yields higher PSNR, reconstructs faster, and produces better visual quality than some current state-of-the-art methods.
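The per-topic regression pipeline summarized above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes a ridge-regression closed form for the pre-computed matrices, a regularization parameter `lam`, and a placeholder `topic_of` function standing in for the pLSA-based local topic inference.

```python
# Minimal sketch of per-topic regression for patch-based super-resolution.
# Dictionary pairs (D_l, D_h), `lam`, and `topic_of` are illustrative assumptions.
import numpy as np

def precompute_regression(D_l, D_h, lam=0.1):
    """Pre-compute a linear map W so that x_h ~ W @ x_l for one topic.

    D_l: (d_l, k) LR dictionary; D_h: (d_h, k) HR dictionary for the same topic.
    Uses a ridge-regression closed form; the paper's formulation may differ.
    """
    k = D_l.shape[1]
    inv = np.linalg.inv(D_l.T @ D_l + lam * np.eye(k))
    return D_h @ inv @ D_l.T  # shape (d_h, d_l)

def reconstruct(lr_patches, topic_of, regressors):
    """Map each LR patch to an HR patch with its topic's pre-computed matrix.

    lr_patches: (n, d_l) array of vectorized LR patches.
    topic_of: callable returning a topic index (stand-in for pLSA inference).
    regressors: list of pre-computed W matrices, one per topic.
    """
    hr = []
    for x_l in lr_patches:
        t = topic_of(x_l)                # infer which topic this patch belongs to
        hr.append(regressors[t] @ x_l)   # apply that topic's regression matrix
    return np.stack(hr)
```

Because every W is computed once offline, the test-time cost per patch reduces to a topic lookup and one matrix-vector product, which is where the claimed speed-up over l_1 sparse coding would come from under these assumptions.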