Citation: Qiusheng Lian, Wenfeng Yan, Xiaohua Zhang and Shuzhen Chen, "Single Image Rain Removal Using Image Decomposition and a Dense Network," IEEE/CAA J. Autom. Sinica, vol. 6, no. 6, pp. 1428-1437, Nov. 2019. doi: 10.1109/JAS.2019.1911441

Single Image Rain Removal Using Image Decomposition and a Dense Network

doi: 10.1109/JAS.2019.1911441
Funds: National Natural Science Foundation of China (61471313); Natural Science Foundation of Hebei Province (F2019203318)

Abstract: Removing rain from a single image is a challenging task due to the absence of temporal information. Considering that a rainy image can be decomposed into the low-frequency (LF) and high-frequency (HF) components, where the coarse scale information is retained in the LF component and the rain streaks and texture correspond to the HF component, we propose a single image rain removal algorithm using image decomposition and a dense network. We design two task-driven sub-networks to estimate the LF and non-rain HF components of a rainy image. The high-frequency estimation sub-network employs a densely connected network structure, while the low-frequency sub-network uses a simple convolutional neural network (CNN). We add total variation (TV) regularization and LF-channel fidelity terms to the loss function to optimize the two subnetworks jointly. The method then obtains de-rained output by combining the estimated LF and non-rain HF components. Extensive experiments on synthetic and real-world rainy images demonstrate that our method removes rain streaks while preserving non-rain details, and achieves superior de-raining performance both perceptually and quantitatively.

I. Introduction

As a problem unique to images captured outdoors, rain often degrades the visual quality of such images and severely affects the performance of many computer vision tasks, especially object detection [1] and tracking [2]. Rain streaks blur and deform background scenes and block objects in images. Variation in the rain streaks also creates different effects in the images. Algorithms that can effectively remove rain from a single image are thus of significant interest.

In the past few years, researchers have proposed various methods to address this problem with varying degrees of success [3]-[18]. However, existing methods have three limitations. First, some leave rain streaks or artifacts in the restored images [3]-[10]. Second, some over-derain the images, producing over-smoothed results that lose important details whose direction and shape resemble rain streaks [10], [14], [15]. Third, most of these methods are ineffective in removing heavy rain [9], [10].

In view of these problems, and inspired by the fact that rain streaks correspond to the high-frequency (HF) component [6], we propose a single image rain removal framework using image decomposition and a dense network [19]. Our architecture consists of two task-driven sub-networks: one learns the low-frequency (LF) component of an input rainy image, and the other learns the non-rain HF component. Summing the outputs of the two sub-networks produces the restored image. To this end, we define a new loss function to train the two sub-networks jointly. Unlike methods that use low-pass filters to decompose images before rain removal [6], [13], [14], we perform image decomposition and rain removal simultaneously rather than separately, so that de-raining performance is not limited by the choice of filter.

    We make three main contributions to the field. First, we propose an end-to-end deep CNN framework using image decomposition and a dense network for single image de-raining. We use deep learning instead of traditional low-pass filters to decompose images. Two separate sub-networks learn the LF and non-rain HF components of rainy images, respectively. Second, we introduce total variation (TV) regularization and LF-channel fidelity terms in the loss function to optimize the two sub-networks jointly. The combined networks learn the non-linear mapping function between rainy and clean images directly. Third, we conduct extensive experiments on synthetic and real-world datasets to demonstrate that our method outperforms several state-of-the-art methods.

    Our paper is organized as follows. Section II reviews related works in the field of rain removal. Section III presents our proposed architecture. Section IV presents the datasets, training details, and experimental results. Finally, Section V contains our conclusions.

II. Related Work

Compared to video-based methods [20]-[24], single image rain removal is more challenging due to the lack of available temporal information. Traditional single image rain removal algorithms make use of kernel regression [3], low rank approximations [4], [5], dictionary learning [6]-[9], and patch-based prior methods [10]. Kim et al. [3] used kernel regression and a non-local mean filter to detect and remove rain streaks. Other researchers have put forth low-rank representation-based methods [4], [5].

Considering that rain streaks belong to the HF component of rainy images, Kang et al. [6] applied image decomposition to single image rain removal for the first time. They separated a rainy image into LF and HF components using a bilateral filter [25] and then removed rain from the HF component using sparse coding and dictionary learning. Finally, they obtained the restored image by combining the LF and non-rain HF components. This method's main drawbacks are the bilateral filter's effects on performance and the presence of residual rain streaks in the restored images. Others have pursued a similar decomposition strategy [7], [8]. Luo et al. [9] proposed a method based on discriminative sparse coding (DSC) to tackle the rain removal problem, but it usually leaves rain streaks in the restored images. Moreover, dictionary learning-based methods [3]-[9] suffer from a high computational burden. Li et al. [10] employed the Gaussian mixture model (GMM) to obtain the patch-based priors of background and rainy images to perform image rain removal, but this method often over-smooths the output and takes a long time to run. Zhu et al. [11] developed a joint bi-layer optimization method to separate a rainy image into a rain-free background layer and a rain-streak layer iteratively. Three image priors were used to improve the performance in removing rain streaks while preserving background details. This method's main limitation is that some parameters must be specified by the user for optimal performance.

Recently, deep learning has been applied to many image processing tasks with superior performance [26]-[28]. Eigen et al. [12] first introduced deep learning to the rain removal problem, but their method could only remove static raindrops and dirt spots rather than dynamic rain streaks. Fu et al. [13] proposed the first CNN-based method, DerainNet, to remove rain streaks from a single image. They decomposed a rainy image into the LF and HF components using a guided filter [29] and then built a shallow network to learn the mapping function for the HF component. DerainNet achieves better performance and requires less computation time than traditional methods [9], [10]. However, it creates a slight color shift and introduces artifacts into the output images. Inspired by the residual network (ResNet) [30], Fu et al. [14] proposed the deep detail network (DDN), which identifies the residual map of rain streaks from the HF component using the same decomposition process as DerainNet. Although DDN works well, it removes some non-rain details of the background scenes. Yang et al. [15] developed a multi-task framework to perform joint rain detection and removal (JORDER) that could handle rain accumulation. However, the de-rained results still retain some rain streaks and show some blurred details. A generative adversarial framework (ID-CGAN) was introduced to improve the visual quality of output images [16], but it tends to introduce color shifts. More recently, Fan et al. [17] proposed a residual-guide feature fusion network (ResGuideNet) that is detachable to suit different rainy conditions; however, it generates slightly blurred results. Li et al. [18] designed a deep decomposition-composition network (DDC-Net) for faster rain removal. Although this method meets the high-speed requirement, it still leaves some rain effects in the presence of heavy rain.

III. Proposed Method

In general, rain streaks are mixed with background scenes, so it is difficult to determine the non-linear mapping between rainy and clean images directly in the image domain. Motivated by Kang's research [6], we decompose a rainy image $I$ into its LF and HF components $I_{LF}$ and $I_{HF}$:

$$I = I_{LF} + I_{HF}. \tag{1}$$

As the example in Fig. 1 shows, the LF component contains the image's coarse-scale information, while the HF component includes rain streaks as well as fine-scale edges and textures. It is therefore natural to simplify the problem by removing rain from the HF component alone. Based on this observation, Kang et al. [6] first use bilateral filtering to decompose a rainy image into the LF and HF components and then recover the non-rain component from the HF image using dictionary learning. DerainNet and DDN employ guided filtering to obtain the HF component of an image [13], [14], which then serves as the network input for predicting the non-rain HF image. However, bilateral filtering often leaves rain streaks in the LF component, and guided filtering cannot extract thick rain streaks; these low-pass filters thus limit de-raining performance. Owing to the powerful ability of CNNs to extract and represent deep features, we build a CNN-based model to estimate the LF and non-rain HF components of rainy images, replacing the filters used in previous work.
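To make the decomposition in (1) concrete, the sketch below splits an image with a simple Gaussian low-pass filter. This is only an illustration of the filter-based strategy in [6], [13], [14], with a Gaussian filter standing in for the bilateral and guided filters used there; our method replaces this step with learned sub-networks.

```python
# Illustration of the decomposition in (1): I = I_LF + I_HF.
# A Gaussian low-pass stands in for the bilateral/guided filters of prior work.
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(image: np.ndarray, sigma: float = 3.0):
    """Split an H x W x 3 image into low- and high-frequency components."""
    # Smooth spatially only (sigma = 0 on the channel axis).
    lf = gaussian_filter(image.astype(np.float32), sigma=(sigma, sigma, 0))
    hf = image.astype(np.float32) - lf  # rain streaks + fine textures live here
    return lf, hf

# lf + hf reconstructs the input exactly, so de-raining applied to hf alone
# leaves the coarse-scale content in lf untouched.
```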

    Figure  1.  Example LF and HF components of a rainy image

Fig. 2 shows the architecture of our proposed method. It consists of two sub-networks: a low-frequency estimation sub-network and a high-frequency estimation sub-network. Given a rainy image $I$, the two sub-networks estimate $I_{LF}$ and the non-rain part of $I_{HF}$, respectively. We obtain the de-rained image by summing the two estimates.

Figure 2. Architecture of the proposed method. The network consists of the low-frequency estimation sub-network and the high-frequency estimation sub-network. The low-frequency estimation sub-network includes 9 convolutional layers, while the high-frequency estimation sub-network contains 28 bottleneck structures and 2 convolutional layers

The LF estimation sub-network is designed to predict the LF component of an input RGB color image. We employ a plain CNN architecture for this sub-network due to the simplicity of the task. As Fig. 2 shows, the low-frequency estimation sub-network consists of 9 convolutional layers, with all but the last layer followed by rectified linear units (ReLUs) [31]. The first layer uses 3×3 convolutions to generate 64 feature maps. The middle layers apply dilated convolutions [32] with dilation factor $s=2$ and 64 feature maps each. The last layer uses 3×3 convolutions to reconstruct the LF image. Dilated convolution enlarges the receptive field of the network without losing resolution or increasing the number of parameters. In this work, the receptive field of the low-frequency estimation sub-network is 33×33, computed following Le et al. [33].
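The following TensorFlow sketch assembles the LF sub-network as described above. The text fixes only the layer count, feature-map widths, and dilation factor, so the 3×3 kernel size of the middle dilated layers, "same" padding, and disabled biases (per the training details in Section IV) are our assumptions.

```python
import tensorflow as tf

def lf_subnetwork() -> tf.keras.Model:
    """Sketch of the 9-layer plain CNN for LF estimation (Fig. 2)."""
    x = inputs = tf.keras.Input(shape=(None, None, 3))
    # First layer: 3x3 conv, 64 feature maps, ReLU.
    x = tf.keras.layers.Conv2D(64, 3, padding="same", use_bias=False,
                               activation="relu")(x)
    # Seven middle layers: dilated convs (s = 2), 64 maps each, ReLU.
    for _ in range(7):
        x = tf.keras.layers.Conv2D(64, 3, padding="same", dilation_rate=2,
                                   use_bias=False, activation="relu")(x)
    # Last layer: 3x3 conv reconstructs the 3-channel LF image (no ReLU).
    outputs = tf.keras.layers.Conv2D(3, 3, padding="same", use_bias=False)(x)
    return tf.keras.Model(inputs, outputs, name="lf_subnet")
```

With 3×3 kernels this stack reproduces the stated 33×33 receptive field: 3 from the first layer, plus 4 for each of the seven dilated layers, plus 2 for the last layer.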

The high-frequency estimation sub-network predicts the non-rain HF component from the input image. We employ a densely connected network structure motivated by Huang's work [19]. This dense connectivity pattern strengthens feature propagation: the deep layers in the network directly use the multi-level features extracted by early layers. Due to feature reuse, each layer in the network requires only a small number of convolution operations. This pattern also improves gradient flow throughout the network and alleviates the vanishing gradient problem during training.

Specifically, the HF estimation sub-network includes 28 bottleneck structures and 2 convolutional layers, as illustrated in Fig. 2. A bottleneck structure consists of a 1×1 convolution (Conv) followed by a 3×3 dilated convolution with dilation factor $s=2$ (2-DConv), i.e., Conv(1×1) → ReLU → 2-DConv → ReLU. The 1×1 convolution and the 3×3 dilated convolution generate 64 and 16 feature maps, respectively. The first layer of the sub-network uses 3×3 convolutions and generates 16 feature maps. The last layer uses 3×3 convolutions to reconstruct the non-rain HF image. The first layer and all of the bottleneck structures pass their feature maps to every subsequent one; in other words, each bottleneck receives as input the concatenated output features of the first layer and all preceding bottleneck structures. Within each bottleneck, the 1×1 convolution reduces the number of input feature maps, substantially reducing computation and speeding up training. We use dilated convolutions to enlarge the receptive field of the sub-network without losing local details.
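A sketch of the dense HF sub-network under the same caveats as before; whether the first layer uses a ReLU and the padding scheme are assumptions not fixed by the text.

```python
import tensorflow as tf

def hf_subnetwork(num_bottlenecks: int = 28) -> tf.keras.Model:
    """Sketch of the densely connected HF estimation sub-network (Fig. 2)."""
    inputs = tf.keras.Input(shape=(None, None, 3))
    # First layer: 3x3 conv, 16 feature maps (ReLU here is an assumption).
    first = tf.keras.layers.Conv2D(16, 3, padding="same", use_bias=False,
                                   activation="relu")(inputs)
    features = [first]
    for _ in range(num_bottlenecks):
        # Dense connectivity: concatenate the first layer's output and
        # the outputs of all preceding bottlenecks.
        concat = (tf.keras.layers.Concatenate()(features)
                  if len(features) > 1 else features[0])
        # Bottleneck: Conv(1x1, 64 maps) -> ReLU -> 2-DConv(3x3, 16 maps) -> ReLU.
        y = tf.keras.layers.Conv2D(64, 1, padding="same", use_bias=False,
                                   activation="relu")(concat)
        y = tf.keras.layers.Conv2D(16, 3, padding="same", dilation_rate=2,
                                   use_bias=False, activation="relu")(y)
        features.append(y)
    concat = tf.keras.layers.Concatenate()(features)
    # Last layer: 3x3 conv reconstructs the 3-channel non-rain HF image.
    outputs = tf.keras.layers.Conv2D(3, 3, padding="same", use_bias=False)(concat)
    return tf.keras.Model(inputs, outputs, name="hf_subnet")
```

The de-rained output is then the pixel-wise sum of the two sub-network outputs, as in (1).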

The goal of the proposed network is to map a rainy image directly to a de-rained one in an end-to-end fashion. We therefore first employ a mean squared error (MSE) loss between the network output and the ground truth. However, with this term alone the outputs of the two sub-networks need not match our decomposition assumption, since neither output is individually supervised. To bias each sub-network toward its designated task, we introduce a total variation (TV) regularization term [34] and an LF-channel fidelity term into the loss function. The overall loss function is

$$L(\theta_1, \theta_2) = \frac{1}{N}\sum_{i=1}^{N}\Big(\big\|f_L(x_i;\theta_1)+f_H(x_i;\theta_2)-y_i\big\|_2^2+\lambda\,\mathrm{TV}\big(f_L(x_i;\theta_1)\big)+\big\|f_L(x_i;\theta_1)-x_i\big\|_{\omega,1}\Big) \tag{2}$$

where $N$ is the number of training samples; $\{(x_i, y_i)\}_{i=1}^{N}$ represents the $N$ rainy and ground-truth image patch pairs; $f_L(\cdot)$ and $f_H(\cdot)$ denote the non-linear mappings of the low-frequency and high-frequency estimation sub-networks; and $\theta_1$ and $\theta_2$ represent the parameters of the two sub-networks. The first term of the loss function is the MSE loss between the output and the ground truth. In the second term, $\mathrm{TV}(\cdot)$ is the anisotropic total variation regularization function, defined as

$$\mathrm{TV}(u)=\|\nabla_h u\|_1+\|\nabla_v u\|_1 \tag{3}$$

where $\nabla_h$ and $\nabla_v$ are the horizontal and vertical difference operators. TV regularization preserves the LF component while penalizing the HF component; we use it to encourage the LF estimation sub-network to capture the coarse-scale information of a rainy image, so that the HF estimation sub-network must learn the non-rain edges and textures. We set the regularization parameter $\lambda$ to 0.095. The second regularization term is the LF-channel fidelity term

$$\big\|f_L(x_i;\theta_1)-x_i\big\|_{\omega,1}=\omega_1\big\|\big(f_L(x_i;\theta_1)\big)_Y-(x_i)_Y\big\|_1+\omega_2\big\|\big(f_L(x_i;\theta_1)\big)_{CbCr}-(x_i)_{CbCr}\big\|_1. \tag{4}$$

This term measures the similarity between $f_L(x_i;\theta_1)$ and $x_i$. Since $x_i$ contains rain streaks that exhibit stronger luminance than the background scene, we convert $f_L(x_i;\theta_1)$ and $x_i$ from the RGB color space to the YCbCr space and obtain the luminance $(\cdot)_Y$ and chrominance $(\cdot)_{CbCr}$ channels, respectively. $\omega_1$ and $\omega_2$ are weighting factors that balance the luminance and chrominance fidelity. We set $\omega_1 = 0.006$ and $\omega_2 = 0.015$ in our experiments.
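The loss in (2)-(4) can be written down directly. In this sketch the batch average stands in for the average over the $N$ training pairs, and the BT.601 RGB-to-YCbCr matrix is an assumption, since the paper does not specify the conversion; because the conversion is affine, applying the matrix to the difference $f_L(x_i;\theta_1) - x_i$ is equivalent to converting both images first.

```python
import tensorflow as tf

# Assumed BT.601 RGB -> YCbCr matrix (columns: Y, Cb, Cr); offsets cancel
# in the difference, so only the linear part is needed.
_RGB2YCBCR = tf.constant([[0.299, -0.168736, 0.5],
                          [0.587, -0.331264, -0.418688],
                          [0.114, 0.5, -0.081312]], dtype=tf.float32)

def anisotropic_tv(u):
    """Equation (3): L1 norms of horizontal and vertical differences."""
    dh = u[:, :, 1:, :] - u[:, :, :-1, :]
    dv = u[:, 1:, :, :] - u[:, :-1, :, :]
    return tf.reduce_sum(tf.abs(dh)) + tf.reduce_sum(tf.abs(dv))

def derain_loss(x, y, f_l, f_h, lam=0.095, w1=0.006, w2=0.015):
    """Sketch of loss (2) for a batch x of rainy patches with ground truth y."""
    lf, hf = f_l(x), f_h(x)
    mse = tf.reduce_sum(tf.square(lf + hf - y))                # fidelity term
    tv = lam * anisotropic_tv(lf)                              # keep LF output smooth
    diff = tf.tensordot(lf - x, _RGB2YCBCR, axes=[[3], [0]])   # to YCbCr
    fid = (w1 * tf.reduce_sum(tf.abs(diff[..., 0]))            # luminance channel
           + w2 * tf.reduce_sum(tf.abs(diff[..., 1:])))        # chrominance channels
    return (mse + tv + fid) / tf.cast(tf.shape(x)[0], tf.float32)
```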

IV. Experiments

This section presents our experimental details and test results on both synthetic and real-world rainy images. We compared the proposed method with five state-of-the-art methods: DSC, GMM, DerainNet, DDN, and JORDER, using source code obtained from the respective authors' websites. For quantitative evaluation, we used the structural similarity index (SSIM) [35] and the peak signal-to-noise ratio (PSNR).
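Both metrics are available in TensorFlow's tf.image module; a minimal evaluation helper, assuming images scaled to [0, 1], might look like this:

```python
import tensorflow as tf

def evaluate(derained, ground_truth):
    """Mean PSNR (dB) and SSIM over a batch of images in [0, 1]."""
    psnr = tf.reduce_mean(tf.image.psnr(derained, ground_truth, max_val=1.0))
    ssim = tf.reduce_mean(tf.image.ssim(derained, ground_truth, max_val=1.0))
    return float(psnr), float(ssim)
```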

    We used synthetic rainy images from Fu et al. [14] as our training data. This training set contains 14 000 pairs of rainy and clean images. The rainy images were synthesized using Photoshop [36] with 14 different streak directions and magnitudes. We randomly selected 9100 image pairs to generate three million 64×64 rainy/clean patch pairs.
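The paper does not describe the exact patch-sampling procedure; a minimal sketch of drawing one aligned 64×64 rainy/clean patch pair (repeated to build the three million pairs) might look like this, with random_patch_pair a hypothetical helper:

```python
import tensorflow as tf

def random_patch_pair(rainy, clean, size=64):
    """Sample one spatially aligned rainy/clean patch pair (a sketch)."""
    stacked = tf.stack([rainy, clean], axis=0)  # crop both images identically
    patches = tf.image.random_crop(stacked, [2, size, size, 3])
    return patches[0], patches[1]
```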

    For the performance evaluation, we selected 100 clear images from BSDS500 [37] to synthesize a test dataset using Photoshop, denoted as Rain100. This dataset contains heavy and light rain with different streak directions.

We trained the combined network on an Nvidia Tesla K80 GPU using the TensorFlow framework [38] and the Adam solver [39], with a mini-batch size of 32, Xavier weight initialization [40], and no bias terms. The learning rate started at 0.0005 and was divided by 5 at 100 000 and 200 000 iterations. The maximum number of iterations was set to 220 000.
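For reference, the stated optimizer and step schedule can be written in a few lines of TF2-style code (a reconstruction; the original implementation predates this API):

```python
import tensorflow as tf

# Start at 5e-4, divide by 5 at 100k and 200k iterations: 5e-4 -> 1e-4 -> 2e-5.
lr = tf.keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries=[100_000, 200_000],
    values=[5e-4, 1e-4, 2e-5])
optimizer = tf.keras.optimizers.Adam(learning_rate=lr)  # mini-batch size 32
```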

During training, we used a batch of 64×64 synthetic rainy/clean patch pairs as a validation set that had no intersection with the training set, and performed validation every 2000 iterations. Fig. 3 shows the validation convergence curve: the validation loss gradually decreased with more iterations, stabilizing after 100 000 iterations and converging after 200 000 iterations. The testing code can be found at: https://github.com/ywf313/derain.

Figure 3. Validation convergence curve during training

Fig. 4 shows a sample of our de-raining results along with the outputs of the two sub-networks. The LF image contains coarse-scale information, while the HF image includes non-rain edges and textures. Figs. 5-7 show the de-raining results of all six methods on three synthetic rainy images; for better visual comparison, we have zoomed in on specific regions of interest. DSC left visible rain streaks in the de-rained output. GMM over-smoothed the images and performed poorly in the presence of heavy rain. JORDER exhibited artifacts and blurred some details. DerainNet removed rain streaks well, but with slight color shifts and artifacts, as shown in Figs. 5(f) and 6(f). DDN achieved good performance but still removed some details of the background scenes and produced unnatural-looking output images; for example, the white lines on the wood and the bird's feathers, which resemble rain streaks, were removed in Figs. 5(g) and 6(g). DDN also left some rain effects under heavy rain, as shown in Fig. 7(g). In contrast, our proposed network removed rain streaks while preserving non-rain details. Tables Ⅰ and Ⅱ show the SSIM and PSNR results evaluated on the synthetic rainy images and Rain100. Our proposed method achieved the best results.

    Figure  4.  Results on the synthetic rainy image "umbrella" and outputs of two sub-networks
Figure 5. Results on a synthetic rainy image "dock". The bottom row shows enlarged views of the red-boxed regions in the top row
Figure 6. Results on a synthetic rainy image "bird". The bottom row shows enlarged views of the red-boxed regions in the top row
Figure 7. Results on a synthetic rainy image "flower". The bottom row shows enlarged views of the red-boxed regions in the top row
Table Ⅰ. QUANTITATIVE MEASUREMENT RESULTS USING SSIM ON SYNTHETIC TEST IMAGES

| Image | Rainy image | DSC | GMM | JORDER | DerainNet | DDN | Ours |
|---|---|---|---|---|---|---|---|
| dock | 0.7674 | 0.7807 | 0.8395 | 0.8060 | 0.8510 | 0.8591 | 0.8925 |
| bird | 0.5663 | 0.6185 | 0.7689 | 0.7176 | 0.8152 | 0.8638 | 0.9169 |
| flower | 0.5754 | 0.6702 | 0.7953 | 0.8863 | 0.8223 | 0.7929 | 0.9389 |
| Rain100 | 0.7090 | 0.7633 | 0.7935 | 0.8329 | 0.8472 | 0.9037 | 0.9317 |
Table Ⅱ. QUANTITATIVE MEASUREMENT RESULTS USING PSNR (DB) ON SYNTHETIC TEST IMAGES

| Image | Rainy image | DSC | GMM | JORDER | DerainNet | DDN | Ours |
|---|---|---|---|---|---|---|---|
| dock | 25.62 | 27.17 | 29.02 | 27.23 | 25.95 | 28.99 | 30.54 |
| bird | 19.80 | 24.09 | 23.12 | 22.91 | 20.60 | 27.23 | 30.03 |
| flower | 21.28 | 26.05 | 24.71 | 28.46 | 22.06 | 26.92 | 33.61 |
| Rain100 | 22.16 | 26.11 | 25.43 | 25.92 | 22.81 | 29.75 | 31.62 |

We also evaluated the proposed method and the five baselines on many real-world rainy images published by Fu et al. [14]. Fig. 8 shows three rainy images used for testing. Since ground truth images were unavailable, we compared the results visually. Figs. 9-11 present the de-raining results of all methods on three real-world rainy images. The restored images from DSC and GMM retained some rain streaks. JORDER left rain streaks and artifacts in the restored images. DerainNet worked well but introduced a color shift and artifacts, as in Figs. 9(d) and 10(d). DDN removed texture details and left some rain effects, as seen in Fig. 9(e) and the blurred bushes in Fig. 10(e); the purple flowers in Fig. 11(e) were over-smoothed.

Figure 8. Three real-world rainy images used for testing
    Figure  9.  Results on a real-world rainy image "people"
    Figure  10.  Results on a real-world rainy image "car"
    Figure  11.  Results on a real-world rainy image "street"

    In comparison, the restored images from our proposed method were visually cleaner and preserved more details. Although we trained our network with a synthetic dataset, it generalized well to the real-world test images and achieved superior performance. Overall, our method outperformed state-of-the-art single image rain removal methods significantly.

    We also performed an ablation study to investigate the advantages of image decomposition and the individual contributions of the main components of the proposed network for single image rain removal. We excluded some structures from the network and applied the same training procedures.

First, we used only the HF estimation sub-network to learn the mapping between a rainy image and the ground truth directly in the image domain (the HF-path network). Second, we used only the LF estimation sub-network in the same way (the LF-path network). In Figs. 12(c) and 12(d), some details of the bird's feathers are removed and some artifacts remain. In comparison, our proposed method outperforms the HF-path and LF-path networks, retaining richer details and achieving clearer visual results. Table Ⅲ shows the SSIM and PSNR results on the synthetic rainy images and the Rain100 set. These results indicate that the image decomposition strategy is significantly better for rain removal.

Figure 12. Results of the HF-path, the LF-path, and the proposed network on a synthetic rainy image "bird". The bottom row shows enlarged views of the red-boxed regions in the top row
Table Ⅲ. QUANTITATIVE MEASUREMENT RESULTS USING SSIM/PSNR (DB) ON SYNTHETIC TEST IMAGES

| Image | Rainy image | HF-path network | LF-path network | Proposed network |
|---|---|---|---|---|
| dock | 0.7674/25.62 | 0.8869/30.15 | 0.8841/29.82 | 0.8925/30.54 |
| bird | 0.5663/19.80 | 0.8961/27.42 | 0.9026/29.31 | 0.9169/30.03 |
| flower | 0.5754/21.28 | 0.9341/33.26 | 0.9112/31.83 | 0.9389/33.61 |
| Rain100 | 0.7090/22.16 | 0.9232/30.86 | 0.9167/30.52 | 0.9317/31.62 |

    In addition, we demonstrated the effectiveness of the loss function in our method by conducting the following experiments:

    1) L2 + TV: removed the LF-channel fidelity term from the proposed loss function.

    2) L2 + LF-channel fidelity: removed the TV regularization term from the proposed loss function.

The average PSNR and SSIM results evaluated on Rain100 are shown in Table Ⅳ. The full loss achieved higher SSIM and PSNR than either ablated variant, so we conclude that both terms of the proposed loss function contribute to the network's performance.

Table Ⅳ. QUANTITATIVE MEASUREMENT RESULTS USING SSIM/PSNR (DB) ON RAIN100

| Image | Rainy image | L2 + TV | L2 + LF-channel fidelity | Proposed method |
|---|---|---|---|---|
| Rain100 | 0.7090/22.16 | 0.9284/31.31 | 0.9275/31.08 | 0.9317/31.62 |

V. Conclusion

In this paper, we have proposed a single image rain removal algorithm based on image decomposition and a dense network. It focuses on the high-frequency components of images to remove rain streaks. Instead of using low-pass filters to decompose the image, we designed two sub-networks to estimate the low-frequency and non-rain high-frequency components of rainy images simultaneously, and we introduced a new refined loss function to optimize the two sub-networks jointly. Combining the learned LF and non-rain HF images yields the de-rained image. Our method requires no additional post-processing to improve the visual quality of restored images. Evaluations on synthetic and real-world rainy images have demonstrated that our proposed method noticeably outperforms several state-of-the-art methods.

References

[1] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: towards real-time object detection with region proposal networks," in Proc. Advances in Neural Information Processing Systems, Montreal, Canada, 2015, pp. 91-99.
[2] H. Nam and B. Han, "Learning multi-domain convolutional neural networks for visual tracking," in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA, 2016, pp. 4293-4302.
[3] J. H. Kim, C. Lee, J. Y. Sim, and C. S. Kim, "Single-image deraining using an adaptive nonlocal means filter," in Proc. IEEE Int. Conf. Image Processing, Melbourne, Australia, 2013, pp. 914-917.
[4] Y. L. Chen and C. T. Hsu, "A generalized low-rank appearance model for spatio-temporally correlated rain streaks," in Proc. IEEE Int. Conf. Computer Vision, Sydney, Australia, 2013, pp. 1968-1975.
[5] Y. Chang, L. Yan, and S. Zhong, "Transformed low-rank model for line pattern noise removal," in Proc. IEEE Int. Conf. Computer Vision, Venice, Italy, 2017, pp. 1726-1734.
[6] L. W. Kang, C. W. Lin, and Y. H. Fu, "Automatic single-image-based rain streaks removal via image decomposition," IEEE Trans. Image Processing, vol. 21, no. 4, pp. 1742-1755, 2012. doi: 10.1109/TIP.2011.2179057
[7] D. A. Huang, L. W. Kang, Y. C. F. Wang, and C. W. Lin, "Self-learning based image decomposition with applications to single image denoising," IEEE Trans. Multimedia, vol. 16, no. 1, pp. 83-93, 2013.
[8] S. H. Sun, S. P. Fan, and Y. C. F. Wang, "Exploiting image structural similarity for single image rain removal," in Proc. IEEE Int. Conf. Image Processing, Quebec, Canada, 2015, pp. 4482-4486.
[9] Y. Luo, Y. Xu, and H. Ji, "Removing rain from a single image via discriminative sparse coding," in Proc. IEEE Int. Conf. Computer Vision, Santiago, Chile, 2015, pp. 3397-3405.
[10] Y. Li, R. T. Tan, X. J. Guo, J. B. Lu, and M. S. Brown, "Rain streak removal using layer priors," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Las Vegas, USA, 2016, pp. 2736-2744.
[11] L. Zhu, C. W. Fu, D. Lischinski, and P. A. Heng, "Joint bi-layer optimization for single-image rain streak removal," in Proc. IEEE Int. Conf. Computer Vision, Venice, Italy, 2017, pp. 2526-2534.
[12] D. Eigen, D. Krishnan, and R. Fergus, "Restoring an image taken through a window covered with dirt or rain," in Proc. IEEE Int. Conf. Computer Vision, Sydney, Australia, 2013, pp. 633-640.
[13] X. Fu, J. Huang, X. Ding, Y. Liao, and J. Paisley, "Clearing the skies: a deep network architecture for single-image rain removal," IEEE Trans. Image Processing, vol. 26, no. 6, pp. 2944-2956, 2017. doi: 10.1109/TIP.2017.2691802
[14] X. Fu, J. Huang, D. Zeng, Y. Huang, X. Ding, and J. Paisley, "Removing rain from single images via a deep detail network," in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), Honolulu, USA, 2017.
[15] W. Yang, R. T. Tan, J. Feng, J. Liu, Z. Guo, and S. Yan, "Deep joint rain detection and removal from a single image," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Honolulu, USA, 2017, pp. 1357-1366.
[16] H. Zhang, V. Sindagi, and V. M. Patel, "Image de-raining using a conditional generative adversarial network," IEEE Trans. Circuits and Systems for Video Technology, early access, Jan. 2017.
[17] Z. F. Fan, H. F. Wu, X. Y. Fu, Y. Huang, and X. Y. Ding, "Residual-guide feature fusion network for single image deraining," [Online]. Available: https://arxiv.org/abs/1804.07493, Mar. 26, 2019.
[18] S. Li, W. Ren, J. Zhang, J. Yu, and X. Guo, "Fast single image rain removal via a deep decomposition-composition network," [Online]. Available: https://arxiv.org/abs/1804.02688, Mar. 26, 2019.
[19] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Honolulu, USA, 2017.
[20] J. Bossu, N. Hautière, and J. P. Tarel, "Rain or snow detection in image sequences through use of a histogram of orientation of streaks," Int. J. Computer Vision, vol. 93, no. 3, pp. 348-367, 2011. doi: 10.1007/s11263-011-0421-7
[21] K. Garg and S. K. Nayar, "Detection and removal of rain from videos," in Proc. IEEE Computer Society Conf. Computer Vision and Pattern Recognition (CVPR), Washington, USA, 2004, pp. I-528-I-535.
[22] J. H. Kim, J. Y. Sim, and C. S. Kim, "Video deraining and desnowing using temporal correlation and low-rank matrix completion," IEEE Trans. Image Processing, vol. 24, no. 9, pp. 2658-2670, 2015. doi: 10.1109/TIP.2015.2428933
[23] V. Santhaseelan and V. K. Asari, "Utilizing local phase information to remove rain from video," Int. J. Computer Vision, vol. 112, no. 1, pp. 71-89, 2015. doi: 10.1007/s11263-014-0759-8
[24] S. You, R. T. Tan, R. Kawakami, Y. Mukaigawa, and K. Ikeuchi, "Adherent raindrop modeling, detection and removal in video," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 38, no. 9, pp. 1721-1733, 2016. doi: 10.1109/TPAMI.2015.2491937
[25] C. Tomasi and R. Manduchi, "Bilateral filtering for gray and color images," in Proc. IEEE Int. Conf. Computer Vision, Bombay, India, 1998, pp. 839-846.
[26] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, "Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising," IEEE Trans. Image Processing, vol. 26, no. 7, pp. 3142-3155, Jul. 2017.
[27] B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, "DehazeNet: an end-to-end system for single image haze removal," IEEE Trans. Image Processing, vol. 25, no. 11, pp. 5187-5198, 2016. doi: 10.1109/TIP.2016.2598681
[28] C. Dong, C. C. Loy, K. He, and X. Tang, "Image super-resolution using deep convolutional networks," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 38, no. 2, pp. 295-307, 2016. doi: 10.1109/TPAMI.2015.2439281
[29] K. He, J. Sun, and X. Tang, "Guided image filtering," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1397-1409, 2013. doi: 10.1109/TPAMI.2012.213
[30] K. M. He, X. Y. Zhang, S. Q. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Las Vegas, USA, 2016, pp. 770-778.
[31] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proc. Int. Conf. Neural Information Processing Systems, Lake Tahoe, USA, 2012, pp. 1097-1105.
[32] F. Yu and V. Koltun, "Multi-scale context aggregation by dilated convolutions," [Online]. Available: https://arxiv.org/abs/1511.07122, Mar. 26, 2019.
[33] H. Le and A. Borji, "What are the receptive, effective receptive, and projective fields of neurons in convolutional neural networks?" [Online]. Available: https://arxiv.org/abs/1705.07049, Mar. 26, 2019.
[34] L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D: Nonlinear Phenomena, vol. 60, no. 1-4, pp. 259-268, 1992. doi: 10.1016/0167-2789(92)90223-A
[35] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Trans. Image Processing, vol. 13, no. 4, pp. 600-612, 2004. doi: 10.1109/TIP.2003.819861
[36] S. Patterson, "Photoshop photo effects tutorials," [Online]. Available: http://www.photoshopessentials.com/photo-effects, Mar. 26, 2019.
[37] P. Arbeláez, M. Maire, C. Fowlkes, and J. Malik, "Contour detection and hierarchical image segmentation," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 33, no. 5, pp. 898-916, 2011. doi: 10.1109/TPAMI.2010.161
[38] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, and M. Devin, "TensorFlow: a system for large-scale machine learning," [Online]. Available: https://arxiv.org/abs/1605.08695, Mar. 26, 2019.
[39] D. Kingma and J. Ba, "Adam: a method for stochastic optimization," [Online]. Available: https://arxiv.org/abs/1412.6980, Mar. 26, 2019.
[40] X. Glorot and Y. Bengio, "Understanding the difficulty of training deep feedforward neural networks," in Proc. 13th Int. Conf. Artificial Intelligence and Statistics, Sardinia, Italy, 2010, pp. 249-256.
