Daily Abstract Digest

[24.08.16 / ECCV '22] Fast and High Quality Image Denoising via Malleable Convolution

Emos Yalp 2024. 8. 16. 23:11

https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136780420.pdf

Abstract

Most image denoising networks apply a single set of static convolutional kernels across the entire input image. This is sub-optimal for natural images, as they often consist of heterogeneous visual patterns. Dynamic convolution tries to address this issue by using per-pixel convolution kernels, but this greatly increases computational cost. In this work, we present Malleable Convolution (MalleConv), which performs spatial-varying processing with minimal computational overhead. MalleConv uses a smaller set of spatially-varying convolution kernels, a compromise between static and per-pixel convolution kernels. These spatially-varying kernels are produced by an efficient predictor network running on a downsampled input, making them much more efficient to compute than per-pixel kernels produced from a full-resolution image, and also enlarging the network's receptive field compared with static kernels. These kernels are then jointly upsampled and applied to a full-resolution feature map through an efficient on-the-fly slicing operator with minimum memory overhead. To demonstrate the effectiveness of MalleConv, we use it to build an efficient denoising network we call MalleNet. MalleNet achieves high-quality results without very deep architectures, making it 8.9× faster than the best performing denoising algorithms while achieving similar visual quality. We also show that a single MalleConv layer added to a standard convolution-based backbone can significantly reduce the computational cost or boost image quality at a similar cost.


  • Task: Image Denoising
  • Problem Definition: static conv applies one kernel set everywhere, which is sub-optimal for heterogeneous visual patterns; per-pixel dynamic conv adapts to them but is too expensive to compute
  • Approach: spatially-varying kernels predicted on a downsampled input, then jointly upsampled and applied to the full-resolution feature map, giving adaptive processing with minimal computational overhead (see the sketch below)
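A minimal sketch of the MalleConv idea, written from the abstract rather than the authors' code: the kernel size, downsample factor, and tiny predictor network below are my own assumptions, and the per-position kernels are applied with a naive F.unfold instead of the paper's memory-efficient on-the-fly slicing operator.

```python
# Sketch of a MalleConv-style spatially-varying convolution (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MalleConvSketch(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3, downsample: int = 8):
        super().__init__()
        self.c, self.k, self.down = channels, kernel_size, downsample
        # Lightweight predictor that runs on a downsampled input and emits one
        # depthwise k*k kernel per channel at each low-resolution position.
        self.predictor = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels * kernel_size * kernel_size, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # 1) Predict a small grid of spatially-varying kernels on a downsampled
        #    copy of the input (cheap to compute, large receptive field).
        x_low = F.interpolate(x, scale_factor=1.0 / self.down,
                              mode="bilinear", align_corners=False)
        kernels = self.predictor(x_low)                      # (b, c*k*k, h/d, w/d)
        # 2) Upsample the kernel grid back to full resolution. The paper does this
        #    jointly with application via a slicing operator; plain interpolation
        #    is simpler but uses more memory.
        kernels = F.interpolate(kernels, size=(h, w),
                                mode="bilinear", align_corners=False)
        kernels = kernels.view(b, c, self.k * self.k, h, w)
        # 3) Apply the per-position kernels as a depthwise convolution over
        #    unfolded image patches.
        patches = F.unfold(x, self.k, padding=self.k // 2)   # (b, c*k*k, h*w)
        patches = patches.view(b, c, self.k * self.k, h, w)
        return (patches * kernels).sum(dim=2)                # (b, c, h, w)


if __name__ == "__main__":
    feat = torch.randn(1, 16, 64, 64)
    out = MalleConvSketch(16)(feat)
    print(out.shape)  # torch.Size([1, 16, 64, 64])
```

The key trade-off this illustrates: predicting kernels at 1/8 resolution keeps the predictor cost small, while the upsampled kernels still vary across space, unlike a static convolution.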