Innovative Motion Deblurring using a Blur Space Disentangled Net and a Hierarchy Scale-Recurrent Deblurring Network

  • D. Raju, Ramgopal

Abstract

The application of deep learning (DL) techniques to motion deblurring has shown encouraging results, thanks to the availability of large-scale datasets and sophisticated network architectures. Two problems persist, however: first, current approaches perform well on synthetic datasets but degrade sharply when faced with complicated real-world blur; second, restored images whose blur is over- or under-estimated exhibit fuzzy or even distorted edges. To address these problems, we present a motion deblurring framework that combines two networks built around blur space disentanglement: a Blur Space Disentangled Network (BSDNet) and a Hierarchy Scale-Recurrent Deblurring Network (HSDNet). We train a model that blurs images in order to make it easier to learn a model that deblurs images. At its core, BSDNet is a versatile blur transfer, dataset augmentation, and deblurring model controller; it first learns to extract blur characteristics from blurry images. HSDNet then uses the blur characteristics obtained by BSDNet as a prior and divides the non-uniform deblurring task into multiple subtasks, progressively recovering sharp details from coarse to fine. In addition, the motion blur dataset generated by BSDNet helps close the gap between training images and real-world blur. Experiments on real-world blur datasets show that our method achieves the best performance among numerous state-of-the-art methods, even in complicated settings.
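To make the coarse-to-fine, scale-recurrent idea concrete, the following is a minimal PyTorch sketch (not the authors' released code) of a deblurring stage whose weights are shared across scales and which is conditioned at each scale on a blur feature acting as a stand-in for the BSDNet prior. All class names, channel sizes, and the number of scales are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of coarse-to-fine, scale-recurrent deblurring conditioned on a
# blur-feature prior. Hypothetical names and hyperparameters throughout.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurEncoderSketch(nn.Module):
    """Stand-in for a BSDNet-style blur feature extractor."""
    def __init__(self, feat_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, blurry):
        return self.net(blurry)

class ScaleDeblurSketch(nn.Module):
    """One deblurring stage whose weights are reused at every scale."""
    def __init__(self, feat_ch=32):
        super().__init__()
        # Input: blurry image + upsampled previous estimate + blur prior.
        self.net = nn.Sequential(
            nn.Conv2d(3 + 3 + feat_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),
        )
    def forward(self, blurry, prev_estimate, blur_feat):
        x = torch.cat([blurry, prev_estimate, blur_feat], dim=1)
        # Residual prediction: refine the previous (coarser) estimate.
        return prev_estimate + self.net(x)

def coarse_to_fine_deblur(blurry, blur_encoder, deblur_stage, num_scales=3):
    """Run the shared stage from the coarsest scale up to full resolution."""
    estimate = None
    for s in reversed(range(num_scales)):      # s = 2 (coarsest) ... 0 (finest)
        scale = 1.0 / (2 ** s)
        blurry_s = blurry if s == 0 else F.interpolate(
            blurry, scale_factor=scale, mode="bilinear", align_corners=False)
        if estimate is None:
            estimate = blurry_s                # coarsest scale starts from the input
        else:
            estimate = F.interpolate(estimate, size=blurry_s.shape[-2:],
                                     mode="bilinear", align_corners=False)
        blur_feat = blur_encoder(blurry_s)     # per-scale blur prior
        estimate = deblur_stage(blurry_s, estimate, blur_feat)
    return estimate

if __name__ == "__main__":
    x = torch.randn(1, 3, 128, 128)            # dummy blurry image
    sharp = coarse_to_fine_deblur(x, BlurEncoderSketch(), ScaleDeblurSketch())
    print(sharp.shape)                         # torch.Size([1, 3, 128, 128])
```

Sharing one stage across scales keeps the parameter count small while letting each finer scale refine the upsampled estimate from the previous one; the blur feature simply enters as an extra conditioning input at every scale.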

Published
2021-04-02
How to Cite
D. Raju. (2021). Innovative Motion Deblurring using a Blur Space Disentangled Net and a Hierarchy Scale-Recurrent Deblurring Network. International Journal of Advanced Science and Technology, 30(01), 378 - 390. Retrieved from http://sersc.org/journals/index.php/IJAST/article/view/38459
Section
Articles