Publication: A real-time framework for video dehazing using bounded transmission and controlled Gaussian filter
dc.contributor.author | Alajarmeh, A | en_US |
dc.contributor.author | Zaidan, AA | en_US |
dc.date.accessioned | 2024-05-29T02:53:17Z | |
dc.date.available | 2024-05-29T02:53:17Z | |
dc.date.issued | 2018 | |
dc.description.abstract | Haze degrades outdoor images and videos by decreasing contrast and causing color shifts. The presence of haze in outdoor images and videos is bothersome, unpleasant, and occasionally even dangerous. The atmospheric light scattering (ALS) model is widely used to restore hazy images. In this model, two unknown parameters must be estimated: the airlight and the scene transmission. The quality of dehazed images and video frames depends considerably on these two parameters, as well as on the speed and accuracy of refining the approximated scene transmission; this refinement is necessary to ensure spatial coherency of the dehazed output video. Spatial coherency must be preserved in order to eliminate the flickering artifacts usually observed when single-image dehazing methods are extended to video. Classic methods typically require high computational capacity to dehaze videos in real time, which makes them unsuitable in the driver-assistance context, where mobile environments usually have limited resources. To address this issue, this study proposes a framework for real-time video dehazing. The framework consists of two stages: a single-image dehazing stage using the bounded transmission (BT) method, which dehazes a single video frame in real time with high accuracy; and a transmission refinement stage using a filter we call the controlled Gaussian filter (CGF), proposed for the linear and simplified refinement of the scene transmission. To evaluate the proposed framework, three image datasets and two video streams are employed. Experimental results show that the single-image stage of the proposed framework is at least seven times faster than existing methods. 
In addition, an analysis of variance (ANOVA) test shows that the quality of dehazed images in this stage is statistically similar to or better than that obtained with existing methods. Experiments also show that the video stage of the proposed framework achieves real-time video dehazing with better quality than existing methods. | |
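The abstract's ALS model restores a hazy observation I(x) = J(x)·t(x) + A·(1 − t(x)) by solving for the scene radiance J once the airlight A and transmission t are estimated. The sketch below shows only this generic ALS inversion, not the paper's BT or CGF stages; the function name `dehaze_als` and the lower bound `t_min` are illustrative assumptions.

```python
import numpy as np

def dehaze_als(I, A, t, t_min=0.1):
    """Invert the ALS model I = J*t + A*(1 - t) to recover J.

    I : (H, W, 3) hazy frame with values in [0, 1]
    A : (3,) estimated airlight
    t : (H, W) estimated per-pixel transmission

    t is clamped below by t_min so that division does not
    amplify noise where the haze is dense (t close to 0).
    """
    t = np.maximum(t, t_min)
    J = (I - A) / t[..., None] + A   # broadcast t over the color axis
    return np.clip(J, 0.0, 1.0)

# Toy example: a uniform 2x2 RGB frame under uniform haze.
I = np.full((2, 2, 3), 0.8)     # hazy input
A = np.array([0.9, 0.9, 0.9])   # estimated airlight
t = np.full((2, 2), 0.5)        # estimated transmission
J = dehaze_als(I, A, t)         # each channel: (0.8-0.9)/0.5 + 0.9 = 0.7
```

The per-pixel inversion is embarrassingly parallel, which is why the cost of a real-time pipeline is dominated by estimating and refining t rather than by this final step.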
dc.identifier.doi | 10.1007/s11042-018-5861-4 | |
dc.identifier.epage | 26350 | |
dc.identifier.eissn | 1573-7721 | |
dc.identifier.issn | 1380-7501 | |
dc.identifier.issue | 20 | |
dc.identifier.wos | WOS:000444201500007 | |
dc.identifier.spage | 26315 | |
dc.identifier.uri | https://oarep.usim.edu.my/handle/123456789/11380 | |
dc.identifier.volume | 77 | |
dc.language | English | |
dc.language.iso | en_US | |
dc.publisher | Springer | en_US |
dc.relation.ispartof | Multimedia Tools and Applications | |
dc.source | Web of Science (ISI) | |
dc.subject | Single-image dehazing | en_US |
dc.subject | Real-time video dehazing | en_US |
dc.subject | Atmospheric light scattering model | en_US |
dc.subject | Bounded transmission | en_US |
dc.subject | Image integrals | en_US |
dc.subject | Airlight estimation | en_US |
dc.title | A real-time framework for video dehazing using bounded transmission and controlled Gaussian filter | |
dc.type | Article | en_US |
dspace.entity.type | Publication |