Publication:
A real-time framework for video dehazing using bounded transmission and controlled Gaussian filter

dc.contributor.affiliations: Faculty of Science and Technology
dc.contributor.affiliations: Universiti Sains Islam Malaysia (USIM)
dc.contributor.affiliations: Universiti Pendidikan Sultan Idris (UPSI)
dc.contributor.author: Alajarmeh A.
dc.contributor.author: Zaidan A.A.
dc.date.accessioned: 2024-05-28T08:24:36Z
dc.date.available: 2024-05-28T08:24:36Z
dc.date.issued: 2018
dc.description.abstract: The haze phenomenon exerts a degrading effect that decreases contrast and causes color shifts in outdoor images and videos. The presence of haze in outdoor images and videos is bothersome, unpleasant, and occasionally even dangerous. The atmospheric light scattering (ALS) model is widely used to restore hazy images. In this model, two unknown parameters must be estimated: the airlight and the scene transmission. The quality of dehazed images and video frames depends considerably on these two parameters, as well as on the speed and accuracy of the refinement of the approximated scene transmission. This refinement is necessary to ensure spatial coherency of the dehazed output video; spatial coherency must be preserved in order to eliminate the flickering artifacts usually observed when single-image dehazing methods are extended to video. Classic methods typically require high computational capacity to dehaze videos in real time, which makes them inappropriate in the driver-assistance context, where mobile environments usually have limited resources. To address this issue, this study proposes a framework for real-time video dehazing. The framework consists of two stages: a single-image dehazing stage using the bounded transmission (BT) method, which dehazes a single video frame in real time with high accuracy; and a transmission refinement stage using a filter we call the controlled Gaussian filter (CGF), proposed for the linear and simplified refinement of the scene transmission. To evaluate the proposed framework, three image datasets and two video streams are employed. Experimental results show that the single-image stage of the proposed framework is at least seven times faster than existing methods. In addition, an analysis of variance (ANOVA) test shows that the quality of dehazed images in this stage is statistically similar to or better than that obtained with existing methods. Experiments also show that the video stage of the proposed framework is capable of real-time video dehazing with better quality than existing methods.
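The abstract describes restoration under the ALS model, in which a hazy image I is formed as I = J·t + A·(1 − t) from the scene radiance J, airlight A, and transmission t, so dehazing amounts to estimating A and t and inverting the model. The sketch below is a minimal illustration of that inversion only: it uses a dark-channel-style coarse transmission estimate and a plain Gaussian smoothing step as stand-ins for the paper's bounded transmission (BT) method and controlled Gaussian filter (CGF), whose exact formulations are not given in this record; `omega`, `t_min`, and `sigma` are illustrative parameters, not values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, minimum_filter

def dehaze_als(image, airlight=None, omega=0.95, t_min=0.1, sigma=5):
    """Invert the ALS model I = J*t + A*(1 - t) for a single frame.

    This is a generic sketch, not the BT/CGF method from the paper.
    """
    img = image.astype(np.float64) / 255.0
    if airlight is None:
        # Crude airlight estimate: brightest value per color channel.
        airlight = img.reshape(-1, 3).max(axis=0)
    # Coarse transmission via a dark-channel-style estimate (stand-in for BT).
    norm = img / airlight
    dark = minimum_filter(norm.min(axis=2), size=15)
    t = 1.0 - omega * dark
    # Refine the transmission map with a plain Gaussian filter (stand-in
    # for the paper's controlled Gaussian filter) to keep it spatially smooth.
    t = gaussian_filter(t, sigma=sigma)
    t = np.clip(t, t_min, 1.0)
    # Recover scene radiance: J = (I - A) / t + A.
    J = (img - airlight) / t[..., None] + airlight
    return np.clip(J, 0.0, 1.0), t
```

Lower-bounding t by `t_min` keeps the inversion numerically stable in dense-haze regions, and smoothing t between frames is what suppresses the flickering artifacts the abstract mentions.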
dc.description.nature: Final
dc.identifier.CODEN: MTAPF
dc.identifier.doi: 10.1007/s11042-018-5861-4
dc.identifier.epage: 26350
dc.identifier.issn: 1380-7501
dc.identifier.issue: 20
dc.identifier.scopus: 2-s2.0-85043681088
dc.identifier.spage: 26315
dc.identifier.uri: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85043681088&doi=10.1007%2fs11042-018-5861-4&partnerID=40&md5=2c02b2b462a625e78be7b408f009b7a8
dc.identifier.uri: https://oarep.usim.edu.my/handle/123456789/8526
dc.identifier.volume: 77
dc.language: English
dc.language.iso: en_US
dc.publisher: Springer New York LLC
dc.relation.ispartof: Multimedia Tools and Applications
dc.source: Scopus
dc.subject: Airlight estimation
dc.subject: Atmospheric light scattering model
dc.subject: Bounded transmission
dc.subject: Image integrals
dc.subject: Real-time video dehazing
dc.subject: Single-image dehazing
dc.title: A real-time framework for video dehazing using bounded transmission and controlled Gaussian filter
dc.title.alternative: Multimedia Tools Appl
dc.type: Article
dspace.entity.type: Publication
