Please use this identifier to cite or link to this item: https://idr.l3.nitk.ac.in/jspui/handle/123456789/16398
Full metadata record
DC Field: Value [Language]
dc.contributor.author: Chilukuri P.K.
dc.contributor.author: Padala P.
dc.contributor.author: Padala P.
dc.contributor.author: Desanamukula V.S.
dc.contributor.author: Pvgd P.R.
dc.date.accessioned: 2021-05-05T10:30:23Z
dc.date.available: 2021-05-05T10:30:23Z
dc.date.issued: 2021
dc.identifier.citation: IEEE Access, Vol. 9, pp. 16761-16782 [en_US]
dc.identifier.uri: https://doi.org/10.1109/ACCESS.2021.3052474
dc.identifier.uri: http://idr.nitk.ac.in/jspui/handle/123456789/16398
dc.description.abstract: Image stitching (or mosaicing) is an active research topic with numerous use-cases in the computer-vision, AR/VR, and computer-graphics domains, but maintaining homogeneity among the input image sequences during the stitching/mosaicing process remains a primary limitation and major disadvantage. To tackle this limitation, this article introduces a robust and reliable image-stitching methodology (l,r-Stitch Unit), which takes multiple non-homogeneous image sequences as input and generates a reliable, panoramically stitched wide view as the final output. The l,r-Stitch Unit consists of pre-processing and post-processing sub-modules and an l,r-PanoED network, where each sub-module is a robust ensemble of several deep-learning and computer-vision image-handling techniques. This article also introduces a novel convolutional encoder-decoder deep neural network (l,r-PanoED network) with a unique split-encoding-network methodology to stitch non-coherent left/right stereo input image pairs. The encoder network of the proposed l,r-PanoED extracts semantically rich deep feature maps from the input to stitch/map them into a wide-panoramic domain; the feature-extraction and feature-mapping operations are performed simultaneously in the l,r-PanoED's encoder network based on the split-encoding-network methodology. The decoder network of l,r-PanoED adaptively reconstructs the output panoramic view from the encoder network's bottleneck feature maps. The proposed l,r-Stitch Unit has been rigorously benchmarked against alternative image-stitching methodologies on our custom-built traffic dataset and several other public datasets. Multiple evaluation metrics (SSIM, PSNR, MSE, L_{\alpha,\beta,\gamma}, FM-rate, average latency time) and wild conditions (rotational/color/intensity variances, noise, etc.) were considered during the benchmarking analysis; based on the results, the proposed method outperformed the other image-stitching methodologies and proved effective even on wild, non-homogeneous inputs. © 2013 IEEE. [en_US]
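Two of the benchmark metrics named in the abstract, MSE and PSNR, follow standard definitions. The sketch below shows those standard formulas in NumPy; it is an illustration only, not the paper's own evaluation code, and the toy image patches are invented for the example.

```python
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared error between two images of the same shape."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a: np.ndarray, b: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means the images are closer."""
    err = mse(a, b)
    if err == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(max_val ** 2 / err))

# Toy 8-bit "stitched output" vs "reference" patches (hypothetical data)
ref = np.full((4, 4), 100, dtype=np.uint8)
out = ref.copy()
out[0, 0] = 110  # one pixel off by 10 -> MSE = 100/16 = 6.25
print(mse(ref, out))   # 6.25
print(psnr(ref, out))  # ~40.17 dB
```

Lower MSE and higher PSNR indicate a stitched view closer to the reference; the paper additionally reports SSIM, L_{\alpha,\beta,\gamma}, FM-rate, and latency, which are not sketched here.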
dc.title: L, r-Stitch Unit: Encoder-Decoder-CNN Based Image-Mosaicing Mechanism for Stitching Non-Homogeneous Image Sequences [en_US]
dc.type: Article [en_US]
Appears in Collections: 1. Journal Articles

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.