Please use this identifier to cite or link to this item: https://idr.l3.nitk.ac.in/jspui/handle/123456789/17747
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | C D, Jaidhar | -
dc.contributor.author | Naik, Dinesh | -
dc.date.accessioned | 2024-05-14T04:31:36Z | -
dc.date.available | 2024-05-14T04:31:36Z | -
dc.date.issued | 2023 | -
dc.identifier.uri | http://idr.nitk.ac.in/jspui/handle/123456789/17747 | -
dc.description.abstract | A long-standing goal of artificial intelligence in Computer Vision has been to develop models capable of perceiving and comprehending the complex visual environment around us and communicating with us about it in natural language. Significant progress has been made toward this goal over the last few years as a result of parallel advancements in computing systems, data collection, and algorithms. Visual recognition has advanced at a breakneck pace: computers can now classify images, recognise objects within them, and describe them in ever longer sentences, rivalling human performance in various categories and even surpassing it in some instances. Despite this tremendous progress, most improvements in visual recognition still occur in settings where an image is labelled with one or a few labels and briefly described in natural language. Most people find it straightforward to watch a brief video and describe in words what occurred, yet machines have a difficult time extracting meaning from video frames and generating a sentence description. Computer vision research has long focused on comprehending visual media such as images and videos, and a newer problem within this area, dynamic image and video transcription, has attracted considerable interest. This research presents models and methods for associating visual data with semantic labels and visual data with natural language utterances, thereby simplifying translation between domain constituents. Semantic segmentation is a fundamental component of object recognition models, as it aims to classify objects on a pixel-by-pixel basis. The primary goal of this research is, first, to classify an individual object within an image pixel by pixel; the provided image is evaluated to ascertain the pixel-level properties that are present. Second, we propose an encoder-decoder architecture with a hybrid loss function that employs a layered LSTM as the encoder and an LSTM model combined with an attention mechanism as the decoder. Third, we propose a novel framework for video captioning that combines a bidirectional multi-layer LSTM encoder and a unidirectional decoder with a temporal attention technique to produce superior global representations for videos (an illustrative sketch of this kind of architecture follows this record). Finally, we propose an efficient method for captioning videos using a CNN in conjunction with a short-connected LSTM-based encoder-decoder model and a phrase context vector. | en_US
dc.language.iso | en | en_US
dc.publisher | National Institute Of Technology Karnataka Surathkal | en_US
dc.subject | Computer Vision | en_US
dc.subject | Object Detection | en_US
dc.subject | Semantic Segmentation | en_US
dc.subject | Object Recognition | en_US
dc.title | A Region Based Semantic Composition Framework to Visual Image and Video Event Specification | en_US
dc.type | Thesis | en_US
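
The third contribution in the abstract (a bidirectional multi-layer LSTM encoder paired with a unidirectional LSTM decoder that attends over encoder states) follows a standard encoder-decoder captioning pattern. Below is a minimal, self-contained sketch of that pattern in PyTorch. It is an illustration only, assuming per-frame CNN features as input; every class name, dimension, and hyperparameter here (TemporalAttention, CaptionModel, feat_dim=2048, and so on) is a hypothetical choice of this sketch, not taken from the thesis.

# A minimal sketch of a video-captioning encoder-decoder with temporal
# attention, in the spirit of the architecture the abstract describes.
# All names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalAttention(nn.Module):
    """Additive (Bahdanau-style) attention over encoder time steps."""
    def __init__(self, enc_dim, dec_dim, attn_dim):
        super().__init__()
        self.enc_proj = nn.Linear(enc_dim, attn_dim)
        self.dec_proj = nn.Linear(dec_dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1)

    def forward(self, enc_states, dec_state):
        # enc_states: (B, T, enc_dim); dec_state: (B, dec_dim)
        energy = torch.tanh(self.enc_proj(enc_states)
                            + self.dec_proj(dec_state).unsqueeze(1))      # (B, T, attn_dim)
        weights = F.softmax(self.score(energy).squeeze(-1), dim=1)        # (B, T)
        context = torch.bmm(weights.unsqueeze(1), enc_states).squeeze(1)  # (B, enc_dim)
        return context, weights

class CaptionModel(nn.Module):
    def __init__(self, feat_dim=2048, enc_hidden=512, dec_hidden=512,
                 vocab_size=10000, embed_dim=300):
        super().__init__()
        # Bidirectional multi-layer LSTM over per-frame CNN features.
        self.encoder = nn.LSTM(feat_dim, enc_hidden, num_layers=2,
                               batch_first=True, bidirectional=True)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.attend = TemporalAttention(2 * enc_hidden, dec_hidden, 256)
        # The decoder step consumes the previous word embedding
        # concatenated with the attention context.
        self.decoder = nn.LSTMCell(embed_dim + 2 * enc_hidden, dec_hidden)
        self.out = nn.Linear(dec_hidden, vocab_size)

    def forward(self, frame_feats, captions):
        # frame_feats: (B, T, feat_dim); captions: (B, L) token ids
        enc_states, _ = self.encoder(frame_feats)   # (B, T, 2*enc_hidden)
        B = frame_feats.size(0)
        h = frame_feats.new_zeros(B, self.decoder.hidden_size)
        c = frame_feats.new_zeros(B, self.decoder.hidden_size)
        logits = []
        for t in range(captions.size(1) - 1):       # teacher forcing
            context, _ = self.attend(enc_states, h)
            step_in = torch.cat([self.embed(captions[:, t]), context], dim=1)
            h, c = self.decoder(step_in, (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)           # (B, L-1, vocab_size)

# Smoke test with random inputs: 2 clips of 8 frames, 2048-d features each.
model = CaptionModel()
feats = torch.randn(2, 8, 2048)
caps = torch.randint(0, 10000, (2, 12))
print(model(feats, caps).shape)  # torch.Size([2, 11, 10000])

The attention module recomputes a weight over the encoder's time steps at every decoding step, so the decoder can focus on different frames while emitting different words; the thesis's actual loss function, layer counts, and short-connection details are not recorded on this page.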
Appears in Collections: 1. Ph.D Theses

Files in This Item:
File | Description | Size | Format
110652-IT11P01-DINESH NAIK.pdf |  | 19.64 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.