Please use this identifier to cite or link to this item: https://idr.l3.nitk.ac.in/jspui/handle/123456789/7261
Full metadata record
DC Field | Value | Language
dc.contributor.author | Vikram, M. | -
dc.contributor.author | Suhas, B.S. | -
dc.contributor.author | Anantharaman, A. | -
dc.contributor.author | Sowmya, Kamath S. | -
dc.date.accessioned | 2020-03-30T09:58:43Z | -
dc.date.available | 2020-03-30T09:58:43Z | -
dc.date.issued | 2019 | -
dc.identifier.citation | ACM International Conference Proceeding Series, 2019, pp. 44-51 | en_US
dc.identifier.uri | https://idr.nitk.ac.in/jspui/handle/123456789/7261 | -
dc.description.abstract | Modern medical practice is increasingly dependent on medical imaging for clinical analysis and diagnosis of patient illnesses. A significant challenge when dealing with the extensively available medical data is that it often consists of heterogeneous modalities. Existing works in content-based medical image retrieval (CBMIR) have several limitations, as they focus mainly on either visual or textual features for retrieval. Given the unique manifold of medical data, we seek to leverage both the visual and textual modalities to improve image retrieval. We propose a Latent Dirichlet Allocation (LDA)-based technique for encoding the visual features and show that these features effectively model the medical images. We explore early-fusion and late-fusion techniques to combine these visual features with the textual features. The proposed late-fusion technique achieved a higher mAP than the state of the art on the ImageCLEF 2009 dataset, underscoring its suitability for effective multimodal medical image retrieval. © 2019 Association for Computing Machinery. | en_US
dc.title | An approach for multimodal medical image retrieval using latent dirichlet allocation | en_US
dc.type | Book chapter | en_US
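The abstract describes two technical ingredients: encoding each image's visual content as an LDA topic distribution, and late fusion, where visual and textual similarity scores are computed separately and then combined. A minimal sketch of that pipeline, using scikit-learn and entirely synthetic data (the bag-of-visual-words counts, captions, and the fusion weight `alpha` are all illustrative assumptions, not details from the paper):

```python
# Hedged sketch: LDA-encoded visual features + late fusion with text features.
# All data here is synthetic; a real system would use per-image
# bag-of-visual-words counts and the ImageCLEF captions.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)

# Synthetic bag-of-visual-words counts: 6 "images" x 50 visual words.
bovw = rng.integers(0, 5, size=(6, 50))

# Encode each image as a distribution over latent topics (the LDA features).
lda = LatentDirichletAllocation(n_components=4, random_state=0)
visual_feats = lda.fit_transform(bovw)          # shape (6, 4), rows sum to 1

# Textual features from the (synthetic) captions of the same images.
captions = [
    "chest x-ray frontal view", "brain mri axial slice",
    "abdominal ct scan", "chest x-ray lateral view",
    "hand radiograph fracture", "brain mri sagittal slice",
]
tfidf = TfidfVectorizer()
text_feats = tfidf.fit_transform(captions)

# Late fusion: score each modality separately, then combine the scores.
query = 0                                       # use image 0 as the query
vis_scores = cosine_similarity(visual_feats[query:query + 1], visual_feats)[0]
txt_scores = cosine_similarity(text_feats[query], text_feats).ravel()
alpha = 0.5                                     # assumed fusion weight
fused = alpha * vis_scores + (1 - alpha) * txt_scores

ranking = np.argsort(-fused)                    # best matches first
print(ranking)
```

Early fusion, by contrast, would concatenate `visual_feats` and the text vectors into one joint representation before computing a single similarity score.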
Appears in Collections:2. Conference Papers

Files in This Item:
File | Description | Size | Format
12 An Approach for Multimodal.pdf | | 769.81 kB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.