Please use this identifier to cite or link to this item: https://idr.l3.nitk.ac.in/jspui/handle/123456789/7261
Title: An approach for multimodal medical image retrieval using Latent Dirichlet Allocation
Authors: Vikram, M.
Suhas, B.S.
Anantharaman, A.
Sowmya, Kamath S.
Issue Date: 2019
Citation: ACM International Conference Proceeding Series, 2019, pp. 44-51
Abstract: Modern medical practices are increasingly dependent on medical imaging for clinical analysis and diagnosis of patient illnesses. A significant challenge when dealing with the extensively available medical data is that it often consists of heterogeneous modalities. Existing works in the field of content-based medical image retrieval (CBMIR) have several limitations, as they focus mainly on either visual or textual features for retrieval. Given the multimodal nature of medical data, we seek to leverage both the visual and textual modalities to improve image retrieval. We propose a Latent Dirichlet Allocation (LDA) based technique for encoding the visual features and show that these features effectively model the medical images. We explore early fusion and late fusion techniques to combine these visual features with the textual features. The proposed late fusion technique achieved a higher mAP than the state-of-the-art on the ImageCLEF 2009 dataset, underscoring its suitability for effective multimodal medical image retrieval. © 2019 Association for Computing Machinery.
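As a rough illustration of the pipeline described in the abstract, the sketch below (not the authors' implementation) uses scikit-learn to encode images as LDA topic distributions over bag-of-visual-words histograms, encodes the accompanying captions with TF-IDF, and ranks images for a query image by a weighted late fusion of the two cosine-similarity scores. The toy visual-word counts, the example captions, the number of topics, and the fusion weight alpha are all illustrative assumptions.

```python
# Illustrative sketch only: LDA topic encoding of visual words plus
# TF-IDF text encoding, combined by weighted late fusion of similarities.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus: visual-word histograms (e.g. from quantised local descriptors)
# and an associated caption/report for each of 6 images (placeholder data).
rng = np.random.default_rng(0)
visual_counts = rng.integers(0, 5, size=(6, 200))  # 6 images, 200 visual words
captions = [
    "chest x-ray showing pleural effusion",
    "ct scan of the abdomen, no abnormality",
    "mri brain with contrast enhancement",
    "chest radiograph, cardiomegaly",
    "abdominal ultrasound, gallstones",
    "head ct, acute haemorrhage",
]

# Visual modality: LDA topic distributions over the visual-word counts.
lda = LatentDirichletAllocation(n_components=8, random_state=0)
visual_topics = lda.fit_transform(visual_counts)

# Textual modality: TF-IDF vectors over the captions.
tfidf = TfidfVectorizer()
text_vectors = tfidf.fit_transform(captions)

def late_fusion_scores(query_idx, alpha=0.5):
    """Weighted combination of per-modality cosine similarities for a query image."""
    vis_sim = cosine_similarity(visual_topics[query_idx:query_idx + 1], visual_topics)[0]
    txt_sim = cosine_similarity(text_vectors[query_idx], text_vectors).ravel()
    return alpha * vis_sim + (1 - alpha) * txt_sim

# Rank all images against query image 0 by the fused score.
ranking = np.argsort(-late_fusion_scores(0))
print("Retrieved order for query 0:", ranking)
```

In this kind of late fusion, each modality is scored independently and the scores are merged at ranking time; early fusion would instead concatenate the topic and TF-IDF vectors before computing a single similarity.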
URI: https://idr.nitk.ac.in/jspui/handle/123456789/7261
Appears in Collections:2. Conference Papers

Files in This Item:
File: 12 An Approach for Multimodal.pdf (769.81 kB, Adobe PDF)

