Please use this identifier to cite or link to this item: https://idr.l3.nitk.ac.in/jspui/handle/123456789/8526
Full metadata record
DC Field: Value [Language]
dc.contributor.author: Ahsan, H.
dc.contributor.author: Kumar, V.
dc.contributor.author: Jawahar, C.V.
dc.date.accessioned: 2020-03-30T10:22:24Z
dc.date.available: 2020-03-30T10:22:24Z
dc.date.issued: 2015
dc.identifier.citation: ICAPR 2015 - 2015 8th International Conference on Advances in Pattern Recognition, 2015 [en_US]
dc.identifier.uri: http://idr.nitk.ac.in/jspui/handle/123456789/8526
dc.description.abstract: Automatic annotation of an audio or music piece with multiple labels helps in understanding the composition of the music. Such meta-level information can be very useful in applications such as music transcription, retrieval, organization, and personalization. In this work, we formulate annotation as a multi-label classification problem, which differs considerably from the popular single-label (binary or multi-class) classification setting. We employ both nearest-neighbour and max-margin (SVM) formulations for automatic annotation: K-NN and SVM adapted to multi-label classification via a one-vs-rest strategy, and direct multi-label formulations using ML-KNN and M3L. In music, the signatures of the labels (e.g. instrument and vocal signatures) are often fused in the features. We therefore propose a simple feature augmentation technique based on non-negative matrix factorization (NMF), with the intuition of decomposing a music piece into its constituent components. We conducted our experiments on two data sets, the Indian classical instruments dataset and the Emotions dataset [1], and validated the methods. © 2015 IEEE. [en_US]
dc.title: Multi-label annotation of music [en_US]
dc.type: Book chapter [en_US]
Appears in Collections: 2. Conference Papers

Files in This Item:
There are no files associated with this item.
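The abstract above describes NMF-based feature augmentation followed by one-vs-rest multi-label classification. Below is a minimal illustrative sketch of that general idea only, not the authors' implementation: it assumes scikit-learn, and the feature matrix, label matrix, dimensions, and parameters are placeholders invented for illustration.

# Illustrative sketch (assumed setup, not the paper's code): NMF feature
# augmentation + one-vs-rest max-margin multi-label classification.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(200, 64)))        # placeholder non-negative audio features
Y = (rng.random((200, 4)) > 0.5).astype(int)  # placeholder binary multi-label targets

# Decompose each feature vector into activations of learned components,
# following the intuition that components capture constituent sources in the mix.
nmf = NMF(n_components=16, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(X)                      # per-sample component activations

# Augment the original features with the NMF activations.
X_aug = np.hstack([X, W])

# One-vs-rest strategy: train one binary SVM per label on the augmented features.
clf = OneVsRestClassifier(LinearSVC(C=1.0, max_iter=5000))
clf.fit(X_aug, Y)
Y_pred = clf.predict(X_aug)                   # binary indicator predictions per label

A K-NN variant of the same pipeline would simply swap the LinearSVC for a nearest-neighbour classifier; the direct multi-label methods named in the abstract (ML-KNN, M3L) are separate algorithms not shown here.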