Krishna, AG and Sreenivas, TV (2004) Music Instrument Recognition: From Isolated Notes to Solo Phrases. In: 2004 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '04), 17-21 May, Quebec, Canada, Vol. 4, pp. 265-268.
Speech and audio processing techniques are combined with statistical pattern recognition principles to address the problem of music instrument recognition. Only non-temporal, frame-level features are used, so that the proposed system scales from isolated notes to solo instrumental phrases without requiring temporal segmentation of the solo music. Motivated by their effectiveness in speech, line spectral frequencies (LSF) are proposed as features for music instrument recognition; the system has also been evaluated using MFCC and LPCC features. Gaussian mixture model (GMM) and K-nearest neighbour (K-NN) classifiers are used for classification. The experimental dataset included the UIowa MIS and the C Music Corporation RWC databases. Our best results are about 95% at the instrument family level and about 90% at the instrument level when classifying 14 instruments.
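The classification scheme the abstract describes can be sketched as follows: one GMM is trained per instrument on frame-level features, and a test note or phrase is assigned to the instrument whose model gives the highest summed per-frame log-likelihood. This is a minimal illustration, not the authors' implementation: it uses synthetic stand-ins for the frame-level feature vectors (real LSF/MFCC features would come from a separate front-end) and scikit-learn's `GaussianMixture`, both of which are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-ins for frame-level feature vectors (e.g. LSF or MFCC);
# in the paper these come from an audio front-end, which is omitted here.
train = {
    "flute":  rng.normal(0.0, 1.0, size=(500, 8)),
    "violin": rng.normal(3.0, 1.0, size=(500, 8)),
}

# One GMM per instrument, fitted on frame-level features only.
models = {name: GaussianMixture(n_components=4, random_state=0).fit(X)
          for name, X in train.items()}

def classify(frames):
    # Sum per-frame log-likelihoods under each instrument's GMM.
    # Because no temporal modelling is involved, the same classifier
    # applies to isolated notes and to longer solo phrases.
    scores = {name: m.score_samples(frames).sum() for name, m in models.items()}
    return max(scores, key=scores.get)

phrase = rng.normal(3.0, 1.0, size=(200, 8))  # frames from a "violin" phrase
print(classify(phrase))  # → violin
```

Because the decision accumulates independent frame scores, phrase length only sharpens the decision; this is the property that lets a frame-level system scale from notes to phrases without segmentation.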
|Item Type:||Conference Paper|
|Additional Information:||©2004 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.|
|Department/Centre:||Division of Electrical Sciences > Electrical Communication Engineering|
|Date Deposited:||14 Dec 2005|
|Last Modified:||19 Sep 2010 04:22|