Fusing fine-tuned deep features for recognizing different tympanic membranes

In this study, we focus on recognizing normal, acute otitis media (AOM), chronic suppurative otitis media (CSOM), and earwax tympanic membrane (TM) conditions using fused fine-tuned deep features extracted by pre-trained deep convolutional neural networks (DCNNs). These features are fed as input to several classifiers: an artificial neural network (ANN), k-nearest neighbor (k-NN), decision tree (DT), and support vector machine (SVM). Moreover, we release a new publicly available TM data set consisting of a total of 956 otoscope images. The DCNNs yielded promising results; the best individual performance was obtained with VGG-16, with an accuracy of 93.05%. Fusing the fine-tuned deep features further improved the overall classification performance. The proposed model achieved an accuracy of 99.47%, a sensitivity of 99.35%, and a specificity of 99.77% using the combination of the fused fine-tuned deep features and the SVM classifier. Consequently, this study shows that fused fine-tuned deep features are highly useful for recognizing different TM conditions and can support a fully automated model with high sensitivity.
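The pipeline described above can be summarized as: extract deep features from pre-trained (and fine-tuned) CNN backbones, concatenate (fuse) them into a single vector per image, and train a conventional classifier such as an SVM on the fused vectors. The sketch below is a minimal illustration of that idea only; the choice of backbones (VGG-16 and ResNet-50), the feature layers, the linear-kernel SVM, and the placeholder file paths are assumptions for illustration, and the fine-tuning step on the TM images is omitted for brevity.

```python
# Minimal sketch: fuse deep features from two pre-trained CNNs and train an SVM.
# Backbones, feature layers, and hyperparameters here are illustrative assumptions,
# not the paper's exact configuration; fine-tuning on the TM data set is omitted.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pre-trained backbones used here as fixed feature extractors.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).to(device).eval()
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).to(device).eval()
resnet_body = torch.nn.Sequential(*list(resnet.children())[:-1])  # drop the FC head

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def fused_features(image_path: str) -> np.ndarray:
    """Extract deep features from both backbones and concatenate (fuse) them."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0).to(device)
    # VGG-16: conv features -> avgpool -> penultimate FC layer (4096-dim).
    f_vgg = vgg.classifier[:-1](torch.flatten(vgg.avgpool(vgg.features(x)), 1))
    # ResNet-50: global average pooled features (2048-dim).
    f_res = torch.flatten(resnet_body(x), 1)
    return torch.cat([f_vgg, f_res], dim=1).squeeze(0).cpu().numpy()

# Placeholder otoscope image paths and TM class labels; replace with real data.
train_paths = ["normal_001.png", "aom_001.png", "csom_001.png", "earwax_001.png"]
train_labels = ["normal", "aom", "csom", "earwax"]

X = np.stack([fused_features(p) for p in train_paths])
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X, train_labels)
```

At prediction time, a held-out image would be passed through the same `fused_features` function and then to `clf.predict`; scaling the fused vectors before the SVM is a common design choice when concatenating features of different magnitudes.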
Source: Biocybernetics and Biomedical Engineering