Performance Comparison of Conventional and Deep Features using SVM Classifier for Automatic Image Annotation

  • Sangita Nemade, Shefali Sonavane

Abstract

Automatic image annotation (AIA) assigns a word-based caption to an image that distinctly describes the objects it contains. The performance of AIA depends on the techniques used for feature extraction. Several pre-trained deep networks extract robust and discriminative features from an image at low, medium, and high levels, which can then be used for classification, a task earlier handled by conventional descriptors such as color, texture, the histogram of oriented gradients (HOG), and the bag of visual words (BOVW). Deep networks can therefore be analyzed as feature descriptors for improving classification accuracy. Thus, the objective of this paper is to extract conventional features, namely a bag of visual words built on SURF features and HOG, as well as deep features from various CNN models, on the LabelMe dataset. The performance of these feature extraction methods is evaluated using a support vector machine (SVM) classifier, with overall accuracy, recall, precision, and F1 score as the evaluation metrics. Based on the results, it is observed that ResNet101 performs better as a feature extractor than the other methods.
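The evaluation pipeline described in the abstract (feature vectors fed to an SVM, scored by accuracy, precision, recall, and F1) can be sketched as follows. This is a minimal illustration using scikit-learn; the synthetic Gaussian features below are placeholders standing in for the actual descriptors the paper compares (e.g. 2048-dimensional ResNet101 activations, HOG, or BOVW histograms), and the class count and dimensions are arbitrary assumptions, not values from the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-image feature vectors; in the paper these would
# come from a descriptor (HOG, SURF-based BOVW, or CNN activations).
n_per_class, dim, n_classes = 40, 64, 3  # assumed toy values
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# SVM classifier on the extracted features, as in the paper's setup.
clf = SVC(kernel="linear").fit(X_tr, y_tr)
pred = clf.predict(X_te)

metrics = {
    "accuracy": accuracy_score(y_te, pred),
    "precision": precision_score(y_te, pred, average="macro"),
    "recall": recall_score(y_te, pred, average="macro"),
    "f1": f1_score(y_te, pred, average="macro"),
}
print(metrics)
```

Swapping in real features would only change how `X` is built; the SVM training and the four metrics stay the same, which is what makes the comparison across descriptors fair.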

Published
2020-02-15
How to Cite
Nemade, S., & Sonavane, S. (2020). Performance Comparison of Conventional and Deep Features using SVM Classifier for Automatic Image Annotation. International Journal of Advanced Science and Technology, 29(3), 3122-3130. Retrieved from https://sersc.org/journals/index.php/IJAST/article/view/4544
Section
Articles