Classification of Food Categories and Ingredient Approximation Using FD-MobileNet and TF-YOLO
With the increasing use of inexpensive imaging devices such as smartphone cameras, the computer-vision community has developed many techniques for automatic object recognition, and food image recognition in particular has recently received considerable attention. To achieve this, Convolutional Neural Networks (CNNs) are used: FD-MobileNet serves as the food-category classifier, while a modified You Only Look Once (YOLO) network classifies and localizes the ingredients. The detected ingredients are then cropped from the image and passed to conventional image processing, which estimates their region relative to the real dimensions of the material. Components with non-uniform shapes are segmented so that the dish's nutrient content can be measured. In this paper, a food-category description and an ingredient-approximation model based on Tiny Fast You Only Look Once (TF-YOLO) were built for embedded devices. First, the k-means technique is employed to cluster the dataset, which yields improved prior anchor boxes. In FD-MobileNet, 32x downsampling is performed within 12 layers, about half of the layers of the original MobileNet. This approach has three benefits: (i) it dramatically decreases computational cost; (ii) it increases inference speed and significantly improves efficiency; and (iii) it converges quickly with fewer resources.
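The k-means step for deriving prior anchor boxes can be illustrated with a minimal sketch. This is our assumption of the procedure (the abstract gives no code): following the approach popularized by YOLOv2, boxes are clustered by width and height using 1 − IoU as the distance, so that anchors match common ingredient-box shapes. Function and variable names here are illustrative, not from the paper.

```python
import random

def iou_wh(box, centroid):
    # IoU of two boxes assumed to share the same top-left corner,
    # so only their widths and heights matter.
    w = min(box[0], centroid[0])
    h = min(box[1], centroid[1])
    inter = w * h
    union = box[0] * box[1] + centroid[0] * centroid[1] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100, seed=0):
    # boxes: list of (width, height) pairs from the ground-truth labels.
    random.seed(seed)
    centroids = random.sample(boxes, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for b in boxes:
            # Assign each box to the centroid with the highest IoU,
            # i.e. the smallest (1 - IoU) distance.
            best = max(range(k), key=lambda i: iou_wh(b, centroids[i]))
            clusters[best].append(b)
        new_centroids = []
        for i, c in enumerate(clusters):
            if not c:  # keep an empty cluster's centroid unchanged
                new_centroids.append(centroids[i])
                continue
            new_centroids.append((sum(b[0] for b in c) / len(c),
                                  sum(b[1] for b in c) / len(c)))
        if new_centroids == centroids:  # converged
            break
        centroids = new_centroids
    return sorted(centroids)

# Toy example: width/height pairs of hypothetical ingredient boxes.
boxes = [(10, 12), (11, 13), (50, 60), (48, 55), (100, 90), (95, 88)]
anchors = kmeans_anchors(boxes, k=3)
print(anchors)
```

The resulting k anchor shapes would then be written into the TF-YOLO detection-layer configuration as its prior boxes.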
Keywords: food recognition, real-time detection, food categorization, convolutional neural network, You Only Look Once (YOLO) model