Identification of foods in the breakfast and determination of nutritional facts using deep convolution neural networks

Authors

F. Tarlak, Özgün Yücel

DOI:

https://doi.org/10.5327/fst.10323

Keywords:

deep learning, food recognition, machine learning, nutritional value prediction

Abstract

Food recognition plays a crucial role in various fields, including healthcare, nutrition, and the food industry. It involves identifying different types of foods or dishes from images, videos, or other data sources. In healthcare, food recognition helps individuals monitor their daily food intake and manage their diet, and it assists dietitians and nutritionists in creating personalized meal plans based on patients' nutritional requirements and preferences. This article focuses on the development of software that can recognize food products and predict their nutritional facts. The software extracts essential nutritional facts such as fat, carbohydrates, protein, and energy for the recognized food products and compiles them into a comprehensive list. For each of the 20 food products, 36 images were obtained, giving a total of 720 food images. Six images of each food product were set aside for external validation of the trained models, and the remaining images were used to train three deep learning architectures, GoogleNet, ResNet-50, and Inception-v3, in MATLAB. Training and validation yielded over 98% correct predictions for each of the deep learning algorithms. Although there were no significant differences in accuracy among the algorithms, GoogleNet stood out when both prediction accuracy and prediction time were considered. The validated deep learning models were then used to build the software for food recognition and nutritional value determination. The results indicate that the developed software can reliably identify foods and provide their corresponding nutritional facts. The software holds significant potential for application in nutrition and dietetics and can be particularly useful in healthcare settings for monitoring the dietary intake of patients with chronic diseases such as diabetes, heart disease, or obesity: the system can track the types and quantities of foods consumed and offer personalized feedback to patients and healthcare providers.
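The transfer-learning workflow summarized above can be sketched in a few lines of MATLAB. The sketch below is a minimal illustration, not the authors' actual script: it assumes the Deep Learning Toolbox with the GoogLeNet support package, a hypothetical folder food_images containing one subfolder per food class, and illustrative training hyperparameters; only the per-class split of 30 training and 6 external-validation images follows the abstract.

% Minimal transfer-learning sketch (illustrative; not the authors' exact script).
% Assumes one subfolder per food class under 'food_images/'.
imds = imageDatastore('food_images', ...
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');
[imdsTrain, imdsVal] = splitEachLabel(imds, 30, 'randomized');  % 30 train / 6 held out per class

net = googlenet;                        % pretrained network (support package required)
lgraph = layerGraph(net);
numClasses = numel(categories(imdsTrain.Labels));

% Replace the final classification layers for the 20 food classes.
lgraph = replaceLayer(lgraph, 'loss3-classifier', ...
    fullyConnectedLayer(numClasses, 'Name', 'food_fc'));
lgraph = replaceLayer(lgraph, 'output', classificationLayer('Name', 'food_output'));

% Resize images to the network's 224x224x3 input size.
inputSize = net.Layers(1).InputSize;
augTrain = augmentedImageDatastore(inputSize(1:2), imdsTrain);
augVal   = augmentedImageDatastore(inputSize(1:2), imdsVal);

options = trainingOptions('sgdm', ...  % hyperparameters are assumptions
    'MiniBatchSize', 10, 'MaxEpochs', 6, 'InitialLearnRate', 1e-4, ...
    'ValidationData', augVal, 'Verbose', false);
foodNet = trainNetwork(augTrain, lgraph, options);

% Accuracy on the held-out images.
predicted = classify(foodNet, augVal);
accuracy = mean(predicted == imdsVal.Labels);

ResNet-50 and Inception-v3 follow the same pattern with resnet50 and inceptionv3, their own final-layer names, and their respective 224x224 and 299x299 input sizes.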

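Once a food is recognized, attaching its nutritional facts reduces to a lookup in a nutrient table. The sketch below continues from the training sketch above and is purely illustrative: the file nutrition_facts.csv, its column names, and the test image name are hypothetical placeholders for whichever nutrient database and input images the software actually uses.

% Hypothetical lookup of nutritional facts for a predicted food label.
% 'nutrition_facts.csv' is assumed to hold one row per food with columns:
% Food, Fat_g, Carbohydrate_g, Protein_g, Energy_kcal (per 100 g).
facts = readtable('nutrition_facts.csv', 'TextType', 'string');

img = imread('test_breakfast_item.jpg');                 % hypothetical test image
img = imresize(img, foodNet.Layers(1).InputSize(1:2));
label = string(classify(foodNet, img));                  % predicted food class

row = facts(facts.Food == label, :);                     % assumes the label exists in the table
fprintf('%s: %.1f g fat, %.1f g carbohydrate, %.1f g protein, %.0f kcal per 100 g\n', ...
    label, row.Fat_g, row.Carbohydrate_g, row.Protein_g, row.Energy_kcal);
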

Published

2023-09-26

How to Cite

Tarlak, F., & Yücel, Ö. (2023). Identification of foods in the breakfast and determination of nutritional facts using deep convolution neural networks. Food Science and Technology, 43. https://doi.org/10.5327/fst.10323

Issue

Vol. 43 (2023)

Section

Original Articles