Classification of Moringa Leaf Quality Using Vision Transformer (ViT)


Putu Sugiartawan
I Dewa Ayu Sri Murdhani
Putu Ayu Febyanti
Gusti Putu Sutrisna Wibawa

Abstract

Moringa (Moringa oleifera) leaves are widely recognized for their nutritional and medicinal value, making quality assessment crucial for meeting market and processing standards. Traditional manual classification of leaf quality is subjective, time-consuming, and prone to inconsistency. This study develops an automated classification system for Moringa leaf quality using a Vision Transformer (ViT), a deep learning architecture that leverages self-attention mechanisms for image understanding. The dataset comprises six leaf quality categories (A–F), representing varying conditions of color, texture, and defect severity. The ViT model was trained and evaluated on labeled image datasets with standard preprocessing and augmentation techniques to improve robustness. Experimental results show an overall accuracy of 56%, with class-specific performance indicating that the model achieved the highest recall for class D (1.00) and the highest precision for class F (0.74). Despite this moderate performance, the results demonstrate the potential of ViT for complex agricultural image classification, highlighting its capability to capture visual patterns even in small datasets. Future improvements may include larger datasets, fine-tuning with domain-specific pretraining, and hybrid transformer–CNN architectures to enhance generalization and accuracy.
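As a concrete illustration of the workflow the abstract outlines, the sketch below fine-tunes a pretrained Vision Transformer on a six-class image folder and prints a per-class precision/recall table. It is a minimal sketch, not the authors' implementation: the PyTorch/Hugging Face stack, the google/vit-base-patch16-224-in21k checkpoint, the moringa/train and moringa/test folder layout, the augmentations, and all hyperparameters are illustrative assumptions rather than details taken from the paper.

```python
# Minimal ViT fine-tuning sketch (assumed stack: torch, torchvision,
# transformers, scikit-learn). Checkpoint, paths, and hyperparameters
# are illustrative assumptions, not the paper's settings.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from transformers import ViTForImageClassification
from sklearn.metrics import classification_report

NUM_CLASSES = 6  # quality grades A-F

# Standard preprocessing plus light augmentation, as the abstract mentions.
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),        # ViT-Base/16 expects 224x224 inputs
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
])
test_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
])

# Hypothetical layout: moringa/{train,test}/<class A-F>/<image>.jpg
train_dl = DataLoader(datasets.ImageFolder("moringa/train", train_tf),
                      batch_size=32, shuffle=True)
test_dl = DataLoader(datasets.ImageFolder("moringa/test", test_tf),
                     batch_size=32)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",  # assumed checkpoint
    num_labels=NUM_CLASSES,               # attaches a fresh 6-way head
).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(5):                    # epoch count is an assumption
    for images, labels in train_dl:
        outputs = model(pixel_values=images.to(device),
                        labels=labels.to(device))
        outputs.loss.backward()           # cross-entropy computed internally
        optimizer.step()
        optimizer.zero_grad()

# Per-class metrics of the kind summarized in the abstract.
model.eval()
preds, golds = [], []
with torch.no_grad():
    for images, labels in test_dl:
        logits = model(pixel_values=images.to(device)).logits
        preds += logits.argmax(dim=-1).cpu().tolist()
        golds += labels.tolist()
print(classification_report(golds, preds, target_names=list("ABCDEF")))
```

The classification_report call produces exactly the kind of per-class precision and recall figures cited in the abstract (e.g., recall for class D, precision for class F).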

How to Cite
Sugiartawan, P., Murdhani, I. D. A. S., Febyanti, P. A., & Wibawa, G. P. S. (2025). Classification of Moringa Leaf Quality Using Vision Transformer (ViT). Jurnal Sistem Informasi Dan Komputer Terapan Indonesia (JSIKTI), 7(4), 128-136. https://doi.org/10.33173/jsikti.219
