Jurnal Sistem Informasi dan Komputer Terapan Indonesia (JSIKTI)
https://infoteks.org/journals/index.php/jsikti
<p><img style="float: left; width: 230px; margin-top: 8px; margin-right: 10px;" src="/public/site/images/admininfoteks/jsikti-tutu3.png"></p> <p align="justify">JSIKTI (Jurnal Sistem Informasi dan Komputer Terapan Indonesia), a four times annually provides a forum for the full range of scholarly study. JSIKTI scope encompasses <strong>data analysis, natural language processing, artificial intelligence, neural networks, pattern recognition, image processing, genetic algorithm, bioinformatics/biomedical applications, biometrical application, content-based multimedia retrievals, augmented reality, virtual reality, information system, game mobile, dan IT bussiness incubation</strong>.</p> <p align="justify">The journal publishes original research papers, short communications, and review articles both written in English or Bahasa Indonesia. The paper published in this journal implies that the work described has not been, and will not be published elsewhere, except in abstract, as part of a lecture, review or academic thesis. Paper may be written in English or Indonesian, however paper in English is preferred.</p> <p align="justify">Please read these journal guidelines and template carefully. Authors who want to submit their manuscript to the editorial office of JSIKTI (Jurnal Sistem Informasi dan Komputer Terapan Indonesia) should obey the writing guidelines. If the manuscript submitted is not appropriate with the guidelines or written in a different format, it will BE REJECTED by the editors before further reviewed. The editors will only accept the manuscripts which meet the assigned format.</p> <p align="justify">JSIKTI is published four times annually, March, June, September and December by INFOTEKS (Technology Information, Computer and Sciences Association), with <a href="http://issn.pdii.lipi.go.id/issn.cgi?daftar&1543304673&1&&">e-ISSN: <span style="font-family: helvetica; font-size: small;"><span style="font-family: helvetica; font-size: medium;">2655-7290 </span></span></a>and <a href="http://issn.pdii.lipi.go.id/issn.cgi?daftar&1543390687&1&&">p-ISSN: <span style="font-family: helvetica; font-size: small;"><span style="font-family: helvetica; font-size: medium;">2655-2183</span></span></a>.</p> <p align="justify"><strong>Before submission,</strong><br>You have to make sure that your paper is prepared using the JSIKTI paper TEMPLATE, has been carefully proofread and polished, and conformed to the author guidelines.</p> <p align="justify">Open Journal Systems (OJS) has been applied for all business process in JSIKTI. Therefore, the authors are required to register in advance and upload the manuscript by online. The process of the manuscript could be monitored through OJS. Authors, readers, editorial board, editors, and peer review could obtain the real time status of the manuscript. Several other changes are informed in the <a href="http://infoteks.org/journals/index.php/jsikti/Journal_History"><strong>Journal History</strong></a><span lang="id">.</span></p>INFOTEKS (Information Technology, Computer and Sciences)en-USJurnal Sistem Informasi dan Komputer Terapan Indonesia (JSIKTI)2655-2183DenseNet121 and Transfer Learning for Lung Disease Classification from Chest X-Ray Images
https://infoteks.org/journals/index.php/jsikti/article/view/266
Lung-related disorders, including pneumonia, are still among the primary causes of death and illness worldwide, particularly in areas where medical imaging facilities and trained radiologists are scarce. The manual assessment of chest X-ray (CXR) images demands significant time and is prone to subjective interpretation, limiting its scalability for mass screening and early disease identification. To overcome these challenges, this study introduces an automated classification approach utilizing the DenseNet121 convolutional neural network through transfer learning for the detection of lung diseases from CXR scans. The pretrained ImageNet weights were adopted to capture hierarchical visual features efficiently, while overfitting was mitigated using dropout and batch normalization layers. The dataset employed consisted of 1,880 training images and 235 testing images, equally distributed between Normal and Viral Pneumonia categories. Experimental evaluation revealed an overall classification accuracy of 97%, alongside precision, recall, and F1-score metrics of 0.97 each, indicating reliable and balanced model performance. These outcomes suggest that DenseNet121 offers a highly effective foundation for computer-aided diagnostic systems capable of differentiating between healthy and infected lungs with high precision. The proposed framework provides a scalable diagnostic tool suitable for healthcare environments with limited radiological expertise. Future improvements will include expanding toward multi-class disease classification, incorporating explainable artificial intelligence (XAI) techniques to enhance interpretability, and validating the system on larger, more diverse clinical datasets.

Putu Sugiartawan
Ni Wayan Wardani
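The setup this abstract describes (an ImageNet-pretrained DenseNet121 backbone with a classification head regularized by dropout and batch normalization) could be sketched in tf.keras roughly as follows. The head width, dropout rate, and input resolution are illustrative assumptions, not values reported in the paper.

```python
# Sketch only: layer sizes, dropout rate, and input resolution are
# assumptions for illustration, not values reported in the paper.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121

# ImageNet-pretrained backbone, frozen so only the new head is trained.
base = DenseNet121(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.BatchNormalization(),            # regularization, per the abstract
    layers.Dense(256, activation="relu"),   # head width is an assumption
    layers.Dropout(0.5),                    # dropout rate is an assumption
    layers.Dense(1, activation="sigmoid"),  # Normal vs. Viral Pneumonia
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
```

Freezing the backbone and training only the head is the usual first stage of transfer learning; whether the authors later unfroze any DenseNet blocks is not stated in the abstract.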
License: CC BY-SA 4.0 (http://creativecommons.org/licenses/by-sa/4.0)
Published: 2025-12-08 | Vol. 8 No. 2, pp. 100-113 | DOI: 10.33173/jsikti.266

Cataract Classification in Eye Images Using MobileNetV2
https://infoteks.org/journals/index.php/jsikti/article/view/268
Cataract remains one of the primary causes of visual impairment globally, with early detection being essential to prevent permanent blindness and improve patient quality of life. However, conventional diagnosis depends on ophthalmologists and clinical-grade imaging devices, which are often limited in remote or under-resourced areas. This condition highlights the need for an efficient, accessible, and automated screening solution. To address this challenge, this study utilizes the MobileNetV2 deep learning architecture to classify cataract conditions based on eye images. MobileNetV2 is selected because of its lightweight model structure and strong feature representation capabilities, making it suitable for deployment in portable or embedded medical systems. The dataset used consists of two cataract stages, namely immature and mature cataracts, with images undergoing preprocessing prior to model training. The proposed system demonstrates excellent performance, achieving an accuracy, precision, recall, and F1-score of 100% in distinguishing cataract stages. These results confirm that MobileNetV2 can effectively support cataract screening with high reliability while maintaining efficiency. Future work will involve extending the dataset to include additional cataract severity levels and non-cataract eye images, as well as integrating explainable artificial intelligence methods to provide visual diagnostic interpretations and enhance clinical trust in real-world applications.

I Putu Adi Pratama
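As a rough illustration of how such a MobileNetV2 screening model might be assembled, the sketch below builds a frozen ImageNet backbone with a small binary head for immature vs. mature images. The directory layout, image size, batch size, and epoch count are assumptions for the example, not details from the paper.

```python
# Sketch only: the directory layout, image size, and training settings
# are assumptions, not details from the paper.
import tensorflow as tf

IMG_SIZE = (224, 224)

# Assumed layout: cataract_data/{immature,mature}/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "cataract_data", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False  # keep the lightweight backbone frozen

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)  # scale to [-1, 1]
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # immature vs. mature
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)  # epoch count is an assumption
```

The depthwise-separable convolutions in MobileNetV2 are what keep the parameter count low enough for the portable or embedded deployment the abstract targets.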
License: CC BY-SA 4.0 (http://creativecommons.org/licenses/by-sa/4.0)
Published: 2025-12-08 | Vol. 8 No. 2, pp. 114-125 | DOI: 10.33173/jsikti.268

Cataract Maturity Classification Using the VGG16 Deep Learning Model
https://infoteks.org/journals/index.php/jsikti/article/view/267
Cataract continues to be a major contributor to vision impairment worldwide, caused by gradual lens clouding that reduces clarity of sight. Accurately identifying the maturity level of cataracts is crucial in determining appropriate treatment planning and surgical intervention timing. However, the conventional diagnosis process still depends heavily on subjective visual assessment by ophthalmologists, which can lead to variability in classification results. To address this, the present study introduces an automated cataract maturity classification system using the VGG16 deep learning architecture through a transfer learning approach. The model distinguishes between immature and mature cataracts using clinical eye images that have undergone standardized preprocessing, including resizing, normalization, and augmentation, to improve learning robustness and avoid overfitting. Experimental evaluation shows that the model achieves 88% accuracy, with average precision, recall, and F1-score values of 0.88, demonstrating balanced classification performance for both classes. These outcomes indicate that VGG16 is capable of capturing relevant opacity progression characteristics associated with different cataract maturity levels. Future research may focus on broadening the dataset to include additional maturity categories, integrating explainability methods, and exploring advanced deep learning architectures to further enhance diagnostic performance and support clinical adoption.

I Wayan Kintara Anggara Putra
Ahmad Rifqi F
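The preprocessing chain the abstract names (resizing, normalization, and augmentation feeding a transfer-learned VGG16) might look roughly like the following in tf.keras. The specific augmentation ranges, normalization scheme, and head size are assumptions, not the paper's settings.

```python
# Sketch only: augmentation ranges, normalization, and head size are
# assumptions, not the paper's settings.
import tensorflow as tf
from tensorflow.keras import layers

# Augmentation layers are active only during training.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),   # rotation range is an assumption
    layers.RandomZoom(0.1),       # zoom range is an assumption
])

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False

inputs = tf.keras.Input(shape=(224, 224, 3))   # resized eye images
x = augment(inputs)
x = layers.Rescaling(1.0 / 255)(x)             # normalization to [0, 1]
x = base(x)
x = layers.Flatten()(x)
x = layers.Dense(128, activation="relu")(x)    # head width is an assumption
outputs = layers.Dense(1, activation="sigmoid")(x)  # immature vs. mature
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```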
License: CC BY-SA 4.0 (http://creativecommons.org/licenses/by-sa/4.0)
Published: 2025-12-08 | Vol. 8 No. 2, pp. 126-135 | DOI: 10.33173/jsikti.267

Classification of Tuberculosis and Pneumonia Lung Diseases in X-Ray Images Using the CNN Method with VGG-19 Architecture
https://infoteks.org/journals/index.php/jsikti/article/view/269
Tuberculosis (TB) and Pneumonia continue to be among the world's leading causes of morbidity and mortality, particularly in low- and middle-income countries where access to advanced diagnostic tools remains limited. Conventional radiological interpretation, while effective, heavily depends on the experience and precision of radiologists, resulting in potential subjectivity and diagnostic variability. This study proposes a fully automated classification framework for lung disease detection using a Convolutional Neural Network (CNN) based on the VGG-19 architecture. The model aims to enhance diagnostic accuracy and reliability by leveraging deep learning techniques capable of capturing subtle radiographic patterns that may not be readily identifiable by human observers. A dataset of 3,623 chest X-ray images, divided into Normal, Pneumonia, and Tuberculosis classes, was compiled from Kaggle and Mendeley Data repositories. Preprocessing techniques, including Contrast Limited Adaptive Histogram Equalization (CLAHE), cropping, resizing, and normalization, were employed to enhance contrast and minimize noise. The model was trained and tested under four data-split configurations (80:20, 70:30, 60:40, and 50:50) to assess generalization capability. The 70:30 configuration achieved optimal performance, recording 96% accuracy, 97% precision, 95% recall, and a 96% F1-score. These findings demonstrate that the VGG-19 model can accurately distinguish between TB, Pneumonia, and Normal cases, providing a reliable foundation for AI-driven medical diagnosis. Future research will focus on dataset expansion, interpretability enhancement using Explainable AI (XAI), and the integration of this model into clinical decision-support systems.

I Dewa Ayu Sri Murdhani
Heru Ismanto
Didit Suprihanto
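Of the preprocessing steps listed, CLAHE is the least standard, so a minimal OpenCV sketch is shown below: it enhances local contrast on a grayscale X-ray, then resizes, normalizes, and replicates the channel for a three-channel VGG-19 input. The clip limit, tile grid, and target size are assumptions, not the paper's settings.

```python
# Sketch only: clip limit, tile grid, and target size are assumptions.
import cv2
import numpy as np

def preprocess_cxr(path, size=(224, 224)):
    """Load a chest X-ray, apply CLAHE, resize, and scale to [0, 1]."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(path)
    # Contrast Limited Adaptive Histogram Equalization boosts local
    # contrast while capping amplification to limit noise.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    img = clahe.apply(img)
    img = cv2.resize(img, size)
    img = img.astype(np.float32) / 255.0
    # VGG-19 expects three channels, so replicate the grayscale plane.
    return np.stack([img] * 3, axis=-1)
```

The four split ratios the paper evaluates (80:20 through 50:50) could then be produced with stratified splitting, for example scikit-learn's train_test_split with the test fraction varied from 0.2 to 0.5.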
License: CC BY-SA 4.0 (http://creativecommons.org/licenses/by-sa/4.0)
Published: 2025-12-08 | Vol. 8 No. 2, pp. 136-149 | DOI: 10.33173/jsikti.269

Lightweight MobileNet-Based Deep Learning Framework for Automated Lung Infection Detection from Chest X-Ray Images
https://infoteks.org/journals/index.php/jsikti/article/view/270
Lung infections, especially viral pneumonia, continue to pose a significant global health challenge due to their high rates of illness and death. Traditional diagnostic approaches, such as radiologists' interpretation of chest X-ray (CXR) images, are frequently slow and subject to personal bias. The swift advancement in deep learning offers great potential for automating the detection of lung infections; however, many existing convolutional neural network (CNN) models demand substantial computational resources, which restricts their use in real-time or low-resource clinical settings. This study seeks to overcome these issues by creating a lightweight and effective diagnostic system using the MobileNet architecture for automatic lung infection identification from CXR images. The core motivation for this research is to deliver an accessible and precise AI tool that aids radiologists in timely disease detection, particularly in under-resourced healthcare environments. The proposed MobileNet-based model, trained through transfer learning and fine-tuning on a binary dataset of normal and viral pneumonia images, strikes an excellent balance between performance and computational efficiency. Experimental results yielded 98% accuracy, 0.98 precision, 0.98 recall, and 0.98 F1-score, validating the model's reliability and appropriateness for embedded or mobile health uses. Moving forward, efforts will concentrate on broadening the dataset to encompass various lung disease types, incorporating explainable AI methods to boost clarity, and implementing the model in live clinical or mobile diagnostic platforms to enable widespread and effective healthcare services.

Kadek Gemilang Santiyuda
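The "transfer learning and fine-tuning" the abstract mentions is typically a two-stage procedure: first train a new head on a frozen MobileNet backbone, then unfreeze some of its top layers at a lower learning rate. A minimal sketch, with the unfreezing depth, learning rates, and epoch counts as assumptions:

```python
# Sketch only: the unfreezing depth, learning rates, and epoch counts
# are assumptions, not the paper's settings.
import tensorflow as tf

base = tf.keras.applications.MobileNet(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # stage 1: train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # normal vs. viral pneumonia
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, epochs=5)  # head warm-up; dataset pipeline not shown

# Stage 2: unfreeze the top of the backbone and fine-tune at a lower
# learning rate so the pretrained features are not overwritten.
base.trainable = True
for layer in base.layers[:-20]:  # how much to unfreeze is an assumption
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, epochs=10)
```

Recompiling after changing layer trainability is required in Keras, and the lowered learning rate in stage 2 is what keeps fine-tuning from erasing the ImageNet features.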
License: CC BY-SA 4.0 (http://creativecommons.org/licenses/by-sa/4.0)
Published: 2025-12-08 | Vol. 8 No. 2, pp. 150-164 | DOI: 10.33173/jsikti.270