

Showing 8 results for Machine Learning

Rohollah Kalhor, Asghar Mortezagholi, Fatemeh Naji, Saeed Shahsavari, Mohammad Zakaria Kiaei,
Volume 76, Issue 12 (3-2019)
Abstract

Background: Diabetes mellitus has several complications, and late diagnosis of diabetes leads to the spread of these complications. Therefore, this study was conducted to determine the feasibility of predicting type 2 diabetes using data mining techniques.
Methods: This is a descriptive-analytic study conducted as a cross-sectional study. The study population included people referred to health centers in Mohammadieh City in Qazvin Province, Iran, from April to June 2015 for diabetes screening. The 5-step CRISP method was used to implement this study. Data were collected from March 2015 to June 2015, and 1055 persons with complete information were included. Of these, 159 were healthy and 896 were diabetic. A total of 11 characteristics and risk factors were examined: age, sex, systolic and diastolic blood pressure, family history of diabetes, BMI, height, weight, waistline, hip circumference and diagnosis. The results obtained by the support vector machine (SVM), decision tree (DT) and k-nearest neighbors (k-NN) algorithms were compared with each other. Data were analyzed using MATLAB® software, version 3.2 (Mathworks Inc., Natick, MA, USA).
Results: Data analysis showed that across all criteria, the best results were obtained by the decision tree, with an accuracy of 0.96 and a precision of 0.89. It was followed by the k-NN method, with an accuracy of 0.96 and a precision of 0.83, and the support vector machine, with an accuracy of 0.94 and a precision of 0.85. In the analysis of the confusion matrix, the decision tree model also obtained the highest class accuracy for both the diabetic and healthy classes.
Conclusion: Based on the results, the decision tree produced the best results on the test samples and can be recommended as a model for predicting type 2 diabetes from risk factor data.
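The decision tree workflow described in this abstract can be sketched with scikit-learn. This is a minimal illustration only: the synthetic data below stands in for the study's 11-feature risk factor dataset, which is not public, and the hyperparameters are arbitrary choices, not the authors' settings.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, precision_score

rng = np.random.default_rng(0)
n = 1000
# Synthetic stand-ins for the study's 11 risk factors (age, BMI, blood pressure, ...)
X = rng.normal(size=(n, 11))
# Toy label: "diabetic" loosely driven by the first two features plus noise
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train)

# Evaluate with the same criteria reported in the abstract
y_pred = clf.predict(X_test)
acc = accuracy_score(y_test, y_pred)
prec = precision_score(y_test, y_pred)
print(f"accuracy={acc:.2f} precision={prec:.2f}")
```

On real risk factor data, the confusion matrix (via `sklearn.metrics.confusion_matrix`) would give the per-class accuracies the study compares.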

Amir Reza Naderi Yaghouti, Ahmad Shalbaf, Arash Maghsoudi,
Volume 79, Issue 1 (4-2021)
Abstract

Background: Accurate and early detection of non-alcoholic fatty liver, a major cause of chronic liver disease, is very important and vital to preventing the complications associated with this disease. Ultrasound of the liver is the most common and widely performed method for diagnosing fatty liver. However, due to the low quality of ultrasound images, an automatic and intelligent classification method based on artificial intelligence is essential to accurately detect the amount of liver fat. This paper aims to develop an advanced machine learning model based on texture features to assess liver fat levels from liver ultrasound images.
Methods: In this analytic study, conducted from April to November 2020 in Tehran, ultrasound images of 55 obese people who had undergone laparoscopic surgery were used, and the histological result of a liver biopsy was employed as the reference for liver fat. First, 88 texture-based features were extracted from the images using the Gray-Level Co-occurrence Matrix (GLCM) method. Next, the top features were selected from among the 88 using the minimum redundancy maximum relevance (mRMR) method and applied to the classifier input. Finally, using three classifiers, linear discriminant analysis, support vector machine and AdaBoost, the images were classified into 4 groups based on the amount of liver fat.
Results: The accuracy of the automatic liver fat prediction model from ultrasound images for AdaBoost classification was 92.72%. However, the accuracies obtained for support vector machine and linear discriminant analysis classification were 87.88% and 75.76%, respectively.
Conclusion: The proposed approach based on texture features using the GLCM and the AdaBoost classification from ultrasound images automatically detects the amount of liver fat with high accuracy and can help physicians and radiologists in the final diagnosis.

Hasan Mohammadi Kiani, Ahmad Shalbaf, Arash Maghsoudi,
Volume 79, Issue 2 (5-2021)
Abstract

Background: Early diagnosis of patients in the early stages of Alzheimer's, known as mild cognitive impairment, is of great importance in the treatment of this disease. If a patient can be diagnosed at this stage, it is possible to treat or delay Alzheimer's disease. Resting-state functional magnetic resonance imaging (fMRI) is very common in the process of diagnosing Alzheimer's disease. In this study, we intend to separate subjects with mild cognitive impairment from healthy controls based on fMRI data using brain functional connectivity and graph theory.
Methods: In this study, conducted from April to November 2020 in Tehran, after pre-processing the fMRI data, 116 brain regions were extracted using the Automated Anatomical Labeling atlas. Then, the functional connectivity matrix between the time signals of the 116 brain regions was calculated using Pearson correlation and mutual information methods. From the functional connectivity values, the brain graph network was formed, followed by thresholding of the network to keep significant, strong edges while eliminating weaker edges that were likely noise. Finally, 11 global features were extracted from the graph network, and after statistical analyses and selection of optimal features, 14 healthy individuals and 11 patients with mild cognitive impairment were classified using a support vector machine classifier.
Results: Calculations showed that the mutual information algorithm as the functional connectivity method, together with five global graph features (average strength, eccentricity, local efficiency, clustering coefficient and transitivity) and the support vector machine classifier, achieved the best performance, with accuracy, sensitivity and specificity of 84, 86 and 93 percent, respectively.
Conclusion: Combining the features of brain graph and functional connectivity by the mutual information method with a machine learning approach, based on fMRI imaging analysis, is very effective in diagnosing mild cognitive impairment in the early stages of Alzheimer’s which consequently allows treating or delaying this disease.
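The connectivity-matrix and graph-feature steps described above can be sketched with NumPy and NetworkX. This is an illustration under stated assumptions: the random time series stand in for preprocessed regional fMRI signals, the threshold value is arbitrary, and only the Pearson correlation variant (not mutual information) is shown.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
# Synthetic stand-in: 116 regional fMRI time series with 200 time points each
signals = rng.normal(size=(116, 200))

# Functional connectivity: Pearson correlation between every pair of regions
fc = np.corrcoef(signals)
np.fill_diagonal(fc, 0.0)

# Thresholding step: keep strong edges, discard weak (likely noisy) ones
adj = (np.abs(fc) > 0.1).astype(int)
G = nx.from_numpy_array(adj)

# A few global graph features of the kind extracted in the study
features = {
    "mean_strength": adj.sum(axis=1).mean(),
    "transitivity": nx.transitivity(G),
    "mean_clustering": nx.average_clustering(G),
}
print(features)
```

With such feature vectors computed per subject, the final step would be fitting the support vector machine classifier mentioned in the abstract.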

Hanieh Alimiri Dehbaghi, Karim Khoshgard, Hamid Sharini, Samira Jafari Khairabadi, Farhad Naleini,
Volume 81, Issue 5 (8-2023)
Abstract

Background: The use of artificial intelligence algorithms to support accurate diagnosis from medical images is one of the most important applications of this technology in medical imaging. In this research, the possibility of replacing CT scan with simple chest radiography for detecting pneumothorax was investigated using machine learning models, in cases where CT is usually requested.
Methods: This analytical study was conducted from November 2022 to May 2023 at Kermanshah University of Medical Sciences. The data used in this research were extracted from the files of 350 patients suspected of pneumothorax. The collected images were pre-processed in MATLAB software. Then, three machine learning algorithms were applied: logistic elastic net regression (LENR), logistic lasso regression (LLR) and adaptive boosting (AdaBoost). To evaluate the performance of these models, precision, accuracy, sensitivity, specificity, area under the receiver operating characteristic curve (AUC), F1 score, and misclassification rate were used.
Results: In the AdaBoost model, the accuracy value in radiographic and CT images was calculated as 98.89% and 98.63%, respectively, and the precision value was calculated as 99.17% and 98.27%, respectively. In radiographic images, the AUC value for AdaBoost model was calculated as 100% and in CT scan images as 96.96%. The F1 score for the same model in radiographic was 99% and in CT images was 98.68%. The specificity value for the AdaBoost model was calculated as 99.45% in radiographic images and 94.67% in CT scan images. In the LLR model, the AUC value for radiographic and CT scan images was 99.87% and 99.02%, respectively.
Conclusion: According to the criteria evaluated in the present study, the LLR and AdaBoost models performed similarly on radiographic and CT images in detecting pneumothorax. This complication can therefore be diagnosed with a high level of precision by applying machine learning techniques to radiographic images, sparing these patients the higher radiation doses associated with CT scans.
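The AdaBoost classification and evaluation metrics described above can be sketched with scikit-learn. This is a minimal sketch, not the study's pipeline: the random feature vectors stand in for image-derived features, and the sensitivity/specificity formulas are the standard confusion-matrix definitions.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for image-derived feature vectors (pneumothorax vs. normal)
X = rng.normal(size=(350, 20))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=350) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
y_pred = clf.predict(X_te)

# Sensitivity and specificity from the confusion matrix; AUC from class scores
tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} AUC={auc:.2f}")
```

The same evaluation code would apply unchanged to the LENR and LLR models (e.g. `sklearn.linear_model.LogisticRegression` with `l1` or `elasticnet` penalties).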

Ameneh Javanmard, Alireza Salehan,
Volume 81, Issue 10 (1-2024)
Abstract

Background: Coronaviruses were discovered in 1960. They are large viruses of the family Coronaviridae, with single-stranded RNA of animal origin. In humans, coronaviruses can cause mild to severe respiratory illness. In 2020, the World Health Organization declared COVID-19 a global pandemic. The aim of this study is to use the Jaccard similarity coefficient to determine the similarity of COVID-19 behavior patterns across different seasons of the year.
Methods: This study used machine learning systems and similarity metrics to determine the behavior pattern of COVID-19 in different seasons of the year. The research was conducted at the Mousa ibn Ja'far Hospital in Mashhad, from May 2020 to August 2021. The symptoms of affected patients were compared with the compiled dataset, pairwise patient similarities were arranged in a similarity matrix, and the Jaccard similarity coefficient was calculated on the data. Finally, the strains were analyzed from the first emergence to the latest strain. The performance indicators of the algorithm in the Jaccard similarity method were a recall of 0.94, a precision of 1, an F1 score of 0.86, and an accuracy of 0.76. The most important factors examined include white blood cells, platelets, RT-PCR, CT scan, shortness of breath, fever, SpO2, and respiratory rate.
Results: The transmission of the COVID-19 virus depends on several factors, including human interaction. The collected data show that people with COVID-19 have low lymphocyte counts, which is highly consistent with the results of recent studies. Because no suitable dataset existed, a comparative study was conducted and a dataset was collected.
Conclusion: This study, leveraging machine learning algorithms, identified a clear seasonal correlation in the spread of COVID-19. Considering geographical and seasonal variations among patients, distinct symptoms were observed in each season corresponding to the prevalent strain during that period.
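The Jaccard similarity computation at the heart of this study can be sketched in a few lines. The symptom vectors below are hypothetical binary indicators (fever, cough, dyspnea, low SpO2, ...), not patients from the study's dataset.

```python
# Jaccard similarity between binary symptom vectors: |A intersect B| / |A union B|
def jaccard(a, b):
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return inter / union if union else 0.0

# Hypothetical symptom indicators for two patients (1 = symptom present)
patient_a = [1, 1, 0, 1, 0, 1]
patient_b = [1, 0, 0, 1, 1, 1]

# Pairwise similarity matrix over a list of patients, as described in Methods
patients = [patient_a, patient_b]
matrix = [[jaccard(p, q) for q in patients] for p in patients]
print(matrix)
```

Grouping patients by season and comparing such matrices is one way the seasonal behavior patterns the abstract mentions could be quantified.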

Hamed Zamanian, Ahmad Shalbaf,
Volume 82, Issue 10 (1-2025)
Abstract

Background: Nonalcoholic fatty liver disease (NAFLD) represents a growing global health burden, strongly associated with rising rates of obesity, diabetes, and metabolic syndrome. This study introduces a machine learning framework to precisely diagnose NAFLD, classify disease severity, and stratify risk using routine clinical data. Our model improves early detection and risk prediction, supporting evidence-based clinical decisions. Leveraging predictive analytics, this scalable approach identifies high-risk patients and enables personalized interventions. The data-driven strategy optimizes NAFLD management by extracting maximal value from standard healthcare records, delivering both clinical and operational advantages.
Methods: This study examined 181 NAFLD patients across disease stages. The dataset was compiled from February 2010 to January 2019 at Eheim University Hospital, comprising general volunteers who were diagnosed with or without fatty liver based on histopathological evaluation of liver biopsy samples. Forward selection and mutual information identified predictive features, applied in classification models (e.g., random forest) to assess steatosis severity. Explainable AI (XAI) improved model interpretability. Combining robust feature selection, machine learning, and XAI ensured accurate, clinically actionable NAFLD severity evaluation.
Results: The XGBoost classifier with forward feature selection attained a classification accuracy of 69.23%±5.5% for steatosis severity. Interpretability analysis highlighted age, Body Mass Index (BMI), High-Density Lipoprotein (HDL), Low-Density Lipoprotein (LDL), A1c Hemoglobin (HbA1c), and glutamate pyruvate transaminase (GPT) as the most impactful variables across three severity classes. Furthermore, GPT, age, BMI, HDL, HbA1c, LDL, triglycerides, and cholesterol were critical to model performance, emphasizing their diagnostic significance in NAFLD progression. These findings suggest their utility in clinical assessments and risk stratification.
Conclusion: This study developed a machine learning model for accurate NAFLD diagnosis and severity stratification using routine clinical data. Accessible biomarkers reliably predicted disease progression, enabling gastroenterologists to facilitate early intervention. This cost-effective approach reduces healthcare costs while improving outcomes through precision medicine. Implementing such predictive tools in clinical practice could optimize resource allocation and enhance long-term NAFLD management. The framework supports timely diagnostics and targeted therapies, advancing patient-centered care.
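The forward selection and mutual information steps described in this abstract can be sketched with scikit-learn. This is an illustrative sketch only: `make_classification` generates a synthetic stand-in for the study's clinical dataset, a random forest substitutes for the XGBoost classifier to keep the example self-contained, and the number of selected features is arbitrary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector, mutual_info_classif

# Synthetic stand-in for routine clinical features (age, BMI, HDL, HbA1c, GPT, ...)
# with three severity classes, as in the study's steatosis grading
X, y = make_classification(n_samples=181, n_features=15, n_informative=5,
                           n_classes=3, random_state=0)

# Rank candidate features by mutual information with the severity label
mi = mutual_info_classif(X, y, random_state=0)

# Forward feature selection wrapped around a tree-ensemble classifier
sfs = SequentialFeatureSelector(
    RandomForestClassifier(n_estimators=25, random_state=0),
    n_features_to_select=5, direction="forward", cv=3)
sfs.fit(X, y)
selected = np.flatnonzero(sfs.get_support())
print(selected)  # indices of the selected features
```

For the interpretability step, libraries such as SHAP are commonly used to attribute predictions to individual features, which is how rankings like the one in the Results could be produced.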

Zakieh Vahedian Ardakani, Mehran Zarei-Ghanavati, Hamid Riazi-Esfahani, Seyed Mehdi Tabatabaei, Mohammad Reza Mehrabi Bahar, Sadegh Ghafarian, Ahmad Masoomi,
Volume 83, Issue 1 (4-2025)
Abstract

Artificial intelligence (AI) has emerged as a transformative force in modern medicine, with ophthalmology standing at the forefront of its clinical integration. Among ophthalmic disorders, glaucoma—a leading cause of irreversible blindness worldwide—presents unique opportunities and challenges for AI-based solutions due to its chronic, progressive nature and reliance on multimodal data, including structural and functional assessments. This review article offers a comprehensive synthesis of the current and emerging roles of AI in the detection, monitoring, and management of glaucoma. AI algorithms, particularly deep learning and machine learning models, have demonstrated exceptional capabilities in interpreting fundus photographs, optical coherence tomography (OCT) images, and visual field data to identify glaucomatous damage. These systems often approach or even exceed the diagnostic performance of human experts. Moreover, AI has shown significant promise in facilitating large-scale population-based screening, improving early detection rates, and addressing disparities in access to subspecialty care, particularly in low-resource and remote settings. In the monitoring of disease progression, AI tools are being developed to detect subtle structural or functional changes over time, predict future visual outcomes, and support more precise and individualized treatment decisions. Despite these advancements, the widespread clinical adoption of AI in glaucoma care faces several critical barriers. Key limitations include poor generalizability of models across diverse populations, imaging devices, and clinical settings; scarcity of well-annotated, high-quality, and demographically representative datasets; and a lack of transparency and interpretability in algorithmic decision-making—commonly referred to as the “black box” problem. 
Ethical concerns, regulatory uncertainty, integration challenges within existing healthcare infrastructures, and medico-legal accountability also require thoughtful resolution before AI can be reliably deployed in clinical practice. This review critically evaluates the strengths, limitations, and real-world potential of AI technologies in glaucoma. It provides clinicians, researchers, and healthcare policymakers with a balanced and up-to-date perspective, highlighting promising avenues for future research, including explainable AI, federated learning, multi-modal data integration, and longitudinal validation studies. By fostering a deeper understanding of both the opportunities and challenges associated with AI, this article aims to guide the responsible, equitable, and evidence-based integration of AI into comprehensive glaucoma care.

Hossein Akhavan, Fatemeh Rezaei,
Volume 83, Issue 3 (6-2025)
Abstract

Background: The electrocardiogram is a non-invasive method for recording the heart's electrical signals. Despite advances in imaging methods, the electrocardiogram remains a vital tool in the diagnosis of heart diseases. Analysis of electrocardiogram signals plays an important role in the early detection of heart diseases such as arrhythmias and heart attacks. Today, with the advancement of science and technology, computational methods have attracted increasing attention from physicians. In this study, machine learning methods were used to classify normal and abnormal heartbeats.
Methods: The data under study were extracted from a dataset called Heartbeat published on the Kaggle website. This dataset includes samples of audio ECG signals that are divided into healthy and unhealthy categories. First, the data were preprocessed and normalized to prepare them for input into the model. Then, temporal and frequency features were extracted from the signals. Next, a hybrid model consisting of one-dimensional convolutional layers was designed and trained. Also, by using the early stopping method, overfitting was prevented and the stability of the model was improved.
Results: This study showed that by using deep learning, in particular a CNN with 1D convolutional layers, an accuracy of 0.99 and a loss of 0.0350 can be achieved on test data in detecting normal and abnormal heartbeats. The model can analyze the complex structures and temporal dynamics of ECG signals and detect patterns related to cardiac disorders.
Conclusion: Today, the electrocardiogram receives more attention than ever before. Appropriate model selection, data standardization, and a high-quality range of data are among the factors behind the high accuracy achieved in this study. This work can be an effective step in the development of intelligent systems for diagnosing cardiac disorders and can be used in medical applications, especially continuous patient monitoring.
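The core operation of the 1D convolutional layers described above can be illustrated in plain NumPy. The kernel and signal here are toy values chosen for illustration, not the trained network's weights or real ECG data.

```python
import numpy as np

def conv1d(signal, kernel, stride=1):
    """Valid-mode 1D convolution (cross-correlation), as used in CNN layers."""
    k = len(kernel)
    out_len = (len(signal) - k) // stride + 1
    return np.array([np.dot(signal[i * stride:i * stride + k], kernel)
                     for i in range(out_len)])

# Toy ECG-like segment and a slope-detecting kernel
signal = np.array([0.0, 0.1, 0.9, 1.0, 0.2, 0.0, 0.1])
kernel = np.array([-1.0, 0.0, 1.0])  # responds to rising and falling edges

features = conv1d(signal, kernel)
print(features)  # one activation per window position
```

A 1D CNN stacks many such kernels (with learned weights), interleaved with nonlinearities and pooling, to extract the temporal patterns the abstract refers to.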

 


© 2026, Tehran University of Medical Sciences, CC BY-NC 4.0
