

Showing 16 results for Neural Network

Ashrafi M, Hamidi Beheshti Mt, Shahidi Sh, Ashrafi F,
Volume 67, Issue 5 (8-2009)
Abstract

Background: Kidney transplantation has been evaluated in several Iranian studies, mainly with a clinical approach. In this study we evaluated graft survival in kidney recipients and the factors affecting the survival rate. Because artificial neural networks model complex relationships well, we used this ability to build a model for predicting five-year graft survival after kidney transplantation.
Methods: This retrospective study covered 316 kidney transplants performed from 1984 through 2006 in Isfahan. Graft survival was calculated by the Kaplan-Meier method. Cox regression and artificial neural networks were used to construct models for predicting graft survival.
Results: Body mass index (BMI) and type of transplantation (living/cadaver) had significant effects on graft survival in the Cox regression model. The effective variables in the neural network model were recipient age, recipient BMI, type of transplantation and donor age. One-, three- and five-year graft survival was 96%, 93% and 90%, respectively. The suggested artificial neural network model had good accuracy (72%), with an area under the receiver operating characteristic (ROC) curve of 0.736 and appropriate results in the goodness-of-fit test (χ2=33.924). The model was better at identifying true-positive cases than false-negative ones (72% vs. 61%).
Conclusion: Graft survival was higher with living donors than with cadaveric donors, and it decreased with higher BMI at transplantation. Although Cox regression is the traditional statistical approach to survival analysis, this study shows that artificial neural networks can also be used to build models that predict graft survival in kidney transplantation.
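As a sketch of the Kaplan-Meier (product-limit) estimate the study uses for graft survival, a minimal pure-Python version might look like the following. The follow-up data are hypothetical and this is not the authors' code; ties between event times are assumed away for simplicity.

```python
def kaplan_meier(times, events):
    """times: follow-up times; events: 1 = graft failure, 0 = censored.
    Returns (time, survival probability) steps of the product-limit curve.
    Assumes distinct event times for simplicity."""
    at_risk = len(times)
    survival = 1.0
    curve = []
    for t, e in sorted(zip(times, events)):
        if e == 1:                # failure: multiply in conditional survival
            survival *= (at_risk - 1) / at_risk
            curve.append((t, survival))
        at_risk -= 1              # failures and censored both leave the risk set
    return curve

# Hypothetical follow-up times in months.
curve = kaplan_meier([6, 12, 18, 24, 36, 60], [1, 0, 1, 0, 1, 0])
```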


Taghavi Kani M, Homayoon Jafari A, Khoshnevisan A, Arabalibeyk H, Abolhasani Mj,
Volume 68, Issue 11 (2-2011)
Abstract

Background: Studying the behavior of a population of neurons, extracting the mechanisms by which the brain communicates with other tissues, finding treatments for some nervous-system diseases and designing neuroprosthetic devices all require an algorithm that sorts neural spikes automatically. Sorting neural spikes is challenging, however, because of their low signal-to-noise ratio (SNR). The main purpose of this study was to design an automatic algorithm for classifying neuronal spikes emitted from a specific region of the nervous system.

Methods: The spike-sorting process usually consists of three stages: detection, feature extraction and sorting. We initially used signal statistics to detect neural spikes. We then chose a limited number of typical spikes as features and finally used them to train a radial basis function (RBF) neural network to sort the spikes. In most spike-sorting settings these signals are not linearly separable, which is why the RBF neural network was used.

Results: After training, the proposed algorithm classified any arbitrary spike. The results showed that although the proposed Radial Basis Spike Sorter (RBSS) reached the same error rate as previous methods, its computational cost was much lower than that of other algorithms. Its good speed and low computational complexity were its main competitive advantages.

Conclusion: The results of this study suggest that the proposed algorithm is suitable for procedures that require real-time processing and spike sorting.
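A minimal illustration of the RBF idea described above: Gaussian basis activations centered on a few prototype "spikes", followed by a linear least-squares readout. The toy 2-D data and thresholding scheme are assumptions for illustration, not the RBSS implementation.

```python
import numpy as np

def rbf_features(X, centers, sigma):
    # Gaussian radial-basis activations: exp(-||x - c||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

def train_rbf(X, y, centers, sigma):
    # Solve the linear read-out weights by least squares.
    Phi = rbf_features(X, centers, sigma)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict(X, centers, sigma, w):
    return (rbf_features(X, centers, sigma) @ w > 0.5).astype(int)

# Two toy clusters standing in for two spike shapes (hypothetical data).
X = np.array([[0.0, 0.0], [0.1, -0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0.0, 0.0, 1.0, 1.0])
centers = X[[0, 2]]               # one prototype "spike" per cluster
w = train_rbf(X, y, centers, sigma=0.5)
preds = predict(X, centers, 0.5, w)
```

Because the Gaussian features map the clusters into a space where a linear readout suffices, non-linearly-separable spike shapes become separable.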


Mahmoud Akbarian, Khadijeh Paydar, Sharareh Rostam Niakan Kalhori, Abbas Sheikhtaheri,
Volume 73, Issue 4 (7-2015)
Abstract

Background: Pregnancy in women with systemic lupus erythematosus (SLE) is still considered a major challenge. Pre-pregnancy counseling for these patients is essential for estimating the risk of adverse maternal and fetal outcomes from appropriate information. The purpose of this study was to develop an artificial neural network to predict pregnancy outcomes, namely spontaneous abortion and live birth, in SLE.
Methods: In a retrospective study, forty-five variables were identified as factors relevant to predicting pregnancy outcomes in SLE. Data on 104 pregnancies in women with SLE at Shariati Hospital and 45 pregnancies at a private specialized center in Tehran, covering the years 1982 to 2014, were collected and analyzed in August and September 2014. For feature selection, the 149 pregnancies were analyzed with a binary logistic regression model in SPSS software, version 20 (SPSS, Inc., Chicago, IL, USA). The selected variables were used as inputs to neural networks in MATLAB software, version R2013b (MathWorks Inc., Natick, MA, USA). A multi-layer perceptron (MLP) network with the scaled conjugate gradient (trainscg) back-propagation learning algorithm was designed and evaluated. Accuracy, sensitivity and specificity were calculated from the confusion matrix.
Results: Twelve features with P<0.05 and four features with P<0.1 were identified by binary logistic regression as effective features; these sixteen features were used as input variables in the artificial neural networks. For the MLP network, accuracy, sensitivity and specificity were 90.9%, 80.0% and 94.1% on the test data, and 97.3%, 93.5% and 99.0% on the total data, respectively.
Conclusion: We conclude that a feed-forward multi-layer perceptron (MLP) neural network with the scaled conjugate gradient (trainscg) back-propagation learning algorithm, using the identified effective variables, can help physicians predict pregnancy outcomes (spontaneous abortion and live birth) in pregnant women with lupus.
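The accuracy, sensitivity and specificity above are derived from a confusion matrix; a generic sketch of that calculation follows. The counts are hypothetical, not the study's data.

```python
def confusion_metrics(tp, fn, fp, tn):
    """Accuracy, sensitivity and specificity from 2x2 confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

# Hypothetical counts for illustration only.
acc, sen, spe = confusion_metrics(tp=8, fn=2, fp=3, tn=17)
```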
Bahman Mansori , Abdol Hamid Pilevar , Babak Azadnia ,
Volume 73, Issue 7 (10-2015)
Abstract

Background: Magnetic resonance imaging (MRI) is widely applied to the examination and diagnosis of brain tumors because of its high resolution in soft tissues and its lack of harmful radiation. The goal of the image processing is automatic segmentation of brain edema and tumors in different dimensions of the magnetic resonance images.
Methods: The proposed unsupervised method discovers the tumor region, if any, by analyzing the similarity between the two hemispheres, computing a goal function based on the Bhattacharyya coefficient that is used in the next stage to detect the tumor region or part of it. In this stage, to reduce intensity variation, the gray brain image is segmented and then converted back to gray. A self-organizing map (SOM) neural network is then used to color the segmented brain image, and the tumor is finally detected by matching the detected region against the colored image. The method was developed to analyze MRI images for discovering brain tumors at Bu Ali Sina University, Hamedan, Iran, in 2014.
Results: The results for 30 randomly selected images from the data bank of the MRI center in Hamedan were compared with manual segmentation by experts. Our proposed method achieved more than 94% on the Jaccard similarity index (JSI), 97% on the Dice similarity score (DSS), and 98% and 99% on specificity and sensitivity, respectively.
Conclusion: The experimental results were satisfactory, so the method can be used to automatically separate tumor from normal brain tissue in practical applications. The results also showed that the SOM neural network classified the magnetic resonance images of the brain usefully and performed well.
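The hemisphere comparison above rests on the Bhattacharyya coefficient; here is a minimal sketch of that similarity measure between two grey-level histograms (illustrative only, not the authors' code).

```python
import math

def bhattacharyya(h1, h2):
    """Bhattacharyya coefficient between two histograms after normalization:
    1.0 for identical distributions, 0.0 for disjoint ones."""
    s1, s2 = sum(h1), sum(h2)
    return sum(math.sqrt((a / s1) * (b / s2)) for a, b in zip(h1, h2))
```

Identical hemisphere histograms give a coefficient near 1.0; a tumor that shifts one hemisphere's intensity distribution pulls the coefficient down.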


Mohammad Karim Sohrabi , Alireza Tajik ,
Volume 73, Issue 12 (3-2016)
Abstract

Background: Warfarin is one of the most common oral anticoagulants; its role is to prevent clot formation. The dose of this medicine is critical, because changes can endanger patients, and dosing is difficult for physicians since both increasing and decreasing the warfarin dose is dangerous. Identifying the clinical and genetic features involved in determining the dose could make prediction possible with data-mining techniques. The aim of this paper is to provide a convenient way to select the clinical and genetic features that determine the dose of warfarin using artificial neural networks (ANN), and to evaluate the approach for predicting patients' doses.

Methods: This experimental study was conducted from April to May 2014 on 552 patients at Tehran Heart Center Hospital (THC) who were candidates for warfarin anticoagulant therapy within the international normalized ratio (INR) therapeutic target. The clinical and genetic characteristics affecting the dose were extracted, and different feature-selection methods based on the genetic algorithm and particle swarm optimization (PSO), with neural networks as the evaluation function, were run in MATLAB (MathWorks, MA, USA).

Results: Among the algorithms used, particle swarm optimization was the most accurate: the mean square error (MSE), root mean square error (RMSE) and mean absolute error (MAE) were 0.0262, 0.1621 and 0.1164, respectively.

Conclusion: In this article, the most important characteristics were identified using feature-selection methods, and the stable dose was predicted with artificial neural networks. The output is acceptable, and with fewer features it is possible to predict the warfarin dose accurately. Since the prescribed dose is important for patients, the resulting model can be used as a decision support system.
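The MSE, RMSE and MAE reported above can be computed as follows; the doses are hypothetical, not the study's data.

```python
import math

def regression_errors(y_true, y_pred):
    """Mean square error, root mean square error and mean absolute error."""
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    return mse, math.sqrt(mse), mae

# Hypothetical daily doses (mg): true vs. predicted.
mse, rmse, mae = regression_errors([5.0, 4.0, 6.0], [5.5, 3.5, 6.0])
```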


Hossein Ghayoumi Zadeh, Mostafa Danaeian, Ali Fayazi , Farshad Namdari, Sayed Mohammad Mostafavi Isfahani ,
Volume 76, Issue 1 (4-2018)
Abstract

Background: One common complication of diabetes is diabetic retinopathy, which, if not diagnosed and treated in time, leads to blindness. Retinal image analysis is currently adopted to diagnose retinopathy. In this study, a model of hierarchical self-organizing neural networks is presented for detecting and classifying the retinas of diabetic patients.
Methods: This retrospective cross-sectional study was conducted from December to February 2015 at the AJA University of Medical Sciences, Tehran, on the MESSIDOR database, which includes 1200 images of the posterior pole of the eye. Retinal images are classified into three categories: mild, moderate and severe. A system built around a new hybrid SOM classifier is presented for detecting retinal lesions. The proposed system comprises rapid preprocessing, extraction of lesion features, and finally a classification model. The preprocessing consists of three steps: primary separation of the target lesions, separation of the optic disc, and separation of the blood vessels from the retina. The second step collects features based on various descriptors, such as morphology, color, light intensity, and moments. The classification uses a model of hierarchical self-organizing networks, named HSOM, proposed to speed up and improve the accuracy of lesion classification given the high volume of information produced by feature extraction.
Results: The sensitivity, specificity and accuracy of the proposed model for classifying diabetic retinopathy lesions were 98.9%, 96.77% and 97.87%, respectively.
Conclusion: Cases of diabetes with hypertension are constantly increasing, and one of the main adverse effects of the disease involves the eyes. The diagnosis of retinopathy, that is, the identification of exudates, microaneurysms and bleeding, is therefore particularly important. The results show that the proposed model can detect lesions in diabetic retinopathy images and classify them with acceptable accuracy, and that the method performs acceptably compared to other methods.
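Self-organizing maps underlie both this HSOM model and the MRI segmentation work above. A minimal sketch of one SOM update step, finding the best-matching unit (BMU) and pulling it and its grid neighbours toward the input, might look like this (illustrative, not the HSOM implementation; learning rate and radius are assumed values).

```python
import numpy as np

def som_step(weights, x, lr=0.5, radius=1.0):
    """One SOM update: locate the best-matching unit (BMU) on the grid and
    pull it and its neighbours toward the input vector x."""
    rows, cols, _ = weights.shape
    dist = np.linalg.norm(weights - x, axis=2)        # node-to-input distances
    bi, bj = np.unravel_index(np.argmin(dist), dist.shape)
    for i in range(rows):
        for j in range(cols):
            grid_d2 = (i - bi) ** 2 + (j - bj) ** 2   # distance on the map grid
            h = np.exp(-grid_d2 / (2 * radius ** 2))  # neighbourhood kernel
            weights[i, j] += lr * h * (x - weights[i, j])
    return (int(bi), int(bj)), weights
```

Repeating this step over many inputs, while shrinking `lr` and `radius`, makes nearby map nodes specialize in similar inputs.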

Fatemeh Falahati Marvast , Hossein Arabalibeik, Fatemeh Alipour , Abbas Sheikhtaheri, Leila Nouri,
Volume 76, Issue 12 (3-2019)
Abstract

Background: Contact lenses are transparent, thin plastic disks that cover the surface of the cornea. Lenses should be prescribed properly by an expert to provide good visual acuity and reduce side effects. Lens prescription is a multi-stage, complex and time-consuming process involving many considerations. The purpose of this study was to develop a decision support system for contact lens prescription.
Methods: In this fundamental study, data were collected from 127 keratoconus patients referred to the contact lens clinic at Farabi Eye Hospital, Tehran, Iran, from March 2013 to July 2014. Five parameters in the contact lens prescribing process were investigated and collected as follows: "lens vertical position", "vertical movement of the lens during blinking" and "width of the rim" in the fluorescein pattern were obtained by recording videos of the patients while wearing the lens; "fluorescein dye concentration" under the lens was evaluated by the physician; and "patient comfort" was obtained by asking the patient to fill in a simple scoring form. Approved and disapproved lenses were judged and recorded based on the decision of an expert contact lens practitioner. The decision support system was designed using artificial neural networks, with the above variables as inputs and approved/disapproved lenses as outputs. The artificial neural network was developed in MATLAB® software, version 8.3 (MathWorks Inc., Natick, MA, USA). Eighty percent of the data was used to train the network and the remaining 20% to test the system's performance.
Results: Accuracy, sensitivity and specificity, calculated using the confusion matrix, were 91.3%, 89.8% and 92.6%, respectively. The results indicate that the designed decision support system could assist contact lens prescription with high precision.
Conclusion: We conclude that hard contact lens fit can be evaluated properly using an artificial neural network as a decision support system. The proposed system detected approved and disapproved contact lenses with high accuracy.

Atefeh Sedighnia , Sharareh Rostam Niakan Kalhori, Mahshid Nasehi , Ahmad Ali Hanafi-Bojd ,
Volume 77, Issue 4 (7-2019)
Abstract

Background: Tuberculosis (TB) is an important infectious disease with high mortality worldwide; no country is free of it. Nowadays, factors such as co-morbidities are increasing TB incidence. The latest World Health Organization (WHO) report on Iran's TB status shows a rising trend of multidrug-resistant tuberculosis (MDR-TB) and HIV/TB. More than 95% of TB illness and death occurs in developing countries, and most infections occur in South-East Asia and the Western Pacific, which account for 56% of the world's new cases. Incidence refers to the new cases arising each year, and predicting it supports TB prevention, management and control. The purpose of this study was to design and build a system to predict TB incidence in Iran with time-series artificial neural networks (ANN).
Methods: This retrospective analytic study examined 10651 TB cases registered in Iran's Stop TB System from March 2014 to March 2016. Most of the reliable data were used directly; some fields were merged to create new indicators, and two columns were used to compute an additional indicator. Effective variables were first evaluated with correlation coefficient tests and then extracted by linear regression in SPSS statistical software, version 20 (IBM, Armonk, NY, USA). Different training algorithms, numbers of hidden-layer neurons and delays were tried in the time-series neural network, and R, MSE (mean squared error) and the regression graph were used to compare the candidates and select the best network. The incidence-prediction neural network was designed in MATLAB® software, version R2014a (MathWorks Inc., Natick, MA, USA).
Results: Initially, 23 independent variables entered the study. After correlation and regression analysis, 12 variables with P≤0.01 in the Spearman test and P≤0.05 in the Pearson test were selected. The best R, MSE and regression graphs in training, validation and testing were obtained with the Bayesian regularization algorithm using 10 neurons in the hidden layer and two delays.
Conclusion: This study showed that an artificial neural network can acceptably extract knowledge from raw TB data; ANNs are useful for TB incidence prediction.
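A time-series neural network with delays consumes lagged inputs; a minimal sketch of how an incidence series could be turned into such input-target pairs follows (hypothetical counts; two delays, as selected in the study).

```python
def make_lagged(series, delays):
    """Turn a monthly incidence series into (inputs, targets) where each
    target is predicted from the `delays` preceding values."""
    X, y = [], []
    for i in range(delays, len(series)):
        X.append(series[i - delays:i])
        y.append(series[i])
    return X, y

# Hypothetical monthly case counts.
X, y = make_lagged([10, 12, 11, 14, 13, 15], delays=2)
```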

Mansour Rezaei, Negin Fakhri , Fateme Rajati , Soodeh Shahsavari ,
Volume 77, Issue 6 (9-2019)
Abstract

Background: Gestational diabetes mellitus (GDM) is one of the most common metabolic disorders in pregnancy and is associated with serious complications. With early diagnosis, some maternal and fetal complications can be prevented. The aim of this study was to predict gestational diabetes mellitus early with two statistical models, an artificial neural network (ANN) and a decision tree, and to compare the models in diagnosing GDM.
Methods: In this modeling study, four hundred cases were selected from the records of pregnant women monitored by the health care centers of Kermanshah City, Iran, from 2010 to 2012, and their information was analyzed. Demographic information, the mother's pregnancy rating, diabetes status at the beginning of pregnancy, fertility parameters and biochemical test results were collected from the records. A perceptron ANN and a decision tree with the CART algorithm were fitted to the data and their performances compared. The superior model was identified on the basis of accuracy, sensitivity, specificity and the area under the receiver operating characteristic (ROC) curve (AUC).
Results: After fitting the artificial neural network and decision tree models to the data set, accuracy, sensitivity, specificity and the area under the ROC curve were calculated for both models; all were higher for the neural network than for the decision tree. Accuracy was 0.83 versus 0.77, sensitivity 0.62 versus 0.56 and specificity 0.95 versus 0.87, respectively. The area under the ROC curve of the ANN model was significantly higher than that of the decision tree (0.79 vs. 0.74, P=0.03).
Conclusion: In predicting the presence or absence of gestational diabetes mellitus, the artificial neural network model had higher accuracy, sensitivity, specificity and area under the receiver operating characteristic curve than the decision tree model. The perceptron artificial neural network therefore gives predictions that are better and closer to reality than the decision tree.
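The AUC values compared above can be illustrated with the rank interpretation of the ROC area: the probability that a randomly chosen positive case scores above a randomly chosen negative one. This is a generic sketch, not the study's code.

```python
def auc(scores_pos, scores_neg):
    """ROC area as the probability that a randomly chosen positive case
    scores higher than a randomly chosen negative one (ties count half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            wins += 1.0 if p > n else 0.5 if p == n else 0.0
    return wins / (len(scores_pos) * len(scores_neg))
```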

Ali Ameri ,
Volume 77, Issue 7 (10-2019)
Abstract

Background: Deep learning has revolutionized artificial intelligence and has transformed many fields. It allows processing high-dimensional data (such as signals or images) without the need for feature engineering. The aim of this research is to develop a deep learning-based system to decode motor intent from electromyogram (EMG) signals.
Methods: A myoelectric system based on convolutional neural networks (CNN) is proposed as an alternative to conventional classification methods that depend on feature engineering. The proposed model was validated with 10 able-bodied subjects during single and combined wrist motions. Eight EMG channels were recorded using eight pairs of surface electrodes attached around the subject's dominant forearm. The raw EMG data from 167 ms windows (200 samples) in 8 channels were arranged as 200×8 matrices. For each subject, a CNN was trained using the EMG matrices as the input and the corresponding motion classes as the target. The resulting model was tested using 4-fold cross-validation. The performance of the proposed approach was compared to that of a standard SVM-based model that used a set of time-domain (TD) features, including mean absolute value, zero crossings, slope sign changes, waveform length, and mean frequency.
Results: In spite of the proven performance and popularity of the TD features, no significant difference (P=0.19) was found between the classification accuracies of the two methods. The advantage of the proposed model is that it does not need manual extraction of features, as the CNN can automatically learn and extract required representations from the EMG data.
Conclusion: These results indicate the capacity of CNNs to learn and extract rich and complex information from biological signals. Because both the amplitude and the frequency of EMG increase with increasing muscle force, both temporal and spectral characteristics of EMG are needed for efficient estimation of motor intent; the TD set also includes these types of features. The high performance of the CNN model shows its capability to learn temporal and spectral representations from raw EMG data.
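Arranging raw multi-channel EMG into the fixed-size windows the CNN consumes might be sketched as follows. The sampling rate implied by 200 samples in 167 ms (roughly 1.2 kHz) is an inference, and the data here are random placeholders, not recorded EMG.

```python
import numpy as np

def emg_windows(emg, win=200, step=200):
    """emg: (n_samples, n_channels) array -> (n_windows, win, n_channels)."""
    n = (emg.shape[0] - win) // step + 1
    return np.stack([emg[i * step: i * step + win] for i in range(n)])

emg = np.random.randn(1000, 8)   # placeholder for a raw 8-channel recording
batch = emg_windows(emg)         # shape (5, 200, 8): five 200-sample windows
```

Setting `step` below `win` yields overlapping windows, a common way to enlarge the training set.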

Ali Ameri,
Volume 78, Issue 4 (7-2020)
Abstract

Background: The most common types of non-melanoma skin cancer are basal cell carcinoma (BCC) and squamous cell carcinoma (SCC). AKIEC (actinic keratoses, also called solar keratoses, and intraepithelial carcinoma, or Bowen's disease) are common non-invasive precursors of SCC that may progress to invasive SCC if left untreated. Given the importance of early detection in cancer treatment, this study aimed to propose a computer-based model for identifying non-melanoma malignancies.
Methods: In this analytic study, 327 AKIEC, 513 BCC and 840 benign keratosis images were extracted from the Human Against Machine with 10000 training images (HAM10000) dermoscopy dataset. From each of the three types, 90% of the images were designated as the training set and the rest as the test set. A deep learning convolutional neural network (CNN) for skin cancer detection was developed with AlexNet (Krizhevsky, et al., 2012) as a pretrained network. The model was first trained on the training images to discriminate between benign and malignant lesions. Compared with conventional methods, the main advantage of the proposed approach is that it does not need the cumbersome and time-consuming procedures of lesion segmentation and feature extraction, because CNNs can learn useful features from the raw images. Once trained, the system was validated on the test data to assess its performance. The study was carried out at Shahid Beheshti University of Medical Sciences, Tehran, Iran, in January and February 2020.
Results: The proposed deep learning network achieved an AUC (area under the ROC curve) of 0.97. Using a confidence-score threshold of 0.5, a classification accuracy of 90% was attained in separating images into malignant and benign lesions, with a sensitivity of 94% and a specificity of 86%. Note that the user can change the threshold to adjust the model's behavior: for example, lowering the threshold increases sensitivity while decreasing specificity.
Conclusion: The results highlight the efficacy of deep learning models in detecting non-melanoma skin cancer. This approach can be employed in computer-aided detection systems to assist dermatologists in identification of malignant lesions.
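The threshold trade-off noted in the results can be illustrated with hypothetical confidence scores (not the study's data): sweeping the threshold moves sensitivity and specificity in opposite directions.

```python
def sens_spec(scores_malignant, scores_benign, threshold):
    """Sensitivity and specificity at a given confidence threshold."""
    tp = sum(s >= threshold for s in scores_malignant)  # malignant caught
    tn = sum(s < threshold for s in scores_benign)      # benign caught
    return tp / len(scores_malignant), tn / len(scores_benign)

mal = [0.9, 0.8, 0.6, 0.4]  # model confidence for truly malignant lesions
ben = [0.7, 0.3, 0.2, 0.1]  # model confidence for truly benign lesions
# Lowering the threshold from 0.5 to 0.3 raises sensitivity, lowers specificity.
```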
 

Homayoon Yektaei, Mohammad Manthouri,
Volume 78, Issue 6 (9-2020)
Abstract

Breast cancer is the most common cancer among women, and the earlier it is diagnosed, the easier it is to treat. The most common way to diagnose breast cancer is mammography, a simple breast X-ray and a tool for early detection of non-palpable breast cancers and tumors. Because of some limitations of this method, such as low sensitivity, especially in dense breasts, other methods such as 3D mammography, ultrasound and magnetic resonance imaging are often suggested to obtain additional useful information. Recently, computer-aided (or intelligent) diagnosis has been developed to help radiologists improve diagnostic accuracy. Such a system generally consists of four steps: pre-processing, segmenting regions of interest, extracting and selecting features, and finally classification. Nowadays, the use of imaging techniques for pattern recognition and automatic detection of breast cancer in mammography, and even in digital pathology (one of the emerging trends in modern medicine), reduces human error and speeds up diagnosis. In this article, we review recent findings, with their advantages and disadvantages, on the diagnosis of breast cancer by neural networks, particularly the artificial neural networks widely used in cancer diagnosis. This literature review shows that hybrid algorithms have been better at improving classification and detection accuracy, and that convenient tumor diagnosis through computer-aided diagnosis systems will greatly help physicians. Much work has been done in recent years on computer-based diagnosis of breast cancer, and many advances have been made. All methods have a noticeable error rate that varies with breast type, but compared with other kinds of neural networks, convolutional networks and methods combined with convolution give better results.
Another advantage of convolutional networks is the automatic extraction of useful features. Today, the best accuracy in detecting whether a cancerous mass is benign or malignant is achieved with convolutional networks.
Sanaz Jafari, Ahmad Shalbaf, Jamie Sleigh,
Volume 78, Issue 6 (9-2020)
Abstract

Background: Ensuring adequate depth of anesthesia during surgery is essential for anesthesiologists, to prevent unwanted awareness during surgery or failure to return to consciousness. Since anesthetics act on the central nervous system, brain-signal processing such as electroencephalography (EEG) can be used to predict different levels of anesthesia. Anesthesia disrupts the interaction between different regions of the brain, so connectivity between areas can be a key factor in the anesthetic process. This study aims to determine the depth of anesthesia from the EEG signal using effective brain connectivity between the frontal and temporal regions.
Methods: This study, done from April to December 2018 in Tehran, used EEG signals recorded from eight patients undergoing propofol anesthesia at Waikato Hospital, New Zealand. Effective brain connectivity between the frontal and temporal regions was extracted using various Granger causality methods, including the directional transfer function, normalized directional transfer function, partial coherence, partial oriented coherence, and imaginary coherence. The effective connectivity indices in the three states (awake, anesthetized and recovery) were calculated in MATLAB software. A perceptron neural network was then used to classify the anesthetic phases (awake, anesthesia and recovery) automatically.
Results: The results show that the directional transfer function has a high correlation coefficient with the BIS index in all cases. Owing to its faster response to the drug, lower variability, and better ability to track the effect of propofol, the directional transfer function index also outperforms the BIS index, a commercial depth-of-anesthesia monitor, in clinical application. With an artificial neural network, our index likewise distinguishes the three anesthesia states automatically better than the BIS index.
Conclusion: The directional transfer function between pairs of EEG signals from the frontal and temporal regions can effectively track the effect of propofol and estimate the patient's depth of anesthesia better than other effective connectivity indices, and it works better than the BIS index in clinical settings.

Mansour Rezaei , Daryush Afshari, Negin Fakhri, Nazanin Razazian,
Volume 79, Issue 4 (7-2021)
Abstract

Background: Multiple sclerosis (MS) is one of the most debilitating diseases among young adults. Knowing these patients' disability score, the Expanded Disability Status Scale (EDSS), helps in choosing their treatment, but calculating the EDSS takes neurologists a lot of time, so a way to estimate it is useful. This study aimed to estimate the EDSS score of MS patients using two statistical models, an artificial neural network (ANN) and a decision tree (DT).
Methods: This cross-sectional study was performed on MS registry data of Kermanshah province from April 2017 to November 2018. From the data available in the registry system, 12 variables were extracted, covering demographic information, information about the MS disease and the EDSS score. EDSS scores were then estimated with the ANN and DT models, whose performance was compared in terms of estimation error, correlation and mean estimated score. Data were analyzed using Weka software, version 3.9.2, and SPSS software, version 25, with a significance level of 0.05.
Results: In this study, 353 people were studied. The mean age of the patients was 36.47±9.1 years, the mean age at onset was 30.34±9.2 years, the mean duration of the disease was 6.20±5.7 years and the mean EDSS score was 2.46±1.8. Estimation errors in the DT model were lower than in the ANN model. The real EDSS score was significantly correlated with the scores estimated by DT (r=0.571) and ANN (r=0.623). The mean EDSS estimated by the DT model (2.46±1.1) was not significantly different from the real mean (P=0.621), but the mean EDSS estimated by the ANN model (2.87±1.3) was significantly higher than the real mean (P<0.05).
Conclusion: The DT model estimated the EDSS score of MS patients better than the ANN model, with predictions closer to the actual scores; the DT model can therefore estimate the EDSS score of MS patients accurately.

Sara Bagherzadeh, Arash Maghsoudi, Ahmad Shalbaf,
Volume 79, Issue 10 (1-2022)
Abstract

Background: Schizophrenia is a mental disorder that severely affects individuals' perception and relationships. The disease is currently diagnosed by psychiatrists through psychiatric tests, which depends heavily on their experience and knowledge. This study aimed to design a fully automated framework for diagnosing schizophrenia from electroencephalogram signals using advanced deep learning algorithms.
Methods: In this analytic study, done from April to October 2021 in Tehran, 19-channel electroencephalogram signals from 14 schizophrenia patients and 14 healthy individuals were recorded and pre-processed. Effective connectivity was then estimated with the transfer entropy method, and a 19×19 asymmetric connectivity matrix was constructed and rendered as a color-mapped image. These effective connectivity images were used as inputs to five pre-trained neural networks: AlexNet, ResNet-50, ShuffleNet, Inception, and Xception. Finally, the parameters of these networks were fine-tuned on the newly constructed images, using the adaptive moment estimation optimizer algorithm and cross-entropy as the loss function, to diagnose schizophrenia patients. 10-fold cross-validation and subject-independent validation were used to evaluate the proposed method.
Results: The highest average accuracy, precision, sensitivity and F-score for classifying the schizophrenia and healthy classes were achieved using the connectivity images with the Inception model: 96.52%, 95.89%, 97.22% and 96.55%, respectively, under subject-independent validation, and 98.51% for all four measures under 10-fold cross-validation. There was also less effective connectivity in schizophrenic patients than in healthy individuals; these patients generally show much less information flow.
Conclusion: Based on our results, the proposed model can effectively analyze brain function and may help psychiatrists diagnose schizophrenia patients accurately, reducing errors and the resulting inappropriate treatment.
 

Mahdieh Jamshidi, Vahid Jamshidi,
Volume 81, Issue 4 (7-2023)
Abstract

Background: Because various factors are involved in the development of chronic kidney disease, the disease presents with different clinical and laboratory symptoms. The variety in the type and number of clinical symptoms often misleads the treating physician. The aim of this study was to extract the key features of the disease and find the best data-mining methods to improve the accuracy of kidney disease diagnosis.
Methods: This cross-sectional study was conducted over 30 months, from September 2021 to March 2023, at Ali Ebn Abi Taleb Hospital in Rafsanjan. Predictive models were developed and tested using different combinations of disease characteristics and seven data-mining methods in RapidMiner Studio software. The limitations of the study are as follows: 1) the models were based on the records of patients aged 40 and older, which may limit generalization to a wider age group; 2) despite the high accuracy and comprehensiveness of the method, the models were based only on the data of kidney disease patients at Ali Ebn Abi Taleb Hospital in Rafsanjan; and 3) climate was not included in the data set, so its hidden relationships with kidney disease were not explored.
Results: The experiments showed that the proposed prediction model, using the Bayes method and eight identified key features (age, renal biopsy, uremia, sedimentation, albumin, edema, nocturnal enuresis, and urine specific gravity), can detect kidney disease in people with different clinical characteristics with 99.38% accuracy.
Conclusion: Since early diagnosis of kidney disease and adoption of appropriate treatment can prevent the progression of kidney damage, this study set out to achieve that goal using modern statistical methods and artificial intelligence techniques. With the proposed method and the conducted experiments, the most important features and the best data-mining method were identified, making it possible to predict kidney disease with high accuracy.
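The "Bayes method" the study found best is typically a naive Bayes classifier; the following is a minimal Gaussian naive Bayes sketch over numeric features, illustrative only and not the RapidMiner implementation (the two-feature toy data are hypothetical).

```python
import math

class GaussianNB:
    """Minimal Gaussian naive Bayes: per-class feature means/variances
    plus class priors, combined via Bayes' rule at prediction time."""

    def fit(self, X, y):
        self.stats, self.prior = {}, {}
        for c in set(y):
            rows = [x for x, lab in zip(X, y) if lab == c]
            self.prior[c] = len(rows) / len(X)
            feats = []
            for col in zip(*rows):
                m = sum(col) / len(col)
                var = max(1e-6, sum((v - m) ** 2 for v in col) / len(col))
                feats.append((m, var))       # per-feature mean and variance
            self.stats[c] = feats
        return self

    def predict(self, x):
        def log_post(c):                     # log prior + log likelihoods
            lp = math.log(self.prior[c])
            for v, (m, var) in zip(x, self.stats[c]):
                lp += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
            return lp
        return max(self.stats, key=log_post)

# Hypothetical two-feature patient data: class 1 = disease, class 0 = healthy.
model = GaussianNB().fit([[1.0, 1.0], [1.2, 0.9], [5.0, 5.0], [5.2, 4.8]],
                         [0, 0, 1, 1])
```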



© 2024 , Tehran University of Medical Sciences, CC BY-NC 4.0
