

Showing 6 results for Deep Learning

Ali Ameri,
Volume 77, Issue 7 (10-2019)
Abstract

Background: Deep learning has revolutionized artificial intelligence and has transformed many fields. It allows processing high-dimensional data (such as signals or images) without the need for feature engineering. The aim of this research is to develop a deep learning-based system to decode motor intent from electromyogram (EMG) signals.
Methods: A myoelectric system based on convolutional neural networks (CNN) is proposed as an alternative to conventional classification methods that depend on feature engineering. The proposed model was validated with 10 able-bodied subjects during single and combined wrist motions. Eight EMG channels were recorded using eight pairs of surface electrodes attached around the subject’s dominant forearm. The raw EMG data from 167 ms windows (200 samples) in 8 channels were arranged as 200×8 matrices. For each subject, a CNN was trained using the EMG matrices as the input and the corresponding motion classes as the target. The resulting model was tested using 4-fold cross-validation. The performance of the proposed approach was compared to that of a standard SVM-based model that used a set of time-domain (TD) features including mean absolute value, zero crossings, slope sign changes, waveform length, and mean frequency.
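The windowing and the TD feature set described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names, the non-overlapping window step, and the dead-zone threshold for the zero-crossing and slope-sign-change counts are assumptions, and the spectral mean-frequency feature is omitted.

```python
import numpy as np

def td_features(window, dead_zone=0.01):
    """Classic time-domain (TD) features for one EMG channel: mean absolute
    value (MAV), zero crossings (ZC), slope sign changes (SSC), and
    waveform length (WL). The dead zone suppresses noise-level crossings."""
    x = np.asarray(window, dtype=float)
    mav = np.mean(np.abs(x))
    # Zero crossings: sign changes whose amplitude step exceeds the dead zone.
    zc = np.sum((x[:-1] * x[1:] < 0) & (np.abs(x[:-1] - x[1:]) > dead_zone))
    d = np.diff(x)
    # Slope sign changes: sign changes of the first difference.
    ssc = np.sum((d[:-1] * d[1:] < 0)
                 & ((np.abs(d[:-1]) > dead_zone) | (np.abs(d[1:]) > dead_zone)))
    wl = np.sum(np.abs(d))
    return mav, zc, ssc, wl

def emg_windows(raw, win=200, step=200):
    """Slice continuous multi-channel EMG (samples x channels) into
    win x channels matrices, the CNN input format described above."""
    return np.stack([raw[i:i + win] for i in range(0, len(raw) - win + 1, step)])
```

At 1200 Hz, 200 samples correspond to the 167 ms window in the text; the CNN consumes the raw matrices directly, while the SVM baseline consumes the per-channel TD features.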
Results: In spite of the proven performance and popularity of the TD features, no significant difference (P=0.19) was found between the classification accuracies of the two methods. The advantage of the proposed model is that it does not need manual extraction of features, as the CNN can automatically learn and extract required representations from the EMG data.
Conclusion: These results indicate the capacity of CNNs to learn and extract rich and complex information from biological signals. Because both the amplitude and frequency of EMG increase with increasing muscle force, both temporal and spectral characteristics of EMG are needed for efficient estimation of motor intent; the TD set includes both types of features. The high performance of the CNN model shows its capability to learn temporal and spectral representations from raw EMG data.

Ali Ameri,
Volume 78, Issue 3 (6-2020)
Abstract

Background: Skin cancer is one of the most common forms of cancer in the world, and melanoma is the deadliest type of skin cancer. Both melanoma and melanocytic nevi begin in melanocytes (cells that produce melanin); however, melanocytic nevi are benign, whereas melanoma is malignant. This work proposes a deep learning model for the classification of these two lesions.
Methods: In this analytic study, 1000 melanocytic nevi and 1000 melanoma images from the HAM10000 (human against machine with 10000 training images) dermoscopy database were employed; in each category, 900 randomly selected images were designated as the training set, and the remaining 100 images were considered as the test set. A deep learning convolutional neural network (CNN) was deployed with AlexNet (Krizhevsky et al., 2012) as a pretrained model. The network was trained with 1800 dermoscopy images and subsequently validated with 200 test images. The proposed method removes the need for the cumbersome tasks of lesion segmentation and feature extraction; instead, the CNN can automatically learn and extract useful features from the raw images, so no image preprocessing is required. The study was conducted at Shahid Beheshti University of Medical Sciences, Tehran, Iran, from January to February 2020.
Results: The proposed model achieved an area under the receiver operating characteristic (ROC) curve of 0.98. Using a confidence score threshold of 0.5, a classification accuracy of 93%, sensitivity of 94%, and specificity of 92% were attained. The user can adjust the threshold to change the model performance according to preference. For example, if sensitivity is the main concern, i.e., false negatives are to be avoided, then the threshold should be reduced to improve sensitivity at the cost of specificity. The ROC curve shows that to achieve a sensitivity of 100%, specificity decreases to 83%.
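The threshold tradeoff described above can be made concrete with a small sketch; the helper below is hypothetical, not the study's code. Given the model's confidence scores, lowering the threshold converts false negatives into true positives at the cost of false positives.

```python
def sens_spec(scores, labels, threshold=0.5):
    """Sensitivity and specificity of a binary classifier at a given
    confidence-score threshold (labels: 1 = melanoma, 0 = nevus)."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    return tp / (tp + fn), tn / (tn + fp)
```

Sweeping the threshold over all score values and plotting sensitivity against 1 - specificity traces out the ROC curve whose area is reported above.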
Conclusion: The results show the strength of convolutional neural networks in melanoma detection in dermoscopy images. The proposed method can be deployed to help dermatologists identify melanoma. It can also be implemented for self-diagnosis from photographs taken of skin lesions. This may facilitate early detection of melanoma and hence substantially reduce the mortality of this dangerous malignancy.

Ali Ameri,
Volume 78, Issue 4 (7-2020)
Abstract

Background: The most common types of non-melanoma skin cancer are basal cell carcinoma (BCC) and squamous cell carcinoma (SCC). Actinic keratoses (solar keratoses) and intraepithelial carcinoma (Bowen’s disease), together referred to as AKIEC, are common non-invasive precursors of SCC and may progress to invasive SCC if left untreated. Due to the importance of early detection in cancer treatment, this study aimed to propose a computer-based model for the identification of non-melanoma malignancies.
Methods: In this analytic study, 327 AKIEC, 513 BCC, and 840 benign keratosis images were extracted from the human against machine with 10000 training images (HAM10000) dermoscopy database. From each of these three types, 90% of the images were designated as the training set and the remaining images were considered as the test set. A deep learning convolutional neural network (CNN) was developed for skin cancer detection by using AlexNet (Krizhevsky et al., 2012) as a pretrained network. First, the model was trained on the training images to discriminate between benign and malignant lesions. In comparison with conventional methods, the main advantage of the proposed approach is that it does not need the cumbersome and time-consuming procedures of lesion segmentation and feature extraction, because CNNs are capable of learning useful features from the raw images. Once the system was trained, it was validated with the test data to assess its performance. The study was carried out at Shahid Beheshti University of Medical Sciences, Tehran, Iran, in January and February 2020.
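The per-class 90/10 split described above can be sketched as follows; this is a hypothetical helper, not the study's code, and the seed and dictionary layout are assumptions.

```python
import random

def stratified_split(images_by_class, train_frac=0.9, seed=0):
    """Randomly reserve train_frac of each class for training and the rest
    for testing, mirroring the per-class 90/10 split described above.
    `images_by_class` maps class name -> list of image identifiers."""
    rng = random.Random(seed)
    train, test = {}, {}
    for cls, items in images_by_class.items():
        shuffled = items[:]          # copy so the caller's lists are untouched
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * train_frac)
        train[cls], test[cls] = shuffled[:cut], shuffled[cut:]
    return train, test
```

Splitting within each class keeps the class proportions of the test set close to those of the full dataset, which matters here because the three lesion types have very different counts.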
Results: The proposed deep learning network achieved an AUC (area under the ROC curve) of 0.97. Using a confidence score threshold of 0.5, a classification accuracy of 90% was attained in the classification of images into malignant and benign lesions. Moreover, a sensitivity of 94% and specificity of 86% were obtained. It should be noted that the user can change the threshold to adjust the model performance based on preference. For example, reducing the threshold increases sensitivity while decreasing specificity.
Conclusion: The results highlight the efficacy of deep learning models in detecting non-melanoma skin cancer. This approach can be employed in computer-aided detection systems to assist dermatologists in identification of malignant lesions.
 

Ali Ameri, Mahmoud Shiri, Masoumeh Gity, Mohammad Ali Akhaee,
Volume 79, Issue 5 (8-2021)
Abstract

Breast cancer is one of the most common types of cancer in women. Screening mammography is a low-dose X-ray examination of the breasts, conducted to detect breast cancer at early stages, when the cancerous tumor is too small to be felt as a lump. It is performed for women with no symptoms of breast cancer, so that cancer is detected when it is most treatable, which greatly reduces the death rate from breast cancer. Screening mammography should be performed every year for women aged 45-54, and every two years for women aged 55 and older who are in good health. A mammogram is read by a radiologist to diagnose cancer.
To assist radiologists in reading mammograms, computer-aided detection (CAD) systems have been developed that can identify suspicious lesions on mammograms. CADs can improve the accuracy and confidence of radiologists in decision making and have been approved by the FDA for clinical use. Traditional CAD systems are based on conventional machine learning (ML) and image processing algorithms. Recent advances in software and hardware resources enabled a breakthrough in deep learning (DL) algorithms, which revolutionized various engineering areas, including medical technologies. Recently, DL models have been applied in CAD systems for mammograms and have achieved outstanding performance. In contrast to conventional ML, DL algorithms eliminate the need for the tedious task of human-designed feature engineering, as they are capable of learning useful features automatically from the raw data (the mammogram). One of the most common DL frameworks is the convolutional neural network (CNN). To localize lesions in a mammogram, a CNN is applied in region-based algorithms such as R-CNN, Fast R-CNN, Faster R-CNN, and YOLO.
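Region-based detectors such as those above are commonly scored by the overlap between a predicted bounding box and the radiologist's annotation, measured as intersection over union (IoU); a minimal sketch follows, where the (x1, y1, x2, y2) box format is an assumption for illustration.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Overlap rectangle: the tighter of the two boxes on each side.
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A detection is typically counted as a true positive when its IoU with an annotated lesion exceeds a fixed threshold (0.5 is a common choice), which is how detector sensitivity on annotated mammograms is usually tallied.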
Proper training of a DL‑based CAD requires a large amount of annotated mammogram data, where cancerous lesions have been marked by an experienced radiologist. This highlights the importance of establishing a large, annotated mammogram dataset for the development of a reliable CAD system. This article provides a brief review of the state‑of‑the‑art techniques for DL‑based CAD in mammography.

Zahra Papi, Iraj Abedi, Fatemeh Dalvand, Alireza Amouheidari,
Volume 80, Issue 4 (7-2022)
Abstract

Background: Glioma is the most common primary brain tumor, and early detection of tumors is important in treatment planning for the patient. Precise segmentation of the tumor and intratumoral areas on MRI by a radiologist is the first step in diagnosis; in addition to being time-consuming, it may also yield different diagnoses from different physicians. The aim of this study was to provide an automated method for segmenting the tumor and intratumoral areas.
Methods: This is a fundamental-applied study that was conducted from May 2020 to September 2021 using multimodal MRI images of 285 patients with glioma tumors from the BraTS 2018 database. This database was collected from 19 different MRI imaging centers and includes multimodal MRI images of 210 HGG patients and 75 LGG patients. In this study, a 2D U-Net architecture was designed with a patch-based training method, comprising an encoding path for feature extraction and a symmetrical decoding path. The network was trained in three separate stages, using data from high-grade gliomas (HGG), low-grade gliomas (LGG), and the combination of the two groups, with 210, 75, and 220 patients, respectively.
Results: The proposed model achieved Dice Similarity Coefficients (DSC) of 0.85, 0.85, and 0.77 on the HGG dataset; 0.80, 0.66, and 0.51 on the LGG dataset; and 0.88, 0.79, and 0.77 on the combination of the two groups, for the whole tumor, tumor core, and enhancing region, respectively, in the training dataset. The corresponding Hausdorff Distances (HD) were 8.24, 9.92, and 4.43 for the HGG dataset; 11.5, 11.31, and 2.23 for the LGG dataset; and 7.20, 8.82, and 4.43 for the combination of the two groups.
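The Dice Similarity Coefficient reported above measures the overlap between a predicted segmentation mask and the ground-truth mask, 2|P ∩ T| / (|P| + |T|); a minimal sketch follows, where the empty-mask convention (score 1 when both masks are empty) is an assumption.

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    denom = pred.sum() + truth.sum()
    # Both masks empty: perfect agreement by convention.
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, truth).sum() / denom
```

DSC is computed separately for each nested region (whole tumor, tumor core, enhancing region) by binarizing the multi-class prediction into the corresponding region mask.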
Conclusion: The U-Net network can help physicians accurately segment the tumor and its various areas, and through accurate diagnosis and early treatment, it can increase the survival rate of these patients and improve their quality of life.

Faezeh Moghadas, Zahra Amini, Rahele Kafieh,
Volume 80, Issue 10 (1-2023)
Abstract

Background: Brain-computer interface (BCI) systems enable people with physical disabilities to communicate with the outside world through brain signals, without using physiological mediators. A popular type of BCI is the motor imagery-based system, and one of the most important parts of designing these systems is classifying brain signals into different motor imagery classes in order to transform them into control commands. In this paper, a new method for classifying brain signals based on deep learning is presented.
Methods: This cross-sectional study was conducted at Isfahan University of Medical Sciences, School of Advanced Technologies in Medicine, from February 2020 to June 2022. In the pre-processing block, segmentation of the brain signals, selection of suitable channels, and filtering with a Butterworth filter were performed; the data were then transformed to the time-frequency domain using three different mother wavelets: Cmor, Mexican hat, and Cgaus. In the classification step, two types of convolutional neural networks (one-dimensional and two-dimensional) were applied, each in two different architectures. Finally, the performance of the networks was investigated for each of the three types of input data.
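The time-frequency transformation described above can be sketched with the Mexican-hat (Ricker) wavelet. This is a minimal NumPy illustration of building a 2-D scalogram input from a 1-D signal; the scale grid, kernel width, and use of plain convolution rather than a library CWT routine are assumptions, not the paper's implementation.

```python
import numpy as np

def mexican_hat(t, scale):
    """Mexican-hat (Ricker) mother wavelet sampled at times t, at one scale."""
    x = t / scale
    return (1 - x**2) * np.exp(-x**2 / 2)

def scalogram(signal, scales, width=64):
    """Time-frequency image of a 1-D signal: one row per wavelet scale,
    obtained by convolving the signal with the scaled wavelet."""
    t = np.arange(-width // 2, width // 2)
    rows = [np.convolve(signal, mexican_hat(t, s), mode="same") for s in scales]
    return np.abs(np.stack(rows))
```

The resulting scales-by-samples matrix is the kind of 2-D input the two-dimensional CNN consumes, while the one-dimensional CNN operates on the filtered time series directly.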
Results: Three channels were selected as the best ones for nine subjects. To isolate the 8-30 Hz band, a 5th-order Butterworth filter was used. After finding the optimal parameters of the proposed networks, the wavelet transform with the Cgaus mother wavelet achieved the highest accuracy in both proposed architectures. The two-dimensional convolutional neural network showed faster convergence and higher accuracy, at the cost of more complex computations, and outperformed the one-dimensional network in terms of accuracy, precision, sensitivity, and F1-score. The best result, an accuracy of 92.53%, was obtained with the second architecture.
Conclusion: The results obtained from the proposed network indicate that suitable, well-designed deep learning networks can be used as an accurate tool for data classification in motor imagery applications.



© 2024 , Tehran University of Medical Sciences, CC BY-NC 4.0
