Volume 24, Issue 3 (Autumn 2022) | Advances in Cognitive Sciences 2022, 24(3): 105-115


Arjmand M, Setayeshi S, Kelarestaghi M, Hatami J. Reconstruction of functional magnetic resonance imaging images using electroencephalogram signals with the method of autoencoder densely connected convolutional networks. Advances in Cognitive Sciences 2022; 24 (3) :105-115
URL: http://icssjournal.ir/article-1-1445-en.html
1- PhD Candidate of Cognitive Modeling, Department of Cognitive Modeling, Institute for Cognitive Science Studies, Tehran, Iran
2- Associate Professor, Department of Nuclear Engineering, Faculty of Physics and Energy Engineering, Amirkabir University of Technology, Tehran, Iran
3- Assistant Professor, Department of Electrical and Computer Engineering, Technical and Engineering Faculty, Khwarazmi University, Tehran, Iran
4- Associate Professor of Psychology, University of Tehran and Higher Education Institute of Cognitive Sciences, Tehran, Iran
Abstract:
Introduction
Deep Neural Network (DNN) models are among the most widely studied unsupervised feature extraction methods of recent years. Convolutional Neural Networks (CNNs) combine learned features with the input data and use 2D convolutional layers, an architecture that is well suited to processing 2D data such as images. CNNs extract features directly from images and do not require hand-crafted feature extraction by an observer; the relevant features are not predefined. A CNN learns to recognize different features of an image using tens or hundreds of hidden layers, and each hidden layer increases the complexity of the learned image features. The initial hidden layers can learn to detect edges, while later layers learn to identify more complex shapes, particularly the shape of the object being detected.
In general, each successive CNN layer recognizes more detailed features of the image; the network then aggregates and analyzes these features and makes its decision.
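As a rough illustration of this layer-by-layer feature hierarchy, the short Python (Keras) sketch below stacks a few 2D convolutional layers; the input shape, filter counts, and classification head are illustrative assumptions and do not describe the network used in this study.

# Minimal sketch of a stacked 2D CNN; all sizes are illustrative assumptions.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),           # a 2D image input
    layers.Conv2D(32, 3, activation="relu"),   # early layers tend to capture edges
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),   # deeper layers capture more complex shapes
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(10, activation="softmax"),    # hypothetical 10-class decision head
])
model.summary()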
Methods
This research is built on the DenseNet deep learning architecture. A DenseNet consists of several dense blocks, with transition layers placed between two adjacent dense blocks; within a dense block, each layer receives all of the previous feature maps as input. This model achieves high accuracy with a reasonable number of network parameters on object detection tasks.
With suitable changes to its settings, the algorithm achieved highly accurate results on several datasets used for this purpose. The extracted feature maps are collected in a flattened layer and serve as the latent space of an autoencoder network, whose decoder reconstructs the output layer by layer using UpSampling layers.
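The Python (Keras) sketch below outlines this idea: a small DenseNet-style encoder whose dense blocks concatenate all previous feature maps, followed by a flattened latent space and an UpSampling decoder. The growth rate, block depths, and tensor sizes are assumptions for illustration, not the exact configuration reported here.

# Sketch of a DenseNet-style encoder feeding an upsampling autoencoder decoder.
# Growth rate, block depths, and tensor sizes are illustrative assumptions.
from tensorflow.keras import layers, models

def dense_block(x, n_layers, growth_rate=12):
    # each layer receives the concatenation of all previous feature maps
    for _ in range(n_layers):
        h = layers.BatchNormalization()(x)
        h = layers.Activation("relu")(h)
        h = layers.Conv2D(growth_rate, 3, padding="same")(h)
        x = layers.Concatenate()([x, h])
    return x

def transition_layer(x, compression=0.5):
    # 1x1 convolution plus pooling placed between two adjacent dense blocks
    filters = int(x.shape[-1] * compression)
    x = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    return layers.AveragePooling2D()(x)

inputs = layers.Input(shape=(64, 64, 3))            # e.g., an MTF image
x = layers.Conv2D(32, 3, padding="same")(inputs)
x = dense_block(x, 4)
x = transition_layer(x)
x = dense_block(x, 4)
x = transition_layer(x)
latent = layers.Flatten()(x)                        # features collected in a flat layer

d = layers.Dense(16 * 16 * 32, activation="relu")(latent)
d = layers.Reshape((16, 16, 32))(d)
d = layers.UpSampling2D()(d)                        # decoder rebuilds layer by layer
d = layers.Conv2D(32, 3, padding="same", activation="relu")(d)
d = layers.UpSampling2D()(d)
outputs = layers.Conv2D(32, 3, padding="same")(d)   # 64x64x32 output volume (illustrative)

autoencoder = models.Model(inputs, outputs)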
In a joint project supported by the US National Science Foundation, Stanford University, and several other scientific research centers, a website for the free sharing of brain and cognitive science data was launched in 2011 under the name OpenfMRI and later renamed OpenNeuro. The data used here come from this repository, relate to confidence in perceptual decisions, and were recorded simultaneously with EEG and fMRI.
The task used during data recording was the random dot motion (RDM) task, in which healthy volunteers were asked to judge the direction in which dots were moving across the screen.
This study used the independent component analysis (ICA) method to clean the existing EEG signals, implemented in the Python programming language with the fastICA algorithm. Independent component analysis refers to the separation of independent sources that have been mixed by an unknown mixing system; the separation must be performed only from observations of the mixed signals, i.e., both the mixing system and the original source signals are unknown.
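As a minimal example of this step, the sketch below applies scikit-learn's FastICA to a synthetic multichannel recording; the channel count and the components treated as artifacts are hypothetical choices made only for illustration.

# Sketch of ICA-based cleaning with scikit-learn's FastICA on synthetic data.
# The channel count and the components flagged as artifacts are assumptions.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
eeg = rng.standard_normal((10000, 32))      # (n_samples, n_channels) of mixed signals

ica = FastICA(n_components=32, random_state=0)
sources = ica.fit_transform(eeg)            # estimated independent source signals

artifact_idx = [0, 3]                       # hypothetical artifact components
sources[:, artifact_idx] = 0.0              # zero them out before reconstruction

eeg_clean = ica.inverse_transform(sources)  # back-project to cleaned channel space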
The Gramian Angular Summation/Difference Fields (GASF/GADF) method and Markov Transition Fields (MTF) were used to convert the EEG signals into images while reducing the input size without losing the essential information.
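A short sketch of this conversion, using the pyts library's GramianAngularField and MarkovTransitionField transforms on a synthetic one-dimensional signal, is given below; the signal, the 64×64 image size, and the bin count are illustrative assumptions.

# Sketch of encoding a 1-D signal as GASF/GADF and MTF images with pyts.
# The synthetic signal, 64x64 image size, and 8 bins are assumptions.
import numpy as np
from pyts.image import GramianAngularField, MarkovTransitionField

epoch = np.sin(np.linspace(0, 20, 500)).reshape(1, -1)   # (n_samples, n_timestamps)

gasf = GramianAngularField(image_size=64, method="summation").fit_transform(epoch)
gadf = GramianAngularField(image_size=64, method="difference").fit_transform(epoch)
mtf = MarkovTransitionField(image_size=64, n_bins=8).fit_transform(epoch)

print(gasf.shape, gadf.shape, mtf.shape)   # each (1, 64, 64)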
Results
The input data for this deep neural network are tensors consisting of MTF images of size 64×64×3 and fMRI image matrices of size 70×70×32. The total number of available samples is 16,962, of which 11,873 are used for training, giving a training-to-test ratio of 70:30. GPUs from Google's Colaboratory service were used to perform these computations.
The network has 1,593,175 parameters in total, of which 1,541,275 are trainable. The average accuracy of the model was 85.86% over 1000 iterations (Figure 1).
Figure 1. Model implementation steps
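For concreteness, the data sizes and split reported above can be checked with the small Python sketch below; the arrays are not materialized, and only the figures from the text are used.

# Quick check of the split and tensor shapes reported above (no real data).
n_total = 16962
n_train = 11873                      # about 70% of the available pairs
n_test = n_total - n_train           # the remaining ~30%

mtf_shape = (64, 64, 3)              # per-sample MTF input image
fmri_shape = (70, 70, 32)            # per-sample fMRI target volume

print(n_test, round(n_train / n_total, 2), round(n_test / n_total, 2))  # 5089 0.7 0.3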
Conclusion
The proposed method, which combines fastICA, MTF, and the DenseNet deep learning architecture with an autoencoder network, is a new approach to reconstructing brain images. Based on the results obtained, it can be concluded that the deep learning algorithm performs better than shallower (non-deep) methods in many applications. As mentioned earlier, deep learning is a relatively new approach and still has considerable potential for improvement and development. The method used in this research is a novel image reconstruction method and still needs refinement. In future work, further development, such as increasing model accuracy and finding a more appropriate weighting for the neural network, can improve the algorithm's performance. Additionally, as newer deep architectures appear almost daily, their efficiency in reconstructing images from EEG signals can also be evaluated.
Ethical considerations
Compliance with ethical guidelines
There were no ethical considerations involved in the research related to this article.
Authors’ contributions
The design of the research implementation stages and the writing of the article were carried out by Mahdi Arjmand. All authors contributed to the literature review and research background. Saeed Setayeshi, Manuchehr Kelarestaghi, and Javad Hatami refined the design and actively participated in advising and supervising the implementation of the research and the writing stages.
Funding
This research did not receive financial support from any institution or organization.
Acknowledgments
The authors are grateful to all those who assisted in carrying out the present research.
Conflict of interest
The authors declare no conflicts of interest.
 
Full-Text [PDF 1467 kb]
Type of Study: Research
Received: 2022/07/21 | Accepted: 2022/09/01 | Published: 2022/11/15

Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
