Decoding brain cognitive activity across subjects using multimodal M/EEG neuroimaging.
Brain decoding is essential to understanding where and how information is encoded in the brain. Existing literature has shown that good classification accuracy is achievable when decoding single subjects, but multi-subject classification has proven difficult due to inter-subject variability. In this paper, multi-modal neuroimaging was used to improve two-class, multi-subject classification accuracy in a cognitive task of differentiating between a face and a scrambled face. In this transfer learning problem, a feature space based on special-form covariance matrices manipulated with Riemannian geometry is used. A supervised two-layer hierarchical model was trained iteratively to estimate classification accuracies. Results are reported on a publicly available multi-subject, multi-modal human neuroimaging dataset from the MRC Cognition and Brain Sciences Unit, University of Cambridge. The dataset contains simultaneous recordings of electroencephalography (EEG) and magnetoencephalography (MEG). Using leave-one-subject-out cross-validation, our model attained a classification accuracy of 70.82% for single-modality EEG, 81.55% for single-modality MEG, and 84.98% for multi-modal M/EEG.
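The covariance-based Riemannian pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: the paper uses special-form covariance matrices and a two-layer hierarchical model, whereas the sketch below uses a plain nearest-class-mean classifier under the log-Euclidean metric (a common, simpler stand-in for Riemannian covariance classification). All function names and the synthetic two-class "epochs" are illustrative assumptions.

```python
import numpy as np

def logm_spd(S):
    """Matrix logarithm of a symmetric positive-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def covariances(epochs, reg=1e-6):
    """Regularized sample covariance for each epoch (channels x time samples)."""
    return np.array([X @ X.T / X.shape[1] + reg * np.eye(X.shape[0]) for X in epochs])

class LogEuclideanMDM:
    """Minimum-distance-to-mean classifier in the log-Euclidean sense:
    each covariance is mapped through the matrix log, then assigned to the
    nearest class mean in that (Euclidean) log-space."""
    def fit(self, covs, y):
        self.classes_ = np.unique(y)
        self.means_ = {c: np.mean([logm_spd(C) for C in covs[y == c]], axis=0)
                       for c in self.classes_}
        return self

    def predict(self, covs):
        preds = []
        for C in covs:
            L = logm_spd(C)
            dists = [np.linalg.norm(L - self.means_[c]) for c in self.classes_]
            preds.append(self.classes_[int(np.argmin(dists))])
        return np.array(preds)

# Synthetic demo: two classes with different spatial (channel) covariance structure,
# standing in for face vs. scrambled-face epochs.
rng = np.random.default_rng(0)
n_chan, n_time = 8, 200
A0 = np.eye(n_chan)
A1 = np.eye(n_chan) + 0.8 * np.diag(np.ones(n_chan - 1), 1)  # class 1 mixes neighboring channels
epochs, labels = [], []
for k in range(60):
    c = k % 2
    epochs.append((A0 if c == 0 else A1) @ rng.standard_normal((n_chan, n_time)))
    labels.append(c)
epochs, labels = np.array(epochs), np.array(labels)

covs = covariances(epochs)
clf = LogEuclideanMDM().fit(covs[:40], labels[:40])
acc = np.mean(clf.predict(covs[40:]) == labels[40:])
```

In a multi-modal setting, EEG and MEG covariance features can be combined (for example, as a block-diagonal joint covariance) before classification; that fusion step is what the abstract's multi-modal result exploits.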