GLACIER: Glass-Box Transformer for Interpretable Dynamic Neuroimaging

Deep learning models can match or exceed human performance on many tasks, especially vision-related ones. Almost exclusively, these models are used for classification or prediction. However, deep learning models are usually black boxes, and it is often difficult to interpret the model or its features. This lack of interpretability discourages the application of deep learning to fields such as neuroimaging, where results must be transparent and interpretable. We therefore present a 'glass-box' deep learning model and apply it to neuroimaging. Our model mixes spatial and temporal dimensions in succession to estimate dynamic connectivity between the brain's intrinsic networks. The interpretable connectivity matrices produced by our model outperform state-of-the-art models on many tasks across multiple functional MRI datasets. More importantly, our model estimates flexible, task-dependent connectivity matrices, unlike static methods such as Pearson's correlation coefficients.
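The abstract gives no implementation details, but the core idea it describes, alternating attention over the spatial (network) axis and the temporal axis so that the spatial attention weights themselves serve as a time-resolved connectivity estimate, can be sketched. Below is a minimal sketch assuming PyTorch; the module name SpatioTemporalMixer, the layer sizes, the 53-network ICA input, and the use of nn.MultiheadAttention are illustrative assumptions, not the authors' GLACIER implementation.

    # Minimal sketch (assumption: PyTorch; all names and sizes illustrative,
    # not the authors' GLACIER code).
    # Input: per-network embeddings of ICA time courses,
    # shape (batch, T time points, C intrinsic networks, d_model).
    import torch
    import torch.nn as nn

    class SpatioTemporalMixer(nn.Module):
        def __init__(self, d_model: int = 64, n_heads: int = 4):
            super().__init__()
            self.spatial_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.temporal_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

        def forward(self, x):
            B, T, C, D = x.shape
            # Spatial mixing: networks attend to networks, independently at
            # each time point; the attention weights are a (C x C) matrix
            # per time point, read here as a dynamic connectivity estimate.
            xs = x.reshape(B * T, C, D)
            xs, conn = self.spatial_attn(xs, xs, xs)   # conn: (B*T, C, C)
            conn = conn.reshape(B, T, C, C)
            x = xs.reshape(B, T, C, D)
            # Temporal mixing: time points attend to time points, per network.
            xt = x.permute(0, 2, 1, 3).reshape(B * C, T, D)
            xt, _ = self.temporal_attn(xt, xt, xt)
            x = xt.reshape(B, C, T, D).permute(0, 2, 1, 3)
            return x, conn

    # Usage: 2 subjects, 100 TRs, 53 ICA networks embedded into 64 dims.
    mixer = SpatioTemporalMixer()
    x = torch.randn(2, 100, 53, 64)
    out, connectivity = mixer(x)
    print(connectivity.shape)  # torch.Size([2, 100, 53, 53]), one matrix per TR

For contrast, a static baseline such as Pearson's correlation reduces an entire scan to a single C x C matrix, whereas a sketch like the one above yields a separate connectivity matrix at every time point, which is what makes the task-dependent, time-varying estimates in the abstract possible.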
