Towards Improved Design and Evaluation of Epileptic Seizure Predictors.
IEEE Transactions on Biomedical Engineering, March 2018
OBJECTIVE: Key issues in epileptic seizure prediction research are (1) the reproducibility of results and (2) the inability to compare multiple approaches directly. To overcome these problems, a seizure prediction challenge was organized on Kaggle.com. It aimed at establishing benchmarks on a dataset with predefined training, validation, and test sets. Our main objective is to analyze the competition format and to propose improvements that would facilitate a better comparison of algorithms. Our second objective is to present a novel deep learning approach to seizure prediction and to compare it with other commonly used methods using patient-centered metrics.
METHODS: We used the competition's datasets to illustrate the effects of data contamination. Using improved data partitions, we compared three types of models with respect to different objectives.
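The data-contamination problem can be illustrated with a small, purely hypothetical sketch (the numbers and splitting helpers below are ours for illustration, not from the paper or the Kaggle dataset). Overlapping EEG segments cut from the same recording hour are strongly correlated, so a segment-level random split leaks hours across the partition, while a group-aware split by hour does not:

```python
import random

# Hypothetical setting: 10 recording hours, each cut into 6 overlapping
# EEG segments. A contamination-free split must keep every hour wholly
# in either the training set or the test set.
segments = [(seg_id, seg_id // 6) for seg_id in range(60)]  # (id, hour)

def naive_split(segs, test_frac=0.25, seed=0):
    """Segment-level random split: hours can leak across the partition."""
    rng = random.Random(seed)
    shuffled = segs[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_frac))
    return shuffled[:cut], shuffled[cut:]

def grouped_split(segs, test_frac=0.25, seed=0):
    """Hour-level split: each hour lands entirely in one partition."""
    rng = random.Random(seed)
    hours = sorted({hour for _, hour in segs})
    rng.shuffle(hours)
    cut = int(len(hours) * (1 - test_frac))
    train_hours = set(hours[:cut])
    train = [s for s in segs if s[1] in train_hours]
    test = [s for s in segs if s[1] not in train_hours]
    return train, test

def shared_hours(train, test):
    """Hours that appear on both sides of the split (contamination)."""
    return {hour for _, hour in train} & {hour for _, hour in test}

naive_train, naive_test = naive_split(segments)
clean_train, clean_test = grouped_split(segments)
print(len(shared_hours(naive_train, naive_test)))  # > 0: contaminated
print(len(shared_hours(clean_train, clean_test)))  # 0: leak-free
```

Here the naive test set holds 15 segments, which cannot be a union of complete 6-segment hours, so at least one hour is guaranteed to straddle the split; the grouped split rules this out by construction.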
RESULTS: We found that correct selection of test samples is crucial when evaluating the performance of seizure forecasting models. Moreover, we showed that models that achieve state-of-the-art performance on the commonly used AUC, sensitivity, and specificity metrics may not yet be suitable for practical use because of low precision scores.
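That "good" sensitivity and specificity can coexist with low precision follows from class imbalance alone. A back-of-the-envelope sketch (all figures are illustrative assumptions, not results from the paper):

```python
# Hypothetical forecaster with 90% sensitivity and 90% specificity,
# evaluated on data where interictal segments outnumber preictal
# segments 50:1 (illustrative numbers only).
n_preictal = 100
n_interictal = 5000
sensitivity = 0.90
specificity = 0.90

tp = sensitivity * n_preictal          # 90 true alarms
fp = (1 - specificity) * n_interictal  # 500 false alarms
precision = tp / (tp + fp)

print(round(precision, 3))  # 0.153: most alarms are false
```

Because false positives scale with the large interictal class, roughly 85% of alarms would be false despite metrics that look state-of-the-art, which is why the abstract stresses precision for practical usage.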
CONCLUSION: Correlation between the validation and test datasets used in the competition limited its scientific value.
SIGNIFICANCE: Our findings provide guidelines that allow for a more objective evaluation of seizure prediction models.