Assessing Generalization Ability of Majority Vote Point Classifiers
IEEE Transactions on Neural Networks and Learning Systems, September 2016
Classification algorithms have traditionally been designed to simultaneously reduce errors caused by bias as well as by variance. However, in many situations a low generalization error is crucial to obtaining a usable classification solution, and even slight overfitting has serious consequences for test performance. In such situations, classifiers with a low Vapnik-Chervonenkis (VC) dimension offer two main advantages: 1) the test error stays close to the training error and 2) the classifier learns effectively from a small number of samples. This paper shows that a class of classifiers named majority vote point (MVP) classifiers can, on account of their very low VC dimension, exhibit a generalization error even lower than that of linear classifiers. The paper first formulates a theoretical upper bound on the VC dimension of the MVP classifier. It then estimates, through empirical analysis, the trend of the exact values of the VC dimension. Finally, case studies on machine fault diagnosis problems and a prostate tumor detection problem confirm that an MVP classifier can achieve a lower generalization error than most other classifiers.
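For context, advantage 1) follows from the classical VC generalization bound; this is standard statistical learning theory, not a result stated in the abstract itself. With probability at least $1 - \delta$ over a training set of $n$ samples, a classifier $f$ drawn from a hypothesis class of VC dimension $h$ satisfies

\[
R(f) \;\le\; R_{\mathrm{emp}}(f) \;+\; \sqrt{\frac{h\left(\ln\frac{2n}{h} + 1\right) + \ln\frac{4}{\delta}}{n}},
\]

where $R(f)$ is the expected (test) error and $R_{\mathrm{emp}}(f)$ is the empirical (training) error. A smaller $h$ directly shrinks the gap between test and training error, and keeps the bound meaningful even when $n$ is small, which is precisely the regime the paper targets.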