
An asynchronous multi-view learning approach for activity recognition using wearables.

In this paper, we introduce an Asynchronous Multiview Learning (AML) approach that enables accurate transfer of activity classification models across asynchronous sensor views. Our study is motivated by the highly dynamic nature of health monitoring with wearable sensors. Such dynamics include changes in the sensing platform (e.g., a sensor upgrade) and in platform settings (e.g., sampling frequency, on-body sensor location), which cause machine learning algorithms to fail unless they are retrained for the new setting. Our approach allows machine learning algorithms to reconfigure automatically, without any labeled training data in the new setting. Our evaluation on real data collected with wearable motion sensors demonstrates that the average classification accuracy using our automatically labeled training data is 85.2%. This accuracy is only 3.4% to 4.5% below the experimental upper bound, in which ground-truth-labeled training data are used to develop a new activity recognition classifier.
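The abstract does not detail the AML algorithm itself, but the general idea of transferring a model to an unlabeled new sensor view can be illustrated with a generic pseudo-labeling sketch: a classifier trained on the old (source) view labels the incoming stream from the new (target) view, and those automatic labels are then used to train a fresh target-view model. The synthetic data, the affine view distortion, and the nearest-centroid classifier below are all illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "source view" data: 3 activities, 2-D feature vectors
# drawn around well-separated, hand-picked class centers.
n_per_class, dim = 100, 2
centers = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 8.0]])
X_src = np.vstack([c + rng.normal(size=(n_per_class, dim)) for c in centers])
y_src = np.repeat(np.arange(len(centers)), n_per_class)

# "Target view": the same activities observed after a platform change,
# modeled here (an assumption) as a mild affine distortion of feature space.
A = np.array([[1.1, 0.2], [-0.1, 0.9]])
b = np.array([0.5, -0.3])
X_tgt = X_src @ A.T + b  # arrives UNLABELED in the new setting

def fit_centroids(X, y):
    """Nearest-centroid 'model': one mean vector per class."""
    return np.vstack([X[y == k].mean(axis=0) for k in np.unique(y)])

def predict(centroids, X):
    """Assign each sample to its nearest class centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Step 1: the source-view model pseudo-labels the target-view stream.
src_model = fit_centroids(X_src, y_src)
pseudo = predict(src_model, X_tgt)

# Step 2: retrain a target-view model on the automatically labeled data.
tgt_model = fit_centroids(X_tgt, pseudo)

# Compare against the (normally unknown) ground truth of the target view.
acc = (predict(tgt_model, X_tgt) == y_src).mean()
print(f"target-view accuracy with pseudo-labels: {acc:.2f}")
```

When the distortion between views is small relative to class separation, the pseudo-labeled model approaches the accuracy of one trained on ground truth, mirroring the small gap (3.4% to 4.5%) the paper reports against its labeled upper bound.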
