Deep learning using contrast-enhanced ultrasound images to predict the nuclear grade of clear cell renal cell carcinoma.
World Journal of Urology 2024 March 22
PURPOSE: To assess the effectiveness of a deep learning model using contrast-enhanced ultrasound (CEUS) images in distinguishing between low-grade (grade I and II) and high-grade (grade III and IV) clear cell renal cell carcinoma (ccRCC).
METHODS: A retrospective study was conducted using CEUS images of 177 Fuhrman-graded ccRCCs (93 low-grade and 84 high-grade) from May 2017 to December 2020. A total of 6412 CEUS images were captured from the videos and normalized for subsequent analysis. A deep learning model using the RepVGG architecture was proposed to differentiate between low-grade and high-grade ccRCC. The model's performance was evaluated based on sensitivity, specificity, positive predictive value, negative predictive value and area under the receiver operating characteristic curve (AUC). Class activation mapping (CAM) was used to visualize the specific areas that contribute to the model's predictions.
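For context, RepVGG's defining trick is structural re-parameterization: each block is trained with parallel 3x3, 1x1, and identity branches, which are then algebraically fused into a single 3x3 convolution for inference. A toy single-channel NumPy sketch of that fusion (an illustration of the published RepVGG idea, not the study's actual code; the cross-correlation convention and toy tensor sizes are assumptions):

```python
import numpy as np

def conv2d(x, k):
    # Valid-mode 2D cross-correlation (the "convolution" used in deep learning).
    kh, kw = k.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
k3 = rng.standard_normal((3, 3))   # 3x3 branch kernel
k1 = rng.standard_normal((1, 1))   # 1x1 branch kernel

# Fuse the branches into one 3x3 kernel:
#   - pad the 1x1 kernel to 3x3 so it acts only on the window center,
#   - represent the identity branch as a delta kernel (1 at the center).
k1_padded = np.pad(k1, 1)
k_id = np.zeros((3, 3))
k_id[1, 1] = 1.0
k_fused = k3 + k1_padded + k_id

x = rng.standard_normal((6, 6))
xp = np.pad(x, 1)  # "same" padding for the 3x3 convolutions

# Sum of the three training-time branches vs. single fused inference-time conv.
branch_sum = conv2d(xp, k3) + conv2d(x, k1) + x
fused_out = conv2d(xp, k_fused)
assert np.allclose(branch_sum, fused_out)  # fusion is exact, by linearity in the kernel
```

Because convolution is linear in the kernel, the fused 3x3 convolution reproduces the multi-branch output exactly, which is what lets RepVGG keep a plain VGG-style inference graph.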
RESULTS: For discriminating high-grade ccRCC from low-grade, the deep learning model achieved a sensitivity of 74.8%, specificity of 79.1%, accuracy of 77.0%, and an AUC of 0.852 in the test set.
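The reported sensitivity, specificity and accuracy (and the PPV/NPV named in the methods) all derive from the model's confusion matrix on the test set. A minimal sketch of the formulas, with high-grade ccRCC as the positive class; the counts below are made up for illustration and are not the study's data:

```python
def grade_metrics(tp, fp, tn, fn):
    """Binary-classification metrics from confusion-matrix counts
    (positive class = high-grade ccRCC)."""
    return {
        "sensitivity": tp / (tp + fn),          # true-positive rate
        "specificity": tn / (tn + fp),          # true-negative rate
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "ppv": tp / (tp + fp),                  # positive predictive value
        "npv": tn / (tn + fn),                  # negative predictive value
    }

# Hypothetical counts chosen only to exercise the formulas.
m = grade_metrics(tp=8, fp=3, tn=12, fn=2)
```

With these toy counts, sensitivity, specificity and accuracy each come out to 0.8; the AUC, by contrast, is computed from the full ranking of model scores rather than from a single threshold's counts.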
CONCLUSION: The deep learning model based on CEUS images can accurately differentiate between low-grade and high-grade ccRCC in a non-invasive manner.