A Context-driven Extractive Framework for Generating Realistic Image Descriptions.
IEEE Transactions on Image Processing: a Publication of the IEEE Signal Processing Society, 15 November 2016
Automatic image annotation methods are extremely beneficial for image search, retrieval, and organization systems. The lack of strict correlation between semantic concepts and visual features, referred to as the semantic gap, is a major challenge for annotation systems. In this paper, we propose an image annotation model that incorporates contextual cues collected from sources both intrinsic and extrinsic to images in order to bridge the semantic gap. The main focus of our work is a large real-world dataset of news images that we collected. Unlike standard image annotation benchmark datasets, our dataset does not require human annotators to generate artificial ground-truth descriptions after data collection, since our images already include contextually meaningful, real-world captions written by journalists. We thoroughly study the nature of image descriptions in this real-world dataset. News image captions describe both the visual contents and the contexts of images. Auxiliary information sources are also available with such images in the form of news articles and metadata (e.g., keywords and categories). The proposed framework extracts contextual cues from available sources of different data modalities and transforms them into a common representation space, i.e., the probability space. Predicted annotations are later transformed into sentence-like captions through an extractive framework applied over news articles. Our context-driven framework outperforms the state-of-the-art on the collected dataset of approximately 20,000 items, as well as on a previously available, smaller news images dataset.
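The extractive step described above — mapping predicted annotations in the common probability space onto article sentences to produce a caption — can be sketched as follows. This is a minimal illustrative assumption, not the paper's exact scoring rule: each sentence is scored by the probability mass of the predicted annotation terms it contains, and the highest-scoring sentence is extracted as the caption. All function and variable names here are hypothetical.

```python
def extract_caption(article_sentences, annotation_probs):
    """Pick the article sentence whose words best match the predicted annotations.

    annotation_probs: dict mapping predicted annotation terms to their
    probabilities in the common (probability) representation space.
    """
    def score(sentence):
        words = set(sentence.lower().split())
        # Sum the probability mass of annotation terms appearing in the sentence.
        return sum(p for term, p in annotation_probs.items() if term in words)

    # Extract the sentence with the highest accumulated annotation probability.
    return max(article_sentences, key=score)


sentences = [
    "The summit opened in Geneva on Monday.",
    "President Smith waved to the crowd outside the building.",
    "Markets were unchanged at the close.",
]
probs = {"president": 0.9, "crowd": 0.6, "summit": 0.3}
print(extract_caption(sentences, probs))
# → President Smith waved to the crowd outside the building.
```

In practice the paper's framework fuses cues from several modalities before this step; the sketch only illustrates the final extractive selection over article sentences.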