Generative Inferences Based on Learned Relations.

A key property of relational representations is their generativity: From partial descriptions of relations between entities, additional inferences can be drawn about other entities. A major theoretical challenge is to demonstrate how the capacity to make generative inferences could arise as a result of learning relations from non-relational inputs. In the present paper, we show that a bottom-up model of relation learning, initially developed to discriminate between positive and negative examples of comparative relations (e.g., deciding whether a sheep is larger than a rabbit), can be extended to make generative inferences. The model is able to make quasi-deductive transitive inferences (e.g., "If A is larger than B and B is larger than C, then A is larger than C") and to qualitatively account for human responses to generative questions such as "What is an animal that is smaller than a dog?" These results provide evidence that relational models based on bottom-up learning mechanisms are capable of supporting generative inferences.
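
To make the two kinds of inference concrete, here is a minimal illustrative sketch, not the authors' model: it assumes a relation learner has already induced one-dimensional size magnitudes for a few animals (the animal names and score values are hypothetical placeholders) and shows how such learned representations can support transitive chaining and a generative "smaller than" query.

```python
# Hypothetical magnitudes that a bottom-up relation learner might induce
# from positive/negative examples of "larger than" (values are illustrative).
learned_size = {
    "rabbit": 0.2,
    "dog": 0.45,
    "sheep": 0.6,
    "horse": 0.85,
}

def larger_than(a, b):
    """Discriminative judgment: is animal `a` larger than animal `b`?"""
    return learned_size[a] > learned_size[b]

def transitive_inference(a, b, c):
    """Quasi-deductive chaining: if a > b and b > c, then infer a > c."""
    if larger_than(a, b) and larger_than(b, c):
        return f"{a} is larger than {c}"
    return "no transitive conclusion"

def generate_smaller_than(target):
    """Generative query: name an animal smaller than `target`,
    here picking the closest smaller one (a simple heuristic)."""
    candidates = [x for x in learned_size if learned_size[x] < learned_size[target]]
    return max(candidates, key=lambda x: learned_size[x], default=None)

print(transitive_inference("horse", "sheep", "rabbit"))  # horse is larger than rabbit
print(generate_smaller_than("dog"))                      # e.g., "rabbit"
```

The point of the sketch is only that once relational knowledge is grounded in learned magnitudes, both quasi-deductive transitive conclusions and open-ended generative answers fall out of the same representation; the specific learning mechanism is the subject of the paper itself.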
