Demis Hassabis

https://www.readbyqxmd.com/read/29903970/neural-scene-representation-and-rendering
#1
S M Ali Eslami, Danilo Jimenez Rezende, Frederic Besse, Fabio Viola, Ari S Morcos, Marta Garnelo, Avraham Ruderman, Andrei A Rusu, Ivo Danihelka, Karol Gregor, David P Reichert, Lars Buesing, Theophane Weber, Oriol Vinyals, Dan Rosenbaum, Neil Rabinowitz, Helen King, Chloe Hillier, Matt Botvinick, Daan Wierstra, Koray Kavukcuoglu, Demis Hassabis
Scene representation, the process of converting visual sensory data into concise descriptions, is a requirement for intelligent behavior. Recent work has shown that neural networks excel at this task when provided with large, labeled datasets. However, removing the reliance on human labeling remains an important open problem. To this end, we introduce the Generative Query Network (GQN), a framework within which machines learn to represent scenes using only their own sensors. The GQN takes as input images of a scene taken from different viewpoints, constructs an internal representation, and uses this representation to predict the appearance of that scene from previously unobserved viewpoints...
June 15, 2018: Science
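
As a rough illustration of the mechanism the abstract describes, the sketch below aggregates per-view encodings into an order-invariant scene representation and renders a query viewpoint from it. The weights, sizes, and function names are invented stand-ins, not the paper's deep architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the learned networks (hypothetical shapes; the paper's
# representation and generation networks are deep convolutional models).
W_enc = rng.normal(size=(64, 28 * 28 + 7))   # encodes one (image, viewpoint) pair
W_gen = rng.normal(size=(28 * 28, 64 + 7))   # decodes (representation, query view)

def encode_view(image, viewpoint):
    """Embed a single observation of the scene."""
    x = np.concatenate([image.ravel(), viewpoint])
    return np.tanh(W_enc @ x)

def scene_representation(images, viewpoints):
    """Aggregate per-view embeddings; summation makes the result
    order-invariant and accepts any number of context views."""
    return sum(encode_view(im, vp) for im, vp in zip(images, viewpoints))

def render(representation, query_viewpoint):
    """Predict the scene's appearance from an unobserved viewpoint."""
    z = np.concatenate([representation, query_viewpoint])
    return (W_gen @ z).reshape(28, 28)

# Toy usage: three context views of a scene, then one query viewpoint.
views = [rng.normal(size=(28, 28)) for _ in range(3)]
poses = [rng.normal(size=7) for _ in range(3)]        # e.g. position + orientation
r = scene_representation(views, poses)
predicted = render(r, rng.normal(size=7))
```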
https://www.readbyqxmd.com/read/29760527/prefrontal-cortex-as-a-meta-reinforcement-learning-system
#2
Jane X Wang, Zeb Kurth-Nelson, Dharshan Kumaran, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Demis Hassabis, Matthew Botvinick
Over the past 20 years, neuroscience research on reward-based learning has converged on a canonical model, under which the neurotransmitter dopamine 'stamps in' associations between situations, actions and rewards by modulating the strength of synaptic connections between neurons. However, a growing number of recent findings have placed this standard model under strain. We now draw on recent advances in artificial intelligence to introduce a new theory of reward-based learning. According to this theory, the dopamine system trains another part of the brain, the prefrontal cortex, to operate as its own free-standing learning system...
May 14, 2018: Nature Neuroscience
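
A minimal sketch of the meta-reinforcement-learning setup this theory builds on, under the assumption (standard in the meta-RL literature) that the previous action and reward are fed back into a recurrent network, so the activation dynamics can implement learning within a task while the weights stay fixed. All sizes and names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

N_ACT, N_OBS, N_HID = 2, 4, 32  # hypothetical sizes

# Fixed random weights stand in for the slowly-trained network; in the
# theory, dopamine-driven RL shapes these weights across many tasks.
W_in = rng.normal(size=(N_HID, N_OBS + N_ACT + 1), scale=0.1)
W_rec = rng.normal(size=(N_HID, N_HID), scale=0.1)
W_out = rng.normal(size=(N_ACT, N_HID), scale=0.1)

def step(h, obs, prev_action, prev_reward):
    """One recurrent step. The previous action (one-hot) and reward are
    part of the input, so the hidden state can carry a within-task
    learning process without any weight change -- the core meta-RL idea."""
    a_onehot = np.eye(N_ACT)[prev_action]
    x = np.concatenate([obs, a_onehot, [prev_reward]])
    h = np.tanh(W_in @ x + W_rec @ h)
    logits = W_out @ h
    probs = np.exp(logits - logits.max())
    return h, probs / probs.sum()

h = np.zeros(N_HID)
action, reward = 0, 0.0
for t in range(5):                    # a few trials of a toy bandit-like task
    obs = rng.normal(size=N_OBS)
    h, policy = step(h, obs, action, reward)
    action = int(rng.choice(N_ACT, p=policy))
    reward = float(action == 1)       # hypothetical reward rule
```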
https://www.readbyqxmd.com/read/29743670/vector-based-navigation-using-grid-like-representations-in-artificial-agents
#3
Andrea Banino, Caswell Barry, Benigno Uria, Charles Blundell, Timothy Lillicrap, Piotr Mirowski, Alexander Pritzel, Martin J Chadwick, Thomas Degris, Joseph Modayil, Greg Wayne, Hubert Soyer, Fabio Viola, Brian Zhang, Ross Goroshin, Neil Rabinowitz, Razvan Pascanu, Charlie Beattie, Stig Petersen, Amir Sadik, Stephen Gaffney, Helen King, Koray Kavukcuoglu, Demis Hassabis, Raia Hadsell, Dharshan Kumaran
Deep neural networks have achieved impressive successes in fields ranging from object recognition to complex games such as Go [1,2]. Navigation, however, remains a substantial challenge for artificial agents, with deep neural networks trained by reinforcement learning [3-5] failing to rival the proficiency of mammalian spatial behaviour, which is underpinned by grid cells in the entorhinal cortex [6]. Grid cells are thought to provide a multi-scale periodic representation that functions as a metric for coding space [7,8] and is critical for integrating self-motion (path integration) [6,7,9] and planning direct trajectories to goals (vector-based navigation) [7,10,11]...
May 9, 2018: Nature
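
The two computations the abstract attributes to the grid-cell system, reduced to their simplest arithmetic form as an illustration (this is not the paper's agent, which learns grid-like codes inside a deep network):

```python
import numpy as np

# Path integration: accumulate self-motion (velocity) into a position
# estimate. Grid cells are thought to support this computation; here it
# is shown in its most elementary form.
def integrate_path(start, velocities, dt=0.1):
    position = np.asarray(start, dtype=float)
    for v in velocities:
        position += dt * np.asarray(v)
    return position

# Vector-based navigation: the direct trajectory to a goal is just the
# difference between the goal and the integrated position estimate.
def goal_vector(position, goal):
    return np.asarray(goal) - np.asarray(position)

pos = integrate_path([0.0, 0.0], [(1.0, 0.5)] * 20)  # wander for 20 steps
print(goal_vector(pos, [5.0, 5.0]))                  # direction to head next
```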
https://www.readbyqxmd.com/read/29507207/how-cognitive-and-reactive-fear-circuits-optimize-escape-decisions-in-humans
#4
Song Qi, Demis Hassabis, Jiayin Sun, Fangjian Guo, Nathaniel Daw, Dean Mobbs
Flight initiation distance (FID), the distance at which an organism flees from an approaching threat, is an ecological metric of the cost-benefit functions underlying escape decisions. We adapted the FID paradigm to investigate how fast- or slow-attacking "virtual predators" constrain escape decisions. We show that rapid escape decisions rely on "reactive fear" circuits in the periaqueductal gray and midcingulate cortex (MCC), whereas protracted escape decisions, defined by larger buffer zones, were associated with "cognitive fear" circuits, including the posterior cingulate cortex, hippocampus, and ventromedial prefrontal cortex, regions implicated in more complex information processing, cognitive avoidance strategies, and behavioral flexibility...
March 20, 2018: Proceedings of the National Academy of Sciences of the United States of America
https://www.readbyqxmd.com/read/29463734/reply-to-husz%C3%A3-r-the-elastic-weight-consolidation-penalty-is-empirically-valid
#5
James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, Raia Hadsell
No abstract text is available yet for this article.
March 13, 2018: Proceedings of the National Academy of Sciences of the United States of America
https://www.readbyqxmd.com/read/29052630/mastering-the-game-of-go-without-human-knowledge
#6
David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, Demis Hassabis
A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules...
October 18, 2017: Nature
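
To make the "reinforcement learning from self-play, no human data" recipe concrete, here is a toy tabular version applied to the game of Nim. The paper's system instead pairs deep networks with tree search, which this sketch omits entirely; all names here are illustrative.

```python
import random
from collections import defaultdict

# Nim: players alternately take 1-3 objects from a pile; whoever takes
# the last object wins. The agent learns a value table purely from the
# outcomes of games against itself -- no human data, only the rules.
values = defaultdict(float)  # tabular stand-in for a value network

def select_move(pile, eps=0.1):
    moves = [m for m in (1, 2, 3) if m <= pile]
    if random.random() < eps:
        return random.choice(moves)   # occasional exploration
    # pick the move leaving the opponent in the worst-valued position
    return min(moves, key=lambda m: values[pile - m])

def self_play_game():
    pile, history, player = 10, [[], []], 0
    while pile > 0:
        history[player].append(pile)  # states faced by each player
        pile -= select_move(pile)
        player ^= 1
    return history, player ^ 1        # the player who just moved won

for _ in range(5000):
    history, winner = self_play_game()
    for p in (0, 1):
        outcome = 1.0 if p == winner else -1.0
        for state in history[p]:      # learn from the game's own result
            values[state] += 0.05 * (outcome - values[state])
```

After training, the table assigns low value to pile sizes that are multiples of 4, recovering the known optimal strategy from self-play alone.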
https://www.readbyqxmd.com/read/28728020/neuroscience-inspired-artificial-intelligence
#7
REVIEW
Demis Hassabis, Dharshan Kumaran, Christopher Summerfield, Matthew Botvinick
The fields of neuroscience and artificial intelligence (AI) have a long and intertwined history. In more recent times, however, communication and collaboration between the two fields have become less commonplace. In this article, we argue that better understanding biological brains could play a vital role in building intelligent machines. We survey historical interactions between the AI and neuroscience fields and emphasize current advances in AI that have been inspired by the study of neural computation in humans and other animals...
July 19, 2017: Neuron
https://www.readbyqxmd.com/read/28447631/artificial-intelligence-chess-match-of-the-century
#8
Demis Hassabis
No abstract text is available yet for this article.
April 26, 2017: Nature
https://www.readbyqxmd.com/read/28292907/overcoming-catastrophic-forgetting-in-neural-networks
#9
James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, Raia Hadsell
The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now, neural networks have not been capable of this, and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks that they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks...
March 28, 2017: Proceedings of the National Academy of Sciences of the United States of America
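
The mechanism the abstract describes, selectively slowing learning on important weights, can be sketched as a quadratic penalty anchoring each weight to its old value in proportion to an importance estimate. The paper derives this importance from the Fisher information; in the sketch below it is simply given.

```python
import numpy as np

def ewc_penalty(weights, anchor_weights, importance, lam=1.0):
    """Quadratic penalty added to the new task's loss: important weights
    are anchored to their post-task-A values."""
    diff = weights - anchor_weights
    return 0.5 * lam * np.sum(importance * diff ** 2)

def ewc_gradient(weights, anchor_weights, importance, lam=1.0):
    """Its gradient: important weights are pulled back hard toward their
    old values, unimportant ones are free to move -- this is the
    'selective slowing' of learning."""
    return lam * importance * (weights - anchor_weights)

theta_A = np.array([1.0, -0.5, 2.0])         # weights after task A
fisher = np.array([5.0, 0.01, 3.0])          # per-weight importance (given)
theta = theta_A + np.array([0.2, 0.2, 0.2])  # drifted while learning task B
print(ewc_gradient(theta, theta_A, fisher))  # strong pull on weights 0 and 2
```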
https://www.readbyqxmd.com/read/27930904/computations-underlying-social-hierarchy-learning-distinct-neural-mechanisms-for-updating-and-representing-self-relevant-information
#10
Dharshan Kumaran, Andrea Banino, Charles Blundell, Demis Hassabis, Peter Dayan
Knowledge about social hierarchies organizes human behavior, yet we understand little about the underlying computations. Here we show that a Bayesian inference scheme, which tracks the power of individuals, better captures behavioral and neural data compared with a reinforcement learning model inspired by rating systems used in games such as chess. We provide evidence that the medial prefrontal cortex (MPFC) selectively mediates the updating of knowledge about one's own hierarchy, as opposed to that of another individual, a process that underpinned successful performance and involved functional interactions with the amygdala and hippocampus...
December 7, 2016: Neuron
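
A loose sketch of the two model families the abstract contrasts: an Elo-style rating update of the kind used in chess, and a Kalman-style Bayesian update that tracks both an estimate of each individual's power and the uncertainty around it. Both are simplified illustrations, not the paper's fitted models.

```python
def elo_update(r_winner, r_loser, k=32.0):
    """Rating-system (RL-like) update: shift ratings by prediction error."""
    expected = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    return r_winner + k * (1 - expected), r_loser - k * (1 - expected)

def bayes_update(mu_w, var_w, mu_l, var_l, noise=1.0):
    """Kalman-style update on the outcome margin: uncertain estimates
    move more, and each observation shrinks the uncertainty."""
    err = 1.0 - (mu_w - mu_l)            # surprise that the winner won
    total = var_w + var_l + noise        # innovation variance
    mu_w += var_w / total * err
    mu_l -= var_l / total * err
    var_w *= 1 - var_w / total
    var_l *= 1 - var_l / total
    return mu_w, var_w, mu_l, var_l

print(elo_update(1500.0, 1500.0))        # winner gains, loser drops
print(bayes_update(0.0, 1.0, 0.0, 1.0))  # means separate, variances shrink
```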
https://www.readbyqxmd.com/read/27732574/hybrid-computing-using-a-neural-network-with-dynamic-external-memory
#11
Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, Adrià Puigdomènech Badia, Karl Moritz Hermann, Yori Zwols, Georg Ostrovski, Adam Cain, Helen King, Christopher Summerfield, Phil Blunsom, Koray Kavukcuoglu, Demis Hassabis
Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer...
October 27, 2016: Nature
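
The core primitive the abstract describes, a differentiable read from an external memory matrix, can be sketched as a soft content-based lookup; because the read is a weighted average rather than a hard index, gradients flow through it and a controller network can learn what to store and retrieve. Sizes and the key below are hypothetical.

```python
import numpy as np

def content_read(memory, key, sharpness=10.0):
    """memory: (N slots, W width); key: (W,). Returns a weighted mix of
    slots whose content resembles the key (cosine similarity -> softmax)."""
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    similarity = memory @ key / norms
    weights = np.exp(sharpness * similarity)
    weights /= weights.sum()
    return weights @ memory              # the read vector

M = np.zeros((8, 4))                     # 8 memory slots of width 4
M[2] = [1.0, 0.0, 1.0, 0.0]              # "write" something to slot 2
print(content_read(M, np.array([1.0, 0.0, 1.0, 0.0])))  # recovers slot 2
```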
https://www.readbyqxmd.com/read/27551087/semantic-representations-in-the-temporal-pole-predict-false-memories
#12
Martin J Chadwick, Raeesa S Anjum, Dharshan Kumaran, Daniel L Schacter, Hugo J Spiers, Demis Hassabis
Recent advances in neuroscience have given us unprecedented insight into the neural mechanisms of false memory, showing that artificial memories can be inserted into the memory cells of the hippocampus in a way that is indistinguishable from true memories. However, this alone is not enough to explain how false memories can arise naturally in the course of our daily lives. Cognitive psychology has demonstrated that many instances of false memory, both in the laboratory and the real world, can be attributed to semantic interference...
September 6, 2016: Proceedings of the National Academy of Sciences of the United States of America
https://www.readbyqxmd.com/read/27510579/retrieval-based-model-accounts-for-striking-profile-of-episodic-memory-and-generalization
#13
Andrea Banino, Raphael Koster, Demis Hassabis, Dharshan Kumaran
A fundamental theoretical tension exists between the role of the hippocampus in generalizing across a set of related episodes, and in supporting memory for individual episodes. Whilst the former requires an appreciation of the commonalities across episodes, the latter emphasizes the representation of the specifics of individual experiences. We developed a novel version of the hippocampal-dependent paired associate inference (PAI) paradigm, which afforded us the unique opportunity to investigate the relationship between episodic memory and generalization in parallel...
August 11, 2016: Scientific Reports
https://www.readbyqxmd.com/read/27315762/what-learning-systems-do-intelligent-agents-need-complementary-learning-systems-theory-updated
#14
REVIEW
Dharshan Kumaran, Demis Hassabis, James L McClelland
We update complementary learning systems (CLS) theory, which holds that intelligent agents must possess two learning systems, instantiated in mammals in the neocortex and hippocampus. The first gradually acquires structured knowledge representations while the second quickly learns the specifics of individual experiences. We broaden the role of replay of hippocampal memories in the theory, noting that replay allows goal-dependent weighting of experience statistics. We also address recent challenges to the theory and extend it by showing that recurrent activation of hippocampal traces can support some forms of generalization and that neocortical learning can be rapid for information that is consistent with known structure...
July 2016: Trends in Cognitive Sciences
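
A cartoon of the two-system arrangement and of goal-weighted replay, with invented names and a trivial learning rule, just to fix ideas: a fast one-shot store (the "hippocampus") feeds replayed samples, interleaved across old and new experiences, into a slow statistical learner (the "neocortex").

```python
import random

episodic_store = []          # fast system: keeps raw experiences in one shot
cortical_weights = {}        # slow system: running averages per cue

def experience(cue, outcome):
    episodic_store.append((cue, outcome))

def replay(n_samples, weight=lambda episode: 1.0, lr=0.05):
    """Interleaved replay into the slow system. The `weight` function
    allows goal-dependent weighting of which experiences get replayed,
    as the updated theory emphasizes."""
    weights = [weight(ep) for ep in episodic_store]
    for cue, outcome in random.choices(episodic_store, weights, k=n_samples):
        old = cortical_weights.get(cue, 0.0)
        cortical_weights[cue] = old + lr * (outcome - old)

experience("tone", 1.0)
experience("light", 0.0)
replay(500)
print(cortical_weights)      # slow system has absorbed the statistics
```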
https://www.readbyqxmd.com/read/27196978/neural-mechanisms-of-hierarchical-planning-in-a-virtual-subway-network
#15
Jan Balaguer, Hugo Spiers, Demis Hassabis, Christopher Summerfield
Planning allows actions to be structured in pursuit of a future goal. However, in natural environments, planning over multiple possible future states incurs prohibitive computational costs. To represent plans efficiently, states can be clustered hierarchically into "contexts". For example, representing a journey through a subway network as a succession of individual states (stations) is more costly than encoding a sequence of contexts (lines) and context switches (line changes). Here, using functional brain imaging, we asked humans to perform a planning task in a virtual subway network...
May 18, 2016: Neuron
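
The abstract's cost intuition can be made concrete with an invented journey, counting one symbol per station for the flat code versus one per line plus one per line change for the hierarchical code.

```python
# Hypothetical journey: four stops on the red line, then two on the blue.
journey = [("red", s) for s in ["A", "B", "C", "D"]] + \
          [("blue", s) for s in ["E", "F"]]

flat_cost = len(journey)                 # one symbol per station

# Hierarchical: one symbol per context (line) plus one per context switch.
lines = [line for line, _ in journey]
switches = sum(1 for a, b in zip(lines, lines[1:]) if a != b)
contexts = switches + 1
hierarchical_cost = contexts + switches

print(flat_cost, hierarchical_cost)      # 6 vs 3: the hierarchical code wins
```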
https://www.readbyqxmd.com/read/26819042/mastering-the-game-of-go-with-deep-neural-networks-and-tree-search
#16
David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, Demis Hassabis
The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses 'value networks' to evaluate board positions and 'policy networks' to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play...
January 28, 2016: Nature
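
One way the two networks steer the tree search is a PUCT-style selection rule: each candidate move is scored by its averaged value-network estimate plus an exploration bonus shaped by the policy network's prior. The sketch below is schematic and does not reproduce the paper's constants or bookkeeping.

```python
import math

def select_move(moves, Q, P, N, c_puct=1.0):
    """moves: candidate moves; Q, P, N: dicts of averaged value estimate,
    policy prior, and visit count per move. Rarely-visited moves with a
    high prior get a large bonus; often-visited moves fall back on Q."""
    total_visits = sum(N[m] for m in moves)
    def score(m):
        u = c_puct * P[m] * math.sqrt(total_visits + 1) / (1 + N[m])
        return Q[m] + u
    return max(moves, key=score)

moves = ["a", "b"]
Q = {"a": 0.1, "b": 0.3}                 # value-network evaluations (averaged)
P = {"a": 0.7, "b": 0.3}                 # policy-network priors
N = {"a": 10, "b": 50}                   # visit counts so far
print(select_move(moves, Q, P, N))       # balances value against the prior
```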
https://www.readbyqxmd.com/read/26112828/hippocampal-place-cells-construct-reward-related-sequences-through-unexplored-space
#17
H Freyja Ólafsdóttir, Caswell Barry, Aman B Saleem, Demis Hassabis, Hugo J Spiers
Dominant theories of hippocampal function propose that place cell representations are formed during an animal's first encounter with a novel environment and are subsequently replayed during off-line states to support consolidation and future behaviour. Here we report that viewing the delivery of food to an unvisited portion of an environment leads to off-line pre-activation of place cell sequences corresponding to that space. Such 'preplay' was not observed for an unrewarded but otherwise similar portion of the environment...
June 26, 2015: ELife
https://www.readbyqxmd.com/read/25719670/human-level-control-through-deep-reinforcement-learning
#18
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, Demis Hassabis
The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations...
February 26, 2015: Nature
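
The learning target at the heart of deep Q-learning, shown here in tabular miniature on a toy chain environment. The paper's actual contributions, convolutional function approximation from raw pixels, experience replay, and a periodically frozen target network, are omitted from this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

N_STATES, N_ACTIONS, GAMMA, LR = 5, 2, 0.99, 0.1
Q = np.zeros((N_STATES, N_ACTIONS))

def q_update(s, a, reward, s_next, done):
    """Bootstrap (state, action) value from the observed reward plus the
    discounted best value of the next state."""
    target = reward if done else reward + GAMMA * Q[s_next].max()
    Q[s, a] += LR * (target - Q[s, a])

# Toy chain: action 1 moves right, action 0 moves left; the last state
# pays reward 1 and ends the episode.
for _ in range(2000):
    s = 0
    while s < N_STATES - 1:
        a = int(rng.integers(N_ACTIONS))         # explore at random
        s_next = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
        done = s_next == N_STATES - 1
        q_update(s, a, float(done), s_next, done)
        s = s_next

print(Q)  # action 1 dominates: the learned policy walks toward the reward
```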
https://www.readbyqxmd.com/read/25532898/a-goal-direction-signal-in-the-human-entorhinal-subicular-region
#19
Martin J Chadwick, Amy E J Jolly, Doran P Amos, Demis Hassabis, Hugo J Spiers
Navigating to a safe place, such as a home or nest, is a fundamental behavior for all complex animals. Determining the direction to such goals is a crucial first step in navigation. Surprisingly, little is known about how or where in the brain this "goal direction signal" is represented. In mammals, "head-direction cells" are thought to support this process, but despite 30 years of research, no evidence for a goal direction representation has been reported. Here, we used fMRI to record neural activity while participants made goal direction judgments based on a previously learned virtual environment...
January 5, 2015: Current Biology
https://www.readbyqxmd.com/read/23739983/foraging-under-competition-the-neural-basis-of-input-matching-in-humans
#20
Dean Mobbs, Demis Hassabis, Rongjun Yu, Carlton Chu, Matthew Rushworth, Erie Boorman, Tim Dalgleish
Input-matching is a key mechanism by which animals optimally distribute themselves across habitats to maximize net gains based on the changing input values of food supply rate and competition. To examine the neural systems that underlie this rule in humans, we created a continuous-input foraging task where subjects had to decide to stay or switch between two habitats presented on the left and right of the screen. The subject's decision to stay or switch was based on changing input values of reward-token supply rate and competition density...
June 5, 2013: Journal of Neuroscience
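
The input-matching rule behind the task is simple arithmetic: at equilibrium, foragers distribute themselves across habitats in proportion to each habitat's input rate, which equalizes per-capita intake. The numbers below are invented for illustration.

```python
input_rates = {"left": 6.0, "right": 2.0}   # reward tokens per second
n_foragers = 16

total = sum(input_rates.values())
predicted = {h: n_foragers * r / total for h, r in input_rates.items()}
print(predicted)                            # {'left': 12.0, 'right': 4.0}

# Check: per-capita intake is matched across habitats (0.5 tokens/s each),
# so no individual gains by switching sides.
print({h: input_rates[h] / predicted[h] for h in input_rates})
```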