Neuromorphic computation with spiking memristors: habituation, experimental instantiation of logic gates and a novel sequence-sensitive perceptron model.

Faraday Discussions 2018 November 13
Memristors have been compared to neurons and synapses, suggesting they would be well suited to neuromorphic computing. A change in voltage across a memristor causes a current spike, which imparts a short-term memory to the device and allows through-time computation: arithmetical operations, sequential logic, or modelling of short-term habituation to a stimulus. Using simple physical rules, simple logic gates such as XOR, and novel, more complex gates such as the arithmetic full adder (AFA), can be instantiated in sol-gel TiO2 plastic memristors. The adder exploits the memristor's short-term memory to add three binary values and outputs the sum, the carry digit and even the order in which they were input, allowing logically (but not physically) reversible computation. Only a single memristor is required to instantiate each gate, because additional input/output ports can be replaced with extra time-steps; a single memristor can therefore do an unexpectedly large amount of computation, which may mitigate the memristor's slow operation speed and may relate to how neurons perform similarly large computations despite their slow operation speeds. These logic gates can be understood by modelling the memristor as a novel type of perceptron: one that is sensitive to input order. Because the memristor's short-term memory changes the weights applied to later inputs, the memristor gates cannot be accurately described by a single time-invariant perceptron; they require either a network of time-invariant perceptrons or a sequence-sensitive, self-reprogrammable perceptron. Thus, the AFA is best described as a sequence-sensitive perceptron that sorts binary inputs into classes corresponding to the arithmetical sum of the inputs. Co-development of memristor hardware alongside software (sequence-sensitive perceptron) models in trained neural networks would allow modern deep neural network architectures to be ported to low-power hardware neural-network chips.
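To make the reweighting idea concrete, the sketch below is a minimal toy implementation of a sequence-sensitive perceptron in Python, not the paper's physical memristor model: a decaying memory trace left by earlier inputs scales the weight applied to later inputs, so the per-step responses (and, for some orderings, the accumulated activation) depend on input order as well as input values. The function name, parameters and their values are illustrative assumptions.

```python
def sequence_sensitive_perceptron(inputs, w=1.0, memory_gain=0.5, decay=0.5):
    """Toy sequence-sensitive perceptron: binary inputs arrive one per time-step.

    A decaying short-term memory trace of earlier inputs modulates the weight
    applied to later inputs. Returns the accumulated activation and the
    per-step spike responses.
    """
    trace = 0.0          # short-term memory left by earlier inputs
    activation = 0.0
    responses = []
    for x in inputs:
        effective_w = w * (1.0 + memory_gain * trace)  # memory reweights later inputs
        spike = effective_w * x
        responses.append(spike)
        activation += spike
        trace = decay * trace + x                      # decay and update the memory trace
    return activation, responses

# The same multiset of bits gives different per-step responses depending on order,
# whereas a time-invariant perceptron would treat all three orderings identically.
print(sequence_sensitive_perceptron([1, 1, 0]))  # (2.5,  [1.0, 1.5, 0.0])
print(sequence_sensitive_perceptron([0, 1, 1]))  # (2.5,  [0.0, 1.0, 1.5])
print(sequence_sensitive_perceptron([1, 0, 1]))  # (2.25, [1.0, 0.0, 1.25])
```

In this toy model the number of 1s among the three inputs sets the overall scale of the response (the arithmetical sum), while the fine structure of the per-step spikes distinguishes the input order, loosely mirroring how the AFA described in the abstract can report both the sum and the order of its inputs.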
