Aayush Ankit

Orcid: 0000-0003-2827-8306

According to our database, Aayush Ankit authored at least 38 papers between 2017 and 2023.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Exploring Neuromorphic Computing Based on Spiking Neural Networks: Algorithms to Hardware.
ACM Comput. Surv., December 2023

SAMBA: Sparsity Aware In-Memory Computing Based Machine Learning Accelerator.
IEEE Trans. Computers, September 2023

Identifying Efficient Dataflows for Spiking Neural Networks.
Proceedings of the ACM/IEEE International Symposium on Low Power Electronics and Design (ISLPED '22), Boston, MA, USA, August 1, 2022

HyperX: A Hybrid RRAM-SRAM partitioned system for error recovery in memristive Xbars.
Proceedings of the 2022 Design, Automation & Test in Europe Conference & Exhibition, 2022

NAX: neural architecture and memristive xbar based accelerator co-design.
Proceedings of the 59th ACM/IEEE Design Automation Conference (DAC '22), San Francisco, California, USA, July 10, 2022

NAX: Co-Designing Neural Network and Hardware Architecture for Memristive Xbar based Computing Systems.
CoRR, 2021

SPACE: Structured Compression and Sharing of Representational Space for Continual Learning.
IEEE Access, 2021

Mixed Precision Quantization for ReRAM-based DNN Inference Accelerators.
Proceedings of the 26th Asia and South Pacific Design Automation Conference (ASPDAC '21), 2021

Design Tools for Resistive Crossbar based Machine Learning Accelerators.
Proceedings of the 3rd IEEE International Conference on Artificial Intelligence Circuits and Systems, 2021

TraNNsformer: Clustered Pruning on Crossbar-Based Architectures for Energy-Efficient Neural Networks.
IEEE Trans. Comput. Aided Des. Integr. Circuits Syst., 2020

PANTHER: A Programmable Architecture for Neural Network Training Harnessing Energy-Efficient ReRAM.
IEEE Trans. Computers, 2020

Resistive Crossbars as Approximate Hardware Building Blocks for Machine Learning: Opportunities and Challenges.
Proc. IEEE, 2020

Constructing energy-efficient mixed-precision neural networks through principal component analysis for edge intelligence.
Nat. Mach. Intell., 2020

Circuits and Architectures for In-Memory Computing-Based Machine Learning Accelerators.
IEEE Micro, 2020

Structured Compression and Sharing of Representational Space for Continual Learning.
CoRR, 2020

Incremental Learning in Deep Convolutional Neural Networks Using Partial Network Sharing.
IEEE Access, 2020

Energy-Efficient Target Recognition using ReRAM Crossbars for Enabling On-Device Intelligence.
Proceedings of the IEEE Workshop on Signal Processing Systems, 2020

GENIEx: A Generalized Approach to Emulating Non-Ideality in Memristive Xbars using Neural Networks.
Proceedings of the 57th ACM/IEEE Design Automation Conference, 2020

In-Memory Computing in Emerging Memory Technologies for Machine Learning: An Overview.
Proceedings of the 57th ACM/IEEE Design Automation Conference, 2020

Powerline Communication for Enhanced Connectivity in Neuromorphic Systems.
IEEE Trans. Very Large Scale Integr. Syst., 2019

Xcel-RAM: Accelerating Binary Neural Networks in High-Throughput SRAM Compute Arrays.
IEEE Trans. Circuits Syst. I Regul. Pap., 2019

SPARE: Spiking Neural Network Acceleration Using ROM-Embedded RAMs as In-Memory-Computation Primitives.
IEEE Trans. Computers, 2019

Neural network accelerator design with resistive crossbars: Opportunities and challenges.
IBM J. Res. Dev., 2019

PCA-driven Hybrid network design for enabling Intelligence at the Edge.
CoRR, 2019

Efficient Hybrid Network Architectures for Extremely Quantized Neural Networks Enabling Intelligence at the Edge.
CoRR, 2019

PABO: Pseudo Agent-Based Multi-Objective Bayesian Hyperparameter Optimization for Efficient Neural Accelerator Design.
Proceedings of the International Conference on Computer-Aided Design, 2019

PUMA: A Programmable Ultra-efficient Memristor-based Accelerator for Machine Learning Inference.
Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems, 2019

Cross-Layer Design Exploration for Energy-Quality Tradeoffs in Spiking and Non-Spiking Deep Artificial Neural Networks.
IEEE Trans. Multi Scale Comput. Syst., 2018

An All-Memristor Deep Spiking Neural Computing System: A Step Toward Realizing the Low-Power Stochastic Brain.
IEEE Trans. Emerg. Top. Comput. Intell., 2018

Energy-Efficient Neural Computing with Approximate Multipliers.
ACM J. Emerg. Technol. Comput. Syst., 2018

Neuromorphic Computing Across the Stack: Devices, Circuits and Architectures.
Proceedings of the 2018 IEEE International Workshop on Signal Processing Systems, 2018

FALCON: Feature Driven Selective Classification for Energy-Efficient Image Recognition.
IEEE Trans. Comput. Aided Des. Integr. Circuits Syst., 2017

An All-Memristor Deep Spiking Neural Network: A Step Towards Realizing the Low Power, Stochastic Brain.
CoRR, 2017

SPARE: Spiking Networks Acceleration Using CMOS ROM-Embedded RAM as an In-Memory-Computation Primitive.
CoRR, 2017

Performance analysis and benchmarking of all-spin spiking neural networks (Special session paper).
Proceedings of the 2017 International Joint Conference on Neural Networks, 2017

TraNNsformer: Neural network transformation for memristive crossbar based neuromorphic system design.
Proceedings of the 2017 IEEE/ACM International Conference on Computer-Aided Design, 2017

RESPARC: A Reconfigurable and Energy-Efficient Architecture with Memristive Crossbars for Deep Spiking Neural Networks.
Proceedings of the 54th Annual Design Automation Conference, 2017