Hardik Sharma

Orcid: 0000-0003-0028-013X

According to our database, Hardik Sharma authored at least 17 papers between 2015 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
DaCapo: Accelerating Continuous Learning in Autonomous Systems for Video Analytics.
CoRR, 2024

In-Storage Domain-Specific Acceleration for Serverless Computing.
Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, 2024

2023
Towards A Sustainable and Ethical Supply Chain Management: The Potential of IoT Solutions.
CoRR, 2023

Domain-Specific Computational Storage for Serverless Computing.
CoRR, 2023

2022
CoVA: Exploiting Compressed-Domain Analysis to Accelerate Video Analytics.
Proceedings of the 2022 USENIX Annual Technical Conference, 2022

2020
Planaria: Dynamic Architecture Fission for Spatial Multi-Tenant Acceleration of Deep Neural Networks.
Proceedings of the 53rd Annual IEEE/ACM International Symposium on Microarchitecture, 2020

Bit-Parallel Vector Composability for Neural Acceleration.
Proceedings of the 57th ACM/IEEE Design Automation Conference, 2020

Mixed-Signal Charge-Domain Acceleration of Deep Neural Networks through Interleaved Bit-Partitioned Arithmetic.
Proceedings of the PACT '20: International Conference on Parallel Architectures and Compilation Techniques, 2020

2019
Accelerated Deep Learning for the Edge-to-Cloud continuum: a Specialized Full Stack derived from Algorithms.
PhD thesis, 2019

Mixed-Signal Charge-Domain Acceleration of Deep Neural Networks through Interleaved Bit-Partitioned Arithmetic.
CoRR, 2019

2018
Bit Fusion: Bit-Level Dynamically Composable Architecture for Accelerating Deep Neural Networks.
Proceedings of the 45th ACM/IEEE Annual International Symposium on Computer Architecture, 2018

2017
Bit Fusion: Bit-Level Dynamically Composable Architecture for Accelerating Deep Neural Networks.
CoRR, 2017

Scale-out acceleration for machine learning.
Proceedings of the 50th Annual IEEE/ACM International Symposium on Microarchitecture, 2017

2016
From high-level deep neural models to FPGAs.
Proceedings of the 49th Annual IEEE/ACM International Symposium on Microarchitecture, 2016

TABLA: A unified template-based framework for accelerating statistical machine learning.
Proceedings of the 2016 IEEE International Symposium on High Performance Computer Architecture, 2016

The impact of 3D stacking on GPU-accelerated deep neural networks: An experimental study.
Proceedings of the 2016 IEEE International 3D Systems Integration Conference, 2016

2015
Neural acceleration for GPU throughput processors.
Proceedings of the 48th International Symposium on Microarchitecture, 2015
