Sayeh Sharify

According to our database, Sayeh Sharify authored at least 20 papers between 2017 and 2023.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2023
Mixed-Precision Quantization with Cross-Layer Dependencies.
CoRR, 2023

2021
Boveda: Building an On-Chip Deep Learning Memory Hierarchy Brick by Brick.
Proceedings of Machine Learning and Systems, 2021

2020
Late Breaking Results: Building an On-Chip Deep Learning Memory Hierarchy Brick by Brick.
Proceedings of the 57th ACM/IEEE Design Automation Conference, 2020

2019
Accelerating Image-Sensor-Based Deep Learning Applications.
IEEE Micro, 2019

ShapeShifter: Enabling Fine-Grain Data Width Adaptation in Deep Learning.
Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture, 2019

Laconic deep learning inference acceleration.
Proceedings of the 46th International Symposium on Computer Architecture, 2019

Bit-Tactical: A Software/Hardware Approach to Exploiting Value and Bit Sparsity in Neural Networks.
Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems, 2019

2018
Value-Based Deep-Learning Acceleration.
IEEE Micro, 2018

Laconic Deep Learning Computing.
CoRR, 2018

DPRed: Making Typical Activation Values Matter In Deep Learning Computing.
CoRR, 2018

Bit-Tactical: Exploiting Ineffectual Computations in Convolutional Neural Networks: Which, Why, and How.
CoRR, 2018

Exploiting Typical Values to Accelerate Deep Learning.
Computer, 2018

Identifying and Exploiting Ineffectual Computations to Enable Hardware Acceleration of Deep Learning.
Proceedings of the 16th IEEE International New Circuits and Systems Conference, 2018

Loom: exploiting weight and activation precisions to accelerate convolutional neural networks.
Proceedings of the 55th Annual Design Automation Conference, 2018

2017
Loom: Exploiting Weight and Activation Precisions to Accelerate Convolutional Neural Networks.
CoRR, 2017

Cnvlutin2: Ineffectual-Activation-and-Weight-Free Deep Neural Network Computing.
CoRR, 2017

Tartan: Accelerating Fully-Connected and Convolutional Layers in Deep Learning Networks by Exploiting Numerical Precision Variability.
CoRR, 2017

Dynamic Stripes: Exploiting the Dynamic Precision Requirements of Activation Values in Neural Networks.
CoRR, 2017

Bit-pragmatic deep neural network computing.
Proceedings of the 50th Annual IEEE/ACM International Symposium on Microarchitecture, 2017

Bit-Pragmatic Deep Neural Network Computing.
Proceedings of the 5th International Conference on Learning Representations, 2017

