Yaman Umuroglu

ORCID: 0000-0002-3700-5935

According to our database, Yaman Umuroglu authored at least 25 papers between 2014 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2024
EcoFlow: Efficient Convolutional Dataflows on Low-Power Neural Network Accelerators.
IEEE Trans. Computers, September 2024

A2Q+: Improving Accumulator-Aware Weight Quantization.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

2022
Elastic-DF: Scaling Performance of DNN Inference in FPGA Clouds through Automatic Partitioning.
ACM Trans. Reconfigurable Technol. Syst., 2022

RadioML Meets FINN: Enabling Future RF Applications With FPGA Streaming Architectures.
IEEE Micro, 2022

Open-source FPGA-ML codesign for the MLPerf Tiny Benchmark.
CoRR, 2022

QONNX: Representing Arbitrary-Precision Quantized Neural Networks.
CoRR, 2022

2021
Evaluation of Optimized CNNs on Heterogeneous Accelerators Using a Novel Benchmarking Approach.
IEEE Trans. Computers, 2021

Ps and Qs: Quantization-Aware Pruning for Efficient Low Latency Neural Network Inference.
Frontiers Artif. Intell., 2021

2020
LogicNets: Co-Designed Neural Networks and Circuits for Extreme-Throughput Applications.
Proceedings of the 30th International Conference on Field-Programmable Logic and Applications, 2020

Evaluation of Optimized CNNs on FPGA and non-FPGA based Accelerators using a Novel Benchmarking Approach.
Proceedings of the 2020 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA '20), 2020

High-Throughput DNN Inference with LogicNets.
Proceedings of the 28th IEEE Annual International Symposium on Field-Programmable Custom Computing Machines, 2020

2019
Optimizing Bit-Serial Matrix Multiplication for Reconfigurable Computing.
ACM Trans. Reconfigurable Technol. Syst., 2019

2018
Accelerating Sparse Linear Algebra and Deep Neural Networks on Reconfigurable Platforms.
PhD thesis, 2018

FINN-R: An End-to-End Deep-Learning Framework for Fast Exploration of Quantized Neural Networks.
ACM Trans. Reconfigurable Technol. Syst., 2018

FINN-R: An End-to-End Deep-Learning Framework for Fast Exploration of Quantized Neural Networks.
CoRR, 2018

BISMO: A Scalable Bit-Serial Matrix Multiplication Overlay for Reconfigurable Computing.
Proceedings of the 28th International Conference on Field Programmable Logic and Applications, 2018

2017
Streamlined Deployment for Quantized Neural Networks.
CoRR, 2017

Scaling Neural Network Performance through Customized Hardware Architectures on Reconfigurable Logic.
Proceedings of the 2017 IEEE International Conference on Computer Design, 2017

Scaling Binarized Neural Networks on Reconfigurable Logic.
Proceedings of the 8th Workshop and 6th Workshop on Parallel Programming and Run-Time Management Techniques for Many-core Architectures and Design Tools and Architectures for Multicore Embedded Computing Platforms, 2017

FINN: A Framework for Fast, Scalable Binarized Neural Network Inference.
Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, 2017

Towards efficient quantized neural network inference on mobile devices: work-in-progress.
Proceedings of the 2017 International Conference on Compilers, 2017

2016
Random access schemes for efficient FPGA SpMV acceleration.
Microprocess. Microsystems, 2016

2015
Hybrid breadth-first search on a single-chip FPGA-CPU heterogeneous platform.
Proceedings of the 25th International Conference on Field Programmable Logic and Applications, 2015

A Vector Caching Scheme for Streaming FPGA SpMV Accelerators.
Proceedings of the Applied Reconfigurable Computing - 11th International Symposium, 2015

2014
An energy efficient column-major backend for FPGA SpMV accelerators.
Proceedings of the 32nd IEEE International Conference on Computer Design, 2014
