Kota Ando

ORCID: 0000-0001-8648-3768

According to our database, Kota Ando authored at least 29 papers between 2016 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2024
SPCTRE: sparsity-constrained fully-digital reservoir computing architecture on FPGA.
Int. J. Parallel Emergent Distributed Syst., March, 2024

Pianissimo: A Sub-mW Class DNN Accelerator With Progressively Adjustable Bit-Precision.
IEEE Access, 2024

2023
A Fully-Parallel Annealing Algorithm with Autonomous Pinning Effect Control for Various Combinatorial Optimization Problems.
IEICE Trans. Inf. Syst., December, 2023

TT-MLP: Tensor Train Decomposition on Deep MLPs.
IEEE Access, 2023

Pianissimo: A Sub-mW Class DNN Accelerator with Progressive Bit-by-Bit Datapath Architecture for Adaptive Inference at Edge.
Proceedings of the 2023 IEEE Symposium on VLSI Technology and Circuits (VLSI Technology and Circuits), 2023

Amorphica: 4-Replica 512 Fully Connected Spin 336MHz Metamorphic Annealer with Programmable Optimization Strategy and Compressed-Spin-Transfer Multi-Chip Extension.
Proceedings of the IEEE International Solid-State Circuits Conference, 2023

2022
A Hybrid Integer Encoding Method for Obtaining High-Quality Solutions of Quadratic Knapsack Problems on Solid-State Annealers.
IEICE Trans. Inf. Syst., December, 2022

Hiddenite: 4K-PE Hidden Network Inference 4D-Tensor Engine Exploiting On-Chip Model Construction Achieving 34.8-to-16.0TOPS/W for CIFAR-100 and ImageNet.
Proceedings of the IEEE International Solid-State Circuits Conference, 2022

APC-SCA: A Fully-Parallel Annealing Algorithm with Autonomous Pinning Effect Control.
Proceedings of the IEEE International Parallel and Distributed Processing Symposium, 2022

Multicoated Supermasks Enhance Hidden Networks.
Proceedings of the International Conference on Machine Learning, 2022

2021
STATICA: A 512-Spin 0.25M-Weight Annealing Processor With an All-Spin-Updates-at-Once Architecture for Combinatorial Optimization With Complete Spin-Spin Interactions.
IEEE J. Solid State Circuits, 2021

ProgressiveNN: Achieving Computational Scalability with Dynamic Bit-Precision Adjustment by MSB-first Accumulative Computation.
Int. J. Netw. Comput., 2021

Edge Inference Engine for Deep & Random Sparse Neural Networks with 4-bit Cartesian-Product MAC Array and Pipelined Activation Aligner.
Proceedings of the IEEE Hot Chips 33 Symposium, 2021

2020
7.3 STATICA: A 512-Spin 0.25M-Weight Full-Digital Annealing Processor with a Near-Memory All-Spin-Updates-at-Once Architecture for Combinatorial Optimization with Complete Spin-Spin Interactions.
Proceedings of the 2020 IEEE International Solid-State Circuits Conference, 2020

A 3D-Stacked SRAM using Inductive Coupling with Low-Voltage Transmitter and 12:1 SerDes.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2020

ProgressiveNN: Achieving Computational Scalability without Network Alteration by MSB-first Accumulative Computation.
Proceedings of the Eighth International Symposium on Computing and Networking, 2020

2019
QUEST: Multi-Purpose Log-Quantized DNN Inference Engine Stacked on 96-MB 3-D SRAM Using Inductive Coupling Technology in 40-nm CMOS.
IEEE J. Solid State Circuits, 2019

Dither NN: Hardware/Algorithm Co-Design for Accurate Quantized Neural Networks.
IEICE Trans. Inf. Syst., 2019

DeltaNet: Differential Binary Neural Network.
Proceedings of the 30th IEEE International Conference on Application-specific Systems, Architectures and Processors, 2019

2018
BRein Memory: A Single-Chip Binary/Ternary Reconfigurable in-Memory Deep Neural Network Accelerator Achieving 1.4 TOPS at 0.6 W.
IEEE J. Solid State Circuits, 2018

Area and Energy Optimization for Bit-Serial Log-Quantized DNN Accelerator with Shared Accumulators.
Proceedings of the 12th IEEE International Symposium on Embedded Multicore/Many-core Systems-on-Chip, 2018

QUEST: A 7.49TOPS multi-purpose log-quantized DNN inference engine stacked on 96MB 3D SRAM using inductive-coupling technology in 40nm CMOS.
Proceedings of the 2018 IEEE International Solid-State Circuits Conference, 2018

Dither NN: An Accurate Neural Network with Dithering for Low Bit-Precision Hardware.
Proceedings of the International Conference on Field-Programmable Technology, 2018

2017
Quantization Error-Based Regularization in Neural Networks.
Proceedings of the Artificial Intelligence XXXIV, 2017

In-memory area-efficient signal streaming processor design for binary neural networks.
Proceedings of the IEEE 60th International Midwest Symposium on Circuits and Systems, 2017

Exploring optimized accelerator design for binarized convolutional neural networks.
Proceedings of the 2017 International Joint Conference on Neural Networks, 2017

Logarithmic Compression for Memory Footprint Reduction in Neural Network Training.
Proceedings of the Fifth International Symposium on Computing and Networking, 2017

Accelerating deep learning by binarized hardware.
Proceedings of the 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, 2017

2016
FPGA architecture for feed-forward sequential memory network targeting long-term time-series forecasting.
Proceedings of the International Conference on ReConFigurable Computing and FPGAs, 2016
