Georg Rutishauser

ORCID: 0000-0001-8875-7611

According to our database, Georg Rutishauser authored at least 16 papers between 2019 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Marsellus: A Heterogeneous RISC-V AI-IoT End-Node SoC With 2-8 b DNN Acceleration and 30%-Boost Adaptive Body Biasing.
IEEE J. Solid State Circuits, January, 2024

Combining Local and Global Perception for Autonomous Navigation on Nano-UAVs.
CoRR, 2024

2023
7 μJ/inference end-to-end gesture recognition from dynamic vision sensor data using ternarized hybrid convolutional neural networks.
Future Gener. Comput. Syst., December, 2023

TCN-CUTIE: A 1,036-TOp/s/W, 2.72-µJ/Inference, 12.2-mW All-Digital Ternary Accelerator in 22-nm FDX Technology.
IEEE Micro, 2023

Flexible and Fully Quantized Ultra-Lightweight TinyissimoYOLO for Ultra-Low-Power Edge Systems.
CoRR, 2023

Marsellus: A Heterogeneous RISC-V AI-IoT End-Node SoC with 2-to-8b DNN Acceleration and 30%-Boost Adaptive Body Biasing.
CoRR, 2023

ColibriUAV: An Ultra-Fast, Energy-Efficient Neuromorphic Edge Processing UAV-Platform with Event-Based and Frame-Based Cameras.
Proceedings of the 9th International Workshop on Advances in Sensors and Interfaces, 2023

A 3 TOPS/W RISC-V Parallel Cluster for Inference of Fine-Grain Mixed-Precision Quantized Neural Networks.
Proceedings of the IEEE Computer Society Annual Symposium on VLSI, 2023

A 12.4TOPS/W @ 136GOPS AI-IoT System-on-Chip with 16 RISC-V, 2-to-8b Precision-Scalable DNN Acceleration and 30%-Boost Adaptive Body Biasing.
Proceedings of the IEEE International Solid-State Circuits Conference, 2023

ColibriES: A Milliwatts RISC-V Based Embedded System Leveraging Neuromorphic and Neural Networks Hardware Accelerators for Low-Latency Closed-loop Control Applications.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2023

Free Bits: Latency Optimization of Mixed-Precision Quantized Neural Networks on the Edge.
Proceedings of the 5th IEEE International Conference on Artificial Intelligence Circuits and Systems, 2023

2022
CUTIE: Beyond PetaOp/s/W Ternary DNN Inference Acceleration With Better-Than-Binary Energy Efficiency.
IEEE Trans. Comput. Aided Des. Integr. Circuits Syst., 2022

TCN-CUTIE: A 1036 TOp/s/W, 2.72 µJ/Inference, 12.2 mW All-Digital Ternary Accelerator in 22 nm FDX Technology.
CoRR, 2022

Ternarized TCN for µJ/Inference Gesture Recognition from DVS Event Frames.
Proceedings of the 2022 Design, Automation & Test in Europe Conference & Exhibition, 2022

A 1036 TOp/s/W, 12.2 mW, 2.72 μJ/Inference All Digital TNN Accelerator in 22 nm FDX Technology for TinyML Applications.
Proceedings of the IEEE Symposium in Low-Power and High-Speed Chips, 2022

2019
EBPC: Extended Bit-Plane Compression for Deep Neural Network Inference and Training Accelerators.
IEEE J. Emerg. Sel. Topics Circuits Syst., 2019
