Tianheng Ling

Orcid: 0000-0003-4603-8576

According to our database, Tianheng Ling authored at least 17 papers between 2022 and 2025.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of five.

Bibliography

2025
Automating Versatile Time-Series Analysis with Tiny Transformers on Embedded FPGAs.
CoRR, May 2025

Evaluating Time Series Models for Urban Wastewater Management: Predictive Performance, Model Complexity and Resilience.
CoRR, April 2025

Configuration-aware approaches for enhancing energy efficiency in FPGA-based deep learning accelerators.
J. Syst. Archit., 2025

2024
Exploring energy efficiency of LSTM accelerators: A parameterized architecture design for embedded FPGAs.
J. Syst. Archit., 2024

Resource-aware Mixed-precision Quantization for Enhancing Deployability of Transformers for Time-series Forecasting on Embedded FPGAs.
CoRR, 2024

An Automated Approach to Collecting and Labeling Time Series Data for Event Detection Using Elastic Node Hardware.
CoRR, 2024

Integer-only Quantized Transformers for Embedded FPGA-based Time-series Forecasting in AIoT.
CoRR, 2024

Towards Auto-Building of Embedded FPGA-based Soft Sensors for Wastewater Flow Estimation.
CoRR, 2024

Data-driven Modeling of Combined Sewer Systems for Urban Sustainability: An Empirical Evaluation.
Proceedings of the 2nd Workshop on Public Interest AI (PI-AI 2024) co-located with the German Conference on AI (KI 2024), 2024

FlowPrecision: Advancing FPGA-Based Real-Time Fluid Flow Estimation with Linear Quantization.
Proceedings of the IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events, 2024

Idle is the New Sleep: Configuration-Aware Alternative to Powering Off FPGA-Based DL Accelerators During Inactivity.
Proceedings of the Architecture of Computing Systems - 37th International Conference, 2024

2023
A Study of Quantisation-aware Training on Time Series Transformer Models for Resource-constrained FPGAs.
CoRR, 2023

ElasticAI: Creating and Deploying Energy-Efficient Deep Learning Accelerator for Pervasive Computing.
Proceedings of the IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events, 2023

On-Device AI: Quantization-Aware Training of Transformers in Time-Series.
Proceedings of the IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events, 2023

On-Device Soft Sensors: Real-Time Fluid Flow Estimation from Level Sensor Data.
Proceedings of the Mobile and Ubiquitous Systems: Computing, Networking and Services, 2023

Energy Efficient LSTM Accelerators for Embedded FPGAs Through Parameterised Architecture Design.
Proceedings of the Architecture of Computing Systems - 36th International Conference, 2023

2022
Enhancing Energy-Efficiency by Solving the Throughput Bottleneck of LSTM Cells for Embedded FPGAs.
Proceedings of the Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2022

