Jing Liu

ORCID: 0000-0002-6745-3050

Affiliations:
  • Monash University Clayton Campus, Faculty of Information Technology, VIC, Australia
  • South China University of Technology, School of Software Engineering, Guangzhou, China

According to our database, Jing Liu authored at least 25 papers between 2018 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Pruning Self-Attentions Into Convolutional Layers in Single Path.
IEEE Trans. Pattern Anal. Mach. Intell., May, 2024

2023
Generative Data Free Model Quantization With Knowledge Matching for Classification.
IEEE Trans. Circuits Syst. Video Technol., December, 2023

Single-Path Bit Sharing for Automatic Loss-Aware Model Compression.
IEEE Trans. Pattern Anal. Mach. Intell., October, 2023

EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Diffusion Models.
CoRR, 2023

Stitched ViTs are Flexible Vision Backbones.
CoRR, 2023

PTQD: Accurate Post-Training Quantization for Diffusion Models.
CoRR, 2023

A Survey on Efficient Training of Transformers.
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, 2023

BiViT: Extremely Compressed Binary Vision Transformers.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

Dynamic Focus-aware Positional Queries for Semantic Segmentation.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

2022
Effective Training of Convolutional Neural Networks With Low-Bitwidth Weights and Activations.
IEEE Trans. Pattern Anal. Mach. Intell., 2022

Discrimination-Aware Network Pruning for Deep Model Compression.
IEEE Trans. Pattern Anal. Mach. Intell., 2022

FocusFormer: Focusing on What We Need via Architecture Sampler.
CoRR, 2022

EcoFormer: Energy-Saving Attention with Linear Complexity.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Downscaling and Overflow-aware Model Compression for Efficient Vision Processors.
Proceedings of the 42nd IEEE International Conference on Distributed Computing Systems, 2022

Less Is More: Pay Less Attention in Vision Transformers.
Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence, 2022

2021
Sharpness-aware Quantization for Deep Neural Networks.
CoRR, 2021

Mesa: A Memory-saving Training Framework for Transformers.
CoRR, 2021

Elastic Architecture Search for Diverse Tasks with Different Resources.
CoRR, 2021

Scalable Visual Transformers with Hierarchical Pooling.
CoRR, 2021

ABS: Automatic Bit Sharing for Model Compression.
CoRR, 2021

Scalable Vision Transformers with Hierarchical Pooling.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021

AQD: Towards Accurate Quantized Object Detection.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021

2020
Generative Low-Bitwidth Data Free Quantization.
Proceedings of the Computer Vision - ECCV 2020, 2020

Deep Transferring Quantization.
Proceedings of the Computer Vision - ECCV 2020, 2020

2018
Discrimination-aware Channel Pruning for Deep Neural Networks.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

