Zhiwei Hao

ORCID: 0000-0002-6237-7028

Affiliations:
  • Beijing Institute of Technology, Beijing, China


According to our database, Zhiwei Hao authored at least 18 papers between 2021 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2025
ADEM-VL: Adaptive and Embedded Fusion for Efficient Vision-Language Tuning.
Int. J. Comput. Vis., August, 2025

CoFormer: Collaborating with Heterogeneous Edge Devices for Scalable Transformer Inference.
CoRR, August, 2025

Low-Precision Training of Large Language Models: Methods, Challenges, and Opportunities.
CoRR, May, 2025

2024
DeViT: Decomposing Vision Transformers for Collaborative Inference in Edge Devices.
IEEE Trans. Mob. Comput., May, 2024

GhostNetV3: Exploring the Training Strategies for Compact Models.
CoRR, 2024

SAM-DiffSR: Structure-Modulated Diffusion Model for Image Super-Resolution.
CoRR, 2024

PrimKD: Primary Modality Guided Multimodal Fusion for RGB-D Semantic Segmentation.
Proceedings of the 32nd ACM International Conference on Multimedia, MM 2024, Melbourne, VIC, Australia, 2024

Data-efficient Large Vision Models through Sequential Autoregression.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Adapt Without Forgetting: Distill Proximity from Dual Teachers in Vision-Language Models.
Proceedings of the Computer Vision - ECCV 2024, 2024

Visual Prompting via Partial Optimal Transport.
Proceedings of the Computer Vision - ECCV 2024, 2024

2023
Multi-Agent Collaborative Inference via DNN Decoupling: Intermediate Feature Compression and Edge Learning.
IEEE Trans. Mob. Comput., October, 2023

VanillaKD: Revisit the Power of Vanilla Knowledge Distillation from Small Scale to Large Scale.
CoRR, 2023

One-for-All: Bridge the Gap Between Heterogeneous Architectures in Knowledge Distillation.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Revisit the Power of Vanilla Knowledge Distillation: from Small Scale to Large Scale.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

2022
CDFKD-MFS: Collaborative Data-Free Knowledge Distillation via Multi-Level Feature Sharing.
IEEE Trans. Multim., 2022

Learning Efficient Vision Transformers via Fine-Grained Manifold Distillation.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

2021
Data-Free Ensemble Knowledge Distillation for Privacy-conscious Multimedia Model Compression.
Proceedings of the MM '21: ACM Multimedia Conference, Virtual Event, China, 2021

Model Compression via Collaborative Data-Free Knowledge Distillation for Edge Intelligence.
Proceedings of the 2021 IEEE International Conference on Multimedia and Expo, 2021
