Gongfan Fang

Orcid: 0009-0009-6935-0432

According to our database, Gongfan Fang authored at least 37 papers between 2019 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2025
ConciseHint: Boosting Efficient Reasoning via Continuous Concise Hints during Generation.
CoRR, June, 2025

Diversity-Guided MLP Reduction for Efficient Large Vision Transformers.
CoRR, June, 2025

PixelThink: Towards Efficient Chain-of-Pixel Reasoning.
CoRR, May, 2025

VeriThinker: Learning to Verify Makes Reasoning Model Efficient.
CoRR, May, 2025

dKV-Cache: The Cache for Diffusion Language Models.
CoRR, May, 2025

Thinkless: LLM Learns When to Think.
CoRR, May, 2025

Efficient Reasoning Models: A Survey.
CoRR, April, 2025

PointLoRA: Low-Rank Adaptation with Token Selection for Point Cloud Learning.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025

Diffusion Model is Effectively Its Own Teacher.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025

TinyFusion: Diffusion Transformers Learned Shallow.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025

Collaborative Decoding Makes Visual Auto-Regressive Modeling Efficient.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025

CoT-Valve: Length-Compressible Chain-of-Thought Tuning.
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

2024
PruningBench: A Comprehensive Benchmark of Structural Pruning.
CoRR, 2024

Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching.
Proceedings of the Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024

MaskLLM: Learnable Semi-Structured Sparsity for Large Language Models.
Proceedings of the Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024

Remix-DiT: Mixing Diffusion Transformers for Multi-Expert Denoising.
Proceedings of the Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024

AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising.
Proceedings of the Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024

SlimSAM: 0.1% Data Makes Segment Anything Slim.
Proceedings of the Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024

LiteFocus: Accelerated Diffusion Inference for Long Audio Synthesis.
Proceedings of the 25th Annual Conference of the International Speech Communication Association, 2024

Isomorphic Pruning for Vision Models.
Proceedings of the Computer Vision - ECCV 2024, 2024

DeepCache: Accelerating Diffusion Models for Free.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

2023
Deep semantic image compression via cooperative network pruning.
J. Vis. Commun. Image Represent., September, 2023

Knowledge Amalgamation for Object Detection With Transformers.
IEEE Trans. Image Process., 2023

0.1% Data Makes Segment Anything Slim.
CoRR, 2023

LLM-Pruner: On the Structural Pruning of Large Language Models.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Structural Pruning for Diffusion Models.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

DepGraph: Towards Any Structural Pruning.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

2022
Federated Selective Aggregation for Knowledge Amalgamation.
CoRR, 2022

Prompting to Distill: Boosting Data-Free Knowledge Distillation via Reinforced Prompt.
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, 2022

Up to 100x Faster Data-Free Knowledge Distillation.
Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence, 2022

2021
Contrastive Model Inversion for Data-Free Knowledge Distillation.
CoRR, 2021

Mosaicking to Distill: Knowledge Distillation from Out-of-Domain Data.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Contrastive Model Inversion for Data-Free Knowledge Distillation.
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, 2021

2020
Impression Space from Deep Template Network.
CoRR, 2020

Adversarial Self-Supervised Data-Free Distillation for Text Classification.
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020

2019
Data-Free Adversarial Distillation.
CoRR, 2019

Knowledge Amalgamation from Heterogeneous Networks by Common Feature Learning.
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, 2019
