Lujun Li

ORCID: 0000-0002-4329-2707

Affiliations:
  • Hong Kong University of Science and Technology, Hong Kong SAR, China


According to our database, Lujun Li authored at least 43 papers between 2021 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2025
Sub-MoE: Efficient Mixture-of-Expert LLMs Compression via Subspace Expert Merging.
CoRR, June, 2025

BTC-LLM: Efficient Sub-1-Bit LLM Quantization via Learnable Transformation and Binary Codebook.
CoRR, June, 2025

Can Compressed LLMs Truly Act? An Empirical Evaluation of Agentic Capabilities in LLM Compression.
CoRR, May, 2025

Delta Decompression for MoE-based LLMs Compression.
CoRR, February, 2025

STBLLM: Breaking the 1-Bit Barrier with Structured Binary LLMs.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

BayesKD: Bayesian Knowledge Distillation for Compact LLMs in Constrained Fine-tuning Scenarios.
Findings of the Association for Computational Linguistics, 2025

ParZC: Parametric Zero-Cost Proxies for Efficient NAS.
Proceedings of the Thirty-Ninth AAAI Conference on Artificial Intelligence, 2025

2024
You Know What I'm Saying: Jailbreak Attack via Implicit Reference.
CoRR, 2024

NoRA: Nested Low-Rank Adaptation for Efficient Fine-Tuning Large Models.
CoRR, 2024

STBLLM: Breaking the 1-Bit Barrier with Structured Binary LLMs.
CoRR, 2024

ParZC: Parametric Zero-Cost Proxies for Efficient NAS.
CoRR, 2024

Adaptive Layer Sparsity for Large Language Models via Activation Correlation Assessment.
Proceedings of the Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024

Discovering Sparsity Allocation for Layer-wise Pruning of Large Language Models.
Proceedings of the Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024

TVT: Training-Free Vision Transformer Search on Tiny Datasets.
Proceedings of the Pattern Recognition - 27th International Conference, 2024

DetKDS: Knowledge Distillation Search for Object Detectors.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Pruner-Zero: Evolving Symbolic Pruning Metric From Scratch for Large Language Models.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

LPZero: Language Model Zero-cost Proxy Search from Zero.
Findings of the Association for Computational Linguistics: EMNLP 2024, 2024

Auto-DAS: Automated Proxy Discovery for Training-Free Distillation-Aware Architecture Search.
Proceedings of the Computer Vision - ECCV 2024, 2024

AttnZero: Efficient Attention Discovery for Vision Transformers.
Proceedings of the Computer Vision - ECCV 2024, 2024

Auto-GAS: Automated Proxy Discovery for Training-Free Generative Architecture Search.
Proceedings of the Computer Vision - ECCV 2024, 2024

SasWOT: Real-Time Semantic Segmentation Architecture Search WithOut Training.
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

Auto-Prox: Training-Free Vision Transformer Architecture Search via Automatic Proxy Discovery.
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

UniADS: Universal Architecture-Distiller Search for Distillation Gap.
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

2023
Auto-Prox: Training-Free Vision Transformer Architecture Search via Automatic Proxy Discovery.
CoRR, 2023

TVT: Training-Free Vision Transformer Search on Tiny Datasets.
CoRR, 2023

Catch-Up Distillation: You Only Need to Train Once for Accelerating Sampling.
CoRR, 2023

GP-NAS-ensemble: a model for NAS Performance Prediction.
CoRR, 2023

KD-Zero: Evolving Knowledge Distiller for Any Teacher-Student Pairs.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

NORM: Knowledge Distillation via N-to-One Representation Matching.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

MENAS: Multi-trial Evolutionary Neural Architecture Search with Lottery Tickets.
Proceedings of the IEEE International Conference on Image Processing, 2023

Automated Knowledge Distillation via Monte Carlo Tree Search.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

EMQ: Evolving Training-free Proxies for Automated Mixed Precision Quantization.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

DMFormer: Closing the gap Between CNN and Vision Transformers.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2023

Progressive Meta-Pooling Learning for Lightweight Image Classification Model.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2023

RD-NAS: Enhancing One-Shot Supernet Ranking Ability Via Ranking Distillation From Zero-Cost Proxies.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2023

A²-Aug: Adaptive Automated Data Augmentation.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

DisWOT: Student Architecture Search for Distillation WithOut Training.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

2022
Prior-Guided One-shot Neural Architecture Search.
CoRR, 2022

Teacher-free Distillation via Regularizing Intermediate Representation.
Proceedings of the International Joint Conference on Neural Networks, 2022

Boosting Online Feature Transfer via Separable Feature Fusion.
Proceedings of the International Joint Conference on Neural Networks, 2022

Self-Regulated Feature Learning via Teacher-free Feature Distillation.
Proceedings of the Computer Vision - ECCV 2022, 2022

Activation Modulation and Recalibration Scheme for Weakly Supervised Semantic Segmentation.
Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence, 2022

2021
Improving One-Shot NAS with Shrinking-and-Expanding Supernet.
Pattern Recognition, 2021