Yuxin Zhang

ORCID: 0000-0002-4409-7030

Affiliations:
  • Xiamen University, MAC Lab, School of Informatics, Fujian, China


According to our database, Yuxin Zhang authored at least 24 papers between 2020 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Feast Your Eyes: Mixture-of-Resolution Adaptation for Multimodal Large Language Models.
CoRR, 2024

Learning Image Demoiréing from Unpaired Real Data.
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

2023
Lottery Jackpots Exist in Pre-Trained Models.
IEEE Trans. Pattern Anal. Mach. Intell., December, 2023

Super Vision Transformer.
Int. J. Comput. Vis., December, 2023

Pruning Networks With Cross-Layer Ranking & k-Reciprocal Nearest Filters.
IEEE Trans. Neural Networks Learn. Syst., November, 2023

Carrying Out CNN Channel Pruning in a White Box.
IEEE Trans. Neural Networks Learn. Syst., October, 2023

1xN Pattern for Pruning Convolutional Neural Networks.
IEEE Trans. Pattern Anal. Mach. Intell., April, 2023

Boosting the Cross-Architecture Generalization of Dataset Distillation through an Empirical Study.
CoRR, 2023

Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs.
CoRR, 2023

Spatial Re-parameterization for N:M Sparsity.
CoRR, 2023

MultiQuant: A Novel Multi-Branch Topology Method for Arbitrary Bit-width Network Quantization.
CoRR, 2023

Distribution-Flexible Subset Quantization for Post-Quantizing Super-Resolution Networks.
CoRR, 2023

Bi-directional Masks for Efficient N:M Sparse Training.
Proceedings of the International Conference on Machine Learning, 2023

Real-Time Image Demoiréing on Mobile Devices.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

SMMix: Self-Motivated Image Mixing for Vision Transformers.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

2022
Shadow Removal by High-Quality Shadow Synthesis.
CoRR, 2022

Exploiting the Partly Scratch-off Lottery Ticket for Quantization-Aware Training.
CoRR, 2022

Super Vision Transformer.
CoRR, 2022

Optimizing Gradient-driven Criteria in Network Sparsity: Gradient is All You Need.
CoRR, 2022

Learning Best Combination for Efficient N:M Sparsity.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

2021
1×N Block Pattern for Network Sparsity.
CoRR, 2021

Carrying out CNN Channel Pruning in a White Box.
CoRR, 2021

Lottery Jackpots Exist in Pre-trained Models.
CoRR, 2021

2020
Channel Pruning via Automatic Structure Search.
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, 2020


  Loading...