Xingjian Li

Orcid: 0000-0001-8073-7552

Affiliations:
  • Baidu, Inc., China
  • University of Macau, Macau, SAR, China
  • High Performance Computer Research Center, Institute of Computing Technology, Chinese Academy of Sciences, China (former)


According to our database, Xingjian Li authored at least 42 papers between 2011 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Towards accurate knowledge transfer via target-awareness representation disentanglement.
Mach. Learn., February 2024

Deep Active Learning with Noise Stability.
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

G-LIME: Statistical Learning for Local Interpretations of Deep Neural Networks Using Global Priors (Abstract Reprint).
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

2023
Semi-supervised transfer learning with hierarchical self-regularization.
Pattern Recognit., December 2023

SMILE: Sample-to-feature Mixup for Efficient Transfer Learning.
Trans. Mach. Learn. Res., 2023

Robust Cross-Modal Knowledge Distillation for Unconstrained Videos.
CoRR, 2023

Large-scale knowledge distillation with elastic heterogeneous computing resources.
Concurr. Comput. Pract. Exp., 2023

G-LIME: Statistical learning for local interpretations of deep neural networks using global priors.
Artif. Intell., 2023

Overcoming Catastrophic Forgetting for Fine-Tuning Pre-trained GANs.
Proceedings of the Machine Learning and Knowledge Discovery in Databases: Research Track, 2023

Towards Inadequately Pre-trained Models in Transfer Learning.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

Improving Bert Fine-Tuning via Stabilizing Cross-Layer Mutual Information.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2023

2022
GrOD: Deep Learning with Gradients Orthogonal Decomposition for Knowledge Transfer, Distillation, and Adversarial Training.
ACM Trans. Knowl. Discov. Data, 2022

Knowledge Distillation with Attention for Deep Transfer Learning of Convolutional Networks.
ACM Trans. Knowl. Discov. Data, 2022

COLAM: Co-Learning of Deep Neural Networks and Soft Labels via Alternating Minimization.
Neural Process. Lett., 2022

Interpretable deep learning: interpretation, interpretability, trustworthiness, and beyond.
Knowl. Inf. Syst., 2022

InterpretDL: Explaining Deep Models in PaddlePaddle.
J. Mach. Learn. Res., 2022

Fine-tuning Pre-trained Language Models with Noise Stability Regularization.
CoRR, 2022

Deep Active Learning with Noise Stability.
CoRR, 2022

Inadequately Pre-trained Models are Better Feature Extractors.
CoRR, 2022

Boosting Active Learning via Improving Test Performance.
Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence, 2022

2021
"In-Network Ensemble": Deep Ensemble Learning with Diversified Knowledge Distillation.
ACM Trans. Intell. Syst. Technol., 2021

Improving Adversarial Robustness via Attention and Adversarial Logit Pairing.
Frontiers Artif. Intell., 2021

SMILE: Self-Distilled MIxup for Efficient Transfer LEarning.
CoRR, 2021

Interpretable Deep Learning: Interpretations, Interpretability, Trustworthiness, and Beyond.
CoRR, 2021

Noise Stability Regularization for Improving BERT Fine-tuning.
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021

Elastic Deep Learning Using Knowledge Distillation with Heterogeneous Computing Resources.
Proceedings of the Euro-Par 2021: Parallel Processing Workshops, 2021

Adaptive Consistency Regularization for Semi-Supervised Transfer Learning.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2021

Temporal Relational Modeling with Self-Supervision for Action Segmentation.
Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021

2020
Towards Accurate Knowledge Transfer via Target-awareness Representation Disentanglement.
CoRR, 2020

Measuring Information Transfer in Neural Networks.
CoRR, 2020

XMixup: Efficient Transfer Learning with Auxiliary Samples by Cross-domain Mixup.
CoRR, 2020

RIFLE: Backpropagation in Depth for Deep Transfer Learning through Re-Initializing the Fully-connected LayEr.
Proceedings of the 37th International Conference on Machine Learning, 2020

Pay Attention to Features, Transfer Learn Faster CNNs.
Proceedings of the 8th International Conference on Learning Representations, 2020

Neighbours Matter: Image Captioning with Similar Images.
Proceedings of the 31st British Machine Vision Conference 2020, 2020

Quasi-optimal Data Placement for Secure Multi-tenant Data Federation on the Cloud.
Proceedings of the 2020 IEEE International Conference on Big Data (IEEE BigData 2020), 2020

2019
An Empirical Study on Regularization of Deep Neural Networks by Local Rademacher Complexity.
CoRR, 2019

Delta: Deep Learning Transfer using Feature Map with Attention for Convolutional Networks.
Proceedings of the 7th International Conference on Learning Representations, 2019

Towards Making Deep Transfer Learning Never Hurt.
Proceedings of the 2019 IEEE International Conference on Data Mining, 2019

2012
An optimized large-scale hybrid DGEMM design for CPUs and ATI GPUs.
Proceedings of the International Conference on Supercomputing, 2012

2011
Biprominer: Automatic Mining of Binary Protocol Features.
Proceedings of the 12th International Conference on Parallel and Distributed Computing, 2011

Experience of parallelizing cryo-EM 3D reconstruction on a CPU-GPU heterogeneous system.
Proceedings of the 20th ACM International Symposium on High Performance Distributed Computing, 2011

Floating-point mixed-radix FFT core generation for FPGA and comparison with GPU and CPU.
Proceedings of the 2011 International Conference on Field-Programmable Technology, 2011

