Tao Fan

ORCID: 0009-0003-3040-6140

Affiliations:
  • WeBank, WeBank AI Group, Department of Artificial Intelligence, FATE, Shenzhen, China
  • Hong Kong University of Science and Technology, Hong Kong, SAR, China


According to our database, Tao Fan authored at least 19 papers between 2019 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2025
Ten Challenging Problems in Federated Foundation Models.
IEEE Trans. Knowl. Data Eng., July, 2025

InferenceDynamics: Efficient Routing Across LLMs through Structured Capability and Knowledge Profiling.
CoRR, May, 2025

Towards Multi-Agent Reasoning Systems for Collaborative Expertise Delegation: An Exploratory Design Study.
CoRR, May, 2025

PPC-GPT: Federated Task-Specific Compression of Large Language Models via Pruning and Chain-of-Thought Distillation.
CoRR, February, 2025

FedMKT: Federated Mutual Knowledge Transfer for Large and Small Language Models.
Proceedings of the 31st International Conference on Computational Linguistics, 2025

2024
Privacy-Preserving Federated Adversarial Domain Adaptation Over Feature Groups for Interpretability.
IEEE Trans. Big Data, December, 2024

Accelerating Vertical Federated Learning.
IEEE Trans. Big Data, December, 2024

FedCoLLM: A Parameter-Efficient Federated Co-tuning Framework for Large and Small Language Models.
CoRR, 2024

PDSS: A Privacy-Preserving Framework for Step-by-Step Distillation of Large Language Models.
CoRR, 2024

SecureBoost+: Large Scale and High-Performance Vertical Federated Gradient Boosting Decision Tree.
Proceedings of the Advances in Knowledge Discovery and Data Mining, 2024

Unveiling the Vulnerability of Private Fine-Tuning in Split-Based Frameworks for Large Language Models: A Bidirectionally Enhanced Attack.
Proceedings of the 2024 ACM SIGSAC Conference on Computer and Communications Security, 2024

2023
Grounding Foundation Models through Federated Transfer Learning: A General Framework.
CoRR, 2023

FATE-LLM: An Industrial Grade Federated Learning Framework for Large Language Models.
CoRR, 2023

SecureBoost Hyperparameter Tuning via Multi-Objective Federated Learning.
CoRR, 2023

2021
FATE: An Industrial Grade Platform for Collaborative Learning With Data Protection.
J. Mach. Learn. Res., 2021

SecureBoost: A Lossless Federated Learning Framework.
IEEE Intell. Syst., 2021

SecureBoost+: A High Performance Gradient Boosting Tree Framework for Large Scale Vertical Federated Learning.
CoRR, 2021

2019
A Quasi-Newton Method Based Vertical Federated Learning Framework for Logistic Regression.
CoRR, 2019

SecureBoost: A Lossless Federated Learning Framework.
CoRR, 2019

