Ziyu Yao

Orcid: 0000-0003-1310-0169

Affiliations:
  • Peking University, School of Electronic and Computer Engineering, China


According to our database, Ziyu Yao authored at least 10 papers between 2023 and 2025.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.


Bibliography

2025
CountLLM: Towards Generalizable Repetitive Action Counting via Large Language Model.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025

2024
CAR: Controllable Autoregressive Modeling for Visual Generation.
CoRR, 2024

FD2Talk: Towards Generalized Talking Head Generation with Facial Decoupled Diffusion Model.
Proceedings of the 32nd ACM International Conference on Multimedia, MM 2024, Melbourne, VIC, Australia, 2024

PoseRAC: Enhancing Repetitive Action Counting with Salient Poses.
Proceedings of the 31st International Conference on Neural Information Processing, 2024

Recovering Global Data Distribution Locally in Federated Learning.
Proceedings of the 35th British Machine Vision Conference, 2024

Soul-Mix: Enhancing Multimodal Machine Translation with Manifold Mixup.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

2023
PoseRAC: Pose Saliency Transformer for Repetitive Action Counting.
CoRR, 2023

GhostT5: Generate More Features with Cheap Operations to Improve Textless Spoken Question Answering.
Proceedings of the 24th Annual Conference of the International Speech Communication Association, 2023

FC-MTLF: A Fine- and Coarse-grained Multi-Task Learning Framework for Cross-Lingual Spoken Language Understanding.
Proceedings of the 24th Annual Conference of the International Speech Communication Association, 2023

C²A-SLU: Cross and Contrastive Attention for Improving ASR Robustness in Spoken Language Understanding.
Proceedings of the 24th Annual Conference of the International Speech Communication Association, 2023
