Kaisiyuan Wang

ORCID: 0000-0002-2120-8383

According to our database, Kaisiyuan Wang authored at least 14 papers between 2020 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
AVI-Talking: Learning Audio-Visual Instructions for Expressive 3D Talking Face Generation.
IEEE Access, 2024

2023
Make Your Brief Stroke Real and Stereoscopic: 3D-Aware Simplified Sketch to Portrait Generation.
CoRR, 2023

Efficient Video Portrait Reenactment via Grid-based Codebook.
Proceedings of the ACM SIGGRAPH 2023 Conference, 2023

Make Your Brief Stroke Real and Stereoscopic: 3D-Aware Simplified Sketch to Portrait Generation.
Proceedings of the 25th International Conference on Multimodal Interaction, 2023

ObjectSDF++: Improved Object-Compositional Neural Implicit Surfaces.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

StyleSync: High-Fidelity Generalized and Personalized Lip Sync in Style-Based Generator.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

Robust Video Portrait Reenactment via Personalized Representation Quantization.
Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, 2023

2022
VPU: A Video-Based Point Cloud Upsampling Framework.
IEEE Trans. Image Process., 2022

Real-time Neural Radiance Talking Portrait Synthesis via Audio-spatial Decomposition.
CoRR, 2022

Masked Lip-Sync Prediction by Audio-Visual Contextual Exploitation in Transformers.
Proceedings of the SIGGRAPH Asia 2022 Conference Papers, 2022

EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model.
Proceedings of the SIGGRAPH '22: Special Interest Group on Computer Graphics and Interactive Techniques Conference, Vancouver, BC, Canada, August 7, 2022

2021
Sequential Point Cloud Upsampling by Exploiting Multi-Scale Temporal Dependency.
IEEE Trans. Circuits Syst. Video Technol., 2021

Audio-Driven Emotional Video Portraits.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021

2020
MEAD: A Large-Scale Audio-Visual Dataset for Emotional Talking-Face Generation.
Proceedings of the Computer Vision - ECCV 2020, 2020
