Junchen Zhu

ORCID: 0000-0002-3872-6689

According to our database, Junchen Zhu authored at least 13 papers between 2020 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Utilizing Greedy Nature for Multimodal Conditional Image Synthesis in Transformers.
IEEE Trans. Multim., 2024

EchoReel: Enhancing Action Generation of Existing Video Diffusion Models.
CoRR, 2024

CoIN: A Benchmark of Continual Instruction tuNing for Multimodel Large Language Model.
CoRR, 2024

Training-Free Semantic Video Composition via Pre-trained Diffusion Model.
CoRR, 2024

2023
Label-Guided Generative Adversarial Network for Realistic Image Synthesis.
IEEE Trans. Pattern Anal. Mach. Intell., March 2023

From External to Internal: Structuring Image for Text-to-Image Attributes Manipulation.
IEEE Trans. Multim., 2023

VideoFactory: Swap Attention in Spatiotemporal Diffusions for Text-to-Video Generation.
CoRR, 2023

MovieFactory: Automatic Movie Creation from Text using Large Generative Models for Language and Images.
Proceedings of the 31st ACM International Conference on Multimedia, 2023

MobileVidFactory: Automatic Diffusion-Based Social Media Video Generation for Mobile Devices from Text.
Proceedings of the 31st ACM International Conference on Multimedia, 2023

CUCL: Codebook for Unsupervised Continual Learning.
Proceedings of the 31st ACM International Conference on Multimedia, 2023

2021
Fully Functional Image Manipulation Using Scene Graphs in A Bounding-Box Free Way.
Proceedings of the 29th ACM International Conference on Multimedia, 2021

Towards Unsupervised Deformable-Instances Image-to-Image Translation.
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, 2021

2020
Lab2Pix: Label-Adaptive Generative Adversarial Network for Unsupervised Image Synthesis.
Proceedings of the 28th ACM International Conference on Multimedia, 2020

