Yan Zeng

ORCID: 0000-0003-1872-7534

Affiliations:
  • ByteDance AI Lab.
  • Université de Montréal, Canada (former)


According to our database, Yan Zeng authored at least 15 papers between 2020 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2024
X²-VLM: All-in-One Pre-Trained Model for Vision-Language Tasks.
IEEE Trans. Pattern Anal. Mach. Intell., May 2024

2023
Toward Building General Foundation Models for Language, Vision, and Vision-Language Understanding Tasks.
Findings of the Association for Computational Linguistics: EMNLP 2023, 2023

EfficientVLM: Fast and Accurate Vision-Language Models via Knowledge Distillation and Modal-adaptive Pruning.
Findings of the Association for Computational Linguistics: ACL 2023, 2023

Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

2022
X²-VLM: All-In-One Pre-trained Model For Vision-Language Tasks.
CoRR, 2022

Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training.
CoRR, 2022

VLUE: A Multi-Task Benchmark for Evaluating Vision-Language Models.
CoRR, 2022

VLUE: A Multi-Task Multi-Dimension Benchmark for Evaluating Vision-Language Pre-training.
Proceedings of the International Conference on Machine Learning, 2022

Multi-Grained Vision Language Pre-Training: Aligning Texts with Visual Concepts.
Proceedings of the International Conference on Machine Learning, 2022

2021
A Simple and Efficient Multi-Task Learning Approach for Conditioned Dialogue Generation.
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021

An Investigation of Suitability of Pre-Trained Language Models for Dialogue Generation - Avoiding Discrepancies.
Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, 2021

2020
Multi-Domain Dialogue State Tracking - A Purely Transformer-Based Generative Approach.
CoRR, 2020

Open-Domain Dialogue Generation Based on Pre-trained Language Models.
CoRR, 2020

Generalized Conditioned Dialogue Generation Based on Pre-trained Language Model.
CoRR, 2020

Multi-Domain Dialogue State Tracking based on State Graph.
CoRR, 2020

