Hangbo Bao

According to our database, Hangbo Bao authored at least 21 papers between 2017 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Fine-tuning pretrained transformer encoders for sequence-to-sequence learning.
Int. J. Mach. Learn. Cybern., May 2024

2023
A Unified View of Masked Image Modeling.
Trans. Mach. Learn. Res., 2023

Corrupted Image Modeling for Self-Supervised Visual Pre-Training.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Image as a Foreign Language: BEiT Pretraining for Vision and Vision-Language Tasks.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

2022
Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks.
CoRR, 2022

BEiT v2: Masked Image Modeling with Vector-Quantized Visual Tokenizers.
CoRR, 2022

VL-BEiT: Generative Vision-Language Pretraining.
CoRR, 2022

VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

BEiT: BERT Pre-Training of Image Transformers.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Attention Temperature Matters in Abstractive Summarization Distillation.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022

THE-X: Privacy-Preserving Transformer Inference with Homomorphic Encryption.
Proceedings of the Findings of the Association for Computational Linguistics: ACL 2022, 2022

2021
VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts.
CoRR, 2021

s2s-ft: Fine-Tuning Pretrained Transformer Encoders for Sequence-to-Sequence Learning.
CoRR, 2021

BEiT: BERT Pre-Training of Image Transformers.
CoRR, 2021

MiniLMv2: Multi-Head Self-Attention Relation Distillation for Compressing Pretrained Transformers.
Proceedings of the Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, 2021

Learning to Sample Replacements for ELECTRA Pre-Training.
Proceedings of the Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, 2021

2020
MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

UniLMv2: Pseudo-Masked Language Models for Unified Language Model Pre-Training.
Proceedings of the 37th International Conference on Machine Learning, 2020

2019
Neural Melody Composition from Lyrics.
Proceedings of the Natural Language Processing and Chinese Computing, 2019

Inspecting Unification of Encoding and Matching with Transformer: A Case Study of Machine Reading Comprehension.
Proceedings of the 2nd Workshop on Machine Reading for Question Answering, 2019

2017
Neural Question Generation from Text: A Preliminary Study.
Proceedings of the Natural Language Processing and Chinese Computing, 2017
