Shijie Wu

ORCID: 0000-0002-7125-752X

According to our database, Shijie Wu authored at least 28 papers between 2018 and 2023.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2023
BloombergGPT: A Large Language Model for Finance.
CoRR, 2023

Towards a Unified Multi-Domain Multilingual Named Entity Recognition Model.
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, 2023

MixCE: Training Autoregressive Language Models by Mixing Forward and Reverse Cross-Entropies.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

Overcoming Catastrophic Forgetting in Massively Multilingual Continual Learning.
Findings of the Association for Computational Linguistics: ACL 2023, 2023

2022
BoundaryFace: A Mining Framework with Noise Label Self-correction for Face Recognition.
CoRR, 2022

How Do Multilingual Encoders Learn Cross-lingual Representation?
CoRR, 2022

Zero-shot Cross-lingual Transfer is Under-specified Optimization.
Proceedings of the 7th Workshop on Representation Learning for NLP, 2022

Cross-lingual Few-Shot Learning on Unseen Languages.
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing, 2022

Bernice: A Multilingual Pre-trained Encoder for Twitter.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022

BoundaryFace: A Mining Framework with Noise Label Self-correction for Face Recognition.
Proceedings of the Computer Vision - ECCV 2022, 2022

2021
Weighted citation based on ranking-related contribution: a new index for evaluating article impact.
Scientometrics, 2021

Triple Attention Network architecture for MovieQA.
CoRR, 2021

Differentiable Generative Phonology.
CoRR, 2021

Everything Is All It Takes: A Multipronged Strategy for Zero-Shot Cross-Lingual Information Extraction.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021

Applying the Transformer to Character-level Transduction.
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, 2021

2020
The SIGMORPHON 2020 Shared Task on Multilingual Grapheme-to-Phoneme Conversion.
Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, 2020

Are All Languages Created Equal in Multilingual BERT?
Proceedings of the 5th Workshop on Representation Learning for NLP, 2020

Which *BERT? A Survey Organizing Contextualized Encoders.
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020

Do Explicit Alignments Robustly Improve Multilingual Encoders?
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020

The Paradigm Discovery Problem.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020

Emerging Cross-lingual Structure in Pretrained Language Models.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020

2019
The SIGMORPHON 2019 Shared Task: Morphological Analysis in Context and Cross-Lingual Transfer for Inflection.
CoRR, 2019

A Simple Joint Model for Improved Contextual Neural Lemmatization.
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019

Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT.
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, 2019

Morphological Irregularity Correlates with Frequency.
Proceedings of the 57th Conference of the Association for Computational Linguistics, 2019

Exact Hard Monotonic Attention for Character-Level Transduction.
Proceedings of the 57th Conference of the Association for Computational Linguistics, 2019

2018
Hard Non-Monotonic Attention for Character-Level Transduction.
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 2018

