Ziqing Yang

ORCID: 0000-0003-0666-4409

Affiliations:
  • Joint Laboratory of HIT and iFLYTEK Research (HFL), Beijing, China


According to our database, Ziqing Yang authored at least 19 papers between 2019 and 2023.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2023
Efficient and Effective Text Encoding for Chinese LLaMA and Alpaca.
CoRR, 2023

MiniRBT: A Two-stage Distilled Small Chinese Pre-trained Model.
CoRR, 2023

IDOL: Indicator-oriented Logic Pre-training for Logical Reasoning.
Findings of the Association for Computational Linguistics: ACL 2023, 2023

Gradient-based Intra-attention Pruning on Pre-trained Language Models.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

2022
Interactive Gated Decoder for Machine Reading Comprehension.
ACM Trans. Asian Low Resour. Lang. Inf. Process., 2022

PERT: Pre-training BERT with Permuted Language Model.
CoRR, 2022

Cross-Lingual Text Classification with Multilingual Distillation and Zero-Shot-Aware Training.
CoRR, 2022

HFL at SemEval-2022 Task 8: A Linguistics-inspired Regression Model with Data Augmentation for Multilingual News Similarity.
Proceedings of the 16th International Workshop on Semantic Evaluation, SemEval@NAACL 2022, 2022

HIT at SemEval-2022 Task 2: Pre-trained Language Model for Idioms Detection.
Proceedings of the 16th International Workshop on Semantic Evaluation, SemEval@NAACL 2022, 2022

CINO: A Chinese Minority Pre-trained Language Model.
Proceedings of the 29th International Conference on Computational Linguistics, 2022

TextPruner: A Model Pruning Toolkit for Pre-Trained Language Models.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, 2022

2021
Pre-Training With Whole Word Masking for Chinese BERT.
IEEE ACM Trans. Audio Speech Lang. Process., 2021

Bilingual Alignment Pre-training for Zero-shot Cross-lingual Transfer.
CoRR, 2021

Adversarial Training for Machine Reading Comprehension with Virtual Embeddings.
Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics, 2021

Benchmarking Robustness of Machine Reading Comprehension Models.
Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, 2021

2020
A Sentence Cloze Dataset for Chinese Machine Reading Comprehension.
Proceedings of the 28th International Conference on Computational Linguistics, 2020

TextBrewer: An Open-Source Knowledge Distillation Toolkit for Natural Language Processing.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, 2020

2019
Improving Machine Reading Comprehension via Adversarial Training.
CoRR, 2019

Pre-Training with Whole Word Masking for Chinese BERT.
CoRR, 2019
