Yi Zhang

ORCID: 0000-0002-9700-0693

Affiliations:
  • Peking University, School of Electronics Engineering and Computer Science / MOE Key Laboratory of Computational Linguistics, Beijing, China


According to our database, Yi Zhang authored at least 18 papers between 2017 and 2022.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2022
Alleviating the Knowledge-Language Inconsistency: A Study for Deep Commonsense Knowledge.
IEEE ACM Trans. Audio Speech Lang. Process., 2022

2021
A Global Past-Future Early Exit Method for Accelerating Inference of Pre-trained Language Models.
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021

2020
Training Simplification and Model Simplification for Deep Learning: A Minimal Effort Back Propagation Method.
IEEE Trans. Knowl. Data Eng., 2020

Pretrain-KGE: Learning Knowledge Representation from Pretrained Language Models.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2020, 2020

Parallel Data Augmentation for Formality Style Transfer.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020

2019
Regularizing Output Distribution of Abstractive Chinese Social Media Text Summarization for Improved Semantic Consistency.
ACM Trans. Asian Low Resour. Lang. Inf. Process., 2019

Towards easier and faster sequence labeling for natural language processing: A search-based probabilistic online learning framework (SAPO).
Inf. Sci., 2019

Sequence-to-sequence Pre-training with Data Augmentation for Sentence Rewriting.
CoRR, 2019

PKUSEG: A Toolkit for Multi-Domain Chinese Word Segmentation.
CoRR, 2019

2018
A Deep Reinforced Sequence-to-Set Model for Multi-Label Text Classification.
CoRR, 2018

Accelerating Graph-Based Dependency Parsing with Lock-Free Parallel Perceptron.
Proceedings of the Natural Language Processing and Chinese Computing, 2018

A Chinese Dataset with Negative Full Forms for General Abbreviation Prediction.
Proceedings of the Eleventh International Conference on Language Resources and Evaluation, 2018

Learning Sentiment Memories for Sentiment Modification without Parallel Data.
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31, 2018

A Skeleton-Based Model for Promoting Coherence Among Sentences in Narrative Story Generation.
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31, 2018

Does Higher Order LSTM Have Better Accuracy for Segmenting and Labeling Sequence Data?
Proceedings of the 27th International Conference on Computational Linguistics, 2018

2017
Complex Structure Leads to Overfitting: A Structure Regularization Decoding Method for Natural Language Processing.
CoRR, 2017

Does Higher Order LSTM Have Better Accuracy in Chunking and Named Entity Recognition?
CoRR, 2017

Transfer Deep Learning for Low-Resource Chinese Word Segmentation with a Novel Neural Network.
Proceedings of the Natural Language Processing and Chinese Computing, 2017
