Ning Ding

ORCID: 0000-0001-8758-9484

Affiliations:
  • Tsinghua University, Beijing, China


According to our database, Ning Ding authored at least 54 papers between 2013 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2024
Generative AI for Complex Scenarios: Language Models are Sequence Processors.
Int. J. Artif. Intell. Robotics Res., 2024

Mastering Text, Code and Math Simultaneously via Fusing Highly Specialized Language Models.
CoRR, 2024

CoGenesis: A Framework Collaborating Large and Small Language Models for Secure Context-Aware Instruction Following.
CoRR, 2024

UltraLink: An Open-Source Knowledge-Enhanced Multilingual Supervised Fine-tuning Dataset.
CoRR, 2024

2023
Parameter-efficient fine-tuning of large-scale pre-trained language models.
Nat. Mac. Intell., March, 2023

Improving task generalization via unified schema prompt.
AI Open, January, 2023

INTERVENOR: Prompt the Coding Ability of Large Language Models with the Interactive Chain of Repairing.
CoRR, 2023

Unlock Predictable Scaling from Emergent Abilities.
CoRR, 2023

UltraFeedback: Boosting Language Models with High-quality Feedback.
CoRR, 2023

Empowering Private Tutoring by Chaining Large Language Models.
CoRR, 2023

KoLA: Carefully Benchmarking World Knowledge of Large Language Models.
CoRR, 2023

Arbitrary Few Parameters are Good Enough for Adapting Large-scale Pre-trained Language Models.
CoRR, 2023

Enhancing Chat Language Models by Scaling High-quality Instructional Conversations.
CoRR, 2023

Tool Learning with Foundation Models.
CoRR, 2023

CRaSh: Clustering, Removing, and Sharing Enhance Fine-tuning without Full Large Language Model.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

Exploring the Impact of Model Scaling on Parameter-Efficient Tuning.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

Sparse Low-rank Adaptation of Pre-trained Language Models.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

Enhancing Chat Language Models by Scaling High-quality Instructional Conversations.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

WebCPM: Interactive Web Search for Chinese Long-form Question Answering.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

Parameter-efficient Weight Ensembling Facilitates Task-level Knowledge Transfer.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2023

OpenDelta: A Plug-and-play Library for Parameter-efficient Adaptation of Pre-trained Models.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics: System Demonstrations, 2023

Few-shot Classification with Hypersphere Modeling of Prototypes.
Findings of the Association for Computational Linguistics: ACL 2023, 2023

Decoder Tuning: Efficient Language Understanding as Decoding.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

Exploring Lottery Prompts for Pre-trained Language Models.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

2022
PTR: Prompt Tuning with Rules for Text Classification.
AI Open, January, 2022

Different Tunes Played with Equal Skill: Exploring a Unified Optimization Subspace for Delta Tuning.
CoRR, 2022

Sparse Structure Search for Parameter-Efficient Tuning.
CoRR, 2022

Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models.
CoRR, 2022

Sparse Structure Search for Delta Tuning.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

ProQA: Structural Prompt-based Pre-training for Unified Question Answering.
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022

Different Tunes Played with Equal Skill: Exploring a Unified Optimization Subspace for Parameter-Efficient Tuning.
Findings of the Association for Computational Linguistics: EMNLP 2022, 2022

MAVEN-ERE: A Unified Large-scale Dataset for Event Coreference, Temporal, Causal, and Subevent Relation Extraction.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022

Prompt-learning for Fine-grained Entity Typing.
Findings of the Association for Computational Linguistics: EMNLP 2022, 2022

Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022

OpenPrompt: An Open-source Framework for Prompt-learning.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, 2022

Prototypical Verbalizer for Prompt-based Few-shot Tuning.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022

2021
Modeling Relation Paths for Knowledge Graph Completion.
IEEE Trans. Knowl. Data Eng., 2021

Exploring Low-dimensional Intrinsic Task Subspace via Prompt Tuning.
CoRR, 2021

Prompt-Learning for Fine-Grained Entity Typing.
CoRR, 2021

Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification.
CoRR, 2021

Pre-Trained Models: Past, Present and Future.
CoRR, 2021

Learning Purified Feature Representations from Task-irrelevant Labels.
CoRR, 2021

Pre-trained models: Past, present and future.
AI Open, 2021

Prototypical Representation Learning for Relation Extraction.
Proceedings of the 9th International Conference on Learning Representations, 2021

CLINE: Contrastive Learning with Semantic Negative Examples for Natural Language Understanding.
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021

Few-NERD: A Few-shot Named Entity Recognition Dataset.
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021

2020
Generalized Local Aggregation for Large Scale Gaussian Process Regression.
Proceedings of the 2020 International Joint Conference on Neural Networks, 2020

Triple-to-Text Generation with an Anchor-to-Prototype Framework.
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, 2020

Infobox-to-text Generation with Tree-like Planning based Attention Network.
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, 2020

Coupling Distant Annotation and Adversarial Training for Cross-Domain Chinese Word Segmentation.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020

Integrating Linguistic Knowledge to Sentence Paraphrase Generation.
Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020

2019
Event Detection with Trigger-Aware Lattice Neural Network.
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, 2019

Chinese Relation Extraction with Multi-Grained Information and External Linguistic Knowledge.
Proceedings of the 57th Conference of the Association for Computational Linguistics, 2019

2013
Emergency evacuation simulation in staircases considering evacuees' physical and psychological status.
Proceedings of the 2013 IEEE International Conference on Automation Science and Engineering, 2013
