Shengding Hu

According to our database, Shengding Hu authored at least 27 papers between 2020 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Unified View of Grokking, Double Descent and Emergent Abilities: A Perspective from Circuits Competition.
CoRR, 2024

OlympiadBench: A Challenging Benchmark for Promoting AGI with Olympiad-Level Bilingual Multimodal Scientific Problems.
CoRR, 2024

∞Bench: Extending Long Context Evaluation Beyond 100K Tokens.
CoRR, 2024

ProSparse: Introducing and Enhancing Intrinsic Activation Sparsity within Large Language Models.
CoRR, 2024

2023
Parameter-efficient fine-tuning of large-scale pre-trained language models.
Nat. Mac. Intell., March, 2023

Unlock Predictable Scaling from Emergent Abilities.
CoRR, 2023

Arbitrary Few Parameters are Good Enough for Adapting Large-scale Pre-trained Language Models.
CoRR, 2023

Enhancing Chat Language Models by Scaling High-quality Instructional Conversations.
CoRR, 2023

Tool Learning with Foundation Models.
CoRR, 2023

Decoder-Only or Encoder-Decoder? Interpreting Language Model as a Regularized Encoder-Decoder.
CoRR, 2023

Exploring the Impact of Model Scaling on Parameter-Efficient Tuning.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

Enhancing Chat Language Models by Scaling High-quality Instructional Conversations.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

Won't Get Fooled Again: Answering Questions with False Premises.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

OpenDelta: A Plug-and-play Library for Parameter-efficient Adaptation of Pre-trained Models.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics: System Demonstrations, 2023

Exploring Lottery Prompts for Pre-trained Language Models.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

2022
COPEN: Probing Conceptual Knowledge in Pre-trained Language Models.
CoRR, 2022

Sparse Structure Search for Parameter-Efficient Tuning.
CoRR, 2022

Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models.
CoRR, 2022

Sparse Structure Search for Delta Tuning.
Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

COPEN: Probing Conceptual Knowledge in Pre-trained Language Models.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022

Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022

OpenPrompt: An Open-source Framework for Prompt-learning.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, 2022

Prototypical Verbalizer for Prompt-based Few-shot Tuning.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022

2021
Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification.
CoRR, 2021

KACC: A Multi-task Benchmark for Knowledge Abstraction, Concretization and Completion.
Proceedings of the Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, 2021

2020
Graph neural networks: A review of methods and applications.
AI Open, 2020

Graph Policy Network for Transferable Active Learning on Graphs.
Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020
