Yaru Hao

ORCID: 0000-0002-4463-4844

According to our database, Yaru Hao authored at least 26 papers between 2016 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Towards Optimal Learning of Language Models.
CoRR, 2024

2023
Task-specific parameter decoupling for class incremental learning.
Inf. Sci., December, 2023

Cancer survival prediction by learning comprehensive deep feature representation for multiple types of genetic data.
BMC Bioinform., December, 2023

A novel two-way rebalancing strategy for identifying carbonylation sites.
BMC Bioinform., December, 2023

Learning enhanced specific representations for multi-view feature learning.
Knowl. Based Syst., 2023

Large Language Model for Science: A Study on P vs. NP.
CoRR, 2023

Kosmos-2: Grounding Multimodal Large Language Models to the World.
CoRR, 2023

Language Is Not All You Need: Aligning Perception with Language Models.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Optimizing Prompts for Text-to-Image Generation.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Prototypical Calibration for Few-shot Learning of Language Models.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers.
Proceedings of the Findings of the Association for Computational Linguistics: ACL 2023, 2023

Prototypical Fine-Tuning: Towards Robust Performance under Varying Data Sizes.
Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, 2023

2022
Joint learning sample similarity and correlation representation for cancer survival prediction.
BMC Bioinform., December, 2022

Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers.
CoRR, 2022

Structured Prompting: Scaling In-Context Learning to 1,000 Examples.
CoRR, 2022

Language Models are General-Purpose Interfaces.
CoRR, 2022

Prototypical Calibration for Few-shot Learning of Language Models.
CoRR, 2022

Knowledge Neurons in Pretrained Transformers.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022

2021
Knowledge Neurons in Pretrained Transformers.
CoRR, 2021

Learning to Sample Replacements for ELECTRA Pre-Training.
Proceedings of the Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, 2021

Self-Attention Attribution: Interpreting Information Interactions Inside Transformer.
Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021

2020
Investigating Learning Dynamics of BERT Fine-Tuning.
Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, 2020

2019
Visualizing and Understanding the Effectiveness of BERT.
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, 2019

2017
Feature Extraction and Classification of EHG between Pregnancy and Labour Group Using Hilbert-Huang Transform and Extreme Learning Machine.
Comput. Math. Methods Medicine, 2017

2016
Adaptive flocking of heterogeneous multi-agents systems with nonlinear dynamics.
Neurocomputing, 2016

Excessive use of Twitter among college students in the UK: Validation of the Microblog Excessive Use Scale and relationship to social interaction and loneliness.
Comput. Hum. Behav., 2016
