Ermo Hua

According to our database, Ermo Hua authored at least 17 papers between 2024 and 2025.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2025
Automating Exploratory Multiomics Research via Language Models.
CoRR, June 2025

Deep Unfolding with Kernel-based Quantization in MIMO Detection.
CoRR, May 2025

Technologies on Effectiveness and Efficiency: A Survey of State Spaces Models.
CoRR, March 2025

MedXpertQA: Benchmarking Expert-Level Medical Reasoning and Understanding.
CoRR, January 2025

OpenPRM: Building Open-domain Process-based Reward Models with Preference Trees.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process.
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

Retrieval-Augmented Visual Question Answering via Built-in Autoregressive Search Engines.
Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-25), 2025

2024
Fourier Position Embedding: Enhancing Attention's Periodic Extension for Length Generalization.
CoRR, 2024

How to Synthesize Text Data without Model Collapse?
CoRR, 2024

Automating Exploratory Proteomics Research via Language Models.
CoRR, 2024

CALF: Benchmarking Evaluation of LFQA Using Chinese Examinations.
CoRR, 2024

Large Language Models as Biomedical Hypothesis Generators: A Comprehensive Evaluation.
CoRR, 2024

Fast and Slow Generating: An Empirical Study on Large and Small Language Models Collaborative Decoding.
CoRR, 2024

Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process.
CoRR, 2024

UltraMedical: Building Specialized Generalists in Biomedicine.
Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems (NeurIPS 2024), 2024

Scalable Efficient Training of Large Language Models with Low-dimensional Projected Attention.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024

CoGenesis: A Framework Collaborating Large and Small Language Models for Secure Context-Aware Instruction Following.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

