Donghai Hong

According to our database, Donghai Hong authored at least 13 papers between 2024 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2025
InterMT: Multi-Turn Interleaved Preference Alignment with Human Feedback.
CoRR, May, 2025

Mitigating Deceptive Alignment via Self-Monitoring.
CoRR, May, 2025

Generative RLHF-V: Learning Principles from Multi-modal Human Preference.
CoRR, May, 2025

A Comprehensive Survey in LLM(-Agent) Full Stack Safety: Data, Training and Deployment.
CoRR, April, 2025

Safe RLHF-V: Safe Reinforcement Learning from Human Feedback in Multimodal Large Language Models.
CoRR, March, 2025

ThinkPatterns-21k: A Systematic Study on the Impact of Thinking Patterns in LLMs.
CoRR, March, 2025

PKU-SafeRLHF: Towards Multi-Level Safety Alignment for LLMs with Human Preference.
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

Boosting Policy and Process Reward Models with Monte Carlo Tree Search in Open-Domain QA.
Findings of the Association for Computational Linguistics, 2025

2024
Libra-Leaderboard: Towards Responsible AI through a Balanced Leaderboard of Safety and Capability.
CoRR, 2024

Align Anything: Training All-Modality Models to Follow Instructions with Language Feedback.
CoRR, 2024

PKU-SafeRLHF: A Safety Alignment Preference Dataset for Llama Family Models.
CoRR, 2024

Aligner: Achieving Efficient Alignment through Weak-to-Strong Correction.
CoRR, 2024

Aligner: Efficient Alignment by Learning to Correct.
Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024
