Juntao Dai

According to our database, Juntao Dai authored at least 25 papers between 2022 and 2025.

Bibliography

2025
The Singapore Consensus on Global AI Safety Research Priorities.
CoRR, June, 2025

Towards Advanced Mathematical Reasoning for LLMs via First-Order Logic Theorem Proving.
CoRR, June, 2025

A Game-Theoretic Negotiation Framework for Cross-Cultural Consensus in LLMs.
CoRR, June, 2025

SafeLawBench: Towards Safe Alignment of Large Language Models.
CoRR, June, 2025

InterMT: Multi-Turn Interleaved Preference Alignment with Human Feedback.
CoRR, May, 2025

The Mirage of Multimodality: Where Truth is Tested and Honesty Unravels.
CoRR, May, 2025

Mitigating Deceptive Alignment via Self-Monitoring.
CoRR, May, 2025

Measuring Hong Kong Massive Multi-Task Language Understanding.
CoRR, May, 2025

Safe RLHF-V: Safe Reinforcement Learning from Human Feedback in Multimodal Large Language Models.
CoRR, March, 2025

ThinkPatterns-21k: A Systematic Study on the Impact of Thinking Patterns in LLMs.
CoRR, March, 2025

A control-oriented operation mode recognizing method using fuzzy evaluation and attention LSTM networks.
Appl. Soft Comput., 2025

Mitigating Reward Over-Optimization in RLHF via Behavior-Supported Regularization.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

2024
Sequence to Sequence Reward Modeling: Improving RLHF by Language Feedback.
CoRR, 2024

Aligner: Achieving Efficient Alignment through Weak-to-Strong Correction.
CoRR, 2024

Aligner: Efficient Alignment by Learning to Correct.
Proceedings of the Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024

SafeSora: Towards Safety Alignment of Text2Video Generation via a Human Preference Dataset.
Proceedings of the Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024

Safe Reinforcement Learning using Finite-Horizon Gradient-based Estimation.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

2023
AI Alignment: A Comprehensive Survey.
CoRR, 2023

Safety-Gymnasium: A Unified Safe Reinforcement Learning Benchmark.
CoRR, 2023

Baichuan 2: Open Large-scale Language Models.
CoRR, 2023

BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset.
CoRR, 2023

OmniSafe: An Infrastructure for Accelerating Safe Reinforcement Learning Research.
CoRR, 2023

Augmented Proximal Policy Optimization for Safe Reinforcement Learning.
Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, 2023

2022
CUP: A Conservative Update Policy Algorithm for Safe Reinforcement Learning.
CoRR, 2022

Constrained Update Projection Approach to Safe Policy Optimization.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022
