Senjie Jin

According to our database, Senjie Jin authored at least 16 papers between 2023 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2025
Reasoning or Memorization? Unreliable Results of Reinforcement Learning Due to Data Contamination.
CoRR, July, 2025

Reinforcement Fine-Tuning Enables MLLMs Learning Novel Tasks Stably.
CoRR, June, 2025

Speech-Language Models with Decoupled Tokenizers and Multi-Token Prediction.
CoRR, June, 2025

EliteKV: Scalable KV Cache Compression via RoPE Frequency Selection and Joint Low-Rank Projection.
CoRR, March, 2025

The rise and potential of large language model based agents: a survey.
Sci. China Inf. Sci., 2025

SPA-VL: A Comprehensive Safety Preference Alignment Dataset for Vision Language Models.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025

2024
SPA-VL: A Comprehensive Safety Preference Alignment Dataset for Vision Language Model.
CoRR, 2024

MouSi: Poly-Visual-Expert Vision-Language Models.
CoRR, 2024

Secrets of RLHF in Large Language Models Part II: Reward Modeling.
CoRR, 2024

Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Improving Discriminative Capability of Reward Models in RLHF Using Contrastive Learning.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024

2023
TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models.
CoRR, 2023

The Rise and Potential of Large Language Model Based Agents: A Survey.
CoRR, 2023

Secrets of RLHF in Large Language Models Part I: PPO.
CoRR, 2023

Self-Polish: Enhance Reasoning in Large Language Models via Problem Refinement.
CoRR, 2023

Self-Polish: Enhance Reasoning in Large Language Models via Problem Refinement.
Findings of the Association for Computational Linguistics: EMNLP 2023, 2023
