Zhangchen Xu

ORCID: 0000-0002-6971-412X

According to our database, Zhangchen Xu authored at least 22 papers between 2023 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2025
VisualSphinx: Large-Scale Synthetic Vision Logic Puzzles for RL.
CoRR, May 2025

SOSBENCH: Benchmarking Safety Alignment on Scientific Knowledge.
CoRR, May 2025

Temporal Sampling for Forgotten Reasoning in LLMs.
CoRR, May 2025

TinyV: Reducing False Negatives in Verification Improves RL for LLM Reasoning.
CoRR, May 2025

Distributed Consensus Network: A Modularized Communication Framework and Reliability Probabilistic Analysis.
CoRR, February 2025

Stronger Models are Not Always Stronger Teachers for Instruction Tuning.
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies, 2025

Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

KodCode: A Diverse, Challenging, and Verifiable Synthetic Dataset for Coding.
Findings of the Association for Computational Linguistics, 2025

Small Models Struggle to Learn from Strong Reasoners.
Findings of the Association for Computational Linguistics, 2025

SafeChain: Safety of Language Models with Long Chain-of-Thought Reasoning Capabilities.
Findings of the Association for Computational Linguistics, 2025

ChatBug: A Common Vulnerability of Aligned LLMs Induced by Chat Templates.
Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-25), 2025

2024
Stronger Models are NOT Stronger Teachers for Instruction Tuning.
CoRR, 2024

Brave: Byzantine-Resilient and Privacy-Preserving Peer-to-Peer Federated Learning.
CoRR, 2024

ACE: A Model Poisoning Attack on Contribution Evaluation Methods in Federated Learning.
Proceedings of the 33rd USENIX Security Symposium, 2024

CleanGen: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024

Poster: Brave: Byzantine-Resilient and Privacy-Preserving Peer-to-Peer Federated Learning.
Proceedings of the 19th ACM Asia Conference on Computer and Communications Security, 2024

POSTER: Identifying and Mitigating Vulnerabilities in LLM-Integrated Applications.
Proceedings of the 19th ACM Asia Conference on Computer and Communications Security, 2024

SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

2023
Wireless Distributed Consensus in Vehicle to Vehicle Networks for Autonomous Driving.
IEEE Trans. Veh. Technol., June 2023

Identifying and Mitigating Vulnerabilities in LLM-Integrated Applications.
CoRR, 2023

Exact Fault-Tolerant Consensus with Voting Validity.
Proceedings of the IEEE International Parallel and Distributed Processing Symposium, 2023

