Kechi Zhang

ORCID: 0000-0002-3290-0244

According to our database, Kechi Zhang authored at least 34 papers between 2020 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2025
RL-PLUS: Countering Capability Boundary Collapse of LLMs in Reinforcement Learning with Hybrid-policy Optimization.
CoRR, August, 2025

A Survey on Code Generation with LLM-based Agents.
CoRR, August, 2025

StackTrans: From Large Language Model to Large Pushdown Automata Model.
CoRR, July, 2025

Enhancing Repository-Level Code Generation with Call Chain-Aware Multi-View Context.
CoRR, July, 2025

Computational Thinking Reasoning in Large Language Models.
CoRR, June, 2025

SATURN: SAT-based Reinforcement Learning to Unleash Language Model Reasoning.
CoRR, May, 2025

CodeRAG: Supportive Code Retrieval on Bigraph for Real-World Code Generation.
CoRR, April, 2025

Transformer-based code model with compressed hierarchy representation.
Empir. Softw. Eng., March, 2025

SEAlign: Alignment Training for Software Engineering Agent.
CoRR, March, 2025

aiXcoder-7B-v2: Training LLMs to Fully Utilize the Long Context in Repository-level Code Completion.
CoRR, March, 2025

LONGCODEU: Benchmarking Long-Context Language Models on Long Code Understanding.
CoRR, March, 2025

FANformer: Improving Large Language Models Through Effective Periodicity Modeling.
CoRR, February, 2025

Focused-DPO: Enhancing Code Generation Through Focused Preference Optimization on Error-Prone Points.
CoRR, February, 2025

Revisit Self-Debugging with Self-Generated Tests for Code Generation.
CoRR, January, 2025

Focused-DPO: Enhancing Code Generation Through Focused Preference Optimization on Error-Prone Points.
Findings of the Association for Computational Linguistics, 2025

CodeDPO: Aligning Code Models with Self Generated and Verified Source Code.
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

Benchmarking Long-Context Language Models on Long Code Understanding.
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

Revisit Self-Debugging with Self-Generated Tests for Code Generation.
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

2024
aiXcoder-7B: A Lightweight and Effective Large Language Model for Code Completion.
CoRR, 2024

FAN: Fourier Analysis Networks.
CoRR, 2024

HiRoPE: Length Extrapolation for Code Models.
CoRR, 2024

Deep learning for code generation: a survey.
Sci. China Inf. Sci., 2024

CodeAgent: Enhancing Code Generation with Tool-Integrated Agent Systems for Real-World Repo-level Coding Challenges.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

HiRoPE: Length Extrapolation for Code Models Using Hierarchical Position.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

2023
CodeEditor: Learning to Edit Source Code with Pre-trained Models.
ACM Trans. Softw. Eng. Methodol., November, 2023

ToolCoder: Teach Code Generation Models to use API search tools.
CoRR, 2023

Learning Program Representations with a Tree-Structured Transformer.
Proceedings of the IEEE International Conference on Software Analysis, Evolution and Reengineering, 2023

Implant Global and Local Hierarchy Information to Sequence based Code Representation Models.
Proceedings of the 31st IEEE/ACM International Conference on Program Comprehension, 2023

Interpretation-based Code Summarization.
Proceedings of the 31st IEEE/ACM International Conference on Program Comprehension, 2023

Self-Edit: Fault-Aware Code Editor for Code Generation.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

2022
A Tree-structured Transformer for Program Representation Learning.
CoRR, 2022

What does Transformer learn about source code?
CoRR, 2022

Learning to represent programs with heterogeneous graphs.
Proceedings of the 30th IEEE/ACM International Conference on Program Comprehension, 2022

2020
Learning to Represent Programs with Heterogeneous Graphs.
CoRR, 2020
