Chiwun Yang

According to our database, Chiwun Yang authored at least 15 papers between 2023 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.



Bibliography

2025
Theoretical Foundation of Flow-Based Time Series Generation: Provable Approximation, Generalization, and Efficiency.
CoRR, March, 2025

ParallelComp: Parallel Long-Context Compressor for Length Extrapolation.
CoRR, February, 2025

Video Latent Flow Matching: Optimal Polynomial Projections for Video Interpolation and Extrapolation.
CoRR, February, 2025

Curse of Attention: A Kernel-Based Perspective for Why Transformers Fail to Generalize on Time Series Forecasting and Beyond.
Proceedings of the Conference on Parsimony and Learning, 2025

Unlock the Theory behind Scaling 1-bit Neural Networks.
Proceedings of the Conference on Parsimony and Learning, 2025

2024
Unlocking the Theory Behind Scaling 1-Bit Neural Networks.
CoRR, 2024

Toward Infinite-Long Prefix in Transformer.
CoRR, 2024

Attention is Naturally Sparse with Gaussian Distributed Input.
CoRR, 2024

Enhancing Stochastic Gradient Descent: A Unified Framework and Novel Acceleration Methods for Faster Convergence.
CoRR, 2024

How to Protect Copyright Data in Optimization of Large Language Models?
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

2023
One Pass Streaming Algorithm for Super Long Token Attention Approximation in Sublinear Space.
CoRR, 2023

A Theoretical Insight into Attack and Defense of Gradient Leakage in Transformer.
CoRR, 2023

Unmasking Transformers: A Theoretical Approach to Data Recovery via Attention Weights.
CoRR, 2023

An Automatic Learning Rate Schedule Algorithm for Achieving Faster Convergence and Steeper Descent.
CoRR, 2023

Fine-tune Language Models to Approximate Unbiased In-context Learning.
CoRR, 2023

