Zhen Qin

ORCID: 0009-0005-9856-0843

Affiliations:
  • SenseTime Research, Shanghai, China


According to our database, Zhen Qin authored at least 27 papers between 2022 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2025
Autoregressive Image Generation with Linear Complexity: A Spatial-Aware Decay Perspective.
CoRR, July 2025

MiniMax-01: Scaling Foundation Models with Lightning Attention.
CoRR, January 2025

LASP: Linear Attention Sequence Parallelism.
Trans. Mach. Learn. Res., 2025

Local enhanced Toeplitz neural network for in-vehicle network intrusion detection.
J. King Saud Univ. Comput. Inf. Sci., 2025

Deep Non-Rigid Structure-from-Motion Revisited: Canonicalization and Sequence Modeling.
Proceedings of the Thirty-Ninth AAAI Conference on Artificial Intelligence, 2025

2024
You Only Scan Once: Efficient Multi-dimension Sequential Modeling with LightNet.
CoRR, 2024

Unlocking the Secrets of Linear Complexity Sequence Model from A Unified Perspective.
CoRR, 2024

HGRN2: Gated Linear RNNs with State Expansion.
CoRR, 2024

Linear Attention Sequence Parallelism.
CoRR, 2024

Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models.
CoRR, 2024

TAVGBench: Benchmarking Text to Audible-Video Generation.
Proceedings of the 32nd ACM International Conference on Multimedia, MM 2024, Melbourne, VIC, Australia, 2024

Various Lengths, Constant Speed: Efficient Language Modeling with Lightning Attention.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

CO2: Efficient Distributed Training with Full Communication-Computation Overlap.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Scaling Laws for Linear Complexity Language Models.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024

Exploring Transformer Extrapolation.
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

2023
Vicinity Vision Transformer.
IEEE Trans. Pattern Anal. Mach. Intell., October 2023

Linearized Relative Positional Encoding.
Trans. Mach. Learn. Res., 2023

Scaling TransNormer to 175 Billion Parameters.
CoRR, 2023

Hierarchically Gated Recurrent Neural Network for Sequence Modeling.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Toeplitz Neural Network for Sequence Modeling.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

All-pairs Consistency Learning for Weakly Supervised Semantic Segmentation.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

Accelerating Toeplitz Neural Network with Constant-time Inference Complexity.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

Fine-grained Audible Video Description.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

2022
Linear Video Transformer with Feature Fixation.
CoRR, 2022

Neural Architecture Search on Efficient Transformers and Beyond.
CoRR, 2022

cosFormer: Rethinking Softmax In Attention.
Proceedings of the Tenth International Conference on Learning Representations, 2022

The Devil in Linear Transformer.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022
