Siyuan Huang
ORCID: 0009-0005-6363-833X
Affiliations:
- Shanghai Jiao Tong University, China
- Shanghai AI Lab, China
According to our database, Siyuan Huang authored at least 33 papers between 2022 and 2025.
Bibliography
2025
CoRR, August, 2025
TinyLVLM-eHub: Towards Comprehensive and Efficient Evaluation for Large Vision-Language Models.
IEEE Trans. Big Data, June, 2025
RoboCerebra: A Large-scale Benchmark for Long-horizon Robotic Manipulation Evaluation.
CoRR, June, 2025
CoRR, May, 2025
CoRR, May, 2025
CrayonRobo: Object-Centric Prompt-Driven Vision-Language-Action Model for Robotic Manipulation.
CoRR, May, 2025
IEEE Trans. Pattern Anal. Mach. Intell., March, 2025
Adversarial Data Collection: Human-Collaborative Perturbations for Efficient and Robust Robotic Imitation Learning.
CoRR, March, 2025
CoRR, January, 2025
PixWizard: Versatile Image-to-Image Visual Assistant with Open-Language Instructions.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025
Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025
Findings of the Association for Computational Linguistics, 2025
2024
UniAff: A Unified Representation of Affordances for Tool Usage and Articulation with Vision-Language Models.
CoRR, 2024
SKT: Integrating State-Aware Keypoint Trajectories with Vision-Language Models for Robotic Garment Manipulation.
CoRR, 2024
PixWizard: Versatile Image-to-Image Visual Assistant with Open-Language Instructions.
CoRR, 2024
CoRR, 2024
SPHINX-X: Scaling Data and Parameters for a Family of Multi-modal Large Language Models.
CoRR, 2024
ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models.
Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2024
Bridging Zero-shot Object Navigation and Foundation Models through Pixel-Guided Navigation Skill.
Proceedings of the IEEE International Conference on Robotics and Automation, 2024
SPHINX-X: Scaling Data and Parameters for a Family of Multi-modal Large Language Models.
Proceedings of the Forty-first International Conference on Machine Learning, 2024
SPHINX: A Mixer of Weights, Visual Embeddings and Image Scales for Multi-modal Large Language Models.
Proceedings of the Computer Vision - ECCV 2024, 2024
Proceedings of the Conference on Robot Learning, 6-9 November 2024, Munich, Germany
Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024
2023
SPHINX: The Joint Mixing of Weights, Tasks, and Visual Embeddings for Multi-modal Large Language Models.
CoRR, 2023
Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model.
CoRR, 2023
Proceedings of the 31st ACM International Conference on Multimedia, 2023
Prompt, Generate, Then Cache: Cascade of Foundation Models Makes Strong Few-Shot Learners.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023
2022
ADAS: A Simple Active-and-Adaptive Baseline for Cross-Domain 3D Semantic Segmentation.
CoRR, 2022