Siyuan Huang

ORCID: 0009-0005-6363-833X

Affiliations:
  • Shanghai Jiao Tong University, China
  • Shanghai AI Lab, China


According to our database, Siyuan Huang authored at least 33 papers between 2022 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2025
Genie Envisioner: A Unified World Foundation Platform for Robotic Manipulation.
CoRR, August, 2025

TinyLVLM-eHub: Towards Comprehensive and Efficient Evaluation for Large Vision-Language Models.
IEEE Trans. Big Data, June, 2025

RoboCerebra: A Large-scale Benchmark for Long-horizon Robotic Manipulation Evaluation.
CoRR, June, 2025

EnerVerse-AC: Envisioning Embodied Environments with Action Condition.
CoRR, May, 2025

EWMBench: Evaluating Scene, Motion, and Semantic Quality in Embodied World Models.
CoRR, May, 2025

CrayonRobo: Object-Centric Prompt-Driven Vision-Language-Action Model for Robotic Manipulation.
CoRR, May, 2025

LVLM-EHub: A Comprehensive Evaluation Benchmark for Large Vision-Language Models.
IEEE Trans. Pattern Anal. Mach. Intell., March, 2025

Adversarial Data Collection: Human-Collaborative Perturbations for Efficient and Robust Robotic Imitation Learning.
CoRR, March, 2025

EnerVerse: Envisioning Embodied Future Space for Robotics Manipulation.
CoRR, January, 2025

A3: Android Agent Arena for Mobile GUI Agents.
CoRR, January, 2025

PixWizard: Versatile Image-to-Image Visual Assistant with Open-Language Instructions.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

Object-Centric Prompt-Driven Vision-Language-Action Model for Robotic Manipulation.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025

AMEX: Android Multi-annotation Expo Dataset for Mobile GUI Agents.
Proceedings of the Findings of the Association for Computational Linguistics, 2025

2024
Effective Tuning Strategies for Generalist Robot Manipulation Policies.
CoRR, 2024

UniAff: A Unified Representation of Affordances for Tool Usage and Articulation with Vision-Language Models.
CoRR, 2024

SKT: Integrating State-Aware Keypoint Trajectories with Vision-Language Models for Robotic Garment Manipulation.
CoRR, 2024

PixWizard: Versatile Image-to-Image Visual Assistant with Open-Language Instructions.
CoRR, 2024

AMEX: Android Multi-annotation Expo Dataset for Mobile GUI Agents.
CoRR, 2024

GUI Odyssey: A Comprehensive Dataset for Cross-App GUI Navigation on Mobile Devices.
CoRR, 2024

SPHINX-X: Scaling Data and Parameters for a Family of Multi-modal Large Language Models.
CoRR, 2024

ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models.
Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2024

Bridging Zero-shot Object Navigation and Foundation Models through Pixel-Guided Navigation Skill.
Proceedings of the IEEE International Conference on Robotics and Automation, 2024

SPHINX-X: Scaling Data and Parameters for a Family of Multi-modal Large Language Models.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

SPHINX: A Mixer of Weights, Visual Embeddings and Image Scales for Multi-modal Large Language Models.
Proceedings of the Computer Vision - ECCV 2024, 2024

A3VLM: Actionable Articulation-Aware Vision Language Model.
Proceedings of the Conference on Robot Learning, Munich, Germany, 2024

Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

2023
SPHINX: The Joint Mixing of Weights, Tasks, and Visual Embeddings for Multi-modal Large Language Models.
CoRR, 2023

Tiny LVLM-eHub: Early Multimodal Experiments with Bard.
CoRR, 2023

Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model.
CoRR, 2023

SUG: Single-dataset Unified Generalization for 3D Point Cloud Classification.
Proceedings of the 31st ACM International Conference on Multimedia, 2023

Prompt, Generate, Then Cache: Cascade of Foundation Models Makes Strong Few-Shot Learners.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

2022
ADAS: A Simple Active-and-Adaptive Baseline for Cross-Domain 3D Semantic Segmentation.
CoRR, 2022
