Yue Wang

ORCID: 0000-0001-5889-0729

Affiliations:
  • Rice University, Department of Electrical and Computer Engineering, USA


According to our database, Yue Wang authored at least 33 papers between 2018 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Tokenize the World into Object-level Knowledge to Address Long-tail Events in Autonomous Driving.
CoRR, 2024

DistillNeRF: Perceiving 3D Scenes from Single-Glance Images by Distilling Neural Fields and Foundation Model Features.
CoRR, 2024

Memorize What Matters: Emergent Scene Decomposition from Multitraverse.
CoRR, 2024

Language-Image Models with 3D Understanding.
CoRR, 2024

InstantSplat: Unbounded Sparse-view Pose-free Gaussian Splatting in 40 Seconds.
CoRR, 2024

Q-SLAM: Quadric Representations for Monocular SLAM.
CoRR, 2024

Parallelized Spatiotemporal Binding.
CoRR, 2024

Driving Everywhere with Large Language Model Policy Adaptation.
CoRR, 2024

Denoising Vision Transformers.
CoRR, 2024

Augmenting Lane Perception and Topology Understanding with Standard Definition Navigation Maps.
Proceedings of the IEEE International Conference on Robotics and Automation, 2024

EmerNeRF: Emergent Spatial-Temporal Scene Decomposition via Self-Supervision.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

2023
SmartDeal: Remodeling Deep Network Weights for Efficient Inference and Training.
IEEE Trans. Neural Networks Learn. Syst., October 2023

A Language Agent for Autonomous Driving.
CoRR, 2023

SSCBench: A Large-Scale 3D Semantic Scene Completion Benchmark for Autonomous Driving.
CoRR, 2023

FreeNeRF: Improving Few-Shot Neural Rendering with Free Frequency Regularization.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

2022
Multi-Robot Scene Completion: Towards Task-Agnostic Collaborative Perception.
Proceedings of the Conference on Robot Learning, 2022

2021
HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark.
CoRR, 2021

SmartDeal: Re-Modeling Deep Network Weights for Efficient Inference and Training.
CoRR, 2021

HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark.
Proceedings of the 9th International Conference on Learning Representations, 2021

SACoD: Sensor Algorithm Co-Design Towards Efficient CNN-powered Intelligent PhlatCam.
Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision, 2021

2020
Dual Dynamic Inference: Enabling More Efficient, Adaptive, and Controllable Deep Inference.
IEEE J. Sel. Top. Signal Process., 2020

FracTrain: Fractionally Squeezing Bit Savings Both Temporally and Spatially for Efficient DNN Training.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

A New MRAM-Based Process In-Memory Accelerator for Efficient Neural Network Training with Floating Point Precision.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2020

SmartExchange: Trading Higher-cost Memory Storage/Access for Lower-cost Computation.
Proceedings of the 47th ACM/IEEE Annual International Symposium on Computer Architecture, 2020

Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks.
Proceedings of the 8th International Conference on Learning Representations, 2020

DNN-Chip Predictor: An Analytical Performance Predictor for DNN Accelerators with Various Dataflows and Hardware Architectures.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2020

AutoDNNchip: An Automated DNN Chip Predictor and Builder for Both FPGAs and ASICs.
Proceedings of the FPGA '20: The 2020 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, 2020

Fractional Skipping: Towards Finer-Grained Dynamic CNN Inference.
Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020

2019
E2-Train: Energy-Efficient Deep Network Training with Data-, Model-, and Algorithm-Level Saving.
CoRR, 2019

Drawing early-bird tickets: Towards more efficient training of deep networks.
CoRR, 2019

E2-Train: Training State-of-the-art CNNs with Over 80% Energy Savings.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

Live Demonstration: Bringing Powerful Deep Learning into Daily-Life Devices (Mobiles and FPGAs) Via Deep k-Means.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2019

2018
Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions.
Proceedings of the 35th International Conference on Machine Learning, 2018

