Shiwei Liu

Orcid: 0000-0002-0564-4900

According to our database, Shiwei Liu authored at least 72 papers between 2018 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Hybrid Conditional Kernel SVM for Wire Rope Defect Recognition.
IEEE Trans. Ind. Informatics, April, 2024

HARDSEA: Hybrid Analog-ReRAM Clustering and Digital-SRAM In-Memory Computing Accelerator for Dynamic Sparse Self-Attention in Transformer.
IEEE Trans. Very Large Scale Integr. Syst., February, 2024

Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding.
CoRR, 2024

ConSmax: Hardware-Friendly Alternative Softmax with Learnable Parameters.
CoRR, 2024

2023
Feedback and contribution of vegetation, air temperature and precipitation to land surface temperature in the Yangtze River Basin considering statistical analysis.
Int. J. Digit. Earth, December, 2023

Future variation of land surface temperature in the Yangtze River Basin based on CMIP6 model.
Int. J. Digit. Earth, December, 2023

A Modified A Posteriori Subcell Limiter for High Order Flux Reconstruction Scheme for One-Dimensional Detonation Simulation.
J. Sci. Comput., November, 2023

Don't Be So Dense: Sparse-to-Sparse GAN Training Without Sacrificing Performance.
Int. J. Comput. Vis., October, 2023

Wire Rope Defect Recognition Method Based on MFL Signal Analysis and 1D-CNNs.
Sensors, April, 2023

Radiative Effects and Costing Assessment of Arctic Sea Ice Albedo Changes.
Remote. Sens., February, 2023

Supervised Feature Selection with Neuron Evolution in Sparse Neural Networks.
Trans. Mach. Learn. Res., 2023

Effects of Atmospheric Coherent Time on Inverse Synthetic Aperture Ladar Imaging through Atmospheric Turbulence.
Remote. Sens., 2023

The Counterattack of CNNs in Self-Supervised Learning: Larger Kernel Size might be All You Need.
CoRR, 2023

E2ENet: Dynamic Sparse Feature Fusion for Accurate and Efficient 3D Medical Image Segmentation.
CoRR, 2023

Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective.
CoRR, 2023

Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs.
CoRR, 2023

Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity.
CoRR, 2023

AdaMerging: Adaptive Model Merging for Multi-Task Learning.
CoRR, 2023

Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity.
CoRR, 2023

Optimized Path Planning for USVs under Ocean Currents.
CoRR, 2023

Dynamic Sparsity Is Channel-Level Sparsity Learner.
CoRR, 2023

Are Large Kernels Better Teachers than Transformers for ConvNets?
CoRR, 2023

Ten Lessons We Have Learned in the New "Sparseland": A Short Handbook for Sparse Neural Network Researchers.
CoRR, 2023

REST: Enhancing Group Robustness in DNNs Through Reweighted Sparse Training.
Proceedings of the Machine Learning and Knowledge Discovery in Databases: Research Track, 2023

Enhancing Adversarial Training via Reweighting Optimization Trajectory.
Proceedings of the Machine Learning and Knowledge Discovery in Databases: Research Track, 2023

Towards Data-Agnostic Pruning At Initialization: What Makes a Good Sparse Mask?
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Predicting mutational effects on protein-protein binding via a side-chain diffusion probabilistic model.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

The Emergence of Essential Sparsity in Large Pre-trained Models: The Weights that Matter.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Don't just prune by magnitude! Your mask topology is a secret weapon.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Dynamic Sparsity Is Channel-Level Sparsity Learner.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

A 28nm 53.8TOPS/W 8b Sparse Transformer Accelerator with In-Memory Butterfly Zero Skipper for Unstructured-Pruned NN and CIM-Based Local-Attention-Reusable Engine.
Proceedings of the IEEE International Solid- State Circuits Conference, 2023

A Scalable Die-to-Die Interconnect with Replay and Repair Schemes for 2.5D/3D Integration.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2023

Instant Soup: Cheap Pruning Ensembles in A Single Pass Can Draw Lottery Tickets from Large Models.
Proceedings of the International Conference on Machine Learning, 2023

Graph Ladling: Shockingly Simple Parallel GNN Training without Intermediate Communication.
Proceedings of the International Conference on Machine Learning, 2023

Are Large Kernels Better Teachers than Transformers for ConvNets?
Proceedings of the International Conference on Machine Learning, 2023

Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!
Proceedings of the Eleventh International Conference on Learning Representations, 2023

More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Revisiting Pruning at Initialization Through the Lens of Ramanujan Graph.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Data Augmented Flatness-aware Gradient Projection for Continual Learning.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

Many-Task Federated Learning: A New Problem Setting and A Simple Baseline.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

Lottery Pools: Winning More by Interpolating Tickets without Increasing Training or Inference Cost.
Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, 2023

2022
A Novel E-Exponential Stochastic Resonance Model and Weak Signal Detection Method for Steel Wire Rope.
IEEE Trans. Ind. Electron., 2022

The Response of Land Surface Temperature Changes to the Vegetation Dynamics in the Yangtze River Basin.
Remote. Sens., 2022

A brain-inspired algorithm for training highly sparse neural networks.
Mach. Learn., 2022

Energy-efficient adaptive virtual-MIMO transmissions for LoRa uplink systems.
Digit. Signal Process., 2022

More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity.
CoRR, 2022

Superposing Many Tickets into One: A Performance Booster for Sparse Neural Network Training.
CoRR, 2022

Achieving Personalized Federated Learning with Sparse Local Models.
CoRR, 2022

Dynamic Sparse Network for Time Series Classification: Learning What to "See".
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets.
Proceedings of the Learning on Graphs Conference, 2022

The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity.
Proceedings of the Tenth International Conference on Learning Representations, 2022

A 200M-Query-Vector/s Computing-in-RRAM ADC-less k-Nearest-Neighbor Accelerator with Time-Domain Winner-Takes-All Circuits.
Proceedings of the 4th IEEE International Conference on Artificial Intelligence Circuits and Systems, 2022

2021
Efficient and effective training of sparse recurrent neural networks.
Neural Comput. Appl., 2021

Sparse evolutionary deep learning with over one million artificial neurons on commodity hardware.
Neural Comput. Appl., 2021

FreeTickets: Accurate, Robust and Efficient Deep Ensemble by Training with Dynamic Sparsity.
CoRR, 2021

Sparse Training via Boosting Pruning Plasticity with Neuroregeneration.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training.
Proceedings of the 38th International Conference on Machine Learning, 2021

Selfish Sparse RNN Training.
Proceedings of the 38th International Conference on Machine Learning, 2021

Systolic-Array Deep-Learning Acceleration Exploring Pattern-Indexed Coordinate-Assisted Sparsity for Real-Time On-Device Speech Processing.
Proceedings of the GLSVLSI '21: Great Lakes Symposium on VLSI 2021, 2021

Hierarchical Semantic Segmentation using Psychometric Learning.
Proceedings of the Asian Conference on Machine Learning, 2021

2020
A Wideband Electrical Impedance Tomography System Based on Sensitive Bioimpedance Spectrum Bandwidth.
IEEE Trans. Instrum. Meas., 2020

A Communication-Aware DNN Accelerator on ImageNet Using In-Memory Entry-Counting Based Algorithm-Circuit-Architecture Co-Design in 65-nm CMOS.
IEEE J. Emerg. Sel. Topics Circuits Syst., 2020

Topological Insights in Sparse Neural Networks.
CoRR, 2020

Topological Insights into Sparse Neural Networks.
Proceedings of the Machine Learning and Knowledge Discovery in Databases, 2020

Learning Sparse Neural Networks for Better Generalization.
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, 2020

Network Performance Optimization with Real Time Traffic Prediction in Data Center Network.
Proceedings of the European Conference on Optical Communications, 2020

XNORAM: An Efficient Computing-in-Memory Architecture for Binary Convolutional Neural Networks with Flexible Dataflow Mapping.
Proceedings of the 2nd IEEE International Conference on Artificial Intelligence Circuits and Systems, 2020

2019
On improving deep learning generalization with adaptive sparse connectivity.
CoRR, 2019

Intrinsically Sparse Long Short-Term Memory Networks.
CoRR, 2019

2018
Wideband chirp excitation source for bioelectrical impedance spectrum tomography.
Proceedings of the IEEE International Instrumentation and Measurement Technology Conference, 2018

