Zhenwei Dai

Orcid: 0000-0001-9200-687X

According to our database, Zhenwei Dai authored at least 28 papers between 2019 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2025
Bradley-Terry and Multi-Objective Reward Modeling Are Complementary.
CoRR, July, 2025

LAKEGEN: A LLM-based Tabular Corpus Generator for Evaluating Dataset Discovery in Data Lakes.
CoRR, July, 2025

Attention Knows Whom to Trust: Attention-based Trust Management for LLM Multi-Agent Systems.
CoRR, June, 2025

Comprehensive Vulnerability Analysis is Necessary for Trustworthy LLM-MAS.
CoRR, June, 2025

Keeping an Eye on LLM Unlearning: The Hidden Risk and Remedy.
CoRR, June, 2025

Examples as the Prompt: A Scalable Approach for Efficient LLM Adaptation in E-Commerce.
CoRR, March, 2025

Cite Before You Speak: Enhancing Context-Response Grounding in E-commerce Conversational LLM-Agents.
CoRR, March, 2025

How Far are LLMs from Real Search? A Comprehensive Study on Efficiency, Completeness, and Inherent Capabilities.
CoRR, February, 2025

A General Framework to Enhance Fine-tuning-based LLM Unlearning.
CoRR, February, 2025

Stepwise Perplexity-Guided Refinement for Efficient Chain-of-Thought Reasoning in Large Language Models.
CoRR, February, 2025

Examples as the Prompt: A Scalable Approach for Efficient LLM Adaptation in E-Commerce.
Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2025

SimRAG: Self-Improving Retrieval-Augmented Generation for Adapting Large Language Models to Specialized Domains.
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies, 2025

Stepwise Perplexity-Guided Refinement for Efficient Chain-of-Thought Reasoning in Large Language Models.
Proceedings of the Findings of the Association for Computational Linguistics, 2025

A General Framework to Enhance Fine-tuning-based LLM Unlearning.
Proceedings of the Findings of the Association for Computational Linguistics, 2025

2024
SimRAG: Self-Improving Retrieval-Augmented Generation for Adapting Large Language Models to Specialized Domains.
CoRR, 2024

Session-Aware Product Filter Ranking in E-Commerce Search.
Proceedings of the Second Tiny Papers Track at ICLR 2024, 2024

RA-NER: Retrieval augmented NER for knowledge intensive named entity recognition.
Proceedings of the Second Tiny Papers Track at ICLR 2024, 2024

Exploring Query Understanding for Amazon Product Search.
Proceedings of the IEEE International Conference on Big Data, 2024

2023
Graph Self-supervised Learning via Proximity Distribution Minimization.
Proceedings of the Uncertainty in Artificial Intelligence, 2023

2022
Optimizing Learned Bloom Filters: How Much Should Be Learned?
IEEE Embed. Syst. Lett., 2022

ScatterSample: Diversified Label Sampling for Data Efficient Graph Neural Network Learning.
Proceedings of the Learning on Graphs Conference, 2022

2021
Federated Multiple Label Hashing (FedMLH): Communication Efficient Federated Learning on Extreme Classification Tasks.
CoRR, 2021

Active Sampling Count Sketch (ASCS) for Online Sparse Estimation of a Trillion Scale Covariance Matrix.
Proceedings of the SIGMOD '21: International Conference on Management of Data, 2021

Learned Bloom Filters in Adversarial Environments: A Malicious URL Detection Use-Case.
Proceedings of the 22nd IEEE International Conference on High Performance Switching and Routing, 2021

2020
Adaptive Learned Bloom Filter (Ada-BF): Efficient Utilization of the Classifier with Application to Real-Time Information Filtering on the Web.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

2019
Adaptive Learned Bloom Filter (Ada-BF): Efficient Utilization of the Classifier.
CoRR, 2019

Channel Normalization in Convolutional Neural Network avoids Vanishing Gradients.
CoRR, 2019

Batch effects correction for microbiome data with Dirichlet-multinomial regression.
Bioinform., 2019
