Gang Niu

ORCID: 0000-0002-7353-5079

Affiliations:
  • RIKEN, Japan
  • Tokyo Institute of Technology, Department of Computer Science, Japan (PhD 2013)
  • Nanjing University, State Key Laboratory for Novel Software Technology, Nanjing, China (former)


According to our database, Gang Niu authored at least 159 papers between 2010 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
PiCO+: Contrastive Label Disambiguation for Robust Partial Label Learning.
IEEE Trans. Pattern Anal. Mach. Intell., May, 2024

On the Robustness of Average Losses for Partial-Label Learning.
IEEE Trans. Pattern Anal. Mach. Intell., May, 2024

Generating Chain-of-Thoughts with a Direct Pairwise-Comparison Approach to Searching for the Most Promising Intermediate Thought.
CoRR, 2024

Direct Distillation between Different Domains.
CoRR, 2024

2023
A Parametrical Model for Instance-Dependent Label Noise.
IEEE Trans. Pattern Anal. Mach. Intell., December, 2023

Boundary-restricted metric learning.
Mach. Learn., December, 2023

Multiple-Instance Learning From Unlabeled Bags With Pairwise Similarity.
IEEE Trans. Knowl. Data Eng., November, 2023

Learning Intention-Aware Policies in Deep Reinforcement Learning.
Neural Comput., October, 2023

Class-Wise Denoising for Robust Learning Under Label Noise.
IEEE Trans. Pattern Anal. Mach. Intell., March, 2023

Representation learning for continuous action spaces is beneficial for efficient policy learning.
Neural Networks, February, 2023

Learning with Complementary Labels Revisited: A Consistent Approach via Negative-Unlabeled Learning.
CoRR, 2023

Diversified Outlier Exposure for Out-of-Distribution Detection via Informative Extrapolation.
CoRR, 2023

Atom-Motif Contrastive Transformer for Molecular Property Prediction.
CoRR, 2023

Diversity-enhancing Generative Network for Few-shot Hypothesis Adaptation.
CoRR, 2023

Making Binary Classification from Multiple Unlabeled Datasets Almost Free of Supervision.
CoRR, 2023

Enhancing Label Sharing Efficiency in Complementary-Label Learning with Label Augmentation.
CoRR, 2023

Assessing Vulnerabilities of Adversarial Learning Algorithm through Poisoning Attacks.
CoRR, 2023

Investigating and Mitigating the Side Effects of Noisy Views in Multi-view Clustering in Practical Scenarios.
CoRR, 2023

Fairness Improves Learning from Noisily Labeled Long-Tailed Data.
CoRR, 2023

Diversified Outlier Exposure for Out-of-Distribution Detection via Informative Extrapolation.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Self-Weighted Contrastive Learning among Multiple Views for Mitigating Representation Degeneration.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Class-Distribution-Aware Pseudo-Labeling for Semi-Supervised Multi-Label Learning.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Binary Classification with Confidence Difference.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Generalizing Importance Weighting to A Universal Solver for Distribution Shift Problems.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Mitigating Memorization of Noisy Labels by Clipping the Model Prediction.
Proceedings of the International Conference on Machine Learning, 2023

A Universal Unbiased Method for Classification from Aggregate Observations.
Proceedings of the International Conference on Machine Learning, 2023

Diversity-enhancing Generative Network for Few-shot Hypothesis Adaptation.
Proceedings of the International Conference on Machine Learning, 2023

Is the Performance of My Deep Network Too Good to Be True? A Direct Approach to Estimating the Bayes Error in Binary Classification.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Multi-Label Knowledge Distillation.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

Distribution Shift Matters for Knowledge Distillation with Webly Collected Images.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

Towards Effective Visual Representations for Partial-Label Learning.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

2022
NoiLin: Improving adversarial training and correcting stereotype of noisy labels.
Trans. Mach. Learn. Res., 2022

SemiNLL: A Framework of Noisy-Label Learning by Semi-Supervised Learning.
Trans. Mach. Learn. Res., 2022

Learning from Noisy Pairwise Similarity and Unlabeled Data.
J. Mach. Learn. Res., 2022

Fast and Robust Rank Aggregation against Model Misspecification.
J. Mach. Learn. Res., 2022

Logit Clipping for Robust Learning against Label Noise.
CoRR, 2022

FedMT: Federated Learning with Mixed-type Labels.
CoRR, 2022

On the Effectiveness of Adversarial Training against Backdoor Attacks.
CoRR, 2022

Generalizing Consistent Multi-Class Classification with Rejection to be Compatible with Arbitrary Losses.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Learning Contrastive Embedding in Low-Dimensional Space.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Adversarial Training with Complementary Labels: On the Benefit of Gradually Informative Attacks.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Estimating Instance-dependent Bayes-label Transition Matrix using a Deep Neural Network.
Proceedings of the International Conference on Machine Learning, 2022

To Smooth or Not? When Label Smoothing Meets Noisy Labels.
Proceedings of the International Conference on Machine Learning, 2022

Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack.
Proceedings of the International Conference on Machine Learning, 2022

Reliable Adversarial Distillation with Unreliable Teachers.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Adversarial Robustness Through the Lens of Causality.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Exploiting Class Activation Value for Partial-Label Learning.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Rethinking Class-Prior Estimation for Positive-Unlabeled Learning.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Sample Selection with Uncertainty of Losses for Learning with Noisy Labels.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Learning with Noisy Labels Revisited: A Study Using Real-World Human Annotations.
Proceedings of the Tenth International Conference on Learning Representations, 2022

PiCO: Contrastive Label Disambiguation for Partial Label Learning.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Federated Learning from Only Unlabeled Data with Class-conditional-sharing Clients.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Meta Discovery: Learning to Discover Novel Classes given Very Limited Data.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Instance-Dependent Label-Noise Learning with Manifold-Regularized Transition Matrix Estimation.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022

Learning and Mining with Noisy Labels.
Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 2022

2021
Direction Matters: On Influence-Preserving Graph Summarization and Max-Cut Principle for Directed Graphs.
Neural Comput., 2021

Information-Theoretic Representation Learning for Positive-Unlabeled Classification.
Neural Comput., 2021

Active Refinement for Multi-Label Learning: A Pseudo-Label Approach.
CoRR, 2021

Local Reweighting for Adversarial Training.
CoRR, 2021

Multi-Class Classification from Single-Class Data with Confidences.
CoRR, 2021

On the Robustness of Average Losses for Partial-Label Learning.
CoRR, 2021

Understanding (Generalized) Label Smoothing when Learning with Noisy Labels.
CoRR, 2021

Instance Correction for Learning with Open-set Noisy Labels.
CoRR, 2021

NoiLIn: Do Noisy Labels Always Hurt Adversarial Training?
CoRR, 2021

Estimating Instance-dependent Label-noise Transition Matrix using DNNs.
CoRR, 2021

Guided Interpolation for Adversarial Training.
CoRR, 2021

Meta Discovery: Learning to Discover Novel Classes given Very Limited Data.
CoRR, 2021

Understanding the Interaction of Adversarial Training with Noisy Labels.
CoRR, 2021

Instance-dependent Label-noise Learning under a Structural Causal Model.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Probabilistic Margins for Instance Reweighting in Adversarial Training.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Understanding and Improving Early Stopping for Learning with Noisy Labels.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Multiple-Instance Learning from Similar and Dissimilar Bags.
Proceedings of the KDD '21: The 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2021

Learning Noise Transition Matrix from Only Noisy Labels via Total Variation Regularization.
Proceedings of the 38th International Conference on Machine Learning, 2021

CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection.
Proceedings of the 38th International Conference on Machine Learning, 2021

Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels.
Proceedings of the 38th International Conference on Machine Learning, 2021

Binary Classification from Multiple Unlabeled Datasets via Surrogate Set Classification.
Proceedings of the 38th International Conference on Machine Learning, 2021

Provably End-to-end Label-noise Learning without Anchor Points.
Proceedings of the 38th International Conference on Machine Learning, 2021

Maximum Mean Discrepancy Test is Aware of Adversarial Attacks.
Proceedings of the 38th International Conference on Machine Learning, 2021

Pointwise Binary Classification with Pairwise Confidence Comparisons.
Proceedings of the 38th International Conference on Machine Learning, 2021

Learning Diverse-Structured Networks for Adversarial Robustness.
Proceedings of the 38th International Conference on Machine Learning, 2021

Learning from Similarity-Confidence Data.
Proceedings of the 38th International Conference on Machine Learning, 2021

Confidence Scores Make Instance-dependent Label-noise Learning Possible.
Proceedings of the 38th International Conference on Machine Learning, 2021

Large-Margin Contrastive Learning with Distance Polarization Regularizer.
Proceedings of the 38th International Conference on Machine Learning, 2021

Geometry-aware Instance-reweighted Adversarial Training.
Proceedings of the 9th International Conference on Learning Representations, 2021

Scalable Evaluation and Improvement of Document Set Expansion via Neural Positive-Unlabeled Learning.
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, 2021

Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model.
Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021

2020
A Survey of Label-noise Representation Learning: Past, Present and Future.
CoRR, 2020

Maximum Mean Discrepancy is Aware of Adversarial Attacks.
CoRR, 2020

Parts-dependent Label Noise: Towards Instance-dependent Label Noise.
CoRR, 2020

Class2Simi: A New Perspective on Learning with Label Noise.
CoRR, 2020

Multi-Class Classification from Noisy-Similarity-Labeled Data.
CoRR, 2020

Towards Mixture Proportion Estimation without Irreducibility.
CoRR, 2020

Dual T: Reducing Estimation Error for Transition Matrix in Label-noise Learning.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Part-dependent Label Noise: Towards Instance-dependent Label Noise.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Provably Consistent Partial-Label Learning.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Rethinking Importance Weighting for Deep Learning under Distribution Shift.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Attacks Which Do Not Kill Training Make Adversarial Learning Stronger.
Proceedings of the 37th International Conference on Machine Learning, 2020

Searching to Exploit Memorization Effect in Learning with Noisy Labels.
Proceedings of the 37th International Conference on Machine Learning, 2020

Progressive Identification of True Labels for Partial-Label Learning.
Proceedings of the 37th International Conference on Machine Learning, 2020

Do We Need Zero Training Loss After Achieving Zero Training Error?
Proceedings of the 37th International Conference on Machine Learning, 2020

Learning with Multiple Complementary Labels.
Proceedings of the 37th International Conference on Machine Learning, 2020

Unbiased Risk Estimators Can Mislead: A Case Study of Learning with Complementary Labels.
Proceedings of the 37th International Conference on Machine Learning, 2020

SIGUA: Forgetting May Make Learning with Noisy Labels More Robust.
Proceedings of the 37th International Conference on Machine Learning, 2020

Cross-Graph: Robust and Unsupervised Embedding for Attributed Graphs with Corrupted Structure.
Proceedings of the 20th IEEE International Conference on Data Mining, 2020

Mitigating Overfitting in Supervised Classification from Two Unlabeled Datasets: A Consistent Risk Correction Approach.
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2020

Beyond Unfolding: Exact Recovery of Latent Convex Tensor Decomposition Under Reshuffling.
Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020

2019
Where is the Bottleneck of Adversarial Learning with Unlabeled Data?
CoRR, 2019

Searching to Exploit Memorization Effect in Learning from Corrupted Labels.
CoRR, 2019

Butterfly: A Panacea for All Difficulties in Wildly Unsupervised Domain Adaptation.
CoRR, 2019

Revisiting Sample Selection Approach to Positive-Unlabeled Learning: Turning Unlabeled Data into Positive rather than Negative.
CoRR, 2019

Uncoupled Regression from Pairwise Comparison Data.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

Are Anchor Points Really Indispensable in Label-Noise Learning?
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

How does Disagreement Help Generalization against Label Corruption?
Proceedings of the 36th International Conference on Machine Learning, 2019

Complementary-Label Learning for Arbitrary Losses and Models.
Proceedings of the 36th International Conference on Machine Learning, 2019

Classification from Positive, Unlabeled and Biased Negative Data.
Proceedings of the 36th International Conference on Machine Learning, 2019

On the Minimal Supervision for Training Any Binary Classifier from Only Unlabeled Data.
Proceedings of the 7th International Conference on Learning Representations, 2019

2018
Sufficient Dimension Reduction via Direct Estimation of the Gradients of Logarithmic Conditional Densities.
Neural Comput., 2018

Correction to: Semi-supervised AUC optimization based on positive-unlabeled learning.
Mach. Learn., 2018

Semi-supervised AUC optimization based on positive-unlabeled learning.
Mach. Learn., 2018

Pumpout: A Meta Approach for Robustly Training Deep Neural Networks with Noisy Labels.
CoRR, 2018

Alternate Estimation of a Classifier and the Class-Prior from Positive and Unlabeled Data.
CoRR, 2018

Matrix Co-completion for Multi-label Classification with Missing Features and Labels.
CoRR, 2018

Co-sampling: Training Robust Networks for Extremely Noisy Supervision.
CoRR, 2018

Binary Classification from Positive-Confidence Data.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

Co-teaching: Robust training of deep neural networks with extremely noisy labels.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

Masking: A New Perspective of Noisy Supervision.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

Active Feature Acquisition with Supervised Matrix Completion.
Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2018

Does Distributionally Robust Supervised Learning Give Robust Classifiers?
Proceedings of the 35th International Conference on Machine Learning, 2018

Classification from Pairwise Similarity and Unlabeled Data.
Proceedings of the 35th International Conference on Machine Learning, 2018

2017
Class-prior estimation for learning from positive and unlabeled data.
Mach. Learn., 2017

Mode-Seeking Clustering and Density Ridge Estimation via Direct Estimation of Density-Derivative-Ratios.
J. Mach. Learn. Res., 2017

Estimation of Squared-Loss Mutual Information from Positive and Unlabeled Data.
CoRR, 2017

Learning from Complementary Labels.
CoRR, 2017

Positive-Unlabeled Learning with Non-Negative Risk Estimator.
Proceedings of the Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 2017

Learning from Complementary Labels.
Proceedings of the Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 2017

Semi-Supervised Classification Based on Classification from Positive and Unlabeled Data.
Proceedings of the 34th International Conference on Machine Learning, 2017

Whitening-Free Least-Squares Non-Gaussian Component Analysis.
Proceedings of The 9th Asian Conference on Machine Learning, 2017

2016
Direct Density Derivative Estimation.
Neural Comput., 2016

Beyond the Low-density Separation Principle: A Novel Approach to Semi-supervised Learning.
CoRR, 2016

Theoretical Comparisons of Learning from Positive-Negative, Positive-Unlabeled, and Negative-Unlabeled Data.
CoRR, 2016

Theoretical Comparisons of Positive-Unlabeled Learning against Positive-Negative Learning.
Proceedings of the Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, 2016

Non-Gaussian Component Analysis with Log-Density Gradient Estimation.
Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, 2016

2015
Convex Formulation for Learning from Positive and Unlabeled Data.
Proceedings of the 32nd International Conference on Machine Learning, 2015

Regularized Policy Gradients: Direct Variance Reduction in Policy Gradient Estimation.
Proceedings of The 7th Asian Conference on Machine Learning, 2015

2014
Semi-supervised information-maximization clustering.
Neural Networks, 2014

Information-Maximization Clustering Based on Squared-Loss Mutual Information.
Neural Comput., 2014

Information-Theoretic Semi-Supervised Metric Learning via Entropy Regularization.
Neural Comput., 2014

Analysis of Learning from Positive and Unlabeled Data.
Proceedings of the Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, 2014

Transductive Learning with Multi-class Volume Approximation.
Proceedings of the 31st International Conference on Machine Learning, 2014

2013
Maximum volume clustering: a new discriminative clustering approach.
J. Mach. Learn. Res., 2013

Clustering Unclustered Data: Unsupervised Binary Labeling of Two Datasets Having Different Class Balances.
Proceedings of the Conference on Technologies and Applications of Artificial Intelligence, 2013

Squared-loss Mutual Information Regularization: A Novel Information-theoretic Approach to Semi-supervised Learning.
Proceedings of the 30th International Conference on Machine Learning, 2013

2012
Analysis and improvement of policy gradient estimation.
Neural Networks, 2012

2011
Sufficient Component Analysis.
Proceedings of the 3rd Asian Conference on Machine Learning, 2011

Maximum Volume Clustering.
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 2011

Transfer Learning via Multi-View Principal Component Analysis.
J. Comput. Sci. Technol., 2011

2010
Rough Margin Based Core Vector Machine.
Proceedings of the Advances in Knowledge Discovery and Data Mining, 2010

Compact Margin Machine.
Proceedings of the Advances in Knowledge Discovery and Data Mining, 2010

Bayesian Maximum Margin Clustering.
Proceedings of the 10th IEEE International Conference on Data Mining, 2010
