Stanislav Fort

According to our database, Stanislav Fort authored at least 28 papers between 2017 and 2023.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2023
Scaling Laws for Adversarial Attacks on Language Model Activations.
CoRR, 2023

Multi-attacks: Many images + the same adversarial attack → many target labels.
CoRR, 2023

2022
Constitutional AI: Harmlessness from AI Feedback.
CoRR, 2022

Measuring Progress on Scalable Oversight for Large Language Models.
CoRR, 2022

What does a deep neural network confidently perceive? The effective dimension of high certainty class manifolds and their low confidence boundaries.
CoRR, 2022

Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned.
CoRR, 2022

Language Models (Mostly) Know What They Know.
CoRR, 2022

Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback.
CoRR, 2022

Predictability and Surprise in Large Generative Models.
CoRR, 2022

Adversarial vulnerability of powerful near out-of-distribution detection.
CoRR, 2022

How many degrees of freedom do we need to train deep networks: a loss landscape perspective.
Proceedings of the Tenth International Conference on Learning Representations, 2022
2021
A Simple Fix to Mahalanobis Distance for Improving Near-OOD Detection.
CoRR, 2021

Drawing Multiple Augmentation Samples Per Image During Training Efficiently Decreases Test Error.
CoRR, 2021

Analyzing Monotonic Linear Interpolation in Neural Network Loss Landscapes.
CoRR, 2021

Exploring the Limits of Out-of-Distribution Detection.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

On Monotonic Linear Interpolation of Neural Network Parameters.
Proceedings of the 38th International Conference on Machine Learning, 2021

Training independent subnetworks for robust prediction.
Proceedings of the 9th International Conference on Learning Representations, 2021

2020
Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the Neural Tangent Kernel.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

The Break-Even Point on Optimization Trajectories of Deep Neural Networks.
Proceedings of the 8th International Conference on Learning Representations, 2020

2019
Deep Ensembles: A Loss Landscape Perspective.
CoRR, 2019

Emergent properties of the local geometry of neural loss landscapes.
CoRR, 2019

Stiffness: A New Perspective on Generalization in Neural Networks.
CoRR, 2019

Large Scale Structure of Neural Network Loss Landscapes.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

The Goldilocks Zone: Towards Better Understanding of Neural Network Loss Landscapes.
Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, 2019

2018
Adaptive Quantum State Tomography with Neural Networks.
CoRR, 2018

2017
Towards understanding feedback from supermassive black holes using convolutional neural networks.
CoRR, 2017

Gaussian Prototypical Networks for Few-Shot Learning on Omniglot.
CoRR, 2017
