Sebastian Farquhar

Orcid: 0000-0002-9185-6415

According to our database, Sebastian Farquhar authored at least 34 papers between 2018 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Detecting hallucinations in large language models using semantic entropy.
Nat., June, 2024

Holistic Safety and Responsibility Evaluations of Advanced AI Models.
CoRR, 2024

Evaluating Frontier Models for Dangerous Capabilities.
CoRR, 2024

Discovering Agents (Abstract Reprint).
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

2023
Discovering agents.
Artif. Intell., September, 2023

Stochastic Batch Acquisition: A Simple Baseline for Deep Active Learning.
Trans. Mach. Learn. Res., 2023

Challenges with unsupervised LLM knowledge discovery.
CoRR, 2023

Model evaluation for extreme risks.
CoRR, 2023

Tracr: Compiled Transformers as a Laboratory for Interpretability.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Prediction-Oriented Bayesian Active Learning.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2023

Do Bayesian Neural Networks Need To Be Fully Stochastic?
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2023

2022
CLAM: Selective Clarification for Ambiguous Questions with Large Language Models.
CoRR, 2022

Understanding Approximation for Bayesian Inference in Neural Networks.
CoRR, 2022

Active Surrogate Estimators: An Active Learning Approach to Label-Efficient Model Evaluation.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Prioritized Training on Points that are Learnable, Worth Learning, and not yet Learnt.
Proceedings of the International Conference on Machine Learning, 2022

Prospect Pruning: Finding Trainable Weights at Initialization using Meta-Gradients.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Path-Specific Objectives for Safer Agent Incentives.
Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence, 2022

2021
Prioritized training on points that are learnable, worth learning, and not yet learned.
CoRR, 2021

A Simple Baseline for Batch Active Learning with Stochastic Acquisition Functions.
CoRR, 2021

Uncertainty Baselines: Benchmarks for Uncertainty & Robustness in Deep Learning.
CoRR, 2021

Evaluating Approximate Inference in Bayesian Deep Learning.
Proceedings of the NeurIPS 2021 Competitions and Demonstrations Track, 2021

Active Testing: Sample-Efficient Model Evaluation.
Proceedings of the 38th International Conference on Machine Learning, 2021

On Statistical Bias In Active Learning: How and When to Fix It.
Proceedings of the 9th International Conference on Learning Representations, 2021

2020
Single Shot Structured Pruning Before Training.
CoRR, 2020

Try Depth Instead of Weight Correlations: Mean-field is a Less Restrictive Assumption for Deeper Networks.
CoRR, 2020

Liberty or Depth: Deep Bayesian Neural Nets Do Not Need Complex Weight Posterior Approximations.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Radial Bayesian Neural Networks: Beyond Discrete Support In Large-Scale Bayesian Deep Learning.
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2020

2019
A Systematic Comparison of Bayesian Deep Learning Robustness in Diabetic Retinopathy Tasks.
CoRR, 2019

Radial Bayesian Neural Networks: Robust Variational Inference In Big Models.
CoRR, 2019

Differentially Private Continual Learning.
CoRR, 2019

A Unifying Bayesian View of Continual Learning.
CoRR, 2019

2018
Towards Robust Evaluations of Continual Learning.
CoRR, 2018

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.
CoRR, 2018
