Iason Gabriel

Orcid: 0000-0002-7552-4576

According to our database, Iason Gabriel authored at least 17 papers between 2020 and 2023.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2023
Sociotechnical Safety Evaluation of Generative AI Systems.
CoRR, 2023

Model evaluation for extreme risks.
CoRR, 2023

Representation in AI Evaluations.
Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023

2022
Manifestations of Xenophobia in AI Systems.
CoRR, 2022

A Human Rights-Based Approach to Responsible AI.
CoRR, 2022

Improving alignment of dialogue agents via targeted human judgements.
CoRR, 2022

In conversation with Artificial Intelligence: aligning language models with human values.
CoRR, 2022

Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models.
Proceedings of the Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022

Taxonomy of Risks posed by Language Models.
Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22), 2022

Power to the People? Opportunities and Challenges for Participatory AI.
Proceedings of the Equity and Access in Algorithms, Mechanisms, and Optimization, 2022

2021
Scaling Language Models: Methods, Analysis & Insights from Training Gopher.
CoRR, 2021

Ethical and social risks of harm from Language Models.
CoRR, 2021

Towards a Theory of Justice for Artificial Intelligence.
CoRR, 2021

Alignment of Language Agents.
CoRR, 2021

The Challenge of Value Alignment: from Fairer Algorithms to AI Safety.
CoRR, 2021

Modelling Cooperation in Network Games with Spatio-Temporal Complexity.
Proceedings of the 20th International Conference on Autonomous Agents and Multiagent Systems (AAMAS '21), 2021

2020
Artificial Intelligence, Values, and Alignment.
Minds Mach., 2020
