Sayash Kapoor

ORCID: 0000-0001-5695-280X

According to our database, Sayash Kapoor authored at least 36 papers between 2018 and 2025.

Bibliography

2025
Bridging Prediction and Intervention Problems in Social Systems.
CoRR, July, 2025

Establishing Best Practices for Building Rigorous Agentic Benchmarks.
CoRR, July, 2025

A Different Approach to AI Safety: Proceedings from the Columbia Convening on Openness in Artificial Intelligence and AI Safety.
CoRR, June, 2025

Resist Platform-Controlled AI Agents and Champion User-Centric Agent Advocates.
CoRR, May, 2025

The Leaderboard Illusion.
CoRR, April, 2025

In-House Evaluation Is Not Enough: Towards Robust Third-Party Flaw Disclosure for General-Purpose AI.
CoRR, March, 2025

International AI Safety Report.
CoRR, January, 2025

AI Agents That Matter.
Trans. Mach. Learn. Res., 2025

The 2023 Foundation Model Transparency Index.
Trans. Mach. Learn. Res., 2025

The 2024 Foundation Model Transparency Index.
Trans. Mach. Learn. Res., 2025

The Reality of AI and Biorisk.
Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency, 2025

2024
CORE-Bench: Fostering the Credibility of Published Research Through a Computational Reproducibility Agent Benchmark.
Trans. Mach. Learn. Res., 2024

The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources.
Trans. Mach. Learn. Res., 2024

Inference Scaling fLaws: The Limits of LLM Resampling with Imperfect Verifiers.
CoRR, 2024

CORE-Bench: Fostering the Credibility of Published Research Through a Computational Reproducibility Agent Benchmark.
CoRR, 2024

The Foundation Model Transparency Index v1.1: May 2024.
CoRR, 2024

Towards a Framework for Openness in Foundation Models: Proceedings from the Columbia Convening on Openness in Artificial Intelligence.
CoRR, 2024

On the Societal Impact of Open Foundation Models.
CoRR, 2024

A Safe Harbor for AI Evaluation and Red Teaming.
CoRR, 2024

Promises and pitfalls of artificial intelligence for legal applications.
CoRR, 2024

Foundation Model Transparency Reports.
Proceedings of the Seventh AAAI/ACM Conference on AI, Ethics, and Society (AIES-24) - Full Archival Papers, October 21-23, 2024, San Jose, California, USA, 2024

2023
Leakage and the reproducibility crisis in machine-learning-based science.
Patterns, September, 2023

The Foundation Model Transparency Index.
CoRR, 2023

REFORMS: Reporting Standards for Machine Learning Based Science.
CoRR, 2023

Against Predictive Optimization: On the Legitimacy of Decision-Making Algorithms that Optimize Predictive Accuracy.
Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023

2022
Weaving Privacy and Power: On the Privacy Practices of Labor Organizers in the U.S. Technology Industry.
Proc. ACM Hum. Comput. Interact., 2022

Leakage and the Reproducibility Crisis in ML-based Science.
CoRR, 2022

The Worst of Both Worlds: A Comparative Analysis of Errors in Learning from Data in Psychology and Machine Learning.
Proceedings of the AIES '22: AAAI/ACM Conference on AI, Ethics, and Society, Oxford, United Kingdom, May 19, 2022

2021
The platform as the city.
Interactions, 2021

2019
Corruption-tolerant bandit learning.
Mach. Learn., 2019

A dashboard for controlling polarization in personalization.
AI Commun., 2019

Controlling Polarization in Personalization: An Algorithmic Framework.
Proceedings of the Conference on Fairness, Accountability, and Transparency, 2019

2018
An Algorithmic Framework to Control Bias in Bandit-based Personalization.
CoRR, 2018

Balanced News Using Constrained Bandit-based Personalization.
Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, 2018
