Kevin Klyman

ORCID: 0009-0003-2130-3529

According to our database, Kevin Klyman authored at least 26 papers between 2023 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2025
Do AI Companies Make Good on Voluntary Commitments to the White House?
CoRR, August, 2025

A Different Approach to AI Safety: Proceedings from the Columbia Convening on Openness in Artificial Intelligence and AI Safety.
CoRR, June, 2025

New Tools are Needed for Tracking Adherence to AI Model Behavioral Use Clauses.
CoRR, May, 2025

In-House Evaluation Is Not Enough: Towards Robust Third-Party Flaw Disclosure for General-Purpose AI.
CoRR, March, 2025

The 2023 Foundation Model Transparency Index.
Trans. Mach. Learn. Res., 2025

The 2024 Foundation Model Transparency Index.
Trans. Mach. Learn. Res., 2025

AIR-BENCH 2024: A Safety Benchmark based on Regulation and Policies Specified Risk Categories.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers.
Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency, 2025

Recourse, Repair, Reparation, & Prevention: A Stakeholder Analysis of AI Supply Chains.
Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency, 2025

Comparing Apples to Oranges: A Taxonomy for Navigating the Global Landscape of AI Regulation.
Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency, 2025

2024
The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources.
Trans. Mach. Learn. Res., 2024

Bridging the Data Provenance Gap Across Text, Speech and Video.
CoRR, 2024

Language model developers should report train-test overlap.
CoRR, 2024

AIR-Bench 2024: A Safety Benchmark Based on Risk Categories from Regulations and Policies.
CoRR, 2024

Consent in Crisis: The Rapid Decline of the AI Data Commons.
CoRR, 2024

The Foundation Model Transparency Index v1.1: May 2024.
CoRR, 2024

AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies.
CoRR, 2024

Introducing v0.5 of the AI Safety Benchmark from MLCommons.
CoRR, 2024

On the Societal Impact of Open Foundation Models.
CoRR, 2024

A Safe Harbor for AI Evaluation and Red Teaming.
CoRR, 2024

Acceptable Use Policies for Foundation Models.
Proceedings of the Seventh AAAI/ACM Conference on AI, Ethics, and Society (AIES-24) - Full Archival Papers, October 21-23, 2024, San Jose, California, USA, 2024

Foundation Model Transparency Reports.
Proceedings of the Seventh AAAI/ACM Conference on AI, Ethics, and Society (AIES-24) - Full Archival Papers, October 21-23, 2024, San Jose, California, USA, 2024

2023
The Foundation Model Transparency Index.
CoRR, 2023

