Sunipa Dev

Orcid: 0000-0002-6647-9662

According to our database, Sunipa Dev authored at least 32 papers between 2018 and 2024.

Bibliography

2024
VERB: Visualizing and Interpreting Bias Mitigation Techniques Geometrically for Word Representations.
ACM Trans. Interact. Intell. Syst., March, 2024

SeeGULL Multilingual: a Dataset of Geo-Culturally Situated Stereotypes.
CoRR, 2024

MiTTenS: A Dataset for Evaluating Misgendering in Translation.
CoRR, 2024

Beyond the Surface: A Global-Scale Analysis of Visual Stereotypes in Text-to-Image Generation.
CoRR, 2024

2023
PaLM: Scaling Language Modeling with Pathways.
J. Mach. Learn. Res., 2023

SoUnD Framework: Analyzing (So)cial Representation in (Un)structured (D)ata.
CoRR, 2023

PaLM 2 Technical Report.
CoRR, 2023

Building Socio-culturally Inclusive Stereotype Resources with Community Engagement.
Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

"I wouldn't say offensive but...": Disability-Centered Perspectives on Large Language Models.
Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023

The Tail Wagging the Dog: Dataset Construction Biases of Social Bias Benchmarks.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2023

SeeGULL: A Stereotype Benchmark with Broad Geo-Cultural Coverage Leveraging Generative Models.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

MISGENDERED: Limits of Large Language Models in Understanding Pronouns.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

2022
Cultural Re-contextualization of Fairness Research in Language Technologies in India.
CoRR, 2022

Auditing Algorithmic Fairness in Machine Learning for Health with Severity-Based LOGAN.
CoRR, 2022

DisinfoMeme: A Multimodal Dataset for Detecting Meme Intentionally Spreading Out Disinformation.
CoRR, 2022

Socially Aware Bias Measurements for Hindi Language Representations.
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022

On Measures of Biases and Harms in NLP.
Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022, 2022

Re-contextualizing Fairness in NLP: The Case of India.
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing, 2022

Representation Learning for Resource-Constrained Keyphrase Generation.
Findings of the Association for Computational Linguistics: EMNLP 2022, 2022

2021
Closed form word embedding alignment.
Knowl. Inf. Syst., 2021

What do Bias Measures Measure?
CoRR, 2021

VERB: Visualizing and Interpreting Bias Mitigation Techniques for Word Representations.
CoRR, 2021

An Interactive Visual Demo of Bias Mitigation Techniques for Word Representations From a Geometric Perspective.
Proceedings of the NeurIPS 2021 Competitions and Demonstrations Track, 2021

A Visual Tour of Bias Mitigation Techniques for Word Representations.
Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '21), 2021

Measures and Best Practices for Responsible AI.
Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '21), 2021

Harms of Gender Exclusivity and Challenges in Non-Binary Representation in Language Technologies.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021

OSCaR: Orthogonal Subspace Correction and Rectification of Biases in Word Embeddings.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021

2020
The Geometry of Distributed Representations for Better Alignment, Attenuated Bias, and Improved Interpretability.
PhD thesis, 2020

The Geometry of Distributed Representations for Better Alignment, Attenuated Bias, and Improved Interpretability.
CoRR, 2020

On Measuring and Mitigating Biased Inferences of Word Embeddings.
Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020

2019
Attenuating Bias in Word vectors.
Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, 2019

2018
Absolute Orientation for Word Embedding Alignment.
CoRR, 2018
