Fanny Jourdan

Orcid: 0009-0002-9356-4907

According to our database, Fanny Jourdan authored at least 12 papers between 2023 and 2025.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2025
EuroBERT: Scaling Multilingual Encoders for European Languages.
CoRR, March, 2025

FairTranslate: an English-French Dataset for Gender Bias Evaluation in Machine Translation by Overcoming Gender Binarity.
Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency, 2025

ConSim: Measuring Concept-Based Explanations' Effectiveness with Automated Simulatability.
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

2024
Advancing Fairness in Natural Language Processing: From Traditional Methods to Explainability.
PhD thesis, 2024

Advancing Fairness in Natural Language Processing: From Traditional Methods to Explainability.
CoRR, 2024

2023
How Optimal Transport Can Tackle Gender Biases in Multi-Class Neural Network Classifiers for Job Recommendations.
Algorithms, March, 2023

TaCo: Targeted Concept Removal in Output Embeddings for NLP via Information Theory and Explainability.
CoRR, 2023

Are fairness metric scores enough to assess discrimination biases in machine learning?
CoRR, 2023

COCKATIEL: COntinuous Concept ranKed ATtribution with Interpretable ELements for explaining neural net classifiers on NLP tasks.
CoRR, 2023

Is a Fairness Metric Score Enough to Assess Discrimination Biases in Machine Learning?
Proceedings of the 2nd European Workshop on Algorithmic Fairness, 2023

Breaking Bias: How Optimal Transport Can Help to Tackle Gender Biases in NLP Based Job Recommendation Systems?
Proceedings of the 2nd European Workshop on Algorithmic Fairness, 2023

COCKATIEL: COntinuous Concept ranKed ATtribution with Interpretable ELements for explaining neural net classifiers on NLP.
Proceedings of the Findings of the Association for Computational Linguistics: ACL 2023, 2023
