Matthias Aßenmacher

Orcid: 0000-0003-2154-5774

According to our database, Matthias Aßenmacher authored at least 19 papers between 2020 and 2023.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2023
Divergent Token Metrics: Measuring degradation to prune away LLM components - and optimize quantization.
CoRR, 2023

How Prevalent is Gender Bias in ChatGPT? - Exploring German and English ChatGPT Responses.
CoRR, 2023

Classifying multilingual party manifestos: Domain transfer across country, time, and genre.
CoRR, 2023

How Different Is Stereotypical Bias Across Languages?
CoRR, 2023

Multimodal Deep Learning.
CoRR, 2023

A tailored Handwritten-Text-Recognition System for Medieval Latin.
Proceedings of the Ancient Language Processing Workshop, 2023

ActiveGLAE: A Benchmark for Deep Active Learning with Transformers.
Proceedings of the Machine Learning and Knowledge Discovery in Databases: Research Track, 2023

Towards Enhancing Deep Active Learning with Weak Supervision and Constrained Clustering.
Proceedings of the Workshop on Interactive Adaptive Learning co-located with European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2023), 2023

Automatic Transcription of Handwritten Old Occitan Language.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

2022
On the Current State of Reproducibility and Reporting of Uncertainty for Aspect-Based Sentiment Analysis.
Proceedings of the Machine Learning and Knowledge Discovery in Databases, 2022

Pre-trained language models evaluating themselves - A comparative study.
Proceedings of the Third Workshop on Insights from Negative Results in NLP, 2022

2021
Comparability, evaluation and benchmarking of large pre-trained language models.
PhD thesis, 2021

Benchmarking down-scaled (not so large) pre-trained language models.
CoRR, 2021

Exploring Topic-Metadata Relationships with the STM: A Bayesian Approach.
CoRR, 2021

Re-Evaluating GermEval17 Using German Pre-Trained Language Models.
Proceedings of the Swiss Text Analytics Conference 2021, Winterthur, 2021

A New Benchmark for NLP in Social Sciences: Evaluating the Usefulness of Pre-trained Language Models for Classifying Open-ended Survey Responses.
Proceedings of the 13th International Conference on Agents and Artificial Intelligence, 2021

2020
Pre-trained language models as knowledge bases for Automotive Complaint Analysis.
CoRR, 2020

On the Comparability of Pre-trained Language Models.
Proceedings of the 5th Swiss Text Analytics Conference and the 16th Conference on Natural Language Processing, 2020

Evaluating Unsupervised Representation Learning for Detecting Stances of Fake News.
Proceedings of the 28th International Conference on Computational Linguistics, 2020
