Mehdi Rezagholizadeh

ORCID: 0000-0003-4014-6007

According to our database, Mehdi Rezagholizadeh authored at least 86 papers between 2013 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.


Bibliography

2024
QDyLoRA: Quantized Dynamic Low-Rank Adaptation for Efficient Large Language Model Tuning.
CoRR, 2024

Beyond the Limits: A Survey of Techniques to Extend the Context Length in Large Language Models.
CoRR, 2024

On the importance of Data Scale in Pretraining Arabic Language Models.
CoRR, 2024

Sorted LLaMA: Unlocking the Potential of Intermediate Layers of Large Language Models for Dynamic Inference.
Proceedings of the Findings of the Association for Computational Linguistics: EACL 2024, 2024

2023
NoMIRACL: Knowing When You Don't Know for Robust Multilingual Retrieval-Augmented Generation.
CoRR, 2023

On the Impact of Quantization and Pruning of Self-Supervised Speech Models for Downstream Speech Recognition Tasks "In-the-Wild".
CoRR, 2023

Sorted LLaMA: Unlocking the Potential of Intermediate Layers of Large Language Models for Dynamic Inference Using Sorted Fine-Tuning (SoFT).
CoRR, 2023

SortedNet, a Place for Every Network and Every Network in its Place: Towards a Generalized Solution for Training Many-in-One Neural Networks.
CoRR, 2023

Multimodal Audio-textual Architecture for Robust Spoken Language Understanding.
CoRR, 2023

On the Transferability of Whisper-based Representations for "In-the-Wild" Cross-Task Downstream Speech Applications.
CoRR, 2023

An Exploration into the Performance of Unsupervised Cross-Task Speech Representations for "In the Wild" Edge Applications.
CoRR, 2023

Improved knowledge distillation by utilizing backward pass knowledge in neural networks.
CoRR, 2023

RobustDistiller: Compressing Universal Speech Representations for Enhanced Environment Robustness.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023

CIRAL at FIRE 2023: Cross-Lingual Information Retrieval for African Languages.
Proceedings of the 15th Annual Meeting of the Forum for Information Retrieval Evaluation, 2023

Efficient Classification of Long Documents via State-Space Models.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

Measuring the Knowledge Acquisition-Utilization Gap in Pretrained Language Models.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2023, 2023

DyLoRA: Parameter-Efficient Tuning of Pre-trained Models using Dynamic Search-Free Low-Rank Adaptation.
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, 2023

Towards Fine-tuning Pre-trained Language Models with Integer Forward and Backward Propagation.
Proceedings of the Findings of the Association for Computational Linguistics: EACL 2023, 2023

Do we need Label Regularization to Fine-tune Pre-trained Language Models?
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, 2023

Practical Takes on Federated Learning with Pretrained Language Models.
Proceedings of the Findings of the Association for Computational Linguistics: EACL 2023, 2023

On the utility of enhancing BERT syntactic bias with Token Reordering Pretraining.
Proceedings of the 27th Conference on Computational Natural Language Learning, 2023

LABO: Towards Learning Optimal Label Regularization via Bi-level Optimization.
Proceedings of the Findings of the Association for Computational Linguistics: ACL 2023, 2023

Attribute Controlled Dialogue Prompting.
Proceedings of the Findings of the Association for Computational Linguistics: ACL 2023, 2023

Evaluating Embedding APIs for Information Retrieval.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics: Industry Track, 2023

AraMUS: Pushing the Limits of Data and Model Scale for Arabic Natural Language Processing.
Proceedings of the Findings of the Association for Computational Linguistics: ACL 2023, 2023

2022
KronA: Parameter Efficient Tuning with Kronecker Adapter.
CoRR, 2022

Improving the Robustness of DistilHuBERT to Unseen Noisy Conditions via Data Augmentation, Curriculum Learning, and Multi-Task Enhancement.
CoRR, 2022

Making a MIRACL: Multilingual Information Retrieval Across a Continuum of Languages.
CoRR, 2022

Integer Fine-tuning of Transformer-based Models.
CoRR, 2022

Towards Understanding Label Regularization for Fine-tuning Pre-trained Language Models.
CoRR, 2022

Revisiting Pre-trained Language Models and their Evaluation for Arabic Natural Language Understanding.
CoRR, 2022

Learning functions on multiple sets using multi-set transformers.
Proceedings of the Uncertainty in Artificial Intelligence, 2022

Simple Yet Effective Neural Ranking and Reranking Baselines for Cross-Lingual Information Retrieval.
Proceedings of the Thirty-First Text REtrieval Conference, 2022

Huawei Noah's Ark Lab at TREC NeuCLIR 2022.
Proceedings of the Thirty-First Text REtrieval Conference, 2022

KroneckerBERT: Significant Compression of Pre-trained Language Models Through Kronecker Decomposition and Knowledge Distillation.
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022

RAIL-KD: RAndom Intermediate Layer Mapping for Knowledge Distillation.
Proceedings of the Findings of the Association for Computational Linguistics: NAACL 2022, 2022

Improving Generalization of Pre-trained Language Models via Stochastic Weight Averaging.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2022, 2022

Continuation KD: Improved Knowledge Distillation through the Lens of Continuation Optimization.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2022, 2022

Revisiting Pre-trained Language Models and their Evaluation for Arabic Natural Language Processing.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022

Dynamic Position Encoding for Transformers.
Proceedings of the 29th International Conference on Computational Linguistics, 2022

Pro-KD: Progressive Distillation by Following the Footsteps of the Teacher.
Proceedings of the 29th International Conference on Computational Linguistics, 2022

CILDA: Contrastive Data Augmentation Using Intermediate Layer Knowledge Distillation.
Proceedings of the 29th International Conference on Computational Linguistics, 2022

When Chosen Wisely, More Data Is What You Need: A Universal Sample-Efficient Strategy For Data Augmentation.
Proceedings of the Findings of the Association for Computational Linguistics: ACL 2022, 2022

Kronecker Decomposition for GPT Compression.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2022

From Fully Trained to Fully Random Embeddings: Improving Neural Machine Translation with Compact Word Embedding Tables.
Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence, 2022

2021
Context-aware Adversarial Training for Name Regularity Bias in Named Entity Recognition.
Trans. Assoc. Comput. Linguistics, 2021

JABER: Junior Arabic BERt.
CoRR, 2021

Pro-KD: Progressive Distillation by Following the Footsteps of the Teacher.
CoRR, 2021

A Short Study on Compressing Decoder-Based Language Models.
CoRR, 2021

KroneckerBERT: Learning Kronecker Decomposition for Pre-trained Language Models via Knowledge Distillation.
CoRR, 2021

How to Select One Among All? An Extensive Empirical Study Towards the Robustness of Knowledge Distillation in Natural Language Understanding.
CoRR, 2021

Improving Neural Machine Translation with Compact Word Embedding Tables.
CoRR, 2021

Robust Embeddings Via Distributions.
CoRR, 2021

NATURE: Natural Auxiliary Text Utterances for Realistic Spoken Language Evaluation.
Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, 2021

Transformer-Based ASR Incorporating Time-Reduction Layer and Fine-Tuning with Self-Knowledge Distillation.
Proceedings of Interspeech 2021, 22nd Annual Conference of the International Speech Communication Association, Brno, Czechia, 2021

Fine-Tuning of Pre-Trained End-to-End Speech Recognition with Generative Adversarial Networks.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2021

Universal-KD: Attention-based Output-Grounded Intermediate Layer Knowledge Distillation.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021

Towards Zero-Shot Knowledge Distillation for Natural Language Processing.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021

RW-KD: Sample-wise Loss Terms Re-Weighting for Knowledge Distillation.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2021, 2021

How to Select One Among All? An Empirical Study Towards the Robustness of Knowledge Distillation in Natural Language Understanding.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2021, 2021

Annealing Knowledge Distillation.
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, 2021

Knowledge Distillation with Noisy Labels for Natural Language Understanding.
Proceedings of the Seventh Workshop on Noisy User-generated Text, 2021

MATE-KD: Masked Adversarial TExt, a Companion to Knowledge Distillation.
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021

Not Far Away, Not So Close: Sample Efficient Nearest Neighbour Data Augmentation via MiniMax.
Proceedings of the Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, 2021

End-to-End Self-Debiasing Framework for Robust NLU Training.
Proceedings of the Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, 2021

ALP-KD: Attention-Based Layer Projection for Knowledge Distillation.
Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021

2020
From Unsupervised Machine Translation to Adversarial Text Generation.
Proceedings of the 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020

Why Skip If You Can Combine: A Simple Knowledge Distillation Technique for Intermediate Layers.
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020

Fully Quantized Transformer for Machine Translation.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2020, 2020

Improving Word Embedding Factorization for Compression using Distilled Nonlinear Neural Decomposition.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2020, 2020

2019
Fully Quantizing a Simplified Transformer for End-to-end Speech Recognition.
CoRR, 2019

Fully Quantized Transformer for Improved Translation.
CoRR, 2019

Distilled embedding: non-linear embedding factorization using knowledge distillation.
CoRR, 2019

TextKD-GAN: Text Generation using Knowledge Distillation and Generative Adversarial Networks.
CoRR, 2019

Bilingual-GAN: A Step Towards Parallel Text Generation.
CoRR, 2019

Latent Code and Text-based Generative Adversarial Networks for Soft-text Generation.
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019

TextKD-GAN: Text Generation Using Knowledge Distillation and Generative Adversarial Networks.
Proceedings of the Advances in Artificial Intelligence, 2019

SALSA-TEXT: Self Attentive Latent Space Based Adversarial Text Generation.
Proceedings of the Advances in Artificial Intelligence, 2019

EditNTS: An Neural Programmer-Interpreter Model for Sentence Simplification through Explicit Editing.
Proceedings of the 57th Conference of the Association for Computational Linguistics, 2019

2018
Reg-Gan: Semi-Supervised Learning Based on Generative Adversarial Networks for Regression.
Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018

2016
A Retargeting Approach for Mesopic Vision: Simulation and Compensation.
Proceedings of the Color Imaging XXI: Displaying, Processing, Hardcopy, and Applications, 2016

2015
Image Sensor Modeling: Noise and Linear Transformation Impacts on the Color Gamut.
Proceedings of the 12th Conference on Computer and Robot Vision, 2015

2014
Image Sensor Modeling: Color Measurement at Low Light Levels.
Proceedings of the 22nd Color and Imaging Conference, 2014

Photon Detection and Color Perception at Low Light Levels.
Proceedings of the Canadian Conference on Computer and Robot Vision, 2014

2013
Maximum Entropy Spectral Modeling Approach to Mesopic Tone Mapping.
Proceedings of the 21st Color and Imaging Conference, 2013

Edge-Based and Efficient Chromaticity Spatio-spectral Models for Color Constancy.
Proceedings of the Tenth Conference on Computer and Robot Vision, 2013
