KiYoon Yoo

Orcid: 0000-0002-6920-1607

According to our database, KiYoon Yoo authored at least 18 papers between 2020 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of five.


Bibliography

2024
Imperceptible Protection against Style Imitation from Diffusion Models.
CoRR, 2024

2023
Open Domain Generalization with a Single Network by Regularization Exploiting Pre-trained Features.
CoRR, 2023

Advancing Beyond Identification: Multi-bit Watermark for Language Models.
CoRR, 2023

Robust Natural Language Watermarking through Invariant Features.
CoRR, 2023

Self-Distilled Self-supervised Representation Learning.
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2023

Finding Efficient Pruned Network via Refined Gradients for Pruned Weights.
Proceedings of the 31st ACM International Conference on Multimedia, 2023

Robust Multi-bit Natural Language Watermarking through Invariant Features.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

2022
Detection of Word Adversarial Examples in Text Classification: Benchmark and Baseline via Robust Density Estimation.
CoRR, 2022

Model Compression via Position-Based Scaled Gradient.
IEEE Access, 2022

Backdoor Attacks in Federated Learning by Rare Embeddings and Gradient Ensembling.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022

Detection of Adversarial Examples in Text Classification: Benchmark and Baseline via Robust Density Estimation.
Findings of the Association for Computational Linguistics: ACL 2022, 2022

2021
Self-Distilled Self-Supervised Representation Learning.
CoRR, 2021

Self-Evolutionary Optimization for Pareto Front Learning.
CoRR, 2021

Dynamic Collective Intelligence Learning: Finding Efficient Sparse Model via Refined Gradients for Pruned Weights.
CoRR, 2021

2020
Asynchronous Edge Learning using Cloned Knowledge Distillation.
CoRR, 2020

On the Orthogonality of Knowledge Distillation with Other Techniques: From an Ensemble Perspective.
CoRR, 2020

Position-based Scaled Gradient for Model Quantization and Sparse Training.
CoRR, 2020

Position-based Scaled Gradient for Model Quantization and Pruning.
Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020