Hyun Kwon

ORCID: 0000-0003-1169-9892

Affiliations:
  • Korea Military Academy, Department of Artificial Intelligence and Data Science, Seoul, Republic of Korea
  • KAIST, Yuseong-gu, Daejeon, South Korea (PhD 2020)


According to our database, Hyun Kwon authored at least 52 papers between 2017 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2024
AdvGuard: Fortifying Deep Neural Networks Against Optimized Adversarial Example Attack.
IEEE Access, 2024

2023
Detecting textual adversarial examples through text modification on text classification systems.
Appl. Intell., August, 2023

Multi-targeted audio adversarial example for use against speech recognition systems.
Comput. Secur., May, 2023

Adversarial image perturbations with distortions weighted by color on deep neural networks.
Multim. Tools Appl., April, 2023

Audio adversarial detection through classification score on speech recognition systems.
Comput. Secur., March, 2023

Toward Selective Adversarial Attack for Gait Recognition Systems Based on Deep Neural Network.
IEICE Trans. Inf. Syst., February, 2023

Erratum to 'Ensemble transfer attack targeting text classification systems' [Computers & Security 117 (2022) 1-8/ 102695].
Comput. Secur., 2023

Dual-Targeted Textfooler Attack on Text Classification Systems.
IEEE Access, 2023

CloudNet: A LiDAR-Based Face Anti-Spoofing Model That Is Robust Against Light Variation.
IEEE Access, 2023

2022
Toward Selective Membership Inference Attack against Deep Learning Model.
IEICE Trans. Inf. Syst., November, 2022

Priority Evasion Attack: An Adversarial Example That Considers the Priority of Attack on Each Classifier.
IEICE Trans. Inf. Syst., November, 2022

Multi-Targeted Poisoning Attack in Deep Neural Networks.
IEICE Trans. Inf. Syst., November, 2022

Compliance-Driven Cybersecurity Planning Based on Formalized Attack Patterns for Instrumentation and Control Systems of Nuclear Power Plants.
Secur. Commun. Networks, 2022

BlindNet backdoor: Attack on deep neural network using blind watermark.
Multim. Tools Appl., 2022

AdvU-Net: Generating Adversarial Example Based on Medical Image and Targeting U-Net Model.
J. Sensors, 2022

Friend-guard adversarial noise designed for electroencephalogram-based brain-computer interface spellers.
Neurocomputing, 2022

Multi-Model Selective Backdoor Attack with Different Trigger Positions.
IEICE Trans. Inf. Syst., 2022

Ensemble transfer attack targeting text classification systems.
Comput. Secur., 2022

Optimized Adversarial Example With Classification Score Pattern Vulnerability Removed.
IEEE Access, 2022

2021
Diversity Adversarial Training against Adversarial Attack on Deep Neural Networks.
Symmetry, 2021

MedicalGuard: U-Net Model Robust against Adversarially Perturbed Images.
Secur. Commun. Networks, 2021

Classification score approach for detecting adversarial example in deep neural network.
Multim. Tools Appl., 2021

Adv-Plate Attack: Adversarially Perturbed Plate for License Plate Recognition System.
J. Sensors, 2021

SqueezeFace: Integrative Face Recognition Methods with LiDAR Sensors.
J. Sensors, 2021

Data Correction For Enhancing Classification Accuracy By Unknown Deep Neural Network Classifiers.
KSII Trans. Internet Inf. Syst., 2021

Vision Control Unit in Fully Self Driving Vehicles using Xilinx MPSoC and Opensource Stack.
Proceedings of the ASPDAC '21: 26th Asia and South Pacific Design Automation Conference, 2021

2020
Selective Audio Adversarial Example in Evasion Attack on Speech Recognition System.
IEEE Trans. Inf. Forensics Secur., 2020

CAPTCHA Image Generation: Two-Step Style-Transfer Learning in Deep Neural Networks.
Sensors, 2020

Acoustic-decoy: Detection of adversarial examples through audio modification on speech recognition system.
Neurocomputing, 2020

Multi-Targeted Backdoor: Indentifying Backdoor Attack for Multiple Deep Neural Networks.
IEICE Trans. Inf. Syst., 2020

Robust CAPTCHA Image Generation Enhanced with Adversarial Example Methods.
IEICE Trans. Inf. Syst., 2020

Detecting Backdoor Attacks via Class Difference in Deep Neural Networks.
IEEE Access, 2020

FriendNet Backdoor: Indentifying Backdoor Attack that is safe for Friendly Deep Neural Network.
Proceedings of the ICSIM '20: The 3rd International Conference on Software Engineering and Information Management, 2020

TargetNet Backdoor: Attack on Deep Neural Network with Use of Different Triggers.
Proceedings of the ICIIT 2020: 5th International Conference on Intelligent Information Technology, 2020

2019
Selective Poisoning Attack on Deep Neural Networks †.
Symmetry, 2019

Rootkit inside GPU Kernel Execution.
IEICE Trans. Inf. Syst., 2019

Restricted Evasion Attack: Generation of Restricted-Area Adversarial Example.
IEEE Access, 2019

Selective Untargeted Evasion Attack: An Adversarial Example That Will Not Be Classified as Certain Avoided Classes.
IEEE Access, 2019

CAPTCHA Image Generation Using Style Transfer Learning in Deep Neural Network.
Proceedings of the Information Security Applications - 20th International Conference, 2019

Face Friend-Safe Adversarial Example on Face Recognition System.
Proceedings of the Eleventh International Conference on Ubiquitous and Future Networks, 2019

Priority Adversarial Example in Evasion Attack on Multiple Deep Neural Networks.
Proceedings of the International Conference on Artificial Intelligence in Information and Communication, 2019

POSTER: Detecting Audio Adversarial Example through Audio Modification.
Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, 2019

Selective Poisoning Attack on Deep Neural Network to Induce Fine-Grained Recognition Error.
Proceedings of the 2nd IEEE International Conference on Artificial Intelligence and Knowledge Engineering, 2019

2018
Random Untargeted Adversarial Example on Deep Neural Network.
Symmetry, 2018

CAPTCHA Image Generation Systems Using Generative Adversarial Networks.
IEICE Trans. Inf. Syst., 2018

Advanced Ensemble Adversarial Example on Unknown Deep Neural Network Classifiers.
IEICE Trans. Inf. Syst., 2018

Friend-safe evasion attack: An adversarial example that is correctly recognized by a friendly classifier.
Comput. Secur., 2018

Multi-Targeted Adversarial Example in Evasion Attack on Deep Neural Network.
IEEE Access, 2018

One-Pixel Adversarial Example that Is Safe for Friendly Deep Neural Networks.
Proceedings of the Information Security Applications - 19th International Conference, 2018

Fooling a Neural Network in Military Environments: Random Untargeted Adversarial Example.
Proceedings of the 2018 IEEE Military Communications Conference, 2018

POSTER: Zero-Day Evasion Attack Analysis on Race between Attack and Defense.
Proceedings of the 2018 on Asia Conference on Computer and Communications Security, 2018

2017
Friend-Safe Adversarial Examples in an Evasion Attack on a Deep Neural Network.
Proceedings of the Information Security and Cryptology - ICISC 2017 - 20th International Conference, Seoul, South Korea, November 29, 2017