International Journal For Multidisciplinary Research

E-ISSN: 2582-2160     Impact Factor: 9.24


A Reactive Security Framework for Protecting AI Models from Adversarial Attacks: An Autoencoder-Based Approach

Author(s) Vasudhevan Sudharsanan
Country India
Abstract This paper proposes a reactive security framework for enhancing the resilience
of AI models against adversarial attacks [5, 6, 7, 8]. The framework leverages
runtime monitoring, anomaly detection, and model retraining to dynamically
adapt to evolving attack strategies. Anomaly detection is performed using an
autoencoder-based algorithm that identifies deviations from expected model
behavior [8, 9, 10]. Model retraining employs adversarial training to
"immunize" the model against similar attacks [5, 6]. We discuss the choice of
autoencoder architectures for different data types and detail the mathematical
foundations of both anomaly detection and adversarial training [3]. The
framework’s effectiveness is evaluated through simulations and benchmark
datasets, demonstrating its ability to secure AI models against diverse
adversarial attacks.
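
Illustrative sketch (not part of the published paper's code): a minimal PyTorch autoencoder-based anomaly detector of the kind the abstract describes, flagging inputs whose reconstruction error deviates from behavior expected on clean data. The fully connected architecture, latent size, and percentile-based threshold are assumptions made for illustration only.

    import torch
    import torch.nn as nn

    class Autoencoder(nn.Module):
        """Small fully connected autoencoder for flattened inputs (assumed architecture)."""
        def __init__(self, input_dim: int, latent_dim: int = 32):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(input_dim, 128), nn.ReLU(),
                nn.Linear(128, latent_dim),
            )
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 128), nn.ReLU(),
                nn.Linear(128, input_dim),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def reconstruction_error(model: Autoencoder, x: torch.Tensor) -> torch.Tensor:
        # Per-sample mean squared reconstruction error.
        with torch.no_grad():
            x_hat = model(x)
        return ((x - x_hat) ** 2).mean(dim=1)

    def flag_anomalies(model: Autoencoder, x: torch.Tensor, threshold: float) -> torch.Tensor:
        # Inputs whose reconstruction error exceeds the threshold are treated as
        # deviations from expected behavior, i.e. candidate adversarial inputs.
        return reconstruction_error(model, x) > threshold

    # Example usage (illustrative): the autoencoder would first be trained to
    # reconstruct clean data (training loop omitted); the threshold is then set
    # as a high percentile of reconstruction errors on clean validation samples.
    ae = Autoencoder(input_dim=784)
    clean = torch.rand(256, 784)
    threshold = torch.quantile(reconstruction_error(ae, clean), 0.99).item()
    suspect = flag_anomalies(ae, torch.rand(16, 784), threshold)

Flagged inputs would then trigger the framework's reactive step, e.g. adversarial retraining on similar perturbations, as described in the abstract.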
Field Computer > Network / Security
Published In Volume 6, Issue 6, November-December 2024
Published On 2024-12-10
Cite This A Reactive Security Framework for Protecting AI Models from Adversarial Attacks: An Autoencoder-Based Approach - Vasudhevan Sudharsanan - IJFMR Volume 6, Issue 6, November-December 2024. DOI 10.36948/ijfmr.2024.v06i06.32434
DOI https://doi.org/10.36948/ijfmr.2024.v06i06.32434
Short DOI https://doi.org/g8vgjv
