Adversarially Robust Learning via Entropic Regularization

Jagatap, Gauri and Joshi, Ameya and Chowdhury, Animesh Basak and Garg, Siddharth and Hegde, Chinmay (2022) Adversarially Robust Learning via Entropic Regularization. Frontiers in Artificial Intelligence, 4. ISSN 2624-8212

Text: pubmed-zip/versions/2/package-entries/frai-04-780843-r1/frai-04-780843.pdf - Published Version (1MB)

Abstract

In this paper, we propose a new family of algorithms, ATENT, for training adversarially robust deep neural networks. We formulate a new loss function equipped with an additional entropic regularization term. This loss accounts for the contribution of adversarial samples drawn from a specially designed distribution over the data space, one that assigns high probability to points that have high loss and lie in the immediate neighborhood of training samples. Our proposed algorithms optimize this loss to seek adversarially robust valleys of the loss landscape. Our approach achieves competitive (or better) robust classification accuracy compared to several state-of-the-art robust learning approaches on benchmark datasets such as MNIST and CIFAR-10.
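To make the abstract's description concrete, the following is a minimal, illustrative PyTorch sketch (not the authors' reference implementation) of an entropically regularized adversarial loss: perturbations are sampled with a Langevin-style update that favors high-loss points inside an epsilon ball around each training sample, and the training loss averages over these sampled neighbors. The function name `entropic_adv_loss` and all hyperparameter values (eps, step sizes, temperature beta, sample count) are hypothetical placeholders, not values from the paper.

```python
import torch
import torch.nn.functional as F

def entropic_adv_loss(model, x, y, eps=0.03, steps=10, step_size=0.007,
                      noise_std=1e-3, num_samples=4):
    """Average cross-entropy over perturbations drawn (approximately) from a
    Gibbs-like distribution that favors high-loss points near each sample x."""
    sampled_losses = []
    for _ in range(num_samples):
        # Start from a random point inside the epsilon ball around x.
        delta = torch.empty_like(x).uniform_(-eps, eps)
        delta.requires_grad_(True)
        for _ in range(steps):
            loss = F.cross_entropy(model(x + delta), y)
            grad, = torch.autograd.grad(loss, delta)
            with torch.no_grad():
                # Langevin-style step: gradient ascent on the loss plus Gaussian
                # noise, then projection back onto the epsilon ball.
                delta += step_size * grad + noise_std * torch.randn_like(delta)
                delta.clamp_(-eps, eps)
        sampled_losses.append(F.cross_entropy(model(x + delta.detach()), y))
    # Averaging over sampled neighbors softens the worst-case (min-max) objective
    # used by standard adversarial training into an entropically smoothed one.
    return torch.stack(sampled_losses).mean()
```

In a training loop, this loss would simply replace the standard cross-entropy term, e.g. `loss = entropic_adv_loss(model, x, y); loss.backward()`. The averaging over loss-weighted neighbors, rather than a single worst-case perturbation, is what the entropic regularization described in the abstract is meant to capture.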

Item Type: Article
Subjects: Scholar Eprints > Multidisciplinary
Depositing User: Managing Editor
Date Deposited: 16 Mar 2023 09:33
Last Modified: 23 Sep 2024 04:54
URI: http://repository.stmscientificarchives.com/id/eprint/841
