We assess the certified robustness of models trained in a federated fashion.
We optimize the smoothing parameters per input point in randomized smoothing.
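As background for the randomized-smoothing items above, a minimal Monte Carlo sketch of smoothing-based certification follows; the base classifier `f` and the probability clamp are illustrative assumptions, and a rigorous version would use a Clopper-Pearson confidence bound rather than the raw empirical frequency.

```python
import random
from statistics import NormalDist

def certify(f, x, sigma, n=1000):
    """Monte Carlo sketch of randomized smoothing.

    f: base classifier mapping a feature list to a class label (assumption).
    Returns the smoothed prediction and a certified L2 radius, or
    (None, 0.0) when the top class is not a clear majority.
    """
    counts = {}
    for _ in range(n):
        # sample the base classifier under isotropic Gaussian noise
        noisy = [xi + random.gauss(0.0, sigma) for xi in x]
        c = f(noisy)
        counts[c] = counts.get(c, 0) + 1
    top = max(counts, key=counts.get)
    # crude estimate of the top-class probability, clamped away from 1.0;
    # a rigorous certificate would use a confidence lower bound instead
    p = min(counts[top] / n, 1.0 - 1e-6)
    if p <= 0.5:
        return None, 0.0  # abstain: no certified majority class
    # certified radius grows with sigma and with the top-class margin
    return top, sigma * NormalDist().inv_cdf(p)
```

Per-input-point optimization of the smoothing parameters amounts to choosing `sigma` separately for each `x` so that the returned radius is maximized.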
We design an anti-adversary layer that enhances the robustness of pretrained models against strong adversarial attacks.
We extend randomized smoothing to certify robustness against image deformations such as rotation, translation, scaling, and affine transformations.
We analyze how encouraging the features learnt by a DNN to be more semantically meaningful, via clustering, affects the DNN's PGD robustness.
We leverage test-time augmentation to enhance both the empirical and the certified robustness of DNNs.
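Test-time augmentation as used above can be sketched as averaging a classifier's scores over random augmentations of the input; the score-vector interface of `f` and the `augment` callable are illustrative assumptions, not a specific model's API.

```python
import random

def tta_predict(f, x, augment, n=16):
    """Predict by averaging f's class scores over n augmented copies of x.

    f: maps an input to a list of class scores (assumption).
    augment: returns a randomly perturbed copy of x (assumption).
    """
    total = None
    for _ in range(n):
        scores = f(augment(x))
        # accumulate scores elementwise across augmentations
        total = scores if total is None else [a + b for a, b in zip(total, scores)]
    # return the class with the highest averaged score
    return max(range(len(total)), key=total.__getitem__)
```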
We replace the first convolutional layer of deep neural networks with a Gabor layer to enhance network robustness.
We design an algorithm that adaptively adjusts the batch size for SGD.
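The adaptive batch-size idea can be illustrated with a simple heuristic schedule: double the batch whenever the sampling noise of the gradient estimate dominates the mean gradient signal. This is a sketch under that assumed rule, not the algorithm's actual criterion; `grad_fn` and the scalar parameter are illustrative.

```python
import random

def adaptive_batch_sgd(grad_fn, data, w, lr=0.05, b=8, b_max=128, steps=100):
    """SGD on a scalar parameter with a heuristic batch-size schedule.

    grad_fn(w, sample): per-sample gradient at w (assumption).
    """
    for _ in range(steps):
        batch = random.sample(data, min(b, len(data)))
        grads = [grad_fn(w, s) for s in batch]
        mean_g = sum(grads) / len(grads)
        var_g = sum((g - mean_g) ** 2 for g in grads) / max(len(grads) - 1, 1)
        # grow the batch when the variance of the mean-gradient estimate
        # (var_g / batch size) exceeds the squared mean gradient
        if var_g / len(grads) > mean_g ** 2:
            b = min(2 * b, b_max)
        w -= lr * mean_g
    return w, b
```

On a quadratic loss the batch stays small early (strong gradient signal) and grows as the iterate approaches the optimum, where the signal-to-noise ratio drops.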