Abstract
Existing artificial neural network architectures are vulnerable to a variety of adversarial attacks. In a white-box adversarial evasion attack, an adversary exploits model gradients, using gradient-ascent methods to engineer data samples that the model will misclassify to the adversary's specification. Stochastic Gradient Switching is a novel defense approach in which each layer of a neural network is designed as an ensemble of unique layers, all fully connected to the previous layer ensemble. During inference, one layer is randomly selected from each ensemble for forward propagation, effectively sampling one of many unique sub-networks on each inference call. Stochastic Gradient Switching removes an attacker's ability to deterministically track model gradients, thwarting evasion attacks that rely on gradient-ascent optimization.
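A minimal sketch of the switching mechanism, assuming a PyTorch implementation; the class name StochasticSwitchingLayer, the ensemble size k, and the ReLU activation are illustrative assumptions, not details fixed by the paper:

```python
import random

import torch
import torch.nn as nn


class StochasticSwitchingLayer(nn.Module):
    """One layer ensemble: k candidate layers, one sampled per forward pass."""

    def __init__(self, in_features: int, out_features: int, k: int = 4):
        super().__init__()
        # k independently initialized candidate layers, each fully
        # connected to the previous ensemble's output.
        self.candidates = nn.ModuleList(
            nn.Linear(in_features, out_features) for _ in range(k)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Randomly select one candidate for this call; the composition of
        # choices across layers picks one of k^L possible sub-networks,
        # so an attacker cannot deterministically track gradients.
        layer = random.choice(self.candidates)
        return torch.relu(layer(x))


# Toy network: two switching layers of 4 candidates each, so a forward
# pass traverses one of 4^2 = 16 unique sub-networks.
net = nn.Sequential(
    StochasticSwitchingLayer(784, 256, k=4),
    StochasticSwitchingLayer(256, 128, k=4),
    nn.Linear(128, 10),
)
logits = net(torch.randn(1, 784))
```

Under these assumptions, repeated gradient queries against the model traverse different sub-networks, so successive gradient estimates need not agree.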