Category Archive: Machine Learning

Stochastic Gradient Switching for Defense Against White Box Adversarial Evasion Attacks


Existing artificial neural network frameworks are vulnerable to a variety of adversarial attacks. Attackers employ white box adversarial evasion attacks by exploiting model gradients with gradient ascent methods to engineer data samples that the model will misclassify to the adversary’s specification. Stochastic Gradient Switching is a novel defense approach in which each layer in a neural network is designed as an ensemble of unique layers, all fully connected to the previous layer ensemble. During inference, one layer is randomly selected from each ensemble for forward propagation, effectively selecting one of many unique sub-networks on each inference call. Stochastic gradient switching removes an attacker’s ability to deterministically track model gradients, subduing evasion attacks that rely on gradient ascent optimization.
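The idea can be sketched in a few lines of NumPy. This is a toy illustration, not the article’s implementation: the layer sizes, ensemble size, and function names below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_ensemble_layer(n_in, n_out, k):
    """An ensemble of k independently initialized dense layers."""
    return [(rng.standard_normal((n_in, n_out)) * 0.1,
             np.zeros(n_out)) for _ in range(k)]

def forward(x, ensembles):
    """Randomly pick one member from each ensemble per inference call."""
    for members in ensembles:
        W, b = members[rng.integers(len(members))]
        x = np.maximum(0, x @ W + b)  # ReLU activation
    return x

# A 3-layer network where each layer is an ensemble of 4 candidates:
# 4**3 = 64 distinct sub-networks the attacker could be routed through.
net = [make_ensemble_layer(8, 16, 4),
       make_ensemble_layer(16, 16, 4),
       make_ensemble_layer(16, 3, 4)]

x = rng.standard_normal((1, 8))
y1 = forward(x, net)
y2 = forward(x, net)
# Repeated calls generally traverse different sub-networks, so the
# gradient an attacker estimates from queries is non-deterministic.
```

Because the sub-network changes between calls, two gradient evaluations at the same input need not agree, which is exactly what breaks iterative gradient-ascent attacks.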

AIOps For Cluster Orchestration

In the fields of information technology and systems management, IT operations (ITOps) is an approach or method to retrieve, analyze, and report data for IT operations. AIOps is the class of methods and procedures associated with the application of Artificial Intelligence and Machine Learning for ITOps. Mindboard is seeking to apply AIOps to improve the operations of container orchestration. In the cloud computing environment, AIOps can be used in conjunction with container orchestration to perform capacity management, event monitoring, and alerting/remediation for micro-services within a service network.

Improving Classification Accuracy with ACGAN (Keras)

Supervised machine learning uses labeled data to train models for classification or regression over a set of targets. The performance of a model is a function of the data that is used to train it. The less data that is available, the harder it is for a model to learn to make accurate predictions on unseen data.

Read the rest of the article at Mindboard’s Medium channel.

Visualizing How Convolutional Neural Networks “See”

Convolutional Neural Networks (CNNs) learn image recognition much the way the human visual system does. They scan images using filters, each of which recognizes a unique feature. Early layers identify low-level features such as curves and edges, while deeper layers identify high-level features such as eyes or windows. We use the Keras library to visualize what a CNN is learning to look for when making a certain classification.
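One common way to do this in Keras is activation maximization: run gradient ascent on the input image so it maximally excites a chosen filter. The sketch below uses a tiny untrained CNN as a hypothetical stand-in for the article’s model; layer names and sizes are illustrative.

```python
import tensorflow as tf

tf.random.set_seed(0)

# A tiny untrained CNN stands in for the article's model (hypothetical).
inputs = tf.keras.Input((64, 64, 3))
x = tf.keras.layers.Conv2D(8, 3, activation="relu", name="conv1")(inputs)
x = tf.keras.layers.Conv2D(16, 3, activation="relu", name="conv2")(x)
model = tf.keras.Model(inputs, x)

# Activation maximization: gradient *ascent* on the input image to find
# the pattern that most excites one filter of an intermediate layer.
extractor = tf.keras.Model(model.input, model.get_layer("conv2").output)
img = tf.Variable(tf.random.uniform((1, 64, 64, 3)))
for _ in range(20):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(extractor(img)[..., 0])  # filter index 0
    grad = tape.gradient(loss, img)
    img.assign_add(grad / (tf.norm(grad) + 1e-8))      # normalized step
# `img` now approximates the visual pattern that filter responds to.
```

With a trained network, plotting `img` for filters at different depths shows the progression from edge-like patterns to object-like ones.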

Read the rest of the article at Mindboard’s Medium channel.

Policy Assessment Using ML

In the 21st century, every person and organization, both public and private, is somehow connected. So, being able to quickly understand and efficiently analyze whether your third-party policy documents meet standards such as NIST 800-171, ISO 27001, and ISO 9001 is critical to the success of your business. Current policy assessment tools are manual, inefficient, and don’t adequately reduce risk.

We at Mindboard developed a platform to solve these problems. By combining machine learning, semantic technology, and a repository of standards-compliant model documents, we provide an advanced and efficient methodology for automating the evaluation of policy documents.

Read the rest of the article at Mindboard’s Medium channel.

Generating Adversarial Samples in Keras (Tutorial)

As deep learning technologies power increasingly more services, associated security risks become more critical to address. Adversarial Machine Learning is a branch of machine learning that exploits the mathematics underlying deep learning systems in order to evade, explore, and/or poison machine learning models. Evasion attacks are the most common adversarial attack method due to their ease of implementation and potential for being highly disruptive. During an evasion attack, the adversary tries to evade a fully trained model by engineering samples to be misclassified by the model. This attack does not assume any influence over the training data.

Evasion attacks have been demonstrated in the context of autonomous vehicles where the adversary manipulates traffic signs to confuse the learning model. Research suggests that deep neural networks are susceptible to adversarial based evasion attacks due to their high degree of non-linearity as well as insufficient model averaging and regularization.
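The simplest such evasion attack, and a common starting point in Keras tutorials, is the Fast Gradient Sign Method (FGSM). The sketch below is a minimal illustration, not the article’s code: the untrained toy classifier and the `fgsm` helper are hypothetical stand-ins.

```python
import tensorflow as tf

def fgsm(model, x, y_true, eps=0.05):
    """Fast Gradient Sign Method: a single gradient-ascent step on the
    loss with respect to the *input* -- the simplest white-box evasion."""
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)  # x is a plain tensor, so watch it explicitly
        loss = tf.keras.losses.sparse_categorical_crossentropy(
            y_true, model(x))
    grad = tape.gradient(loss, x)
    # Nudge every pixel by eps in the direction that increases the loss.
    return tf.clip_by_value(x + eps * tf.sign(grad), 0.0, 1.0)

# Toy demo on a small untrained classifier (a stand-in for a real model):
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
x = tf.random.uniform((1, 28, 28))
x_adv = fgsm(model, x, y_true=tf.constant([3]))
```

The perturbation is bounded by `eps` per pixel, so the adversarial sample stays visually close to the original while pushing the model’s loss upward.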

Read the rest of the article at Mindboard’s Medium channel.

Active Learning for Fast Data Set Labeling

Active learning is a special case of machine learning in which a model can query a user for input. In this post, we will see how we can use active learning to label large data sets. For most machine learning tasks, large amounts of labeled data are needed for model training. However, the process of labeling data can be extremely time consuming and/or expensive. Using active learning, we can leverage a classification model to do most of the labeling for us, so that we only need to label samples manually when it is most needed.
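A minimal uncertainty-sampling loop illustrates the idea. This sketch is not the article’s code: the synthetic two-blob data set and the use of scikit-learn’s `LogisticRegression` are assumptions for the sake of a self-contained example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "unlabeled" pool: two Gaussian blobs in 2-D.
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)  # oracle labels, hidden from the model

labeled = list(range(5)) + list(range(200, 205))  # small seed set
for _ in range(5):                                # active-learning rounds
    clf = LogisticRegression().fit(X[labeled], y[labeled])
    probs = clf.predict_proba(X)
    uncertainty = 1 - probs.max(axis=1)  # low max-probability = uncertain
    uncertainty[labeled] = -1            # never re-query labeled points
    query = int(uncertainty.argmax())    # sample nearest the boundary
    labeled.append(query)                # "ask the user" for this label
```

Each round, the model asks only for the label it is least sure about, so human effort concentrates on the most informative samples while the rest of the pool can be labeled by the model itself.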

Read the rest of the article at Mindboard’s Medium channel.

Serving Machine Learning Models Using TensorFlow Serving

Exploring how TensorFlow models can be served using TensorFlow Serving…

Read the article at Mindboard’s Medium channel.

Deploying Machine Learning Models Using Docker

Productionize the Flask API for deployment using Docker via nginx, gunicorn and Docker Compose to create a scalable template for deploying machine learning models.

Read the rest of the article at Mindboard’s Medium channel.

Investigating RNN Memory Stability

A Recurrent Neural Network (RNN) is a class of artificial neural network that contains connections along a temporal axis, producing a functioning memory of prior network inferences that influences the network’s output. Two of the most common types of RNN cells are the Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU). Both are designed for long-term memory capability: in each case, the RNN cell maintains a hidden memory state that undergoes an alteration after every inference call.
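The gated update of that hidden state can be sketched for a GRU in plain NumPy. This is an illustrative toy, not the article’s experiment; the random weights and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def gru_cell(x, h, params):
    """One GRU step: gates decide how much of the prior hidden
    state h is kept versus overwritten by the candidate state."""
    Wz, Wr, Wh = params
    xh = np.concatenate([x, h])
    z = sigmoid(Wz @ xh)                                # update gate
    r = sigmoid(Wr @ xh)                                # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([x, r * h]))  # candidate state
    return (1 - z) * h + z * h_tilde                    # blend old and new

n_in, n_h = 4, 8
params = [rng.standard_normal((n_h, n_in + n_h)) * 0.3 for _ in range(3)]

h = np.zeros(n_h)
for t in range(10):  # the hidden state is altered on every inference call
    h = gru_cell(rng.standard_normal(n_in), h, params)
```

The update gate `z` interpolates between the old memory and the new candidate, which is what lets the cell retain, or gradually drift away from, information across many steps; that drift is what a memory-stability investigation measures.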

Read the rest of the article at Mindboard’s Medium channel.