Yearly Archive 2019

Visualizing How Convolution Neural Networks “See”

Convolutional Neural Networks (CNNs) learn image recognition much the way the human visual system does: they scan images with filters, each of which recognizes a distinct feature. Earlier layers identify low-level features such as curves and edges, while deeper layers identify high-level features such as eyes or windows. We use the Keras library to visualize what a CNN is learning to look for when it makes a particular classification.
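
As a minimal illustration of the approach (not the article's exact code), the sketch below extracts and plots the feature maps of one early convolutional layer of a pretrained VGG16; the layer name and image file are placeholder assumptions.

```python
# A minimal sketch of inspecting what a CNN "sees": pass an image through a
# pretrained VGG16 and plot the activations of an early convolutional layer.
# The layer name and image path are assumptions, not the article's code.
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image
from tensorflow.keras.models import Model

base = VGG16(weights="imagenet", include_top=False)
# Build a model that outputs the feature maps of one intermediate layer.
activation_model = Model(inputs=base.input,
                         outputs=base.get_layer("block1_conv2").output)

img = image.load_img("cat.jpg", target_size=(224, 224))  # hypothetical image
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
feature_maps = activation_model.predict(x)  # shape: (1, 224, 224, 64)

# Plot the first 16 filter responses; bright regions show what each filter reacts to.
fig, axes = plt.subplots(4, 4, figsize=(8, 8))
for i, ax in enumerate(axes.flat):
    ax.imshow(feature_maps[0, :, :, i], cmap="viridis")
    ax.axis("off")
plt.show()
```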

Read the rest of the article at Mindboard’s Medium channel.

Creating Custom Lambda Layers

In late 2018, AWS announced two new features for Lambda that make serverless deployment much easier. They are:

· Lambda Layers — a way to manage code and dependencies across multiple Lambda functions (a minimal sketch follows this list).

· Lambda Runtime API — a way to develop Lambda functions in any programming language, or a specific version of a language.
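
As a rough illustration of the first feature, the sketch below publishes a layer and attaches it to a function with boto3; the layer name, zip file, and function name are placeholders rather than anything from the article.

```python
# A hedged sketch of publishing a Lambda layer with boto3 and attaching it to
# an existing function. Names, file paths, and the runtime list are illustrative.
import boto3

lambda_client = boto3.client("lambda")

# Publish a zip of dependencies (e.g. built under python/lib/python3.8/site-packages/).
with open("my-dependencies-layer.zip", "rb") as f:
    layer = lambda_client.publish_layer_version(
        LayerName="shared-dependencies",        # hypothetical layer name
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.8"],
    )

# Attach the new layer version to an existing function so its own package stays small.
lambda_client.update_function_configuration(
    FunctionName="my-inference-function",       # hypothetical function name
    Layers=[layer["LayerVersionArn"]],
)
```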

Read the rest of the article at Mindboard’s Medium channel.

Policy Assessment Using ML

In the 21st century, every person and organization, public and private, is somehow connected. So, being able to quickly understand and efficiently analyze whether your third-party policy documents, such as NIST 800-171, ISO 27001, and ISO 9001, meet the standards you set for them is critical to the success of your business. Current policy assessment tools are manual, inefficient, and don’t adequately reduce risk.

We at Mindboard have developed a platform to solve these problems. By combining machine learning, semantic technology, and a repository of standards-compliant model documents, we provide the most advanced and efficient methodology for automating the evaluation of policy documents.
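
Purely as an illustration of the underlying idea (and not the platform's actual implementation), a policy document can be scored against a reference standard with a simple TF-IDF cosine similarity; the file names and threshold below are assumptions.

```python
# Illustrative sketch only: score how closely a policy document matches a
# reference standard using TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

with open("company_policy.txt") as f:          # hypothetical policy document
    policy_text = f.read()
with open("nist_800_171_control.txt") as f:    # hypothetical standard excerpt
    standard_text = f.read()

vectorizer = TfidfVectorizer(stop_words="english")
vectors = vectorizer.fit_transform([policy_text, standard_text])

score = cosine_similarity(vectors[0], vectors[1])[0, 0]
print(f"Similarity to standard: {score:.2f}")
if score < 0.5:                                # threshold chosen for illustration
    print("Policy may not adequately cover this control; flag for review.")
```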

Read the rest of the article at Mindboard’s Medium channel.

Deploying Deep Learning Models on AWS Lambda

AWS Lambda is a serverless computing service provided by Amazon Web Services. A serverless architecture is a stateless compute container designed for event-driven solutions, much like a microservice architecture, in which monolithic applications are broken into smaller services that are easier to code, manage, and scale.
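
As a hedged sketch of what such a deployment might look like, the handler below loads a Keras model once per container and serves predictions for event payloads; the model path and request schema are assumptions, not the article's actual setup.

```python
# A minimal sketch of a Lambda handler serving a deep learning model.
# The model file location and event schema are assumptions.
import json
import numpy as np
from tensorflow.keras.models import load_model

# Load the model once per container (outside the handler) so warm invocations reuse it.
model = load_model("/opt/model.h5")  # e.g. shipped via a Lambda layer mounted at /opt

def handler(event, context):
    # Expect a JSON body like {"features": [[...], [...]]}
    body = json.loads(event.get("body", "{}"))
    x = np.array(body["features"], dtype=np.float32)
    preds = model.predict(x).tolist()
    return {
        "statusCode": 200,
        "body": json.dumps({"predictions": preds}),
    }
```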

Read the rest of the article at Mindboard’s Medium channel.

Generating Adversarial Samples in Keras (Tutorial)

As deep learning technologies power increasingly more services, associated security risks become more critical to address. Adversarial Machine Learning is a branch of machine learning that exploits the mathematics underlying deep learning systems in order to evade, explore, and/or poison machine learning models. Evasion attacks are the most common adversarial attack method due to their ease of implementation and potential for being highly disruptive. During an evasion attack, the adversary tries to evade a fully trained model by engineering samples to be misclassified by the model. This attack does not assume any influence over the training data.

Evasion attacks have been demonstrated in the context of autonomous vehicles, where the adversary manipulates traffic signs to confuse the learning model. Research suggests that deep neural networks are susceptible to adversarial evasion attacks due to their high degree of non-linearity as well as insufficient model averaging and regularization.
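
One common evasion technique is the fast gradient sign method (FGSM). The sketch below shows the core idea in Keras/TensorFlow; the pretrained MobileNetV2, epsilon value, and input handling are illustrative assumptions rather than the article's code.

```python
# A hedged sketch of FGSM: perturb an input in the direction that increases the
# model's loss so that an otherwise well-trained classifier misclassifies it.
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input

model = MobileNetV2(weights="imagenet")
loss_fn = tf.keras.losses.CategoricalCrossentropy()

def fgsm_perturbation(image, label, epsilon=0.01):
    """Return an adversarial copy of `image` nudged toward higher loss."""
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)
        prediction = model(image)
        loss = loss_fn(label, prediction)
    # The sign of the input gradient points toward misclassification.
    signed_grad = tf.sign(tape.gradient(loss, image))
    return tf.clip_by_value(image + epsilon * signed_grad, -1.0, 1.0)

# Usage sketch (hypothetical inputs):
# x = preprocess_input(raw_images)       # (1, 224, 224, 3), scaled to [-1, 1]
# y = tf.one_hot([281], depth=1000)      # "tabby cat" ImageNet label
# adversarial_x = fgsm_perturbation(x, y)
# model.predict(adversarial_x)           # often misclassified despite tiny changes
```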

Read the rest of the article at Mindboard’s Medium channel.

Active Learning for Fast Data Set Labeling

Active learning is a special case of machine learning in which a model can query a user for input. In this post, we will see how active learning can be used to label large data sets. For most machine learning tasks, large amounts of labeled data are needed for model training. However, the process of labeling data can be extremely time-consuming and/or expensive. Using active learning, we can leverage a classification model to do most of the labeling for us, so that we only need to hand-label samples where they are most needed.
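
A minimal sketch of one round of this loop, using uncertainty sampling with a scikit-learn classifier, is shown below; the data set, model choice, batch size, and the `label_by_hand` step are hypothetical.

```python
# Pool-based active learning with uncertainty sampling (illustrative sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression

def most_uncertain(model, X_pool, batch_size=10):
    """Pick the unlabeled samples the model is least confident about."""
    probs = model.predict_proba(X_pool)
    uncertainty = 1.0 - probs.max(axis=1)      # low top-class probability = uncertain
    return np.argsort(uncertainty)[-batch_size:]

# One round of the loop: train on what is labeled, then ask the user to label
# only the samples the model is most unsure about (all names below are hypothetical).
# model = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
# query_idx = most_uncertain(model, X_pool)
# y_new = label_by_hand(X_pool[query_idx])     # hypothetical human-labeling step
# X_labeled = np.vstack([X_labeled, X_pool[query_idx]])
# y_labeled = np.concatenate([y_labeled, y_new])
```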

Read the rest of the article at Mindboard’s Medium channel.

Convolutional Generative Adversarial Network: “EyeGaze” Image Generator

A generative adversarial network (GAN) is a system composed of two neural networks: a generator and a discriminator. The discriminator takes a data instance as input, and classifies it as ‘Real’ or ‘Fake’ with respect to a training data set. The generator takes Gaussian noise and transforms it into a synthetic data sample with the goal of fooling the discriminator. The discriminator learns to classify samples as real or fake. The generator learns from errors in failed attempts at fooling the discriminator.
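
For a sense of how the two networks fit together, the sketch below defines a small convolutional generator and discriminator in Keras; the 28x28 grayscale image size and layer widths are assumptions, not the EyeGaze architecture.

```python
# A hedged sketch of the two networks in a convolutional GAN for small grayscale images.
from tensorflow.keras import layers, models

def build_generator(latent_dim=100):
    # Transforms Gaussian noise into a 28x28x1 synthetic image.
    return models.Sequential([
        layers.Dense(7 * 7 * 128, activation="relu", input_shape=(latent_dim,)),
        layers.Reshape((7, 7, 128)),
        layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu"),
        layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="tanh"),
    ])

def build_discriminator():
    # Classifies an image as real (from the data set) or fake (from the generator).
    return models.Sequential([
        layers.Conv2D(64, 4, strides=2, padding="same", activation="relu",
                      input_shape=(28, 28, 1)),
        layers.Conv2D(128, 4, strides=2, padding="same", activation="relu"),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),
    ])
```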

Read the rest of the article at Mindboard’s Medium channel.

Serving Machine Learning Models Using TensorFlow Serving

Exploring how TensorFlow models can be served using TensorFlow Serving…
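
As a hedged sketch of the workflow, the snippet below exports a Keras model in the SavedModel format that TensorFlow Serving can load, then queries the server's REST predict endpoint; the model name, port, and toy model are assumptions.

```python
# Two halves of serving (illustrative): export a SavedModel, then query a
# running TensorFlow Serving container over its REST API.
import json
import requests
import tensorflow as tf

# 1. Export: TensorFlow Serving watches a versioned directory such as
#    export/my_model/1 (served e.g. via the tensorflow/serving Docker image).
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.save("export/my_model/1")  # TF 2.x writes SavedModel format to a directory

# 2. Query: POST instances to the predict endpoint of the running server.
payload = {"instances": [[1.0, 2.0, 3.0, 4.0]]}
response = requests.post(
    "http://localhost:8501/v1/models/my_model:predict",
    data=json.dumps(payload),
)
print(response.json())  # {"predictions": [[...]]}
```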

Read the article at Mindboard’s Medium channel.

Deploying Machine Learning Models Using Docker

Productionizing a Flask API for deployment using Docker, with nginx, gunicorn, and Docker Compose, to create a scalable template for deploying machine learning models.
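
For context, a minimal Flask prediction endpoint of the kind that would run under gunicorn behind nginx is sketched below; the model file and request schema are assumptions, not the article's template.

```python
# A minimal Flask prediction API (illustrative sketch).
import numpy as np
from flask import Flask, jsonify, request
from tensorflow.keras.models import load_model

app = Flask(__name__)
model = load_model("model.h5")  # hypothetical trained model baked into the image

@app.route("/predict", methods=["POST"])
def predict():
    features = np.array(request.get_json()["features"], dtype=np.float32)
    return jsonify(predictions=model.predict(features).tolist())

# Inside the container this would typically be launched by gunicorn, e.g.:
#   gunicorn --bind 0.0.0.0:5000 app:app
```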

Read the rest of the article at Mindboard’s Medium channel.

Investigating RNN Memory Stability

A Recurrent Neural Network (RNN) is a class of artificial neural network that contains connections along a temporal axis, producing a functioning memory of prior network inferences that influences the network’s output. Two of the most common types of RNN cells are the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) cells. LSTMs and GRUs are designed for long-term memory capability. In both cases, the RNN cell maintains a hidden memory state that is altered after every inference call.
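
To make the idea concrete, the sketch below steps a Keras LSTM cell by hand and prints how the hidden state changes after each inference call; the cell size and random inputs are illustrative.

```python
# Stepping an LSTM cell manually to observe its hidden state evolving (sketch).
import tensorflow as tf

cell = tf.keras.layers.LSTMCell(units=8)
# The cell's memory: hidden state h and cell state c, both initially zero.
state = [tf.zeros((1, 8)), tf.zeros((1, 8))]

for t in range(5):
    x_t = tf.random.normal((1, 3))            # one time step of a 3-feature input
    output, state = cell(x_t, states=state)   # state = [h, c] carried to the next call
    print(f"step {t}: |h| = {tf.norm(state[0]).numpy():.4f}")
```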

Read the rest of the article at Mindboard’s Medium channel.