Monthly Archive August 2019

Policy Assessment Using ML

In the 21st century, every person and organization, public and private alike, is somehow connected. Being able to quickly and efficiently analyze whether your third-party policy documents meet standards such as NIST 800-171, ISO 27001, and ISO 9001 is therefore critical to the success of your business. Current policy assessment tools are manual, inefficient, and don't adequately reduce risk.

We at Mindboard have developed a platform to solve these problems. By combining machine learning, semantic technology, and a repository of model documents that meet these standards, we provide an advanced, efficient methodology for automating the evaluation of policy documents.
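The platform's internals aren't shown here, but the core idea of scoring a candidate policy document against a repository of standard-meeting model documents can be sketched with a simple bag-of-words cosine similarity. This is only an illustration of similarity-based assessment, not Mindboard's actual method, and all document names and excerpts below are hypothetical:

```python
from collections import Counter
import math

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical excerpts from standard-meeting reference documents.
references = {
    "access control": "limit system access to authorized users and devices",
    "incident response": "establish an operational incident handling capability",
}

candidate = "access to the system is limited to authorized users and authorized devices"

# Score the candidate passage against each reference control.
scores = {name: cosine_similarity(candidate, text) for name, text in references.items()}
best = max(scores, key=scores.get)  # the control this passage most resembles
```

A production system would replace raw word counts with semantic embeddings, but the comparison pipeline is the same shape.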

Read the rest of the article at Mindboard’s Medium channel.

Deploying deep learning models on AWS lambda

AWS Lambda is a serverless computing service provided by Amazon Web Services. A serverless architecture is a stateless compute container designed for event-driven workloads; much like a microservice architecture, it breaks monolithic applications into smaller services that are easy to code, manage, and scale.
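In this event-driven model, a Lambda function is just a handler invoked once per event. A minimal sketch of serving a model this way, with a trivial placeholder standing in for a real deep learning model (the payload shape is an assumption; in practice the model would be loaded from S3 or the deployment package):

```python
import json

def _load_model():
    # Placeholder "model": a real deployment would deserialize a trained
    # deep learning model here (e.g. downloaded from S3 at cold start).
    return lambda features: sum(features) > 1.0

# Loaded once per cold start at module scope, then reused across warm
# invocations of the same container.
MODEL = _load_model()

def lambda_handler(event, context):
    """Entry point AWS Lambda calls for each event (standard signature)."""
    features = json.loads(event["body"])["features"]
    prediction = MODEL(features)
    return {
        "statusCode": 200,
        "body": json.dumps({"prediction": bool(prediction)}),
    }
```

Keeping model loading at module scope matters: it amortizes the expensive deserialization over many invocations instead of paying it on every request.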

Read the rest of the article at Mindboard’s Medium channel.

Generating Adversarial Samples in Keras (Tutorial)

As deep learning technologies power more and more services, the associated security risks become more critical to address. Adversarial machine learning is a branch of machine learning that exploits the mathematics underlying deep learning systems to evade, explore, and/or poison machine learning models. Evasion attacks are the most common adversarial attack method because they are easy to implement and potentially highly disruptive. In an evasion attack, the adversary tries to evade a fully trained model by engineering samples that the model misclassifies. This attack does not assume any influence over the training data.

Evasion attacks have been demonstrated against autonomous vehicles, where the adversary manipulates traffic signs to confuse the learning model. Research suggests that deep neural networks are susceptible to adversarial evasion attacks because of their high degree of non-linearity as well as insufficient model averaging and regularization.
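One standard way to engineer such misclassified samples is the Fast Gradient Sign Method (FGSM), which nudges the input in the direction that most increases the model's loss: x_adv = x + eps * sign(dL/dx). The full tutorial works in Keras, but the core step can be sketched in NumPy on a logistic-regression stand-in, where the gradient has the closed form (p - y) * w; all weights and inputs below are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method against a logistic-regression model.

    For p = sigmoid(w @ x + b) and cross-entropy loss L,
    dL/dx = (p - y) * w, so the attack is x + eps * sign((p - y) * w).
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy model weights and a correctly classified input (illustrative numbers).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])   # decision score w @ x + b = 1.5 > 0 -> class 1
y = 1.0                    # true label

x_adv = fgsm(x, y, w, b, eps=0.9)
# The perturbation flips the sign of the decision score, so the
# adversarial input is misclassified as class 0.
```

The same one-line update drives attacks on deep networks; there the input gradient comes from backpropagation (in Keras, via `tf.GradientTape`) rather than a closed form.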

Read the rest of the article at Mindboard’s Medium channel.