For deeper networks, the field's focus on image classification seems to have also produced tutorials mostly on the more complex convolutional neural networks. This is great if you're into that sort of thing; however, if you are more interested in temporal data, recurrent neural networks (RNNs) come in handy.
Read the rest of the article at Mindboard’s Medium channel.
Deep reinforcement learning involves building a deep learning model that approximates the function between input features and future discounted rewards, also called Q-values. We have seen in this article how we can effectively compute these Q-values and create a map from input features to the corresponding set of Q-values.
This map from input features to all possible Q-values at a given state gives the reinforcement learning agent an overall picture of the environment, which in turn helps the agent choose the optimal path.
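The idea above can be sketched in a few lines of NumPy. This is a minimal illustration, not the article's actual model: it assumes a toy setup with 4 input features and 3 actions, and uses a linear map in place of a deep network to produce one Q-value per action, with a standard temporal-difference update toward the discounted future reward.

```python
import numpy as np

# Toy sizes (assumptions for illustration only).
rng = np.random.default_rng(0)
n_features, n_actions = 4, 3
weights = rng.normal(scale=0.1, size=(n_features, n_actions))

def q_values(state):
    """Approximate Q-values for every action at the given state."""
    return state @ weights

def choose_action(state):
    """Greedy policy: pick the action with the highest predicted Q-value."""
    return int(np.argmax(q_values(state)))

def td_update(state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One temporal-difference step toward reward + gamma * max Q(next_state)."""
    global weights
    target = reward + gamma * np.max(q_values(next_state))
    error = target - q_values(state)[action]
    weights[:, action] += alpha * error * state

state = rng.normal(size=n_features)
next_state = rng.normal(size=n_features)
a = choose_action(state)
td_update(state, a, reward=1.0, next_state=next_state)
```

In a deep RL setting the linear map would be replaced by a neural network trained on the same TD target, but the agent's decision rule stays the same: evaluate all Q-values at the current state and act greedily (or epsilon-greedily) over them.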
A Recurrent Neural Network is a type of Artificial Neural Network that shares its neuron layers across inputs through time. This allows us to model temporal data such as video sequences, weather patterns, or stock prices. There are many ways to design a recurrent cell, which controls the flow of information from one time-step to another; a recurrent cell can be designed to provide a functioning memory for the neural network. Two of the most popular recurrent cell designs are the Long Short-Term Memory cell (LSTM) and the Gated Recurrent Unit cell (GRU).
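The weight-sharing idea can be sketched with a vanilla recurrent cell in NumPy. This is a simplified illustration, not an LSTM or GRU: it assumes a toy sequence with 3 input features and 5 hidden units, and the point is that the same weight matrices are reused at every time-step.

```python
import numpy as np

# Toy sizes (assumptions for illustration only).
rng = np.random.default_rng(0)
n_in, n_hidden = 3, 5
W_x = rng.normal(scale=0.1, size=(n_in, n_hidden))      # input-to-hidden weights
W_h = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # hidden-to-hidden weights
b = np.zeros(n_hidden)

def rnn_forward(sequence):
    """Run the cell over a (time, features) sequence; return all hidden states."""
    h = np.zeros(n_hidden)
    states = []
    for x_t in sequence:
        # The previous hidden state h carries information forward in time;
        # W_x, W_h, and b are shared across every time-step.
        h = np.tanh(x_t @ W_x + h @ W_h + b)
        states.append(h)
    return np.stack(states)

sequence = rng.normal(size=(10, n_in))  # e.g. 10 days of 3 stock indicators
hidden_states = rnn_forward(sequence)
```

LSTM and GRU cells replace the single tanh update with gated updates that decide what to keep and what to forget, but they inherit this same loop structure and weight sharing.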