Category Archive: Deep Recurrent Neural Networks

LSTM vs GRU: Experimental Comparison

A Recurrent Neural Network is a type of Artificial Neural Network that shares its neuron layers across time steps, feeding each step's hidden state back in as part of the next step's input. This allows us to model temporal data such as video sequences, weather patterns, or stock prices. There are many ways to design a recurrent cell, which controls the flow of information from one time step to the next. A recurrent cell can be designed to provide a functioning memory for the neural network. Two of the most popular recurrent cell designs are the Long Short-Term Memory cell (LSTM) and the Gated Recurrent Unit cell (GRU).
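
As a rough illustration of the structural difference (the article's actual experiments are on Medium), here is a minimal PyTorch sketch; the layer sizes and dummy data are illustrative assumptions. The LSTM carries two recurrent states and four gates, while the GRU carries one state and three gates, which shows up directly in the parameter counts.

# Minimal sketch contrasting LSTM and GRU cells; sizes are illustrative.
import torch
import torch.nn as nn

seq_len, batch, n_features, hidden = 50, 8, 4, 32  # hypothetical dimensions

lstm = nn.LSTM(input_size=n_features, hidden_size=hidden, batch_first=True)
gru = nn.GRU(input_size=n_features, hidden_size=hidden, batch_first=True)

x = torch.randn(batch, seq_len, n_features)  # dummy time-series batch

# LSTM returns two recurrent states (hidden and cell); GRU returns one.
lstm_out, (h_n, c_n) = lstm(x)
gru_out, g_n = gru(x)

# The GRU's 3 gates vs. the LSTM's 4 make the GRU the smaller cell.
count = lambda m: sum(p.numel() for p in m.parameters())
print(f"LSTM params: {count(lstm)}, GRU params: {count(gru)}")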

Read the rest of the article at Mindboard’s Medium channel.

Scaling Reward Values for Improved Deep Reinforcement Learning

Deep Reinforcement Learning uses a neural network as a universal function approximator to learn a value function that maps state-action pairs to their expected future reward under a particular reward function. This can be done in many different ways. For example, a Monte Carlo based algorithm observes the total rewards following state-action pairs over a complete episode to build training data for the neural network. Alternatively, a Temporal Difference approach uses incremental rewards from single time steps and bootstraps off of predicted future rewards from the latest version of the value function model. No matter which approach is taken, however, it is important that the neural network fits the data efficiently in order to optimize the learning algorithm. Many factors determine a neural network's ability to fit its training data. In this post we will examine how scaling our outputs can affect our rate of convergence.
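
To make the idea concrete, here is a minimal sketch of one common scaling scheme, standardizing the regression targets; the returns and the choice of standardization are illustrative assumptions, not necessarily the scheme the article settles on.

# Minimal sketch of target scaling before value regression.
import torch

returns = torch.tensor([120.0, -40.0, 300.0, 75.0])  # hypothetical episode returns

# Standardizing targets keeps the network's regression outputs near zero
# mean and unit variance, which typically helps convergence.
mean, std = returns.mean(), returns.std().clamp_min(1e-8)
targets = (returns - mean) / std

# The value network is fit to `targets`; at prediction time the output
# must be mapped back to the original reward scale:
def unscale(prediction: torch.Tensor) -> torch.Tensor:
    return prediction * std + mean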

Read the rest of the article at Mindboard’s Medium channel.

Training Recurrent Neural Networks on Long Sequences

Deep Recurrent Neural Networks (RNNs) are a type of Artificial Neural Network that takes the network's previous hidden state as part of its input, effectively giving the network a memory. This makes RNNs useful for modeling sequential or time-series data such as stock prices. However, training RNNs on sequences longer than a few hundred time steps can be difficult. In this post, we will explore three tools that allow for more efficient training of RNN models on long sequences: optimizers, gradient clipping, and batch sequence length.
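
The following PyTorch sketch shows where each of the three tools fits in a training loop; the model, data, and hyperparameter values are illustrative assumptions rather than the article's tuned settings.

# Minimal sketch of the three levers named above.
import torch
import torch.nn as nn

model = nn.LSTM(input_size=1, hidden_size=64, batch_first=True)
head = nn.Linear(64, 1)
params = list(model.parameters()) + list(head.parameters())

# 1) Optimizer: adaptive methods such as Adam often cope better with the
#    ill-conditioned gradients of long sequences than plain SGD.
opt = torch.optim.Adam(params, lr=1e-3)

seq_len = 1000                      # full sequence length
chunk = 200                         # 3) shorter batch sequence length
x = torch.randn(16, seq_len, 1)    # dummy long sequences
y = torch.randn(16, seq_len, 1)

for start in range(0, seq_len, chunk):
    xb, yb = x[:, start:start+chunk], y[:, start:start+chunk]
    out, _ = model(xb)
    loss = nn.functional.mse_loss(head(out), yb)
    opt.zero_grad()
    loss.backward()
    # 2) Gradient clipping bounds exploding gradients before the update.
    torch.nn.utils.clip_grad_norm_(params, max_norm=1.0)
    opt.step()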

Read the rest of the article at Mindboard’s Medium channel.

Q Matrix Update to Train Deep Recurrent Q Networks More Effectively

A Deep Recurrent Q Network, as discussed in a previous article, can be very helpful in building smart agents that remember what they learned in the distant past. This feature makes the Deep Recurrent Q Network a valuable function approximator for building AI agents in Deep Reinforcement Learning.
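
For readers who have not seen the earlier article, here is a minimal sketch of the recurrent Q-network architecture itself; the layer sizes and observation/action dimensions are illustrative assumptions, and the Q matrix update the article describes is covered on Medium.

# Minimal Deep Recurrent Q Network sketch; dimensions are hypothetical.
import torch
import torch.nn as nn

class DRQN(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden, batch_first=True)
        self.q_head = nn.Linear(hidden, n_actions)

    def forward(self, obs_seq, h=None):
        # obs_seq: (batch, time, obs_dim); the recurrent state carries
        # memory of observations from earlier in the episode.
        out, h = self.rnn(obs_seq, h)
        return self.q_head(out), h  # Q-values per action at each step

q_net = DRQN(obs_dim=4, n_actions=2)
obs = torch.randn(1, 10, 4)            # one episode fragment of 10 steps
q_values, hidden = q_net(obs)
greedy_actions = q_values.argmax(dim=-1)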

Read the rest of the article at Mindboard’s Medium channel.