Deep Learning 02: About RNNs (Recurrent Neural Networks)

 

What is an RNN?

An RNN has the same computational units as a feedforward neural network, but differs in the architecture of its connections.
A feedforward net contains no cycles; an RNN does.
Recurrent, because they perform the same task for every element of a sequence, with the output depending on the previous computations. They have “memory”, which keeps what has been computed so far.
An NN with feedback: the outputs depend on the inputs, the weights, and the feedback signals. The emphasis is on learning and training.
It contains at least one feedback connection, so activations can flow around in a loop. This enables the network to do temporal processing and learn sequences, e.g. to perform sequence recognition/reproduction or temporal association/prediction.
[Figure 1: an RNN with input, hidden, and output units]
In the figure:
Input units (left): $u_1, u_2, \dots, u_K$
Output units (right): $y_1, y_2, \dots, y_L$
Hidden units (middle; they are allowed to be connected with themselves): $x_1, x_2, \dots, x_N$
Back projections: RNNs break the feedforward restriction with connections from the output units back to the hidden units.
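
With this notation, the per-step computation can be written out (a reconstruction I am adding; the post does not state it, and the activation functions $f$ and $g$ are assumptions):

$x_t = f(U u_t + W x_{t-1})$, $y_t = g(V x_t)$,

where $U$ maps inputs to hidden units, $W$ is the recurrent hidden-to-hidden matrix, and $V$ maps hidden units to outputs (the same $U, W, V$ that reappear below).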

The challenge lies in how to train RNNs.
One way is to convert the RNN into what is essentially a feedforward neural network by unrolling (unfolding) it in time; the unrolled network can then be trained with ordinary backpropagation (backpropagation through time).

[Figure 2: an RNN unrolled (unfolded) in time]

The same idea can be found here:
[Figure 3: another view of unrolling an RNN]
If the input is a sentence of 5 words, the network computes 5 steps, and the RNN unrolls into a 5-layer neural network, with one layer per word.

[Figure 4: an unrolled RNN with shared parameters U, W, V]
Remember that U, W, V are constant: we give the network a different input at each step, while the parameters are kept the same (shared across all time steps).
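
To make the parameter sharing concrete, here is a minimal numpy sketch of the unrolled forward pass (my own illustration; the dimensions, the tanh activation, and all names are assumptions, not from the post):

```python
import numpy as np

# Hypothetical dimensions: K inputs, N hidden units, L outputs.
K, N, L = 8, 16, 8

rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(N, K))  # input -> hidden
W = rng.normal(scale=0.1, size=(N, N))  # hidden -> hidden (the recurrence)
V = rng.normal(scale=0.1, size=(L, N))  # hidden -> output

def rnn_forward(inputs):
    """Unroll the RNN over a sequence; U, W, V are reused at every step."""
    x = np.zeros(N)                  # initial hidden state (the "memory")
    outputs = []
    for u in inputs:                 # one unrolled "layer" per element
        x = np.tanh(U @ u + W @ x)   # hidden state update
        outputs.append(V @ x)        # output at this time step
    return outputs

# A 5-word sentence unrolls into a 5-step (5-"layer") network.
sentence = [rng.normal(size=K) for _ in range(5)]
print(len(rnn_forward(sentence)))  # 5 outputs, one per word
```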

Extensions:
The architectures include fully recurrent NNs and locally recurrent NNs.
One common type consists of a standard Multi-Layer Perceptron (MLP) with added loops.
Others have more uniform structures, with every neuron connected to all the others; they may also have stochastic activation functions.

Fully recurrent NNs: the Hopfield net (continuous and discrete)
Locally recurrent NNs: Jordan networks (feedback from the output layer, i.e. from “outside”) and Elman networks (feedback from the hidden layer, i.e. from “inside”)
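
In update form (my summary, reusing the notation above; $f$ and $g$ are assumed activations): an Elman network feeds the hidden state back, $x_t = f(U u_t + W x_{t-1})$, while a Jordan network feeds the output back, $x_t = f(U u_t + W y_{t-1})$; in both cases $y_t = g(V x_t)$.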

FRNN: the simplest form of FRNN is an MLP with the previous set of hidden unit activations feeding back into the network along with the inputs. I think the general RNN we have been discussing is the FRNN.
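
A minimal sketch of that description (my illustration, with hypothetical names): the feedback can be expressed as one weight matrix applied to the concatenation of the current input and the previous hidden activations.

```python
import numpy as np

K, N = 8, 16                 # hypothetical input and hidden sizes
rng = np.random.default_rng(1)
# One matrix acting on [current input; previous hidden activations].
W_in_hid = rng.normal(scale=0.1, size=(N, K + N))

def frnn_step(u, x_prev):
    """One MLP step whose input includes the fed-back hidden activations."""
    z = np.concatenate([u, x_prev])
    return np.tanh(W_in_hid @ z)

x = np.zeros(N)                          # initial hidden state
for u in (rng.normal(size=K) for _ in range(3)):
    x = frnn_step(u, x)                  # hidden state persists across steps
```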

