by rndn on 10/20/15, 9:33 AM with 11 comments
by chriskanan on 10/20/15, 2:17 PM
Here is the abstract:
This note provides a family of classification problems, indexed by a positive integer k, where all shallow networks with fewer than exponentially (in k) many nodes exhibit error at least 1/6, whereas a deep network with 2 nodes in each of 2k layers achieves zero error, as does a recurrent network with 3 distinct nodes iterated k times. The proof is elementary, and the networks are standard feedforward networks with ReLU (Rectified Linear Unit) nonlinearities.
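For intuition, here is a minimal numpy sketch of the sawtooth-style construction behind results like this: a single two-ReLU "tent" layer, composed k times, produces a piecewise-linear function with 2^k pieces, which is why matching it with a shallow network takes exponentially many units. The tent-map formula and names below are my own illustration, not code from the paper.

    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def tent(x):
        # One two-ReLU layer computing the tent map on [0, 1]:
        # t(x) = 2x on [0, 1/2] and 2 - 2x on [1/2, 1].
        return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

    def deep_sawtooth(x, k):
        # Composing the same two-node layer k times gives a sawtooth
        # with 2**k linear pieces -- exponential in depth.
        for _ in range(k):
            x = tent(x)
        return x

    xs = np.linspace(0.0, 1.0, 1025)
    for k in (1, 2, 3, 4):
        slopes = np.sign(np.diff(deep_sawtooth(xs, k)))
        pieces = 1 + np.count_nonzero(np.diff(slopes))
        print(f"k={k}: ~{pieces} linear pieces")

Running this prints roughly 2, 4, 8, 16 pieces for k = 1..4: each composition doubles the number of oscillations while the per-layer width stays fixed, which is the gap the 1/6-error lower bound for shallow networks exploits.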
by arcanus on 10/20/15, 5:56 PM
1) What evidence exists that the 'multiple levels of representation', which I understand to mean the multiple hidden layers of a neural network, actually correspond to 'levels of abstraction'?
2) I'm further confused by, "Deep learning is a kind of representation learning in which there are multiple levels of features. These features are automatically discovered and they are composed together in the various levels to produce the output. Each level represents abstract features that are discovered from the features represented in the previous level. "
This implies to me that deep learning is "unsupervised learning". Are all deep learning nets unsupervised? Most traditional neural nets are supervised.
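For what it's worth, here is a minimal sketch (layer sizes and names are arbitrary, chosen only for illustration) of what "multiple levels of features" means mechanically; note that the weights would normally be fit by backpropagating a supervised loss, so "automatically discovered" does not imply unsupervised:

    import numpy as np

    rng = np.random.default_rng(0)
    relu = lambda z: np.maximum(z, 0.0)

    # Hypothetical sizes, for illustration only.
    x = rng.normal(size=4)          # raw input
    W1 = rng.normal(size=(8, 4))    # level-1 weights
    W2 = rng.normal(size=(8, 8))    # level-2 weights
    w_out = rng.normal(size=8)      # output weights

    h1 = relu(W1 @ x)     # level-1 features: functions of the raw input
    h2 = relu(W2 @ h1)    # level-2 features: compositions of level-1 features
    y_hat = w_out @ h2    # output built from the deepest features

    # In standard deep learning, W1, W2 and w_out are all fit jointly by
    # backpropagating a supervised loss (e.g. cross-entropy on labels), so
    # the intermediate "levels of features" are discovered automatically
    # even though the training signal is supervised. Unsupervised
    # pre-training exists but is not required.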
by ilurk on 10/20/15, 3:26 PM
(didn't read it yet though, will do when I have time)