Reading list based on recommendations from Yoshua Bengio
Research in General
[1] How to write a great research paper
Basics of deep learning
[1] Learning deep architectures for AI
[2] Practical recommendations for gradient-based training of deep architectures
[3] Quick’n’dirty introduction to deep learning: Advances in Deep Learning
[4] A fast learning algorithm for deep belief nets
[5] Greedy Layer-Wise Training of Deep Networks
[7] Contractive auto-encoders: Explicit invariance during feature extraction
[8] Why does unsupervised pre-training help deep learning?
[9] An Analysis of Single-Layer Networks in Unsupervised Feature Learning
[10] The Importance of Encoding Versus Training with Sparse Coding and Vector Quantization
[11] Representation Learning: A Review and New Perspectives
[12] Deep Learning of Representations: Looking Forward
[13] Measuring Invariances in Deep Networks
[14] Neural networks course at USherbrooke [YouTube]
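A recurring theme in [4], [5], and [8] is greedy layer-wise pretraining: each layer is first trained unsupervised on the features produced by the layer below it. The following is a minimal NumPy sketch of that idea using tied-weight autoencoders; it is not code from any of the papers above, and the layer sizes, learning rate, and epoch count are illustrative assumptions.

```python
# Minimal sketch of greedy layer-wise autoencoder pretraining (NumPy only).
# Layer sizes, learning rate, and epoch count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(X, n_hidden, lr=0.1, epochs=50):
    """Train one autoencoder layer with tied weights and squared-error loss."""
    n_visible = X.shape[1]
    W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
    b_h = np.zeros(n_hidden)
    b_v = np.zeros(n_visible)
    for _ in range(epochs):
        H = sigmoid(X @ W + b_h)             # encode
        R = sigmoid(H @ W.T + b_v)           # decode (tied weights)
        err = R - X                          # reconstruction error
        # Backprop through decoder and encoder (sigmoid derivative = s * (1 - s)).
        d_R = err * R * (1 - R)
        d_H = (d_R @ W) * H * (1 - H)
        grad_W = X.T @ d_H + d_R.T @ H       # tied-weight gradient (both paths)
        W -= lr * grad_W / X.shape[0]
        b_h -= lr * d_H.mean(axis=0)
        b_v -= lr * d_R.mean(axis=0)
    return W, b_h

def greedy_pretrain(X, layer_sizes):
    """Train each layer on the previous layer's hidden representation."""
    weights, rep = [], X
    for n_hidden in layer_sizes:
        W, b = train_autoencoder(rep, n_hidden)
        weights.append((W, b))
        rep = sigmoid(rep @ W + b)           # features fed to the next layer
    return weights

# Toy data: 200 random binary vectors of dimension 64.
X = (rng.random((200, 64)) > 0.5).astype(float)
stack = greedy_pretrain(X, layer_sizes=[32, 16])
print([W.shape for W, _ in stack])           # [(64, 32), (32, 16)]
```

In practice the pretrained weights are then used to initialize a deep network that is fine-tuned with a supervised objective, which is the setting analyzed in [8].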
Feedforward nets
[1] “Improving Neural Networks with Dropout” by Nitish Srivastava
[2] “Deep Sparse Rectifier Neural Networks”
[3] “What is the best multi-stage architecture for object recognition?”
[4] “Maxout Networks”
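The three techniques behind these papers compose naturally in a feedforward net: rectifier units [2], dropout regularization [1], and maxout units [4]. Below is a toy NumPy forward pass showing how they fit together; the layer sizes, dropout rate, and number of maxout pieces are illustrative assumptions, not values from the papers.

```python
# Toy forward pass combining the ideas above: ReLU hidden units [2],
# dropout regularization [1], and a maxout layer [4].
import numpy as np

rng = np.random.default_rng(0)

def relu_layer(x, W, b):
    """Rectified linear units: max(0, xW + b)."""
    return np.maximum(0.0, x @ W + b)

def dropout(h, p_drop=0.5, train=True):
    """Inverted dropout: zero units with prob p_drop, rescale at train time."""
    if not train:
        return h
    mask = (rng.random(h.shape) >= p_drop).astype(h.dtype)
    return h * mask / (1.0 - p_drop)

def maxout_layer(x, W, b):
    """Maxout: take the max over k linear 'pieces' per output unit.
    W has shape (n_in, n_out, k); b has shape (n_out, k)."""
    z = np.einsum('ni,iok->nok', x, W) + b
    return z.max(axis=-1)

x = rng.normal(size=(8, 20))                      # batch of 8, 20 features
h1 = dropout(relu_layer(x, rng.normal(size=(20, 50)) * 0.1, np.zeros(50)))
h2 = maxout_layer(h1, rng.normal(size=(50, 10, 3)) * 0.1, np.zeros((10, 3)))
print(h2.shape)                                   # (8, 10)
```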
MCMC
[2] Radford Neal’s review paper, “Probabilistic Inference Using Markov Chain Monte Carlo Methods” (old but still very comprehensive)
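As a reminder of the core algorithm such reviews build on, here is a minimal random-walk Metropolis-Hastings sampler. The target density (a standard Gaussian) and the proposal width are illustrative assumptions, not anything specific to the paper above.

```python
# Minimal random-walk Metropolis-Hastings sampler (basic MCMC).
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    """Unnormalized log density of the target: here a standard Gaussian."""
    return -0.5 * x ** 2

def metropolis_hastings(n_samples, step=1.0, x0=0.0):
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + step * rng.normal()       # symmetric random-walk proposal
        log_accept = log_target(proposal) - log_target(x)
        if np.log(rng.random()) < log_accept:    # accept with prob min(1, ratio)
            x = proposal
        samples.append(x)                        # keep the current state either way
    return np.array(samples)

chain = metropolis_hastings(10_000)
print(chain.mean(), chain.std())                 # roughly 0 and 1
```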