Deep Learning for Perception
Georgia Tech, Spring 2015
Reading Questions
1 Week 10: Marginalized Denoising Autoencoders
Answer the following questions. Please limit your entire writeup to 1 page!
- How do marginalized Stacked Denoising Autoencoders (mSDAs) differ from normal SDAs? What are their advantages over the original?
- What is being marginalized in mSDAs, and how?
- Provide some discussion of why (m)SDAs might be good for domain adaptation.
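As a reference point for the marginalization question, here is a minimal sketch of one mSDA layer in the style of Chen et al. (2012). The corruption (dropping each feature with probability p) is never sampled; its expectation is computed in closed form, giving the mapping W directly. Variable names and the bias handling are illustrative choices, not taken verbatim from the paper:

```python
import numpy as np

def msda_layer(X, p):
    """One mSDA layer: closed-form linear mapping W that reconstructs X
    from corrupted inputs, with the feature-dropout corruption (prob. p)
    marginalized out analytically rather than sampled.
    X: d x n data matrix. Returns W (d x (d+1)) and the hidden
    representation tanh(W @ X_bar)."""
    d, n = X.shape
    Xb = np.vstack([X, np.ones((1, n))])      # augment with a bias row
    S = Xb @ Xb.T                             # scatter matrix of the data
    q = np.full(d + 1, 1.0 - p)               # survival prob. per feature
    q[-1] = 1.0                               # the bias is never corrupted
    # Expected correlation matrices under the corruption, in closed form:
    Q = S * np.outer(q, q)                    # E[x_bar x_bar^T], off-diagonal
    np.fill_diagonal(Q, q * np.diag(S))       # diagonal uses q_i, not q_i^2
    P = S[:d, :] * q                          # E[x x_bar^T]
    W = P @ np.linalg.pinv(Q)                 # solve the expected least squares
    return W, np.tanh(W @ Xb)
```

Note that with p = 0 the corruption is the identity, so W simply reconstructs X; stacking layers feeds each tanh output into the next layer as its input.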
2 Week 3: Backpropagation
Answer the following questions. Please limit your entire writeup to 1 to 1.5 pages!
- Pick three suggestions for using backpropagation from across all of this week's readings. For each one, write a paragraph or so describing what the suggestion/trick is and analyze why you think it works (don't use the authors' words literally; rephrase your understanding of them!).
- In a few paragraphs, summarize the expressive power of single-layer networks with no hidden layers and of networks with one hidden layer. Given that having one hidden layer is so expressive, why do we need more than one or two hidden layers to obtain the most successful classifiers to date?
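For concreteness on the expressive-power question, XOR is the standard example: a network with no hidden layer cannot represent it (it is not linearly separable), while one hidden layer learns it with plain backpropagation. A minimal NumPy sketch (sigmoid units, squared error, full-batch gradient descent; the hyperparameters are illustrative choices, not from the readings):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

# One hidden layer of 8 sigmoid units, one sigmoid output
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule through each sigmoid
    d_out = (out - y) * out * (1 - out)           # error at output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)            # error at hidden pre-activation
    # Gradient descent updates
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
```

After training, `out` should sit close to the XOR targets [0, 1, 1, 0]; the same architecture with the hidden layer removed cannot drive the error to zero no matter how long it trains.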