#### Brief

• Due: Friday Feb 18 11:55pm
• Starter code: starter code

In this homework, we will learn how to implement back-propagation (or backprop) for “vanilla” neural networks (or Multi-Layer Perceptrons) and ConvNets.
You will begin by writing the forward and backward passes for different types of layers (including convolution and pooling), and then go on to train a shallow ConvNet on the CIFAR-10 dataset in Python.
Next you’ll learn to use PyTorch, a popular open-source deep learning framework, and use it to replicate the experiments from before.

To summarize, this homework is divided into the following parts:

• Implement a neural network and train a ConvNet on CIFAR-10 in Python.
• Learn to use PyTorch and replicate previous experiments in PyTorch (2-layer NN, ConvNet on CIFAR-10).

## Part 1 (9 points total)

Starter code for part 1 of the homework is available in the 1_cs231n folder.

### Setup

Dependencies are listed in the requirements.txt file. If you are working with Anaconda, they should all be installed already.

Download the CIFAR-10 dataset:

cd 1_cs231n/cs231n/datasets
./get_datasets.sh


Compile the Cython extension. From the cs231n directory, run the following.

python setup.py build_ext --inplace


### 1.1: Two-layer Neural Network (3 points)

The IPython notebook two_layer_net.ipynb will walk you through implementing a two-layer neural network on CIFAR-10. You will write a hard-coded 2-layer neural network, implement its backward pass, and tune its hyper-parameters. To receive full credit, you should obtain at least 45% accuracy.
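As a preview of what the notebook asks for, here is a minimal NumPy sketch of one forward/backward step for an affine - ReLU - affine - softmax network. The function name and shapes are illustrative only, not the starter code's API:

```python
import numpy as np

def two_layer_forward_backward(X, y, W1, b1, W2, b2):
    """One forward/backward pass of affine - ReLU - affine - softmax.
    Shapes: X (N, D), W1 (D, H), W2 (H, C). Returns (loss, grads)."""
    N = X.shape[0]
    # Forward pass
    h = X @ W1 + b1                       # hidden pre-activation, (N, H)
    a = np.maximum(0, h)                  # ReLU
    scores = a @ W2 + b2                  # class scores, (N, C)
    # Softmax loss, shifted for numerical stability
    shifted = scores - scores.max(axis=1, keepdims=True)
    exp = np.exp(shifted)
    probs = exp / exp.sum(axis=1, keepdims=True)
    loss = -np.log(probs[np.arange(N), y]).mean()
    # Backward pass: gradient of softmax loss w.r.t. scores
    dscores = probs.copy()
    dscores[np.arange(N), y] -= 1
    dscores /= N
    grads = {"W2": a.T @ dscores, "b2": dscores.sum(axis=0)}
    da = dscores @ W2.T
    dh = da * (h > 0)                     # ReLU gradient gate
    grads["W1"] = X.T @ dh
    grads["b1"] = dh.sum(axis=0)
    return loss, grads
```

A numerical gradient check (perturbing one weight by a small epsilon and comparing the centered difference to the analytic gradient) is the standard way to validate a backward pass like this.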

### 1.2: Modular Neural Network (4 points)

The IPython notebook layers.ipynb will walk you through a modular neural network implementation. You will implement the forward and backward passes of many different layer types, including convolution and pooling layers.
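The modular API follows one pattern throughout: each forward function returns its output plus a cache of whatever the backward pass will need, and the matching backward function consumes the upstream gradient and that cache. A minimal sketch with ReLU (signatures illustrative, not the starter code's exact ones):

```python
import numpy as np

def relu_forward(x):
    """Forward pass: return the output and a cache for the backward pass."""
    out = np.maximum(0, x)
    cache = x  # the backward pass needs the input to know where x > 0
    return out, cache

def relu_backward(dout, cache):
    """Backward pass: route the upstream gradient through positive inputs only."""
    x = cache
    dx = dout * (x > 0)
    return dx
```

The convolution and pooling layers follow the same (out, cache) / (dout, cache) contract; only the contents of the cache differ.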

### 1.3: ConvNet on CIFAR-10 (2 points)

The IPython notebook convnet.ipynb will walk you through the process of training a (shallow) convolutional neural network on CIFAR-10. To receive full credit, you should obtain at least 48% accuracy.

## Part 2 (11 points total)

This part is similar to the first part except that you will now be using PyTorch to implement the two-layer neural network and the convolutional neural network. In part 1 you implemented core operations given significant scaffolding code. In part 2 these core operations are given by PyTorch and you simply need to figure out how to use them.

If you haven’t already, install PyTorch (please use PyTorch version 1.0). This will probably be as simple as running the commands in the Get Started section of the PyTorch page, but if you run into problems, check the installation section of the GitHub README, search Google, or come to office hours. You may want to go through the PyTorch Tutorial before continuing. This homework is not meant to provide a complete overview of deep learning framework features or PyTorch features.

You probably found that your layer implementations in Python were much slower than the optimized Cython version. Open-source frameworks are becoming more and more optimized and provide even faster implementations. Most of them take advantage of GPUs, which can offer a significant speedup (e.g., 50x), as well as the CUDA® Deep Neural Network library (cuDNN), a library of highly optimized deep learning operations from NVIDIA.
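As a quick illustration (not part of the graded assignment), PyTorch uses a GPU simply by placing tensors and modules on a CUDA device:

```python
import torch

# Select a GPU if one is available; otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A batch of CIFAR-10-sized images, placed on the chosen device.
x = torch.randn(64, 3, 32, 32).to(device)
```

All subsequent operations on `x` then run on that device, with cuDNN kernels used automatically for operations like convolution when a GPU is present.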

You will be using existing layers, so this part should be short and simple. To get started with PyTorch you can jump straight into the implementation, or first read through some of the documentation:

• What is PyTorch and what distinguishes it from other DL libraries? (GitHub README)
• PyTorch modules
• PyTorch examples

The necessary files for this section are provided in the 2_pytorch directory. You will only need to write code in train.py and in each file in the models/ directory.

### 2.1: Softmax Classifier using PyTorch (2 points)

The softmax-classifier.ipynb notebook will walk you through implementing a softmax classifier using PyTorch. Data loading and scaffolding for a train loop are provided. In filter-viz.ipynb you will load the trained model and extract its weights so they can be visualized. To receive full credit, you should obtain at least 28% accuracy.
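For orientation, a softmax classifier in PyTorch is just a single linear layer; `nn.CrossEntropyLoss` applies the log-softmax during training, so the module outputs raw class scores. A sketch with illustrative argument names (the notebook's scaffolding defines its own interface):

```python
import torch
import torch.nn as nn

class Softmax(nn.Module):
    """Softmax classifier: one affine map from flattened pixels to class scores."""
    def __init__(self, im_size=(3, 32, 32), n_classes=10):
        super(Softmax, self).__init__()
        c, h, w = im_size
        self.fc = nn.Linear(c * h * w, n_classes)

    def forward(self, images):
        # Flatten (N, C, H, W) images to (N, C*H*W) before the affine layer.
        return self.fc(images.view(images.size(0), -1))
```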

### 2.2: Two-layer Neural Network using PyTorch (2 points)

By now you should have an idea of how to work with PyTorch and can proceed to implementing a two-layer neural network. Go to models/twolayernn.py and complete the TwoLayerNN module. Then train the neural network using
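The structure mirrors the hard-coded network from part 1: affine - ReLU - affine. A hedged sketch (argument names are illustrative; follow the interface defined in the starter code's models/twolayernn.py):

```python
import torch
import torch.nn as nn

class TwoLayerNN(nn.Module):
    """Two-layer network: flatten - affine - ReLU - affine."""
    def __init__(self, im_size=(3, 32, 32), hidden_dim=256, n_classes=10):
        super(TwoLayerNN, self).__init__()
        c, h, w = im_size
        self.fc1 = nn.Linear(c * h * w, hidden_dim)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_dim, n_classes)

    def forward(self, images):
        x = images.view(images.size(0), -1)  # flatten to (N, C*H*W)
        return self.fc2(self.relu(self.fc1(x)))
```

Unlike part 1, you never write the backward pass: autograd derives it from the forward computation.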

./run_twolayernn.sh


You will need to adjust hyper-parameters in run_twolayernn.sh to achieve good performance. Create a new IPython notebook twolayer.ipynb to generate a loss vs. iterations plot for train and val, as well as a validation accuracy vs. iterations plot. (Follow the same format as softmax-classifier.ipynb, making the appropriate modifications.) Save these plots as twolayernn_lossvstrain.png and twolayernn_valaccuracy.png respectively. To receive full credit, you should obtain at least 45% accuracy.

Create a new IPython notebook twolayer-filtervis.ipynb and save visualizations of the weights of the first hidden layer as twolayernn_gridfilt.png. (Follow the same format as filter-viz.ipynb, making the appropriate modifications.)

### 2.3: ConvNet using PyTorch (2 points)

Repeat the above steps for a ConvNet. Model code is in models/convnet.py and the script to train the ConvNet is run_convnet.sh. Create convnet.ipynb and convnet-filterviz.ipynb, save the plots as convnet_lossvstrain.png and convnet_valaccuracy.png, and save the learned filters as convnet_gridfilt.png. To receive full credit, you should obtain at least 48% accuracy.
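A shallow conv - ReLU - maxpool - affine network, analogous to the part 1 ConvNet, might look like the following sketch. The hyper-parameters and argument names are illustrative only; use the interface in models/convnet.py:

```python
import torch
import torch.nn as nn

class ConvNet(nn.Module):
    """Shallow ConvNet: conv - ReLU - 2x2 maxpool - affine."""
    def __init__(self, im_size=(3, 32, 32), n_filters=32, kernel_size=7, n_classes=10):
        super(ConvNet, self).__init__()
        c, h, w = im_size
        # "Same" padding keeps the spatial size, so pooling halves it exactly.
        self.conv = nn.Conv2d(c, n_filters, kernel_size, padding=kernel_size // 2)
        self.relu = nn.ReLU()
        self.pool = nn.MaxPool2d(2)
        self.fc = nn.Linear(n_filters * (h // 2) * (w // 2), n_classes)

    def forward(self, x):
        feats = self.pool(self.relu(self.conv(x)))
        return self.fc(feats.view(feats.size(0), -1))
```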

### 2.4 Experiment (4803DL: 5 Extra points, 7643: 5 regular points + 5 Extra points)

Experiment and try to get the best performance that you can on CIFAR-10 using a ConvNet. Submit your entry to the challenge hosted on EvalAI. The website shows a live leaderboard, so you can see how your implementation compares to others'. To prevent you from overfitting to the test data, the website limits submissions to 3 per day and only shows the leaderboard computed on 10% of the test data (so final standings may change). You will receive 5 points for submitting something that beats the vanilla ConvNet result from Q2.3 (extra credit for 4803DL but regular credit for 7643), and 5 extra-credit points for beating the instructor/TA's implementation (7643 only). DO NOT USE AN EXISTING NETWORK ARCHITECTURE (e.g., AlexNet, VGGNet).

Evaluate your best model using test.py and upload the predictions.csv file on EvalAI. To participate, you will have to sign up on EvalAI using your gatech.edu email.

For getting better performance, some things you can try:

• Filter size: In part 1 we used 7x7; this makes pretty pictures but smaller filters may be more efficient
• Number of filters: In part 1 we used 32 filters. Do more or fewer filters work better?
• Network depth: Some good architectures to try include:
• [conv-relu-pool]xN - conv - relu - [affine]xM - [softmax or SVM]
• [conv-relu-pool]xN - [affine]xM - [softmax or SVM]
• [conv-relu-conv-relu-pool]xN - [affine]xM - [softmax or SVM]
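One way to explore these patterns quickly is a small builder that stacks repeated blocks. This helper is hypothetical, not part of the starter code, and the softmax is supplied by `nn.CrossEntropyLoss` at training time:

```python
import torch
import torch.nn as nn

class Flatten(nn.Module):
    """Flatten (N, C, H, W) feature maps to (N, C*H*W)."""
    def forward(self, x):
        return x.view(x.size(0), -1)

def make_convnet(n_blocks=2, in_ch=3, n_filters=32, n_classes=10, im_size=32):
    """Build the [conv-relu-pool]xN - affine pattern for square inputs."""
    layers, ch, size = [], in_ch, im_size
    for _ in range(n_blocks):
        layers += [nn.Conv2d(ch, n_filters, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)]
        ch, size = n_filters, size // 2  # each pool halves the spatial size
    layers += [Flatten(), nn.Linear(ch * size * size, n_classes)]
    return nn.Sequential(*layers)
```

Varying `n_blocks`, `n_filters`, and the kernel size then maps directly onto the experiments suggested above.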

## Write-Up

1. All plots (learning curves, filter visualizations) generated in the previous sections, and the accuracy of each network.
2. Discuss what hyper-parameters you chose to achieve this performance and why you think they helped.
3. Describe what you tried for your own network design (experiment 2.4) and how each change affected performance. (At minimum, show how the learning rate, dropout, and number of filters affect accuracy.)
4. You also need to include the IPython notebooks in your write-up. Please check the section Deliverables for more details.

### Code Submission

Submit the results by uploading a zip file called hw1.zip created with the following command

cd assignment/
./collect_submission.sh


As a sanity check, the zip file should contain the following components:

1. All the IPython notebook files. (3 notebook files for part1 and 6 notebook files for part2)
2. For Part 1, include everything under 1_cs231n/cs231n except the datasets and build folders.
3. For Part 2, include all model implementations (models/*.py), the shell scripts used to train the 4 models (run_softmax.sh, run_twolayernn.sh, run_convnet.sh, run_mymodel.sh), and other relevant files.
4. All the .py files in the starter code should be included in your submission.

### Write-Up Submission

Step 1: Convert all IPython notebooks to PDF files with the following command:

jupyter-nbconvert --to pdf filename.ipynb


You should have 9 pdf files in total. Please make sure you have saved the most recent version of your jupyter notebook before running this script.

Step 2: Combine all PDF files with your write-up.

Please assign pages accordingly when submitting the write-up. Failing to do so will result in penalties.

### Contest Challenge Submission (EvalAI)

Remember to submit your results on EvalAI (extra credit for 4803DL, regular credit for 7643).