
cs231n's Introduction

CS231n Convolutional Neural Networks for Visual Recognition - Assignment Solutions

The course website: http://cs231n.stanford.edu/

Here are my solutions for the above course (Winter 2016), for the benefit of people who, like myself, struggle to solve the assignments. Neither I nor some of my coursemates are enrolled at Stanford; I simply have generous access to the course notes, lecture videos, and assignment code, all of which are public. I therefore cannot guarantee that my solutions are correct, so if you spot any errors, do let me know.

As of this writing I have not yet completed the course material; completed assignments are marked [done!], while the rest remain as originally downloaded from the course site.

Assignment list:

  • Assignment #1
    • Q1: k-Nearest Neighbor classifier (20 points) [done!]
    • Q2: Training a Support Vector Machine (25 points) [done!]
    • Q3: Implement a Softmax classifier (20 points) [done!]
    • Q4: Two-Layer Neural Network (25 points) [done!]
    • Q5: Higher Level Representations: Image Features (10 points) [done!]
  • Assignment #2
    • Q1: Fully-connected Neural Network (30 points) [done!]
    • Q2: Batch Normalization (30 points)
    • Q3: Dropout (10 points)
    • Q4: ConvNet on CIFAR-10 (30 points)
  • Assignment #3
    • Q1: Image Captioning with Vanilla RNNs (40 points)
    • Q2: Image Captioning with LSTMs (35 points)
    • Q3: Image Gradients: Saliency maps and Fooling Images (10 points)
    • Q4: Image Generation: Classes, Inversion, DeepDream (15 points)

Oh, and you should check out MyHumbleSelf's assignment solutions. He is a Stanford student, so his answers are likely what you would get if you enrolled yourself. His solutions are from the previous intake; if you are looking for the Winter 2016 intake, check out ctheory's assignment solutions.

cs231n's People

Contributors

bruceoutdoors


cs231n's Issues

About the best k

Hi, there is a problem that is a little weird.

When testing best_k in cross-validation, the results show that the best k is near k == 10, since that is where the accuracy peaks. However, in the following step, where I vary k and get results greater than 0.28, k == 10 does not perform very well; the highest accuracy comes at k == 5.

So, my questions are:

  1. What is your best k from cross-validation?
  2. If your best k is also 10, does k == 10 also give the best accuracy in the final test step?

Thanks!

PS: The plot has been uploaded to my GitHub; you can see the details here: https://github.com/fortyMiles/cs231n/blob/master/assignment1/knn.ipynb
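For reference, here is a self-contained sketch of the cross-validation loop being discussed, with a toy k-NN implemented inline; the names `knn_predict` and `cross_validate` are illustrative, not the assignment's actual API:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k):
    # Fully vectorized L2 distances between every test and train point.
    dists = np.sqrt(((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2))
    nearest = np.argsort(dists, axis=1)[:, :k]   # indices of the k nearest neighbors
    votes = y_train[nearest]                     # their labels, shape (num_test, k)
    # Majority vote per test point.
    return np.array([np.bincount(v).argmax() for v in votes])

def cross_validate(X, y, k_choices, num_folds=5):
    # For each candidate k, measure mean held-out accuracy across folds.
    X_folds = np.array_split(X, num_folds)
    y_folds = np.array_split(y, num_folds)
    accuracies = {}
    for k in k_choices:
        accs = []
        for i in range(num_folds):
            X_val, y_val = X_folds[i], y_folds[i]
            X_tr = np.concatenate(X_folds[:i] + X_folds[i + 1:])
            y_tr = np.concatenate(y_folds[:i] + y_folds[i + 1:])
            y_pred = knn_predict(X_tr, y_tr, X_val, k)
            accs.append(np.mean(y_pred == y_val))
        accuracies[k] = float(np.mean(accs))
    return accuracies
```

Note that a k chosen this way maximizes the *average* validation accuracy across folds; it is normal for a nearby k (5 vs. 10) to edge it out on the single final test split, since that split is just one more noisy sample.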

Assignment 1 - svm_loss_vectorized

linear_svm.py, line 133:

X_mask[np.arange(num_train), y] = -incorrect_counts

should be:

X_mask[np.arange(num_train), y] -= incorrect_counts
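For context, here is a self-contained sketch of a vectorized SVM loss and gradient in the same spirit (shapes and names are illustrative: X is (N, D), W is (D, C)). In this sketch the correct-class margins are zeroed before building the mask, so its correct-class entries start at 0 and the in-place `-=` coincides with direct assignment; in code where those entries already hold a value, only `-=` preserves it, which is the point of the fix above:

```python
import numpy as np

def svm_loss_vectorized(W, X, y, reg):
    # W: (D, C) weights, X: (N, D) data, y: (N,) labels in [0, C).
    N = X.shape[0]
    scores = X.dot(W)                                # (N, C)
    correct = scores[np.arange(N), y][:, None]       # (N, 1) correct-class scores
    margins = np.maximum(0, scores - correct + 1.0)  # hinge loss with delta = 1
    margins[np.arange(N), y] = 0                     # correct class contributes no loss
    loss = margins.sum() / N + reg * np.sum(W * W)

    # Gradient: each violated margin adds X[i] to column j and
    # subtracts X[i] from the correct-class column.
    coeff = (margins > 0).astype(X.dtype)            # (N, C) indicator mask
    incorrect_counts = coeff.sum(axis=1)             # margin violations per example
    coeff[np.arange(N), y] -= incorrect_counts       # the '-=' discussed above
    dW = X.T.dot(coeff) / N + 2 * reg * W
    return loss, dW
```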

Incorrect Cross-validation for knn.

In knn.ipynb file, you have:

y_cross_validation_pred = classifier_k.predict_labels(X_train_folds[n], k)

This is incorrect because predict_labels takes in a distance matrix but you pass in a raw test matrix. So, you need to have an additional step as:

dists = classifier_k.compute_distances_no_loops(X_train_folds[n])
y_cross_validation_pred = classifier_k.predict_labels(dists, k)

Or you can use the predict function in k_nearest_neighbor, which computes the distance matrix internally and takes the raw data directly:

y_cross_validation_pred = classifier_k.predict(X_train_folds[n], k=k)
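For completeness, the distance matrix that predict_labels expects can be built without loops via the expansion ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b. A sketch (the function mirrors, but may not exactly match, the assignment's signature):

```python
import numpy as np

def compute_distances_no_loops(X_train, X_test):
    # Squared norms of each row, shaped for broadcasting.
    tr = (X_train ** 2).sum(axis=1)[None, :]   # (1, num_train)
    te = (X_test ** 2).sum(axis=1)[:, None]    # (num_test, 1)
    cross = X_test.dot(X_train.T)              # (num_test, num_train)
    # Clamp tiny negatives from floating-point error before the sqrt.
    return np.sqrt(np.maximum(te + tr - 2 * cross, 0))
```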

Assignment 2 - Train Model

Hi, can you please explain a bit how you came up with the best model in Assignment 2 - Fully Connected Nets - "Train a good model!"?

How did you end up with this?

weight_scale = 5e-2
learning_rate = 1e-3
model = FullyConnectedNet([100, 75, 50, 25],
                          weight_scale=weight_scale, dtype=np.float64)

Thanks!

Andres
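Hyperparameters like weight_scale and learning_rate are typically found by random search on a log scale, keeping whichever setting scores best on a validation set. A hedged sketch (train_and_eval is a hypothetical stand-in for training the FullyConnectedNet and returning validation accuracy; the ranges are assumptions, not the values actually used):

```python
import numpy as np

def random_search(train_and_eval, num_trials=20, seed=0):
    # Sample candidate hyperparameters log-uniformly and keep the best.
    rng = np.random.RandomState(seed)
    best_params, best_score = None, -np.inf
    for _ in range(num_trials):
        weight_scale = 10 ** rng.uniform(-3, -1)   # e.g. 1e-3 .. 1e-1
        learning_rate = 10 ** rng.uniform(-4, -2)  # e.g. 1e-4 .. 1e-2
        val_acc = train_and_eval(weight_scale, learning_rate)
        if val_acc > best_score:
            best_params, best_score = (weight_scale, learning_rate), val_acc
    return best_params, best_score
```

Searching on a log scale matters because these parameters act multiplicatively; a uniform search over [1e-4, 1e-2] would spend almost all its trials near 1e-2.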
