Deep Learning

Description

Code for the labs of the Deep Learning course of the MSc in Artificial Intelligence at the University of Amsterdam.

Lab 1 - Neural Networks, Convolutions and TensorFlow

We explore image classification with two neural network architectures: multi-layer perceptrons (MLPs) and convolutional neural networks (CNNs). We use the CIFAR-10 dataset and experiment with various hyperparameters for the MLP, such as the learning rate, weight initialization, activation functions, and dropout. For the CNN, we apply data augmentation and batch normalization. As expected, the CNN performs considerably better than the MLP.
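
For illustration, below is a minimal TensorFlow 1.x sketch of a small CNN with batch normalization in the spirit of this lab; the layer sizes, learning rate, and overall structure are illustrative assumptions and are not taken from the lab code.

```python
import tensorflow as tf

def convnet(images, is_training):
    """Small CNN with batch normalization for 32x32x3 CIFAR-10 images."""
    x = tf.layers.conv2d(images, filters=64, kernel_size=3, padding='same')
    x = tf.layers.batch_normalization(x, training=is_training)
    x = tf.nn.relu(x)
    x = tf.layers.max_pooling2d(x, pool_size=2, strides=2)

    x = tf.layers.conv2d(x, filters=128, kernel_size=3, padding='same')
    x = tf.layers.batch_normalization(x, training=is_training)
    x = tf.nn.relu(x)
    x = tf.layers.max_pooling2d(x, pool_size=2, strides=2)

    x = tf.reshape(x, [-1, 8 * 8 * 128])           # 32 -> 16 -> 8 after two poolings
    x = tf.layers.dense(x, 384, activation=tf.nn.relu)
    return tf.layers.dense(x, 10)                   # 10 CIFAR-10 classes

images = tf.placeholder(tf.float32, [None, 32, 32, 3])
labels = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)

logits = convnet(images, is_training)
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

# Batch-norm moving averages are updated through the UPDATE_OPS collection.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
```

The dependency on `UPDATE_OPS` is what keeps the batch-norm moving statistics in sync with training; omitting it is a common pitfall in TensorFlow 1.x.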

Problems and Solutions

Known Issues

Due to a bug, the TensorFlow MLP test results are substantially inflated. The code has been fixed, and the report will be updated soon.

Lab 2 - Recurrent Neural Networks

We explore the long-term dependency modelling capabilities of recurrent neural networks (RNNs) and long short-term memory (LSTM) networks. Additionally, we train a character-level language model on natural language text and experiment with character-by-character text generation. Our initial experiments on a simple palindrome task indicate that, as expected, the LSTM is better at learning long-term dependencies than the vanilla RNN. We therefore use a stacked LSTM network for the character-level language model.
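
As a rough illustration, here is a minimal TensorFlow 1.x sketch of a two-layer character-level LSTM language model of this kind; the vocabulary size, hidden size, learning rate, and the handling of state across windows are illustrative assumptions rather than details taken from the lab code.

```python
import tensorflow as tf

vocab_size, hidden_units, seq_length = 87, 128, 30     # illustrative sizes

inputs  = tf.placeholder(tf.int64, [None, seq_length])  # character ids
targets = tf.placeholder(tf.int64, [None, seq_length])  # inputs shifted by one step

one_hot = tf.one_hot(inputs, vocab_size)

# Two stacked LSTM layers.
cells = [tf.nn.rnn_cell.BasicLSTMCell(hidden_units) for _ in range(2)]
stacked = tf.nn.rnn_cell.MultiRNNCell(cells)

# Unroll over a fixed window; with truncated BPTT, gradients only flow through
# these seq_length steps (carrying state across windows is omitted for brevity).
outputs, final_state = tf.nn.dynamic_rnn(stacked, one_hot, dtype=tf.float32)

logits = tf.layers.dense(outputs, vocab_size)            # per-step logits
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=targets, logits=logits))
train_op = tf.train.AdamOptimizer(2e-3).minimize(loss)
```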

Problems and Solutions

Sample Results

The results below were generated by a two-layer stacked LSTM trained with truncated backpropagation through time over sequences of 30 characters.

Decoded samples during training on a collection of works by Carl Sagan:

| Training Step | First 30 Characters | More Characters |
| --- | --- | --- |
| 0 | q,6N–e]Ü5“18>JS'qfeh;+D©C©J | - |
| 100 | Ze sugthage we clol sutes ute | Ze sugthage we clol |
| 200 | ‘in capsionary It the moving 2 | 26, byfest on the marne of animent Fermar |
| 300 | ould beis. Thereece count of t | he lawsian are reall |
| 400 | I may to paracellions and in t | ime radio time.. Wou |
| 500 | moneasts are not the idea down | too dead, but triliple |
| 1500 | You look wavellices in million | s of Triton than change from magnifications |
Sentence Completion

| Initial Sequence | Decoded Text |
| --- | --- |
| theory of evolution | require to be compared within their states that there is nothing their cell can cross |
| Humans grew up in forests | through the rule the pollen traces the seeds we have seen there has been variated animals |
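
The sentence-completion samples above are produced by feeding the model an initial sequence and then sampling one character at a time from its output distribution. Below is a framework-agnostic NumPy sketch of that loop; the `predict_next` callback (standing in for a forward pass of the trained LSTM) and the temperature parameter are hypothetical and not part of the lab code.

```python
import numpy as np

def sample_text(predict_next, seed_ids, num_chars, temperature=1.0):
    """Extend `seed_ids` by repeatedly sampling the next character id."""
    generated = list(seed_ids)
    for _ in range(num_chars):
        logits = np.asarray(predict_next(generated), dtype=np.float64)
        logits = logits / temperature
        probs = np.exp(logits - logits.max())      # softmax with a stability shift
        probs /= probs.sum()
        next_id = np.random.choice(len(probs), p=probs)
        generated.append(next_id)
    return generated
```

A temperature of 1.0 samples directly from the softmax distribution; lower temperatures make the sampling greedier.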

Lab 3 - Generative Models

We explore generative modelling on the MNIST dataset with a Naive Bayes model and with variational inference. We examine the intractability issues that arise in generative modelling, specifically for Naive Bayes, then apply variational autoencoders (VAEs) to the same task and discuss how those issues are resolved. We compare the two models qualitatively by sampling images from each.
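
For illustration, here is a minimal TensorFlow 1.x sketch of a VAE for binarized MNIST, showing the reparameterization trick and the negative ELBO objective; the network sizes, latent dimensionality, and learning rate are illustrative assumptions, not values from the lab code.

```python
import tensorflow as tf

latent_dim = 20
x = tf.placeholder(tf.float32, [None, 784])     # flattened, binarized MNIST digits

# Encoder q(z|x): mean and log-variance of a diagonal Gaussian.
h = tf.layers.dense(x, 500, activation=tf.nn.relu)
z_mean = tf.layers.dense(h, latent_dim)
z_logvar = tf.layers.dense(h, latent_dim)

# Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
# so gradients can flow through the sampling step.
eps = tf.random_normal(tf.shape(z_mean))
z = z_mean + tf.exp(0.5 * z_logvar) * eps

# Decoder p(x|z): Bernoulli likelihood over pixels.
h_dec = tf.layers.dense(z, 500, activation=tf.nn.relu)
x_logits = tf.layers.dense(h_dec, 784)

# Negative ELBO = reconstruction term + KL(q(z|x) || p(z)) with p(z) = N(0, I).
recon = tf.reduce_sum(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=x, logits=x_logits), axis=1)
kl = -0.5 * tf.reduce_sum(1 + z_logvar - tf.square(z_mean) - tf.exp(z_logvar), axis=1)
loss = tf.reduce_mean(recon + kl)
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
```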

Problems and Solutions

Results


Manifold learned by the VAE


Samples from the Naive Bayes model

Poster Presentations

We presented the Weight Uncertainty in Neural Networks paper by Blundell et al. This was done in collaboration with Jose Gallego and Krsto Proroković.

The poster can be found here. The template for the poster was taken from David Duvenaud's repository.

Dependencies

  • Python 3.x: TensorFlow 1.4, SciPy, NumPy, Matplotlib

Copyright

Copyright © 2017 Dana Kianfar.

This project is distributed under the MIT license. If you are a student, please follow the UvA regulations governing Fraud and Plagiarism.

