CS224N-Stanford-Winter-2019 & 2021

A collection of ALL relevant materials for the Stanford CS224N course (Winter 2019 & 2021). THANKS TO THE PROFESSOR AND TAs!
All rights to the relevant materials belong to Stanford University.

Links

As1

  • lec1 Word Vectors

reading

  • note: Word Vectors I: Introduction, SVD and Word2Vec
  • Word2Vec Tutorial - The Skip-Gram Model
  • The mathematics of word2vec explained in detail (in Chinese)
  • Efficient Estimation of Word Representations in Vector Space (original word2vec paper)
  • Distributed Representations of Words and Phrases and their Compositionality (negative sampling paper; loss sketched below)
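
The two word2vec papers listed above define the skip-gram model and its negative-sampling objective. Here is a minimal NumPy sketch of that loss; the vocabulary size, dimensions, and word indices are illustrative assumptions, not Assignment 1's actual setup.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sizes and random vectors, not the assignment's real configuration.
vocab_size, dim = 1000, 50
rng = np.random.default_rng(0)
V = rng.normal(scale=0.1, size=(vocab_size, dim))  # center-word vectors v_c
U = rng.normal(scale=0.1, size=(vocab_size, dim))  # context-word vectors u_o

def neg_sampling_loss(center, context, negatives):
    """J = -log sigma(u_o . v_c) - sum_k log sigma(-u_k . v_c)."""
    v_c = V[center]
    pos = -np.log(sigmoid(U[context] @ v_c))             # pull the true pair together
    neg = -np.sum(np.log(sigmoid(-U[negatives] @ v_c)))  # push sampled words away
    return pos + neg

# Hypothetical indices: center word 3, observed context word 7, 5 negatives.
print(neg_sampling_loss(3, 7, rng.integers(0, vocab_size, size=5)))
```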

practice

  • Computation on Arrays: Broadcasting (quick example after this list)
  • coding: Assignment1
  • Gensim
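
A quick illustration of the NumPy broadcasting rules covered in the reading above (the array shapes are arbitrary):

```python
import numpy as np

X = np.arange(12, dtype=float).reshape(3, 4)  # shape (3, 4)
row_means = X.mean(axis=1)                    # shape (3,)

# (3, 4) minus (3, 1): the size-1 axis is broadcast across the 4 columns,
# so no explicit loop over rows is needed.
centered = X - row_means[:, np.newaxis]
print(centered.mean(axis=1))                  # ~[0. 0. 0.]
```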

As2

  • lec2 Word Vectors 2 and Word Window Classification
  • lec3 Neural Networks
  • lec4 Backpropagation

reading

  • note: Word Vectors II: GloVe, Evaluation and Training
  • Review of differential calculus
  • Gradient notes
  • CS231n notes on network architectures
  • CS231n notes on backprop

practice

  • python review
  • written: Assignment2 Derivatives and implementation of the word2vec algorithm
  • coding: Assignment2 Derivatives and implementation of the word2vec algorithm (gradient-check sketch below)
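
Implementations like Assignment 2's word2vec are typically validated with a centered finite-difference gradient check. A minimal sketch of the idea, not the course's actual checker:

```python
import numpy as np

def grad_check(f, x, analytic_grad, h=1e-4):
    """Compare analytic gradients against centered finite differences.

    f maps an array x to a scalar loss. A sketch, not the course's grader.
    """
    it = np.nditer(x, flags=["multi_index"], op_flags=["readwrite"])
    while not it.finished:
        ix = it.multi_index
        old = x[ix]
        x[ix] = old + h
        fplus = f(x)
        x[ix] = old - h
        fminus = f(x)
        x[ix] = old                          # restore the perturbed entry
        numeric = (fplus - fminus) / (2 * h)
        assert abs(numeric - analytic_grad[ix]) < 1e-5, f"mismatch at {ix}"
        it.iternext()

# Example: f(x) = sum(x^2) has gradient 2x, so the check should pass.
x = np.random.randn(4, 5)
grad_check(lambda v: np.sum(v ** 2), x, 2 * x)
print("gradient check passed")
```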

As3

  • lec5 Dependency Parsing

reading

  • note: Dependency Parsing
  • note: Language Models and Recurrent Neural Networks

practice

  • PyTorch Tutorial
  • written: Assignment3 Dependency parsing and neural network foundations
  • coding: Assignment3 Dependency parsing and neural network foundations (transition-system sketch below)
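
Assignment 3 builds a transition-based dependency parser, which reduces to three moves over a stack and a buffer: SHIFT, LEFT-ARC, and RIGHT-ARC. A toy trace of the mechanism; the sentence and transition sequence are illustrative assumptions:

```python
def parse_step(stack, buffer, arcs, transition):
    if transition == "S":            # SHIFT: move the next word onto the stack
        stack.append(buffer.pop(0))
    elif transition == "LA":         # LEFT-ARC: top of stack heads second-from-top
        dependent = stack.pop(-2)
        arcs.append((stack[-1], dependent))
    elif transition == "RA":         # RIGHT-ARC: second-from-top heads the top
        dependent = stack.pop(-1)
        arcs.append((stack[-1], dependent))
    return stack, buffer, arcs

stack, buffer, arcs = ["ROOT"], ["I", "ate", "fish"], []
for t in ["S", "S", "LA", "S", "RA", "RA"]:
    stack, buffer, arcs = parse_step(stack, buffer, arcs, t)
print(arcs)  # [('ate', 'I'), ('ate', 'fish'), ('ROOT', 'ate')]
```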

As4

  • lec6 Recurrent Neural Networks and Language Models
  • lec7 Vanishing Gradients, Fancy RNNs, Seq2Seq
  • lec8 Machine Translation, Attention, Subword Models
  • lec12 Subword Models
  • Winter 2020 | Low Resource Machine Translation

reading

  • lec6: The Unreasonable Effectiveness of Recurrent Neural Networks
  • lec7: Understanding LSTM Networks
  • lec8: Sequence to Sequence Learning with Neural Networks
  • lec8: Neural Machine Translation by Jointly Learning to Align and Translate
  • lec8: Effective Approaches to Attention-based Neural Machine Translation
  • lec8: Massive Exploration of Neural Machine Translation Architectures (practical advice for hyperparameter choices)
  • lec8 note

practice

  • written: Assignment4 Neural Machine Translation with sequence-to-sequence, attention, and subwords
  • coding: Assignment4 Neural Machine Translation with sequence-to-sequence, attention, and subwords (attention sketch below)
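
The attention papers above (and Assignment 4's decoder) share one core step: score each encoder hidden state against the current decoder state, softmax the scores, and take the weighted average as the context vector. A minimal NumPy sketch with illustrative shapes:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

T, h = 6, 8                            # source length, hidden size (illustrative)
rng = np.random.default_rng(0)
enc_hiddens = rng.normal(size=(T, h))  # encoder states h_1 .. h_T
dec_state = rng.normal(size=h)         # current decoder state s_t

scores = enc_hiddens @ dec_state       # dot-product attention scores, shape (T,)
alpha = softmax(scores)                # attention distribution over source words
context = alpha @ enc_hiddens          # weighted average of encoder states, (h,)
print(alpha.round(2), context.shape)
```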

As5

  • lec9 Practical Tips for Projects
  • lec10 Question Answering
  • lec11 Convolutional Networks for NLP
  • lec13 Contextual Word Embeddings
  • lec14 Transformers and Self-Attention
  • Winter 2020 | BERT and Other Pre-trained Language Models
  • Hung-yi Lee: Machine Learning (Spring 2020), Transformer

reading

  • note: Machine Translation, Sequence-to-sequence and Attention
  • read: Attention and Augmented Recurrent Neural Networks
  • lec14: The Illustrated Transformer

practice

  • written: Assignment5 (2021) Self-supervised learning and fine-tuning with Transformers
  • coding: Assignment5 (2021) Self-supervised learning and fine-tuning with Transformers (self-attention sketch below)
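
The core operation inside the Transformers fine-tuned in Assignment 5 is causally masked self-attention. A single-head NumPy sketch; the sizes and random weights are illustrative assumptions:

```python
import numpy as np

T, d = 5, 16
rng = np.random.default_rng(0)
x = rng.normal(size=(T, d))                     # token representations
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

Q, K, V = x @ Wq, x @ Wk, x @ Wv
scores = Q @ K.T / np.sqrt(d)                   # scaled dot-product scores
mask = np.triu(np.ones((T, T)), k=1).astype(bool)
scores[mask] = -1e9                             # causal mask: no peeking ahead
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
out = weights @ V
print(out.shape)                                # (5, 16)
```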

Final project

  • lec15 Natural Language Generation
  • lec16 Coreference Resolution
  • lec17 Multitask Learning
  • lec18 Constituency Parsing, TreeRNNs
    • Paper: Parsing with Compositional Vector Grammars
    • Paper: Constituency Parsing with a Self-Attentive Encoder
  • lec19 Bias in AI
  • lec20 Future of NLP + Deep Learning

reading

  • Final project practical tips
  • Default final project handout
  • Project proposal instructions
  • Practical Methodology (Deep Learning book chapter)
  • Highway Networks (layer sketch below)
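
The Highway Networks paper above introduces a gated skip connection, y = g * relu(W x) + (1 - g) * x, used in many NLP encoders. A minimal PyTorch sketch with an illustrative dimension:

```python
import torch
import torch.nn as nn

class Highway(nn.Module):
    """One highway layer: a gate blends a nonlinear transform with the input."""

    def __init__(self, dim):
        super().__init__()
        self.transform = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, dim)

    def forward(self, x):
        g = torch.sigmoid(self.gate(x))   # transform gate in (0, 1)
        h = torch.relu(self.transform(x))
        return g * h + (1 - g) * x        # carry the input through where g is small

layer = Highway(64)
print(layer(torch.randn(8, 64)).shape)    # torch.Size([8, 64])
```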

practice

  • annotate code
  • train the baseline

2021

lec9 Transformers

  • slides: lec9
  • read: Attention Is All You Need (positional-encoding sketch below)
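
"Attention Is All You Need" injects word order through sinusoidal positional encodings: even dimensions use sine, odd dimensions use cosine, with geometrically increasing wavelengths. A small NumPy sketch (sizes are illustrative):

```python
import numpy as np

def positional_encoding(max_len, d_model):
    pos = np.arange(max_len)[:, None]                  # (max_len, 1)
    i = np.arange(d_model // 2)[None, :]               # (1, d_model / 2)
    angles = pos / np.power(10000.0, 2 * i / d_model)  # wavelength per dimension pair
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)                       # even dimensions: sine
    pe[:, 1::2] = np.cos(angles)                       # odd dimensions: cosine
    return pe

pe = positional_encoding(max_len=50, d_model=32)
print(pe.shape)   # (50, 32)
```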

lec10 More about Transformers and Pretraining

  • slides: lec10 More about Transformers and Pretraining
  • read: The Illustrated BERT, ELMo, and co.
  • read: Contextual Word Representations: A Contextual Introduction
  • read: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (masked-LM demo below)
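
To poke at BERT's masked-language-model pretraining objective interactively, the Hugging Face `transformers` library (an assumption here, not part of the course materials) offers a fill-mask pipeline:

```python
from transformers import pipeline

# Downloads bert-base-uncased on first run; predicts fillers for [MASK].
unmasker = pipeline("fill-mask", model="bert-base-uncased")
for pred in unmasker("The capital of France is [MASK]."):
    print(f"{pred['token_str']:>10}  {pred['score']:.3f}")
```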

lec11 Question Answering

  • slides: lec11 Question Answering
  • read: SQuAD: 100,000+ Questions for Machine Comprehension of Text (span-extraction sketch after this list)
  • read: Bidirectional Attention Flow for Machine Comprehension
  • read: Reading Wikipedia to Answer Open-Domain Questions
  • read: Latent Retrieval for Weakly Supervised Open Domain Question Answering
  • read: Dense Passage Retrieval for Open-Domain Question Answering
  • read: Learning Dense Representations of Phrases at Scale
  • ACL2020 Tutorial: Open-Domain Question Answering
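
SQuAD-style extractive QA ultimately reduces to picking a start and an end token: given per-token start and end scores, choose the highest-scoring valid span. A brute-force sketch; the random logits stand in for what a model such as BiDAF would produce:

```python
import numpy as np

def best_span(start_logits, end_logits, max_len=15):
    """Return (start, end) maximizing start + end score over valid spans."""
    n = len(start_logits)
    best, best_score = (0, 0), -np.inf
    for i in range(n):
        for j in range(i, min(i + max_len, n)):   # enforce start <= end, bounded length
            score = start_logits[i] + end_logits[j]
            if score > best_score:
                best, best_score = (i, j), score
    return best

rng = np.random.default_rng(0)
print(best_span(rng.normal(size=30), rng.normal(size=30)))
```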

lec12 Natural Language Generation

  • slides: Natural Language Generation
  • read: The Curious Case of Neural Text Degeneration (top-p sampling sketch after this list)
  • read: Get To The Point: Summarization with Pointer-Generator Networks
  • read: Hierarchical Neural Story Generation
  • read: How NOT To Evaluate Your Dialogue System
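
"The Curious Case of Neural Text Degeneration" above introduces nucleus (top-p) sampling: sample only from the smallest set of tokens whose cumulative probability exceeds p. A minimal sketch over a toy distribution:

```python
import numpy as np

def nucleus_sample(probs, p=0.9, rng=None):
    """Sample from the smallest set of tokens whose probability mass >= p."""
    if rng is None:
        rng = np.random.default_rng()
    order = np.argsort(probs)[::-1]                 # tokens by descending probability
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, p) + 1            # size of the nucleus
    nucleus = order[:cutoff]
    renorm = probs[nucleus] / probs[nucleus].sum()  # renormalize inside the nucleus
    return rng.choice(nucleus, p=renorm)

# Toy 5-token distribution; with p=0.8 only tokens 0, 1, and 2 can be drawn.
vocab_probs = np.array([0.5, 0.2, 0.15, 0.1, 0.05])
print(nucleus_sample(vocab_probs, p=0.8))
```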

lec14 T5 and large language models: The good, the bad, and the ugly

  • slides: T5 and large language models: The good, the bad, and the ugly
  • read: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

lec15 Integrating knowledge in language models

  • slides: Integrating knowledge in language models
  • read: ERNIE: Enhanced Language Representation with Informative Entities
  • read: Barack’s Wife Hillary: Using Knowledge Graphs for Fact-Aware Language Modeling
  • read: Pretrained Encyclopedia: Weakly Supervised Knowledge-Pretrained Language Model
  • read: Language Models as Knowledge Bases?

lec16 Social & Ethical Considerations in NLP Systems

  • slides: Social & Ethical Considerations in NLP Systems

lec17 Model Analysis and Explanation

  • slides: Model Analysis and Explanation

lec18 Future of NLP + Deep Learning

  • slides: Future of NLP + Deep Learning
