Read ML papers regularly.
- Deep RL
- Adversarial Examples
- Approximate Inference
- Applications
- CV
- NLP
- Speech
- Learning Theory
- GAN
- Deep Learning Theory
- Optimization
- Neuroscience
- Equivariant Networks
- Where Neuroscience meets AI (And What’s in Store for the Future)
- Advances in Approximate Inference
- Practical Uncertainty Estimation and Out-of-Distribution Robustness in Deep Learning
- Policy Optimization in Reinforcement Learning
- Explaining Machine Learning Predictions: State-of-the-art, Challenges, and Opportunities
- Equivariant Networks for Hierarchical Structures
- Deep Transformation-Invariant Clustering
- Escaping the Gravitational Pull of Softmax
- Rethinking Pre-training and Self-training
- Do Adversarially Robust ImageNet Models Transfer Better?
- Contrastive learning of global and local features for medical image segmentation with limited annotations
- Bootstrap Your Own Latent - A New Approach to Self-Supervised Learning
- On the training dynamics of deep networks with L2 regularization
- Towards a Better Global Loss Landscape of GANs
- Is normalization indispensable for training deep neural networks?
- Debiased Contrastive Learning
- The Autoencoding Variational Autoencoder
- Joint Contrastive Learning with Infinite Possibilities
- What Do Neural Networks Learn When Trained With Random Labels?
- H-Mem: Harnessing synaptic plasticity with Hebbian Memory Networks
- A Ranking-based, Balanced Loss Function Unifying Classification and Localisation in Object Detection
- Large-Scale Adversarial Training for Vision-and-Language Representation Learning
- Measuring Robustness to Natural Distribution Shifts in Image Classification
- The Complete Lasso Tradeoff Diagram
- Adversarial Training is a Form of Data-dependent Operator Norm Regularization
- Most ReLU Networks Suffer from l2 Adversarial Perturbations
- What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation
- Unsupervised Data Augmentation for Consistency Training
- How to represent part-whole hierarchies in a neural network - Geoffrey Hinton
- Exploring Simple Siamese Representation Learning - after BYOL (SimSiam)
- Understanding self-supervised Learning Dynamics without Contrastive Pairs - after BYOL & SimSiam
- Transformer visualization via dictionary learning: contextualized embedding as a linear superposition of transformer factors
- EfficientNetV2
- Towards General Purpose Vision Systems
- How Many Data Points is a Prompt Worth?