Convolutional Neural Networks
Computer Vision
CS 543 / ECE 549
University of Illinois
Jia-Bin Huang
05/05/15
Reminder: final project
• Posters on Friday, May 8 at 7pm in SC 2405
– two rounds, 1.25 hr each
• Papers due May 11 by email
• Cannot accept late papers/posters due to
grading deadlines
• Send Derek an email if you can’t present your
poster
Today’s class
• Overview
• Convolutional Neural Network (CNN)
• Understanding and Visualizing CNN
• Training CNN
• Probabilistic Interpretation
Image Categorization: Training phase
[Diagram: training images + training labels → image features → classifier training → trained classifier]
Image Categorization: Testing phase
[Diagram: training as above; at test time, a test image → image features → trained classifier → prediction, e.g. “Outdoor”]
Features are the Keys
SIFT [Lowe IJCV 04] HOG [Dalal and Triggs CVPR 05]
SPM [Lazebnik et al. CVPR 06] DPM [Felzenszwalb et al. PAMI 10]
Color Descriptor [Van De Sande et al. PAMI 10]
Learning a Hierarchy of Feature Extractors
• Each layer of the hierarchy extracts features from the output of the previous layer
• All the way from pixels → classifier
• Layers have (nearly) the same structure
[Diagram: image/video pixels → Layer 1 → Layer 2 → Layer 3 → simple classifier → image/video labels]
Engineered vs. learned features
[Engineered pipeline: Image → Feature extraction → Pooling → Classifier → Label]
[Learned pipeline: Image → Convolution/pool (×5) → Dense (×3) → Label]
Biological neuron and Perceptrons
A biological neuron An artificial neuron (Perceptron)
- a linear classifier
Simple, Complex and Hypercomplex cells
David H. Hubel and Torsten Wiesel
David Hubel's Eye, Brain, and Vision
Suggested a hierarchy of feature detectors
in the visual cortex, with higher level features
responding to patterns of activation in lower
level cells, and propagating activation
upwards to still higher level cells.
Hubel/Wiesel Architecture and Multi-layer Neural Network
Hubel and Wiesel’s architecture Multi-layer Neural Network
- A non-linear classifier
Multi-layer Neural Network
• A non-linear classifier
• Training: find the network weights w that minimize the error between the true training labels y_i and the estimated labels f_w(x_i)
• Minimization can be done by gradient descent, provided f is differentiable
• This training method is called back-propagation
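To make the training loop concrete, here is a minimal numpy sketch of back-propagation for a tiny two-layer network trained by gradient descent. The toy data, layer sizes, loss, and learning rate are illustrative choices, not anything prescribed by the lecture.

```python
import numpy as np

# Minimal two-layer network trained by gradient descent (back-propagation).
# Toy data, layer sizes, and learning rate are illustrative choices.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2))                    # inputs x_i
y = (X[:, 0] * X[:, 1] > 0).astype(float)[:, None]   # non-linearly separable labels y_i

W1, b1 = rng.standard_normal((2, 8)) * 0.5, np.zeros(8)
W2, b2 = rng.standard_normal((8, 1)) * 0.5, np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    # Forward pass: compute f_w(x)
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    E = np.mean((p - y) ** 2)          # error between predictions and true labels

    # Backward pass: the chain rule gives dE/dW for every layer
    dp = 2 * (p - y) / len(X)
    dz2 = dp * p * (1 - p)
    dW2, db2 = h.T @ dz2, dz2.sum(0)
    dh = dz2 @ W2.T
    dz1 = dh * (1 - h ** 2)
    dW1, db1 = X.T @ dz1, dz1.sum(0)

    # Gradient descent update on all weights
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final training error:", E)
```

The same chain-rule bookkeeping extends layer by layer to arbitrarily deep networks.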
Convolutional Neural Networks
(CNN, ConvNet, DCN)
• CNN = a multi-layer neural network with
– Local connectivity
– Shared weight parameters across spatial positions
• One activation map (a depth slice), computed
with one set of weights
Image credit: A. Karpathy
Neocognitron [Fukushima, Biological Cybernetics 1980]
LeNet [LeCun et al. 1998]
Gradient-based learning applied to document
recognition [LeCun, Bottou, Bengio, Haffner 1998]
What is a Convolution?
• Weighted moving sum
[Figure: an input feature map convolved with a learned filter produces an activation map]
slide credit: S. Lazebnik
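As a sketch of this “weighted moving sum”, the snippet below slides one filter over a single-channel input to produce one activation map; real layers add multiple filters, multiple input channels, stride, and padding. (Like most CNN implementations, it computes a cross-correlation, i.e. the filter is not flipped.)

```python
import numpy as np

# Convolution as a weighted moving sum (sketch: single channel, "valid" output,
# no stride or padding, which are simplifying assumptions).
def conv2d(image, kernel):
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # The same kernel weights are reused at every spatial position
            # (weight sharing), and each output only looks at a local window
            # of the input (local connectivity).
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(8, 8)
kernel = np.array([[1., 0., -1.],
                   [2., 0., -2.],
                   [1., 0., -1.]])      # a Sobel-like edge filter
activation_map = conv2d(image, kernel)
print(activation_map.shape)             # (6, 6): one activation map per filter
```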
Convolutional Neural Networks
[Pipeline: Input image → Convolution (learned) → Non-linearity → Spatial pooling → Normalization → Feature maps]
slide credit: S. Lazebnik
Convolutional Neural Networks
[Pipeline as above; convolution highlighted: each learned filter is slid over the input feature map to produce an activation map]
slide credit: S. Lazebnik
Convolutional Neural Networks
[Pipeline as above; non-linearity highlighted]
Rectified Linear Unit (ReLU)
slide credit: S. Lazebnik
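The ReLU non-linearity is applied element-wise to the feature maps; a one-line sketch:

```python
import numpy as np

# Rectified Linear Unit: element-wise non-linearity on a feature map.
def relu(x):
    return np.maximum(0.0, x)

feature_map = np.array([[-1.5, 0.3],
                        [ 2.0, -0.2]])
print(relu(feature_map))   # negative responses are clipped to zero
```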
Convolutional Neural Networks
[Pipeline as above; spatial pooling highlighted]
Max pooling
slide credit: S. Lazebnik
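A sketch of max pooling over non-overlapping 2×2 windows; the window size and stride are common choices, not the only ones:

```python
import numpy as np

# 2x2 max pooling with stride 2: keep the strongest response in each window.
def max_pool(fmap, k=2):
    H, W = fmap.shape
    H, W = H - H % k, W - W % k                    # drop any ragged border
    blocks = fmap[:H, :W].reshape(H // k, k, W // k, k)
    return blocks.max(axis=(1, 3))

fmap = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool(fmap))   # [[ 5.  7.] [13. 15.]]
```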
Convolutional Neural Networks
[Pipeline as above; normalization highlighted]
Feature maps before and after contrast normalization
slide credit: S. Lazebnik
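Contrast normalization comes in several variants; the sketch below implements one common local scheme (subtract the local mean, divide by the local standard deviation) and is not necessarily the exact formulation used on the slide.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Local contrast normalization of a feature map (one common variant, used here
# purely as an illustration).
def contrast_normalize(fmap, size=9, eps=1e-6):
    local_mean = uniform_filter(fmap, size)            # local average in a size x size window
    centered = fmap - local_mean
    local_std = np.sqrt(uniform_filter(centered ** 2, size))
    return centered / (local_std + eps)

fmap = np.random.rand(32, 32)
normalized = contrast_normalize(fmap)
print(normalized.mean().round(3), normalized.std().round(3))
```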
Convolutional Neural Networks
[Pipeline as above]
Convolutional filters are trained in a supervised manner by back-propagating classification error
slide credit: S. Lazebnik
SIFT Descriptor
[Pipeline: image pixels → apply gradient filters → spatial pool (sum) → normalize to unit length → feature vector]
Lowe [IJCV 2004]
SIFT Descriptor
[Pipeline: image pixels → apply oriented filters → spatial pool (sum) → normalize to unit length → feature vector]
Lowe [IJCV 2004]
slide credit: R. Fergus
Spatial Pyramid Matching
[Pipeline: SIFT features → filter with visual words → multi-scale spatial pool (sum) → max → classifier]
Lazebnik, Schmid, Ponce [CVPR 2006]
slide credit: R. Fergus
Deformable Part Model
Deformable Part Models are Convolutional Neural Networks [Girshick et al. CVPR 15]
AlexNet
• Similar framework to LeCun’98 but:
• Bigger model (7 hidden layers, 650,000 units, 60,000,000 params)
• More data (10^6 vs. 10^3 images)
• GPU implementation (50x speedup over CPU)
• Trained on two GPUs for a week
A. Krizhevsky, I. Sutskever, and G. Hinton,
ImageNet Classification with Deep Convolutional Neural Networks, NIPS 2012
Using CNN for Image Classification
[Diagram: AlexNet with fixed input size 224×224×3; fully connected layer fc7 (d = 4096); averaging; softmax layer; example output “Jia-Bin”]
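The classification step at the end is a linear layer over the CNN feature followed by a softmax. A minimal sketch, with random placeholder weights rather than actual AlexNet parameters:

```python
import numpy as np

# Classification head on top of a CNN feature vector (sketch; the feature and
# weights are random placeholders, not real AlexNet parameters).
rng = np.random.default_rng(0)
feature = rng.standard_normal(4096)            # e.g., an fc7 activation, d = 4096
W = rng.standard_normal((1000, 4096)) * 0.01   # 1000 ImageNet classes
b = np.zeros(1000)

scores = W @ feature + b
scores -= scores.max()                          # subtract max for numerical stability
probs = np.exp(scores) / np.exp(scores).sum()   # softmax over class scores
print("predicted class:", probs.argmax(), "confidence:", probs.max())
```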
ImageNet Challenge 2012-2014
Team | Year | Place | Error (top-5) | External data
SuperVision – Toronto (7 layers) | 2012 | - | 16.4% | no
SuperVision | 2012 | 1st | 15.3% | ImageNet 22k
Clarifai – NYU (7 layers) | 2013 | - | 11.7% | no
Clarifai | 2013 | 1st | 11.2% | ImageNet 22k
VGG – Oxford (16 layers) | 2014 | 2nd | 7.32% | no
GoogLeNet (19 layers) | 2014 | 1st | 6.67% | no
Human expert* | | | 5.1% |

Team | Method | Error (top-5)
DeepImage - Baidu | Data augmentation + multi-GPU | 5.33%
PReLU-nets - MSRA | Parametric ReLU + smart initialization | 4.94%
BN-Inception ensemble - Google | Reducing internal covariate shift | 4.82%
Beyond classification
• Detection
• Segmentation
• Regression
• Pose estimation
• Matching patches
• Synthesis
and many more…
R-CNN: Regions with CNN features
• Trained on ImageNet classification
• Finetune CNN on PASCAL
RCNN [Girshick et al. CVPR 2014]
Fast R-CNN
Fast R-CNN [Girshick 2015]
https://github.com/rbgirshick/fast-rcnn
Labeling Pixels: Semantic Labels
Fully Convolutional Networks for Semantic Segmentation [Long et al. CVPR 2015]
Labeling Pixels: Edge Detection
DeepEdge: A Multi-Scale Bifurcated Deep Network for Top-Down Contour Detection
[Bertasius et al. CVPR 2015]
CNN for Regression
DeepPose [Toshev and Szegedy CVPR 2014]
CNN as a Similarity Measure for Matching
FaceNet [Schroff et al. 2015]
Stereo matching [Zbontar and LeCun CVPR 2015]
Compare patch [Zagoruyko and Komodakis 2015]
Match ground and aerial images [Lin et al. CVPR 2015]
FlowNet [Fischer et al. 2015]
CNN for Image Generation
Learning to Generate Chairs with Convolutional Neural Networks [Dosovitskiy et al. CVPR 2015]
Chair Morphing
Learning to Generate Chairs with Convolutional Neural Networks [Dosovitskiy et al. CVPR 2015]
Understanding and Visualizing CNN
• Find images that maximize some class scores
• Individual neuron activation
• Visualize input pattern using deconvnet
• Invert CNN features
• Breaking CNNs
Find images that maximize some class scores
person: HOG template
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
[Simonyan et al. ICLR Workshop 2014]
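These class-score visualizations are produced by gradient ascent on the image itself, with an L2 regularizer on the pixels. The sketch below uses a linear scorer as a stand-in for the CNN class score (an assumption made only so the gradient is trivial to write down); for a real network the gradient would come from back-propagation.

```python
import numpy as np

# Gradient ascent on the input image to maximize a class score (sketch).
# A linear scorer w.x stands in for the CNN's class score S_c(x); for a real
# network, dS_c/dx would be computed by back-propagation.
rng = np.random.default_rng(0)
D = 32 * 32                        # a small "image" for illustration
w = rng.standard_normal(D)         # stand-in class-score weights

x = np.zeros(D)                    # start from a blank image
step, weight_decay = 0.1, 0.01     # illustrative hyperparameters
for _ in range(100):
    grad = w                                   # d(w.x)/dx; replace with backprop for a CNN
    x += step * (grad - weight_decay * x)      # ascend the score, L2-regularize the image

print("final class score:", float(w @ x))
```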
Individual Neuron Activation
RCNN [Girshick et al. CVPR 2014]
Individual Neuron Activation
RCNN [Girshick et al. CVPR 2014]
Individual Neuron Activation
RCNN [Girshick et al. CVPR 2014]
Visualizing the Input Pattern
• What input pattern originally caused a given
activation in the feature maps?
Visualizing and Understanding Convolutional Networks [Zeiler and Fergus, ECCV 2014]
Layer 1
Visualizing and Understanding Convolutional Networks [Zeiler and Fergus, ECCV 2014]
Layer 2
Visualizing and Understanding Convolutional Networks [Zeiler and Fergus, ECCV 2014]
Layer 3
Visualizing and Understanding Convolutional Networks [Zeiler and Fergus, ECCV 2014]
Layer 4 and 5
Visualizing and Understanding Convolutional Networks [Zeiler and Fergus, ECCV 2014]
Invert CNN features
• Reconstruct an image from CNN features
Understanding deep image representations by inverting them
[Mahendran and Vedaldi CVPR 2015]
CNN Reconstruction
Reconstruction from different layers
Multiple reconstructions
Understanding deep image representations by inverting them
[Mahendran and Vedaldi CVPR 2015]
Breaking CNNs
Intriguing properties of neural networks [Szegedy ICLR 2014]
What is going on?
x ← x + α ∂E/∂x
(∂E/∂x: the gradient of the classification error E with respect to the image x)
http://karpathy.github.io/2015/03/30/breaking-convnets/
Explaining and Harnessing Adversarial Examples [Goodfellow ICLR 2015]
What is going on?
• Recall gradient descent training: modify the
weights to reduce classifier error
• Adversarial examples: modify the image to
increase classifier error
http://karpathy.github.io/2015/03/30/breaking-convnets/
Explaining and Harnessing Adversarial Examples [Goodfellow ICLR 2015]
Weight update in training: w ← w − α ∂E/∂w
Adversarial image update: x ← x + α ∂E/∂x
Fooling a linear classifier
• Perceptron weight update: add a small multiple of the example to the weight vector:
w ← w + αx
• To fool a linear classifier, add a small multiple of the weight vector to the training example:
x ← x + αw
http://karpathy.github.io/2015/03/30/breaking-convnets/
Explaining and Harnessing Adversarial Examples [Goodfellow ICLR 2015]
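A tiny numpy sketch of this fooling step; the classifier weights and the “image” are random placeholders:

```python
import numpy as np

# Fooling a linear classifier (sketch): adding a small multiple of the weight
# vector raises the class score while barely changing the image.
rng = np.random.default_rng(0)
D = 64 * 64
w = rng.standard_normal(D) * 0.01    # weights of a (hypothetical) linear classifier
x = rng.random(D)                    # an arbitrary "image"

alpha = 0.5
x_fooled = x + alpha * w             # x <- x + alpha * w

print("original score:", float(w @ x))
print("fooled score:  ", float(w @ x_fooled))
print("max pixel change:", float(np.abs(x_fooled - x).max()))
```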
Fooling a linear classifier
http://karpathy.github.io/2015/03/30/breaking-convnets/
Breaking CNNs
Deep Neural Networks are Easily Fooled: High Confidence Predictions for
Unrecognizable Images [Nguyen et al. CVPR 2015]
Images that both CNN and Human can recognize
Deep Neural Networks are Easily Fooled: High Confidence Predictions for
Unrecognizable Images [Nguyen et al. CVPR 2015]
Direct Encoding
Deep Neural Networks are Easily Fooled: High Confidence Predictions for
Unrecognizable Images [Nguyen et al. CVPR 2015]
Indirect Encoding
Deep Neural Networks are Easily Fooled: High Confidence Predictions for
Unrecognizable Images [Nguyen et al. CVPR 2015]
Take a break…
Image source: http://mehimandthecats.com/feline-care-guide/
Training Convolutional Neural Networks
• Backpropagation + stochastic gradient descent
with momentum
– Neural Networks: Tricks of the Trade
• Dropout
• Data augmentation
• Batch normalization
• Initialization
– Transfer learning
Dropout
Dropout: A simple way to prevent neural networks from overfitting [Srivastava JMLR 2014]
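A minimal sketch of dropout on a layer's activations, using the “inverted dropout” convention (scale at training time so nothing changes at test time); the keep probability is a common choice, not necessarily the paper's:

```python
import numpy as np

# Dropout on a layer's activations (sketch of the inverted-dropout variant).
rng = np.random.default_rng(0)
h = rng.random((4, 8))               # activations for a mini-batch of 4 examples

p_keep = 0.5                         # keep probability (a common choice)
mask = (rng.random(h.shape) < p_keep) / p_keep   # scale so expectations match at test time
h_train = h * mask                   # training: randomly drop units
h_test = h                           # test: use all units, no mask needed with this scaling

print(h_train.mean().round(3), h_test.mean().round(3))   # roughly equal in expectation
```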
Data Augmentation (Jittering)
• Create virtual training
samples
– Horizontal flip
– Random crop
– Color casting
– Geometric distortion
Deep Image [Wu et al. 2015]
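A sketch of two of these jittering operations (horizontal flip and random crop); the 256→224 crop size follows a common convention and is an illustrative choice:

```python
import numpy as np

# Simple jittering of a training image (sketch): horizontal flip and random crop.
rng = np.random.default_rng(0)
image = rng.random((256, 256, 3))          # H x W x C image with values in [0, 1]

def random_flip_and_crop(img, crop=224):
    if rng.random() < 0.5:
        img = img[:, ::-1, :]              # horizontal flip
    top = rng.integers(0, img.shape[0] - crop + 1)
    left = rng.integers(0, img.shape[1] - crop + 1)
    return img[top:top + crop, left:left + crop, :]   # random crop

virtual_sample = random_flip_and_crop(image)
print(virtual_sample.shape)                 # (224, 224, 3)
```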
Parametric Rectified Linear Unit
Delving Deep into Rectifiers: Surpassing Human-Level Performance on
ImageNet Classification [He et al. 2015]
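The PReLU itself is one line: like ReLU, but with a learned slope a on the negative side (the value of a below is just illustrative):

```python
import numpy as np

# Parametric ReLU (sketch): the negative slope `a` is learned per channel.
def prelu(x, a):
    return np.where(x > 0, x, a * x)

x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
print(prelu(x, a=0.25))    # a = 0.25 is an illustrative value, not a trained one
```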
Batch Normalization
Batch Normalization: Accelerating Deep Network Training by
Reducing Internal Covariate Shift [Ioffe and Szegedy 2015]
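A sketch of the batch-normalization forward pass over a mini-batch: normalize each feature to zero mean and unit variance using batch statistics, then apply a learned scale and shift. (The running statistics used at test time are omitted here.)

```python
import numpy as np

# Batch normalization, forward pass over a mini-batch (sketch).
# gamma/beta are the learned scale and shift; eps avoids division by zero.
def batch_norm(x, gamma, beta, eps=1e-5):
    mu = x.mean(axis=0)                     # per-feature mean over the batch
    var = x.var(axis=0)                     # per-feature variance over the batch
    x_hat = (x - mu) / np.sqrt(var + eps)   # normalize to zero mean, unit variance
    return gamma * x_hat + beta             # restore representational power

x = np.random.randn(32, 8)                  # batch of 32 examples, 8 features
out = batch_norm(x, gamma=np.ones(8), beta=np.zeros(8))
print(out.mean(axis=0).round(6), out.std(axis=0).round(3))
```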
Transfer Learning
• Improvement of learning in a new task through the
transfer of knowledge from a related task that has
already been learned.
• Weight initialization for CNN
Learning and Transferring Mid-Level Image Representations using
Convolutional Neural Networks [Oquab et al. CVPR 2014]
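A sketch of transfer learning as weight initialization: copy the pretrained layers into the new network and re-initialize only the task-specific classifier layer. The layer names, shapes, and dictionary layout are hypothetical, not any particular library's format.

```python
import numpy as np

# Transfer learning as weight initialization (sketch). Layer names and shapes are
# hypothetical; a real setup would load, e.g., ImageNet-pretrained weights.
rng = np.random.default_rng(0)
pretrained = {
    "conv1": rng.standard_normal((96, 3, 11, 11)),
    "conv2": rng.standard_normal((256, 96, 5, 5)),
    "fc7":   rng.standard_normal((4096, 4096)),
    "fc8":   rng.standard_normal((1000, 4096)),   # old 1000-way classifier
}

new_net = dict(pretrained)                        # copy the pretrained layers
n_new_classes = 20                                # e.g., PASCAL VOC classes
new_net["fc8"] = rng.standard_normal((n_new_classes, 4096)) * 0.01  # re-initialize the head

# Fine-tuning would then continue gradient descent on new_net with the new data.
```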
Convolutional activation features
[Donahue et al. ICML 2013]
CNN Features off-the-shelf:
an Astounding Baseline for Recognition
[Razavian et al. 2014]
How transferable are features in CNN?
How transferable are features in deep
neural networks [Yosinski NIPS 2014]
Deep Neural Networks Rival the Representation
of Primate Inferior Temporal Cortex
Deep Neural Networks Rival the Representation of Primate IT Cortex for
Core Visual Object Recognition [Cadieu et al. PLOS 2014]
Deep Neural Networks Rival the Representation
of Primate Inferior Temporal Cortex
Deep Neural Networks Rival the Representation of Primate IT Cortex for
Core Visual Object Recognition [Cadieu et al. PLOS 2014]
Deep Rendering Model (DRM)
A Probabilistic Theory of Deep Learning [Patel, Nguyen, and Baraniuk 2015]
CNN as a Max-Sum Inference
A Probabilistic Theory of Deep Learning [Patel, Nguyen, and Baraniuk 2015]
[Figure: the model, inference, and learning components of the deep rendering model]
Tools
• Caffe
• cuda-convnet2
• Torch
• MatConvNet
• Pylearn2
Resources
• http://deeplearning.net/
• https://github.com/ChristosChristofidis/awesome-deep-learning
Things to remember
• Overview
– Neuroscience, Perceptron, multi-layer neural networks
• Convolutional neural network (CNN)
– Convolution, nonlinearity, max pooling
– CNN for classification and beyond
• Understanding and visualizing CNN
– Find images that maximize some class scores;
visualize individual neuron activation, input pattern and
images; breaking CNNs
• Training CNN
– Dropout, data augmentation; batch normalization; transfer
learning
• Probabilistic interpretation
– Deep rendering model; CNN forward-propagation as max-sum inference; training as an EM algorithm

Editor's Notes

• #2 Afternoon session. Image and region categorization: given one image, what do you see? Recognizing that it is a dog already tells us many things; different types of categorization.
• #5–6 At testing time, we are given a new image; we represent it in terms of image features and apply the trained model to obtain the prediction.
• #8 Make clear that classic methods, e.g. convnets, are purely supervised.
• #28 Two-GPU architecture.
• #30 Best non-convnet in 2012: 26.2%.