6. In the News
Deep Learning in the News
"Researcher Dreams Up Machines That Learn Without Humans" (06.27.13)
"Scientists See Promise in Deep-Learning Programs", John Markoff, November 23, 2012
"Google taps U of T professor to teach context to computers" (03.11.13)
slide credit: Bengio KDD'14
7. Prof. Hinton (Google)
In the News: same headlines as slide 6.
8. Prof. Bengio, University of Montreal
In the News: same headlines as slide 6.
9. Prof. LeCun (Facebook)
In the News: same headlines as slide 6.
15. Image Recognition (Krizhevsky+ NIPS'12)
• Demo: http://deeplearning.cs.toronto.edu/
[Figure 4 (left) from Krizhevsky+ NIPS'12: eight ILSVRC-2010 test images with the five labels the model considered most probable; the correct label is written under each image, and the probability assigned to the correct label is also shown.]
16. Similar Image Search (Krizhevsky+ NIPS'12)
[Figure 4 (right) from Krizhevsky+ NIPS'12: five ILSVRC-2010 test images in the first column, each shown alongside training images that are closest in the network's learned feature space.]
Now: 20-layer deep learning by Google!
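As a hedged illustration of the similar-image-search idea on this slide, the sketch below ranks a database by nearest neighbour in a feature space. The 256-d vectors are random stand-ins for the activations a trained network (e.g. the last hidden layer in Krizhevsky+ NIPS'12) would produce; none of these names or numbers come from the slides.

```python
import numpy as np

# Hypothetical feature database: 1000 "images", each represented by a
# 256-d feature vector (random stand-ins for deep-network activations).
rng = np.random.default_rng(0)
db_features = rng.normal(size=(1000, 256))

# A query that is a slightly perturbed copy of database image 42,
# mimicking a test image whose features land near a training image.
query = db_features[42] + 0.01 * rng.normal(size=256)

# Cosine similarity between the query and every database vector.
db_norm = db_features / np.linalg.norm(db_features, axis=1, keepdims=True)
q_norm = query / np.linalg.norm(query)
scores = db_norm @ q_norm

# Indices of the 5 most similar database images, best first.
top5 = np.argsort(scores)[::-1][:5]
print(top5)
```

With the perturbation this small, the query's own source image comes back as the top hit; real systems differ mainly in using learned features and approximate (rather than exhaustive) search.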
41. Layer-Wise Unsupervised Pre-training
Layer-wise Unsupervised Learning
[Diagram: the input feeds a feature layer trained by reconstruction; stacking layers yields more abstract features (features of features).]
43. Unsupervised Pre-training
Layer-wise Unsupervised Learning
[Diagram: the stack grows layer by layer: input → features → more abstract features → even more abstract features.]
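The layer-wise procedure pictured on slides 41 and 43 can be sketched as follows, under simplifying assumptions (tied-weight tanh autoencoders trained by plain gradient descent on squared reconstruction error); this is an illustrative sketch, not code from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder_layer(X, n_hidden, lr=0.1, epochs=200):
    """Train one layer as a tied-weight autoencoder that reconstructs
    its own input X, and return the encoder parameters (W, b)."""
    n_in = X.shape[1]
    W = rng.normal(scale=0.1, size=(n_in, n_hidden))
    b = np.zeros(n_hidden)   # encoder bias
    c = np.zeros(n_in)       # decoder bias
    for _ in range(epochs):
        H = np.tanh(X @ W + b)          # encode: features
        R = H @ W.T + c                 # decode: reconstruction of X
        err = R - X                     # reconstruction error
        dH = (err @ W) * (1 - H**2)     # backprop through tanh
        # Tied weights: gradient from both encode and decode paths.
        W -= lr * (X.T @ dH + err.T @ H) / len(X)
        b -= lr * dH.mean(axis=0)
        c -= lr * err.mean(axis=0)
    return W, b

# Greedy stacking: each trained layer's features become the next
# layer's input ("more abstract features", "features of features").
X = rng.normal(size=(100, 20))
layers = []
inp = X
for n_hidden in (16, 8):
    W, b = train_autoencoder_layer(inp, n_hidden)
    layers.append((W, b))
    inp = np.tanh(inp @ W + b)
print(inp.shape)
```

Each layer is trained in isolation on the previous layer's output, which is the defining property of the greedy layer-wise scheme; the stacked parameters then initialize a deep network for the supervised stage.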
44. Supervised Post-training (Supervised Fine-Tuning)
[Diagram: the pre-trained stack (input → features → more abstract features → even more abstract features) is topped with an output layer f(X); its prediction (e.g. "six") is compared against the target Y (e.g. "two").]
• Additional hypothesis: features good for P(x) are good for P(y|x)
slide credit: Bengio KDD 2014