Deep	Learning
Filling	the	gap	between
practice	and	theory
Preferred Networks
Daisuke Okanohara
hillbig@preferred.jp
Aug. 3rd 2017
Summer School of Correspondence and Fusion of AI and Brain Science
Background:
Unreasonable	success	of	deep	learning
l DL succeeds in solving many complex tasks
̶ Image recognition, speech recognition, natural language processing, robot control, computational chemistry, etc.
l But we don't understand why DL works so well
̶ Its success far outpaces our understanding
Background
The DL research process has become close to the scientific process
l Try first, examine next
̶ First, we obtain an unexpectedly good result experimentally
̶ We then look for a theory that explains why it works so well
l This process is different from previous ML research
̶ Careful design of new algorithms sometimes (or often) doesn't work
̶ Many results contradict our intuition
Outline
Three	main	unsolved	problems	in	deep	learning
l Why	can	DL	learn	?
l Why	can	DL	recognize	and	generate	real	world	data	?
l Why	can	DL	keep	and	manipulate	complex	information	?
Why can DL learn ?
Optimization	in	training	DL
l Learn a NN model f(x; θ) by minimizing a training error L(θ)
L(θ) = Σi l(f(xi; θ), yi)
where l(f(xi; θ), yi) is a loss function and θ is the set of parameters
l E.g. a two-layer feed-forward NN
f(x; θ) = a(W2 a(W1 x))
where a is an element-wise activation function such as
a(z) = max(0, z)
l(f(xi; θ), yi) = ||f(xi; θ) - yi||2 (L2 loss)
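To make the notation concrete, here is a minimal numpy sketch of the two-layer network f(x; θ) = a(W2 a(W1 x)) and the L2 loss; the layer sizes and random weights are illustrative assumptions, not part of the slides.

```python
# Minimal numpy sketch of f(x; theta) = a(W2 a(W1 x)) with the L2 loss.
# Layer sizes and the random weights are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 4, 8, 3
W1 = 0.1 * rng.standard_normal((d_hidden, d_in))
W2 = 0.1 * rng.standard_normal((d_out, d_hidden))

def a(z):                          # element-wise activation a(z) = max(0, z)
    return np.maximum(z, 0.0)

def f(x):                          # two-layer feed-forward network
    return a(W2 @ a(W1 @ x))

def l2_loss(x, y):                 # l(f(x; theta), y) = ||f(x; theta) - y||^2
    return np.sum((f(x) - y) ** 2)

x = rng.standard_normal(d_in)
y = rng.standard_normal(d_out)
print(l2_loss(x, y))
```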
Gradient	descent
Stochastic	Gradient	Descent
l Gradient descent
̶ Compute the gradient of L(θ) with respect to θ, g(θ), then update θ using g(θ) as
θt+1 := θt - αt g(θt)
where αt > 0 is a learning rate
l Stochastic gradient descent:
̶ Since the exact computation of the gradient is expensive, we instead use an approximate gradient computed on a sampled subset of the data (mini-batch)
g'(θt) = (1/|B|) Σi∈B ∇θ l(f(xi; θt), yi)
[Figure: contours of L(θ) in the (θ1, θ2) plane; each update moves the parameters by -αg]
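As a hedged illustration of the update θt+1 := θt - αt g'(θt), the following self-contained numpy sketch runs mini-batch SGD on a toy least-squares problem; the data, batch size, and constant step size are assumptions chosen only for the demonstration.

```python
# Self-contained numpy sketch of mini-batch SGD on a toy least-squares problem.
# The data, batch size, and constant step size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.standard_normal((n, d))
theta_true = rng.standard_normal(d)
y = X @ theta_true + 0.1 * rng.standard_normal(n)       # noisy linear targets

theta = np.zeros(d)
alpha, batch_size = 0.1, 32
for t in range(2000):
    B = rng.integers(0, n, size=batch_size)              # sample a mini-batch
    residual = X[B] @ theta - y[B]
    g = (2.0 / batch_size) * X[B].T @ residual           # approximate gradient g'(theta_t)
    theta = theta - alpha * g                            # theta_{t+1} := theta_t - alpha g'(theta_t)

print(np.linalg.norm(theta - theta_true))                # small, but not exactly zero (constant step)
```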
Optimization	in	Deep	learning
l L(θ) is highly non-convex and includes many local optima, plateaus, and saddle points
̶ In plateau regions, the gradient is almost zero and convergence becomes very slow
̶ At saddle points, only a few directions decrease L(θ), and it is hard to escape from such points
[Figure: loss surface with a plateau, saddle points, and a local optimum]
Miracle	of	deep	learning	training
l It was believed that we could not train large NNs using SGD
̶ It seemed impossible to optimize a non-convex problem with over a million dimensions
l However, SGD can find a solution with low training error
̶ When using a large model, it often finds a solution with zero training error
̶ Moreover, the initialization doesn't matter much
(c.f. K-means, which requires a good initializer)
l More surprisingly, SGD can find a solution with low test error
̶ Although the model is over-parameterized, it does not over-fit and achieves generalization
l Practically this is fine, but we want to know why
Why	can	DL	learn	?
l Why does DL succeed in finding a solution with a low training error?
̶ Although the optimization problem is highly non-convex
l Why does DL succeed in finding a solution with a low test error?
̶ Although the NN is over-parameterized and has no effective regularization
Loss	surface	analysis	using	spherical	spin	glass	model	(1/5)
[Choromanska+	2015]
l Consider a DNN with ReLU s(x) = max(0, x); its output is written as a normalized sum over all paths from the inputs to the output, where q is the normalization factor
l We can re-express this using path indicators Ai,j, where Ai,j = 1 if path (i, j) is active and Ai,j = 0 if it is inactive
̶ Each ReLU can be considered as a switch; a path is active if all the ReLUs on it are active, and inactive otherwise
[Figure: a network path from input xi to output Y; the path is active only if every ReLU along it is active]
Loss	surface	analysis	using	spherical	spin	glass	model	(2/5)	
l After several assumptions, this function can be re-expressed as an H-spin spherical spin-glass model (subject to a spherical constraint on the weights)
l Now we can use the known analysis of the spherical spin-glass model
̶ We now know the distribution of critical points
̶ k: index (the number of negative eigenvalues of the Hessian)
k = 0: local minimum, k > 0: saddle point
Loss	surface	analysis	using	spherical	spin	glass	model	(3/5)
Distribution	of	critical	points
l Almost no critical points with large k are found above LEinf
-> few local minima
l In the band [LE0, LEinf], many critical points with small k are found near LE0
-> local minima are close to the global minimum
Loss	surface	analysis	using	spherical	spin	glass	model	(4/5)
Distribution	of	test	losses
[Figure: distribution of test losses]
Loss surface analysis using spherical spin glass model (5/5)
Remaining problems
l This analysis relies on several unrealistic assumptions
̶ Such as
"each activation is independent of the inputs"
"each path's input is independent"
l Can we remove these assumptions, or show that they hold in almost all training cases?
Depth	creates	no	bad	local	minima	[Lu+	2017]
l Non-convexity comes from depth and nonlinearity
l Depth alone already creates non-convexity
̶ Weight-space symmetry means that there are many distinct configurations with the same loss value, which makes the loss function non-convex
l Consider the following feed-forward linear NN:
minW L(W) = ||WH WH-1 … W1 X - Y||2
If X and Y have full row rank, then all local minima of L(W) are global minima [Theorem 2.3, Lu, Kawaguchi 2017]
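A small numerical illustration (not from the paper) of this statement: gradient descent on a two-layer linear network ends up at the same loss as the best single linear map, as expected when all local minima are global. The sizes, initialization scale, step size, and iteration count are rough assumptions.

```python
# Toy numpy check (not from the paper): gradient descent on a two-layer *linear*
# network W2 W1 reaches the loss of the best single linear map when X and Y have
# full row rank. Sizes, init scale, step size, and iteration count are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d_x, d_h, d_y, n = 5, 5, 3, 50
X = rng.standard_normal((d_x, n))                 # full row rank with probability 1
Y = rng.standard_normal((d_y, n))                 # full row rank with probability 1

W1 = 0.1 * rng.standard_normal((d_h, d_x))
W2 = 0.1 * rng.standard_normal((d_y, d_h))
lr = 0.02
for _ in range(20000):
    R = (W2 @ W1 @ X - Y) / n                     # scaled residual
    gW2 = 2 * R @ (W1 @ X).T                      # gradient w.r.t. W2
    gW1 = 2 * W2.T @ R @ X.T                      # gradient w.r.t. W1
    W2 -= lr * gW2
    W1 -= lr * gW1

A_star = Y @ X.T @ np.linalg.inv(X @ X.T)         # unconstrained least-squares optimum
print(np.sum((W2 @ W1 @ X - Y) ** 2))             # should approach the value below
print(np.sum((A_star @ X - Y) ** 2))
```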
Deep	and	Wide	NN	also	create	no	bad	local	minima	
[Nguyen+	2017]
l If the following conditions hold:
̶ (1) The activation function s is analytic on R and strictly monotonically increasing
̶ (2) s is bounded
̶ (3) The loss function l(a) is twice differentiable, and l'(a) = 0 only if a is a global minimum
̶ (4) The training samples are linearly independent,
then every critical point at which the weight matrices have full column rank is a global minimum
̶ These conditions are satisfied if we use sigmoid, tanh, or softplus for s and the squared loss for l
̶ -> Solved for non-linear NNs under these conditions
Why can DL learn ?
l Why does DL succeed in finding a solution with a low training error?
̶ Although the optimization problem is highly non-convex
l Why does DL succeed in finding a solution with a low test error?
̶ Although the NN is over-parameterized and has no effective regularization
NN	is	over	parametrized	but	achieves	generalization
l Although the number of parameters of a DNN is much larger than the number of samples, the DNN does not overfit and achieves generalization
l Larger models tend to achieve lower test error
[Figure: test error (lower is better) vs. number of parameters. For conventional ML models, overfitting is observed once the number of parameters exceeds the number of training samples; for DNNs, no overfitting is observed, and the test error keeps decreasing as the number of parameters increases.]
Random	Labeling	experiment	[Zhang+	17]
l Model capacity should be restricted to achieve generalization
̶ C.f. Rademacher complexity, VC-dimension, uniform stability
l Conduct an experiment on a copy of the data where the true labels are replaced by random labels
-> NN models easily fit even random labels
l Compare the result with and without regularization techniques
-> No significant difference
l Therefore the NN model has enough capacity to fit random labels, yet it generalizes well without regularization
̶ For random labels, the NN memorizes the samples, but for true labels the NN learns patterns that generalize [Arpit+ 17]
l … WHY?
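A toy version of the random-labeling experiment can be sketched as follows; the synthetic data and model size are assumptions and not the setup of [Zhang+ 17], but an over-parameterized network typically drives the training error on random labels close to zero.

```python
# Toy version of the random-labeling experiment (synthetic data; not the setup of
# [Zhang+ 17]). An over-parameterized MLP typically fits random labels almost perfectly.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n, d, n_classes = 500, 100, 10
X = rng.standard_normal((n, d))                   # inputs (here: pure noise)
y_random = rng.integers(0, n_classes, size=n)     # labels drawn uniformly at random

clf = MLPClassifier(hidden_layer_sizes=(1024,), alpha=0.0, max_iter=2000)
clf.fit(X, y_random)
print("training accuracy on random labels:", clf.score(X, y_random))   # typically ~1.0
```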
SGD	plays	a	significant	role	for	generalization
l SGD performs approximate Bayesian inference [Mandt+ 17]
̶ Bayesian inference provides samples following θ ~ P(θ|D)
l SGD's noise removes information about the input that is unnecessary for estimating the output [Shwartz-Ziv+ 17]
̶ During training, the mutual information between the input and the network decreases, while that between the network and the output is kept
l Sharpness and the norms of the weights also relate to generalization
̶ Flat minima achieve generalization, but flatness depends on the scale of the weights
̶ If we find a flat minimum with a small weight norm, it achieves generalization [Neyshabur+ 17]
[Figure: sharp minimum vs. flat minimum]
Training always converges to a solution with low test error [Wu+ 17]
l Even when we optimize the model from different initializations, it always converges to a solution with low test error
l Flat minima have large basins of attraction, while sharp minima have small basins
̶ Almost all initializations converge to flat minima
l Flat minima correspond to low model complexity = low test error
l Question: Why does NN learning induce flat minima?
[Figure: flat minima have large basins; sharp minima have small basins]
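One common way to quantify "sharpness" is the largest loss increase under small random weight perturbations; the sketch below is an illustrative proxy, not the exact measure used in [Wu+ 17] or [Neyshabur+ 17].

```python
# Illustrative "sharpness" proxy: largest loss increase under small random weight
# perturbations. A sketch, not the exact measure of [Wu+ 17] or [Neyshabur+ 17].
import numpy as np

def sharpness(loss_fn, theta, radius=0.05, n_dirs=20, seed=0):
    rng = np.random.default_rng(seed)
    base = loss_fn(theta)
    worst = 0.0
    for _ in range(n_dirs):
        d = rng.standard_normal(theta.shape)
        d *= radius / np.linalg.norm(d)            # perturbation of fixed norm
        worst = max(worst, loss_fn(theta + d) - base)
    return worst

flat  = lambda t: 0.1 * np.sum(t ** 2)             # toy flat minimum at t = 0
sharp = lambda t: 10.0 * np.sum(t ** 2)            # toy sharp minimum at t = 0
theta0 = np.zeros(4)
print(sharpness(flat, theta0), sharpness(sharp, theta0))   # the sharp minimum scores higher
```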
Why can DL recognize and generate
real world data ?
Why	does	deep	learning	work	?
Lin’s	hypothesis	[Lin+	16]
l Real-world phenomena have the following characteristics:
1. Low-order polynomials
̶ Known physical interactions are described by polynomials of at most 4th order
2. Local interaction
̶ The number of interactions between objects increases only linearly
3. Symmetry
̶ Small number of degrees of freedom
4. Markovian
̶ Most generation processes depend only on the previous state
l -> DNNs can exploit these characteristics
Generation	and	recognition	(1/2)
l Data	x	is	generated	from	unknown	factors	z
l Generation	and	recognition	are	inverse	operations
[Figure: generation maps z to x; recognition (inference) maps x back to z]
E.g. image generation and recognition:
z: object, camera position, lighting condition, e.g. (Dragon, [10, 2, -4], white)
x: image
Generation: produce x from z
Recognition (inference): infer the posterior P(z|x)
Generation and recognition (2/2)
l Data is often generated from multiple factors
̶ Uninteresting factors are sometimes called covariates or disturbance variables
l The generation process can be very complex
̶ Each step can be non-linear
̶ Gaussian and non-Gaussian noise is added at several steps
̶ E.g. image rendering requires dozens of steps
l In general, the generation process is unknown
̶ Any assumed generation process is an approximation of the actual process
[Figure: generation process with multiple latent factors z1, z2, covariates c, and intermediate hidden variables h, …, hm]
Why do we consider generative models?
l For more accurate recognition and inference
̶ If we know the generation process, we can improve recognition and inference
u "What I cannot create, I do not understand"
Richard Feynman
u "Computer vision is inverse computer graphics"
Geoffrey Hinton
̶ By inverting the generation process, we obtain a recognition process
l For transfer learning
̶ By changing covariates, we can transfer the learned model to other environments
l For sampling examples to compute statistics and for validation
E.g. Mapping hand-written digits into 2D using a VAE
The original hand-written data is high-dimensional (784-dim).
If we map the data into a 2-dim space, digit types and shapes change smoothly.
If we want to classify "1", we only need to find a simple boundary in this space.
Representation learning is more powerful than
the nearest neighbor method and manifold learning
l Actually, we can significantly reduce the number of required training samples by using representation learning [Arora+ 2017]
l Using the distance metric defined on the original space, or the neighborhood notion there, may not work
Ideally, nearby samples would help determine the label. In reality, samples with the same label are located in very different places in the original space; their region may not even be connected there.
[Figure: "man with glasses" example]
Real-world data is distributed on a low-dimensional manifold
[Figure: data points forming a low-dimensional structure in a high-dimensional space]
l Each point corresponds to a possible data sample; the data is distributed in a low-dimensional subspace
̶ C.f. the distribution of galaxies in the universe
l Why does a low-dimensional manifold appear?
̶ Low-dimensional factors are converted into high-dimensional data without increasing the complexity [Lin+16]
Original space and latent space
[Figure: generation maps the latent space to the original space; recognition maps back]
l In the latent space, the meaning of the data changes smoothly
Learning	is	easy	in	the	latent	space
[Figure: generation and recognition between the original space and the latent space]
l Since many tasks are related to the latent factors, the classification boundary becomes simple in the latent space
̶ Many training examples are required in the original space
̶ Few training examples are required in the latent space
How	to	learn	a	generative	and	inference	model	?
l Generation	process	and	its	counterpart	recognition	process	
are	highly	non-linear	and	complex
l ->	Use	a	deep	neural	network	to	approximate	them
[Figure: generation network x = f(z) and recognition network z = g(x)]
Deep	generative	models
Model | Fast sampling of x | Compute the likelihood P(x) | Produce sharp images | Stable training
VAE [Kingma+ 14] | √ | △ lower bound (IW-VAE [Burda+ 15]) | X | √
GAN [Goodfellow+ 14,16] (IPM) | √ | X | √ | X-△
Autoregressive [Oord+ 16ab] | △-√ (parallel multi-scale [Reed+ 17]) | √ | √ | √
Energy model [Zhao+ 16][Dai+ 17] | △-√ | △ up to a constant | √ | △
VAE:	Variational AutoEncoder [Kingma+	14]
[Figure: decoder network Dec(z; φ) outputs (μ, σ), from which x ~ N(μ, σ) is sampled]
A neural network outputs the mean and covariance: (μ, σ) = Dec(z; φ)
Generate x in the following steps:
(1) Sample z ~ N(0, I)
(2) Compute (μ, σ) = Dec(z; φ)
(3) Sample x ~ N(μ, σI)
Defined	distribution
p(x)	=	∫p(x|z)p(z)dz
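The three generation steps can be written down directly; in the sketch below, a random-weight MLP stands in for the learned decoder Dec(z; φ), so it is purely illustrative.

```python
# Numpy sketch of the three generation steps; a random-weight MLP stands in for the
# learned decoder Dec(z; phi), so this is purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
d_z, d_h, d_x = 2, 32, 4
W1 = rng.standard_normal((d_h, d_z))
W_mu = rng.standard_normal((d_x, d_h))
W_logsig = 0.1 * rng.standard_normal((d_x, d_h))

def dec(z):                                        # (mu, sigma) = Dec(z; phi)
    h = np.tanh(W1 @ z)
    return W_mu @ h, np.exp(W_logsig @ h)

z = rng.standard_normal(d_z)                       # (1) sample z ~ N(0, I)
mu, sigma = dec(z)                                 # (2) compute (mu, sigma) = Dec(z; phi)
x = mu + sigma * rng.standard_normal(d_x)          # (3) sample x ~ N(mu, sigma^2 I)
print(x)
```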
VAE:	Variational Autoencoder
Induced	distribution
l p(x|z) is a Gaussian, and p(x) corresponds to an (infinite) mixture of Gaussians
p(x) = ∫p(x|z)p(z)dz
̶ The neural network can model a complex relation between z and x
VAE:	Variational AutoEncoder
Use maximum likelihood estimation to learn the parameters θ.
Since the exact likelihood is intractable, we instead maximize a lower bound of the likelihood known as the ELBO (evidence lower bound).
The proposal distribution q(z|x) should be close to the true posterior p(z|x).
Maximizing the ELBO with respect to q(z|x) corresponds to minimizing KL(q(z|x) || p(z|x))
= we learn the encoder as a side effect.
Reparametrization Trick
Since we take an expectation with respect to q(z|x), it is difficult to compute the gradient of the ELBO with respect to q(z|x).
-> We can use the reparametrization trick!
[Figure: computation graph of the reparametrized VAE, with z = μ + σε and ε ~ N(0, I)]
The converted computation graph can be regarded as an auto-encoder in which a noise term σε is added to the latent variable μ.
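A tiny numpy check of why the trick works: writing z = μ + σε with ε ~ N(0, I) lets gradients with respect to (μ, σ) be estimated by differentiating through the samples. The test function f(z) = z² is an arbitrary choice made only for this demonstration.

```python
# Numpy check of the reparametrization trick: with z = mu + sigma * eps, eps ~ N(0, 1),
# gradients of E[f(z)] w.r.t. (mu, sigma) can be estimated by differentiating through
# the samples. Here f(z) = z^2, so the analytic answers are 2*mu and 2*sigma.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.5, 0.7
eps = rng.standard_normal(100_000)
z = mu + sigma * eps                               # reparametrized samples, z ~ N(mu, sigma^2)

grad_mu_mc = np.mean(2 * z)                        # df/dmu    = 2 z * dz/dmu    = 2 z
grad_sigma_mc = np.mean(2 * z * eps)               # df/dsigma = 2 z * dz/dsigma = 2 z * eps
print(grad_mu_mc, 2 * mu)                          # both ~ 3.0
print(grad_sigma_mc, 2 * sigma)                    # both ~ 1.4
```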
The problem of maximum likelihood estimation with low-dimensional manifold data (1/2) [Arjovsky+ 17ab]
l Maximum likelihood estimation (MLE) estimates a distribution P(x) using a model Q(x)
LMLE(P, Q) = Σx P(x) log Q(x)
̶ Usually, this is replaced with the empirical distribution: (1/N) Σi log Q(xi)
l For low-dimensional manifold data, P(x) = 0 for most x
l To model such a P, Q(x) should also satisfy Q(x) = 0 for most x
l But if we use such a Q(x), log Q(xi) is undefined (-∞) when Q(xi) = 0, so we cannot optimize Q(x) using MLE
l To solve this -> use a Q(x) such that Q(xi) > 0 for all {xi}
̶ E.g. Q(x) = N(µ, σ); this means a sample is µ with added noise σ
The problem of maximum likelihood estimation with low-dimensional manifold data (2/2)
l MLE requires Q(xi) > 0 for all {xi}
l To solve this -> use a Q(x) such that Q(xi) > 0 for all {xi}
l Q(x) = N(µ, σ): this means a sample is µ with added noise σ
̶ This produces blurry images
l Another difficulty is that there is no notion of closeness with respect to the geometry of the space
[Figure: two model distributions overlapping the true distribution by intersections of equal size]
When the sizes of the intersections are the same, MLE gives the same score: although the left distribution is closer to the true distribution, the MLE scores are equal.
GAN (Generative Adversarial Network) [Goodfellow+ 14, 16]
l Two neural networks compete to learn a distribution
l Generator (counterfeiter)
̶ Goal: deceive the discriminator
̶ Learns to generate realistic samples that can fool the discriminator
l Discriminator (police)
̶ Goal: detect samples produced by the generator
̶ Learns to detect the difference between real and generated samples
[Figure: the generator produces fake samples; real or generated samples are chosen at random and fed to the discriminator, which classifies them as real or fake]
GAN: generative adversarial network
[Figure: generator network mapping z to x = G(z)]
Sample x in the following steps:
(1) Sample z ~ U(0, I)
(2) Compute x = G(z)
(no noise is added at the last step)
Training	of	GAN
l Use a discriminator D(x)
̶ Outputs 1 if x is estimated to be real and 0 otherwise
l Train D to maximize V and G to minimize V
̶ If learning succeeds, training reaches the following Nash equilibrium:
∫p(z)G(z)dz = P(x), D(x) = 1/2
̶ Since D provides dD(x)/dx to update G, the two networks actually cooperate to learn P(x)
[Figure: z is mapped to x = G(z); the discriminator receives x and outputs y = D(x) ∈ {1 (real), 0 (fake)}]
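A minimal GAN training loop might look like the following PyTorch sketch; the toy data, network sizes, and the non-saturating generator loss (maximize log D(G(z)) instead of directly minimizing V) are common choices assumed here, not details from the slides.

```python
# Minimal GAN training sketch in PyTorch (illustrative only). The toy data, network
# sizes, and the non-saturating generator loss are assumptions, not slide content.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))                 # x = G(z)
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())    # D(x) in (0, 1)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCELoss()

def sample_real(n):                                   # toy "real" data: a shifted Gaussian blob
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, 2.0])

for step in range(1000):
    # Update D: maximize V = E[log D(x_real)] + E[log(1 - D(G(z)))]
    x_real, z = sample_real(64), torch.randn(64, 16)
    x_fake = G(z).detach()                            # block gradients into G here
    d_loss = bce(D(x_real), torch.ones(64, 1)) + bce(D(x_fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Update G: fool D (non-saturating loss: maximize log D(G(z)))
    z = torch.randn(64, 16)
    g_loss = bce(D(G(z)), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```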
Modeling	low	dimensional	manifold	
l When z is low-dimensional, the deterministic function x = F(z) outputs a low-dimensional manifold in the x space
l Using CNNs for G(z) and D(x) is also important
̶ D(x) gives similar scores when x and x' are similar
l A recent study showed that training without a discriminator can also generate realistic data [Bojanowski+ 17]
l These two factors are important for producing realistic data
[Figure: a one-dimensional latent z ∈ R1 is mapped by x = F(z) to a curve (one-dimensional manifold) in x ∈ R2]
Demonstration	of	GAN	training
http://www.inference.vc/an-alternative-update-rule-for-generative-adversarial-networks/
Each generated sample follows dD(x)/dx
Training	GAN
https://github.com/mattya/chainer-DCGAN
After	30	minutes
After 2 hours
After 1 day
LSGAN [Mao+	16]
Stacked	GAN	
http://mtyka.github.io/machine/learning/2017/06/06/highres-gan-faces.html
New	GAN	papers	are	coming	out	every	week
GAN	Zoo	https://github.com/hindupuravinash/the-gan-zoo	
l Since GAN provides a new way to train a probabilistic model, many GAN papers are coming out (about 20 papers/month as of Jul. 2017)
l Interpretations of the GAN framework
̶ Wasserstein distance, integral probability metrics, inverse RL
l New stable training methods
̶ Lipschitzness of D, ensembles of Ds, etc.
l New applications
̶ Speech, text, inference models (q(z|x))
l Conditional GANs
̶ Multi-class super-resolution,
super-resolution + regression loss for a perception network
[Chen+	17]
l Generates photo-realistic images from segmentation results
̶ High resolution, globally consistent, stable training
Input: segmentation map -> Output: photo-realistic image
ICA:	Independent	component	analysis
Reference:	[Hyvärinen 01]
l Find components z that generate the data x:
x = f(z)
where f is an unknown function called the mixing function, and the components are independent of each other: p(z) = Πi p(zi)
l When f is linear and p(zi) is non-Gaussian, we can identify f and z correctly
l However, when f is nonlinear, we cannot identify f and z in general
̶ There are infinitely many possible f and z
l -> When the data is a time series x(1), x(2), …, x(n) generated from sources z that are (1) non-stationary or (2) stationary and temporally dependent, we can identify a non-linear f and z
Non-linear	ICA	for	non-stationary	time	series	data
[Hyvärinen+ 16]
l When the sources are independent and non-stationary, we can identify the non-linear mixing function f and the sources z
l Assumption: the sources change slowly
̶ The sources can be considered stationary within short time segments
̶ Many interesting data have this property
1. Divide the time series into segments
2. Train a multi-class classifier to classify each data point into its segment
3. The last layer's features correspond to (a linear mixture of) the independent sources
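A rough sketch of this recipe on synthetic non-stationary sources; the data generation, network size, and use of scikit-learn are assumptions made only for illustration.

```python
# Rough sketch of time-contrastive learning on synthetic non-stationary sources.
# The data generation, network size, and use of scikit-learn are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_seg, seg_len, d = 20, 200, 10

segments, labels = [], []
for s in range(n_seg):
    scale = rng.uniform(0.2, 2.0, size=2)                          # per-segment source variances
    segments.append(rng.standard_normal((seg_len, 2)) * scale)     # independent, non-stationary sources
    labels.append(np.full(seg_len, s))
Z = np.vstack(segments)
y = np.concatenate(labels)

A = rng.standard_normal((2, d))
X = np.tanh(Z @ A)                                     # unknown nonlinear mixture x = f(z)

# Steps 1-2: classify each data point into its segment
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X, y)

# Step 3: the hidden-layer features ~ (a linear mixture of) the independent sources
H = np.maximum(X @ clf.coefs_[0] + clf.intercepts_[0], 0.0)        # ReLU hidden activations
print(H.shape)
```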
Non-linear	ICA	for	stationary	time	series	data
[Hyvärinen+	17]
l When the sources are independent and stationary, we can also identify the non-linear mixing function f and the sources z
l The sources should be uniformly dependent (a condition on the joint density of x = s(t) and y = s(t-1))
1. Train a binary classifier to classify whether a given data pair is taken from adjacent time points (x(t), x(t+1)) or random time points (x(t), x(u))
2. The last layer's features correspond to (a linear mixture of) the independent sources
Conjectures	[Okanohara]
l Train a multi-class classifier with a very large number of classes (e.g. ImageNet). Then the features of the last layer correspond to (a mixture of) the independent components
̶ To show this, we need a reasonable model of the relation between the set of labels and the independent components
̶ Dark knowledge [Hinton14] is effective for transferring the model because it reveals the independent components
l Similarly, GAN discriminators (or energy functions) also extract the independent components
Why can DL keep and manipulate
complex information ?
Information abstraction levels
l Abstract knowledge
̶ Text, relations
l Model
̶ Simulator / generative model
l Raw experience
̶ Sensory stream
[Spectrum: abstract ↔ detailed. Abstract knowledge has small volume and is independent of the problem/task/context; raw experience has large volume and depends on the problem/task/context.]
Local	representation	vs	distributed	representation
l Local representation
̶ Each concept is represented by one symbol
̶ E.g. Giraffe=1, Panda=2, Lion=3, Tiger=4
̶ No interference, noise immunity, precise
l Distributed representation
̶ Each concept is represented by a set of symbols, and each symbol participates in representing many concepts
̶ Generalizable
̶ Less accurate
̶ Interference between concepts
          | Giraffe | Panda | Lion | Tiger
Long neck | ◯       |       |      |
Four legs | ◯       | ◯     | ◯    | ◯
Body hair |         | ◯     | ◯    | ◯
Paw pad   |         |       | ◯    | ◯
High-dimensional vectors vs. low-dimensional vectors
l High-dimensional vectors
̶ Two random vectors are almost always nearly orthogonal
̶ Many concepts can be stored within one vector
u w = x + y + z
̶ Same characteristics as a local representation
l Low-dimensional vectors
̶ Components interfere with each other
̶ Cannot keep precise memories
̶ Beneficial for generalization
l Interference and generalization are strongly related
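A quick numpy check of these claims, with an illustrative dimensionality: random high-dimensional vectors are nearly orthogonal, so a sum w = x + y + z still responds strongly to each stored vector and weakly to others.

```python
# Numpy check: random high-dimensional vectors are nearly orthogonal, so a sum
# w = x + y + z still matches each stored vector strongly. The dimension is an assumption.
import numpy as np

rng = np.random.default_rng(0)
d = 10_000
x, y, z, other = (rng.standard_normal(d) / np.sqrt(d) for _ in range(4))   # ~unit norm

print(abs(x @ y), abs(x @ z))       # ~0.01: near-orthogonal
w = x + y + z                       # store three concepts in one vector
print(w @ x, w @ y, w @ z)          # ~1: each stored concept is recovered
print(w @ other)                    # ~0: an unstored concept barely responds
```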
Two	layer	feedforward	network	=		memory	augmented	network	
[Vaswani+	17]
l Memory-augmented network
a = V Softmax(Kq)
̶ K is a key matrix (the i-th row is the key of the i-th memory)
̶ V is a value matrix (the i-th column is the value of the i-th memory)
̶ We may use winner-take-all instead of Softmax
l Two-layer feedforward network
a = W2 Relu(W1 x)
̶ The i-th row of W1 corresponds to the key of the i-th memory
̶ The i-th column of W2 corresponds to the value of the i-th memory
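The analogy can be seen directly by computing both reads side by side; the sketch below is an interpretation aid, not an equivalence proof, and the dimensions are arbitrary.

```python
# Side-by-side numpy sketch of the analogy (an interpretation aid, not a proof).
# Rows of K / W1 act as keys, columns of V / W2 as values; dimensions are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
d, m = 8, 32                        # input dim, number of memory cells
q = rng.standard_normal(d)          # query / input x

K = rng.standard_normal((m, d))     # key matrix (i-th row = key of memory cell i)
V = rng.standard_normal((d, m))     # value matrix (i-th column = value of cell i)

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

a_memory = V @ softmax(K @ q)              # memory-augmented read: a = V Softmax(K q)
a_ffn    = V @ np.maximum(K @ q, 0.0)      # two-layer FFN read:    a = W2 Relu(W1 x), with W1 = K, W2 = V
# Both are weighted sums of the value columns; they differ only in how the matching
# scores K q are turned into weights (softmax normalization vs. ReLU gating).
print(a_memory.shape, a_ffn.shape)
```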
Three	layer	feed-forward	network	is	also	memory-augmented	
network	[Okanohara unpublished]
l A three-layer feed-forward network can be considered as follows: the first layer computes the query, and the second and third layers store the keys and values
a = W3 Relu(W2 Relu(W1 x))
l Query: Relu(W1 x)
l The i-th row of W2 corresponds to the key of the i-th memory cell
l The i-th column of W3 corresponds to the value of the i-th memory cell
Two-layer NN update rule interpretation [Okanohara unpublished]
l The update rule of a two-layer feedforward network with
h = Relu(W1 x)
a = W2 h
is
dh = W2^T da
dW2 = da h^T
dW1 = diag(Relu'(W1 x)) dh x^T
    = diag(Relu'(W1 x)) W2^T da x^T
l These update rules correspond to storing the error (da) as a value and the input (x) as a key in a memory network
̶ Only the active memories (where Relu'(W1 x) = 1) are updated
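The sketch below implements these manual gradients for a squared loss (an assumption; the slide leaves the error signal da generic) and checks one entry against a finite difference.

```python
# Numpy sketch of the manual gradients above for a squared loss (the loss choice is an
# assumption; the slide leaves da generic), with a finite-difference check of one entry.
import numpy as np

rng = np.random.default_rng(0)
d, m, k = 5, 7, 3
x, y = rng.standard_normal(d), rng.standard_normal(k)
W1, W2 = rng.standard_normal((m, d)), rng.standard_normal((k, m))

def loss(W1, W2):
    h = np.maximum(W1 @ x, 0.0)
    return 0.5 * np.sum((W2 @ h - y) ** 2)

h = np.maximum(W1 @ x, 0.0)
da = W2 @ h - y                                   # error signal at the output
dW2 = np.outer(da, h)                             # dW2 = da h^T   (store error as value)
dh = W2.T @ da                                    # dh  = W2^T da
dW1 = np.outer(dh * (W1 @ x > 0), x)              # dW1 = diag(Relu'(W1 x)) dh x^T (only active memories)

eps = 1e-6
W1p = W1.copy(); W1p[0, 0] += eps
print(dW1[0, 0], (loss(W1p, W2) - loss(W1, W2)) / eps)   # should agree closely
```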
Resnet is	memory	augmented	network	
[Okanohara unpublished]
l Since a ResNet block has the form
h = h + Resnet(h)
and Resnet(h) consists of two layers, we can interpret it as recalling a memory and adding it to the current vector
̶ The squeeze operation corresponds to limiting the number of memory cells
l A ResNet looks up memory iteratively
̶ A large number of steps = a large number of memory lookups
l This interpretation is different from the shortcut view [He+15] and unrolled iterative estimation [Greff+16]
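Under this reading, a stack of residual blocks is a sequence of memory lookups; the following toy numpy loop (with arbitrary sizes and random "memories") illustrates the iteration h ← h + W2 Relu(W1 h).

```python
# Toy numpy loop for the "iterative memory lookup" reading of a ResNet.
# Sizes and random block weights are arbitrary; this only illustrates h <- h + W2 Relu(W1 h).
import numpy as np

rng = np.random.default_rng(0)
d, m, n_blocks = 16, 64, 5
h = rng.standard_normal(d)                        # current representation

blocks = [(rng.standard_normal((m, d)) / np.sqrt(d),   # W1 of block b: keys
           rng.standard_normal((d, m)) / np.sqrt(m))   # W2 of block b: values
          for _ in range(n_blocks)]

for W1, W2 in blocks:
    recalled = W2 @ np.maximum(W1 @ h, 0.0)       # look up memory with h as the query
    h = h + recalled                              # add the recalled content (residual connection)
print(h[:4])
```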
Infinite	memory	network
l What happens if we increase the number of hidden units incrementally for each training sample?
̶ This is similar to "memory networks", where previous hidden activations are stored in an explicit memory, or to "progressive networks" [Rusu+ 16], where a new network is added (and the old networks are fixed) for each new task
l We expect that this can prevent catastrophic forgetting and achieve one-shot learning
̶ How do we ensure generalization?
Conclusion
l There are still many unsolved problems in DNNs
̶ Why can DNNs learn in general settings?
̶ How should real-world information be represented?
l There are still many unsolved problems in AI
̶ Disentanglement of information
̶ One-shot learning using attention and memory mechanisms
u Avoiding catastrophic forgetting and interference
̶ Stable, data-efficient reinforcement learning
̶ How to abstract information
u Grounding (language), strong noise (e.g. dropout), extracting hidden factors by using (non-)stationarity or commonality among tasks
References
l [Choromanska+ 2015] "The Loss Surfaces of Multilayer Networks", A. Choromanska et al., AISTATS 2015
l [Lu+ 2017] "Depth Creates No Bad Local Minima", H. Lu et al., arXiv:1702.08580
l [Nguyen+ 2017] "The Loss Surface of Deep and Wide Neural Networks", Q. Nguyen et al., arXiv:1704.08045
l [Zhang+ 2017] "Understanding Deep Learning Requires Rethinking Generalization", C. Zhang et al., ICLR 2017
l [Arpit+ 2017] "A Closer Look at Memorization in Deep Networks", D. Arpit et al., ICML 2017
l [Mandt+ 2017] "Stochastic Gradient Descent as Approximate Bayesian Inference", S. Mandt et al., arXiv:1704.04289
l [Shwartz-Ziv+ 2017] "Opening the Black Box of Deep Neural Networks via Information", R. Shwartz-Ziv et al., arXiv:1703.00810
l [Neyshabur+ 17] "Exploring Generalization in Deep Learning", B. Neyshabur et al., arXiv:1706.08947
l [Wu+ 17] "Towards Understanding Generalization of Deep Learning: Perspective of Loss Landscapes", L. Wu et al., arXiv:1706.10239
l [Lin+ 16] "Why Does Deep and Cheap Learning Work So Well?", H. W. Lin et al., arXiv:1608.08225
l [Arora+ 17] "Provable Benefits of Representation Learning", S. Arora et al., arXiv:1706.04601
l [Kingma+ 14] "Auto-Encoding Variational Bayes", D. P. Kingma et al., ICLR 2014
l [Burda+ 15] "Importance Weighted Autoencoders", Y. Burda et al., arXiv:1509.00519
l [Goodfellow+ 14] "Generative Adversarial Nets", I. Goodfellow et al., NIPS 2014
l [Goodfellow 16] "NIPS 2016 Tutorial: Generative Adversarial Networks", I. Goodfellow, arXiv:1701.00160
l [Oord+ 16a] "Conditional Image Generation with PixelCNN Decoders", A. van den Oord et al., NIPS 2016
l [Oord+ 16b] "WaveNet: A Generative Model for Raw Audio", A. van den Oord et al., arXiv:1609.03499
l [Reed+ 17] "Parallel Multiscale Autoregressive Density Estimation", S. Reed et al., arXiv:1703.03664
l [Zhao+ 17] "Energy-based Generative Adversarial Network", J. Zhao et al., arXiv:1609.03126
l [Dai+ 17] "Calibrating Energy-based Generative Adversarial Networks", Z. Dai et al., ICLR 2017
l [Arjovsky+ 17a] "Towards Principled Methods for Training Generative Adversarial Networks", M. Arjovsky et al., arXiv:1701.04862
l [Arjovsky+ 17b] "Wasserstein Generative Adversarial Networks", M. Arjovsky et al., ICML 2017
l [Bojanowski+ 17] "Optimizing the Latent Space of Generative Networks", P. Bojanowski et al., arXiv:1707.05776
l [Chen+ 17] "Photographic Image Synthesis with Cascaded Refinement Networks", Q. Chen et al., arXiv:1707.09405
l [Hyvärinen+ 01] "Independent Component Analysis", A. Hyvärinen et al., John Wiley & Sons, 2001
l [Hyvärinen+ 16] "Unsupervised Feature Extraction by Time-Contrastive Learning and Nonlinear ICA", A. Hyvärinen et al., NIPS 2016
l [Hyvärinen+ 17] "Nonlinear ICA of Temporally Dependent Stationary Sources", A. Hyvärinen et al., AISTATS 2017
l [Vaswani+ 17] "Attention Is All You Need", A. Vaswani et al., arXiv:1706.03762 (the idea appears only in version 3: https://arxiv.org/abs/1706.03762v3)
l [He+ 15] "Deep Residual Learning for Image Recognition", K. He et al., arXiv:1512.03385
l [Rusu+ 16] "Progressive Neural Networks", A. Rusu et al., arXiv:1606.04671