Sunday, March 18, 2018

Watching daily Mar 19 2018

>> The next topic,

the last short talk in the series is on Deep Learning.

And I'd like to

invite our next speaker, Vineeth Balasubramanian,

who's a hometown speaker in the sense

that he's faculty at IIT Hyderabad.

Vineeth finished his PhD at the University of Arizona,

after which he taught for a couple of years.

And now, he's come back

to take up a faculty position at IIT Hyderabad.

So over to you, Vineeth. Thank you.

>> All right. Okay. So, how many of you

in this room work on Deep Learning,

use Deep Learning, read Deep Learning?

Okay. Works for PJ on behalf of Deep Learning?

All right. Okay, let's get started.

We can go to the next slide, please. Thank you.

So, I think I don't need to talk about this,

I think there's been

explosive growth of Deep Learning in recent years.

I think we've all seen applications of

Deep Learning in vision, text,

speech and I think the number of publications in

Deep Learning has been growing over the years.

The bottom right graphic that you see there is

an interesting graphic there actually.

It's Google's graphic on the number of

folders on Google Servers,

which hold Deep Learning models.

I was just looking for Deep Learning statistics and

that's one of the ways people have

measured how Deep Learning has grown,

It's an interesting statistic, and it's

also going up fairly exponentially.

So, I think most of you are working in this space,

you don't need this background.

But, I'll quickly go over this and then try to

cover what I intend to cover.

I think I'll try to focus

on today's state of Deep Learning,

which is what I'm here for.

But, Deep Learning has a pretty deep history.

So, it started in the 1940s

with the McCulloch-Pitts model of the neuron.

Then came Rosenblatt with the perceptron.

Then came the Widrow-Hoff learning rule in 1960.

Unfortunately, Rosenblatt claimed

that the perceptron could

approximate any kind of a function.

And Minsky and Papert came in

1969 and showed the XOR example,

and showed that this is not possible.

And then came a kind of a dark age

for neural networks at that point in time until say,

the mid-80's when backpropagation came into the picture.

And again, neural networks took a spike.

But, probably late 90's and early 2000's,

it was dull again until 2006,

when Geoffrey Hinton and

Ruslan Salakhutdinov came up with their

Hierarchical Feature Learning using

restricted Boltzmann machines

to train deep-belief networks.

And since then, I think it's been an upward trend

and it's now effectively

the golden age for Deep Learning.

I'm not going to go more into this,

I think there's of course,

a lot of landmark achievements that were in

between like the neocognitron, and things like that.

But with just this background,

what I'll focus on for the rest of

my talk today is to actually talk about, I think.

Okay, one part that I missed, of course which, oops.

I think going back, is that still a problem?

Okay. So, I think all of us know by now that

AlexNet was one of the turning points

for Deep Learning that happened in 2012,

when Deep Learning models won the ImageNet challenge.

So, what I'll focus on for the rest of

the talk today is actually post 2012,

so we won't touch on what neural networks were before 2012.

So, I've tried to cover the thoughts in my head.

And probably, I should start with a disclaimer

that this talk obviously contains my own bias.

So, I'm sure each of you here can

come and give a talk on Deep Learning for 20 minutes.

So, bear with me if you disagree,

and I'll be happy to talk offline if required.

So, I think I've tried to

group the topics into three parts.

One is the consolidation

of successes of Deep Learning over the last few years.

So, it became successful for

computer vision and even before that for speech.

So, what has happened in that space,

and how did that success get consolidated?

I mean, in some sense establishing its territory,

something in that sense.

Then I'll talk about exploration of new frontiers.

What are the new directions and

emerging directions in this space?

And of course, I think no talk on Deep Learning

is complete without some Deep Learning bashing.

So, we will talk about some limitations,

which are challenges and opportunities

depending on how you look at it.

So, one of the main things that have happened in

the last four years in Deep Learning is obviously depth.

Okay, that's what the name stands for.

So, we also saw this in

Professor Joha's slides yesterday,

that I think the ImageNet challenge has been

solved by deeper and deeper networks.

Year after year, the depth of the networks

has been going higher and higher,

and it kind of stopped with the residual nets

at about 152 layers.

And today, I think most of the networks,

I should say, are on the order of hundreds of layers.
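
As an illustrative aside (not from the talk itself), here is a minimal sketch of the residual skip-connection idea that made networks of that depth trainable, assuming PyTorch; the channel count and number of blocks are placeholders.

    # A minimal sketch of a residual block (the idea behind 152-layer ResNets),
    # assuming PyTorch; channel sizes here are illustrative, not from the talk.
    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            self.bn1 = nn.BatchNorm2d(channels)
            self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            self.bn2 = nn.BatchNorm2d(channels)
            self.relu = nn.ReLU()

        def forward(self, x):
            # The skip connection (out + x) lets gradients flow through very deep stacks.
            out = self.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            return self.relu(out + x)

    # Stacking many such blocks is how depth reaches the order of hundreds of layers.
    deep_stack = nn.Sequential(*[ResidualBlock(64) for _ in range(50)])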

And very recently, I think last year there

was a paper at AAAI,

which talked about, When and Why Are

Deep Networks Better than Shallow Ones?

And they did an empirical study.

And I was actually personally surprised by the conclusion.

And in fact, I think it's the second paper on this list.

The first line of the abstract is, 'Yes, they do'.

That's the first line of the abstract.

Okay. Sorry, I think it's the first paper then,

Do Deep Convolutional Nets Really Need to be Deep?

I think the first line of

the abstract is, 'Yes, they do'.

So, they actually seem

to conclude that you do need depth,

because that's always a debate

in the Deep Learning community about whether

you really need all the depth,

or whether you can manage with shallower networks,

and so on and so forth.

So, that's been one of

the prime components of

developments over the last few years.

And the depth need not be only along one axis;

it can also be along the other axis, as in the case of LSTMs,

where you stack up a lot of LSTMs.

So, I think that's been another trend in

Deep Learning over the last few years.

I've probably referenced a paper

here that talks about this in more detail,

the understanding is that by stacking LSTMs,

you're probably learning

a more hierarchical feature representation

in the data space.

And the Google Neural Machine Translation,

which goes into Google Translate,

actually does use a stacked LSTM in its encoder.
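
As an aside, here is a minimal sketch of that "depth along the other axis", assuming PyTorch; the layer count and sizes are illustrative, not the GNMT configuration.

    # Stacking LSTM layers: each layer consumes the sequence of hidden states
    # from the layer below, which is the hierarchical-feature intuition.
    import torch
    import torch.nn as nn

    stacked_lstm = nn.LSTM(input_size=256, hidden_size=512,
                           num_layers=4, batch_first=True)

    x = torch.randn(8, 20, 256)          # (batch, time steps, features)
    outputs, (h_n, c_n) = stacked_lstm(x)
    print(outputs.shape)                  # torch.Size([8, 20, 512])
    print(h_n.shape)                      # (num_layers, batch, hidden) = [4, 8, 512]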

So, that's another trend

that's been happening in Deep Learning.

The other thing that's happened over the last four years

is various kinds of hybridizing of architectures.

So, you take CNNs,

LSTMs, you mix-match them.

You probably take other components,

add them all together.

And you take trained layers from one,

untrained layers from another, and mix them up.

I think that's been another trend that's actually

been happening over the last few years.

And that's led to various applications in say,

video captioning, image captioning, video classification,

object detection, and many other applications

where this has been one of the main themes.

But I think an interesting thing,

which probably reflects some of

the points covered yesterday too is,

I think in all of these efforts,

there has been an underlying theme to maintain

the end-to-end learning on these architectures.

Because as we all understand now,

that Deep Learning is representation learning.

So, doing the end-to-end

learning kind of helps you with that.

You don't want to again go back

into handcrafting the features.

So, doing the end-to-end learning helps you

with learning the features automatically.

So that's something that's maintained

despite all the modules that you try to

bring into whatever architecture that you're

trying to put together for Deep Learning.

So to some extent, it's become the mix-and-match,

plug-and-play kind of an approach at this point in time.

Probably, very loosely speaking,

I can say this has culminated in Capsule Nets,

which is Geoffrey Hinton's new development in October.

I think it's still not caught on much.

It has some interesting ideas but I think

it's probably not caught on as much.

So, the last four, five years,

we've also seen a significant amount of

development on the hyperparameter engineering space.

A lot of little, little developments that have taken say,

an idea like CNN from where it was four,

five years ago to where it is today

to solve various kinds of problems.

So, in terms of regularization,

you have DropOut, then DropConnect,

which is a generalization of DropOut,

batch normalization, or simply just data augmentation,

or adding noise to the data, label or gradient.

So, these have all been

different ways of doing regularization.
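
As an aside, here is a rough sketch of a few of these regularization knobs (DropOut, batch normalization, simple data augmentation), assuming PyTorch and torchvision; the architecture and probabilities are placeholders.

    import torch.nn as nn
    from torchvision import transforms

    model = nn.Sequential(
        nn.Linear(784, 512),
        nn.BatchNorm1d(512),   # batch normalization
        nn.ReLU(),
        nn.Dropout(p=0.5),     # DropOut: randomly zero activations during training
        nn.Linear(512, 10),
    )

    # Data augmentation: add label-preserving transformations / noise to the inputs.
    augment = transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.RandomCrop(28, padding=4),
        transforms.ToTensor(),
    ])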

In terms of weight initialization,

there have again been a bunch of methods today.

In fact, I think there

was a paper a couple of years back called,

'All You Need Is A Good Init'.

They show that all you need is a good initialization

to get to a good local minimum.

But I think to this day, a lot of

practitioners use what is called

the Xavier or Glorot initialization,

and at times the He initialization.
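
As an aside, this is how those initializations are typically applied in practice, assuming PyTorch; the layer sizes are placeholders.

    import torch.nn as nn

    layer = nn.Linear(512, 512)
    nn.init.xavier_uniform_(layer.weight)          # Glorot / Xavier initialization
    nn.init.zeros_(layer.bias)

    relu_layer = nn.Linear(512, 512)
    nn.init.kaiming_normal_(relu_layer.weight, nonlinearity='relu')  # He initialization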

Then, also choosing gradient descent parameters.

So, choosing the learning rate or

momentum in your gradient descent process.

So, that has led to a lot of methods like Adagrad,

RMSProp, Adam, Nesterov Momentum, so on and so forth.
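
As an aside, a minimal NumPy sketch of a single Adam update step, to make the learning-rate and momentum-style terms concrete; the hyperparameter values are the commonly used defaults, not something prescribed in the talk.

    import numpy as np

    def adam_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        m = beta1 * m + (1 - beta1) * grad             # first moment (momentum-like)
        v = beta2 * v + (1 - beta2) * grad ** 2        # second moment (per-weight scale)
        m_hat = m / (1 - beta1 ** t)                   # bias correction
        v_hat = v / (1 - beta2 ** t)
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)    # adaptive step for each weight
        return w, m, v

    w = np.zeros(3); m = np.zeros(3); v = np.zeros(3)
    w, m, v = adam_step(w, grad=np.array([0.1, -0.2, 0.3]), m=m, v=v, t=1)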

Then on the activation functions front,

you have Rectified Linear Unit,

Exponential Linear Unit, which is more recent,

Parametric Rectified Linear Unit, so on and so forth.

And of course, a variety of loss functions

depending on what problem you're trying to solve.

And of course, this has a flip side.

A lot of this is hyperparameter engineering

to make things work for a particular task,

and obviously that has a flip side:

perhaps the elephant in

the room for Deep Learning is how

to really choose which hyperparameter to use when.

So, we'll revisit that towards the end.

And of course, I think the last four,

five years wouldn't be

complete without talking about the success of

Deep Reinforcement Learning and the success

of it has been reinforced over these years.

It started with Atari Breakout in 2013,

when not only did the model learn to play the game,

but it also learned strategies to build tunnels,

so the ball goes up, hits the roof,

bounces off it and knocks out the top bricks,

and gets maximum points.

Then of course, in 2016 was AlphaGo,

much televised, much followed.

I think everybody knew that AlphaGo beat Lee Sedol,

the South Korean player, four to one,

in a series of five games.

But more recently, about a month back,

was AlphaZero, which is again DeepMind,

all of them are DeepMind's creations.

AlphaZero from DeepMind, which played the game of chess.

I think to put that in perspective,

I think all of you chess fanatics here,

all of you know that Magnus Carlsen is world number

one and he has an Elo rating of 2800.

So, Stockfish is one of

the best computer chess-playing systems

that has an Elo rating of about 3,300.

So, which means if Stockfish played

Magnus Carlsen a series of 100 games,

Stockfish would beat Magnus Carlsen 95 times.

That's what a difference of 500 Elo points means.
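
As a quick check (not from the talk), the standard Elo expected-score formula for a 500-point gap does come out near 95 percent; note that the expected score counts draws as half a point, so "95 times" is really about 95 percent of the points.

    # Expected score for the higher-rated player under the standard Elo model.
    def expected_score(rating_diff):
        return 1.0 / (1.0 + 10 ** (-rating_diff / 400.0))

    print(expected_score(500))   # ~0.947, i.e. roughly 95 points out of 100 games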

And, AlphaZero was trained by playing against itself.

Unlike AlphaGo, where there are also

heuristics put into the system,

AlphaZero is actually trained from

scratch by just playing against itself,

nothing else involved in the training.

And by training it just for nine hours,

they had AlphaZero play Stockfish a series of 100 games.

And AlphaZero beat Stockfish 28-0, with the remaining games drawn.

So, which has been like

a huge revelation for the community because

of the fact that it learned to

play completely by itself just by self-play,

nothing else, no strategies,

no rules given to it.

Of course, there's some fine print

on the kind of hardware that was

used to train these kinds of systems,

and that's of course a limitation

in academic environments.

That's something to be aware of.

In addition to these kinds of developments,

there's of course been a proliferation

of applications of Deep Learning.

It's everywhere now, especially in the vision,

text, and speech space.

Very broadly speaking, wherever there

is compositionality in your data,

Deep Learning seems to just work very, very well.

Like vision, text,

speech have some inherent compositionality in them,

and it just seems to learn features

in these kinds of domains really,

really well as against many other domains.

Then, obviously with the success

of Deep Learning in many applications,

we've also seen a proliferation

of Deep Learning frameworks.

So, now you have a lot of options coming from Google,

Facebook, Amazon, and Microsoft.

So, the graphic on the right,

kind of it's a little old,

it's about six months old,

it talks about the most searched framework.

I think as of six months back, it was TensorFlow.

But I think you should take it with a pinch of salt,

these things keep changing with time.

And I think there are many other frameworks which are

popular and it depends

on the task that you're trying to achieve,

and what flexibility you

want while coding in that framework.

And of course, I think this is probably

an indication of how strong it is now.

In the business space, there are a lot of startups.

We're not going to get into naming any of them,

but there are plenty of them across the board,

in so many application domains at this point.

So, let me take a few minutes to also talk about

the emerging directions in Deep Learning.

More at an algorithmic level,

a lot of these things are still in

development and have not gone

to a stage of

deployment or reaching the consumer I would say,

again, exceptions are always there but to a large extent.

And I think one of

the hottest areas in this space is GANs,

like deep generative models.

I think Generative Adversarial Networks have

been quite hot in the last couple of years.

Also, there are other frameworks

too: Variational Autoencoders, PixelCNNs,

many other ways of generating images,

videos, and of late even text and documents.

So, in fact, I think there's a pretty popular post

on Quora by Yann LeCun

that says that adversarial training

is the coolest thing since sliced bread.

Okay. So, I'm quite sure it was an exaggeration,

but I think it was meant to say that it's

a pretty powerful training method.

So, many possible applications

in this space, art as an example.

And I think there are already some applications out there

which use GANs for aesthetics and art.

But I think the broader capability of GANs is

perhaps the potential to contribute

to unsupervised or semi-supervised learning,

where you can probably

generate data similar to some dataset

from very heavily limited data, and

probably train models which

can work reliably in a robust manner.
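
As an aside, here is a compressed sketch of the adversarial training loop behind GANs, assuming PyTorch; the networks, data, and hyperparameters are placeholders only.

    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))   # noise -> fake sample
    D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))    # sample -> real/fake logit
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    for step in range(1000):
        real = torch.randn(32, 2) + 3.0        # stand-in for a batch of real data
        fake = G(torch.randn(32, 16))

        # Discriminator: tell real samples (label 1) from generated ones (label 0).
        d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator: fool the discriminator into labelling fakes as real.
        g_loss = bce(D(fake), torch.ones(32, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()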

So, of course I think that space has

been hardly explored at this point in time.

The other thing that people have been trying

to do, successfully to a limited

extent, is transfer learning.

How do you take Deep Learning to newer and newer tasks?

Okay. So, I think to a large extent within

similar domains between

very similar tasks it has worked very well.

I think everybody takes AlexNet and

modifies it to a new vision task and so on and so forth.
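
As an aside, that kind of reuse usually looks something like the following, assuming torchvision (newer versions use a weights= argument instead of pretrained=True); the 10-class head is a placeholder.

    import torch.nn as nn
    from torchvision import models

    model = models.alexnet(pretrained=True)      # features learned on ImageNet
    for p in model.features.parameters():
        p.requires_grad = False                   # freeze the convolutional features
    model.classifier[6] = nn.Linear(4096, 10)     # new head for the new 10-class task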

But I think what we mean by transfer learning

here is to take things to new domains,

where there's very little data,

very little annotation, very little expertise even.

So, can you actually use Deep Learning for

translating models into these kinds of spaces.

And of course, some examples here

would be Zero-shot learning,

One-shot learning, Few-shot learning,

where you have to classify data in one of

these classes but then you don't have

any data or have very little data from these classes.

So then, how do you transfer

models in these kinds of spaces?

Another important, I think,

algorithmic development that's come about

in Deep Learning in the last few years is

the concept of attention and memory.

I think it started with natural language processing,

I think in 2015.

I think there are better experts here.

But since then it's been used for

images and video processing for captioning,

for visual question answering,

and also an interesting dimension is to go

into memory networks and Neural Turing Machines,

where attention on memory helps

you say read and write from memory,

and simulate a Turing Machine using Neural Networks.

So, there has been some work there about,

I mean probably I think if you google

Neural Turing Machines you can read more about it.

So, in fact, there's a paper in

late 2017 that says that attention is all you need.

They say that you don't need RNNs at all;

you can kind of achieve

performance similar to RNNs

with normal networks with just attention.

Okay. So that's one of the claims that they have

which probably makes sense

but hasn't really proven itself.
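
As an aside, here is a tiny NumPy sketch of the scaled dot-product attention at the heart of that paper; the shapes are illustrative.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                        # similarity of each query to each key
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over the keys
        return weights @ V                                      # weighted sum of the values

    Q = np.random.randn(5, 64)   # 5 query positions
    K = np.random.randn(7, 64)   # 7 key positions
    V = np.random.randn(7, 64)
    out = scaled_dot_product_attention(Q, K, V)
    print(out.shape)             # (5, 64)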

And of course I think this discussion would be

incomplete without other efforts that have

popped up in the last couple of years especially

on understanding the theory

behind Deep Learning and of course,

understanding the error surfaces.

Ultimately, Deep Learning is

all about navigating the error surface.

That's all Deep Learning is

about in the training process.

So, on the theory of

Deep Learning, last year there was a paper

where they came up with something

called the information bottleneck principle.

I think I won't go into details today.

You can Google it if you're

interested; it's by a group in Israel, if I'm right.

And they say

that that's the reason why Deep Learning generalizes well.

And more recently there was also a paper by Kawaguchi,

Bengio and others on

trying to study generalization in Deep Learning.

They actually came up with a bound, and

they said that the bound actually

depends on how many times you

validate your deep network on a different validation set.

Okay, which was interesting.

Which was an interesting way of

studying the generalization.

And there's also some work on understanding Deep Learning

using Random Matrix theory from Google Brain primarily.

In terms of understanding error surfaces,

I think there has been a lot

of interesting work in this space.

So, in fact, we do some work in this space

too and I'll be happy to

discuss offline about those things.

So, there was a work by Kawaguchi at MIT on

Deep Learning without poor local minima, where

he claimed that under certain conditions,

when you have really very vast

parameter spaces such as Neural Networks,

all local minima are actually global minima.

Okay, they're all global minima. So you either

have saddle points or you

have global minima under

certain conditions and constraints of course.

So, but there have been various angles.

Michael Jordan's group

has some papers on how to escape

saddle points efficiently using

perturbed gradient descent methods,

not directly tested on Deep Learning though.

It's more in the theoretical space at this point.

There's also some work on trying to understand

what kind of local minima are actually

good for Deep Learning and there's a paper

called Sharp Minima can Generalize for Deep Nets.

They say that how flat

a local minimum is helps

in telling how generalizable it is.

More flat, more generalizable.

Okay, so that's what they claim.

And they have some interesting work

on how do you transform,

how do you go from a sharp minima to

a flat minima without changing

the cost function value, okay.

It's just an interesting thing again.

Of course, another thing here is efficient Deep Learning.

So, how do you make Deep Learning work on edge devices?

So, we all know AlexNet has

about 60 million parameters which boils

down to about 200 MB

plus and VGGNet takes about 500 MB plus.

So, how do you really make these things

efficient on edge devices?

Efficiency could mean storage wise,

efficiency could mean compute wise,

could be power-wise, energy-wise,

whatever efficiency means.

So, there's lots of interest

growing in this space obviously

because I think many companies

want to take Deep Learning to hardware,

maybe make it deployable on hardware directly.

I think one of the best works in

this space has been deep compression from

Stanford which actually won

the best paper award in ICLR 2016.

where they just had a pipeline

of simple things

which got about 50X compression

on VGG with zero loss in performance.

Absolutely zero loss in performance.

Of course, since then there have been many other methods

more recent ones include knowledge distillation,

binarized neural nets and so on and so forth.

I think there's a pretty good survey that

came out recently on the various kinds of

deep model compression and acceleration methods

that if you're interested you can probably look at.
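
As an aside, here is a rough sketch of magnitude-based weight pruning, one stage in compression pipelines of the deep-compression flavour; the threshold and tensors are placeholders, not the published recipe.

    import numpy as np

    def prune_by_magnitude(weights, sparsity=0.9):
        # Zero out the fraction `sparsity` of weights with the smallest magnitude.
        threshold = np.quantile(np.abs(weights), sparsity)
        mask = np.abs(weights) > threshold
        return weights * mask, mask

    W = np.random.randn(512, 512)
    W_pruned, mask = prune_by_magnitude(W, sparsity=0.9)
    print(mask.mean())   # ~0.1 of the weights survive; the rest can be stored sparsely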

There's also now more recent work on doing Deep Learning,

but with the input being a graph.

So far we've seen text,

we've seen speech, we've seen images.

Now the input to the Neural Network is a graph.

So, for the last couple

of years again there's been some

interesting traction in this space.

There's also a new sub-area emerging

called Geometric Deep Learning,

which is about how do you do

Deep Learning on non-euclidean spaces?

So again probably I'll encourage you to go visit

that website to know more about it.

Of course, Deep Learning meets physics.

So, we'll probably cover this a little bit more in

the last few slides that I have.

So, the last few slides that I have are, of course,

what I think we've all seen in

different forums since yesterday:

what are the limitations,

why is Deep Learning not good?

So, we'll cover a few of these things

in the next few slides and

probably I'll stop with that.

As you've already seen, interpretability and

explainability are among the key limitations.

So, I think there are two issues here.

Why Deep Learning models work,

and how Deep Learning models work?

I think both are an issue.

I think why Deep Learning models work,

I think is more about the theory and trying to

study its generalization capabilities

and things like that.

But what we're talking about here is how

do Deep Learning models

really work. I mean how do they work?

I think so far most of the effort in this space has

been in trying to visualize the weights,

probably trying to get a peek a little bit

more inside. Some of the examples, especially in

the image space, have been trying to see, if

a particular model classified

an image as a particular class.

Okay, what did it really see in

that image to give that particular class label?

Then you try to identify which portion of

the image the model was looking on.

So, you have some methods such as Grad-CAM.

I mean, we have some work, Grad-CAM++, in this space.

So, I think most of

the efforts in this space has been how

do you visualize the weights and understand them?

There's something called backprop to image

and you play around with the system to do it.
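
As an aside, here is a minimal sketch of this gradient-based "backprop to image" style of inspection, assuming PyTorch and some already-trained model; it is a vanilla-gradient saliency map, not Grad-CAM itself.

    import torch

    def saliency_map(model, image, target_class):
        model.eval()
        image = image.clone().requires_grad_(True)            # image: (C, H, W)
        score = model(image.unsqueeze(0))[0, target_class]    # class score for this image
        score.backward()                                       # gradients w.r.t. the pixels
        return image.grad.abs().max(dim=0).values              # per-pixel importance (H, W)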

But I think what we mean by

interpretability is something much, much more.

I think there's a long way to go here.

I think we talked about it in

multiple sessions over the last couple of days.

Then what we really want from

Deep Learning Networks is

to give rationale for decisions.

Getting a peek into

the black boxes is one step forward, which is great,

but I think what we're really looking for is for

the Deep Learning models to rationalize

their decisions and tell us why they did something.

And obviously that's when these models would go into

healthcare and other risk-sensitive

domains where life

could be at stake when a decision is made.

Another important thing is the need for Causal Inference.

So there's a difference between

causality and correlation.

And it's important to understand that

Deep Learning models capture

correlation, not causality, right?

So, this is just an example

here of a statistic where ice cream sales and

shark attacks have very similar statistics

between the months of say January to November.

Probably Siberia or someplace like that.

And obviously there's no causality here.

Okay, shark attacks are not

causing people to go eat ice cream.

Okay? So, obviously there's more to

understanding Causal Inference in Deep Learning.

I think there are some recent efforts from

Bernhard Schölkopf's group at Max Planck.

So, in fact they had a paper last CVPR too.

But again, a long way to go.

A long, long, long way to go here in terms of

understanding causal relationships in

data automatically using Deep Learning.

And I think Professor Raj already spoke yesterday

about this, robustness and consistency.

Okay? So we already saw this yesterday,

where given an image which was

classified with the correct label,

with a little bit of distortion or noise added,

that's what you see here,

it gets classified as an ostrich.

So the third image is classified as an ostrich in all those cases.

Okay? So, in fact there's

another example that they have in the same paper.

So, the paper in CVPR of 2015.

Where all of these images have their corresponding labels

and they're actually classified with

99 percent confidence.

Okay, so you see. As you all can see,

you can see a cheetah there,

a robin there and a centipede there.

Okay? So, if you don't see them, it's

your fault, not Deep Learning's, okay?

But I mean this set

of images were generated by

taking just random noise images,

and adding a small component of what they

thought the network learnt as

a representation of a cat, let's say.

So, you can actually do simple things like back prop to

image for all cat images and try

to understand what is that base image

that the network thinks is a cat, okay?

And then you add a small portion of it to a noisy image,

and then boom the network thinks it's a cat now.
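
As an aside, here is a rough sketch of how such fooling images can be produced, assuming PyTorch and a trained model; the step size and iteration count are placeholders.

    import torch

    def fooling_image(model, target_class, steps=100, step_size=0.1, shape=(3, 224, 224)):
        x = torch.rand(shape, requires_grad=True)              # start from random noise
        for _ in range(steps):
            score = model(x.unsqueeze(0))[0, target_class]
            score.backward()
            with torch.no_grad():
                x += step_size * x.grad                         # nudge pixels toward the class
                x.clamp_(0, 1)
            x.grad.zero_()
        return x.detach()            # often classified with high confidence, despite looking like noise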

Okay? So, clearly this does not reflect human cognition.

So it seems to be learning some

discriminative features which it

thinks is a cat and

then these are the results that you actually see.

Okay, so, again a long way to go in terms of

robustness and consistency of

results across various kinds of data.

Integrating domain knowledge.

So, this is a recent work last year, 2016,

which talked about learning physical intuition

of block towers by example,

where they try to build

block towers and try to predict when it would fall.

Okay. So, an obvious question here is,

why don't we integrate physics laws?

Why should it be data driven?

Okay? So, why don't we integrate prior knowledge?

Why don't we integrate priors?

And obviously, this also connects to bringing together,

say, Bayesian approaches with Deep Learning.

So, that's another space that I think that's

reasonably open and needs progress.

Let me take one moment. Hyperparameter engineering.

Okay. I think I'll quickly finish.

I think all of us know this.

This is the elephant in the room: the hardest part of

deep learning is to figure out which hyperparameters to use.

So, this is actually a

recent paper that's to be published,

where for a deep reinforcement learning

algorithm they take the same method,

two implementations, and get results and they found

that the results completely

vary with a huge amount of variance.

Okay. So, which clearly questions

the very reproducibility of these kinds of

models across just implementations.

Okay. Not even datasets.

Okay. I'll conclude with this last slide.

So, there's actually a paper called Deep Learning:

A Critical Appraisal that came up a couple of weeks back,

which seems to be reflecting some thoughts about,

is deep learning hitting a wall?

So, there's actually Francois Chollet's comment

out there, which says,

"For most problems where Deep Learning has enabled

transformationally better solutions in vision and speech,

we've entered diminishing returns territory

in the last year or so."

Okay. Of course, I think, in today's world,

at about three to six months,

if you don't get good results,

I mean, that's one generation gone, right?

So, that's how fast things

are going at this point in time.

But, things have slowed down at this point in time in

domains where we have known

good performance, so that, kind of,

begs the question: while we escape all kinds of

saddle points and

probably hit better local minima while training,

has the field itself hit a local minimum?

Of course, Hinton

recently talked about moving beyond

backprop and I think that's

potentially another direction to look at.

I think I'll stop here.

I'm sorry if I exceeded time and I know we're

waiting for the big talk today.

>> Hello sir.

My question is more of a philosophical one.

Why is it called Deep Learning?

I mean, we work so much on,

you know, combined and supervised approaches also.

And GANs are handling

unsupervised problems and all, and even meta-learning?

>> It's just another way of calling neural networks.

In fact, Hinton says in one of his interviews

that he just wanted to coin

a fancy word that makes people take notice.

So that's why they came up with Deep Learning.

So, I think, it's finally just a name.

>> You discussed visual attention.

If I apply Deep Learning for

making a saliency model, saliency maps,

then how will we evaluate it? Because for most

of these complex cases,

some people [inaudible] more

salient and another 50 percent

are saying region B is the most salient.

So, how will it be validated,

because we don't have the real ground truth?

>> Sure. I mean,

I was talking about visual

attention in a different context.

I think saliency is a different way of

looking at visual attention.

In this case, what we're talking about is when we,

let's say, you do something like image captioning. Okay?

So, when you try to caption

an image, the typical, I mean, today,

one of the most popular approaches is to ask

the network to decide on

a particular part of the image to look at,

based on that you come up with a word.

Then, the network looks at another part

of the image, comes up with another word,

so on, and so forth, and then you

string the words together to make a caption.

Okay? So, the challenge there is to find out,

which part of the image should

my network look at, at a particular point in time?

So, I think there are methods for that.

There's something called hard attention,

soft attention, and things like that.
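
As an aside, here is a toy sketch of one step of that soft-attention captioning loop, assuming PyTorch; all module names and sizes here are hypothetical placeholders, not a specific published model.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    feat_dim, hid_dim, vocab = 512, 256, 10000
    attn_proj = nn.Linear(feat_dim, hid_dim)    # scores image regions against the decoder state
    decoder = nn.GRUCell(feat_dim, hid_dim)     # consumes the attended image context
    vocab_proj = nn.Linear(hid_dim, vocab)      # scores over the vocabulary

    def caption_step(region_feats, hidden):
        # region_feats: (num_regions, feat_dim), hidden: (hid_dim,)
        scores = attn_proj(region_feats) @ hidden           # relevance of each region right now
        alphas = F.softmax(scores, dim=0)                    # soft attention weights
        context = (alphas.unsqueeze(1) * region_feats).sum(dim=0)
        new_hidden = decoder(context.unsqueeze(0), hidden.unsqueeze(0)).squeeze(0)
        return vocab_proj(new_hidden), new_hidden, alphas    # word scores, next state, where it looked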

So, that's slightly different from saliency.

Saliency, I think, is a different thing,

I think in that case the labeling is subjective.

So, I mean, that's a separate problem by itself,

I think, maybe, we can talk offline about it.

I think in those kinds of cases,

you will have to do some kind of

crowdsource, vote aggregation,

or something like that, to come up with some kind

of a robust ground-truth

before you evaluate those kinds of models.

But today, I think, for a large part with

all of these methods and model saliency,

people just try to subjectively

evaluate and see how things work.

>> Thank you so much.

>> Hi. So, you said earlier

that this thing, Deep Learning, is nothing

but studies

related to neural networks only.

>> Yes. that's correct.

>> I just wanted to ask the question that,

if it's related to Neural Networks, then,

how much of the concepts or the theories established for

the Neural Networks are true

actually for the real brain as well,

where the actual neurons exist?

Is anybody studying- are

any kind of studies being made related to that as well?

>> I think it's broadly understood today that

Neural Networks or Deep Learning as we

see are inspired by the human brain,

but they don't mimic or emulate or

simulate or anything in that context.

So, I think the model of the

neuron and all that is kind of

borrowed from the studies on the human brain,

but beyond that, I think,

there's not direct similarity at this point.

>> Are any scientific groups actually working to make

any kind of similarity or

any correlation between the two?

>> I mean there is-

>> There has to be a good mix

between the medical sciences

as well as the computer scientist as well.

>> I think, there is

a lot of work and personally, I think,

the one group of work that I'm personally aware of

is Tomaso Poggio's group in MIT,

I think they do some work in this space.

But at least, I have not seen

something convincing in that space,

to be honest, I mean, that's my understanding.

>> If you're available then I would

like to talk to you after the break.

>> Sure. >> Thank you so much.

>> Yeah. Thank you.

>> Okay. Are there any more questions for Vineeth?

>> Okay. One last.

I think, people are trying to set it up.

So, maybe, we should use that time to have a question.

>> Hi. So, I want to ask like,

most of your presentation was about,

like, images, images, and images.

>> Okay. All right. Okay.

>> So, how about text and other things?

>> Sure. Okay. So, in fact,

there is an interesting thing; I wanted

to start with a disclaimer.

So, of late in Machine Learning,

there's been a lot of discussion

about bias in Machine Learning.

So, I have to disclaim that I work

a little more towards vision,

so that's why you had the bias in

these slides towards vision.

So, that was the only reason.

Okay. So, it's more, that I work in vision,

so most of the slides were towards vision,

but I think, maybe if I can,

if I have a couple of minutes.

So, if you look a few years back,

I think the fields of speech,

vision, NLP, all of them were very far away,

each of them had their own conferences.

I think, the feature extraction used to be

so different in these domains,

that there was not, of course,

the Machine Learning algorithms are similar,

but feature extraction used to be so different,

that there was not

much of a crosstalk between these domains.

But I think, with Deep Learning and

Feature Extraction being automated,

I think the methods that apply to all these domains,

are, kind of, coming together.

And that's the reason today you have so much

of crosstalk between these modalities.

Because the step that differentiated them,

which is Feature Extraction is now

gone out of the picture, right?

So, I think a lot of these methods are still relevant to

text but I think there are a lot of experts

here on text and speech,

I would definitely recommend

you to talk to them about it.

I'm sure they know more than that.

>> Sure. Thank you.

>> Okay. I think we will not, you know,

hold the next talk any longer.

With that, thank you again Vineeth.

Thank you Vineeth again. Yeah.

>> Thank you.

For more information >> Deep Learning: A Review - Duration: 32:07.

-------------------------------------------

Prejudice or Sexy? What Japanese REALLY think of GYARU GIRLS in Japan. - Duration: 6:55.

They're cute. Harajuku and Gyaru culture are both very unique in their own ways.

That's why they split in two.

The image I have of Gyaru girls is long lashes, and lots of makeup.

One of my friends was very individual like that.

Hey guys it's Cathy Cat. Gyaru are the wild girls of Japan

They dress sexy, they are super cute and they are a little bit wilder

than we maybe expect the person to be.

That's a fashion style that has been very big in Japan, especially in Tokyo

but it's changed over the last couple of years.

This time we are gonna go onto the streets of Tokyo

and ask Japanese people about their opinion of Gyaru and also

what the general idea is, what their first impression is,

and if they maybe ever wanted to try it.

Let's go and ask Japanese people. And don't forget,

to subscribe to our channel as well, so you don't miss any future videos

that we upload when we interview people here in Japan.

Let's go and Ask Japanese.

What's the image you have of gyaru girls?

They are just so cute. They don't wanna look like everyone else.

I wanna keep doing things differently from others.

They're cute. Harajuku and Gyaru culture are both very unique in their own ways.

That's why they split into two ways.

In the middle are people that just go with what's trendy in normal fashion.

And then on both opposite ends are Harajuku kawaii girls and Gyaru girls.

Do you think both those styles are unique to Japan?

The Harajuku kawaii culture has started from Japan.

That's what people appreciate about Japan.

People from abroad like seeing it.

The world acknowledges Harajuku Fashion.

But Gyaru girls pick a type of "kawaii", cute, that men like.

It's a cute style that is stronger and sexy.

So a childish kawaii and a sexy kawaii?

Yes exactly!

Harajuku kawaii is girly and fluffy stuff that girls like.

But Gyaru girls have a more edgy kind of kawaii.

Sexy and strong, cute girls.

Edgy cute is a nice way of saying it.

If you became a gyaru what would you do? - I wanna walk around town.

I would flaunt my sexiness.

What image do you have of Gyaru girls? - Image?

I don't have a bad image but...

But in Japanese society, we don't think highly of Gyaru girls.

We have the image that they don't settle for a real job, or won't work as hard

That's why there have become less. And well, the trend has been going down...

It's not trendy to be a gyaru right now, that's why there are less.

The image I have of Gyaru girls is long lashes, and lots of makeup.

What do you personally think of Gyaru girls?

I would not want a gyaru girlfriend...

As a friend though, yes. They can wear what they like.

If you asked me yes or no... I'd say no...

Why so?

Well as girlfriend... Gyaru girls seem so excited and party like...

I can only imagine them as friends to have a good time with.

What image do you have of Gyaru girls?

Gyaru? They are really unique!

They are really individual and they have their own Gyaru makeup.

There was also the really dark Mamba makeup years ago.

Even though they were rejected by society, they decided to be themselves.

And I thought that was amazing.

Their clothes and such are very unique.

I don't think they are bad girls. Some girls do really cute makeup still...

I knew some girls who were gyaru, and it didn't bother me...

They didn't have a bad personality or anything.

One of my friends was very individual like that.

I didn't have a weird image of gyaru girls. I thought she looked cute.

You keep saying weird image. Does gyaru culture have a bad image?

My parents often told me never to become like my friend.

So initially I had a bad image of gyaru.

But I made friends with many, and those girls were nice girls.

That was just the bad image that the older generation had of those girls.

So the parents generation thought badly of them?

My parents forbid me to wear that fashion or their kind of makeup.

What is so bad about gyaru?

My parents seemed to think that those girls are so loud and flashy.

They wore thick eyeliner and such...

My parents were against that.

That's why they forbid me to copy that.

The makeup was too strong. - They didn't like that.

The parents generation has a lot of prejudice against gyaru.

Even though you're born with black hair, they dye it blonde and such...

I personally loved that though...

When I was in high school I wore thick makeup too

So you were a Gyaru girl in school? - Yes I was.

What did you like about being a gyaru?

I liked how we all were so loud and excited and had a good time.

Gyaru seem to always have fun. - Yes we would when we hung out.

Got it. Thank you so much!

So did that make you curious what this Gyaru Fashion is all about?

We've actually done a special video,

Because we are actually..... ?

(To director) What are you writing on the blackboard.

We actually went and interviewed people that work at the Kurogyaru cafe

which is actually in Shibuya, Tokyo, where most gyaru girls used to be.

It's girls who are still living the gyaru and kurogyaru livestyle.

And they have their own bar. They dress interested customers up

in the same style, give them the same makeover and

so you have a chance to experience that gyaru lifestyle.

And you can ask them anything about it.

If you are curious about that one, we've done a video about it.

That's in the links at the end of the video.

Be sure to check that out, it was a lot of fun.

Thank you for watching until the end, and don't forget to subscribe to future videos.

We upload... 1...2..3... we upload a lot of videos every week.

So you'll NEVER be spoiled for choice... ?

No wait a second. You will be spoiled for choice? Yes.

You will be spoiled for choice for future videos. So...

Thank you very much and catch you soon. Bye.


-------------------------------------------

News Conference: UMBC & Kansas St - Postgame - Duration: 45:49.


-------------------------------------------

News Conference: Florida State & Xavier - Postgame - Duration: 27:31.


-------------------------------------------

PAKISTAN or TURKEY - Which Military is Better? - Duration: 5:45.

Coming up in this episode of FTD facts

You'll see a side-by-side comparison of the military power of these two amazing countries Pakistan and Turkey

How's it going guys? How you doing today?

My name is Leroy Kenton and welcome back to another episode of

FTD facts and guys if you love learning about the military and the military power of countries and you want to see more videos relating

to the military and military power give this video a

Thumbs up and if we get over a thousand likes on this episode we know that you guys would want to watch more military

Episodes and for all the new faces here to FTD facts hit that subscribe button

we give you facts about the different countries and cultures all around the world so if you love learning about our earth a

Few facts is a place where you can learn all that stuff ok?

So let's begin with this military comparison, starting off with the country of Pakistan. The Pakistan Armed Forces are the military forces of

Pakistan; they're the sixth largest in the entire world in terms of active military

personnel, as well as the largest of all the Muslim countries. Pakistan has the world's sixth largest

nuclear arsenal as well. The total manpower available in Pakistan's military is 95 million

They have a total of six hundred and thirty-seven thousand active personnel and their reserve personnel are two hundred and eighty-two

Thousand now to introduce

Turkey Turkey ranks 8th in the world for his total military power and what makes a Turkish military force so

Interesting is that since turkey is the only?

Secular power that can negotiate with Middle Eastern countries at a cultural level the Turkish army has a huge impact on

middle-eastern relations as well as politics turkeys total manpower available is at

41 million six hundred and forty thousand

It's active personnel are a total of three hundred eighty two thousand eight hundred and fifty

And they have a reserve personnel of three hundred and sixty thousand five hundred and sixty-five now

Let's take a look at the divisions of their military and compare them side-by-side

Starting off with the pakistani army when we look at the pakistan army. They have a total of

2,924 combat tanks

And they have two thousand eight hundred and twenty eight armored fighting vehicles

They have a total of 134 rocket projectors, and their towed artillery equals

3278 as well as they have

465 self-propelled artillery for turkeys army strength they have two thousand four hundred forty-five combat tanks

7550 armored fighting vehicles they have eight hundred eleven rocket projectors as well as they have six hundred ninety

seven towed artillery and

1113 self-propelled artillery now. Let's move on from their army to their Air Forces to see what's going on up there

Pakistan has a total of 301 fighter aircraft. They have 394 attack aircraft

261 transport aircrafts 190 trainer aircraft and their total helicopters are at

316 and out of those they have 52 attack helicopters for Turkey they have

207 fighter aircrafts they also have 270 tact aircrafts their transport aircrafts are 439 they have

276 trainer aircrafts and their total helicopters are at

455 and they have 70 attack helicopters now for the comparison of their navies in Pakistan's Navy. There's a total of

197 naval assets

They have 17 patrol crafts, eight submarines, ten frigates, and three mine warfare vessels. In Turkey's Navy, they have

194 total naval assets

thirty-four patrol crafts sixteen frigates twelve submarines and nine Corvettes

And they have eleven mine warfare vessels now currently Pakistan has a total defence budget of seven billion dollars u.s.

But Pakistan is said to increase its defense spending to eight point seven eight billion dollars u.s.. During the fiscal year of

2017 to 2018 so in the year 2018 now so that's going to be coming up real soon

if not in effect already now a large portion of Pakistan's military is going to it's

Huge budget priorities that are its combat aircraft submarines surface warships

And the country's various indigenous missile programs

Turkey their defense budget is a little bit higher at eleven point five billion dollars u.s.

And look at this so turkeys defense budget has increased by nearly fifty percent since Jen of

2017 and this is according to the Official Gazette

Despite having the second largest army in NATO, Turkey was not among the top 15 military spenders in the year 2016

And those top spenders were of course the US, China, Russia,

Saudi Arabia as well as India now that concludes your military comparison of Pakistan and Turkey

Let me know all your thoughts and comments down below in the comment

Section and guys if you haven't seen it already you got to check out our country comparison where we do a side-by-side comparison

But just on the country on a whole of Pakistan and Turkey

We'll have that video at the end of this episode so once it's done. We'll just have one of those

Cute little card things that you can watch that episode

I highly recommend it and guys don't forget to follow me on social media have those links down below

You can send me a message on Instagram. I'll be replying to as much of them as possible this week, and that's it

I'm done talking. I'll see you guys real soon later

So right in front of you now is our country comparison of Pakistan and Turkey

I highly recommend that one as well as our other which is better episodes where we do side-by-side

Comparisons of other countries so continue to learn here in FTD facts come back tomorrow for even more episodes yeah, and I'll see you soon


-------------------------------------------

wild animals finger family song for kids | dinosaurs,Gorilla,nursery rhymes,NASH TOON Tv - Duration: 11:12.

finger family rhymes dinosaurs


-------------------------------------------

PAKISTAN or TURKEY - Which Country is Better? - Duration: 5:50.

Since we started our which is better series some of you have been asking for us to do Pakistan or turkey

Which country is better, so that's exactly what I'm doing in this episode welcome back to another episode of FTD facts

My name is Leroy Kenton and guys, if you just found this channel, you want to click that subscribe button as well as that bell

notification, because we post videos every single week, pretty much every single day

So if you love learning about this planet and everything in it, you know what to do, okay so for this episode

I'm going to start off with Pakistan Pakistan is a country in South Asia with a population that makes it the sixth most

populous country in the entire world

The name Pakistan comes from 'pak', which means pure, and 'stan', which means land, so it means land of the pure, and that's in the Persian language

and the Urdu language. Pakistan became independent from the British Indian Empire in the year 1947 along with its neighboring country

India Pakistan's total population sits at one hundred and ninety nine point three million people and it's population densities

227 point seven zero square kilometers it has a land area of seven hundred and ninety six thousand

ninety-five square kilometers and this excludes Pakistani administered Kashmir now for Turkey turkey is a

Transcontinental country now what makes Turkey so unique is that it's a nation in

Eastern Europe as well as Western Asia with a cultural connection to ancient Greek Persian

Roman Byzantine as well as the Ottoman Empire's turkeys over ninety percent Asian

Geographically, but it has more than 2000 years of European history they practice Islam

But has been home to the world's largest

Christian Church for nearly a thousand years

Turkeys total population is estimated to be eighty one point nine two million people and it's population density is

104 point five four people per square kilometer and has a land area of

783,562 square kilometers. Now moving away from geography to the money of the

Countries the currency used in Pakistan is a Pakistani rupee and in Turkey

They use the Turkish lira

Pakistan's economy is the 24th largest in the whole world in terms of purchasing power

parity, and is the 42nd largest in terms of nominal gross domestic

product although some aspects of economic freedom have advanced modestly in Pakistan in recent years

decades of internal political

Disputes and low levels of foreign investment have led to very up-and-down growth as well as led to a lot of under development

Pakistan's total GDP is

283 billion U.S. dollars and its GDP per capita is about

$5,250. Looking at Pakistan's total exports, that number is at 20.5 billion dollars, and its top three

exports are house linens, rice, as well as non-knit men's suits

Pakistan's total imports were forty five point nine billion dollars according to the latest numbers and his top imports are refined petroleum

Crude petroleum as well as palm oil now

Let's look at the economy of turkey based on the current state of its economy

Turkey is defined as an emerging market and one of the newly industrialized

Countries in the world over the past decade Turkey has shifted more of its focus to its service sectors like tourism

communications and transportation while they slightly decrease their dependency on

agricultural and industrial aspects Turkey's total GDP is

857 billion dollars, and its GDP per capita is twenty-four point two thousand dollars; its total exports were one hundred and thirty nine

Billion dollars and Turkey's top three exports are cars

Gold as well as delivery trucks looking at the total imports for Turkey

That was a hundred and eighty eight billion dollars cars were the number one

Imports as well as other imports that were unspecified and refined petroleum

Followed suit, okay

So when we compare the living costs of Pakistan and Turkey, which country is cheaper to live in and which country is more expensive?

Comparing the cost of living in Pakistan to Turkey, we see that food is 18% cheaper in Pakistan

Housing is 19% cheaper

But clothing is 11% more expensive now transportation is 46% less expensive in Pakistan

Personal care is around 9% less, and entertainment,

however, is 64 percent higher in Pakistan than in Turkey. So in total the cost of living in Pakistan is 14%

Cheaper than the total cost of living in Turkey and the final thing to look at is a national debt and the debt per citizen

Pakistan has a national debt of 69 point four seven billion dollars u.s.. And that leaves its debt per citizen at just

372 dollars now Turkey's national, debt is two hundred and thirty two point two billion dollars

But its debt per citizen is two thousand eight hundred and eighty nine dollars

So that concludes this side-by-side comparison of Pakistan or Turkey, which country is better

Let me know down below your thoughts and comments about any or both of these countries and guys don't forget to check out the military

comparison of Turkey and

Pakistan that video will be at the end of this episode so you can click on it and just go straight to it as

Well as we have some links down below to some other good stuff like my social media where you can follow me to see what?

I'm up to when I'm not filming these episodes until next time guys. I'll see you real soon

He goes and before you head on out of here. Here's that video I was talking about where you can look at the military

Comparison of Turkey and Pakistan we also have other which is better episodes where you compare other nations, so yeah again?

Thanks for watching you guys have been awesome, and I'll see you real soon in another episode


-------------------------------------------

EDLが解説!【YouTube】再生リストの説明欄を編集する方法 - Duration: 0:57.


-------------------------------------------

Como eu estou aprendendo português - Duration: 5:31.

Hey guys, what's up

Today I want to make a quick video

and talk about my language learning journey with Portuguese in Portuguese

my Portuguese is not great but I don't think it's horrible either

but I still make mistakes and I still pronounce things wrong

and yes if you haven't subscribed to my channel yet, please do!

and leave a comment as well about how I can improve my Portuguese

So I started learning Portuguese last year in August

And I practiced and studied every day since then

now I am learning with Duolingo

and I'm almost finished with the course

like the tree

I really like Duolingo because it seems to help me

I also have books to learn with or to help out

like this

I also try to read children's books on my kindle aloud

like "The Little Prince"

And yes I have my phone settings are in Portuguese

this helps a little

but one of the biggest things that helps me

is a huge amount of immersion, like all day

every day I try to watch a series on Netflix

or videos on Youtube

in Portuguese with subtitles

so I know what's being said

this helps a lot especially with my accent

and I also try to talk with myself

to see what I can improve

I translate songs and articles to English

I also write journals and make video diaries

to help as well

I think that the most important thing that I do is

integrating it into my every day life and using Portuguese every day

and yes these are some of the things that I do

and a little background with my language learning journey

but my head is spinning

so i'm gonna go now, I will see you guys in the next video, bye


-------------------------------------------

Storage Unit Hunting - 2 In 1 Day by Coach Dom Costa - Duration: 2:28.

Hello from Coach Dom Costa we are at the first unit the ten dollar unit so let's see

what we got yeah lights are dead over here so we got odds and sods no dead

bodies find out anything is good in here oh my god a thousand containers of weed

well the empty containers so I'll go through this and we'll see what we can

find got a load and go more to do I'll keep you posted

so I think what you guys saw before is I got the medical marijuana a container

unit as man I got lots of containers here and the lids

well kids I scored the medical marijuana or cannabis unit I got bags I got all

the little bottles you could ever want Well back again with the second unit

let's open put it up and see what we got

not too shabby I like the things are in containers oh dude we got horns antlers

Bowflex containers

Mother Mary this might turn out more info following after all that work with

the storage locker I had to get a beer but I'm really grateful that I had a

chance to go get that stuff so hit the links below

I've always got more stories I can't tell you like the other guys if it's

going to turnout or not so far so good what I've seen in the unit's I spent

basically 180 bucks for two units I think I got plenty to resale and I'll

dig through it and I'll keep you posted of the new stuff that I find but I got

to wrap up this video and thanks for watching as always and I'll catch you on

the next one I think for me it's the thrill of the hunt of what you might find

and hey antlers and horns come on it's a good day talk to you soon!


-------------------------------------------

Increase Productivity - Become 50% More Productive Right Now - Duration: 1:57.

Hi.. I'm Judy Machado-Duque author of Life Purpose Playbook and founder of

Productivity Goddess and in this video I'm going to share with you how to

become 50 percent more productive right now. So, make sure you watch this video

until the end so you can get a copy of my free PDF called my before noon

income-producing method. So, let's begin! Step one is to write a list on a piece

of paper on your planner on your phone whatever you use to keep yourself

organized write a list the night before of everything that needs to get done

tomorrow, make sure you include your big projects - anything that's going to help

you to build your business to bring your goals to you faster to move you towards

your goals and not just all the simple little to do. Step two is to prioritize

your list. so, look at the list and anything that is goal related it's going to

bring you closer to your goals it's gonna build your business you want to

prioritize 1 2 & 3 your top 3 priorities for the day. You're going to take action on

those 3 before noon so those three that you've prioritized

you're going to work on first thing in the morning after you've woken up, after

you've done your morning routine after you've had breakfast to get into the

office or your home office and you're gonna start on those first three because

those are your three big priorities and you're going to get those done before noon.

So, there you go now you know a really simple but crazy powerful technique to

help you to become 50 percent more productive right now. Now can you use a

little bit of help in really mastering this before noon commitment this

technique? Well I've got a free pdf for you called my before noon

income-producing method. So, make sure you click the link in the description below

to download that now. If you liked this video make sure you press the like

button below, share it with your friends and be sure to subscribe! Thanks for

watching and we'll see you in the next video!
