
>> So it's my great pleasure to introduce Eric Horvitz,

who is a Technical Fellow and Director of all of

the Microsoft Research Labs worldwide and my manager.

Eric is a world-renowned figure

in artificial intelligence and machine learning.

He's done work in machine learning,

perception, natural language processing,

decision-making, human-AI collaboration, cognition.

Eric is not only a computer scientist with a PhD in computer science from Stanford; he also has an MD, with training in neuroscience.

I know very few people in the world who

have that much breadth and depth of knowledge.

He's the winner of the Feigenbaum

Prize, Allen Newell Prize.

He's a member of the National Academy of Engineering, and a fellow of the ACM, AAAI, and the American Academy of Arts and Sciences. He's an absolute pleasure to listen to, so I

won't stand between you and him. Here's Eric for you.

>> Thank you very much; it's wonderful to hear that from you.

Well, it's a pleasure to be here in India in Bangalore.

I spent three days, had a fabulous time at the lab,

and now coming up and visiting with you here.

It's a pleasure to greet you as academic colleagues.

Microsoft Research is in its 27th year now, and the mission has remained the same for all of those 27 years.

The primary mission: expand the state of the art, with or without regard to Microsoft.

Of course, the second mission there is to transfer technologies and innovations as fast as possible.

Keep Microsoft vibrant.

But the first point here is all about

the fact that we're

interested in the science and the frontiers;

both theory and practice.

And by being worldwide, with an open research model, we reach out to academics like yourselves and students, and we hope there's also reach in,

and that keeps us really fresh and keeps the best ideas in the mix at Microsoft Research.

We just added a new lab,

Microsoft Research Montreal and

it was in the news this week,

Prime Minister Trudeau just made an announcement in Canada about the lab, and about some reverse brain drain for Canada, in this case from CMU moving into Montreal.

So with that said, the reason I came to Microsoft back in 1992 was that my discussions with Bill Gates and Nathan Myhrvold, the founder of MSR, were all about intelligence. Building computers that can hear, think, question, engage, see objects: things PCs might do someday, Bill felt, and that we should really start working on. It's always been a major pillar at the lab.

I've heard people debating what AI is.

I like this definition: artificial intelligence is the science

of pursuing computational mechanisms

underlying thought and intelligent behavior.

If you look back to the proposal in

1955 by the founders of the field,

somewhere in their first couple of paragraphs they said,

"We're trying to find how to make machines solve

the kinds of problems now reserved for humans".

And they also pointed out in those days there were

four pillars of a sort on the challenge;

they called that perception,

learning, reasoning, and natural language.

The latter in particular was pointed out as being uniquely human and saying something deep about intelligence.

Now over the years there's been a rise

of a rich set of subdisciplines

all within the AI family

surrounding these four pillars in a variety of ways.

And we tend to hear, as the public and even as researchers, about the big wins.

We hear about Watson and Deep Blue in chess

and the driverless car with the Google car we're seeing.

And recently the AlphaGo work,

but in reality the research has

continued and there'd been

a stream of results over the decades.

For example, at Microsoft Research, our teams have worked with the operating systems group for over a decade, and under the hood in Windows,

we have machine learning and decision theory

being used to guess what you're going to do

next at any moment and

pre-fetch and pre-launch applications

to speed things up in magical ways for example.

In the United States, in the 90s, we introduced automatic handwriting recognition into mail pipelines, with language models, to begin scanning handwritten letters; those are dwindling, thankfully.

But it still applies to a few of them, with 25 billion letters per year being handled by AI systems in the United States, and of course now worldwide.

These aren't celebrated in the press like AlphaGo,

but the point is that this is not new.

There's been an ongoing stream of results that we've been learning from over the years.

Now that said, we have had inflection points in the AI community, and one of these happened around 2009 at Microsoft Research, when Geoff Hinton and his group visited us and tried again; Geoff Hinton and his team never give up.

His use of these layered deep networks, which he used to call deep belief networks, these recognition networks, came with the discovery that the same methods we had used in the late 80s were essentially more powerful than we thought; they were just famished for data all this time.

With speech, our speech team started playing with the Switchboard dataset, which is low-bandwidth conversations on telephone lines, and just last year we hit human levels; now our teams have exceeded human levels of transcription in speech.

It's not just speech; the same methods also work in vision, with our Beijing team building ResNet systems that could see better than humans in terms of categorization on one of the ImageNet classification tasks.

And now reading comprehension: just last week, the first results came in where both Alibaba and Microsoft Research, with Microsoft Research a bit ahead, could actually do better than humans at answering questions based on Wikipedia text in what's called the SQuAD reading comprehension challenge.

Now with these kinds of successes, companies like Microsoft, academic colleagues, and Microsoft's competitors, Google, Amazon, Facebook, are pressing these advances into service.

So for example, we now have what would have been considered magic seven or eight years ago: real-time speech-to-speech translation in many languages in Skype.

Tools are becoming available now that let programmers call cognitive services doing vision in the cloud, to recognize faces and emotion and objects, for example.

And then there are products like the Office product line; Cortana, for example, now reading email and extracting, for example, promises you make to other people and requests that are made to you, and reminding you about them by place, time, and location.

There's so much more opportunity

ahead to apply these methods even today's methods.

I've often called AI the sleeping giant for healthcare.

By the way it's still asleep.

There's so much to be

done and even with today's methods and

yesterday's methods that have never been pressed

into the service that they deserve.

For example, the system we built several years ago that's deployed in hospitals around the world,

computing the probability that a patient will be

readmitted in 30 days from thousands of variables,

right from electronic health record data

and machine learning going on

locally at every hospital

because every hospital is a bit different.

But the model used here in that healthcare area, of taking sensed data, feeding it into predictive models, and feeding those into decision models that know how to weigh costs and benefits, is a powerful pipeline, a golden pipeline. It can be applied in many, many fields right now, and not just for automation, but for education, recommendation, and insight building as well.
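That golden pipeline, sensed data into a predictive model into a cost-benefit decision model, can be sketched in miniature. Everything below is hypothetical for illustration: the toy weights, the costs, and the intervention effect are made up, not the deployed hospital system, which learns locally at each site.

```python
import math

# Hypothetical toy weights; the real systems learn these locally from each
# hospital's own electronic health record data.
WEIGHTS = {"prior_admissions": 0.8, "age_over_65": 0.5, "lives_alone": 0.4}
BIAS = -3.0

def readmission_probability(patient):
    """Predictive model: sensed patient variables -> P(readmit within 30 days)."""
    z = BIAS + sum(w * patient.get(k, 0) for k, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

def should_intervene(p_readmit, program_cost=500.0,
                     readmission_cost=12000.0, risk_reduction=0.35):
    """Decision model: act when expected savings outweigh the program's cost."""
    expected_savings = p_readmit * risk_reduction * readmission_cost
    return expected_savings > program_cost

high_risk = {"prior_admissions": 3, "age_over_65": 1, "lives_alone": 1}
low_risk = {"prior_admissions": 0, "age_over_65": 0, "lives_alone": 0}
```

The same predict-then-decide shape carries over to prefetching applications or routing fresh water; only the sensed variables and the cost model change.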

I'd be remiss if I didn't talk about scientific understanding coming through the lens, the computational lens, of artificial intelligence. Daphne Koller's team several years ago showed how you can take the Morse code of gene expression data and, through probabilistic modeling methods developed in the uncertainty community, generate modular designs, or see the modularity of biology and how things work when it comes to regulation.

I think in stunning work,

I have to celebrate Sara-Jane Dunn and her team's work at

our Cambridge Lab in

unraveling mysteries of embryo genesis.

How does a little ball, a fertilized egg, become a human being?

How do stem cells become different tissue types?

It seemed like a big complex problem,

but applying z3 that many of you knows in

AI theorem prover used in verification and other places,

figured out the code and found that

stem cells go to tissue types with three control points.

This was an AI problem and insight building here.

So that said, where do we go from here?

About eight months ago we formed a unit called

Microsoft Research AI at

our main Research Center in Seattle.

And the basic idea behind doing this goes back to our four pillars and our subdisciplines. Microsoft Research is based on an open lab model; across our sites, we've always hired top talent in these different subdisciplines of AI.

And the thought was, given where we are with

artificial intelligence and by

the way even with these inflection points,

my view is that things have been moving very slowly.

I think that if we showed the people of 1955 where we are today, they'd be severely disappointed. We expected so much more.

But we believe we can do more, even given the platform we're at right now.

So here's kind of the dream sequence

in putting together Microsoft Research AI.

Let's crystallize, let's put together these teams in

a new way and think about a set of

shared aspirations that we can all

organize around and then

organize all of our resources and planning around. There are actually five aspirations.

I'm going to talk about three today

given my limited time.

Number one, attain more general intelligence.

Second, master human-AI collaboration, and third, pursue insights and possibilities with AI for people and society.

Let's talk about attaining more general intelligence.

If you think about it, we've

become quite good at building idiot savants.

These narrow wedges that we celebrate.

Yes we can use them in object recognition,

we could run a translator,

maybe we can get a question answering system to work,

but these aren't the kinds of intelligences that the founders of AI sought to build someday.

We'd like to address what I would call

an understanding and pursue

an understanding of the mysteries of human intellect.

What are these mysteries?

How do we learn, in an unsupervised way, even as toddlers, massive amounts of information, understandings, and knowledge?

How do we understand common sense: the common sense of the world, gravity, containment, the common sense of social discourse?

How do we apply and solve

many different kinds of tasks in

daily life from one to

the other that seem quite different?

In some ways, you might say that

we've become masters at building wedges,

narrow wedges of competency,

and we want to go to richer integrative intelligences.

One perspective when we look at this is combining even the competencies we've been able to develop to date into symphonies, well-coordinated symphonies of intelligence.

So as one example here, one effort we call the Integrative AI project. One of the projects in this area is called Situated Interaction, the situated intelligence project.

We're trying to combine natural language,

dialog abilities,

vision abilities, social discourse, common sense,

the common sense of space and time and engagement,

and interaction abilities to build new kinds of

experiences and to also understand what it takes

to coordinate among these competencies.

Turns out that there's a magic we think in

the actual coordination and using

machine learning to coordinate multiple components,

each of which was developed separately,

pulling together these modular abilities

into a unified whole.

In some ways, doesn't it seem like we, as people, are a singular intelligence? We're probably not.

We're probably thousands of competencies coordinated with such fluidity that it feels like a bright singular intelligence.

Now, another area; I'm just going to give you a couple of different directions in this pursuit of general intelligence.

Now, we all celebrated.

We're all very excited about

Deep Blue in chess and later poker.

In the AI community, we actually

solved the game of checkers fully.

For example, in these interesting games of perfect information, the magic behind a lot of this is that we can actually run simulations with perfect state information trillions of times and collect data, as you all know, and then use machine learning to learn value functions, learn moves, and start playing at better than human levels.
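The simulate-collect-learn loop can be shown on a toy game of perfect information. This is a hypothetical miniature, Nim with a ten-stone pile rather than chess: random self-play generates the data, a Monte Carlo estimate plays the role of the learned value function, and greedy play against those values is already strong.

```python
import random
from collections import defaultdict

random.seed(0)

# Nim: players alternate taking 1-3 stones; whoever takes the last stone wins.
wins = defaultdict(int)    # wins[pile]: times the player to move at `pile` won
visits = defaultdict(int)  # visits[pile]: times `pile` occurred in self-play

def rollout(pile):
    """One game of random self-play; returns visited (pile, mover) pairs and the winner."""
    history, mover = [], 0
    while pile > 0:
        history.append((pile, mover))
        pile -= random.randint(1, min(3, pile))
        mover ^= 1
    return history, mover ^ 1  # the player who took the last stone won

# Collect data: the small-scale analogue of running the simulator trillions of times.
for _ in range(30000):
    history, winner = rollout(10)
    for pile, mover in history:
        visits[pile] += 1
        wins[pile] += (mover == winner)

def value(pile):
    """Learned value function: estimated P(player to move wins) at this pile size."""
    return wins[pile] / visits[pile] if visits[pile] else 0.5

def greedy_move(pile):
    """Take the number of stones that leaves the opponent in the worst position."""
    return min(range(1, min(3, pile) + 1),
               key=lambda k: value(pile - k) if pile - k > 0 else 0.0)

def greedy_vs_random(pile=10):
    """Play one game, greedy player moving first; True if the greedy player wins."""
    mover = 0  # 0 = greedy player
    while pile > 0:
        pile -= greedy_move(pile) if mover == 0 else random.randint(1, min(3, pile))
        mover ^= 1
    return mover == 1  # the winner is the one who just moved
```

With this many rollouts, greedy play typically recovers the known optimal strategy of leaving the opponent a multiple of four stones, even though the values were estimated from purely random play.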

But this doesn't apply to the real world.

We're in a very, very data-scarce situation,

even if you believed in the power of

machine learning to learn some of the nuances of life.

One approach is to sort of think about what's

the analogy for richer worlds

to playing trillions of games against

oneself and learning how to solve those problems.

And it's building rich simulations

with an eye to data collection.

So in the AirSim project, Shital Shah and others on the team have really tried to build a system with rich physics, friction, lighting, and to take actual high-fidelity implementations of sensors as they exist, for example, on a drone, and model them too, so we would see what an actual drone would see in this world: the messiness of the world, the crashes, and so on.

And in one example, we can actually begin to learn.

So we made a very clear decision to have

data collection harnesses in

these simulated worlds

because it's all about machine learning.

And, for example, if we run stereoscopic cameras in this world,

and we actually start computing distances from them,

since we know ground truth of depth,

can we build a convolutional neural network from

the data streams from the simulated

world and then begin using it

in the world and show how well it would work in

the actual world and then worry later about

the difference between the fidelity of the simulated world and the actual world, the open world?
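A minimal sketch of that data-collection-harness idea follows. The camera numbers are assumed, and a one-parameter fit stands in for the convolutional network; the point is only that the simulator hands out labeled (disparity, depth) pairs for free.

```python
import random

random.seed(1)

FOCAL_PX, BASELINE_M = 400.0, 0.25  # hypothetical simulated stereo rig parameters

def simulated_sample(true_depth_m):
    """The simulator knows ground-truth depth, so every rendered frame yields
    a free labeled pair: (noisy measured disparity, true depth)."""
    disparity = FOCAL_PX * BASELINE_M / true_depth_m
    return disparity + random.gauss(0.0, 0.05), true_depth_m

# Harvest a labeled dataset from the simulated world.
data = [simulated_sample(random.uniform(2.0, 30.0)) for _ in range(5000)]

# "Train" depth ~= k / disparity with a simple estimate of k: the mean of
# disparity * depth. This one parameter is the toy stand-in for the network.
k = sum(d * z for d, z in data) / len(data)

def predict_depth(disparity_px):
    """Learned mapping from measured disparity to metric depth."""
    return k / disparity_px
```

Once the predictor looks good against simulated ground truth, what remains to measure is exactly the gap the talk raises: how well the fidelity of the simulated sensors matches the actual drone in the open world.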

And so once we can do that,

we could even start doing

reinforcement learning in these worlds.

So, as an example,

and I'm moving to driving here,

this is a reinforcement learning system,

trying to learn by itself how to drive.

And you can see the first 10 of trillions of runs, where the car becomes brilliant over time, because it's learning about acting in the world with an objective function, using some basic principles of reinforcement learning, getting better and better even in real time here.

Now, here's a really cool example, I don't know if we have sound here, where we showed how we can train up a drone to start doing the task of inspecting power lines without crashing into them.

So after thousands of training sessions, we have this beautiful system that we think we can now deploy in the world, in remote areas, to make sure the power lines are all doing well. Again, trained on the sensors in the simulated world and then run in the real one.

So that's one direction: building and collecting large amounts of data in simulated worlds.

Let's talk a little bit about mastering

Human AI Collaboration.

Over the years, some of my colleagues, say, the hardcore AI planning colleagues, have asked, "Well, why do you play with human interaction, that HCI stuff?" My reaction has been that reasoning and decision-making about what people are trying to do, recognizing their plans, and trying to help them is harder than playing chess.

It's harder than a go move,

and it's a really hard formal interesting area

that mixes decision-making,

learning with design and social affordances.

So what are some components and some directions in this world? I want to inspire people to work more on these interesting problems: to build AI systems that can actually augment and extend human beings in a variety of ways.

Well, one approach is to use AI to develop

new kinds of engagement models directly,

new kinds of perceptual abilities.

So in our Cambridge lab, we built systems that actually use generative models.

These aren't videos here,

but this is actually a system that's

recognizing at a distance human hand pose and gesture.

I've often said, if you can get

thumb and forefinger into the digital world,

you can build civilization.

So, we want to see what people are

saying and doing, how they're engaging.

Wouldn't it be great to have computers that recognize this at a distance?

But let's talk about cognition now,

moving from perception to cognition as another piece of

the important recipe of

building systems that can augment human intelligence.

I like viewing 20th century cognitive psychology

as characterizing human blindspots,

biases and gaps in our ability.

So look at human abilities on this y-axis here, and this blob of gaps as kind of the cognitive architecture we all essentially share, even though we're all different in our own ways.

We have almost untapped knowledge in

AI about the detailed biases,

gaps and blindspots of humans.

And here, again, is the dream sequence: might we build systems someday that are explicitly designed to fill those gaps?

We have areas of psychology on memory and attention and judgment. We can start, and we have at Microsoft Research, to leverage these results with machine learning and sensing to actually design custom-tailored extensions in different areas, with different kinds of tasks.

Now, you can start straightforwardly to do some very basic things, but a year and a half ago, I was very happy to see the Camelyon Grand Challenge, which basically provided the AI community with thousands of slides along with the correct answer: is there metastatic breast cancer in this lymph node? That's the Camelyon challenge here.

Now, humans were superior to the best deep learning system; of course one deep learning system was the best among the entries, but humans were still superior. Yet when you combine, even in a naive way, the output of the deep net with the human, you cut down error significantly. Just running these together, right? And these were experts that were better on their own than the best system.
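That error reduction from a naive combination is easy to reproduce with two hypothetical noisy graders whose mistakes are independent. The noise levels below are made up for illustration, not the Camelyon numbers.

```python
import random

random.seed(2)

HUMAN_SIGMA, MODEL_SIGMA = 0.45, 0.55  # assumed noise levels: human slightly better

def score(true_label, sigma):
    """A grader returns a confidence near the true label (0 or 1) plus noise."""
    return true_label + random.gauss(0.0, sigma)

def error_rate(predict, n=20000):
    """Empirical error of a predictor over alternating 0/1 true labels."""
    wrong = sum(predict(i % 2) != (i % 2) for i in range(n))
    return wrong / n

human_alone = lambda y: int(score(y, HUMAN_SIGMA) > 0.5)
model_alone = lambda y: int(score(y, MODEL_SIGMA) > 0.5)
# Naive combination: average the two independent confidences, then threshold.
combined = lambda y: int((score(y, HUMAN_SIGMA) + score(y, MODEL_SIGMA)) / 2 > 0.5)
```

Averaging helps because the human's and the net's errors are only weakly correlated; where both fail on the same slides, the gain disappears.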

Here's another example, in the crowdsourcing realm, from work with Ece Kamar.

There's a challenge to help the astronomers recognize and

tag hundreds of thousands of galaxies

of different types with human eyeballs.

There's so much data.

The astronomers need help from citizens.

And so it turns out there's a system called Galaxy Zoo, where people are looking at this data, these galaxies, and tagging them: volunteers who are called citizen scientists.

We can also apply machine perception to the same problem, with almost 500 features analyzed for each of these galaxies.

Well, it turns out, when you actually use machine learning to figure out how best to weave computer vision with human intellect, human perception, you can figure out when to call humans, how they should help, and how many people you might need to vote with labels, for example, and get to full accuracy with half the human effort, or 0.95 accuracy with a quarter of the human effort.

And this was combining machine learning, to figure out how to do the weave, with a planner that could sort of guess ahead, in an AlphaGo-style sense, looking ahead in a non-myopic way to compute the value of human computation, human reasoning, and human perception.
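The "is one more human vote worth it?" question has a clean miniature: for independent voters of known accuracy, a planner can compute exactly how much one more label buys. The accuracy and target numbers below are invented; the real system also models individual workers and individual galaxies.

```python
from math import comb

def majority_accuracy(k, p):
    """P(majority of k independent voters, each correct with probability p,
    yields the right label); ties are broken by a fair coin."""
    acc = 0.0
    for correct in range(k + 1):
        prob = comb(k, correct) * p**correct * (1 - p)**(k - correct)
        if 2 * correct > k:
            acc += prob          # clear majority is right
        elif 2 * correct == k:
            acc += prob / 2      # tie: coin flip
    return acc

def votes_needed(target, p, max_k=25):
    """Smallest panel whose majority vote reaches the target accuracy:
    the planner's stopping rule in miniature."""
    for k in range(1, max_k + 1):
        if majority_accuracy(k, p) >= target:
            return k
    return None
```

The marginal value of one more vote is `majority_accuracy(k + 1, p) - majority_accuracy(k, p)`, which a planner can weigh against the marginal labeling cost before asking another citizen scientist.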

Another really interesting area in

the complementarity realm is human errors.

Several years ago, we took a large amount of

medical data from emergency rooms.

And we defined a surprise, a human surprise, a surprise of an expert physician. An emergency room doctor tells the patient to go home, they're fine.

And within 48 hours, they show up at the hospital again and, if it's a serious issue, they are admitted as an inpatient, with a serious problem that was not encoded anywhere on the chart when they left.

That's called a human surprise.

That's an interesting training signal.

If you had massive amounts of

data, including the surprises,

you can actually build a system that will infer in

real-time medical entities that

hide in the shadows of expert cognition.
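The surprise signal itself is just a join over visit logs. Here's a sketch with a hypothetical record schema; field names like `disposition` are invented for illustration.

```python
from datetime import datetime, timedelta

def surprise_labels(visits, window_hours=48):
    """Label each ER discharge-to-home with 1 if the same patient returned and
    was admitted as an inpatient within the window: the 'human surprise'."""
    labels = []
    for v in visits:
        if v["disposition"] != "discharged_home":
            continue
        deadline = v["time"] + timedelta(hours=window_hours)
        surprised = any(
            w["patient"] == v["patient"]
            and w["disposition"] == "admitted_inpatient"
            and v["time"] < w["time"] <= deadline
            for w in visits
        )
        labels.append((v["patient"], v["time"], int(surprised)))
    return labels
```

Those 0/1 labels, attached to whatever was known at discharge time, are exactly the training signal for a model of what hides in the shadows of expert judgment.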

And then we could tell our doctors: we've built a model that's not going to replicate humans, but that's been trained on the frontier of your knowledge, on the blind spots you face daily, on these dangerous areas in a hospital.

And at discharge time, as the physician is writing up discharge notes, the system might come forward and say, "Hey, I've got a couple of things that might surprise you about this patient. Do you want to see what I'm thinking?"

This is a very powerful complementarity.

By the way, I should say that in the United States, in-hospital deaths due to human error, in the latest report, make it the third most common cause of death in the US: over a quarter of a million people per year.

You can see why I say AI is the sleeping giant in healthcare. Might systems someday be like the safety nets that catch bridge workers, just in case they fall as they're painting the bridge or putting in a rivet? Isn't this an interesting application of AI systems?

Now, the last piece that's really important, I think, in thinking about how to build systems to help human beings, is the coordination of initiatives.

So, here's our complementarity of

human cognition and machine intelligence.

But we want to think about, in a real-world setting,

what's the design and

the control of the mix of initiatives?

And we'll see an example here that makes it very, very concrete.

From my colleagues at Johns Hopkins,

I sit on their advisory board,

and I've been following their work over the years,

Greg Hager and team in surgery.

So, Carol Reiley, a number of years ago, for her master's project, working with the Johns Hopkins robotics team, built a system that can look at surgical videos and understand what you might call actions in surgery, to build a grammar of surgery that can be recognized.

It did quite well at recognizing different actions: a suture tightening, a loosening, a left transfer; these are the phrases surgeons use when they talk about their surgical actions.

And then, you can take this ability to detect

state to recognize surgical plans of a human surgeon.

And then imagine someday,

what would it be like for a surgeon in

manual mode to work

with an AI system that's looking over,

that's going to assist hand-in-hand in

a back and forth volley of a mix of initiatives?

You see the auto mode here now; imagine you also had some UI conventions for when the system is automated and when it is not. Here's the manual mode: it's inserting this suture needle into, thankfully, not a patient, but some sort of cookie dough, in fact.

And you'll see how, automatically now, the system understands that it should grab it, and pull it, and tighten. These are the first wee hours of a coordination between human and AI robot in surgery someday.

Now, initiative is also interesting in human-computer interaction.

I just want you to watch this interaction here by

the receptionist that I have by my door at Microsoft.

Watch the facial expressions

being controlled by the AI system.

We have audio here on this?

>> Are you here looking for Zack?

>> Yes.

>> Are you here for the two o'clock meeting with Zack?

Sorry, did you say you were

here for the two o'clock meeting with Zack?

>> Yes.

>> Is one of you John?

>> Yes.

>> Sorry, I can't tell who is

speaking when you stand so close together.

Which one of you said they are John?

>> I did.

>> Right. Hi John,

Zack is expecting you.

Will you be joining the meeting?

Sorry, will you be joining the meeting?

>> Yes.

>> Alright. I'll let Zack know

you will be joining his meeting with John.

I'm sorry, I think Zack is running a little bit late.

I'm pretty sure he's on his way.

>> There are four layers of uncertainty in there, and before we built the current models, they were all just entropy measures. Now they're controlling gestures and gaze, and when the system comes forward. There's a lot of subtlety in that system.

Let's move on now to pursuing insights and possibilities with AI for people and society.

I wanted to start by talking about some of the dimensions of the incredible opportunity and challenge ahead, and start with leveraging AI on societal challenges.

Again, sleeping-giant material; here are possibilities. I'll show a couple of examples that I found inspirational in my own work with students and colleagues over the years.

So, there's an interesting application

of machine learning, decision making,

and planning, when it comes to large-scale,

public health, epidemics for example.

Cholera is a disease that kills about 100,000 to 150,000 people per year around the world.

A few years ago, Kira Radinsky and I said, let's take a large amount of historical data going back 100 years on cholera epidemics, look at DBpedia for detailed information as to where they happened geographically, look at the weather feeds, and so on.

And we learned to predict in

advance high-risk regions for cholera next year,

this season, and discovered

new things that we took to the actual epidemiologists.

For example, we discovered that a region with a very long dry spell that's then hit with a flood is at much higher risk than in normal years of hydration, for a variety of reasons that we're still trying to investigate with experts in cholera.
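As a sketch, the discovered risk pattern is simple enough to express as a feature over a daily rainfall series. The thresholds below are invented for illustration; they are not the epidemiological model.

```python
def dry_spell_then_flood(daily_rain_mm, dry_days=30, dry_mm=1.0, flood_mm=50.0):
    """Return True if a dry run of at least `dry_days` days is immediately
    followed by a flood-level downpour: the high-risk cholera signature."""
    dry_run = 0
    for mm in daily_rain_mm:
        # Check the flood condition before updating the dry-spell counter.
        if mm >= flood_mm and dry_run >= dry_days:
            return True
        dry_run = dry_run + 1 if mm < dry_mm else 0
    return False
```

A feature like this, fed into a predictive model alongside geography and season, is what lets the planner position fresh water or short-acting vaccines ahead of an outbreak.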

But here's the thing.

If you can get fresh water to a cholera patient, someone suffering from cholera,

you can cut the mortality rate from 50 percent down to

like one percent or less.

So, the idea is to predict in

advance where cholera is going to be and make

sure you have fresh water at

those locations even if

it's expensive to do that planning.

And now there are very short-acting vaccines, and the Gates Foundation and other groups want to know, since they're expensive and short-acting: where should we take them, and when?

If we can compute and infer based on weather patterns and structure, we can do a great job with that. That's one example.

Here is another example: people with special needs, like the sight impaired. This is getting into the Seeing AI system, which you may have heard about.

And the idea was, could we take again those wedges we have of different kinds of abilities: to see objects, to caption photos and imagery, to recognize friends and their emotions, to speak text aloud, to scan barcodes and understand products in the stores?

Might we start addressing

the special needs of sight

impaired people in one package?

Now, what's interesting is that this project started out as a Hackathon project by Saqib, a developer in our London office.

And he made a comment once that one thing he doesn't like about being sight impaired at developer meetings, with program managers and so on at Microsoft, is that he doesn't know if people are listening to him when he speaks.

And so, the first goal was,

let's build a system that lets him understand who's

sitting in front of him and their emotion.

It was a really big deal to him to have this ability,

and that's what led to

the richer system called Seeing AI, which is now available as an iPhone download.

There's so much to do for AI for Earth, in general, thinking about sustainability. There are many different directions here; in the United States, the National Science Foundation has funded several programs in what's called computational sustainability, looking at challenges, for example, like how to ideally set up a reserve that will do its best to protect wildlife in a region.

Here's a project I want to talk about.

I think it really stresses the power

of being creative and leveraging existing data streams.

So, it turns out right now, over the United States and over India, in South Asia, China, and other places, if you look up in the sky, there are thousands of flying sensors at this moment. And back when we were looking at this problem, we realized we could actually go to services that would tell us information: ground radar trails on each of these airplanes flying at any moment.

And we wondered, can we build

a weather map from these thousands of sensors?

In the United States, the NOAA agency, which does the weather maps that the planes use to plan, that the weather people on TV use, and so on, launches just a few tens of balloons every few hours around the nation. And it's a very, very sparse set of sensors they build their map on.

It seemed there was much more fidelity to be had.

Think about the geometries,

think about the graphical models,

and think about how to convert and

harness these thousands of

sensors for our goal now of

inferring wind speeds over the nation.
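The core trick, treating each airplane as a wind sensor, falls out of vector arithmetic: the radar ground track minus the aircraft's own air velocity is the local wind. Here's a hypothetical sketch; Windflow's actual inference runs graphical models over many aircraft, altitudes, and grid cells, and the airspeed and noise numbers below are assumptions.

```python
import math
import random

random.seed(3)

def wind_estimate(heading_deg, airspeed, track_deg, ground_speed):
    """Wind vector (east, north) = ground velocity - air velocity."""
    ax = airspeed * math.sin(math.radians(heading_deg))
    ay = airspeed * math.cos(math.radians(heading_deg))
    gx = ground_speed * math.sin(math.radians(track_deg))
    gy = ground_speed * math.cos(math.radians(track_deg))
    return gx - ax, gy - ay

def radar_observation(heading_deg, airspeed, wind, noise_kt=1.0):
    """Simulate what a radar trail would report for one aircraft in this wind."""
    gx = airspeed * math.sin(math.radians(heading_deg)) + wind[0] + random.gauss(0, noise_kt)
    gy = airspeed * math.cos(math.radians(heading_deg)) + wind[1] + random.gauss(0, noise_kt)
    return heading_deg, airspeed, math.degrees(math.atan2(gx, gy)), math.hypot(gx, gy)

# Hundreds of aircraft over one grid cell; true wind is 20 kt from the west.
TRUE_WIND = (20.0, 0.0)
observations = [radar_observation(random.uniform(0, 360), 450.0, TRUE_WIND)
                for _ in range(500)]
estimates = [wind_estimate(*obs) for obs in observations]
wind_east = sum(e[0] for e in estimates) / len(estimates)
wind_north = sum(e[1] for e in estimates) / len(estimates)
```

Averaging over hundreds of planes beats any single noisy trail, which is exactly why thousands of aircraft can out-resolve a few tens of balloons.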

So, we've got it as a true cloud service, I pause for the giggle there, called Windflow: windflow.azurewebsites.net.

you can see what NOAA says

about the winds over North America,

and see what Windflow says,

it's often quite different.

And we have good information that Windflow is quite a bit better, based on traditional holdout-set analyses, but we also had a fun time, which I'll share with you a bit, launching weather balloons in Eastern Washington, for example.

And to really get ground truth on where balloons would go based on Windflow, we called the FAA and said we're going to fly a balloon today, and it's going to be very high.

You have to register

and report these balloons to the FAA.

By the way, these balloons go so high; I'll share some of the beauty with you. You see the curvature of the Earth, which is really nice.

But look at what our Windflow model told us and what NOAA says here. We were only 12 miles off, and NOAA was 60 miles off. This shows in general how much better we do, with multiple launches here.

And we're now working with a major carrier

to use our winds,

pointing their onboard planner

at our Windflow as opposed to NOAA,

to do better routing of

their planes with lower CO2 footprint.

And this same work was later leveraged to build a richer hybrid weather system by someone [inaudible] who went off to Stanford, with high school students, college students, and a pre-doc from India working with us.

Here's another example from several years ago

on leveraging existing feeds with

AI techniques to learn about people in

situations where there is

an opportunity for recovery from a disruption.

The Lake Kivu quake was in

the Democratic Republic of the Congo in 2008.

And we ended up getting access to cellphone data,

just ins and outs from cell towers from Rwanda,

which is a country adjacent to the Democratic Republic of the Congo.

Just 140 cell towers,

these dots in this Voronoi diagram,

we had access to literally just the numbers of calls in and out of the cell towers.

Based on three years of that data, we could listen for anomalies of people using the phone more during a time when there might be something surprising going on, like the ground shaking, combining that with a model of dispersion from an epicenter.

We were able to actually infer where the epicenter was, just 17 miles off, by converting the cell towers across Rwanda into an array antenna.

Looking at the bumps over

every single one of those antennas.

Then we can compute where a disruption is going on in Rwanda over time and how it is persisting, weighted by the population levels,

and even use a decision-theoretic model to take the current uncertainty, a coherent measure of uncertainty in our inferences, and compute an ideal reconnaissance plan to learn more.

All from existing cell tower ins and outs, taking that smart lens of AI to that data.
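A minimal sketch of the kind of inference just described, assuming hourly call counts per tower; the z-score anomaly test and the weighted-centroid localizer are my own simplified stand-ins, not the project's actual dispersion model:

```python
import numpy as np

def detect_anomalies(counts, baseline_window=24, z_thresh=3.0):
    """Flag hours where a tower's call volume spikes far above its recent
    baseline (a simple sliding-window z-score test, a stand-in for the
    anomaly models in the talk)."""
    counts = np.asarray(counts, dtype=float)
    flags = np.zeros(len(counts), dtype=bool)
    for t in range(baseline_window, len(counts)):
        window = counts[t - baseline_window:t]
        mu, sigma = window.mean(), window.std()
        if sigma > 0 and (counts[t] - mu) / sigma > z_thresh:
            flags[t] = True
    return flags

def estimate_epicenter(tower_xy, anomaly_strength):
    """Crude localization: centroid of tower positions weighted by how
    anomalous each tower's traffic was (the real system used a model of
    shaking dispersing from an epicenter instead)."""
    w = np.asarray(anomaly_strength, dtype=float)
    xy = np.asarray(tower_xy, dtype=float)
    return (xy * (w / w.sum())[:, None]).sum(axis=0)
```

With 140 towers and three years of counts, the same two steps scale directly: flag the towers whose traffic bumps, then weight their positions by the size of the bump.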

So before summarizing, I want to talk a little bit about some of the downsides and rough edges of AI right now that are posing interesting opportunities and challenges as we think through the upside of the fruits of machine intelligence.

This includes the whole area of trustworthiness and safety of our systems as we've come to rely upon them more, especially in high-stakes areas like healthcare and transportation.

The area of fairness, accountability, and transparency is coming to the fore now. It's interesting how just a few years ago we were so excited to get a classifier running in a healthcare setting, or a predictive model even in criminal justice, to be helpful. And now we're stepping back as a community; we hold workshops, and there are rising areas of scholarship, trying to understand how we can detect when our systems are leveraging implicit, sometimes very subtle but powerful, biases in the data about gender, about race, about criminal propensities, from datasets that might be doing an injustice to society.

Because they will, at low cost, fuel vicious circles. For example, if you have a lot of policing data gathered by looking at this side of the tracks, because of concerns with the broken windows on this side of the tracks ("broken windows policing," as it's called), and then you apply this dataset somewhere else, you'll tend to falsely replicate the demographics, based on the assumptions made as to where the first sensors were focused.
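That replication effect shows up even in a toy simulation; the proportional-allocation rule below is an invented, deliberately simplified model of "patrol where you saw incidents before," not any real policing system:

```python
def run_feedback_loop(true_rates, allocation, rounds=10):
    """Toy model of a data-driven feedback loop: observed incidents are
    proportional to (true rate x share of attention sent there), and the
    next round's attention is allocated in proportion to what was observed."""
    for _ in range(rounds):
        observed = [r * a for r, a in zip(true_rates, allocation)]
        total = sum(observed)
        allocation = [o / total for o in observed]
    return allocation

# Two neighborhoods with identical true rates, but one starts with more
# coverage; the initial skew in the data never corrects itself.
final = run_feedback_loop(true_rates=[1.0, 1.0], allocation=[0.7, 0.3])
```

The allocation stays at roughly [0.7, 0.3] forever: the data "confirms" the assumption that produced it, which is exactly the replication described above.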

And at a place like Microsoft, where we field cognitive services, we want to develop policies and formal metrics for detecting bias and for de-biasing systems.

And this task is not for the faint-hearted; it's a very interesting technical challenge, and it's also context-sensitive.

If I didn't include gender in a medical reasoning system,

it would be malpractice.

However, if I fold gender inappropriately into a resume recommendation system, I'm doing a disservice by amplifying a bias that we don't like to see in our inferences.

The area of transparency is coming to the fore now.

I often say it's interesting,

and Raj Reddy will remember this,

back in the expert systems days,

explanation was a major topic.

There were whole dissertations.

Randy Davis did a whole dissertation on how do you

explain the reasoning in a production system.

Explanation was considered critical

and we're coming back to that now saying, "Wait a minute.

How do you explain an inference to

somebody who just had their loan rejected as to why?"

The Europeans, within two months from now, have told us, with a regulatory action and document known as the GDPR, that we had better be able, we as providers of services, whether we're academics or industry, to describe, as they say, in a clear way, the logic of the processing behind important decisions made by automated systems.

Jobs and the economy: we'll just leave that as a discussion point, maybe for the panel this afternoon.

There's an ongoing debate about which jobs are going to go away and how disruptive AI will be as we get into the science and capabilities of human intellect. Won't we remove some jobs that we've relied upon?

And maybe even jobs that are,

this time around, in very,

very high-paying sectors

like pathologists, and radiologists,

and certain kinds of other diagnosticians,

as well as truck drivers, for example.

A major fraction of the United States, I discovered, is supported by truck driving as its primary source of income.

Others point out that just like any other technology,

AI will create new constellations

of opportunities to work,

new kinds of jobs, especially

if we get that augmentation right,

augmenting human intellect to create

new kinds of productivities.

So, on this interesting area of AI people in society,

we set up a board called the

AI and Ethics in Engineering and Research

Advisory Board at Microsoft

reporting to Satya Nadella and

the SLT or the Senior Leadership Team that

is working now with representatives from

every major division to come up

with policies in safety and transparency,

labor and economics, and issues around bias and fairness,

as well as standards of practice for human-AI collaboration: generally, how these systems and people work together.

I should say that we just published a book.

It was a collaboration of

Microsoft Research and our policy group last week,

and it's available for free download right now.

Satya Nadella and Harry Shum

wrote a foreword to this book.

And it captures some of the thinking coming out of the Aether Committee right now, and where things are headed at Microsoft Research.

And finally, I want to make a comment

that we can't do this alone.

So, Microsoft worked very hard to bring together

an organization known as

the Partnership on AI to

benefit people and society or just PAI.

It was fabulous to work on; I'm chairing this board right now,

but we brought together Amazon, Facebook, Microsoft,

Google, DeepMind and Apple

all to the same table where we decided,

let's come up with best practices

for the community when it comes

to these rough edges of AI for a better future.

So, I'm going to stop by just summarizing

pathways ahead for AI.

We need to amp it

up on pursuing principles of intelligence.

It's exciting, yes, but we haven't

made much progress if you look very carefully.

We should put on the hats of the founders of our field.

We want to harness AI to augment human intellect and empower people to achieve more; this is a whole area in itself that requires focus, attention, and technology.

We want to work to solve societal challenges,

as well as address rising ethical issues with AI,

especially when it's applied naively.

And there are some surprises

and twists and turns there that are coming to

the fore that we didn't expect, not even 10 years ago.

And it's very important that we collaborate

widely on technology and

policy and engage multiple stakeholders in industry,

academia, policy, civil liberties organizations,

and the general public. Thanks very much.

>> Thank you, Eric. I'm sure there must be questions that you would want to ask Eric. Can you get the mic, please?

Wait for the runners to give you a mic, and then you can.

If you want, you can raise

your hand so I know who has a question.

Yeah. That's the next point.

Pull it down.

>> Okay.

>> Yes, it is working now.

Really, thank you very much for your fantastic talk,

just going over the history as well as the future.

It looks like a great future and lots of opportunities.

I wanted to delve deeper a little bit

into the ethics question that you alluded to.

I think that's a question I

have thought about and I have had no answers.

As you said, it is not for the weak-hearted, because, take gender for example, there are lots of correlated variables in the dataset. We have known that if we use zip code, then it's a proxy for this; if you start using, maybe, your grandfather's Polish degree, that becomes a proxy for that.

So whatever, there are many, many other variables.

And if you start to remove all such variables,

then you will be left with nothing to

be able to really make a useful prediction.

So, I'm curious where we are in terms of sort of

solving some of these issues because they look

extremely hard to me and I don't even know how to start.

>> Yeah, no, this is a very good question.

We're doing deep dives at Microsoft.

Some folks might know that several of our centers (New York City in particular, the New England lab, as well as Redmond) have significant efforts in this area, called FATE and FAT ML. There's actually a workshop called FAT ML (Fairness, Accountability, and Transparency in Machine Learning) now,

where people are developing

methods and they're showing, for example,

some methods for, let's say, detecting and removing gender bias in large-scale vector representations of language, for example.

And they also show trade-offs; we can actually measure how, if you try to fix the challenge that nurses are women and doctors are men, and you take these obviously biased, nuanced dependencies and neutralize them by stretching these spaces in different ways, you induce errors of various kinds too.
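The neutralizing step can be sketched as a vector projection; this is a minimal version of the hard de-biasing idea for word embeddings, with a made-up gender direction (real systems estimate it from many word pairs and keep a list of words that should legitimately retain gender):

```python
import numpy as np

def debias(vectors, gender_direction):
    """Neutralize: remove each word vector's component along the estimated
    gender direction, so occupation words like 'nurse' and 'doctor' carry
    no signal on that axis."""
    g = gender_direction / np.linalg.norm(gender_direction)
    return {w: v - (v @ g) * g for w, v in vectors.items()}

# Tiny illustration with made-up 3-d "embeddings".
vecs = {
    "doctor": np.array([0.9, 0.2, 0.5]),
    "nurse": np.array([0.1, -0.3, 0.5]),
}
direction = np.array([1.0, 0.0, 0.0])  # stand-in for normalize(v_she - v_he)
neutral = debias(vecs, direction)
```

After neutralizing, both vectors have zero projection on the gender axis; the accuracy cost the speaker mentions shows up as exactly the information discarded along that axis.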

And so it's going to be a kind of trade-off at times, or at least a matter of being transparent, right? If you detect the problem, you can at least say, "There's a problem with this data."

I want to point out, by the way, that it's not just nuances of bias; there are legal issues. In the United States, and probably in India too, there are anti-discrimination laws where you can't discriminate by gender in certain situations, or by race. And so systems that are doing that can formally be breaking the law.

And there will be

court cases based on these and people bringing

these issues to trial and to judges and to juries.

So, we need to develop methods to detect and characterize; if we can't fix, at least disclose. If we can fix, we should characterize the loss in accuracy we incur by trying to balance out the negative biases that might be detrimental to society.

But it's a challenging area. And I have to say, I hear you're coming to Seattle this summer; we should actually talk more.

>> Okay. We'll take one question there at the back. Yeah.

>> Humans can recognize real and fake emotion. Can machines recognize real and fake emotion, like a real smile versus a fake smile?

>> I see.

Yeah. So I would say, I would think, that AI systems can recognize and generate both realistic and fake expressions. To me, that's an easier task than doing better than humans on transcription, for example. So I have no doubt that systems we build could even have superhuman abilities to understand and process human emotion.

That's what you're asking. To me, that's very visual, and I think our systems are pretty good at that, even analyzing it down to the actual sets of muscles and their patterns of activation someday. In fact, I can imagine systems being built, and we've seen this in some areas like detecting deception in humans, where we build machines that are better than people at doing that, giving us signals in voice and in facial expressions that there's deception going on by this human being; signals that are superhuman. Humans are fooled easily, and judges are fooled easily, but the system says, "Watch out. This person's lying."

>> I think there's one question

somewhere there at the back, right?

Alright. Okay. Then we'll take this one. Yeah.

>> Hi Eric.

This is [inaudible] Institute of Technology, Delhi.

And I will touch upon the topic of trustworthiness in AI; I think you talked about healthcare as a sleeping giant. And on a different note, with Pedro Domingos yesterday, we debated whether you would vote for a robot for president.

But the question is basically, when we want to put

>> Isn't it easier these days?

>> Yeah.

>> It's good to say yes to that.

>> Yeah. Yeah. So the question is basically about when we want to push these AI systems, especially on the health front. We see a lot of resistance from the doctors as such, and I have a few friends in the medical sciences industry who also agree that you can't just push the AI systems; there's a lot of restraint, or a lot of disconnection there. And that's why, even in India, this is even more critical, where we have a real dearth of doctors and a lot of patients, a lot of population as such.

So since you have been on the neuroscience side as well, and you have interacted with a lot of people, what is your take on how these technologies can be pushed on that front, and how we can convince people that AI is trustworthy there? And I think you touched on a few topics like ethics and all, but how do we convince the people in the other community who are not in an AI company [inaudible]?

>> So there are two pieces there. There's the actual question, is the system robust and trustworthy, and then, can you convince people of such? There's a dependency there, but they're two separate issues.

I think one way to work on what's called the translation of these advances in computer science into healthcare is to point out how poorly people, how poorly experts, are doing at making these decisions, and also to pick the problems correctly; picking the right problems, I mean.

So we have a student right now working with data on 40,000 surprising deaths at the University of Washington hospital, working with UW students.

We have all this data and we want to figure out: why did these patients die when they came in as elective patients to the hospital? Over many years, we have lots of patients, the surprise deaths.

And we said, "Let's build systems to reason about where the failure to rescue was." It's called, in medicine, FTR, failure to rescue. When did it happen? Where could an AI system have helped?

Well, we're in a situation now where we have deaths in the human-only situation, and we're looking at where AI could have helped a bit. You can imagine that, by picking the right problems, the arguments get quieter in terms of what you're trying to address.

We're building systems that are augmenting and assisting, easing the daily burden of a physician; that's also very acceptable. "I like that idea. Just make sure that I'm making the decisions. That's fine; let's augment the physician," for example.

I think, 25 years from now,

by the way, I think I said this 25 years ago,

but now it's different, hopefully.

Twenty-five years from now, I think it will be more obvious; it'll be kind of funny, looking back in history, as to why it took so long to get these inference tools into healthcare, among other fields.

Now on robustness, let me just

disclose to the world here that I'm

a trusting driver of the Tesla in autopilot.

I've had it for a couple of years now

and I'd become accustomed to using it.

I'm also one of the rare AI aficionados who was almost killed in that car, in a stochastic situation that I didn't expect, on the same road as usual, where I had $15,000 of damage, two tires blown out, and nearly severe injuries. There's a lot to do there.

>> Okay, we'll take

one more question there at the back somewhere.

Yeah. If you can raise your hand. Yeah.

>> Hello. So I have two questions for you.

One is about the models of complementarity that you showed.

>> Yes.

>> So there, it was shown that, as humans, we have some perceptions, some biases in our minds, and machine learning can complement them. But machine learning itself is data-driven, and that data is what has occurred in the past, which also reflects the bias as well. So how can that complement then decrease or reduce that bias in human perception? That's one question.

And the second question is one point

that you've shown in your slides about biodiversity,

like AI now can be used for biodiversity issues.

So I want to know how this can be useful for preserving animals or minerals, because this is also a burning issue. Humans can't exist unless they also do exist. Thank you.

>> Let me refer briefly to

the first question for

now, before I get to the second one,

and just say that humans and machines, even from the same data, learn differently, and that gives you a complementarity opportunity.

And typically, I'm thinking of complementarities when it comes to not just biases but blind spots, inferential abilities, the ability to see different things in a visual image.

So you can imagine rare cases where the same biases are

in both and that wouldn't be

a very good situation for complementarity.

But you can also imagine studying both; we want to study both sides and understand how to build, in an engineering sense, the complementarity and how to harness it, but it's a good question.

The second comment: let me just say that E. O. Wilson, who is a very, very renowned sociobiologist and sustainability person, told me at a very small meeting, coming out of a meeting on AI opportunities in the environment, that AI may be the only hope for humanity on this planet. I asked him if I could quote him, and I actually quoted him at a meeting where I gave a whole talk on the answer to your question, pulling together many pieces of work. I said, "That's a very strong statement. Can I say that?" Because I believe it. He really believes it.

And by AI, I think he meant not just machine learning, the data-centric sciences, the inference, the planning, the optimization, and perception. I think this idea of doing modeling and simulation was in his mind, too.

But today, in preparing this talk last night, I had examples that I was going to pull in to show you exactly what people are doing, particularly Andreas Krause at ETH Zurich and a colleague at Cornell, on a conservation reserve problem: models of what different species need to do to survive in their niches, overlaid with the constraints of land ownership. The idea is, I could give you back these acres, and you'll own the same acres, but can I distribute them differently, and put in these paths, automatically, and have an optimization system with submodular optimization running and outputting alternate plans with probabilities of survival of species based on the models.

Very impressive work, with a beautiful visualization and display, kind of a console, that lets you interact with the biological models as you optimize in tight loops.
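Submodular optimization of this kind is typically attacked greedily; the sketch below uses an invented toy coverage objective (land parcels covering species), which is monotone submodular, so the greedy plan is provably within a factor of (1 - 1/e) of optimal:

```python
def greedy_reserve(parcels, budget):
    """Greedily pick the land parcels that add the most not-yet-covered
    species, the classic heuristic for monotone submodular maximization.

    parcels: dict mapping parcel id -> set of species it supports."""
    remaining = dict(parcels)
    chosen, covered = [], set()
    for _ in range(budget):
        best = max(remaining, key=lambda p: len(remaining[p] - covered),
                   default=None)
        if best is None or not (remaining[best] - covered):
            break  # nothing left adds new coverage
        chosen.append(best)
        covered |= remaining.pop(best)
    return chosen, covered

# Toy instance: three parcels, five species, room for two parcels.
plan, species = greedy_reserve(
    {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5}}, budget=2
)
```

The real systems replace this raw coverage count with model-based survival probabilities per species, but the greedy structure of the search is the same.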

And so that's an example of some work going on in this space, and these systems are actually now being used, with deployments dating back several years now. So people are going back now, looking at the impact in national park areas and so on.

So it's very, very exciting work.

There are many other applications also

in conservation more generally.

Things like ride-sharing done well is

a great example for CO2 footprint and so on.

So I'll stop there, but thanks for the great question.

>> Thank you very much for the questions. I'm sure there are more, and maybe we can do it offline. With that, I'd like to thank Eric again. I will give a small token of appreciation. And thank you, Eric, for being here. It was a great talk. Thank you very much. Yes.

>> Thank you.

For more information >> Keynote Talk - Microsoft Research Labs - Expand The State of The Art - Duration: 56:03.

-------------------------------------------

Náboženství - Duration: 6:36.


-------------------------------------------

Versión Completa: Aprender de los niños. César Bona, maestro - Duration: 45:57.


-------------------------------------------

OMFG-Hello(Cover by Argyrisyolo1251) - Duration: 3:54.


-------------------------------------------

"Hay que escuchar e invitar a los niños a participar". César Bona, maestro - Duration: 3:45.


-------------------------------------------

Osiris and Set EGYPTIAN MYTHOLOGY - Duration: 3:24.

after the world was created the first five gods were born they were Osiris

Isis Set Nephthys and Horus the elder Osiris was the first born of the

five so he was the Lord of the earth and Isis was taken as his wife he was a kind

and just ruler and Egypt became a paradise under his rule Set his brother

became jealous of Osiris's success and the relationship was always on edge but it

was truly strained when Nephthys sets wife disguised herself as Isis and

seduced Osiris as a revenge Set created a beautiful coffin to fit Osiris exactly

and threw a grand party he presented the coffin and told the guests that

whoever could fit inside could keep it Osiris volunteered to try to fit in when

he laid down set slammed the coffin shut on Osiris and dumped it into the Nile

where it was carried away to Byblos Phoenicia the coffin became lodged into

a tree the tree grew around the coffin until it was completely contained in it

soon after the king of Byblos Malcander was wandering the banks when he saw the

tree he admired its beauty and decided to have the tree cut and made into an

ornamental pillar for the court that is where Osiris remained trapped until he

died Isis meanwhile had left Egypt in search

of her husband and eventually came to Byblos disguised as an older woman she

sat on the shore crying for her missing husband and was found by some royal hand

maidens who felt bad so they brought her back to the palace there she presented

herself as a goddess Isis to the king and queen and they promised her anything

they wanted as long as she spared them she requested only the pillar which they

allowed her to take after leaving the court she cut Osiris from the tree

and carried his body back to Egypt where she hid him from the set she left him

to go gather some herbs to make a potion to bring him back to life

leaving her sister Nephthys to guard the body while she was gone

Set learned of his brother's return and went to find the body he found it and

hacked the body to pieces and then scattered them across the lands Isis was

horrified but with Nephthys' help she was able to recover all the body parts she then revived him and he was once again alive even though Osiris was now living he was incomplete and could no longer rule the land of the living he withdrew into the afterlife where he became the Lord and judge of

the dead before he left Isis assumed the form of a kite and flew

around him she became pregnant with a son named Horus who she hid from Set fearing what he might do to him when Horus was fully grown he was a fierce warrior he battled with Set for control over the world and drove Set from the lands the chaos which Set had unleashed onto the world was conquered by Horus who restored order and balance to the lands and now ruled with his mother so that

concludes the story of Osiris if you enjoyed this story and you want to hear

more on mythology and legends from around the world please feel free to

subscribe to this video you can subscribe by clicking right on that fire

icon if you want to watch more myths over there is last week's video if you

have any suggestions of myths you want me to cover please leave a comment in

the comment section below as always thank you for watching and good bye


-------------------------------------------

Vertritt Viktor Orbán eine „gefährliche Logik" in der Europapolitik? | 26.02.2018 | www.kla.tv/12007 - Duration: 5:05.


-------------------------------------------

HOW TO HELP OTHERS AND DO GOOD 💗 JUSTICE - Duration: 4:09.

Hi guys, my name is Alicia.

And my daughter's name is, Chloe and she's 10 years old.

And she's a Girl With Heart.

Hi, I'm Kimberly.

And this is Juliette.

And this is Ella.

And they're, Justice Girls With Heart Ambassadors.

Hi, I'm Laura.

And I am the mom of Girl with Heart Ambassador, Noelia.

Hi guys.

It's Noe , from Girls With Heart.

It's important to have fun while doing good.

And for us, we're part

of an afterschool club called, Samaritans 365.

And I actually run the club, at, the children's school.

And in this club, we have fun,

while learning about giving back.

So for fun.

I like to sew.

So, we get to sew together.

Chloe has her own non-profit organization

and she makes custom tote bags.

So those tote bags are made, how?

By a sewing machine by me and my mom.

Yeah, so she makes custom tote bags and it's really cool

'cause, she passes em out to homeless women.

And we decided to make a GoFundMe account

and, get donations from people that she knew.

And do other types of activity to collect money

to help those in need in Puerto Rico.

We actually collected $8,200.

And we traveled to Puerto Rico,

so she could see first hand the community

that she was helping.

She was helping rebuild their high school

that got destroyed by Hurricane Maria.

One of the ways that we like to give back is

by donating our gently used, shoes and clothes

to local foster care agencies

and to children in need in our community.

It's important to teach my daughter to give back

because I believe it's teaching her how

to be a community giver.

It's teaching her to be active in her community.

And it's also teaching her that,

it's always better to give than to receive.

I have always taught Noey,

that there are four major characteristics

that you should have

in order to be a successful human being.

One of them is have good grades.

So academics.

The other one is character.

The other one is literacy.

And the fourth one is community service.

I think it's important to teach children

from a young age the concept of giving back.

It helps them think about others.

Not just themselves.

And it also gives them a sense of gratitude and appreciation

for everything they do have.

I am extremely proud of my daughter, Chloe.

Because, I believe she's inspiring

so many other young girls and young boys

to give back in the community.

I believe she's inspiring other kids

to start their own businesses,

start their own organizations,

and really just, get out there and do stuff

without having limitations on them.

My hope is that by teaching them now the importance

of giving back will help them grow

into kind, compassionate adults

with a sense of social responsibility,

to help others in need.

I have fun giving back, because I get to help others.

It's fun to help others because it fills my heart

with the magic of giving.

I have fun giving back because it makes me happy.

And to me happy means fun.

It's important to give back because I think

that we all should do our part, in this world.

And also, I like giving back is because I get

to spend more time with my mom.

Yeah, we get to do a lot of community service projects.

And, with doing that,

I get to spend a lot of time with Chloe.

Which is also really cool because

it gives us an opportunity to bond all the time.

Even if you don't have an organization

that you're affiliated with.

Or you're unsure, about how to get involved.

There's always something you can do on your own.

By thinking of good deeds and things that you can do

to bring a smile to someone else's face.

Noelia, I am so proud of you,

for having a big heart and trying to help out, Puerto Rico.

And collecting all that money.

And going door by door.

And helping all those families and all those kids in need.

I am, you make me so proud.

Thank you. I am so proud of you.

Thanks for watching our video.

Make sure to subscribe for more awesome Justice videos.


-------------------------------------------

PUBGFODA (PUBG Parodia Bad Bunny - Amorfoda) - Duration: 2:24.


-------------------------------------------

Post Malone - Psycho (Lyrics) feat. Ty Dolla $ign - Duration: 3:39.

Hey :P


-------------------------------------------

Poderosa tormenta azota el centro del país | Noticiero | Telemundo - Duration: 0:45.


-------------------------------------------

Homekeepers - Jennifer LeClaire "Dream Wild" - Duration: 28:31.


-------------------------------------------

Why You Should Drink Warm Water During Pregnancy - Duration: 3:34.


-------------------------------------------

Allah is one By Yousuf Nasir - Duration: 1:25.

Assalamu Alaikum, dear viewers.

I hope everyone is fine, by the grace of Allah.

If you see this video, you cannot stay without sharing it.

We used one word God –This Word means David God

My request for u is that always read or write Allah

Gods is for No Muslim

Allah is one

For students: if 2 marks are cut, no tension, kids.

Have a nice day. Share and comment. Take care.

Thanks for Watching my Video
