Friday, October 12, 2018

Watching daily Oct 12 2018

So when we think about how to train neural networks, what do you need to know about, say, back propagation?

One thing you don't need to know about back prop is how to implement it.

That's one of the brilliant things that TensorFlow does for us, is it takes the internals of back propagation and does that all for us underneath the hood.

But there are some important things to know.

The first is that back prop really does rely on this idea of gradients; things need to be differentiable for us to be able to learn on them.

One or two small discontinuities in our various functions are fine, but in general we need differentiable functions to be able to learn with neural nets.

Another thing to know is that gradients can vanish.

If our networks get too deep, signal-to-noise ratios get bad as you go further and further down the model, and learning can become quite slow.

ReLUs can be useful there; there are also some other strategies that we won't talk about in this class.

But in general you do want to think about limiting the depth of your model to sort of the minimum effective depth if you can.

It's also important to know that gradients can explode; if our learning rates are too high, we get these sort of crazy instabilities, we can get NaNs in our model.

The thing to do there is to try again with a lower learning rate.

The last thing to know is that ReLUs can die.

It's possible that, because we have this hard cap at zero, if we end up with everything below that value of zero there's no way for gradients to get propagated back through, and we'll never be able to pull ourselves back up into the land of living ReLU layers.

So keep an eye out for those and again try again with a different initialization or a lower learning rate.
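To make that concrete, here is a minimal sketch of that kind of retry, assuming the TensorFlow 2.x Keras API (the lecture shows no code; the layer sizes and hyperparameter values are illustrative only):

import tensorflow as tf

# Retry with a different, ReLU-friendly initialization (He initialization)
# and a lowered learning rate if ReLU layers died or training blew up.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu",
                          kernel_initializer="he_normal"),  # different initialization
    tf.keras.layers.Dense(1),
])
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),  # lowered learning rate
    loss="mse",
)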

At training time, it's often very useful for us to have normalized feature values when they come in.

If things are on roughly the same scale, this can help speed the convergence of neural nets.

So the exact value of the scale doesn't really matter; we often recommend negative one to plus one as an approximate range.

It could be minus five to plus five, or zero to one; it doesn't really matter, so long as all of our inputs are on roughly the same scale.
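As a rough sketch of what that normalization might look like in code (assuming NumPy; the feature values are made up for illustration):

import numpy as np

def standardize(x):
    # Rescale a feature column to zero mean and unit variance,
    # putting it on roughly the same scale as other standardized features.
    return (x - x.mean()) / x.std()

house_sizes = np.array([80.0, 120.0, 95.0, 210.0])  # hypothetical raw feature
print(standardize(house_sizes))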

Finally, one last trick that's useful in training deep networks is the idea of an additional form of regularization that is called dropout.

And dropout is kind of a funny idea.

When we apply dropout, what we're saying is that with probability P we take a node and we essentially remove it from the network for a single gradient step.

On different gradient steps, we repeat and we'll take different nodes to drop out randomly.

So the more you drop out, the stronger regularization you have.

And you can kind of see this clearly where if you drop everything out you have an extremely simple model that is essentially useless.

If you drop out nothing, you allow the model to have its full complexity and if you have dropout somewhere in the middle, you're applying some sort of useful regularization there.
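For instance, a minimal sketch of dropout between layers, assuming the TensorFlow 2.x Keras API (the rate of 0.2 is an arbitrary example, not from the lecture):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    # With probability 0.2, each unit is dropped for a single gradient step;
    # 0.0 would mean no regularization, 1.0 an essentially useless model.
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1),
])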

Dropout is one of the key advances that has enabled a number of the strong results that we've gotten recently that has pushed deep learning to the forefront.

For more information >> Training Neural Nets - Duration: 2:54.

-------------------------------------------

Handmade Headband for Baby - Tutorial by Anjurisa #6 - Duration: 4:20.

Hello, this is Risa from Anjurisa. Welcome to my channel!

In this video, I will show you how to make this handmade headband for baby using my fabric flower tutorials

Before we start, consider pressing the SUBSCRIBE button so you won't miss any updates

These are the materials we need to make this headband

I have made this fabric ruffle flower tutorial, you can watch it here

For this little fabric flower, you can click the link here

To make this mini bloomy rose, you can click here for the tutorial

and we're going to need stamen

A piece of lace

Felt fabric

An elastic band

and some tools like hot glue gun, scissors, and pliers

First, attach these two little flowers together

Like this

Attach the fabric ruffle flower next to the small flowers

Cut the stamen with pliers

and put the stamen between these little flowers

Attach the lace

I will cut the elastic band before attaching it to the flowers

Our baby headband is finished!

Thanks for watching, if you enjoy this baby headband tutorial,

please like, comment, share, and SUBSCRIBE~

For more information >> Handmade Headband for Baby - Tutorial by Anjurisa #6 - Duration: 4:20.

-------------------------------------------

Going to the Emergency Room | Your Child's First Emergency Room Visit - Duration: 21:39.

Hey everyone! Welcome to MomTalks. I'm your host Laura with St. Louis Children's Hospital.

I am actually in the emergency department with two of our awesome emergency nurses: this is Jessica and this is Angela.

We are so excited about MomTalks. We think this is going to be a really important series for moms, dads, grandpas, grandmas, whoever has a small child or a teenager in their life.

This is going to be really informative. Our first series as part of MomTalks is "There's a First Time for Everything," and that's why we're in the emergency department.

Regardless of how careful you are, how many vitamins or flu shots, whatever it is, the emergency department might be in your child's future.

I have three small boys and I am a frequent flyer, so I thought it would be best to talk to the experts.

And it's actually Emergency Nurses Week, so we're getting to celebrate in the best way possible by being here in the emergency department with our friends, just to learn more about the process and what might bring you here.

So, what might bring you here to the emergency department?

Well, there are many things: anything from ear infections, cough, asthma, respiratory stuff, broken bones, belly pain, vomiting, anything extreme. It runs the extremes. Car accidents as well.

So, trauma: not just the bumps and the bruises, but the big bumps and bruises? Yes.

Okay, so how many kiddos are we seeing a day?

We average about 53,000 patients a year, and we've surged up to about 200 patients a day during respiratory season.

So it can be a bit busy when you get here when the flu is active in the community; with respiratory viruses going around we do see 180 to 200 patients a day during those surge seasons.

That's when the kiddos are back at school and all around each other, passing viruses around, so whether it's asthma flaring up with a runny nose and a cough or some other respiratory virus, it brings you to the ER.

Flu is a big one that we're waiting to see here soon; we do get busier in the emergency department during flu season, so flu shots: get them.

So it sounds like asthma and breathing trouble are big ones that are going to send kiddos in. When do you know it's time to come to the emergency department, or how do you decide what's best for your kiddo?

So I think there are different options.

Whenever a patient has asthma, a lot of doctors do a really good job of talking to the families and teaching them to watch for any type of retraction: pulling in at the belly, pulling in at the ribcage, pulling at the neck, or the nose flaring in and out. Things like that are things to watch for.

You can also call your pediatrician; usually they have an on-call person you can talk to, and they can kind of walk through with you whether it's something you need to come to the ER for.

Those are the big main things to watch for, and that's when to come.

I know with one of my kiddos, he was having what seemed like a stomach virus but we weren't sure what it was, and I called the nurses' triage line that we have at Children's, a 24-hour service, and they went through his symptoms and talked through everything: it could very well be his appendix, you should come into the emergency department.

So there are resources to help you vet what you need to do.

With my six-year-old, the very first time we came to the emergency department, we were at a birthday party and he was lying on his belly playing on the floor, and a kiddo ran by and accidentally kind of knocked him in the back of the head.

He split his chin open just about as wide as my thumbnail, and we were like, oh man. It wasn't profusely bleeding, but it looked brutal.

So then of course all the moms at the birthday party gathered around and we were all like, oh gosh, does he need stitches? I don't know, do we glue it at home? It's kind of on his face, so what do we do?

Finally I decided, okay, we're just going to go to the emergency department, and he did end up getting glued shut, basically. But with it being on his face, we had a plastic surgeon come down and visit with us and kind of say, yeah, it's probably a good thing you came, so it didn't glue crooked and have issues later.

So is that pretty common, that you might think you need stitches but really it's glue, or maybe not even that? And should you feel stupid about it, because then you're like, oh my gosh, I'm the crazy weirdo who's here?

No, no, it happens a lot. Cuts can kind of vary, and they're a pain, depending on where they're at on your body: you can have cuts anywhere, on your arms, your legs, your hands, your head, your face. It just kind of depends on where it's located whether they need to glue it or stitch it, or sometimes they'll just staple it.

And like you said, we have plastics available, really, 24/7. We also have our ENT, which is ear, nose, and throat, so if it's something on their face, sometimes they'll come down and be able to fix that up too.

But our physicians also do a great job; they do a lot of sewing, so they can suture those closed as well. Our physicians are very well versed in that and can handle that too.

So we've kind of talked through when you might have to go to the emergency department.

Now, how do you get here? What are the ways to get to Children's emergency?

There's the ambulance, of course. If you come in a car, you will come to our front drive and you'll be greeted by valet, you'll go through our security, and you'll be greeted by our triage nurses and medics, who will ask you some questions, take your child's blood pressure, heart rate, and respiratory rate, and check you in.

Then you'll be seated in the waiting room, and then you'll come back to a room where you'll be greeted by your nurses and your physicians in the back of the emergency department.

Okay, so talk me through registration a little bit, because I know on the website, stlchildrens.org, we've got a pre-check. What does that all entail, and how does that maybe help you time-wise? That's always something we're concerned about, right? We don't want to be waiting; we know our child needs help, but how can we help ourselves?

We do have, on the website, a way to sign up for an appointment in the emergency department. It will give you times that are available, and if you do that, when you come in, within 15 minutes of your appointment time we will see you in emergency department triage, and then you'll be coming back to the back.

I know, I've done it: another kiddo fell off a waterslide and got a concussion, knocked out cold, and we drove ourselves here, and while I was driving my husband checked us in online, and it was a pretty quick process.

So it's very helpful, just knocking out a couple of those introductory questions; something to think about, for sure.

So you do your triage, they ask you the questions; how do you know how fast you're going to get back here? Because we talk about surges, right, so some days you might be patient 103 and some days you might be one of 40. How many beds do we have back here?

We have 33 beds.

So how do you know, or how long should you expect to wait?

It just kind of depends on the season. Like she was saying, if it's a surge season, it could be anywhere from as quick as five minutes up to three or four hours, just depending on how that respiratory season is going, or if it's a big trauma time, like if there's ice outside and there's a bunch of car accidents; that can kind of hold things up too.

We move as quickly as we can through them while taking the best care and making sure everybody has a thorough assessment, so sometimes you may have to wait, but we promise we will get to you and get you seen.

Right. So if I'm out there and I've waited for an hour, and I feel like, oh my gosh, are we ever going to get back? They're in a lot of pain. What do we do?

You can come back up to the triage desk. There's also a medic who can check you out and see if there's anything going on; they can retake the vitals, and they can send you to one of the triage nurses, who can totally recheck you and see if they need to move you up.

Got it, got it.

So when I get checked in, am I allowed to get my child snacks? Because this was a big one: the three-year-old was like, I need a snack, and I wasn't allowed to. So when can that happen?

You can always ask the triage nurse, who can talk to our providers about things like crackers or water.

Got it. Because, yeah, they were saying that one of our kiddos couldn't have a snack because he might have to have surgery, so they were trying to make sure that bellies were empty.

Yeah, it's something to think about while you're on the drive, just mentally preparing yourself that snacks might not be an option.

Okay, so you make it past triage, you're done waiting, and they bring you back. We want to show you what these rooms look like so you kind of know what to expect.

So now, what's not to love about a theme? The theme in the emergency department is animals, right? It's very kid-friendly, very warm and welcoming, as much as we can make it, so there are elephants and lions and all kinds of stuff around.

And again, I think that's just one of the advantages of coming to a pediatric hospital, but there are other options: emergency departments that might have pediatric-specific emergency staff on-site. We work with other hospitals, including in Illinois at Shiloh, so there are other options and resources with a pediatric feel if you cannot make it to Children's; I just want to throw that out there.

All right, so we're going into room number five. Okay, so you get admitted, you get brought back here, and a team member will meet you.

Can you tell us what's in the room? Lots of buttons that I'm not going to touch. So what is all this, and does it apply to everybody, or just sometimes?

There's a mixture of things in here. We want to make sure we're always prepared for anything that can come into any room, so in a basic room, in our carts, there are different supplies: there are specimen cups, there's suction and extra suction supplies, there are thermometers and pulse oximeters so you can check vitals, anything like that, just so they can assess the patient.

In case someone is sick and vomiting, we also have emesis basins up in the cabinets, and we have airway equipment, because no matter what is going on with the patient there's always a potential that you may need airway equipment, so there's plenty of it.

And it's important to know that this is for tiny, tiny people, very tiny little babies, all the way up to adult, I would assume teenager, junior-high-type age, and I would assume not every hospital has the tiny people's stuff.

Right, yes; we have everything from the tiniest of people to adult sizes. We even get parents who unfortunately are here with their child and end up needing care themselves, and we can take care of them too.

Oh, I think we have a question.

We do have one question, right about how timely this is.

Yes, so Stacey is asking how she can make an appointment. Oh, that's a great question.

Sure: on the Children's website there is an emergency department tab, and there's a button there that you press that will walk you through the steps. I think it says something like Skip the Wait, and it might even be on the home page; we actually just redid our website, so definitely peruse and find it.

You'll see where it says, I think it's like a big blue square, Skip the Wait, and I think you can pick which hospital you're looking to come to, Children's obviously being one of them, and then you pick the time range, and then it'll ask you a couple of questions about your child's age.

Yeah, and you can say something like possible appendicitis; that's a fun one to type in. So yeah, great question.

All right, so we've kind of gone through the sizes of things. What else is in this room? It's kind of scary and new, and your kid's probably going to be pretty overwhelmed, and you as a parent too; I mean, it's terrifying. This is the nicest worst place to be, but you're in good hands. So what else are we looking at?

So this is our monitor, where we take the vital signs, but it can make a lot of beeping noises, so it can be a little scary for kids when they come in and they're getting their vital signs taken.

We also have an overhead light here, so if they are getting, like we talked about earlier, a cut repair, there might be a bright light in their face.

We have suction on the wall and also our oxygen on the wall, which can make noise if we're using it.

And then over here we have the equipment that the doctors use to look in eyes and ears and throats; I would demonstrate it right now, but I'm pretty sure I'm not supposed to touch it.

And then we also have all the airway equipment up at the head of the bed; like Angela said, we're prepared to take care of anything, and we always want to be ready if there's a decompensating patient in the room, so we have all of our equipment here.

Okay, and the TV: it's not closed-circuit TV, it's actual normal TV. I can attest to this; I watched The Big Bang Theory for like an hour. That's what you do.

And then there are, all right, buttons; again, I'm not going to touch them because I'll get you in trouble. That's how we find our friends, yes.

So again, if you're sitting here wondering what the process is like once you get back here: once you're back, it just depends on how quickly you came back. Your nurse may actually be ready for you, walking straight into the room with you, or it may take just a couple of minutes while they read over your chart; it just kind of depends.

Registration will also come into the room and meet you back here to fully register you, so you do not have to do that process out front; that saves you some time out there for them to come back here and do it.

It can also depend on what you're here for. If you need some x-rays, like if you have a hurt arm, imaging is right down the hall, just in the back hallway, so they can come get you in the meantime and take you for the x-rays that were ordered while you were in triage; on critical patients, we'll get them in the room.

And there are our medics, who can come in and bring you medicine if medicine was ordered while you were in triage; we have standing orders that allow us to give pain medication or order x-rays while you're out in triage, which helps us care for the patient.

So those are the different people who can meet you in the room, and then of course your physicians are going to come see you and assess you. Like I said, if it's a certain season it may take just a little bit longer, but they are in pretty quickly to see you and start your care.

A quick reminder to submit your questions if you have any; I mean, I cannot say enough about how helpful you ladies are and how much we want to help, so by all means, please submit your questions. Now's a great time to ask.

We do have one question: does the hospital offer interpretation for other languages?

Excellent question. They do: we have a video interpretation iPad, and if the language is not on that iPad, we have interpreters who come into the hospital, and then we can video conference with an iPad.

I think I heard at one point we're up to 80 languages that we have seen and helped with: Spanish, Chinese, Japanese, Korean, the list goes on and on.

So absolutely, having those services matters, because I would think those conversations get kind of scary when you're talking about medical things, even if English is your first language.

What are some of the most common things to know here?

We have many specialists in the hospital too, so depending on what your child's here for, the wait can be a little bit longer, because you'll see multiple doctors and multiple specialty services will come. We're a Level One trauma center, as we were talking about, so we have those specialties here 24 hours a day, able to be at the bedside to help take care of your child.

When Alex got his concussion, we were seen by neuro, we were seen by ortho, we got seen by imaging, and then we had a really awesome resident from the emergency team who was here, and it felt like I was answering the same questions a few times, but I think that's because it's a multi-team place; real live stuff happens in here.

So do parents ever express, "I feel like I just answered this question"? But there's a purpose, right?

Right, right.

So the physicians do ask a lot of the same questions as the nurses in the back, and also the nurses in triage, and that's just to make sure we're not missing anything; we want to make sure we have the questions answered.

Sometimes you're frazzled, your kid's not feeling well when you come in the front door, and maybe you missed something, or you have something important to tell them that helps with the care; that's why the questions are asked, sometimes multiple times by different people.

But sometimes we try to actually meet in the room together if we can; the physicians and the nurses will come in together so they can ask the questions together, so you don't have to keep answering them.

I definitely know, when they were asking all the questions about Alex, it was, when was the last time he ate? And I was like, oh, I think it was noon; yeah, it was totally noon. And then it was, oh wait, no, no, he had popcorn, and it was like three. You remember things, and then all of a sudden he started acting kind of weird, just not himself, and it was like, oh, this is a whole new situation and we've started from scratch.

I think we've got another question.

We do have another question: is there a number to call, if it's not life-threatening, to ask if you should go to the ER?

There is: it's our Answer Line. The nurses there are able to help field that question, and we can get that number posted on this Facebook page.

I will definitely put it in the comments. And you can always call your pediatrician too.

And we didn't mention yet that we have Child Life specialists here in the department; they help provide distraction for the kids if we're doing any interventions your kids are nervous about. We have them 12 hours a day, and then we have social work in the department 24 hours a day.

So what are they used for? Distractions: they use iPads, they have music, and for a lot of the teenage boys the big thing is YouTube, so they'll go on YouTube and play videos for you.

So while you guys are doing your thing, they try to minimize the scary with the distraction. Very nice.

And then social work is available for questions, resources, and support of families, and they're here 24 hours a day.

Very nice, very nice. What else can we talk about that we've only touched on?

I think we've talked about the reasons you might be coming, who's going to be here, and what your resources are here. Don't forget to submit those questions; we're happy to help, of course.

I think this is really helpful, just because this is one of the scariest experiences that families go through. It's very stressful, people react to stress in different ways, and sometimes the wait can be agonizing and you just want your kiddo to feel better, but it's nice to kind of see the process and know where you're going to be and who you're going to talk to.

So thank you guys so much for your time, and thank you for all the awesome questions.

A reminder that our next episode of MomTalks is going to be next Friday at 11 o'clock, as always. We're actually going to be at the Children's Specialty Care Center, doing the first time your child has to have minor surgery: another really awesomely scary experience for a lot of people, but something super common, with ear tubes and general procedures like setting bones.

So we will actually be in surgery next week, and we'll get a nice walkthrough of what that experience is like, through registration and seeing what their rooms look like and why you might use them as a resource as well.

That's what we're here for, so thank you so much for tuning in, thank you so much for the questions, and we'll see you next week.

For more information >> Going to the Emergency Room | Your Child's First Emergency Room Visit - Duration: 21:39.

-------------------------------------------

Embeddings - Duration: 14:44.

Hi I'm Sally Goldman and I'm a research scientist at Google and one of the main things I work on is recommendation systems.

And one thing really fundamental to doing these recommendation systems is embeddings and I'm going to talk about those today.

As a motivating example I'm going to look at the problem of collaborative filtering.

So let's say I have a million movies and I have a half million users, and for each user I know which movies that user has watched.

The task is simple: I'd like to recommend movies to users.

To solve this problem I'm really going to have to learn some structure, something that lets me say these movies are similar to each other, so if you've watched these 3 movies then this is a good movie to recommend.

So as a simple starting point, let's try to take these movies and just put them along a line of one dimensional embedding.

So maybe to the left I'll put animated movies, and as I move to the right, I'll have more adult-oriented movies.

This starts to do nice things.

I have Shrek and The Incredibles, those are both animated movies for kids and if you watch one the other one is a good recommendation.

But then I have The Triplets of Belleville, which is an animated movie; yet Harry Potter, though not an animated movie, I think is really a much closer movie to The Incredibles.

The Triplets of Belleville is not really oriented for kids as much, it's not sort of a blockbuster movie that a lot of people go to see.

And on the other side for example I'd say Blue and Memento are probably better recommendations for each other than The Dark Knight Rises.

So just having a single line, as much as I try, it's going to be really hard to capture all the intricacies in movies that make people like one versus another.

So what if we add another dimension and now I have 2 dimensions?

So what if I bring the blockbuster movies up towards the top and the more art house movies down?

Now I've achieved some of the things I've wanted.

I've got Shrek and The Incredibles and Harry Potter kinda nearby and they're all pretty similar movies and in the bottom right I have Blue and Memento.

And you can imagine that there's a lot of other aspects you'd want to capture and you'd want more than 2 dimensions, and we would.

In reality we could imagine 20, 50, even 100 dimensions to sort of do these embeddings.

But let's stick with 2 dimensions because I can draw it.

So let's add a few more movies to this and I went ahead and added some axis.

I have the X axis which is sort of more children oriented movies to the left and more adult movies to the right.

And the Y axis, more blockbuster movies to the top and more art house films on the bottom.

And you can see a lot of nice structure here and you can see that movies nearby each other are kind of similar and that's really the goal of what we want.

Now I'm drawing this geometrically but I do want to make sure everyone understands that there's a very simple way to represent these embeddings and that's what's going to happen when I learn them in a deep neural network.

So just using Shrek and Blue as an example, each of these is just a single point in this two dimensional space and the way we write down a point is just a value on the X axis and a value on the Y axis.

So for example Shrek is just the point (-1, 0.95), and Blue is (0.65, -0.2).

So each movie here can just be represented as two reals, and the similarity between movies is now captured by how close these points are.

And although I'm only going to draw 2 dimensions, in reality you do want to do this in D dimensions, 2 isn't going to be enough to capture everything.

Implicitly as you think about what you're doing, this is really assuming that interest in movies can be captured by D dimensions.

I'm allowing D different aspects to be selected and then I can move the movies independently among these D aspects and use that to now bring similar movies nearby to each other.

Each movie now is just a D dimensional point, I can write it down as D real values and the cool thing is we can actually learn these embeddings from data and we can do this with a deep neural network without adding a lot of new things to what you've already seen.

There's no separate training process needed, we're just going to use back propagation exactly as before and the embedding layer is just a hidden layer and we'll have one unit for every dimension you want in your embedding.
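As a sketch of what such an embedding layer might look like, assuming the TensorFlow 2.x Keras API (the lecture doesn't show code; the sizes follow the running example):

import tensorflow as tf

vocab_size = 500_000   # half a million movies, as in the example
embedding_dim = 2      # two dimensions here, to match the drawing

# The embedding is just a hidden layer with one unit per dimension,
# trained by back propagation along with the rest of the network.
embedding = tf.keras.layers.Embedding(input_dim=vocab_size,
                                      output_dim=embedding_dim)

movie_ids = tf.constant([1, 3, 999_999])  # sparse indices of watched movies
points = embedding(movie_ids)             # shape (3, 2): one 2-D point per movie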

Supervised information is going to allow us to tailor these embeddings for whatever task you're after.

If you want to do movie recommendation, then we want these embeddings to be geared towards recommending movies.

We will need some sort of training signal, we'll look at some concrete examples but in this example if a user has watched a set of movies then to some extent those movies are similar to each other and should be nearby and we'll aggregate this of course over lots of data.

Intuitively these hidden units are learning how to organize the data in a way to optimize whatever metric we've decided to put as the final objective of the network.

So now let's go back and look at how would this actually be input to the neural network.

The matrix I show on the right is sort of the classic way we think of collaborative filtering input.

I have one row for every user and one column for every movie and a check in this simple case indicates the user has watched the movie.

So now let's think about how we do this within TensorFlow.

Each example is really just going to be one row of this matrix, so let's focus on the bottom row that I've highlighted in yellow.

If there's a half million movies I don't really want to list all the movies you haven't watched, it's so much more efficient to just write down the movies you have watched.

And when I do back propagation I'll be computing dot products, and I'd like that time, too, to depend only on the movies you have watched.

So to achieve this we're going to use the following input representation and to do this we're going to have 2 phases.

The first pre-processing phase we're going to build what we call a dictionary.

A dictionary is just a mapping from each feature, in this case each movie, to an integer from 0 to the number of movies minus 1.

So I'll just do this in the order I've shown them in the columns.

So column 0 I'll call movie 0, column 1 movie 1 and so on, and this is a one time thing we do as pre-processing.

Now I can efficiently represent that bottom example as just the 3 movies that user did watch, I don't need to worry about all the other ones.

I do it kind of as a pictorial view but in reality it's just 3 integers - 1, 3, 999,999 - because those are the indices for the 3 movies that user has watched.
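A minimal sketch of that two-phase representation in plain Python (the movie titles here are hypothetical placeholders):

# Pre-processing phase: build the dictionary mapping each movie
# to an integer from 0 to the number of movies minus 1.
movies = ["Shrek", "The Incredibles", "Harry Potter", "Blue", "Memento"]
dictionary = {title: index for index, title in enumerate(movies)}

# An example is then just the indices of the movies the user watched,
# not a half-million-entry vector of mostly zeros.
watched = ["The Incredibles", "Blue"]
sparse_example = [dictionary[title] for title in watched]
print(sparse_example)  # [1, 3]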

Okay so now that we have the input representation we can now look at how this fits into the full network and I'm going to use 3 different examples to help illustrate it.

The first example I want to look at is the problem of predicting a home sales price.

So this would traditionally be done as a regression problem.

I'd like to optimize the square loss between the predicted price and the true sale price.

So the thing that I really would like to create an embedding layer for here are the words in the sale, the house description ad.

Because although there are a set of words, I really need to understand what words are similar in terms of figuring out the size of the house so I may say this is a spacious house or I may say it's roomy.

Those are words that are used that kind of capture the same thing and so I want to begin understanding how these words that real estate agents put in ads helps us understand something about the home.

So we have lots and lots of words that might be in an ad, and any given ad has 100 words or so, and so again we really do want the sparse embedding just like we talked about but my vocabulary is over words versus movies.

I'm going to learn a 3 dimensional embedding in this little toy example just so I can draw it, again in reality you'd probably want a lot more than 3 dimensions.

And I'm always in these examples going to draw my embedding layer as green, it's really a hidden layer, in this case 3 units because I want a 3 dimensional embedding.

I also may have other input data like the latitude, longitude, number of rooms and you can add all that, I just used latitude and longitude as an example.

And then in pink I'm showing the fact that we can have whatever other hidden layers we want, these are just your standard hidden layers, you can have as many as you want.

You can decide how many units, and then at the end they'll go into a single unit that, for this regression problem, will give us a real value, and we'll optimize the L2 loss against the sale price.

In the process of doing back propagation just like you've seen, the embedding layer will be learned.

As another example, suppose I want to learn to classify handwritten digits.

So I have the digits 0 to 9 and I have some training data where there's actually a label of the correct digit.

So here the sparse thing I want to create an embedding of is just the raw bitmap of the drawing, whether each pixel is white or black, so 0 or 1.

I can introduce whatever other features I'd like, and again I have an embedding layer, which I'll keep at 3 dimensions, so the representation of the digit will go into that.

In pink I show we can have whatever additional hidden layers, and in this case we'll have a logit layer.

We're gonna have the 10 digits and basically learn a probability distribution over the digits of how probable we think it is that this is each of the digits.

I can take the one hot target probability distribution from what I know the right answer is and optimize a soft max loss.

In the process of doing this, in training with back propagation, I will learn to embed the images.

And now let's look at the example we've been studying of collaborative filtering, the movie recommendation problem.

This is actually interesting, it brings up an aspect we haven't seen yet which is where is my training data here, right?

I just know for each user there is a set of movies, so how do I know what the right movie to recommend is? What am I going to use as the label?

What we do is use a simple trick: suppose the user has watched 10 movies.

We'll randomly pick 3 movies and hold those out, take them away and those are the labels, so those are the movies I'd like to recommend, they're good recommendations because you watched them, and I'll take the other 7 movies and use them as my training data.

Once I've done that, this is very similar to what we just talked about with the character recognition.

I'll take the 7 movies that are my training data and we know how we can get the sparse representation, we'll bring them into the embedding layer.

We can take whatever other features we want, maybe the genre, maybe the director, whatever else we want to take about the movie or the user and then we can bring those into additional hidden layers and we'll have a logit layer.

And note this logit layer is big, instead of 10 different nodes like in the digit prediction, if I had a half million movies there's gonna be a half million of these.

There's issues with that, it's out of the scope of this discussion.

But we will get a distribution over those half million movies of which movies we think you'd like, and we will then optimize the softmax loss with the held-out movies that we know you do like.

And in doing this in the back propagation and just the standard training, we will learn the embeddings of the movies like we talked about.

So I do want to come back now and just make sure it's clear how what we learned in the deep neural network ties to the geometric view I gave at the beginning.

Let's look at the deep network on the left and let's take a single movie.

Right, if you think of the input layer, each of those nodes at the bottom represents one of these half million movies; I've picked one movie and just made it black.

In this example I said I had 3 hidden units so I was going with 3 dimensional embedding.

So that black node will have an edge connecting it to each of those units; I used red for the first one, magenta for the second and brown for the third one.

When you're done training your neural network, those edges are weights, each edge has a real value associated with it, that's my embedding.

The red is my X value, the magenta is my Y value and the brown is the Z.

So this particular movie would be embedded in a 3 dimensional space as 0.9, 0.2 and 0.4.

As with any deep neural network there are hyperparameters, and one of the hyperparameters we have in the embedding layer is the number of embedding dimensions: how many hidden units do you want in that layer?

Higher dimensions are good because it allows us to tease apart more distinctions and therefore we can learn better relationships.

On the downside, as I increase the number of dimensions there is also a chance of overfitting and it's going to lead to slower training and the need for more data.

So a good empirical rule of thumb is for the number of dimensions to be roughly the fourth root of the size of my vocabulary, the number of possible values.

But this is just a rule of thumb and with all hyperparameters you really need to go use validation data and try it out for your problem and see what gives the best results.
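As a quick illustration of the rule of thumb (just arithmetic, not a prescription):

# Starting point: embedding dimensions ~ fourth root of the vocabulary size.
vocab_size = 500_000
dims = round(vocab_size ** 0.25)
print(dims)  # about 27; tune from there using validation data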

An embedding can also just be thought of as a tool.

One of the things we get from these embeddings is we map items - movies, texts for example the words in the housing description - to these low dimensional real vectors in a way that similar items are nearby.

It creates structure among these items where really we didn't have any structure, and the structure is in fact geared towards what you're trying to do with it.

We can also apply embeddings to dense data; for example, the way audio or soundtracks are represented is already dense.

But we don't have any meaningful metric, I don't know how to say this audio is similar to that.

And so we can use embeddings just to learn a similarity metric among already dense data, and even further we can embed diverse types of data - texts, images, audio - jointly and learn a similarity metric across them.

For more information >> Embeddings - Duration: 14:44.

-------------------------------------------

Classification - Duration: 7:26.

So we've talked a lot about regression.

But sometimes what we want to do with a machine learning model is make a classification.

Is it A or not A, is it spam or not spam, is the puppy cute or not cute?

Now we can use logistic regression as a foundation for classification, by taking our probability outputs and applying a fixed threshold to them.

For example, we might decide to mark something as spam if it exceeds a spam probability of 0.8.

That 0.8 is our classification threshold.
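A minimal sketch of applying such a threshold, assuming NumPy and made-up probability outputs:

import numpy as np

probabilities = np.array([0.05, 0.83, 0.45, 0.97])  # logistic regression outputs
threshold = 0.8                                      # our classification threshold
is_spam = probabilities >= threshold
print(is_spam)  # [False  True False  True]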

Now once we've chosen to make a classification threshold, how are we going to evaluate the quality of that model?

We need some new metrics, our regression metrics aren't sufficient.

One classic way of evaluating classification performance is to use accuracy.

And by accuracy we mean count all the things you got right and divide it by all the things that there were.

Basically what percentage of the things did you get correct.

Interestingly enough, even though accuracy is a very intuitive and widely used metric, it has some key flaws.

In particular, accuracy breaks down when we have class imbalance in our problems.

Imagine if we were to try and use accuracy to assess the quality of a model that is predicting ad click-through rates for display ads.

In display ads, our click-through rates are often 1 in 1,000, 1 in 10,000 or even lower.

So I might have a model that has absolutely no features in it except for a bias feature that tells it to predict false, always.

This predict-false-always model would have an accuracy of 99.999% in display ads predictions, but would add absolutely no value.

Clearly accuracy is doing something wrong here.

So to deal with class imbalance problems, we need a more fine grained way of looking at the way that our models predict onto positives and negatives or different classes.

So we can think about these different kinds of successes and different kinds of failures along a 2x2 grid that has true positives, false positives, false negatives and true negatives.

To help us understand these, let's remember the story of the little boy who cried wolf.

Now this little boy is a shepherd, a wolf comes to town, if he correctly spots the wolf that's a true positive.

He sees the wolf, he says "wolf", true positive saves the town, good job.

Now a false positive is when that little boy says "wolf" but there really wasn't a wolf.

That is a false positive, it makes everybody annoyed.

A false negative may be even worse.

A false negative - there was a wolf coming along and the little boy was asleep or didn't see it and the wolf went in and ate all the chickens.

That's really no good at all.

A true negative is when the boy did not cry wolf and indeed there was no wolf, everything's fine.

So we can combine these ideas into a couple of different metrics.

One of them is precision which is when the little boy said "wolf", how many times was he right?

How precisely was he able to say "wolf"?

Recall on the other hand is of all of the wolves that tried to come into the village, how many did we get?
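In code, these two metrics fall straight out of the confusion counts (a sketch with hypothetical counts, not data from the lecture):

def precision(tp, fp):
    # Of the times we said "wolf", how often was there really a wolf?
    return tp / (tp + fp)

def recall(tp, fn):
    # Of all the wolves that came, how many did we catch?
    return tp / (tp + fn)

print(precision(tp=4, fp=1))  # 0.8
print(recall(tp=4, fn=2))     # about 0.67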

Now what's interesting is that these things are often in a little bit of tension.

Because if you imagine that you want to do a better job at recall, the right thing to do is to be more and more aggressive about saying "wolf" even when you just hear a little noise off in the bushes.

So we can think of that as lowering our classification threshold.

But if we want to be really precise, the right thing to do is to only say "wolf" when we're absolutely sure so we might think of that as raising our classification threshold.

So these two metrics are often in tension and doing well at both of them is important.

It also means that whenever someone tells you what the precision value is, you need to also ask about the recall value before you can say anything about how good the model is.

Now precision and recall are both well defined when there is one specific classification threshold that we've chosen.

But we might not know in advance what the best classification threshold is going to be and we still want to know if our model is doing a good job.

Well, a reasonable thing we could do would be to try and evaluate our model across many different possible classification thresholds.

And in fact we have a metric that looks at the performance of our model across all possible classification thresholds.

And this is called an ROC curve, Receiver Operating Characteristics curve.

And the idea is that we evaluate every possible classification threshold and look at the true positive and false positive rates at that threshold.

We then draw a little curve that connects those dots and the area under that curve has an interesting probabilistic interpretation.

It goes like this:

If I were to pick a random positive example, closing my eyes I pick one out of our distribution, and I pick a random negative example, what is the probability that my model will correctly assign a higher score to the positive than it does to the negative?

In a sense, what's the probability it gets that little pairwise ordering correct?

It turns out that that probability is exactly equal to the area under the ROC curve.

So if I see a value of 0.9 area under ROC, that's the probability that I'll get that pairwise comparison correct.
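That pairwise interpretation can be checked directly by brute force; a sketch assuming NumPy, with a tiny made-up dataset:

import numpy as np

def pairwise_auc(scores, labels):
    # Probability that a random positive scores higher than a random negative;
    # ties count as half. This equals the area under the ROC curve.
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

scores = np.array([0.9, 0.8, 0.3, 0.6, 0.1])
labels = np.array([1, 1, 0, 1, 0])
print(pairwise_auc(scores, labels))  # 1.0 here: every positive outranks every negative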

One last measure to think about is prediction bias.

Now prediction bias is defined by taking the sum of all of the things that we predict and comparing them to the sum of all the things we observe.

Basically we would like the expected values that we predict to be equal to the observed values.

If they're not, we say that the model has some bias.

A bias of 0 would show that the sum of the predictions equals the sum of the observations.
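A sketch of computing that bias, assuming NumPy and made-up predictions:

import numpy as np

predictions = np.array([0.7, 0.1, 0.4, 0.6])  # model's predicted probabilities
labels = np.array([1, 0, 0, 1])               # what actually happened

# Mean prediction minus mean observation; near zero is what we want.
bias = predictions.mean() - labels.mean()
print(bias)  # about -0.05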

Now bias is a very simplistic metric in that it's easy to fool.

We could have a model that has almost no value to it, it just predicts the mean of all the class probabilities to create a zero bias model.

However, it's a useful canary.

Because if one of our more complicated models does not have zero bias, it means that something is going on.

It gives us something to dig into as a way to debug our models.

So if our model does not have zero bias it's definitely cause for concern and allows us to maybe slice the data and see what areas the model is not doing a good job of having zero bias on.

However just having zero bias by itself is not an indicator that the model is perfect, we need to keep looking at other metrics for that.

We can look at more fine grained use of bias by looking at a calibration plot.

With the calibration plot, what we do is we take groups of data, we bucket them up and look at the mean prediction versus the mean observation for things in that bucket.

Obviously we do need to have buckets of data to make calibration be meaningful.

For example if I'm looking at flipping a coin, any given coin flip will either come up exactly heads or exactly tails, basically exactly 1 or exactly 0.

But my probabilistic predictions will be 0.5 or 0.3 or some value in between 0 and 1.

So it only makes sense to compare those mean predictions to mean observations if I aggregate across a sufficiently large number of them.

For more information >> Classification - Duration: 7:26.

-------------------------------------------

Multi-Class Neural Nets - Duration: 3:43.

So up until now, we've talked about classification for binary class problems.

Is something spam or not spam? Is the puppy cute or not cute?

And logistic regression, with a classification threshold, is very well-suited to these sorts of binary class classification problems.

But in the real world, we're often not choosing just between two classes, sometimes we need to pick a label out of one of a range of classes.

For example, is the object animal, vegetable, mineral or man made object?

Is the color red, orange, green, blue, indigo or violet?

Do we have a picture of an apple, a car, a banana, a dog, blah blah blah.

There's lots of areas where being able to do good multi-class classification is a useful thing.

Now, interestingly enough, we can build off of some of the technology that we already have with binary class classification.

One classic way of doing this is through the one versus all multi-class classification.

So essentially what we do is we have one logistic regression output node in our model for every possible class.

So one node might identify "is this an apple?" Yes/No. Another might say "is this a picture of a bear?" Yes/No.

A third might say "is this candy?", yes or no. And we have one output node for every possible class that we're looking at.

We can do this in a deep network by having different output nodes at the output of the model and sharing the internal representation through the rest of the model, so these can be trained reasonably efficiently together.

In some problems we know that an example will belong to only one class at a time.

For example, a given fruit is either a banana or a pear or an apple.

In this case, we'd like the sum of the probabilities of all of our little output nodes to sum to exactly one and this can be achieved by using something called Softmax.

Softmax is essentially a generalization of the same logistic regression that we used, but generalized to more than one class.

When we have a single label, multi-class classification problem we use Softmax.

This encodes some helpful structure to the problem and allows us to use those outputs as well-calibrated probabilities.
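Softmax itself is simple enough to sketch directly (assuming NumPy; the logit values are arbitrary examples):

import numpy as np

def softmax(logits):
    # Generalizes logistic regression to many classes; outputs sum to one.
    shifted = logits - np.max(logits)  # subtract the max for numerical stability
    exps = np.exp(shifted)
    return exps / exps.sum()

print(softmax(np.array([2.0, 1.0, 0.1])))  # roughly [0.66 0.24 0.10], sums to 1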

In other cases we might have a multi-label classification problem.

For example, an image might contain both an apple and a banana in it.

Or it might contain three different dogs, or a dog and a person and we'd want to be able to identify all of those different labels in the same example.

And in that case we do need to use a one versus all classification strategy, where each output is computed independently and the outputs do not all necessarily sum to one.

When we're training a multi-class classification, we've got a couple of options here.

We can use full Softmax, just straight out of the box and this is relatively expensive to train.

You can think if you have a million classes then you essentially needed to train a million output nodes for every single example.

Now it's possible that if you're trying to disambiguate between the dog being a labrador and a poodle, that knowing that it's not a toaster is actually quite an easy thing.

And so we can be a little bit more efficient there by doing something called candidate sampling, where we train the output nodes for the class the example belongs to, and then we take a sample of the negative classes and only update that sample of the output nodes.

This is quite a bit more efficient at training time, it doesn't seem to hurt performance very much in a lot of cases; obviously at inference time we still need to evaluate every single output node.

For more information >> Multi-Class Neural Nets - Duration: 3:43.

-------------------------------------------

Intro to Neural Nets - Duration: 2:50.

At this point, we should recognize this problem as a simple, non-linear problem.

Something that we can solve easily with feature cross products.

But what happens if we get a slightly more complicated problem?

Maybe something that looks like this.

At some level we've got maybe a set of spirals interacting with each other.

Now we can probably sit around and do some math and think of the right feature cross products to add.

But it's easy to imagine that our data sets might get more and more complicated.

And eventually we would like some way for our models to learn the non-linearities themselves without us having to specify them manually.

This is the promise of deep neural nets, which do an especially good job on complex data, including image data, audio data, and video data.

We'll learn more about neural nets in this section.

So we'd like to have models that learn the non-linearities themselves, without us having to specify them manually.

How are we gonna do that?

Well we probably need a model with some additional structure to it.

Let's take a look at our linear model.

Where we have a number of inputs, each with a weight that's combined linearly, to produce an output.

Well, if we wanna get a non-linearity in there, maybe we need to have an additional layer in there.

So now we can add those guys up in a nice linear combination, into a second layer.

That second layer gets linearly combined, and we haven't yet achieved any non-linearity.

Because a linear combination of linear functions is still linear.

Well, that's not good enough, so clearly what we need is a third layer, right?

So we put a third layer in there and... we're still linear.

Because even if we add as many layers as we want, any linear combinations of linear functions is still gonna be linear.

Okay, we need to do something else.

And that something else is we need to stick in a non-linearity.

That non-linearity can go at the output of any of our little hidden nodes in there.

One common non-linearity that we use is called a ReLU, a rectified linear unit.

And this takes a linear function, and chops it off at zero.

So if you're above zero, you're a linear function; if your function returns a value below zero, we cap that at zero.

Simplest possible non-linear function, and this allows us to create non-linear models.
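The ReLU is a one-liner; a sketch assuming NumPy:

import numpy as np

def relu(x):
    # Linear above zero, capped at zero below: the simplest non-linearity.
    return np.maximum(0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5])))  # [0.  0.  0.  1.5]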

Now we could use any non-linearity in here, a lot of folks also use [unintelligible], but it turns out that ReLU gives state-of-the-art results for a wide number of problems, and it's very simple.

Once we had this, we can stack these layers up and we can create arbitrarily complicated neural networks.

Now when we train these neural nets, obviously we are in a non-convex optimization, so initialization may matter.

The method that we use for training these, is a variant of gradient descent, called back propagation.

And back propagation essentially allows us to do gradient descent in this non-convex optimization in a reasonably efficient manner.

For more information >> Intro to Neural Nets - Duration: 2:50.

-------------------------------------------

Regularization for Sparsity - Duration: 1:42.

So let's dig a little deeper into feature crosses.

They can be great but they can also cause some problems.

In particular if we're crossing sparse features.

For example maybe one of our features is the words in a search query and the other feature might be unique videos that we have to look up.

So now we have maybe millions of possible words, maybe millions of possible videos and we're crossing those, we're going to get a lot of coefficients.

All of that means that our model size is going to explode, taking memory, possibly slowing down runtime.

And a lot of those combinations are going to be super rare even if we have a lot of training data and so we may just end up with some noisy coefficients and possibly overfitting.

So you know the answer, if we're overfitting we want to regularize.

And now we're going to say can we regularize in a way that also will reduce our model size and our memory usage?

So what we'd like to do is just try to zero out some of the weights, in which case we won't have to deal with some of those particular crosses.

This could save us RAM and this can also potentially help us with overfitting.

But we have to be a little careful we don't want to lose the right coefficients, we just want to lose the ones that are sort of extra noisy.

So what we'd like to do is explicitly zero out weights, and that's what we call L0 regularization.

It would just penalize you for having a weight that was not zero.

But that's not convex; it's hard to optimize, sort of a combinatorial problem.

Instead what we do is we relax that to an L1 regularization, which just penalizes the sum of the absolute values of the weights.

And by doing that we still encourage the model to be very sparse; it will drive a lot of those coefficients to zero.

And that's a little different from L2 regularization, which also tries to make the weights small but won't actually drive them to zero for you.
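
For a concrete point of comparison, here is a hedged sketch of attaching each penalty to a layer with Keras regularizers; the strength of 0.01 is just a placeholder.

    import tensorflow as tf

    # L1 penalizes the sum of absolute weight values and tends to drive
    # many weights exactly to zero (a sparse model); L2 penalizes squared
    # values and only shrinks weights toward zero without zeroing them.
    l1_layer = tf.keras.layers.Dense(
        1, kernel_regularizer=tf.keras.regularizers.l1(0.01))
    l2_layer = tf.keras.layers.Dense(
        1, kernel_regularizer=tf.keras.regularizers.l2(0.01))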

For more infomation >> Regularization for Sparsity - Duration: 1:42.

-------------------------------------------

Mundo Oscuro en tu Kodi - Duration: 11:31.

Hello again, Infonauta, and welcome to a new installment of your YouTube channel, Infoductiva...

this time I bring you a GNU GPL add-on with content

that is esoteric... mystery... the dark world...

the add-on in question is called... Dark World

interesting, right?

We begin ...

Well, the first thing I'm going to do is show you:

this is the add-on; I'm going to click it with the secondary button, or press and hold

if you are on an Android device,

and then choose Settings

so you can see that we have three tabs

the first follows the standard we have been describing for a long time:

when we want to reset the add-on because it has been updated

and something is malfunctioning, we have to click on Defaults and then OK

the second tab, as you see, allows us

to select the different dependencies we have and configure them

and down here we have the video and cache settings, where we can select

what video quality we want,

set the subtitles, clear the cache, clear the history and delete the database

of the search history; well, I'm going to leave it as is, press OK and then click on the add-on to help you

decide whether or not you like it, whether or not you want to install this add-on

in your Kodi multimedia center

as you can see, here it gives us a warning

that you should watch it accompanied; the creators of the add-on take no responsibility

for any future traumas

it could cause; this is because Dark World is dedicated mainly to that: to horror, to the dark world, to the esoteric

anyway...

here we have as you see 11 sections

all right?

add-ons from the same creator... well, let's start at the top...

this is a label, the heading, let's call it that

here it tells us when it was updated ... a few days ago ... on September 16

here it allows us to read

this short text that you see here on the left;

when you press it, as you see, it tells us to start the interactive part of the add-on:

discover the story of young Vanessa through the 15 chapters

hidden within the add-on's content; this is a horror game

there is a game here, but right now it has problems

let's hope they repair it soon

here we have the horoscope, and here we have Enter... we press Enter...

and this is where the actual content of this add-on called Dark World really is...

here we have the first chapter

of Vanessa's story, which is the game I mentioned before

here we have a content index

and then we have 15 sections

well, 13 really

General parapsychology

as you see, here it gives us different content

to access: documentaries, lectures and content on general parapsychology

we also have spiritualism

as you see here we have 18 sections

occultism would be the next section

as you see

Dreams meaning

astrology, alchemy

in short ... divination ...

esotericism ...

where we found 13 sections

among which we can highlight

the runes, the tarot

the Kabbalah... numerology, cartomancy, geomancy

witchcraft would be the next section

with types of magic

anyway

all kinds of related content

with the dark world

demonic would be the next section

every time we change section it makes a sound

characteristic of a scary movie scene

as you see, Demonic also brings

a lot of content

Myths and legends

zombies, vampires, Slenderman, the girl on the curve

anyway

enchanted places

12 sections

quite interesting

prophecies

where we have 33 sections

as you see, from Nostradamus,

through the Mayan and Egyptian civilizations,

to Jules Verne

there's really a lot of content here

it is a very complete add-on

on this subject

sects

where we have 20 sections

documentation

Content difficult to catalog

here it gives you 24 sections

specialized YouTube channels

and as you see, here we have 37 very interesting sections

as you see, it is an add-on that leaves no one indifferent

it really is a very original add-on

with a rather particular theme,

let's put it that way

if you are interested in this content and want to add it to your Kodi, just stick around, because next

I'll explain how to add it to your multimedia center

Kodi

let's add the source by clicking on the gear

in the file browser

click on add source and click on none

and here we put the address that I leave in the description of this video

then click on OK

down here put a name

with which to identify it and finally click on OK

we go to the main screen

the time has come to enter addons

then the package icon in the upper left corner

we choose Install from zip file

and we select the source that we just added

next we go into cultural

and curious addons

and here we have a folder

that is called dark world

We press it and, as always in these cases, as you see we have a repository and then the Dark World video plugin

so the first thing we're going to do is install the repository

and, if possible, from the repository

download the add-on,

since that is surely

its source, its original repository

remember to be patient in this step

here we have decosub repo

well then, let's check it by going into Install

from repository; we go into this newly added repository, which is called

decosub

click on it

and as you see, it gives us

the repositories we already have and the video add-ons, where we find

Dark World, apart from 3 others that we already tried

in 3 previous videos; if you have not seen them, I recommend you take a look

but today's video is about Dark World; we press it and, as you see, it is a themed add-on

dedicated to the esoteric world, paranormal, mystery and terror

based on YouTube, with a GNU GPL license, so all the content

is on the internet and is open; the add-on is a collaborative effort, and we click on Install

We wait for the notification to appear indicating that it has installed correctly;

as you see, it installs

more dependencies, and here we have it

dark world

add-on installed

well, that's all

thank you very much, share my videos on your social networks

thank you for subscribing and for hitting the notification bell; many thanks for your comments, and see you

in the next installment of your YouTube channel,

Infoductiva. See you soon, Infonaut

For more infomation >> Mundo Oscuro en tu Kodi - Duration: 11:31.

-------------------------------------------

The History of Red Dead Redemption & Beta Version - Duration: 34:27.

What's up, people?

Here we are with a new episode of Hot Topic focusing on something a bit different from the usual.

I'm Gary7 MT for the GTA Series Videos crew and in honor of the upcoming release of Red Dead Redemption 2,

we're delving deep into the red dead series' history.

From its birth to the key of its success and, of course, we'll try to analyze the beta and removed content from the two already released titles of the series,

Red Dead Revolver and Red Dead Redemption.

Before going into the details, thanks are in order to Monokoma, Firex and all the users and staff members at Unseen64.net,

MobyGames.com, TheCuttingRoomFloor, GTAForums, RedDeadForums, Reddit and the Red Dead Wiki.

It's thanks to them and their passion that we're able to prove theories,

locate the right sources and even find new things we're able to show you.

Here you go.

Looks like you still got some business with them brothers.

They ain't what you call kindly fellows.

Open the damn door, woman!

Rockstar San Diego are the creators of Red Dead but before they were known as such,

the California studio was known as Angel Studios founded by Colombian artist Diego Angel in 1984.

The company originally produced 3D work for various media, the first of which was a volcano animation for Scientology's Dianetics.

Dianetics by L. Ron Hubbard.

Still, the studio's 3D effects are best known from films like "The Lawnmower Man"

and music videos like Peter Gabriel's song "Kiss that Frog".

Angel Studios shifted its focus toward the video game industry only in the 90's,

joining a group of companies that would develop video games for the Nintendo 64 console.

From 1996 to 2000, Angel Studios both ported and developed games for Sega, Nintendo, Microsoft and Capcom

- titles like "Mr. Bones",

"Midtown Madness" and "Resident Evil 2".

Capcom offered Angel Studios an opportunity to create something entirely new,

following the acclaimed success of the Resident Evil 2 port.

The title's codename was "SWAT" and Angel Studios first envisioned the project

as a single player split-screen title where you controlled a 4-man SWAT team.

This title's premise was quite similar to Hired Guns,

an Amiga game developed by the GTA Series' original developer "DMA Design", known today as Rockstar North.

Capcom video game designer from 1984 to 2003, Yoshiki Okamoto, author of another western title, Gun.Smoke,

was put in charge of the Angel Studios project.

His personal fixation with the genre in general and one Western movie in particular called "Blindman"

with the one and only Ringo Starr, believe it or not, changed the project entirely.

SWAT ceased to be the game's title and instead became an acronym: Spaghetti Western Action Title.

And this is where the history of Red Dead Revolver began.

Capcom and Angel Studios announced the game and showed it to the public during a few events using images and videos.

What was shown was heavily programmed and not actual gameplay because,

due to the troubled development, the game was unplayable.

Chris Bratt's YouTube show, "People Make Games", exposed more about the initial concept and status of the title

thanks to Dominic Craig, one of Red Dead Revolver's Lead Designers.

In his opinion the game wasn't fun because the shooting mechanics were weird:

partially inspired by Japanese action games and partially by "Panzer Dragoon" and "Tenchu: Stealth Assassins"

- with the latter being the very first stealth game ever produced.

Capcom not only financed the project during the first years, but like we said,

also sent some of their developers and lead artists to California.

Okamoto acted as project leader,

while Akiman designed all the characters in the game using the developers as models.

Because of Bratt's video we learned that the character of Pig Josh was based on Lead Designer Josh Needleman-Carlton,

Mr. Kelly was based on Michael Kelly the Lead Engineer

while Perry shared not only the name but likeness, of Particle Artist Chris Perry.

The liaison between Angel Studios and Rockstar Games began in 2000.

This was when Angel Studios developed and released under the Rockstar label, Midnight Club: Street Racing,

Smuggler's Run and Smuggler's Run 2.

On November 20th, 2002, Take-Two Interactive announced that it had acquired Angel Studios

for a combined cash and stock value of 34.7 million dollars.

By the end of 2002, Capcom had already been funding the project for three years

and started getting cold feet culminating with their complete backing out of it after Take-Two's acquisition.

Unlike many corporate buyouts, for Capcom, that acquisition was far from bad news

- this is because at the time the Japanese publisher was already courting Take-Two and Rockstar Games

to obtain the rights to publish the Grand Theft Auto series in Japan.

In June 2003, the deal was finally sealed and Rockstar Games announced a partnership agreement with Capcom

to localize, publish and distribute the blockbuster title GTA 3 for PS2 and PC in Japan.

Following their purchase, Angel Studios was renamed Rockstar San Diego

and Rockstar Games executives reviewed the studio's projects in development to sort out what was worth keeping.

Dan and Sam Houser once remarked that one project that always caught their eye was "a cowboy game that looked very good."

"For the time it looked visually spectacular, but speaking to the management guys there,

it was a complete mess.

It didn't really exist yet as a game." according to Dan Houser.

"Capcom were prepared to walk away from the project,

so we said we'd finish it and all they ever wanted was the rights to publish it in Japan

if we ever did finish it - which they never thought it could be."

Despite being unplayable, Rockstar Games started work on the game

after sending the Capcom designers home and taking over the development.

They salvaged whatever good they found and scrapped almost everything else.

The original version was more like a classic arcade "on rails" shooter with fast paced gameplay,

while the final version ended up being more an action-adventure title with a bit of free-roam in the levels

and a definitely slower but more rewarding gameplay.

According to Dominic Craig, the controls were the first thing redone with a more robust cover system,

but the narrative remained the game's primary flaw.

At that time Rockstar were more narrative developers,

while Capcom's interest focused more on gameplay.

What they ended up doing was stealing environments and characters from Western movies of the 60's and 70's,

and blending them all together into the story of a bounty hunter seeking revenge on his parents' killers.

Again thanks to the "People Make Games" episode, we now know

that the game's narrative was originally heavily inspired by the film, "High Plains Drifter"

in which Clint Eastwood reprises a role similar to the one from Leone's Dollar Trilogy.

Go ahead.

Thus the story is of a mysterious cowboy seeking vengeance on behalf of a murdered man.

Now while implied, it is never confirmed that this cowboy is in fact

the same man back from the grave.

In the first draft of the story, Red was supposed to die with the rest of his family at the very beginning of the game

and return from the beyond to satisfy his own personal vendetta.

Red's name was supposed to be "Red Hand" due to his burned hand being wrapped in a red bandana,

which would serve as an identifying mark to be feared

not just by those responsible for his family's death, but by all outlaws as well.

From the original idea to what we got at the end, technically speaking, the game was scrapped and rebuilt,

surely using the same assets, but with different visuals, graphic style,

HUD, animations and more.

Various things in the original title have been completely dropped by the way.

Starting from a snowy level,

and more frequent use of the horse.

Jack's Dead-Eye ability in the final game was originally Red's ability

- and maybe even the power-ups were all available to Red

after reaching specific criteria like any arcade game.

The multiplayer was already part of the game, but we lost some special abilities, like flying.

Still from "People Make Games", Dominic Craig also talked about the train chapter of Red Dead Revolver.

He explains that Capcom's original level was like a Mario title with a big heart-shaped coach

and a princess character in it with armed enemies with Gatling Guns shooting at you

as you rode by on your horse.

Thanks to the original trailers, some of the differences between the Capcom version and the final version are made manifest.

Other than these videos, not much is left from the beta version of Red Dead Revolver, except a logo,

artwork and some images - and while we're on the artwork tip, here's some trivia from Bratt's video.

Red Harlow's face in the game's cover art was apparently inspired by Owen Wilson's screaming face

from the poster of the 2000 film "Shanghai Noon", which the developers just happened to have hanging on the office walls.

Wow.

Despite the enthusiasm and effort that went into developing it,

not everybody had faith in Red Dead Revolver.

According to a rumor shared by Chris Bratt, Rockstar's idea was to publish the game under the "Global Star Software" label,

Take-Two's low-budget publisher for second-tier titles.

The point was to avoid the game being released as a "Rockstar title".

Due to favorable reviews and almost a million copies sold between PlayStation 2 and the original Xbox,

very little time passed between the release of Red Dead Revolver and the first glimpse at a sequel.

Originally, Red Dead Redemption was to be a direct sequel of the first game being named "Red Dead Revolver 2".

We can see that Red and John somehow share the same facial scars.

According to Dominic Craig, originally the plot for Red Dead Redemption was meant to center around Red's son,

with the boy being angry at his father who's been a wanted man ever since he killed the governor.

After he obtains some semblance of a normal, happy, life,

of course the bad guys show up and shoot that all to hell.

At that point the protagonist starts his revenge mission - to find his dad again.

Craig's idea was more a "Once Upon a Time in the West" sort of story.

He wanted an action title that felt like a "Spaghetti Western ",

while Rockstar's vision was more "The Wild Bunch"

- a tale of wanted criminals from a time that unbeknownst to them, has already passed.

We're gonna stick together, just like it used to be!

When you side with a man, you stay with him, and if you can't do that, you're like some animal. You're finished!

We're finished. All of us!

The very first time Rockstar showed Red Dead Redemption was in 2005

with this brief teaser at the Sony E3 press conference

- which was really more of a tech demo that showcased the lighting effects and new graphics.

After this teaser, four more years passed without news, screenshots or videos

until the game was officially announced on February 3rd, 2009 with its Red Dead Redemption moniker.

With over 15 million units sold as of February 2017, and an average score of 95% from world reviewers,

Red Dead Redemption is universally acclaimed as one of the greatest games for both PlayStation 3 and Xbox 360.

Like its predecessor, the game was never released for PC - and according to leaked documents,

ex-developer statements and more, a troubled development is something that also marked Red Dead Redemption.

Built in different compartments, it used at least three different versions of the RAGE engine

forcing the developers to create new tools for compatibility and more.

But we're not interested in the technical babble, just the leftover goodies, so let's get to it.

Let's start with the game's logo.

Thanks to Aaron Rix, Rockstar San Diego's ex graphic designer, we do have some concept logos for Red Dead Redemption

before the final version made the cut.

We also have some digital compositions with illustrations by George Davis

featuring in-game items like the special rosary given by a nun after reaching the maximum honor,

the 2D design of the promotional playing cards and some in-game graphic designs of stores,

journals, signs, posters and more for the game's world.

In Steve Hartman's portfolio, a 3D artist for Rockstar New England - formerly Mad Doc Software

- we can see building images in the game clearly taken using internal tools to manage camera position.

All these buildings look final

- even the on-screen radar's identical to the one used by Rockstar in the final build of the game.

On Sinclair's YouTube channel instead we see a small clip of the gate exploding in Cochinay

- the clip shows the model of the area and the explosion without any effects or texture applied.

Meanwhile a beta radar appears in images from Jason Muck's portfolio.

He was the sole Senior Environment Artist specializing in vehicles, props, and weapons for Red Dead Redemption.

This radar was bulky, and lacked any transparency effects whatsoever.

The icons were way different too with the poker table marked always with the ace of spades card,

the nearest safehouse with a classic house icon and the stagecoach with the wheel of a cart.

Other icons were a pitchfork in a green circle, a silver circle inside a red circle

and a big black circle inside another red circle.

What all these icons are supposed to stand for, we have no idea.

On another shot the radar is more familiar

- the only difference is the color of the stamina bar being dark blue, instead of light blue.

On another screen we see an unknown icon showing a golden bull skull - maybe indicating cattle

- and two other rounded ones with a B and F inside - the first one's for Bonnie or Seth Briars,

while the F may be for Luisa Fortuna or some cut character.

What's interesting is that there's no way to have a pending mission in the game for Bonnie or Seth

while also having one for Luisa.

Another noteworthy point: on the radar we can still see, placed on the shore, a white and a red icon

mimicking the wheel of either a stagecoach or a steamboat.

These icons could mark the docks where the player would be allowed to use the steamboats

or even dock rafts, canoes and other boats.

Thanks to Muck's portfolio we can see the general style of the game at this point in its development was completely different,

a more classic western, than a gritty, violent one.

The High Power Pistol didn't have the engravings or the handle in mother-of-pearl and apparently,

the rifle was attached directly to the bandolier, not to a shoulder strap.

Thanks to promotional images, videos, and files from inside the game,

we can see a younger or at least less detailed John with a slightly slimmer face,

Abraham Reyes' hair was slightly shorter and he was sporting a goatee without the big mustache.

Allende's facial hair was different too and the character was supposed to be definitely fatter

according to the very first artwork.

Ah, perhaps I should tie you to a horse and let it drag you around town,

or let the dogs fight you, huh.

Thanks to pre-release sketches by Hethe Srodava, former Rockstar Senior Concept Artist,

we can see how Luisa Fortuna has slightly changed - the starting points to the final design.

And a recent sketch of John Marston - recent because the original concepts are all in Rockstar's hands.

Except for drawings of the main character, the artist had chances to share more of his work,

like this one depicting the first rendition of the Treasure Hunters gang

- or maybe originally they were part of a grave robbing gang opposed to Seth.

We can also see various images of NPCs and side characters that made it into the final game

- some in Red Dead Redemption, others in Undead Nightmare.

Some are unchanged, others instead are very different, like the Sasquatch shown in this sketch

and six other possible variants of the creature in this one - perhaps this indicates six choices that had to be narrowed down

or that the idea was to make all six sasquatches that you have to kill in the game somehow unique.

We eat berries and mushrooms, you fool.

Or we did. Now, none of us are left.

Some maniac's been murdering us.

Various other characters had a different voice actor or simply a different accent all together.

Nobody needs to kill anyone Bill.

You do so love to talk in riddles, Mr. Marston.

You do so love to talk in riddles, Mr. Marston.

The Elegant Suit lacked the hat according to artwork and an official screenshot.

I hate to take money from a lady, miss.

While the US Marshal Uniform was more a Sheriff Uniform

- the original badge was shaped like a Sheriff's.

Targeting changed a bit.

The original reticle resembled the GTA 4 version with the health segments inside

- this feature was cut from the single player, but it's still available in multiplayer.

The Dead Eye reticle was also different, just like the marks set over the enemies during this ability's use.

And while we're still on weapons, if we take a look at the weapons wheel shown in the gameplay trailer reveal of Red Dead Redemption

we see originally there were only six of the eight weapons slots:

pistols, lasso, shotguns, rifles, sniper rifles and the knife.

The two missing are the fist and throwing items that in the original concept

could totally be placed as a sub-selection of the one showing the knife.

Speaking of differences - without delving again into the various steps of the minimap

- we can see that originally while wanted, the amount of the bounty and the last committed crime

was shown in the top right of the screen with the word "Wanted"

- and yes, there's also the black background missing.

The honor bar was smaller and without the various segments indicating the grade of Marston's honor.

Money owned was simply placed on the left middle of the screen.

The inventory was totally different with all icons inside a red border.

Now let's analyze other beta aspects of Red Dead thanks to this shot

from this gameplay reveal of Red Dead Redemption from 2009.

Without considering the developer's annotation on the selling page,

we can see here how things were a bit different in this build of the game.

The "Basic Campsite" for example, despite being placed in Kit, was a Consumable

considering the "x2" written on the icon, meaning that while the player could always use the improved camp-site,

the basic site could run out of resources and needed to be purchased at stores like ammo to be used again.

We can also see other things used in the final game

like the Pleasance Deed for the Stranger mission "Water and Honesty", the Nosalida package from "Poppycock"

and the "Letter from Sam" obtainable in the last encounter of the mission "California".

If not tied to a removed Stranger mission to be discussed later,

the Sacred Relic could be the first rendition of the Rosary given by the nun in the game after reaching the maximum honor.

The treasure box is still a mystery.

Maybe it's the original reward for the Treasure Hunter Challenge or something completely different.

One more departure from the final version of the game is that the player was supposed to be able to buy not only a bandolier,

but a double bandolier as well.

Another leftover from GTA 4 was the original icon marking the location to start missions.

The Cheats option was originally shown directly in the main menu, not under the Options menu

and the font used for the subtitles was different, more similar to the one already used in GTA 4.

Considering the E3 2005 teaser trailer for a possible comparison,

we can see that the small town shown at the end of the video is similar to Armadillo from the final build,

only smaller and with slightly different building models.

Also, at some point in the development, there were no buildings at the end of Armadillo.

The MacFarlane's Ranch was originally named McFarling's Ranch

with Bonnie MacFarlane named after the aunt of former Rockstar San Diego Designer, Rob Hanson.

Bonnie MacFarlane. Miss, Bonnie MacFarlane.

This is unconfirmed though, as the only source we have is from the Wiki pages.

The Chuparosa bank is apparently impenetrable in the game, but according to a couple of pictures,

we were supposed to be able to rob it - maybe during a cut mission given by someone.

Another difference we do have proof of, is that at some point during development,

the top of the Nekoti Rock, in Tall Trees, wasn't covered in snow.

Thanks to a trainer, we can also navigate way over the natural borders created by Rockstar

and see that the map, despite not being heavily detailed, stretches far beyond what can even be seen in the game.

Originally, the player would be able to hunt and skin bats, but then they were removed

and left only as scripted atmospheric events.

Just like the bats, the 3D model of the Sasquatch was already in files of Red Dead Redemption

- both creatures ended up being added in the DLC "Undead Nightmare".

While the first creature was even shown in an official image of the game,

it's unlikely that the Bigfoot was to be present in Red Dead Redemption.

Maybe as an Easter Egg or a very rare event just to freak out the players, who knows?

There were originally more bounties to collect - five more members of Dutch's Gang,

four unknown outlaws and a Mexican.

There were originally also wanted posters for both John Marston and Abraham Reyes.

Likewise, in Undead Nightmare there was supposed to be another missing person, Lloyd Duffy,

but it's unknown why Rockstar chose to get rid of him.

Next to nothing is known about beta or removed story missions

- probably due to the lack of analysis and decryption tools for the game.

Thanks to videos and images from Rockstar we are able to uncover some differences between the missions we played

and how these missions were supposed to be.

In "Spare the Rod, Spoil the Bandit", the player was supposed to be able to reach the balcony on the second floor

from outside and shoot at the enemies with the hostages inside the house.

The mission "Father Abraham" originally intended the player to throw dynamite at the Mexican army convoy,

not rig it on the road and later blow it with a detonator.

Do it now!

Lastly, the showdown with the Mexican army during the mission "Cowards Die Many Times" was way different.

The army was supposed to enter and take position on various vantage points inside the town of Chuparosa, as seen here.

It could also be that originally the mission wasn't tied to De Santa at all, but only to Reyes.

But there's not always a video or image left and such is the case in the mission with Norra Hawkins.

According to the Wiki, Norra was a woman in Great Plains who, once encountered,

would call on Marston for help in retrieving her stranded dog.

Wiki further claims that in the final version of Red Dead Redemption, as soon as Norra spawns,

the game kills her to prevent the mission from starting,

but said that through modding it was still possible to play this mission.

Unfortunately, none of this seems to be true: there are no images or videos of the lady or her mission.

We even delved through the files searching for her 3D model, her name or anything else in the subtitles

and mission objectives pertaining to her, but nothing surfaced.

Thus the process of killing her to avoid the player starting the mission seems far-fetched.

Modding permits us to play with items that aren't supposed to be obtainable,

like the Undead Horse that turned out to just be a prop for a specific mission.

Then to drive an automobile - a totally cool thing to do,

even possible in multiplayer now that the game has become an actual lawless wild west for modders.

Some things that can't be reinstated in the game through modding are the Stranger missions

"Mother Superior" and "The Dwarf and the Giant".

Of these missions, all we have left are some audio files.

Mother superior wants Marston to find four stolen relics - and as said before,

one of these relics could be the one shown by Rockstar in the Kit menu inside the gameplay reveal of Red Dead Redemption.

The other stranger mission starts with Marston meeting a lonely dwarf

where he agrees to find a friend for him.

He first reaches a drunken man who directs him to a giant who supposedly lives in the hills.

Once found, the giant greets Marston with hostility and John is forced to fight him to calm him down.

After the fight the giant agrees to meet the dwarf,

but they don't get along and with no better explanations, the giant ends up accidentally killing a girl.

The murder upsets the dwarf who directs Marston to kill the giant,

but the player is given the choice of whether to kill him or not.

It's not unusual for developers to reuse cut content in new games or new iterations of a video game series,

but in Rockstar's case, with Red Dead Redemption 2, this is highly unlikely

- even with their new habit of recycling old content into new, the way they've done in GTA Online.

Despite that, surely Red Dead Redemption 2 will have its own removed and beta content,

some of which will be revealed thanks to pre-release screenshots and videos,

and then in the future through in game files - hopefully using a PC version of the game.

And that pretty much wraps this episode of Hot Topic.

Of course more will be discovered in the future

and we hope this video will encourage you guys to search and find more info.

Soon Red Dead Redemption 2 will be in our hands and we'll be able to play and enjoy a new adventure

with new and old characters that will accompany us to a western world of outlaws and criminals.

And when it does, we'll do a full examination of the game with walkthroughs, graphic comparisons,

Easter Egg videos and much more, so be sure to watch this channel for all the coming updates about Red Dead Redemption 2,

Grand Theft Auto Online and other Rockstar titles.

Keep following us on Twitter, Facebook and Instagram

or jump in our Discord server to holla at other fans of Rockstar Games.

From GTA Series Videos, this was Gary7 MT.

Peace.

For more infomation >> The History of Red Dead Redemption & Beta Version - Duration: 34:27.

-------------------------------------------

BGS 2018: Gameplay de Resident Evil 2, Battlefield V, Kingdom Hearts 3 e mais - Duration: 3:11.

For more infomation >> BGS 2018: Gameplay de Resident Evil 2, Battlefield V, Kingdom Hearts 3 e mais - Duration: 3:11.

-------------------------------------------

👋😃👋[LIBRAS] Cine Gibi 5 "Luz, Câmera, Ação!" (FILME COMPLETO) | Turma da Mônica - Duration: 1:11:34.

For more infomation >> 👋😃👋[LIBRAS] Cine Gibi 5 "Luz, Câmera, Ação!" (FILME COMPLETO) | Turma da Mônica - Duration: 1:11:34.

-------------------------------------------

Учитесь правильно расслабляться. Расслабление любимым предметом. Практика от Сауле Тинибаевой - Duration: 1:24.

For more infomation >> Учитесь правильно расслабляться. Расслабление любимым предметом. Практика от Сауле Тинибаевой - Duration: 1:24.

-------------------------------------------

Toy Cars for Kids Learn Fun Colors Orange Street Vehicles Airplanes - Duration: 4:14.

For more infomation >> Toy Cars for Kids Learn Fun Colors Orange Street Vehicles Airplanes - Duration: 4:14.

-------------------------------------------

Vidcon Earth - Duration: 2:01.

Hey, you!

It is about time to take a break from your Fortnite gaming and top 10s to check out the Vidcon Earth 2019 live stream!

It will have all your favorite Youtubers doing challenges, interviews, and MORE.

We will have some smaller and bigger Youtubers alike!

What are some names you can recognize that will be here? Well what about:

PewDiePie

Dude Perfect

Vsauce

iDubbbz

Alan Becker

Jake Paul

And so...

Much...

MORE!

Guys, I'm going to Vidcon Earth.

Going to Vidcon Earth, so if you see me there, say hi. Guys, get hyped mates, cause we're going to Vidcon Earth!

The internet is changing, and so should Vidcon!

Now all people around the world can see their favorite Youtubers live, and if you tune in, make sure to tweet

@YouTube pictures of you enjoying the stream, as well as your favorite memories of these Youtubers!

We will also be talking about how YouTube is doing now, and ways to improve it in the future!

Also, send us your best dabbing video, and whoever has the best dab could win a HUGE cash prize of up to 25 grand ($25,000)!

Interact with our awesome chat to help determine the winner, as well as ask your favorite Youtubers questions!

Do you think strawberries are sexy? (Side Note: I did NOT script this)

This is gonna be the GREATEST…

VIDCON…

EVER!

Also, all donations through Superchat go straight to funding Vidcon Earth's continuation in the future!

Vidcon Earth: The Internet is changing, so should we.

Live stream starts June 10th, 2019 at 2:00 PM to 5:00 PM. You must be 18 or older to enter the contest.

Some Youtubers may ignore you because you all dumb.

Yes, we are running 90% of the stream's popularity based off the fact that it is called VidCon Earth, and not a different name.
