Tuesday, March 20, 2018

Watching daily Mar 20 2018

(Music)

For more information >> Best Intro for YouTube!!!! - Duration: 0:10.

-------------------------------------------

Qu'est-ce que c'est ? - TRIVIAL GURIPA #5 in Normandy - Duration: 2:32.

Hi folks!

We're at Sword Beach,

at Ouistreham-Rivabella Beach.

Here, bathers pass by some remnants of the war,

which are scarce in the region.

Do you know what they are?

We asked this question on social media.

Yes, they're the obstacles known as Dragon's teeth.

In earlier episodes, we talked about different obstacles and traps

that the Germans laid along the beaches of Normandy.

Among them, the Dragon's teeth

weren't used as much, and only a few remnants still stand today.

They are big blocks of concrete in a pyramidal shape,

that worked as an anti-tank obstacle

or to stop any vehicle in general.

Usually, they occupied a very long line

and were distributed in several rows.

Land mines and barbed wire were also laid in between them.

During WW2, they were used by both sides,

but are usually associated with the German one

and were employed particularly on the Siegfried Line:

the defense system that protected Germany's western border.

Despite their symbolic name,

in reality, the Dragon's teeth weren't as fierce as planned.

It wasn't hard for the Allied engineers

to make their way through them

either by destroying them or by burying them.

But don't worry folks,

to see them, you don't have to go that far.

You can just visit Sword Beach,

where you'll find these fine examples.

Their use in the Atlantic Wall can also be seen.

So now you know,

the next time you come to sunbathe

or swim at Ouistreham-Rivabella

don't forget to visit these remnants of the Battle of Normandy.

If you liked this trivia,

hit like, share, and follow us on social media.

Until next time, folks!

For more information >> Qu'est-ce que c'est ? - TRIVIAL GURIPA #5 in Normandy - Duration: 2:32.

-------------------------------------------

A-Trak And Eli Gesner On Their 'Impossible' Young Thug Video 'Ride With Me' - Duration: 8:39.

A-Trak, with Falcons, has produced a really great song with performances by Young Thug,

and 24hrs.

Hey everybody, my name is Eli Morgan Gesner, and I am the style editor here at Uproxx.

And today, we have the wonderful story of a young Canadian boy who took his OCD and

bad posture, and used it to become one of the greatest DJs of all time.

Ladies and gentlemen, it's my old friend, DJ A-Trak.

Hi.

You're a DJ.

You were, a five time?

World champion, yes.

DMC world scratch champion.

It wasn't all DMC.

I started scratching and messing around with DJ'ing at thirteen, and then at fifteen I

was world champion and I kept entering more battles and I accumulated five of these world

titles.

Fifteen years of age! What a revelation!

I didn't see much sunlight from the age of thirteen

to fifteen.

We all grew up listening to sort of classic rock, and, you know, through like Beastie

Boys and Cypress Hill, that was like the funnel to get into hip hop.

I really fell in love with hip hop around ninety-four with Wu-Tang, Biggie, that whole

era.

So I'm listening to hip hop, right?

And I'm hearing scratching on records.

I tried playing the piano but I wasn't very good at it, or I just, it didn't feel like

my instrument.

And, one day I tried scratching a record on my dad's record player, and I discovered

a knack.

I showed my brother and his friends one day, and they were like yo, what the hell, you

can scratch?

We can't scratch.

What the hell?

And, I started practicing every day, and I would come home after school, practice, have

dinner, do homework, after dinner, and then that was my day, every day.

I had this sort of general idea of making a skate video.

I ended up making the video with them.

The concept being that, because of our relationship back at Zoo York in the nineties, I, on one

hand, was like, oh, let's use the old Hi8 video camera that we shot all the original

Zoo York footage with.

It's the impossible video.

Thug showed up, on time, but there was a lot of hanging out time in the van.

At one point, you opened the door and said, guys we're losing sunlight.

They ran right out.

We did the video.

For this song, I hit you up, it was sort of like hey Eli, this is your world, can you

help me figure this out?

And then, you hit me back with this super duper, dream reply of like, here's a test

cut of like twenty seconds of this secret footage that no-one's seen before, that I

just happen to be digitizing, how about we do that for your video.

And, I still have the camera, and we filmed Thug with that camera.

And I'm reading my email and I'm like, brain explodes.

Okay, okay, okay.

Can I call you Eli?

One thing that me and you have been talking about, and this is the current state of media

and culture and music, which I've been a little bit like, uh, I don't know guys.

It seems like, the idea of we're trying to make something original, has become secondary

to I know y'all like this so here it is again!

Like, that's kind of where I feel there's a shortcoming in culture.

It's not like, oh, there's clearly Biggie Smalls, and there's clearly Tupac, and there's

clearly De La Soul, and that's clearly Public Enemy.

But you've always been more optimistic about it all.

What I would say is, that, in fact, right now in hip hop, there's something for everyone

for sure.

And maybe you're referring to what a lot of people would call maybe soundcloud rap?

Or just like a certain form of sort of druggie, very free kind of abstract rap?

But there's that, but you can also go and listen to some rappidy rap too.

Neo-soul is back!

Like, there's something for everyone, for sure.

So, I don't know.

And by the way, when you were saying that, you know, maybe that type of rap...

You seem to be hinting that it's less original.

That's not, that's not really what I'm getting at.

And I'm sure like anything, it's like people being like, you're a DJ.

Like, on the radio DJ contest?

Yeah, now people would be like oh you're a rapper, do you sip lean, and is your name

Lil Something.

But by the way, part of what's cool, even about that scene, is that being weird is celebrated,

and it took hip hop a long time to break through to that.

Then when people were like, oh this worked for that guy?

I'm going to do something quite similar.

But I don't even think people approach it with that much of a sort of cheapish sameness.

I think it really is that legitimately that, this is a culture.

And you know, if someone is listening to a certain kind of music and then they have aspirations

to make that music, maybe their first couple of records will sound like what they listen to.

One of the differences between now and the nineties is that that first song that someone

makes, the whole world gets to hear it, whereas back then you had to get a record deal and

you wouldn't hear it until they honed their skills.

Yeah.

A lot of the guys who get dismissed now for being samey develop their

own identity a year later.

It's just that the removal of the gatekeepers with everything just being posted onto soundcloud

right away, you get to see that development stage.

And you know what, another thing that's fascinating by the way, is a lot of the rappers who seem

to have basic skills are actually a lot more skilled than they give off the impression,

and they choose to make this kind of rap because there's an immediacy that is super punk-rock

and undeniable.

So, it's funny hearing some of those rappers who might get popular from having a song where

they're just going amasasavasasaamagavasasana, amagadasavanasadada over like a distorted

808.

And then, you can, you can interview one of these guys, and he'll be like, oh but I also

have, oh what's the term...

It's not backpack rap.

I'm trying to think of the term.

Lyricist rap?

But it's funny, like I saw an interview with XXXTentacion, where he was like, oh I have

like Earl-type raps.

Because Earl Sweatshirt is, you know, lyrical.

And then, I've heard those records and it's true.

So, a rapper who might be known for blown-out, distorted, kind of ignorant records

is also fully capable of making rappidy-rap records.

And so when some of the old heads will just say, like oh what happened to the skills.

It's more...

It's deeper than that.

It's a conscious decision to make records that translate in a live setting.

And by the way, live rap is blowing up too.

Let's talk about this.

Me and you both have had the distinction of working with Mr. Kanye West.

Yours was far more successful than mine was, but, he was smart enough to pick you.

Damon at that point in time was starting a sort of offshoot imprint that was supposed to literally

be a rock label.

And, the day before Kanye saw me in London, Damon Dash saw me.

And he wanted me to DJ for Samantha, maybe not knowing that she was actually a DJ as well.

And Dame kept trying to pair us for about twenty four hours until I met Ye.

He was really heavily trying to be like, Samantha, this guy A-Trak, he's a really good DJ.

He's going to DJ for you.

It's going to make your show cool.

And she was like, Dame, I'm a DJ, I know DJs, but she didn't really want me to DJ for her.

And, I was just like, wait, I think I know your brother, and this whole thing.

And then, next thing you know, I met Ye and he was like, you're going to DJ for me.

And then, as it turns out, we're having this conversation at this whole Roc-A-Fella

shindig.

And so Dame's also somewhere, and Samantha is also somewhere.

And Kanye and I are talking and making our master plan.

And he's like, da-da-da, I'm going to take you on tour, and the crowd's going to do this,

and say that, take this guy's number, he's my manager.

And then Dame sees what's going on, and I'll always remember this.

He screams, he goes, Samantha!

You see what's happening?

Kanye is about to hire A-Trak!

That's why he's Kanye West!

I was trying to have you work with A-Trak.

But Kanye West is stealing A-Trak from you right now, Samantha!

It's so Dame.

And both she and I knew that this Kanye thing was probably best for everyone.

What up, this is A-Trak, and you're on Uproxx.

For more information >> A-Trak And Eli Gesner On Their 'Impossible' Young Thug Video 'Ride With Me' - Duration: 8:39.

-------------------------------------------

🔴Live hi #Roadoto 70 chillen mit euch cool Dienstag Stream - Duration: 35:07.

For more information >> 🔴Live hi #Roadoto 70 chillen mit euch cool Dienstag Stream - Duration: 35:07.

-------------------------------------------

Diferencia Entre Nada y Nadie | #EspañolReal - Duration: 10:27.

For more information >> Diferencia Entre Nada y Nadie | #EspañolReal - Duration: 10:27.

-------------------------------------------

FACE 2 FACE: El Shaarawy and Pellegrini interview each other! - Duration: 8:07.

- Hey, Zio! You good? - Hey, Zio! Yeah, all good.

Joseph asks: "Why does everyone call you 'Zio' [uncle]?"

Right. And why do I call everyone Zio?

It started off in my last year at AC Milan.

There was this security guard at Milanello.

He and I used to call each other 'Zio'. I can't remember exactly when it started.

Then when I joined Roma I just started calling everyone Zio.

That's how it all began.

Marco asks: "Your signing was announced with a video of you playing FIFA.

"Do you actually play at all and are you any good?"

Yeah, I do play from time to time

but not that often.

We don't really get that much time.

I'm no world-beater but it's the taking part that counts, right?

Ok, my turn, Zio.

This is from Stewel92Fan.

They call you Il Faraone. What nickname would you give me?

- Zio. - Ah, come off it. That's too easy.

Zio... Zio's good.

Matteo wants to know how much you celebrated the first goal against Chelsea.

And who scored it?

Who could it be?

So, did you celebrate?

I sure did, especially as I never thought you'd score from there.

The things you expect least are always the best.

But yeah, I did celebrate loads.

1-0 straight from the whistle. What was it, 38 seconds or something?

They've shown your goal thousands of times all over the place

so I've even memorised the second you scored.

Valery asks: "How did you feel after scoring that first goal against Chelsea?"

- It was... - Unexpected!

No, it was... an amazing feeling to score straight away

with less than 40 seconds on the clock.

- 38. - 38, exactly. Less than 40.

It boosted our confidence and for me personally it was an unforgettable goal.

Anico asks: "What's Stephan's most annoying habit?"

It's when we have an hour or two to rest

and you want to play snooker or have the TV on with the volume whacked right up.

I'm there trying to get some rest so I go: "Ste, turn it down."

And 15 or 20 mins later you actually turn it down.

Anico asks you the same question. What's my most annoying habit?

Your worst habit is that as soon as you come into the room

you lie down and want everything switched off.

Lights off...

You have to rest.

You always want to go straight to sleep!

Whereas you want the volume up on full for half an hour!

10 or 15 minutes yeah, then we can rest afterwards. But you don't.

Nicolas says: "Did you practise any other sports when you were a kid?"

Yes, I did. I did swimming for a couple of years, from the age of six to eight.

Why are you laughing?

What's up?

You just make me laugh.

I can't help it.

Question five is from Lenny.

What did you think of the fans at the Olimpico against Chelsea?

The support was fantastic.

They were behind us for the entire 90 minutes and never stopped.

It's always great to see the stadium full in Rome so it really was amazing.

Especially when you score.

- They shout your name. - When they shout your name. That's a real rush.

- A real thrill. - Yeah, a real thrill.

Lollo asks you: "What's the atmosphere like in the Roma dressing room?"

It's great. Very chilled.

It's full of good lads and we have a laugh too.

- Yep, I can second that. - You know that yourself.

El Shaarawy's the only problem. Otherwise it's great.

Michael says: "We've seen a new gesture when you celebrate. Can you explain that one?"

Right, so my new gesture...

This one, right?

I did it for my best friend.

- Whose name is? - Manuel. He's in America now.

We came up with it last summer.

He said to me, "When I can't come to the stadium cos I'm in America,

would you celebrate like that for me if you score?"

I promised him I would so that's what I do after every goal.

- It's for him. - Nice.

Roberta wants to know

what's your favourite film or TV series?

My favourite film, well it's my favourite genre, but this film in particular, is Law Abiding Citizen.

It's brilliant.

As for TV series, at the moment I'm watching Narcos,

which is nice but I don't have a favourite TV series.

This is from Giulia. Can you cook?

If so, which is your speciality? Chicory?

- I'm sorry Giulia but I can't cook. - Yep, I can confirm that...

At best I could rustle up some plain pasta.

Me too. That and chicken breast.

Yeah, I can do chicken breast in the pan. You just turn it over.

Valeria wants to know what football skill of Lorenzo's you'd like to take off him?

Get the list out your pocket.

What would I take off you?

The number of years left in your career, since you're four years younger than me.

- You can't say that! - Why not?

Because it's impossible.

Come on Zio, that doesn't count. You have to say something else.

I'd like to go back four years.

- But that's impossible. - Yeah but it's ok.

- Something that by training... - No, I've already answered.

If you could be a video game character, who would you be?

I've always been crazy about Batman.

He's a video game character.

Batman is cool.

He's got that armour and an amazing car.

All black. It's cool.

I love Batman.

This is for both of us from Gabri, who's got quite an imagination.

What does he say?

What would you do if a seagull flew into your house?

What would I do if a seagull came into my house?!

A bat came into my house once when I was in Milan.

My brother and I were there with the towels trying to shoo it away.

Trying to kill it?

Because it kept going around in circles and we didn't know what to do.

- Did you open the windows? - They were open but it kept going round the room.

You could have just let it be.

What about you?

- Seagulls are big, though. - Yeah, they are.

It's not a bat; it's a big thing.

What would you do?

I'd open the windows and then leave the house.

Go for a walk and hope it's gone when I come back.

Maybe he just popped in for a look.

Ok, thanks Zio.

See you all soon.

A big hello to all our fans.

- Ciao. - Ciao.

Shooting – you can't shoot with your laces.

You got lucky with that goal against Chelsea.

Did I laugh too much?

For more information >> FACE 2 FACE: El Shaarawy and Pellegrini interview each other! - Duration: 8:07.

-------------------------------------------

This is what happens when you do EDG in 2 minutes! - Duration: 3:59.

For more information >> This is what happens when you do EDG in 2 minutes! - Duration: 3:59.

-------------------------------------------

YOUTUBE IFŞALANIYOR DİSS TRACK F.T / - Duration: 2:12.

For more information >> YOUTUBE IFŞALANIYOR DİSS TRACK F.T / - Duration: 2:12.

-------------------------------------------

Best Gym Hip Hop Workout Music 2018 - Svet Fit Music - Duration: 37:21.

Svet Fit Music

For more information >> Best Gym Hip Hop Workout Music 2018 - Svet Fit Music - Duration: 37:21.

-------------------------------------------

YENİ AKIM YEŞİL UZAYLI (Dame Tu Cosita) 👽 - Duration: 3:51.

For more information >> YENİ AKIM YEŞİL UZAYLI (Dame Tu Cosita) 👽 - Duration: 3:51.

-------------------------------------------

Optimization Tricks: momentum, adaptive methods, batch-norm, and more - Duration: 10:16.

Deep learning is closely related to mathematical optimization.

What people usually mean by optimization is to find a set of parameters that minimize

or maximize a function.

In the context of neural networks, this usually means minimizing a cost function by iteratively

tuning the trainable parameters.

Perhaps the biggest difference between pure mathematical optimization and optimization

in deep learning is that in deep learning we do not optimize for maximum performance

directly.

Instead, we use an easier-to-optimize cost function on a training set and hope that minimizing

that would improve the performance on a separate test set.

We already talked about how optimization works in its simplest form in the earlier videos.

We pick a loss function that we want to minimize, do a forward pass given a mini-batch of inputs,

take the derivative of the loss function with respect to the weights, update the weights,

and iterate.

This is how vanilla stochastic gradient descent works.
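As a rough illustration, here is a minimal Python sketch of that loop; `compute_loss_and_grad` is a hypothetical stand-in for the forward and backward pass over one mini-batch, not a real library call.

```python
import numpy as np

def sgd_step(weights, grad, learning_rate=0.01):
    # Vanilla SGD: move the weights a small step against the gradient
    # of the mini-batch loss.
    return weights - learning_rate * grad

# Hypothetical training loop (compute_loss_and_grad is assumed, not defined here):
# for batch in batches:
#     loss, grad = compute_loss_and_grad(weights, batch)
#     weights = sgd_step(weights, grad)
```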

There are some tricks we can use to make this process more efficient.

First, let's focus on the update rule we had.

This update rule tells us to take a step towards a lower loss value guided by the gradient.

This usually works fine but can be a little slow to converge.

One useful trick that we can do is to add a velocity term to this update rule, which

helps our updates gain momentum towards minima at every update.

This is called the Momentum algorithm.

The momentum algorithm is simple and straightforward.

We define a parameter that accumulates a weighted average of the past gradients.

Then use this parameter in our update rule.

In other words, we add a weighted average of past updates to our current update.

The value of the momentum parameter determines the weight of the past updates in this weighted

average.

As a result of accumulating the gradients, the updates get larger when the algorithm

repeatedly gets gradients having a similar direction.
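As a minimal sketch (assuming, as above, that the mini-batch gradient is already computed), the momentum update described here can be written as:

```python
def momentum_step(weights, grad, velocity, learning_rate=0.01, momentum=0.9):
    # Accumulate a weighted average of past gradients in `velocity`,
    # then move the weights along that accumulated direction.
    velocity = momentum * velocity - learning_rate * grad
    return weights + velocity, velocity
```

The `momentum` value here is the weight given to the past updates in that running average.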

You can imagine the current state of the weights as a ball moving down on a surface defined

by the cost function.

The ball gains momentum as it moves towards the basin even if it hits some small pits

on the way.

This usually gives a quicker path to a solution as compared to plain stochastic gradient descent.

As you may recall, another factor that determines how big the step size will be is the learning

rate.

It's usually a good strategy to start with bigger steps and decrease the step size as

we get closer to our target.

You might think the magnitude of the gradient should shrink over time anyway, but that doesn't

happen in many cases.

The norm of the gradient might even increase, while the loss still keeps decreasing.

So, it's common to set up a schedule, such as exponential decay, to decrease learning

rate over time either in a continuous way or by taking discrete steps.
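For instance, an exponential-decay schedule could be sketched like this; the decay rate and step counts are illustrative values, not recommendations:

```python
def exponential_decay(initial_lr, step, decay_rate=0.96, decay_steps=1000, staircase=False):
    # Continuous decay by default; staircase=True drops the rate in discrete steps instead.
    exponent = step // decay_steps if staircase else step / decay_steps
    return initial_lr * decay_rate ** exponent
```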

In the plain version of stochastic gradient descent, the choice of learning rate might

have a crucial impact on the performance.

There are several methods that set a separate learning rate for each trainable parameter

and adaptively adjust the learning rate to decrease a model's sensitivity to the initial

learning rate.

The AdaGrad algorithm decreases the learning rate faster for the parameters that have large

gradient components and slower for the ones that have a smaller gradient.

The RMSProp algorithm also adaptively tunes the learning rate for each parameter in a similar

way but uses a moving average of gradients to make the optimization more suitable for

optimizing non-convex cost functions.

Another algorithm that uses adaptive learning rates is the Adam optimizer.

Adam stands for adaptive moment estimation.

It attempts to combine the best parts of RMSProp and momentum optimizers.

In practice, Adam and RMSProp both work well.
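As a rough sketch of the idea (a simplified version of the Adam update, omitting framework details), the first moment plays the momentum role and the second moment plays the RMSProp role:

```python
import numpy as np

def adam_step(weights, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # m: moving average of gradients (momentum part)
    # v: moving average of squared gradients (RMSProp part)
    # t: 1-based step count, used for bias correction
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    weights = weights - lr * m_hat / (np.sqrt(v_hat) + eps)
    return weights, m, v
```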

In addition to the optimization algorithm, the model architecture also has a big impact

on how easy it is to optimize a model.

Many successful models owe their performance to their architecture rather than the choice

of the optimization algorithm.

You can check out my earlier video on designing neural networks to learn more about how model

architecture can facilitate the optimization procedure.

You can find it in the Deep Learning Crash Course playlist in the description below.

One challenge in optimizing deep models is the internal covariate shift problem.

When we update the weights in one layer in a deep neural network, we update them assuming

that its inputs would stay the same.

However, the distribution of the inputs might change every time we update the weights as

the previous layer parameters are updated.

In deep models, even small changes in the early layers get amplified through the network

and cause a shift in the distributions of the later layers.

Changes in the input distributions make it harder for the following layers to adapt.

This problem is called the internal covariate shift problem.

A technique called batch-normalization makes it easier to optimize deep models by normalizing

the outputs of the hidden nodes right before they are fed into an activation function.

The first step of batch normalization is to subtract the batch mean from every output

value and divide it by the batch standard deviation.

This gives us a zero-mean unit variance distribution at the output.

The second step uses scaling and shifting parameters to allow the variables to have

any mean and standard deviation.

These scaling and shifting parameters are trainable and learned during training.

Essentially, the second step can undo what the first step does.

You might ask what's the point of normalization then?

The answer is that in practice the second step doesn't really undo the first one.

It's true that the variables are allowed to have an arbitrary mean and standard deviation

both with and without batch normalization.

The difference is that when batch normalization is not used, the distributions are determined

by a cascade of parameters.

On the other hand, batch normalization parametrizes the mean and standard deviation as trainable

parameters, which makes the distribution shifts manageable during training, resulting in faster

convergence.

Once the training is complete, global statistics that are computed during training are used

to normalize the activations rather than the batch statistics.

In this way, the inference becomes independent of the input batch during test time and we

don't need a batch of samples to run inference on a single sample.
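A minimal NumPy sketch of a batch-normalization forward pass along these lines, assuming inputs of shape (batch, features); the momentum used for the running statistics is an illustrative choice:

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, running_mean, running_var,
                       training=True, momentum=0.9, eps=1e-5):
    if training:
        # Step 1 uses batch statistics; also update the global running statistics.
        mean, var = x.mean(axis=0), x.var(axis=0)
        running_mean = momentum * running_mean + (1 - momentum) * mean
        running_var = momentum * running_var + (1 - momentum) * var
    else:
        # At test time, use the global statistics so a single sample can be normalized.
        mean, var = running_mean, running_var
    x_hat = (x - mean) / np.sqrt(var + eps)            # step 1: zero mean, unit variance
    out = gamma * x_hat + beta                         # step 2: learnable scale and shift
    return out, running_mean, running_var
```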

Optimizing deep models involves iterative processes that require some sort of parameter

initialization.

The way we initialize the parameters can have a big impact on a solution that a learning

algorithm achieves.

For example, if this is what the cost function looks like, projected onto a single dimension,

and the weights are initialized on the wrong side of a hill like this then the model would

converge to a local minimum although there are much better solutions on the other side

of the hill.

In practice, though, cost functions like this are very rare.

In higher dimensional space, local minima are not very common and it's likely that there

is a way around hills like these.

Why are local minima rare?

Think of it this way: for a point to be a local minimum it needs to have a smaller value

than its neighbors in all axes.

If we have a single dimension, the odds of observing such structures are not very low.

But what if we have a million parameters?

Then, for a point to be a local minimum it needs to have a smaller value than all of

its neighbors in all one million directions.

How likely is that?

If it does happen it's likely that it already has a very small value that can be considered

an acceptable solution.

In deep learning, we usually care about finding a good solution rather than finding the global

minimum.

Another type of critical point is a saddle point, where the cost function gets a minimum

value in some directions and a maximum value in some other directions.

Saddle points are likelier to be observed than local minima since it's harder

to get a value that is smaller than its neighbors in all directions as compared to being a

minimum only across some directions.

If the norm of the gradient becomes very small during training, the problem is likelier to

be a saddle point than a local minimum.

Let's go back to weight initialization.

The initial state of a neural network can have an effect on how fast the model converges,

how good the converged point is, or if the model converges at all.

So how should we choose the initial values?

There's no definite answer but there are some heuristics that might help.

Initializing the biases is usually easier.

It's usually safe to initialize them to zero or a small number.

However, initializing the rest of the parameters to zero or another constant is not a good

idea.

If we initialize all parameters to the same value, then they will all get the same updates

during training and end up learning the same features.

Ideally, we would prefer each neuron to learn something different to be useful.

To do that, we need to initialize the weights in a way that breaks the symmetry so that

each neuron gets a different update during training.

Initializing the weights randomly usually works fine, although it doesn't guarantee

perfect symmetry breaking.

A very common initialization algorithm is the Glorot initializer, also known as the

Xavier initializer.

Glorot initializer randomly samples the weights from a uniform distribution where the range

is determined by the number of inputs and outputs to keep the activation and gradient

variance under control.

This is the default initialization method in some frameworks.
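A small sketch of Glorot (Xavier) uniform initialization for a fully connected layer, assuming NumPy:

```python
import numpy as np

def glorot_uniform(fan_in, fan_out, rng=None):
    # Sample from a uniform range whose width depends on the number of
    # inputs and outputs, keeping activation and gradient variance under control.
    if rng is None:
        rng = np.random.default_rng()
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))
```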

The scale of the initial values matters because if they are picked from a very narrow range

of small numbers then the units might not be different enough to learn different features.

If the scale is too big, then the gradients might grow exponentially as the weights get

multiplied from one layer to another.

This problem is called exploding gradients and is more prevalent in recurrent neural

networks than the types of neural networks that we have discussed so far.

So far, we have focused only on feedforward neural networks.

In the next video, we will discuss what recurrent neural networks are and how they work.

That's all for today.

Thanks for watching, stay tuned, and see you next time.

For more information >> Optimization Tricks: momentum, adaptive methods, batch-norm, and more - Duration: 10:16.

-------------------------------------------

Oil Pastel Painting For Absolute Beginners By my Son Hrithik - Duration: 1:25:25.

For more information >> Oil Pastel Painting For Absolute Beginners By my Son Hrithik - Duration: 1:25:25.

-------------------------------------------

J-PLA는 국내 최초의 뮤지션으로 YouTube World Music Chart에서 톱 30에 진입했습니다.|조회수8.212.910 - Duration: 2:34.

For more information >> J-PLA는 국내 최초의 뮤지션으로 YouTube World Music Chart에서 톱 30에 진입했습니다.|조회수8.212.910 - Duration: 2:34.

-------------------------------------------

L25:Compiler Design Tutorial, Bottom Up Parser,LR(0) Parser, SLR(1) Parser,Simple LR Parser In Hindi - Duration: 26:57.

www.universityacademy.in

For more information >> L25:Compiler Design Tutorial, Bottom Up Parser,LR(0) Parser, SLR(1) Parser,Simple LR Parser In Hindi - Duration: 26:57.

-------------------------------------------

How To Post Youtube Video On Facebook with Large Thumbnail [Bangla] | - Duration: 2:36.

How To Share Youtube Video Link In Large Thumbnail On Facebook

post fb

large thumbnail

Let's go

open your videos

For more information >> How To Post Youtube Video On Facebook with Large Thumbnail [Bangla] | - Duration: 2:36.

-------------------------------------------

Dupion Silk Sarees from Flipkart - Duration: 3:24.

For more information >> Dupion Silk Sarees from Flipkart - Duration: 3:24.

-------------------------------------------

HITTING A TRICKSHOT I WOULD NEVER HIT AGAIN - Duration: 1:15.

Right here

Cuz you have the flag Oh in it I

Sort of go out that was another no scope for a quick scope I won't be so bad

Duck I actually hit this time oh

Shit yeah

Yeah, don't judge me hmm. I

Thought for sure you gonna get like just uh no scope I was about to be like I was doing

Yo, so uh game reviews I will be streaming tomorrow around maybe a little bit earlier two hours earlier

But from now, so if you just hop in the stream, I'll add you tomorrow

Yeah, I'm not adding you I'm sorry

I'm not gonna. Add I was gonna play with it. We could play her. Yeah, sorry

For more information >> HITTING A TRICKSHOT I WOULD NEVER HIT AGAIN - Duration: 1:15.

-------------------------------------------

China's Freedom-Crushing "Social Credit Score" - Duration: 4:52.

Is your privacy under threat?

Has the age of privacy come to an end?

Are you sick of Facebook's

lack of respect

for your privacy?

If you think

your privacy's threatened,

be glad you don't live in China.

The Chinese government wants to give

every citizen a score

based on behavior.

Purchase history,

political leanings,

and social interactions would be used

to calculate a person's trust score.

Facebook and Twitter

are banned in China,

so people use Chinese platforms instead.

So the government spies on that

round-the-clock.

The state also monitors

the Chinese version of Amazon,

called Alibaba.

Alibaba is by your side

24 hours a day,

seven days a week.

Why should we care what the Communists do?

We're not in China.

Li Schoolland came to America

30 years ago.

Here's Li when she was

16.

She survived the

Great Leap Forward,

the Great Famine,

the Cultural Revolution.

Her parents were doctors so

they

and she were

re-educated.

And this was to teach you

not to be fancy?

The repression is over.

It's all better now.

Today in China,

if you tell friends about certain books,

your message will be blocked.

Even innocent sounding phrases

are censored.

So I understand

the titles

of novels like Animal Farm,

Brave New World,

but Long Live the Emperor?

Well he's President forever.

They can't even talk about this teddy bear.

Winnie the Pooh?

And now, another step

more subtle than just

banning things.

The state will monitor what you say

in social media

and assign you

a social credit score.

That will tell them how trustworthy you are.

The government says

this will allow the trustworthy to roam

everywhere under heaven,

while making it hard for the discredited

to take a single step.

There's gonna be this new

social credit score.

Some American governments

already do something

similar.

The LAPD can scan tens of thousands of

license plates.

Los Angeles police now practice

predictive policing.

They pay a company called Palantir

to analyze social media,

trace people's ties to gang members,

and predict the likelihood that

someone may commit a crime.

After searching over a hundred million data points,

Palantir displayed an impressive

web of information

on one burglary suspect.

People like that.

They think it makes them safer.

I would like to know

that there's a trust score so

I can know who's trustworthy

and who's not.

Sounds sort of appealing.

When government does get involved,

bad things can happen.

What happens if you have a low score?

If they really don't like what you say,

they lock you up

and torture you.

They didn't allow me to sleep.

I was kept in a small room

and saw no daylight for half a year.

But that's China.

Why should

we be afraid?

Get out of my life!

In America, every week

on YouTube,

Twitter,

Facebook,

I challenge people in power.

Trump does make things up.

I say these things and

no one punishes me.

So far.
