Thursday, March 22, 2018

Watching daily Mar 23 2018

Hello Everyone..

Welcome to Priya's Kitchen..

Today I am going to share one easy and delicious recipe with vermicelli, that is Vermicelli Pulihora.

Today I am going to use rice vermicelli; it tastes better than normal semolina vermicelli.

Let's start the process..

for this let me show you how to boil the vermicelli

Add 1 Tablespoon of Oil to the Hot Boiling Water

add 1 Cup of Vermicelli

today I am taking Rice Vermicelli, you can use normal vermicelli also.

mix it once and cook the vermicelli till it is 90% done

do not overcook the vermicelli, otherwise they stick to each other

so it's been 10 minutes, let's check our vermicelli

see, it's done..

it should be like this...

Now switch off the flame and drain all the water

Now let's prepare the Tempering for our Pulihora.. for that heat 2 Tablespoons of Oil in a pan

add 1 teaspoon of Mustard Seeds

1 Tablespoon of Chana Dal

and 1 teaspoon of Urad Dal

fry them

add 2 Tablespoons of Peanuts

15 to 20 No.s of Cashew Nuts

fry them

add 3 No.s of Whole Red Chillies

4 to 5 No.s of Slit Green Chillies

some Curry Leaves

fry them

so all our ingredients got fried nicely

now add 1 teaspoon of Turmeric Powder

just a pinch of Hing (Asafoetida)

Salt to taste

mix it once

now add Lemon Juice

I have taken 2 medium sized lemons

you can increase the lemon juice if the sourness is not enough

switch off the flame

mix it

add our cooked vermicelli

mix it nicely till all the ingredients combine very well

so I have mixed the Vermicelli very well

Let this cool down for 5 to 10 minutes, and then we can serve our Vermicelli Pulihora

So our yummy n delicious vermicelli pulihora is ready

serve this at room temperature.. it tastes excellent..

It is perfect for kids' lunch boxes also.

Try it in your home and let me know the feedback in the comments section below

Thank you for watching..

I hope you enjoyed today's session of making Vermicelli Pulihora.

If you enjoy my recipe, hit the Like button and Share my video with your friends n family members

subscribe to my channel for more such yummy recipes

See you soon with some more recipes.. Till then take care.. Bye bye...

For more information >> Vermicelli Pulihora Recipe | How to make Semiya Pulihora | Semiya Lemon Pulihora Recipe - Duration: 3:36.

-------------------------------------------

Natural Language Processing: State-of-Art: Observations and Insights - Duration: 31:15.

So, from Computer Vision, will move next to NLP,

and when we talk about NLP,

I cannot think of any better name

and Professor Pushpak Bhattacharyya,

who is from IIT Bombay as probably most of you know,

and now divides his time between IIT Bombay and

IIT Patna, where he is the Director.

So, over to you Sir, Pushpak.

>> Okay. So, yeah, from vision to language.

My talk is on the state-of-the-art for NLP.

And I've also added some observations and insights,

based on whatever we have seen for a number of years.

Okay. So I'll make

a few remarks on the nature of NLP as such,

then I'll take a representative problem.

Talk about the new world order for NLP.

Then in some trends,

Indian efforts and draw some conclusions in future work.

So, Natural Language Processing is one of the areas

of AI, and the remarks that

Professor Raj Reddy made in the morning

sort of contextualize AI and

Computer Vision also, and we can draw from those remarks.

So, the areas on

the outer circle are the ones which

are closer to human beings

and real-life applications but all of them draw from

very fundamental ideas and techniques of research,

reasoning, and learning, and some planning also.

So, we have always believed

that Natural Language Processing

is a confluence of linguistics and computation.

We believe linguistics is

the eye and computation is the body.

And we can see on the top layer,

the basic ingredients of

linguistics: lexicon, morphology, syntax, semantics.

And at the bottom layer, the

nuts and bolts of computer science,

Probability theory, machine learning,

graphs and trees, and so on.

And this interaction between

linguistics and Computer Science gives rise to

very important areas whose utility

as well as research interest are supreme.

Like sentiment analysis, summarization,

morphology analysis, machine translation.

All these are extremely big areas

and they have a tremendous application.

Next please? So, just to make the point,

that linguistics is the eye

really and computation is the body.

I would like to make

this example and put

it in front of you right in the beginning.

Okay? Just to emphasize the point

that it is like physics for Computer Science,

physics for Computer Vision,

phonetics for speech,

and linguistics for Natural Language Processing.

I think they will always remain,

they can never be discounted.

So, here is an example,

this is the distributional hypothesis

from Harris back in 1970.

Words with similar distributional properties

have similar meanings

and Harris did mention that distributional approaches

can model differences in

meaning rather than the proper meaning itself.

So, this particular idea is

the foundation for Word embedding or Word vectors.

So, that is the "linguistics is the eye" point.

Now, let's see the "computation is the body" point.

Next, so, we have the CBOW and

Skip-gram models for generating

word vectors or word embeddings.

So in Skip-gram, what happens is that

a word is presented,

its one-hot representation is

presented to a feed-forward network, and

the output is prescribed as

the contextual words for that particular word.

For example, if dog is the input,

represented as a one-hot input vector,

then all the contextual words for

dog, especially the content words like bark,

policeman, vigilance, and so on,

are placed in the output layer,

in the form of a supervised-learning situation.

And CBOW is exactly the opposite:

the context words are placed as

one-hot representations, and the output

is the word itself, okay?

So now, the weight vector for the neurons

connected to the one-hot representation

becomes the representation of the word.

Okay? So the weight vector represents

the word. Next slide.
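The mechanism just described, that the weight vector attached to a word's one-hot input becomes the word's embedding, can be sketched minimally. The vocabulary and the numbers in `W` below are invented purely for illustration; in a real skip-gram model `W` is learned by backpropagation.

```python
import numpy as np

# Hypothetical trained input-to-hidden weight matrix W (vocab_size x dim).
# These numbers are made up; real word2vec training learns them.
vocab = {"dog": 0, "bark": 1, "cat": 2}
W = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.7, 0.3]])

def embed(word):
    # A one-hot vector times W simply selects that word's row of W:
    # the weight vector IS the word's representation.
    one_hot = np.zeros(len(vocab))
    one_hot[vocab[word]] = 1.0
    return one_hot @ W

print(embed("dog"))  # same as the row W[0]
```

The point of the sketch is only that the "representation" is nothing more exotic than a row of the trained weight matrix.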

So now, if we take an example here,

when we hear the word

Dog by association quite a few other words

appear in our mind and they also appear

in the syntactic environment.

So, bark, police, thief, vigilance,

faithful, friend, animal, milk, carnivore, and so on.

Similarly, for the word Cat,

we have associated words from the corpora,

similarly for a lamp.

Now, the similarity between dog and

cat is much more than

the similarity between dog and lamp.

As is shown in the associated word list also.

Now, this is an intuition we have,

this is also specified by

Harris's distributional hypothesis and next slide please?

And, if you present these words to the neural network,

it is these very same words which

appear by excitation at the outermost layer.

Okay? So the representation in this case means

the similarity of the words through

this representation in the weight vector form.

So, this is the way,

we have captured the word embedding

and we take it as the meaning of the word.
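The intuition above, that dog is closer to cat than to an unrelated word, is usually checked with cosine similarity between the embedding vectors. The three 3-dimensional vectors here are hypothetical, chosen only so that dog and cat point in similar directions:

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity: 1.0 means same direction, near 0 means unrelated.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical embeddings, invented for illustration only.
dog = np.array([0.9, 0.8, 0.1])
cat = np.array([0.8, 0.9, 0.2])
lamp = np.array([0.1, 0.2, 0.9])   # an unrelated word

print(cosine(dog, cat) > cosine(dog, lamp))  # True
```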

Okay? Next please? So, coming back to this aphorism,

linguistics is the eye and computation is the body.

The encode-decode deep learning network is nothing

but an implementation

of Harris's distributional hypothesis.

Okay? So this 1970 hypothesis,

has appeared as the word embedding.

Trained through a feed-forward neural network

by applying Backpropagation algorithm.

And this whole network is nothing

but the implementation of that particular principle.

Linguistics is the eye, computation

is the body. Next please?

So Natural Language Processing,

we know, is a layered task, going from morphology at

the bottom, which processes words, and transiting through

part-of-speech tagging, chunking, parsing, and

semantics, going up to

tough problems like Pragmatics and Discourse.

And Natural Language Processing

is also a multi-dimensional problem.

There are words on the X-axis,

on the Y-axis are the problems

with respect to language processing.

And computer scientists are interested in

the other axis which is algorithm design.

Now, the property and

the personality of the language is very important.

We have a very favorite example of ours,

[FOREIGN] This is a Marathi sentence

and the English sentence is,

The one who is in front of the house told me.

So, this is the way languages represent information.

[FOREIGN] is a continuous string

and English has this isolating behavior,

it has taken different morphemes from that long string.

And it has different words representing the morphemes.

So, this difference of property between the languages,

is important to keep in mind for algorithm design.

Next? So, the need for NLP is well known:

we have to track the sentiment of people,

eCommerce companies are very interested in this.

And as Professor Reddy mentioned in the morning,

translation is going to be a very,

very important problem in the years to come.

Next? Now, machine learning has

become one of the important paradigms

of Natural Language Processing.

And in machine learning, what we do is that,

we look at instances and arrive at

an abstract concept by which we can classify the objects.

So, when instances of tables are given,

what we arrive at is the concept of tableness in

some approximate form and

that is refined gradually in presence of more data.

Okay. So, what we are acquiring is

the concept of tableness not particular tables.

Next? So this picture I'm very fond of,

Natural Language Processing and machine learning have got

married for the benefit of both fields. Next please?

Now, natural language processing.

The challenge of natural language processing

comes from the fact that,

there is ambiguity at every stage.

And this is what makes

machine learning and natural language processing

come close to each other.

Ambiguity processing means choosing

one amongst many other options.

So, that precisely is the classification task.

We choose one of the many classes present.

So there is ambiguity at lexical level,

ambiguity of words, the word

present can mean time or gift.

There is structural ambiguity,

"one and two bedroom flats, live-in ready."

"Live-in ready" has to be grouped together;

similarly, "one and two bedroom"

has to be grouped together,

not only as a set but also recursively.

They have their internal structure.

There is a very famous sentence

in semantics: flying planes can be dangerous.

What is dangerous? Flying or the planes?

Pragmatic ambiguity.

Sarcasm is a case of pragmatic ambiguity.

If I'm not given any attention in

a party and while I'm taking leave of the host,

if you ask me how did you like the party then I reply,

or let's say retort, "I love being ignored."

Okay. Which is a sarcastic sentence.

So the surface meaning is

different from the intended meaning.

Forward. The other challenge of NLP, is multilinguality,

and if you had seen the list of

things that professor already put up,

multilingual computation is listed

as one of the basic needs of our civilization.

Next. So now an important point about the nature

of NLP and its interaction with the machine learning.

Rules have been used in

artificial intelligence always right

from the beginning of AI.

Even when data-driven computation is used,

the underlying mechanism is rules,

even though they are not apparent,

and we would like to extract those rules

from the machine that has been trained.

That is called the research on explainability.

Rules are very good for explaining the computation.

And, we also like to believe that we should

not learn when we know the underlying knowledge.

Only when the phenomenon seems

arbitrary at the current state of knowledge,

then we'd like to go to data,

for example, [inaudible] is translated as "many thanks,"

not as "several thanks," and there is no explanation for this.

Okay? So, since this is

a fixed behavior we'd

like to extract this pattern from the data,

and use it. Next please.

So, in this scenario probability

has played a very, very important role.

This is a favorite example of mine again.

If we look at these four sentences,

the sun rises in the east,

the sunrise in the east,

the svn rises in the east and the sun rises in the west.

All the sentences except the first have

some defect: a grammatical mistake,

a spelling mistake, or a semantic mistake.

The first sentence is correct.

And when we compute the probability of the sentences,

for the first sentence

the probability comes out to be highest.

What is the meaning of the probability of

the sentence? Next slide please.

The probability of a sentence means

the probability of n-grams and their product.

So the probability of "sun rises in the east" is

the probability of "sun", times the probability of "rises" given "sun",

times the probability of "in" given "sun rises",

times the probability of "the" given "sun rises in", and so on.

So this is the quadrigram probability.

So, when we try to compute this probability,

because of the fact that rises in the west as

a quadrigram is less frequent than rises in the east,

as it appears in the corpora,

the probability of the first sentence,

comes out to be the highest.
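This count-based calculation can be sketched as follows, using bigrams instead of quadrigrams for brevity. The tiny made-up corpus stands in for the large corpora the talk refers to; in it, "rises in the east" is simply more frequent than "rises in the west".

```python
from collections import Counter

# Toy corpus: nine "east" sentences and one "west" sentence.
corpus = ("the sun rises in the east . " * 9 +
          "the sun rises in the west . ").split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def sentence_prob(sentence):
    words = sentence.split()
    p = unigrams[words[0]] / len(corpus)        # P(first word)
    for w1, w2 in zip(words, words[1:]):
        p *= bigrams[(w1, w2)] / unigrams[w1]   # P(w2 | w1) from counts
    return p

east = sentence_prob("the sun rises in the east")
west = sentence_prob("the sun rises in the west")
print(east > west)  # True: the common phrasing gets the higher probability
```

Nothing here "understands" grammar or semantics; simple ratios of counts produce the ranking, which is exactly the illusion the talk describes.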

Can you go back once? Yeah. So, the probability

of the first sentence comes out to be the highest,

and it produces an illusion that

the system understands grammatical mistakes,

semantic mistakes and spelling mistakes.

Okay. So this very simple calculation which

is robust and fast,

and based on simple ratio of counts,

creates an illusion that

the system is aware of grammatical mistakes,

spelling mistake and semantic mistake.

So this method of doing AI,

according to me, is a slightly different world order,

and this is going to stay for

quite some time because data is available.

Next. Next please.

So this particular power of data,

and simple processing on data,

was brought out in 2014 by a paper from Google,

which also got a lot of press, and Johann

mentioned this as well.

There are lots of papers on automatic image captioning.

And the main methodology is simple.

We do image processing on the image,

and natural language processing on the caption,

and put features and parts in

correspondence. Next slide please.

So, one more slide.

Yeah. So, you see these,

all these images have been captioned automatically.

And now there are slight mistakes which are also curious.

So, look at this second image,

two dogs play in the grass.

Actually three dogs are playing.

And also the last but one image,

red motorcycle, actually the motorcycle is pink.

Now, one begins to wonder why

a system which has done everything right,

why does it make these mistakes of

counting the dogs here and the color of a scooter there?

So this we know from the history of AI,

is the sign of shallow understanding.

These systems which have shallow understanding

get most things right otherwise,

but make some mistakes which are

unexplainable. So next please.

So, this kind of

shallow understanding is the characteristic of NLP, ML,

AI, and maybe vision and speech also, where we can

sort of write an equation

with some violation of notation:

deep understanding is equal to

shallow understanding plus big data.

That seems to be the world order

today in terms of research.

Next. So, this reminds us of the grind methodology:

show umpteen problems

for a particular concept,

for example, Newton's Third Law of Motion.

There is the problem of spring,

there is the problem of recoil of a gun.

All these are explained by Newton's Third Law of Motion.

But what happens today is the students are subjected to

a very large number of problems being solved. Next slide.

And what they do typically is memorize the patterns,

there are multiple-choice questions,

match the pattern, eliminate choices, select from a few.

And it seems they do not have

a unifying theme which runs across the problems.

Okay? So, next.

So, there is an uncanny resemblance to

today's NLP as I see.

So there is this kind of huge memorization of pattern,

along with probability distribution.

So if I take this example, "I love being ignored."

This is sarcastic: yes; non-sarcastic: no.

This is the hardmax decision.

"The movie is great for putting you

to sleep" is slightly more difficult,

because the sarcasm is not overt.

So this is sarcastic with probability point nine,

and non-sarcastic with probability point one.

So, instead of classifying in

a hard manner what we're

learning is the probability distribution,

where all the classes are

probable with different probability values.

Okay? And this corresponds

to, actually extracting patterns,

and giving them probabilities

assuming some kind of probability distribution

on the underlying data.
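The shift from a hard yes/no decision to a probability distribution over classes is what a softmax output gives. The scores below are hypothetical classifier outputs, picked only so the resulting distribution echoes the 0.9/0.1 split mentioned above:

```python
import math

def softmax(scores):
    # Turn arbitrary real-valued scores into a probability distribution.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for the classes ["sarcastic", "non-sarcastic"].
probs = softmax([2.1972, 0.0])
print([round(p, 1) for p in probs])  # [0.9, 0.1]
```

Every class gets some probability mass; the "hardmax" decision would instead simply pick the arg-max class.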

Next. So the main methodology in NLP.

Also I think in many other fields of AI is this,

object A and object B are put in correspondence,

and parts and features are

extracted from both the objects,

and we establish correspondence

between these parts and features.

We go from larger correspondence

to smaller correspondence.

And the methodology is

most of the time expectation maximization.

And we learn these mappings,

we use these mappings in a new situation called Decoding.

This is the main methodology which is

applied across machine learning driven NLP.

Next. And linguistics and

computation interaction happens through annotation.

Annotated data is better than

raw data; it dramatically

changes the way computation is done.

But good annotation design and

good annotators are very difficult to find,

because they have to understand

both statistical phenomena and linguistic phenomena.

Next. So I take a representative problem now,

and this is the problem of numerical sarcasm.

It shows, you know, why rules,

classical machine learning, and

deep learning are all important,

and why it is necessary to put all of them in perspective.

Next please.

So, about 17% of sarcastic tweets have their origin in numbers.

This phone has an awesome battery backup of 38 hours,

which is a non-sarcastic statement, positive sentiment.

Last sentence, this phone has

a terrible battery backup of

two hours, non-sarcastic, negative sentiment.

The second sentence is curious,

this phone has an awesome battery backup of two hours.

Why do we resort to sarcasm?

Okay? Why can't we plainly, directly

express a negative sentiment?

So this is an interesting question,

why people use sarcasm.

I think human beings inherently like dramatization.

They also like forceful articulation.

And, here one of the hypotheses is that,

sarcasm has this effect

of lowering the defense and then attacking.

Okay? And at that time the attack is much more forceful.

So, this phone has awesome battery backup.

Fine. The defense is

lowered, and then you dramatically come back saying,

backup of two hours, which is a negative sentiment.

Okay? So there are many, many theories about why

sarcasm is used as an instrument for communication.

Next. So, numerical sarcasm.

All these are examples of numerical sarcasm.

Waiting 45 minutes for

subway in the freezing cold is so much fun.

So much fun is positive,

but waiting 45 minutes

in the subway in freezing cold is negative.

So, simultaneous presence of

positive and negative sentiment indicates sarcasm.

Next. So, there are important datasets

in the area for the sarcasm task. Next.

So what is done in numerical sarcasm detection is:

take "this phone has an awesome battery back-up of two hours."

It is passed through natural language processing;

this is the dependency parsing output.

So, we create tuples of this kind:

the noun phrases, phone, awesome battery back-up, et cetera,

the number, and its unit, hours.

And the algorithm goes as follows.

In rule-based matching system,

I love writing this paper at 9:00 am.

This is a new sentence. The matched sarcastic tweet is:

"I love writing this paper

daily at 3:00 am." That is sarcastic.

Since nine is not close to three,

the test sentence is non-sarcastic.

I'm sure you're raising your eyebrows,

okay, because this is still ad-hoc.

See the next example also.

Test Tweet: "I'm so

productive when my room is 81 degrees."

Matched to Non-sarcastic Tweet is,

"I'm very much productive in

my room as it has 21 degrees."

The absolute difference between 81 and 21

is high and therefore the test tweet is sarcastic.

So this is the rule-based system.
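The rule just described can be sketched as follows. The closeness threshold (2 here) and the single-number extraction are assumptions made only for this illustration; the actual system's rules are more involved.

```python
import re

CLOSE = 2  # assumed threshold for "close" numbers; invented for this sketch

def first_number(text):
    # Grab the first integer mentioned in the tweet, if any.
    m = re.search(r"\d+", text)
    return int(m.group()) if m else None

def classify(test_tweet, matched_tweet, matched_is_sarcastic):
    diff = abs(first_number(test_tweet) - first_number(matched_tweet))
    if matched_is_sarcastic:
        # Matched a sarcastic tweet: the label carries over only if the
        # numbers are close.
        return diff <= CLOSE
    # Matched a non-sarcastic tweet: a large numeric gap flips the label.
    return diff > CLOSE

# 9 am vs the sarcastic "3 am" tweet: not close, so non-sarcastic.
print(classify("I love writing this paper at 9 am",
               "I love writing this paper daily at 3 am", True))       # False
# 81 degrees vs the non-sarcastic "21 degrees" tweet: far apart, sarcastic.
print(classify("I am so productive when my room is 81 degrees",
               "I am very much productive in my room as it has 21 degrees",
               False))                                                 # True
```

The hard-coded threshold is exactly the ad hocism the talk complains about, which is what motivates handing the decision over to data.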

Okay? Now the rule-based system is naive,

very ad hoc, agreed?

So then, if we go to machine learning,

when a decision seems ad hoc,

we'd like to go to data and

make these decisions come from the data itself.

So here, classical machine learning and

we use different features like positive words,

negative words and so on.

Positive emoticon, negative emoticon.

Punctuation features.

And then, since features also depend

on human beings' whims and fancies.

Okay? We'd like to eliminate even that part

of computation and make

it completely deep learning-based,

very little feature engineering.

So Convolutional Neural Network,

Feedforward Network with back propagation.

And here are the results.

So we find that deep learning-based approaches

give about 93% accuracy,

the machine learning-based approach about 80%,

and the rule-based system 82%, okay,

both of them around 80%.

What is the main difference? The difference is

that rule-based system, the decision is ad hoc.

Coming from human beings, it is ad hoc.

The problem with machine learning based system

is that features are being used.

Features are also given by human beings.

The ideal deep learning system will eliminate

all human intervention and

everything will come from data.

That is the ideal deep learning scenario.

And the message, okay,

the insight, is that when there is ad hocism in the decision,

we rely on the data to give us the decision threshold.

You can remove this ad hocism by relying on data.

If we go to machine learning,

then human intervention is a little reduced.

Still, the features are coming from human beings,

and even that high level of human intervention is

removed when we resort to Deep Learning System.

Message is that rule based systems are

great for intuition building and explainability.

That, we cannot take away from rule-based systems.

Okay? The idea for

building the machine comes from rule-based systems.

However, some human decisions seem to be ad hoc,

so relegate that decision to come from data.

And finally, in the final step,

resort to Deep Learning to have

even feature engineering coming from data.

So this is the trend, Rule-based System,

Classical Machine Learning-based System and

then deep learning-based system.

So the new world order for NLP-ML is the following.

There is this trinity of data, technique and idea.

In this new world order,

data has become available.

Techniques have improved a lot,

a lot of insights from machine learning.

Now, there is this playing field where people with ideas

can make use of data and

technique to come up with very useful systems.

Okay? But this playing field is not a level one.

There are haves and have nots.

And data, application, and

more data is a steeply ascending gradient.

The Googles, Facebooks, and Ubers

have tremendous amount of data.

So they will always try to either outsmart

new entities with ideas or will try to acquire them.

So, I saw this San Francisco-based startup, Postmates,

which aims at getting anything delivered in minutes.

Okay? The idea is very similar to Uber.

Just as Uber has a fleet of cars-with-drivers,

Postmates has a fleet of couriers,

currently about 13,000 who can deliver goods locally.

So huge amount of data is gathered and

subjected to analytics to reduce delivery time,

by drafting nearest courier personnel to pick

items from specified shops and deliver,

just like Uber drafts its drivers.

Okay? So same idea. So, Apple

is very interested in bringing in multilinguality,

sentiment, et cetera in

its Siri system where again NLP plays an important role.

NVIDIA is more interested

in attacking a very fundamental question.

A lot of deep learning is actually

based on matrix multiplication.

So they would like to make matrix

multiplication faster and faster.

Now, a common start-up theme seems to be the prediction

of problematic instances out of a conceptual space:

predicting whether a loanee will turn out to be a defaulter,

whether an organization will fall into financial distress,

or whether a rentee will prove "bad" for AirBnB.

So, those are all one class of problems,

which startups are handling.

But another common theme is,

we have services like cars, shops, restaurants,

movies, et cetera, and people wanting service,

like travelers, consumers, moviegoers.

And between these two, there

is this Match-making software.

Uber, Lyft, MakeMyTrip,

Postmates are nothing but match-making software.

And therefore, the methodology is,

services register themselves with the software.

People wanting service, they download the App and that

builds the bridge and then

matchmaking takes place for fast service,

quality of service, and customer satisfaction.

They collect users' feedback in spoken and written form.

This is a very very important trend for

all these match-making software systems. Now, the Indian effort.

This has been mainly multilinguality driven.

And many of

our very well-known researchers are right in this room.

In the 90s, with the Fifth Generation Computer Project:

IISc (architecture), IITM (expert

systems), the then NCST doing NLP,

ISI (Computer Vision), TIFR (speech).

Machine Translation has been a large effort since the 1980s.

Many institutes have been involved.

Cross-lingual information retrieval, we have

been leading this effort from IIT Bombay.

Indian Language Lexical Resources like Indian WordNets,

Information Extraction very strong

groups exist in IITKGP,

IIT Delhi, IISc, IIT Bombay,

IIIT Hyderabad, IIT Patna also.

Speech and NLP, IITM, TIFR, IIT Bombay.

OCR and NLP, IITD and IISc.

So these institutes have been

contributing a lot to Indian Language and NLP and speech.

So, future outlook is as follows.

It is a reality for NLP

that lab ideas are going to the land.

Okay, they're becoming large utilities.

Very, very useful utilities, making a lot of money,

and the processing is

based on quick and robust computation.

Quick and robust computation.

Shallow understanding, shallow processing,

but a lot of data. This is here to stay.

And NLP and speech marriage is inevitable,

led by speech, with NLP at the backend.

Okay.

So speech voice-activated systems are going to be there,

but they will be helped a lot

by Natural Language Processing at the backend.

Chat Bots are becoming very ubiquitous.

Banks, insurance companies, airlines,

all of them are on Chat Bots.

Emotion and Opinion Tracking.

We ignore this only at our peril.

And there is a huge possibility of economic impact.

I'll end with some philosophical questions.

One, method changes but the philosophy does not.

Method changes but fundamental philosophy doesn't.

Classical, Statistical, Deep Learning:

the method changes, not the fundamental principle.

What is the fundamental principle? Let's remember this.

NLP and machine learning work

on the principle of establishing correspondences.

Okay, there are large objects,

objects which are put in correspondence.

Their parts and features are

put in correspondence by

expectation-maximization or some such algorithm.

This is the basic methodology.

And then, we apply these learned mappings

to new situations, which is called Decoding.

So this particular thing hasn't changed.

Whether it is rule-based

or machine learning-based or deep learning based,

this fundamental principle hasn't changed,

will not change according to me.

So this is based on

the correspondence of parts

and the correspondence of features.

Now, when we learn

the correspondence of parts and features,

the ingenuity lies in modeling the probability.

Okay, what exactly is the probability distribution?

That is the crux of the matter.

That's where human ingenuity will play its role.

And the philosophical view on

Neural Networks versus language is the following.

In Deep Learning or Neural Network-based Computation,

we know there are only two kinds of

representations: vectors and matrices. In fact, only one:

matrices. And operators are

only two: vector addition and vector multiplication.

If we go to the world of languages,

the representation is much much richer.

There are characters, words,

sentences, paragraphs, punctuations,

syntax tree, meaning graph,

silence modulation, et cetera, et cetera.

And operators: concatenation, reversal, reordering,

implicature, irony, metaphor, sarcasm,

and other operations of language.

So a very, very important question is this:

with only these two representations and two operators,

can they capture this whole gamut of operations and

representations in the field of language? Okay. Thank you.

>> Thank you professor.

We'll take only one question since

we're running a bit over time.

One question in the audience?

Anyone? Just wait for microphone.

>> Like in word-to-vector representation,

let's say one word is there,

"the river bank of Ganga" something, and

"the State Bank of India

increased the interest rate" something.

So this one word is placed in two different contexts,

but the representation, how will you make

one vector representation for these two?

>> Yeah. So, this is an example of word ambiguity.

Bank as River Bank or bank as Financial Bank.

So, if you create

the word representation from the corpus as such.

Okay, then you will

not have two different representations

for two different senses.

Okay, however, if you generate word vectors from domains,

then the representations will be different.

The other thing is explainable word vectors;

there is proposed work

on interpretable word embeddings.

Kevin is working on this.

So you can trace components of the word vector,

which correspond to those domain-specific attributes.

For example, the financial bank

has very close association with money for example.

Okay, so that particular component,

maybe we can trace to a particular

position in the word vector.

>> With that, we'll close this talk.

Thank you very much Professor.

For more information >> Natural Language Processing: State-of-Art: Observations and Insights - Duration: 31:15.

-------------------------------------------

Dacia 1300 1:8 scale model - Eaglemoss Romania - nr.45 - Duration: 5:50.

For more information >> Dacia 1300 1:8 scale model - Eaglemoss Romania - nr.45 - Duration: 5:50.

-------------------------------------------

❤️SUPER! The funniest, most playful birthday greeting for a friend!❤️A BEAUTIFUL HAPPY BIRTHDAY GREETING - Duration: 0:54.

For more information >> ❤️SUPER! The funniest, most playful birthday greeting for a friend!❤️A BEAUTIFUL HAPPY BIRTHDAY GREETING - Duration: 0:54.

-------------------------------------------

THE ADVENTURE BEGINS! - Duration: 1:32.

Hi guys my name is Sticks

I'm about to start a new series called

OUT N' ABOUT

It's about places that I just like to go to or places I can try to go to

and you just never know where it's going to be you never know what's going to happen

and I hope you guys will tag along for the ride

For more information >> THE ADVENTURE BEGINS! - Duration: 1:32.

-------------------------------------------

Recognize The Light Worker Within By Tackling These 3 Light-Worker Blocks - Duration: 5:17.

Recognize The Light-Worker Within By Tackling These 3 Light-Worker Blocks

by Conscious Reminder

What Is A Light-Worker? A Light-worker is anyone who devotes their

life to being a bright light in the world. They understand that their actions (no matter

how big or small) have the potential to raise the vibration of the planet.

A Light-worker soul is awake, conscious that their presence matters and that they are part

of something that is bigger than them. Light-workers are not just tie-dye-wearing hippies and healers

with dreads; far from it.

They are teachers and chefs, writers and singers, producers and cleaners, mothers and mediums.

They're at the country club and the nightclub, in the cafe and crèche, the boardroom and

the art room.

In spite of often being treated indifferently, they do not give up. They keep coming back

with a lot of positivity to offer to the world. This is because they know that the world and

its people need it the most.

Light-workers are always aligned with the people that they are destined to assist. It's

a case where the student calls out for the teacher, and the teacher appears. This also happens

the other way around too.

These are the three things you need to keep in mind to become a true light-worker:

Don't Give Up Easily: Life becomes a harsh training ground for a Light-worker so that

they can be of the highest service to mankind. Many Light-workers have lived a lot of life

in a short period of time.

Their lives are beautifully decorated with loss, sadness, illness, depression, anxiety

and fear so that they can live and teach how to navigate through the uncertainty of life

and be living proof of walking the talk. Light-workers are the flame carriers.

They hold the light energy of this planet in balance and have come to Earth in order

to help shift it into a higher level of consciousness. Light-workers are souls that have agreed to

come to this planet in order to fulfil this duty. They have agreed to carry this flame

inside of their soul in order to illuminate and enlighten the world.

Some Light-workers take on the role of being a spiritual teacher, psychic or energy healer,

whereas others do their work with more subtlety. When a Light-worker enters the physical

world, they often struggle to manage their flame within.

Often, a Light-worker forgets their purpose and their flame is dimmed or extinguished

altogether.

Appreciate Your Energy: It is through this dim flame that the Light-worker goes on their

own healing journey of self-discovery. When their flame is out, the Light-worker has to

dwell in their own shadows in order to learn how to reignite their flame.

All callings are different for Light-workers; however their ultimate objective is to help

people dig through their own shadows in order to find their light.

Know Your Purpose: Often a Light-worker is responsible for balancing out the energy of

fear with the energy of love on this planet. They do this by helping others to awaken their

inner soul and energy; they also do this by healing others with their words, empathy and

hands.

I guess we need these people now more than ever, because all of us are so lost in all

the unnecessary things that sometimes we don't even know how the person sitting beside us

feels.


-------------------------------------------

Cuphead - Secret / Hidden and Unfinished Heads Part # 2 - Duration: 10:02.


-------------------------------------------

Superstore - Goodbye, Jeff (Episode Highlight) - Duration: 1:17.


-------------------------------------------

15 Traits of People with True Integrity - Duration: 4:39.

15 Traits of People with True Integrity

There are some rare qualities that modern people have, and one of them is integrity.

Integrity is a combination of a strong will and honesty.

It also incorporates genuine character from within, and it is difficult to find a

person with real integrity because of the chaotic universe that drives people here and there.

However, there are still people who have such integrity, and you can see it through these

15 traits.

#1 - Trustworthy

It is not all about a character that other people can rely on.

It is also about the ability that the person can have when they are cornered.

People who are in a difficult situation usually cannot be trusted; those are the people who

are lacking integrity.

#2 - Accountable

Yes, people with true integrity will not pass the blame onto other people.

If they really do make mistakes, they will explain how they made them.

They also will admit it and they will be responsible for their action.

#3 - Reliable

People with true integrity will act on their words.

They are reliable people that others can rely on.

The reason is simply because they always do things they have been assigned to.

#4 - Sharing the spotlight

It goes without saying that people with true integrity do not want to take the fame for

themselves.

They realize that things that happen to them are also caused by others helping them.

They remember that, and they share their joy to others too.

#5 - Humble

Some people really cannot hold their ego when they talk about themselves.

However, people with true integrity can control that, so that they will respond in a nice

and humble way.

You will never hear about them from themselves.

You will hear about those people from others.

#6 - Finding solution

They are aware that debate is sometimes not important, and it only wastes a good time

that everyone can enjoy.

However, they do not always avoid debate.

They also offer solutions and suggestions that construct a better reality.

#7 - Genuine

People full of integrity cannot be swayed by things that might change their minds.

However, it is worth noting that not all people can stay grounded when they are faced with

multiple interests.

#8 - Generous

It is not all about the money or material things.

It is all about giving what you think is valuable, such as time and energy.

People with great integrity will share their valuable things with others who need them.

#9 - Lending a hand

Yes, people with great integrity also like to help others.

They also help others even without being asked.

They are like empaths who understand others without anything being said.

#10 - Kind

Being kind is a mandatory trait that all people should have.

However, those with high integrity push their kindness to an advanced level.

They help the community instead of a single person.

#11 - Raising others

They are just like parents who want to see their children grow healthy and perfect.

They raise other people without them realizing it, and it gives a great peace of mind.

#12 - Valuing time

We all know that time is the most valuable thing.

People with full integrity know how to use their time well, and they will regret wasting

it even for a second.

#13 - Intuitive

They can strengthen the relationships between people, and inspire them to produce something

new and amazing.

They have visionary mind, and it keeps them going.

#14 - Believing others

It is difficult to trust and believe others because no one can do the job better than

you do.

However, people with high integrity really can trust others.

#15 - Seeing the best

The eyes of people are different, and people blessed with integrity can spot the best things

that others probably miss.

They know the true potential of someone, and they also try to raise them through it.

Well, that's all about the 15 traits of people with true integrity.

Really cool information, isn't it?

I hope you enjoy this short video, if you have something on your mind, please share

your thoughts and experiences in the comments below!

Don't forget to subscribe to our channel and watch all our other amazing videos!

Thanks for watching!


-------------------------------------------

6 foods that can cause kidney stones - Duration: 9:14.


-------------------------------------------

How to fight kidney stones naturally with 5 remedies? - Duration: 9:30.


-------------------------------------------

Swedish FEMINIST / DEGENERATION cringe comp. 4 (w/subs) - Duration: 5:09.

We believe that all children growing up today should have a sound basis when it comes to equality and to know that all humans are equally valuable.

We lay the foundation as to how they will behave when they grow up.

It's all about family, religion, ethnicity, culture, age and all of the discrimination grounds; we try to incorporate all these parts into our work.

In the beginning it was mostly about gender, but then it developed into more things.

First and foremost, we need to fight racism.

Racism is the problem.

Racism's normality.

Strong racist forces.

Racism.

And we're back to 1938 again.

That's how you create racism, this is the building blocks of racism.

And also racism.

Racism.

We know very well that they are fascists.

Neo-fascism, neo-fascism, neo-fascism, one is fascist.

Fascist party.

The nazis.

The 1930s.

Fascism research.

Fascist ideology.

Ultra nationalism.

Nationalism.

Nationalism.

Radical right-wing populism.

Fascism.

Neo-fascism.

Fascist.

Racial biology.

Cultural biology.

The neo-fascist branch.

The Per Engdahl-fascism.

Per Engdahl-fascism.

Fascist comments.

Fascist theory.

What is fascism? Can you find it?

Fascist agenda.

Racist party which threatens Swedish democracy.

Is Voldemort back? Or is Voldemort not back?

The democracy is threatened within the European Union.

The Sweden Democrats are absolutely a fascist party.

Then Hitler came to power.

The coming fascism.

The holocaust.

Textbook fascism.

I stand for my mother, who worked as a cleaner in rich people's houses.

I stand for that women of foreign background should get more opportunities than to wear out their bodies at underpaid jobs.

I stand for all the suburbs (immigrant ghettos) which are in the need of more youth centers, welfare, jobs and housing.

Out with the racists! In with the feminists! Out with the racists!

I feel terrible, all humans are equally valuable.

The world belongs to everyone, huh? Is it not so?

Hello, I am going to show how to eat a banana in public.

The prize that I am holding in my hand will go to a person who sparkles a little bit extra here in the audience.

This is an important prize, because if you dare to sparkle, you dare to be yourself and that we would like to praise.

I will now present the prize which goes to a very colorful person... and that is you!!!

Stand up! Stand up! Stand up!

Congratulations!

What is your name?

Love.

Love, how does it feel to win this prize?

I know!!!

Tell me, where have you gotten your inspiration from?

From all of those who want to be themselves.

It's about how girls and boys perceive and identify themselves, where "feminist" ranked quite high on the girls' side but very low on the boys' side, who tended to identify more as gamers.

We need to have a discussion about this. Feminism doesn't begin with women talking about feminism.

It is people who look like me who have to reflect: 'hold on, should I only give my daughter 90% of the food that my son gets?'

I have to reflect upon that feminism is about women getting the same rights and opportunities as men.

That leaves you with two choices. Either you're a feminist or you're an idiot.


-------------------------------------------

Park Sang-won's one-man protest against free school meals; Park Sang-won's drunk driving incident - Duration: 11:15.


-------------------------------------------

sri rama navami muggulu easy | sri rama navami kolam 2018 | sri rama navami rangoli | TNBN TV Live - Duration: 4:37.

sri rama navami muggulu
