[CS198.2x Week 4] Payment Channels - Duration: 9:03.

We've been referring back to Bitcoin a lot since it's one of the oldest and most popular

blockchain systems, so it's subject to a lot of talk about scalability – especially

since it's been studied extensively and many scalability solutions have been proposed

for it.

Again, to recap Bitcoin: transactions have very long delays.

On average, 6 confirmations on a transaction will take 1 hour.

Transaction fees are also pretty inconsistent – historically, during winter 2018, they became insanely high.

Here's a graph of the daily transaction fees in US dollars per transaction.

As you can see, in winter 2018, transaction fees spiked to 37 dollars per transaction.

Nowadays, it's around just a couple cents.

But the inconsistencies, and also the upper bound on possible transaction fees, just go to show that Bitcoin really isn't economical for low-value items.

I drink a lot of coffee, and I don't want to be paying up to 10 times what I normally

pay.

So the idea here is that since transaction fees are so expensive, clearly, just using

the blockchain at all is expensive.

Why can't two users Alice and Bob make payments between themselves without always needing

to consult the blockchain?

Perhaps they could transact amongst themselves for a bit – perhaps Alice is a regular customer

at Bob's coffee shop – and then only consult the blockchain every once in a while to settle

an aggregate amount.

After all, paying a single transaction fee for what was actually a month's worth of

transactions for example would be pretty good.

We could call this a private payment channel – just between two users, Alice and Bob.

So how would we actually implement a private payment channel between Alice and Bob?

Well, they could maintain a private balance sheet, tracking each of the transactions they

conduct amongst themselves.

Initially, the private balance sheet would start off with however much money both Alice

and Bob have set aside.

So in the diagram below, say Alice starts off with 10 bitcoins, and Bob has 5 bitcoins.

After purchasing an extremely rare cup of coffee, Alice pays Bob 2 bitcoins, and they

both agree to update their balance to the following: Alice now has 8 bitcoins, and Bob has 7.

Say again that Alice is a regular customer at Bob's super high end coffee shop, and

they'd like to settle their net balances weekly, so they'd consult the blockchain

then.

This way, they could avoid having to undergo the high fees and long confirmation times

of regular on chain transactions.

This is what Alice and Bob's payment channel would look like.

First, Alice and Bob open a private balance sheet, letting it be known that this is the

case on the blockchain.

They both start off with some initial balances.

Alice and Bob then make several private transactions amongst each other.

When Alice and Bob want to close their private payment channel later on, they publish it

and their net balances on the blockchain.
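The open–transact–settle lifecycle just described can be sketched as a toy Python class (the class and method names are hypothetical illustrations; a real channel is enforced by on-chain contracts and signatures, not by a shared object):

```python
class PaymentChannel:
    """Toy model of a two-party payment channel (no cryptography)."""

    def __init__(self, alice_deposit: int, bob_deposit: int):
        # Opening the channel: one on-chain transaction locks the deposits.
        self.balances = {"alice": alice_deposit, "bob": bob_deposit}
        self.state = 0  # version number of the private balance sheet

    def pay(self, sender: str, receiver: str, amount: int):
        # Off-chain update: both parties agree to the new balance sheet.
        if self.balances[sender] < amount:
            raise ValueError("insufficient channel balance")
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.state += 1

    def close(self):
        # Closing the channel: one on-chain transaction settles net balances.
        return dict(self.balances)

# Alice (10 BTC) buys a 2 BTC coffee from Bob (5 BTC):
channel = PaymentChannel(10, 5)
channel.pay("alice", "bob", 2)
print(channel.close())  # → {'alice': 8, 'bob': 7}
```

Only the constructor and `close` would touch the chain; any number of `pay` calls happen privately in between.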

So we'd want to be able to create blockchain enforceable contracts between users.

This could be done with smart contracts, and in Bitcoin's case, be written in Bitcoin

script.

This way, we can encode the proper functionality so that neither party in a payment channel

can cheat the other.

In blockchain, we call them payment channels, but we generally also mean the technical mechanism by which we achieve payment channel functionality – Hash TimeLock Contracts, or HTLCs.

HTLCs use tamper-evidence techniques to ensure the integrity of information and that neither party cheats – hence the H in HTLC, standing for Hash.

The TL stands for TimeLock, and is the name for the mechanism by which we can schedule

an action in the future, for example the refunding of transactions.

As for implementation details, we would like to implement this as a contract of some sort on a blockchain system.

The goal of this all is to enable a bi-directional payment channel so that both parties in a

payment channel can pay each other with the guarantees of a contract governing all actions

– including incentives for not cheating.

So let's walk through a short little demo for a payment on a payment channel.

Say two users Alice and Bob set up a payment channel, and this is the initial state – state

0 – of their private balance sheet.

To do so, they need to create essentially a 2-of-2 multisig between them, and each pay

in their initial amounts.

For example, here, Alice has 10 bitcoins, and Bob has 0, for a total of 10 bitcoins

in this payment channel.

There exists an issue of trust within the payment channel though.

We don't want to require Alice to completely trust Bob in the payment channel, so we can

design around that.

We won't get too deep into the implementation details of HTLCs, but suffice it to say that

at any point in time, either Alice or Bob can attempt to exit out of their payment channel.

What they need for this is to get both parties to agree, or to wait a large number of blocks,

say 1000 blocks.

So say Alice pays Bob 3 bitcoins.

Now we're in state 1, where Alice has 7 bitcoins, and Bob has 3.

Again, this is all done off chain, so for this transaction, neither Alice nor Bob had

to incur high transaction fees or long confirmation times.

So if both Alice and Bob are happy with their transactions, they can at any point post back

to the blockchain to settle their final balances.

And this is done with each others' signatures and secret information.

On the other hand, to bring back the issue of trust, say Alice and Bob don't trust

each other, and for good reason.

So Alice paid 3 bitcoins earlier, but now she wants to revert back to an earlier state,

to before she paid Bob the 3 bitcoins.

She does this by attempting to exit the HTLC with their previous balances.

However, the only way she can do this without Bob's signature is to wait 1000 blocks.

And at any time, Bob can see that Alice is trying to cheat him out of his money, and then claim all the funds in the channel – an incentive to prevent cheating.
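The exit-and-challenge mechanic above can be sketched as a toy settlement function (purely illustrative; a real HTLC enforces this with Bitcoin script, signatures, and hashed secrets rather than version numbers in a dict):

```python
DISPUTE_WINDOW = 1000  # blocks a unilateral exit must wait before settling

def settle(exit_state, latest_state, blocks_waited, challenger=None):
    """Toy HTLC exit: posting a stale state can be punished within the window."""
    if challenger and exit_state["version"] < latest_state["version"]:
        # Cheating detected: the honest party claims ALL funds in the channel.
        total = sum(v for k, v in latest_state.items() if k != "version")
        return {challenger: total}
    if blocks_waited >= DISPUTE_WINDOW:
        return {k: v for k, v in exit_state.items() if k != "version"}
    return None  # still inside the dispute window

# Alice tries to exit with stale state 0 after having paid Bob in state 1:
state0 = {"version": 0, "alice": 10, "bob": 0}
state1 = {"version": 1, "alice": 7, "bob": 3}
print(settle(state0, state1, blocks_waited=42, challenger="bob"))  # → {'bob': 10}
```

If nobody challenges for the full window, the posted state simply settles – which is why cooperative closes, signed by both parties, skip the wait entirely.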

Now some key observations from our payment channel demo, and of payment channels in general.

Firstly, we have a mechanism for countering cheating.

If Alice and Bob are in a payment channel and one of them tries to cheat, the other can always override and take all the money in the deposit.

And that's assuming at least one of the two will try to cheat the other.

If Alice and Bob always cooperate,

then they can stay in their payment channel however long they like, and keep transacting.

Alice and Bob never have to touch the blockchain, except for when they want to create the payment

channel in the first place, and at the very end – whenever they choose – to settle

their final balances after their series of transactions.

And of course, the main motivation for payment channels: they enable huge savings in terms of how much we have to interact with the blockchain.

We saw that the blockchain was inherently slow, and sought to use it as infrequently

as possible.

With payment channels, we only need two transactions on the blockchain: one to initiate a payment

channel, and one to settle the final state.

With only two transactions on the blockchain, we can support any arbitrary number of local

transactions between two users Alice and Bob.

And depending on how many times and how frequently Alice and Bob transact, the scalability could

be pretty high.

There remain some issues though.

Firstly, both participants Alice and Bob need to have capital locked up in the HTLC before

they can send money to each other.

And the money is locked such that it can ONLY be used in the HTLC, meaning that if Alice

transacts with many people other than Bob, then she can't afford to lock up all that she owns.

Also, she has to make sure that she doesn't run out of capital in the existing HTLC.

If she locks up 10 bitcoins to begin with, and purchases a 2 bitcoin coffee every morning

from Bob, then she should probably look into locking up more capital next time she and

Bob enter a HTLC.

It benefits the underlying blockchain most when Alice and Bob conduct as many transactions as possible off chain before settling the final balance on the blockchain, since that was our main goal.

So, we would want HTLCs to support bi-directional payments as much as possible.

And that ties right into another issue, which is that with the payment channel enabled by

an HTLC, we're only making it easier for Alice and Bob to send money between themselves.

So what if Alice wants to send money to Charlie, but doesn't have or want a payment channel

set up between herself and Charlie?

Especially if Alice only intends on transacting once or twice with Charlie, it's not worth it to set up a payment channel.

We could potentially set up a network of payment channels.

As long as Alice is connected to Charlie somehow in the network, she can send him money.


-------------------------------------------

[CS198.2x Week 4] Decrease Transaction Size - Duration: 11:21.

Another alternative we mentioned earlier was that of decreasing the size of transactions

themselves.

The implication is that if we decrease the size of transactions, and keep the block size

constant, then we can fit more transactions into the same size blocks.

And you can see that in the diagram below.

Both blockchains are 5 gigabytes after 2 years, but the one with smaller transaction size

has twice the number of transactions.
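The capacity arithmetic here is straightforward; a minimal sketch with illustrative sizes:

```python
BLOCK_SIZE = 1_000_000  # bytes, held constant (illustrative figure)

def txs_per_block(tx_size: int) -> int:
    """How many transactions fit into one block of fixed size."""
    return BLOCK_SIZE // tx_size

# Halving the transaction size doubles each block's capacity:
print(txs_per_block(500))  # → 2000
print(txs_per_block(250))  # → 4000
```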

This solution is currently a bit more popular with people in general, and we'll be going

over two main ways we can decrease transaction size, which are SegWit and recursive SNARKs.

SegWit is short for Segregated Witness, and was originally created to solve an issue in

Bitcoin called transaction malleability, which we'll mention later on.

Beyond solving transaction malleability though, SegWit also allowed Bitcoin to scale up by

decreasing effective transaction size.

The way SegWit decreases transaction size is by separating – or segregating – digital

signatures from within each transaction.

Recall (perhaps from our first course) that signatures in Bitcoin were kept in the scriptSig – the input script of each transaction.

The idea behind SegWit is that since digital signatures take up so much space in each transaction,

there's no reason they need to be there after they've been used for verification.

After all, after one use, the digital signatures don't provide any value, since they're

only there in the first place for recipients of transactions to prove that they are authorized

to spend from a previous transaction output.

There's no reason to keep the signatures except for the first time, so let's just

remove all the signatures from the transaction data.

From previous sections, we saw that transaction size is on average about 546 bytes, so if

we can decrease that size, that would be awesome.

The idea for SegWit was to move the signatures to a separate add-on structure outside of the scriptSig – to what's called a segregated witness.

In the diagram to the left, you can see that the signatures are located at the end of the transaction, rather than in the input script – the scriptSig.

New nodes would see the new scriptSigs that don't contain signatures and then know to look instead in the segregated witness for the signatures.

Assuming the signatures are valid, then the transaction is valid.

Old nodes, on the other hand, would find these new scriptSigs and think that whoever created the transactions is crazy.

Without the signature contained in the scriptSig like before, new SegWit transactions would seem to be unsafe – though the signature's just in a different place, where old nodes don't know to look.

But in the end, it's not their bitcoin, and other users are free to do with their bitcoin as they please. So old nodes confirm SegWit transactions as valid, and forward them to other nodes.

As you can see from both cases: from the perspective of both old nodes and new segwit-enabled nodes,

the transaction was seen as valid.

So, SegWit is compatible as a soft fork.

One issue now though is that because we segregated signatures from other transaction data, the

blockchain doesn't have any evidence that the correct signatures were included in their

respective transactions!

To fix this, Segwit also comes with a change to the regular merkle tree structure of Bitcoin.

Instead of having a merkle tree just with transactions, SegWit-enabled miners construct

merkle trees with one half transactions, and one half

the transactions' segregated witnesses – in a sense creating a mirrored merkle tree.

So this way, we have information about transactions and their segregated witnesses all contained

within the block header, giving us back all the beautiful properties of tamper evidence.
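The mirrored construction can be sketched with a toy merkle root over both halves (double-SHA256 as in Bitcoin, but the commitment layout below is an illustration, not the actual consensus encoding):

```python
import hashlib

def h(data: bytes) -> bytes:
    """Double-SHA256, as Bitcoin uses for merkle nodes."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves):
    """Toy merkle root: hash pairs upward until one node remains."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

txs = [b"tx1-no-sigs", b"tx2-no-sigs"]
wits = [b"tx1-signatures", b"tx2-signatures"]

tx_root = merkle_root(txs)
witness_root = merkle_root(wits)
# Committing to both roots puts transactions AND witnesses under the header,
# restoring tamper evidence for the segregated signatures.
block_commitment = h(tx_root + witness_root)
print(block_commitment.hex()[:16])
```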

So pros and cons of SegWit.

These were debated quite a bit before SegWit was actually implemented.

Firstly, SegWit fixes transaction malleability, which we mentioned briefly before.

In Bitcoin, unique transaction IDs are calculated by taking the hash of a transaction, and before

SegWit, that included the signatures.

The only way for attackers to change a transaction ID without changing the underlying transaction

is by changing the signatures.

And there are ways to change the signature, though it's a bit out of scope.

It's a cryptographic vulnerability.

Since SegWit removes signatures from transaction data, signatures are no longer used to calculate

a transaction ID, thereby fixing transaction malleability.
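A toy model of why excluding signatures from the ID fixes malleability (plain SHA-256 stands in for Bitcoin's real transaction serialization and double hashing):

```python
import hashlib

def txid_legacy(tx_data: bytes, signature: bytes) -> str:
    # Pre-SegWit: the signature is part of the hashed data.
    return hashlib.sha256(tx_data + signature).hexdigest()

def txid_segwit(tx_data: bytes, signature: bytes) -> str:
    # SegWit: the witness is excluded, so the txid cannot be malleated.
    return hashlib.sha256(tx_data).hexdigest()

tx = b"alice pays bob 2 BTC"
sig_a = b"valid signature, encoding A"
sig_b = b"valid signature, encoding B"  # a malleated encoding of the same signature

# Same transaction, two different pre-SegWit IDs:
print(txid_legacy(tx, sig_a) != txid_legacy(tx, sig_b))  # → True
# Under SegWit, the ID is stable regardless of signature encoding:
print(txid_segwit(tx, sig_a) == txid_segwit(tx, sig_b))  # → True
```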

This allows further blockchain scalability solutions, such as the Lightning Network and

sidechains to work, both of which we'll talk about in the coming sections.

Another pro is that with Segwit, we have a soft fork instead of a hard fork – compared

with what would have happened if we just directly increased the block size.

One of the main motivating factors for implementing SegWit was that Bitcoin Core is generally

very conservative, and would want to avoid a hard fork at all cost.

And that's what they did with SegWit.

Some other pros are that it's not subject to slippery slope arguments.

SegWit is a one time fix; it's not like you can just keep removing data from transactions

to decrease transaction size.

So it's not like what we could do with increasing block size, where there wasn't really a

cap to how large we could make it.

Also, of course, the efficiency gains with SegWit are pretty nice.

Smaller transactions mean that there's less for miners to parse through.

And we also have a smaller blockchain size for the amount of transactions we want to

represent.

As for cons,

we know that SegWit is only a one time linear capacity increase.

Since we can only remove signatures from transactions once – it's not like we can keep removing signatures – that's where the "one-time" comes from.

And the increase is only linear, because decreasing the transaction size by removing signatures

only increases the number of transactions in a block linearly, with respect to the block

size.

Another con is that in implementation, SegWit isn't the prettiest.

It's proven to be very complicated and ugly, with over 500 lines of code.

Compounding the difficulty of implementation is the fact that wallets would have to implement SegWit as well, and that these wallet software developers might not get their SegWit implementation right the first time, or might take a while to upgrade – especially if the team behind it is small – and that could mean some losses for the average Bitcoin user.

And finally, SegWit isn't the only way to fix transaction malleability.

So as the Bitcoin scalability debate reached its climax, Bitcoin split into Bitcoin Cash,

which increased the block size to 8 megabytes, and Bitcoin, which kept its block size at

1 megabyte, but enabled Segregated Witness.

This was on August 1st, 2017, at block number 478,558.

The next on-chain vertical scalability topic we'll cover hinges on the concept of zero

knowledge proofs, an advanced topic in cryptography.

Zero knowledge proofs are a way to prove to someone that you know something, without revealing

what exactly you know.

So, the recipient of the proof has "zero knowledge" of what you know, except for

the fact that they know you know something.

You can think of it like how you authenticate yourself on websites where you have to log

in with a username and password.

If the website stored your actual password, that would be a horrible security practice, since all it takes is one data leak and all your users' identities are compromised.

Instead, websites usually store a hash of your password.

If the password you input when you login hashes to the same string as the website's saved

password hash, then you've been authenticated!

The website knows that you know your password, but they themselves don't know your password

– only the hash of it.
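The login analogy in code – a sketch only, since real systems also add salts and use a deliberately slow hash such as bcrypt:

```python
import hashlib

def hash_password(password: str) -> str:
    return hashlib.sha256(password.encode()).hexdigest()

# The website stores only this at signup -- never the password itself:
stored = hash_password("hunter2")

def login(attempt: str) -> bool:
    # The site learns that you know the password, not what it is.
    return hash_password(attempt) == stored

print(login("hunter2"))  # → True
print(login("hunter3"))  # → False
```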

Note that this analogy was to explain the concept of zero knowledge proofs, and is NOT

meant to be taken at any deeper level.

And so with our high level understanding of zero knowledge proofs, we can begin to understand

zk-SNARKs, which stands for:

Zero Knowledge Succinct Non-Interactive Arguments of Knowledge –

– a pretty big mouthful.

Instead of sending a transaction from Alice to Bob, Alice can replace that transaction

with a proof that she has sent a valid transaction to Bob, and the corresponding changes to a

virtual balance sheet.

This is a lot smaller than the original transaction itself.

And also, with the smaller size, and the way we construct and verify these proofs, any

machine in the network can verify the proof in milliseconds.

That's the main idea of zk-SNARKs.

And it gets better; we can introduce a recursive structure!

A miner can merely include a single proof that they validated all the other proofs and

changes to the state of the network and everyone's balances.

Instead of having transactions inside blocks, our new block construction would just have

the following components:

(1) The root hash of the content of the ledger (2) proofs for all valid transactions that

have changed the ledger to the current state, and (3) proof that the previous block's

proof was valid.
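Those three components can be sketched as a data structure (field names are hypothetical; real recursive-SNARK designs differ in many details):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SnarkBlock:
    """Toy block layout for a recursive-proof chain: no raw transactions."""
    ledger_root: bytes        # (1) root hash of the current ledger contents
    tx_proofs: List[bytes]    # (2) proofs that valid txs produced this state
    prev_proof: bytes         # (3) proof that the previous block's proof held

AVG_TX_SIZE = 546  # bytes, from the text
PROOF_SIZE = 288   # bytes, from the text

block = SnarkBlock(b"\x00" * 32, [b"\x01" * PROOF_SIZE] * 2, b"\x02" * PROOF_SIZE)
# Each proof is roughly half the size of the transaction it replaces:
print(round(AVG_TX_SIZE / PROOF_SIZE, 2))  # → 1.9
```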

All in all, this would allow anyone in the world to verify the blockchain in under 1

second, and also allows for twice as many transactions per block.

The average transaction size we saw earlier was 546 bytes – and compare this with the

average proof size, which is 288 bytes.

For some closing thoughts, let's look back at our previous slide.

Back to Alice and Bob, Alice generates a proof that she can send a valid transaction to Bob.

She includes this proof and any changes to a balance sheet instead of including a transaction,

and any machine in the network can verify the proof in milliseconds.

However,

there do exist some pretty big drawbacks.

Firstly in practice, these proofs are very time consuming to generate, and could potentially

take hours.

Secondly, part of the proof construction requires a trusted setup between computers.

Trusted setup, perhaps, like trusted execution environments, which we discussed in week 1.

Ultimately the time it takes to generate proofs counteracts whatever scalability benefits

we saw earlier, with the reduced data size, and also, the trusted setup violates the trust

assumptions especially in public blockchains.

So we've been playing with a bunch of parameters, such as block size, size of transactions,

and block rate, but we haven't really been able to reach amazing numbers.

It's clear that we need to change something else, but we've run out of parameters to play with.

The key observation is that all the scaling solutions we've seen so far are layer 1

scaling solutions.

So in the next section, we'll look at layer 2 solutions.

Let's just not use the blockchain.


-------------------------------------------

[CS198.2x Week 4] Decrease Block Time - Duration: 2:49.

So what we were trying to do with our naive scaling solution was to decrease the block

creation time.

Size of the blockchain will increase regardless, since we're assuming blocks stay a constant

size and have an increased velocity.

We'll look at this problem later.

But what are some other outcomes of this naive solution that we can take a look at with a

more constructive outlook?

Recall that another problem with the naive solution was that there were a lot of naturally

occurring forks.

With our standard fork resolution policy of taking the longest chain as the canonical

chain, we'd be wasting a lot of work if the block speed increased.

All these orphaned blocks that are part of chains that weren't in the end the longest

chain all represent work that was in the end wasted in a sense.

So the problem now is: how can we account for the existence of an increased number of naturally occurring forks when we decrease the block creation time?

How do we avoid wasting all of this work?

Instead of just increasing the speed of blocks and doing nothing else, the observation here

is that we can increase the speed of blocks by specifically decreasing the difficulty

of the Proof-of-Work problem, and also by considering the Proof-of-Work chain with the

most weight, rather than simply the longest chain.

And that was the idea behind the GHOST, or Greedy Heaviest Observed SubTree, protocol, used in Ethereum's Proof-of-Work protocol.

The way it works is that in the GHOST protocol, blocks that are orphaned are called uncle

blocks.

And uncles up to 7 generations deep from the most recent block also get block reward.

Specifically, uncle blocks get 87.5% block reward, and the children of uncle blocks – appropriately

called nephew blocks – get 12.5% block reward.

And these blocks are used to calculate a chain's weight.
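Chain weight and rewards under GHOST can be sketched as follows (a toy: the 87.5% and 12.5% figures come from the text, the nephew bonus here is paid to the block that includes the uncles, and real Ethereum's uncle rules differ in detail):

```python
def chain_weight(chain):
    """Toy GHOST weight: each block counts itself plus its included uncles."""
    return sum(1 + len(block["uncles"]) for block in chain)

def block_rewards(block, full_reward=1.0):
    # From the text: uncles get 87.5% of a block reward, nephews get 12.5%.
    rewards = {block["miner"]: full_reward + 0.125 * full_reward * len(block["uncles"])}
    for uncle_miner in block["uncles"]:
        rewards[uncle_miner] = rewards.get(uncle_miner, 0) + 0.875 * full_reward
    return rewards

chain = [
    {"miner": "m1", "uncles": []},
    {"miner": "m2", "uncles": ["m3", "m4"]},  # two orphaned blocks referenced
]
print(chain_weight(chain))      # → 4: heavier than its length of 2
print(block_rewards(chain[1]))  # → {'m2': 1.25, 'm3': 0.875, 'm4': 0.875}
```

So a chain that references orphaned work outweighs an equally long chain that ignores it, and the orphans' miners still get paid.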

In the end, GHOST reduces transaction time since the blocks are faster.

It also decreases the incentive for pooled mining.

By rewarding uncle blocks, we've reduced the need for being exactly the first new block

on top of the longest chain.

Also, since block times are so fast, sufficiently fast miners would want to reduce the overhead

of communicating with pools anyways.

Ethereum had a period when it had 17 second block times, but now it's more around 13 to 15 seconds.

If we round to a 15 second block time, then that means in 60 minutes – the length of time it takes for 6 Bitcoin blocks to be created – Ethereum created 240 blocks. And though it's been the same amount of time – 60 minutes – for both the Bitcoin and Ethereum blockchains to add blocks, it's clear to see that 240 confirmations in Ethereum might be more secure than 6 confirmations in Bitcoin.


-------------------------------------------

[CS198.2x Week 4] Scaling Background - Duration: 11:45.

One property we look for when gauging the scalability of a blockchain system is its

ability to deal with an increased transaction volume.

To be scalable, a blockchain should be able to function with a higher transaction velocity

– so we're looking for a higher TPS, or transactions per second.

The definition here is pretty self explanatory in its name.

Being able to handle a greater volume of transactions per second means that our blockchain system

would be able to handle more transactions with a higher velocity.

Another property we're looking for is the speed with which we can update our distributed

ledger – and in the case of blockchain, we call this the block time: the average time

it takes for a new block, or update, to be appended to the blockchain.

And as with enabling a higher volume of transactions, the reason for wanting faster block times

is pretty easy to see as well.

If I'm buying a coffee with Bitcoin, there's no guarantee that my transaction will go through,

especially with so many other transactions floating around with potentially higher transaction

fees.

You may know that the average block time in Bitcoin is roughly 10 minutes long, and generally

when making a transaction, we would want to wait six confirmations to be confident – with

high probability – that our transaction has gone through and has been finalized.

This isn't scalable though, at least in the sense that I don't want to wait an hour

every morning to get my coffee.

And one side note: as we know, blockchain is a decentralized system.

As such, especially with large public networks, we want to lower the barrier to entry if possible.

We're aiming for decentralization, and a flat network topology is preferred – one

where anyone who wants to join can join.

One way we can do this is to pay particularly close attention to the size of the blockchain.

Historically, if there's a high storage requirement for users to join the network,

then users might be disincentivized to join the network completely.

Perhaps because they don't have that much storage space, or perhaps they just can't

justify using so much of their precious storage space for a blockchain – which has no immediate

value to them.

For example, in Bitcoin, many users don't want to run full nodes since as of late August

2018, the blockchain is roughly 180GB.

Users can't simply run a full node on their phone; and most users won't want to run

a full node on their laptops or personal computers either – due to the blockchain's immense

size.

And it's become that size in only a couple years.

What happens if the blockchain's around for another couple decades?

Or even centuries?

If the blockchain continues to grow at the same rate it's been growing at, then it

can very easily become very large and unmanageable in the future.
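A back-of-the-envelope projection makes the concern concrete, under the simplifying assumption that every block is a full 1 megabyte arriving every 10 minutes:

```python
# Rough growth projection for a chain of full 1 MB blocks every 10 minutes.
MB_PER_BLOCK = 1
BLOCKS_PER_HOUR = 6
HOURS_PER_YEAR = 24 * 365

gb_per_year = MB_PER_BLOCK * BLOCKS_PER_HOUR * HOURS_PER_YEAR / 1000
print(f"~{gb_per_year:.1f} GB of new chain data per year")  # → ~52.6 GB
for years in (10, 50, 100):
    print(f"after {years} years: ~{gb_per_year * years / 1000:.2f} TB")
```

Half a terabyte per decade for full-block Bitcoin alone – before accounting for any block size increases.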

So, in order to make it easier for nodes to join the network in the future, whether they be run by dedicated or casual users, we must design blockchain systems with storage size in mind.

Fundamentally, there's a scalability trilemma here.

This was proposed by Ethereum research, and claims that in any blockchain system, we can

only have two of the three properties shown here: decentralization, security, and scalability.

We formalize the notion of decentralization by the amount of resources everyone has, on

average, in the network.

So decentralization in the case of the trilemma is the system being able to run in a scenario

where each participant in the network has access to on average the same amount of resources.

Scalability is defined by a system's ability to process some increasingly large volume of transactions at increasing speed.

So given that everyone in the network has on average the same amount of resources, how fast can we make the system?

Security is defined as a system's ability to withstand attackers with up to a certain

amount of resources.

Potentially, they could have resources on the order of the total amount of resources in the network.

That's a lot of formalisms, but it's easy to see that tradeoffs are inherent in these

types of systems.

For example, if we increase the number of participants in a network, we have to consider all the more how including more transactions in a block or speeding up the block confirmation time might cause security to suffer.

Or if we want to make a system have much faster block times, without adjustment anywhere else,

we could suffer in security since faster blocks means more orphaned blocks; and perhaps if

someone has the right amount of resources, they could tilt the system in their favor

and so decentralization would take a hit.

To understand all of that more, let's look at further detail and do a bit of math.

In the graph on the screen, we have the size of Bitcoin transactions over time.

The graph is a bit outdated, but the scalability concerns still apply.

We can see that on average, Bitcoin transactions have a size of around 546 bytes.

And here, we have a similar graph, but for Bitcoin block size.

The system was designed for 1 megabyte blocks, and we can see that reflected in the graph.

So here's a quick calculation based on the numbers we've collected so far for Bitcoin.

From the previous slides, we have an average of about 546 bytes per transaction.

The current blocksize is 1 megabyte.

And the block time in Bitcoin is 10 minutes on average.

Therefore, we can compute the sustained maximum transaction volume in transactions per second.

By simple dimensional analysis, we have 1 megabyte per block, times 1 transaction per

546 bytes, times 1 block every 10 minutes, and we get a final value of 3.2 transactions

per second.
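That dimensional analysis can be reproduced in a few lines (taking 1 megabyte as 2^20 bytes, which is what makes the result come out to 3.2 rather than ~3.05):

```python
# Sustained maximum throughput of Bitcoin, by dimensional analysis.
# Figures from the text: 1 MB blocks, ~546-byte transactions, 10-minute block time.
BLOCK_SIZE_BYTES = 1024 * 1024  # 1 megabyte (binary)
AVG_TX_SIZE_BYTES = 546
BLOCK_TIME_SECONDS = 10 * 60

txs_per_block = BLOCK_SIZE_BYTES / AVG_TX_SIZE_BYTES  # ~1920 transactions
tps = txs_per_block / BLOCK_TIME_SECONDS              # transactions per second

print(f"{tps:.1f} transactions per second")  # → 3.2 transactions per second
```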

That's not too hot.

Compared with some other traditional payment systems, Bitcoin is way behind in terms of

speed.

Bitcoin has an average of about 3 transactions per second, and we just calculated in the

previous slide that it has a max of 3.2 transactions per second.

On the other hand, Paypal has an average of 150 transactions per second, with a maximum

of 450 transactions per second.

And even more is VISA, which has an average transaction rate of around 2,000 transactions

per second, and has a theoretical high load of 56,000 transactions per second.

Comparing that to Bitcoin's 3.2 transactions per second, the difference is definitely quite

drastic.

So that was the situation at hand.

Suppose we now want to make our transaction rate comparable to that of VISA, so we can

finally realize our dream of using Bitcoin to buy coffee and not wait an hour for the

transaction to be finalized.

To increase our transactions per second, we have two fundamental options.

Looking at the fraction transactions over seconds, it's very easy to see that in order

to increase TPS, we could increase the transaction volume, or decrease the block time.

And that's just because TPS is directly proportional to the transaction volume we

have, and inversely proportional to that of the block time.

To increase the volume of transactions, we could go a number of ways. We could decrease the size of transactions, and thus be able to fit more transactions into each block, given an unchanged block time.

Alternatively, we could also increase the size of blocks, so that each block could hold

more transactions, and thus at each block time, we'd have more transactions.

On the other hand, to decrease the block time, there's not much else to say.

We increase the rate at which we create blocks.

There are definitely drawbacks and considerations for each of these approaches, so that's

what we're going to be looking into in later sections.

In terms of different techniques with which we can scale, there are two fundamental options.

Setting aside blockchain scalability for now, and just looking at the big picture of how

we scale systems in general, we can see that we can either scale vertically or horizontally.

Vertical scaling, or scaling up, implies adding more resources so that each machine can do

more work.

Traditionally this is done by adding more memory, compute, or storage to a particular

machine.

Horizontal scaling, or scaling out, implies adding more machines of the same capability,

and to add more distributed functionality.

For example, imagine adding more machines to your compute cluster if we're talking

about cloud.

And combining ideas from both vertical and horizontal scaling simultaneously is diagonal

scaling.

Applying this intuition to blockchain, we can categorize scaling efforts.

For vertical scaling, there have been efforts to increase the block size or decrease the

block time.

There have also been alternative fork resolution policies to Bitcoin's longest chain wins

policy.

For example, we have the GHOST, or greedy heaviest observed subtree, protocol.

And also, there's the idea of setting up payment channels between particular participants

in the network.

For horizontal scaling, there have been many projects focusing on sharding, a method of

distributing a database, and also on sidechains, as opposed to keeping everything on a singular

blockchain.

And finally for diagonal scaling, we've seen projects like Plasma and Cosmos, which

aim to not only make individual blockchains more efficient, but also to create new value

and connect these blockchains together.

Besides the traditional standpoint of scaling up and out, there's another useful model

to view scaling solutions, which is in layers.

Yep that's right –

– blockchains have layers.

We'll cover this part pretty quickly since it's more insightful to see examples of

these solutions, but here's a quick rundown.

Layer 1 scaling solutions refer to those that change the blockchain and its protocol itself.

This could mean modifying parameters of the blockchain – like block size, block speed,

or the hash puzzle – or changing a blockchain's consensus mechanism.

These would all be layer 1 scaling solutions.

For example, if you remember from week 1, you might be able to recognize that Casper

the Friendly GHOST, Correct by Construction, is a layer 1 scaling solution, since it would

fundamentally change the infrastructure and operation of the Ethereum blockchain.

On the other hand, we have layer 2 scaling solutions, which push expensive computation

off the blockchain.

This is also called off-chain scaling.

Generally, layer 2 solutions are easier to execute, since they don't require a complete

overhaul of the underlying blockchain the way layer 1 solutions do.

For example, also from week 1, is Casper the Friendly Finality Gadget, the Proof-of-Stake

overlay on top of Proof-of-Work.

And that's a layer 2 scaling solution – it's implemented simply as a smart contract on

top of the existing Ethereum infrastructure.

We'll go into other examples of layer 2 scaling, some of which include side chains

and payment channels.

Going forward, we choose to organize blockchain scaling solutions as vertical and horizontal

as well as layer 1 and layer 2, so knowing the distinctions is important in identifying

solutions.

For more information >> [CS198.2x Week 4] Scaling Background - Duration: 11:45.

-------------------------------------------

[CS198.2x Week 4] Lightning & Raiden - Duration: 5:28.

In the last video, we saw that a payment channel between Alice and Bob only makes

it easier for them to send money between themselves.

If Alice wants to send money to Charlie, and doesn't want to set up a payment channel

between them, then we have an issue here.

What we could do instead is set up a network of payment channels.

As long as Alice is connected to Charlie somehow in the network, then she can send him money.

This is an example of what a payment channel network would look like.

Alice on the left side of the network here is able to send money to Charlie on the right

side of the network through this hypothetical payment channel network, where her payment

goes first to Bob, then to Eve, then finally to Charlie.
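The route-finding step can be sketched as a simple graph search. This toy example uses the channels from the hypothetical network above and plain breadth-first search; real payment-channel routing also has to weigh fees and channel capacities:

```python
# Toy payment-channel graph and route finding (illustrative only).
from collections import deque

channels = {"Alice": ["Bob"], "Bob": ["Alice", "Eve"],
            "Eve": ["Bob", "Charlie"], "Charlie": ["Eve"]}

def find_route(src, dst):
    """Breadth-first search for a path of payment channels from src to dst."""
    prev, queue = {src: None}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            # Walk the predecessor links back to the source.
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for peer in channels.get(node, []):
            if peer not in prev:
                prev[peer] = node
                queue.append(peer)
    return None  # no connecting chain of channels exists

print(find_route("Alice", "Charlie"))  # ['Alice', 'Bob', 'Eve', 'Charlie']
```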

The main problem we have to address here is that of security.

How do we ensure that capital is being transferred along the payment channel network?

Well thankfully, with just some small additions on top of our HTLC construction, we can trustlessly

send money across a network of payment channels governed by HTLCs.

And that was the innovation of the Lightning Network paper, titled The Bitcoin Lightning

Network: Scalable Off-Chain Instant Payments, written by Joseph Poon and Thaddeus Dryja

in early 2016.
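As a concrete sketch of the hash-lock-plus-timeout idea behind an HTLC – hypothetical helper functions, not actual Lightning code; a real HTLC enforces these conditions in a Bitcoin script rather than in application code:

```python
import hashlib
import time

def make_htlc(preimage: bytes, timeout_seconds: float):
    """Lock a payment against the SHA-256 hash of a secret preimage."""
    return {"hashlock": hashlib.sha256(preimage).digest(),
            "expiry": time.time() + timeout_seconds}

def can_claim(htlc, candidate_preimage: bytes) -> bool:
    """The recipient claims only by revealing the preimage before expiry."""
    return (hashlib.sha256(candidate_preimage).digest() == htlc["hashlock"]
            and time.time() < htlc["expiry"])

secret = b"opensesame"
htlc = make_htlc(secret, timeout_seconds=3600)
print(can_claim(htlc, secret))    # True: correct preimage, in time
print(can_claim(htlc, b"wrong"))  # False: wrong preimage
```

Chaining such locks across hops, all keyed to the same hash, is what lets a multi-hop payment either complete at every hop or fail at every hop.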

So what exactly are the scalability benefits of the Lightning Network?

Well, if we assume that there is enough capital in payment channels, then we can pretty much

make payments instantly.

We don't have to wait for confirmation times on the main blockchain since we're doing

everything off chain.

Transactions could occur as fast as the communication delay across the network, since as we saw

earlier, if Alice wants to transact with Charlie, her transaction might have to make several

hops through other payment channels in order to go through.

And since we're only using the main Bitcoin blockchain as an arbiter to settle disputes

and to close out payment channels, we reduce the load on the main bottleneck – the main

Bitcoin blockchain.

So, there would be far fewer transactions on the blockchain.

What this means is that instead of the 3 transactions per second that we calculated in the earlier

section, the Bitcoin network could potentially support tens of thousands of transactions

per second.

Since we're delegating payments to simple bookkeeping that's done in each payment

channel, we avoid the main bottleneck – the Bitcoin blockchain.

And in practice, depending on the choice of when and between which nodes to have payment

channels, we could keep a very high percentage of transactions, upwards of 99%, off-chain!
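A back-of-the-envelope model for that figure: opening and closing a channel each cost one on-chain transaction, while every payment in between stays off-chain (a simplification that ignores disputes and forced closes):

```python
def on_chain_fraction(payments_per_channel: int) -> float:
    # One on-chain transaction to open the channel, one to close it;
    # all intermediate payments are off-chain.
    return 2 / (payments_per_channel + 2)

print(on_chain_fraction(198))  # 0.01 -> 99% of activity stays off-chain
```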

And keeping transactions off-chain not only increases the scalability for the network

as a whole, but it also has some nice outcomes for the users as well.

First of all, transactions would be very fast, meaning that it's now feasible for me to

get a coffee and not wait 60 minutes for the confirmations.

Also, Lightning Network transaction fees would be several orders of magnitude cheaper than

that of the normal Bitcoin network, since we'd be doing everything off-chain.

And we'd only have to pay more expensive fees upon opening and closing a payment channel

– since these would be actual transactions on the Bitcoin blockchain.

And in terms of speed, we're only really limited by the packet transfer overhead, and

that's not really an issue since it's very fast.

So instead of 3 tps, or 6 or 10 with incremental changes, we could literally have tens of thousands

of tps with the Lightning Network.

As great as the Lightning Network sounds, of course we have some immediate issues we

have to consider.

First of all, payment channels could be very expensive to operate.

We identified earlier that payment channels could be a problem if most payments occur

only in a single direction; for example if Alice buys a coffee from Bob every morning,

but Bob never pays Alice for anything.

In these cases, nodes would need to keep very large amounts of capital locked up in payment

channels, to avoid running out of capital in the payment channel and having to close

it and open a new one with more capital.
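To see how quickly a one-directional channel eats its locked capital, here's a toy calculation (hypothetical numbers):

```python
# Toy one-directional channel: Alice pays Bob a fixed amount per day
# out of the capital she locked up when the channel opened.
def days_until_exhausted(alice_deposit: int, payment_per_day: int) -> int:
    # Once Alice's side hits zero, the channel must be closed and
    # re-funded with a fresh pair of on-chain transactions.
    return alice_deposit // payment_per_day

print(days_until_exhausted(100, 5))  # 20 days before an on-chain refresh
```

The larger the locked deposit, the less often the channel touches the chain – which is precisely what favors capital-rich nodes.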

There's also a tendency toward strong centralization.

Only nodes with significant capital can afford to hold payment channels for long, since they

can afford to allocate a lot of capital to each payment channel.

Larger payment channels would get settled less often on the main blockchain, meaning

that they could offer lower transaction fees.

Other users would see this and would want to use these payment channels to avoid fees.

And so these payment channels controlled by more capital would get a disproportionate

amount of traffic.

And finally, there's the realization that less capital is required with fewer nodes on

the network.

So, there's a tendency towards a hub and spoke network topology.

Perhaps large banks would open up many payment channels to other banks or brokers, themselves

holding a lot of capital.

And from the perspective of Bitcoin's values of decentralization, this is probably not

so good for the politically minded.

Of course, the idea of payment channel networks isn't limited to just the Lightning Network

for Bitcoin.

There's also a comparable technology for Ethereum, called Raiden.

The idea is mainly the same – to support a network of payment channels – but there are

some implementation differences, especially given the existing differences between Bitcoin and

Ethereum.

The most basic differences to spot are that Raiden would be implemented as a smart contract,

and Raiden nodes in the Ethereum network would allow for ERC20 compliant token transfers

between users.


-------------------------------------------

How to Download CyberLink Power Director & Remove Watermark In Hindi * 2018 - Duration: 5:13.

Please Hit Like

Comment Please

Share or Send this video

Subscribe please


-------------------------------------------

萌えろお兄ちゃん!【NieR Replicant】Part3その12 - Duration: 10:01.

【NieR Replicant】Part3-12


-------------------------------------------

Lyrical Video: Shahjaan Dawoodi ^ Yaade Tara Man Dil Daat ^ Saleem Sabit ^ Single Song 2018 - Duration: 6:09.


-------------------------------------------

Tiết Lộ Đoạn Kết Video Thử Đắp Kín Bùn Lên Xe Exciter Thử Lòng Bạn - Miền Tây Vlogs Tập 280 - Duration: 4:01.

Welcome to Mien Tay Vlogs


-------------------------------------------

Star Control Origins: Обзор. Прохождение и Гайд на Русском - Duration: 1:17:59.


-------------------------------------------

これが俺のバイクだ!〜CVOハーレーストリートグライドの巻〜(日本語字幕) - Duration: 25:40.


-------------------------------------------

Suspect identified in Young's Food Mart shooting - Duration: 0:20.


-------------------------------------------

Media Lost #15: Treehouse of Horror - Duration: 4:44.

Hello and welcome back to Media Lost. Now, it's been a little while since I've made one of these videos

So for those unfamiliar with this series, we basically take a piece of obscure media and contextualize it

We give a little bit of background, find some related clips and recall any memories associated with said media.

I'm excited, as we're gonna be talking about one of my favorite things The Simpsons' Treehouse of Horror.

This is their annual Halloween special. And while I realize there is absolutely nothing obscure about this,

we are gonna be looking at it through a 20-year-old issue of TV Guide.

Ever since I was a kid Fall has always been my favorite season with Halloween being my favorite holiday.

I find it lacks the sentimentality and I guess religious baggage of Christmas

and really just feels like a celebration of fun.

It encourages and brings out people's creativity in the way they decorate their houses or the costumes they

create. It is the one time of year where it is socially acceptable to express yourself and the things you love

by becoming them, or escape into something else.

Growing up I had several Halloween traditions passed down to me like the Great Pumpkin

or the Garfield special, but the first one I really developed on my own was Treehouse of Horror.

It's no secret that the Simpsons are very important to me and the first episode I remember seeing was the first

Treehouse of Horror.

I'm assuming this would have been a rerun, as this is one of my earliest memories

and I would have been much too young to remember when it first aired.

Treehouse of Horror began during the Simpsons second season. The first one aired on October 25th,

1990 with one airing every year since then.

These episodes break the show's format, rather than having one long kind of narrative

they have three self-contained segments, all generally horror or science fiction themed.

Earlier ones feature a device that would allow these stories to be told within the Simpsons universe

So either they would be scary stories being told or scary dreams being had.

They have since abandoned this, and – not that the Simpsons canon really means much anymore –

these episodes are non-canonical.

Treehouse of Horror takes a lot of inspiration from old anthology series,

such as EC Comics or especially the Twilight Zone and often features parodies of famous horror movies.

I could probably make – nay, probably will make – a video just on how much The Simpsons has influenced me.

Most relevant to this it introduced me to so much media I would have never known existed otherwise,

simply through parody.

For years, these were my only frame of reference and it's possible that I would have never sought out or been able to

appreciate the source material if not for the Simpsons.

One of my earliest videos was about my love of TV Guide.

For those too young to remember what TV Guide was

it was a magazine that featured television listings as well as stories from the world of television and

previews and stuff like that.

This issue covers the week of October 17th through 23rd, 1998 and is dedicated to previewing upcoming

Halloween content. It details Halloween themed episodes of such shows as

Two Guys, a Girl and a Pizza Place

Home Improvement, Suddenly Susan and the New Addams Family.

It also features a countdown of the twenty scariest movies as of, I guess, 1998.

The list is not perfect. I disagree with some of it though I can't argue with number one.

I love collecting these because I think they're perfect snapshots of what television was at a certain period of

time, and I especially enjoy it when I'm familiar with that time – as I am with 1998, when all I was doing was watching

television. But I got this one because of the cover story.

It features the Simpsons with these great illustrations and promotes Treehouse of Horror 9.

The segments in this episode include Hell Toupee, where Homer gets a hair transplant from Snake

and ends up being possessed by him.

Terror of Tiny Toon, where Bart and Lisa get sucked into the television and into an episode of Itchy and Scratchy,

and Starship Poopers, where it is revealed that Marge had a one-night stand with Kang

and that Kang is Maggie's father

This issue was settled on Jerry Springer because 1998.

For many people, I think season 10 of The Simpsons was a jumping-off point.

This is generally attributed to a dip in quality though

I have stuck with the show over the past two decades and will continue to defend it.

While some segments fall flat, I think Treehouse of Horror still ranks amongst the best episodes of every season.

For me, they're still essential viewing every October.

I love that the writers are still introducing new generations to media that are all but forgotten

Things like Orson Welles' War of the Worlds broadcast or Tod Browning's Freaks.

I don't think it's a stretch to say that had they not done this in the past for me,

you would not be watching this video.

So if you enjoyed it please give us a thumbs up, subscribe if you haven't and

be sure to check out some of our other videos like our look at The Simpsons phenomenon in promo in print

or our look back on Matt Groening's Life In Hell.

Thank you so much for watching and Happy Halloween!


-------------------------------------------

Funny videos Mix Antivirous বলেনতু দেখি এদের মিরগি রোগ নয় কি? - Duration: 5:23.


-------------------------------------------

萌えろお兄ちゃん!【NieR Replicant】Part3その11 - Duration: 10:01.

【NieR Replicant】Part3-11


-------------------------------------------

萌えろお兄ちゃん!【NieR Replicant】Part3その10 - Duration: 10:01.

【NieR Replicant】Part3-10


-------------------------------------------

Espiritualidade em Gotas / Ep. 112 - Respeito aos animais! - Duration: 4:26.

Respect for Animals!

- Do animals know God in the higher worlds?

It is question 603 of "The Book of Spirits".

[ ♪ Soft Song ♪ ]

Spirituality in Drops

- No! For them, man is a god, just as in antiquity

the Spirits were considered gods by men.

Our relationship with animals today requires reflection, because the naturalness

we should establish in dealing with animals has been ruptured.

At one extreme, we limit their freedom – improperly locking animals in cages and kennels,

withdrawing them from their habitat – or we use cruelties such as bullfights,

rodeos, and cockfights, making the relationship with the animal a relationship of cruelty.

There is another extreme, when we try to humanize animals: we withdraw them

from their species and try to treat them as a person – dressing them like a person,

sometimes lending excessive care that is contrary to their own nature

and thus harms their health, because they are treated not as an animal but as a human being.

In these habitual transference movements, owners moved by good intentions

and by affection sometimes try to transform the animal into a person,

disregarding that the animal has its own habitat.

An animal – a dog, for example – that is locked inside the house, bites, and is mistreated

is out of its habitat; one that is brought into the bedroom and unconsciously

given the place of a child is, in some circumstances, an exaggeration,

because such excessive treatment and care, in spite of the love and positive energy,

curtail the integrity of its own species;

this is how some veterinarians position themselves nowadays.

The animal needs respect and love... it is our brother in an inferior condition,

in the language the Spirits use when speaking of creation;

it will walk with us to other worlds, and it is on its way

toward a later process of humanization.

But it is not yet a human being – it is a subject, but not a person,

if we may express it that way.

In this way, our relationship with animals can and should be respectful –

ensuring affection and tenderness, listening to the birds,

keeping animals for company, the guide dog... enjoying the relationship with animals

in a truly ecological interaction, with deep respect for them and for ourselves,

without transforming them into victims of our inhumanity.

Nor should we place them in such a position of humanization

that we have difficulty respecting their limits – dogs, for example,

stopped being in the gardens and backyards, circulating freely around the house,

and today appear in the beds, sometimes "in the service", so to speak,

of a conflict the couple is experiencing.

So, let's love animals, respecting them!

[ ♪ Soft Song ♪ ]


-------------------------------------------

माँ देवकी के गर्भसे जन्मे वे6पुत्र कोन थे? | with SUB - who were born from the womb of Mother Devaki - Duration: 3:45.

Before the Lord Krishna, who were the six sons of God born from the womb of Devaki?

It was a dark time; conditions were grim.

Officials were overbearing and the people were distressed.

Those who asked for justice were helpless.

Lawlessness was rampant, and the pleasures of luxury overshadowed everything.

At that time, the kingdom of Mathura was in the hands of Kansa,

a despotic and cruel-hearted monarch.

He loved his sister Devaki very much,

but once he came to know that Devaki's eighth son would be the cause of his death,

he put her in the dungeon.

Not wanting to take any kind of risk,

Kansa also killed the first six sons of Devaki.

Who were those unfortunate infants?

Hello friends, welcome to our channel "DHARMIK GYAAN",

in which we explore interesting and supernatural information from all religions,

presented in a unique and accessible way that you will not find anywhere else.

So subscribe to our channel now and stay connected with us.

Please watch this video to the end.

Let's start without losing any time.

Brahmalok was home to six gods named Smar, Udritha, Parikshanga, Kite, Nirmudram and Ghurni.

They enjoyed Brahmaji's blessings, and his grace and affection always rested upon them.

Because of this, pride in their success gradually turned into vanity,

and they began to consider no one their equal.

Eventually, one day they even showed disrespect to Brahmaji himself.

At this, Brahmaji grew angry and cursed them: you will be born into a demon family in the mortal world.

Dismayed, the six repeatedly apologized to Brahmaji.

Taking pity on them, Brahmaji said that they would still have to take birth in the demon lineage,

but that the knowledge of their previous birth would remain.

In time, those six were born in the house of the demon king Hiranyakashyap.

In that birth, having knowledge of their previous life, they did nothing wrong;

they spent all their time in penance to Brahmaji, and pleased him.

Pleased, Brahmaji asked them to request a boon.

Under the influence of their demon birth, they demanded just that:

may our death come neither at the hands of the gods, nor of the Gandharvas, nor from disease.

Hearing such a request, Brahmaji was distressed, but granted it.

Meanwhile, Hiranyakashyap was angry that his sons had worshiped the deities.

When he became aware of this, he cursed the six:

your death will come not at the hands of a god or a Gandharva, but at the hands of a demon.

Because of this curse, they were born from the womb of Devaki,

and after being killed at Kansa's hands they found a place in Sutala-loka.

When Kansa was killed, Krishna went to Mother Devaki,

and the mother wished to see those six sons who had been killed as soon as they were born.

The Lord brought the six back from Sutala-loka to fulfill his mother's wish,

and by his grace they found a place in heaven.

Friends, how did you like this information?

Do tell us in a comment.

Please like this video more and more

And share it with your friends and family

Friends subscribe to our channel "DHARMIK GYAAN"

And click the bell icon

so you can see interesting information about the world's religions as soon as the latest videos arrive.
