Eth2: Ethereum 2.0 - Breaking It All Down, Part 3

Welcome to part 3 of my overview of Eth2... Ethereum 2.0... the super-hero-sized upgrade path for the Ethereum blockchain!

Using the links at the bottom of this post, you can catch up on the previous parts and consult Ethereum's source pages covering the main components of the multi-stage upgrade path, as well as the specifics of the Beacon Chain.

Now, we're going to focus specifically on Eth2's "Shard Chains", which are a necessary element, one could even say the building blocks, of the new blockchain once it is fully ready. As we've reviewed, the Beacon Chain has been created, works, and has gone live, and it is not an extremely complex concept to understand on its own. Shard Chains, while they make good sense, will take a person all the way down the Ethereum rabbit hole if one is not careful. Gordon wants to keep things straightforward here, as this is not a deep-dive deep-dish Ether pizza. This is just a good way to understand the brilliant, sometimes obscure, usually mathematically sound reasoning, planning, and design of Vitalik and friends.

To understand shard chains in the plural, one must first understand the shard.

Shard chains

  • Sharding is a multi-phase upgrade to improve Ethereum’s scalability and capacity.
  • Shard chains spread the network's load across 64 new chains (a toy sketch follows this list).
  • They make it easier to run a node by keeping hardware requirements low.
  • Technical roadmaps include work on shard chains in "Phase 1" and potentially "Phase 2".
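
As a rough intuition for "spreading the load across 64 chains" (and nothing more than that; the real Eth2 assignment mechanism is far more involved), here is a toy sketch in Python that deals accounts out to one of 64 shards by hashing their address. Every name in it is made up for illustration.

```python
import hashlib

SHARD_COUNT = 64  # the number of shard chains planned for Eth2

def shard_for(address: str) -> int:
    """Toy assignment: hash an account address and take the result
    modulo the shard count. Illustrative only; this is NOT how Eth2
    actually maps accounts or data to shards."""
    digest = hashlib.sha256(address.encode()).digest()
    return int.from_bytes(digest, "big") % SHARD_COUNT

for addr in ("0xaaa1", "0xbbb2", "0xccc3"):
    print(addr, "-> shard", shard_for(addr))
```

The point of the toy is simply that work which once landed on a single chain can be split deterministically across many.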

From the explanation above, we can see that the purpose of sharding is to allow Ethereum to scale. To do so, it needs to handle more items at once, and handle them in parallel. I'm reminded of issues I've dealt with in the audio recording industry. With modern music (well, anything after 1970, okay?) we work with multi-track recording devices. While yesterday it was reel-to-reel tape machines, today almost all recordings use dedicated computers with hard drives and audio interfaces. Even going back 10 years, it was very difficult to get even a top-end computer to handle as many simultaneous tracks as a fully professional studio required.

Every audio track today can be recorded in stereo instead of mono... double the memory, double the bandwidth. Someone may wish to record a whole drum set all at once, while also recording a click track, bass, and guitar, before laying down vocals and overdubs. This could require 16 tracks or more to be written to one's hard drive simultaneously. On top of the possibility that many tracks are in stereo, a person can also record at a very high 'bit depth', which simply means the volume range, or dynamics, is captured with a lot more data to preserve quality from soft to loud. That can increase the file size and, you guessed it, the bandwidth requirement to 3-4 times that of standard audio CD quality. What's more, many people record at higher sample rates than CD quality; some use more than double that range.

By the end of it, a person may want to move more data at once than the system can transmit. Every part of the computer is a potential bottleneck, from the front-side bus, to the amount and speed of the RAM, to the hard drive's capacity and physical speed. And that is all without even addressing processor power. These signals also arrive from analog sources, instrument jacks and microphones, through an audio interface connected over USB, FireWire, or some other protocol. These protocols handle information differently, so it becomes a question of whether a connection is better at handling many small streams side by side, each of which must stay light on resources, or at moving bigger chunks of data while only handling a limited number of streams in parallel.

Any one of these factors can cause the audio to glitch because the system cannot keep up with all of that data. It can freeze the computer, or force the user to conserve resources by reducing audio quality, the number of tracks, or some other feature.

This is where my brain goes when I read about data being constrained by computer resources, internet bandwidth, processing power, and the number of computers in a network, and about how efficient that network must be in order to scale in a big way. It is very much the same problem, just with a different data set.

So, the first issue is working in parallel. Rather than inventing something new, you take a process that has already proven itself in real-world testing and replicate it side by side, over and over. This kind of solution is always desirable because it rests on established fact rather than experiment: increasing the capacity of something that already works, by running more copies of it in parallel, is likely to work.

From my understanding of shards in this context, the open questions concern the need to implement multiple types of shards to handle multiple scenarios, which connects directly to the issue of security.

So, every shard chain is attached to, and coordinated by, the Beacon Chain, and is validated by nodes in the network. Every shard is being designed with efficiency in mind: the person running a node should be able to do so with less effort and less resource consumption, at a faster pace.

Another important element of sharding is that, as the official documentation notes, the parts dealing with actually executing code have not been fully resolved. To begin with, shards will not run any code at all; they will be devoted solely to receiving and storing data.
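
To picture what a data-only shard does, here is a minimal sketch in Python. It is a conceptual toy under my own naming, not actual client code: the shard accepts blobs of bytes and serves them back, and it deliberately has no 'execute' step.

```python
# A toy model of a "Phase 1" data-only shard: it stores blobs of
# bytes indexed by slot and has no notion of executing transactions
# or smart-contract code. All names here are illustrative only.

class DataShard:
    def __init__(self, shard_id: int):
        self.shard_id = shard_id
        self.blobs: dict[int, bytes] = {}  # slot -> raw data blob

    def submit_blob(self, slot: int, blob: bytes) -> None:
        """Accept a blob for a given slot. Nothing is run; the shard
        only promises that the data is stored and available."""
        self.blobs[slot] = blob

    def get_blob(self, slot: int) -> bytes | None:
        """Serve stored data back, e.g. to a layer that does run code."""
        return self.blobs.get(slot)

shard = DataShard(shard_id=7)
shard.submit_blob(slot=1024, blob=b"compressed transaction batch")
print(shard.get_blob(1024))
```

Notice there is nothing resembling a virtual machine in there; that absence is the entire point of the first stage.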

A prized feature of sharding is that while it solves a technical growth requirement... aka scale... it also keeps with cryptocurrency philosophy. More nodes operating in parallel, each carrying less resource weight, means a higher degree of decentralization, which is good for the network and for the idealist nature of the system, and which also results in better security. If a lower barrier to entry makes more people capable of running nodes, it reduces not only the fear of centralization but, just as important, the perception of gated access... the impression that the network is designed only to benefit those who can afford the biggest, baddest systems.

It is expected that these lower hardware thresholds will lead to more people running more nodes in the system, providing more access to those who wish to run the client software. More participation seems like a good thing at this stage of the Ethereum journey, too, since other projects like Binance Chain are seeking to build advanced networks on the back of Ethereum's years of perfecting the process. New projects are focusing on doing what Ethereum has been doing, with more efficiency, and given that it has been alt season for many months, people are eager to embrace new projects and new ideas. Most of this is centered on earning profit from pumps, but it also speaks to stability, and to people placing bets on which projects deserve their longer-term confidence. Ethereum is extremely goal-centered on making sure it has the lasting power to remain relevant and useful.

Shards will provide this speed-up to the network in an indirect manner. Since they aren't running the code, they are essentially facilitating the machines that do run code. They work hand in hand with roll-ups, which are an entire topic unto themselves. Imagine roll-ups taking a congested system with tens of thousands of transactions, bundling them into far fewer transactions handled off to the side, wrapped in a protective layer of cryptographic proofs, and then leaning on shards to boost the process by keeping that extra data available without slowing down either the chain that processes transactions or the network running the roll-ups. At this first stage, the shard is about facilitating everything else in the system to lighten the load.
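
Here is a deliberately simplified sketch of that bundling idea in Python. It assumes nothing about any real roll-up protocol: it just shows thousands of transactions being compressed into a single blob of the kind a data shard could keep available, and shows that anyone holding the blob can reconstruct every transaction.

```python
import json
import zlib

def bundle_transactions(txs: list[dict]) -> bytes:
    """Toy roll-up step: serialize many transactions and compress
    them into one blob that could be posted to a data shard."""
    return zlib.compress(json.dumps(txs).encode())

def unbundle(blob: bytes) -> list[dict]:
    """Anyone holding the shard data can recover every transaction."""
    return json.loads(zlib.decompress(blob).decode())

txs = [{"sender": f"0x{i:040x}", "value": i} for i in range(10_000)]
blob = bundle_transactions(txs)
print(f"{len(txs):,} transactions -> one blob of {len(blob):,} bytes")
assert unbundle(blob) == txs  # nothing is lost in the bundling
```

Real roll-ups also post validity or fraud proofs rather than simply trusting the blob, and that is exactly the part this toy leaves out.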

This last step is a fascinating one, because we have to realize that portions of the roadmap are still extremely theoretical. As with all things of this nature, a lot of brilliant minds are weighing in, and at some point this will require consensus. It is usually at that point that personalities clash, tempers flare, and we see whether the team stays supportive, if cautious, of differing opinions, or runs into irreconcilable differences. There is really no way to be certain, but I believe that is partly why the longer approach is the safer approach. The point here is that it is possible shards will never be designed to assist code execution, and it is also possible that a select number of shards in the system will need to facilitate running code. In my opinion, without going deeply into detail, this is one of the most interesting open questions, and no one fully knows what the right answer will be.

When we connect all of these elements together, from the Beacon Chain to the Shard Chains, in the simplest terms we have a secure network with a Beacon that handles everyone's validated stakes, connects them, and interacts with the intermediary assistance of dozens of parallel shards, all run from highly efficient points of origin, or nodes. Every piece of the engine serves a purpose while taking a small piece of the load off some other element in the network. To pretend it is actually a simple process would be dishonest, though. It is incredibly complex, so much so that for some elements it is simply impossible to know all of the actual results in advance, which will inevitably lead to questions about what needs to be fixed and how that should be accomplished.
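
To make that picture a little more concrete, here is one last toy sketch, again in Python with made-up names, of the coordination job the Beacon Chain performs for the shards: using fresh randomness to shuffle validators into committees, one committee per shard.

```python
import random

SHARD_COUNT = 64

def assign_committees(validators: list[str], seed: int) -> dict[int, list[str]]:
    """Toy version of the Beacon Chain's coordination job: shuffle the
    validator set with epoch randomness, then deal the validators out
    across the shard committees. Greatly simplified on purpose."""
    rng = random.Random(seed)  # stands in for the beacon's randomness
    shuffled = list(validators)
    rng.shuffle(shuffled)
    committees: dict[int, list[str]] = {s: [] for s in range(SHARD_COUNT)}
    for i, validator in enumerate(shuffled):
        committees[i % SHARD_COUNT].append(validator)
    return committees

validators = [f"validator_{i}" for i in range(256)]
committees = assign_committees(validators, seed=2021)
print("shard 0 committee:", committees[0])
```

The constant reshuffling is what makes it hard for an attacker to concentrate stake on any single shard, which is why the Beacon's randomness matters so much for security.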

It is my impression that the ideas behind shards and their interaction with the Beacon will work, and will improve all manner of scale and efficiency, but we're going to learn a lot about just how hard that is to accomplish before 2022 comes around.

I hope you've found this interesting, and in the next post I'm going to discuss the Docking element of Eth2.

And for now, Crypto Gordon Freeman... out.
