Graphene is a third-generation blockchain technology that differs from alternative platforms in its high performance, zero transaction fees, and simplified process for creating smart contracts. It increases network throughput through the use of Invertible Bloom Lookup Tables (IBLTs) and Bloom filters built on Simplified Payment Verification (SPV) technology. At the moment, the maximum transaction rate observed in testing is 10,628 transactions per second. For comparison, the maximum transaction rate of the Ethereum blockchain is about 30 transactions per second.
Alex Shkor, blockchain architect and creator of the blockchain protocols Scorum.com and DEIP.world, told BlockchainJournal about Graphene's existing infrastructure, its main problems, and ways to solve them.
Graphene infrastructure issues:
1. User experience and implementation
BitShares and Steem have already done a good job of introducing personal accounts and recovery mechanisms, but there is still a big gap between the implementation scale of classic web applications and that of blockchain-based applications. The unavailability of tools such as full-text and geospatial search, as well as the lack of easy integration with centralized services, significantly reduces the usability of blockchain products.
2. The problem of scalability
The problem of scalability is particularly acute in the blockchain world. On the one hand, decentralized systems should strive for better scalability; on the other, they must retain strict consistency, which forces a constant compromise over what information is included in the blockchain.
There are a number of other problems in the existing blockchain model: node size, truly decentralized information storage, cross-chain interaction, sidechain and off-chain integration, network synchronization stability, and others. The number of problems seems to be growing, while solutions appear much more slowly than we would like.
While working on Scorum and DEIP, we experienced these difficulties firsthand. We needed an efficient way to store information, as well as decentralized sidechain storage. We wanted to optimize Graphene to process more transactions. So we developed a solution that addresses the most pressing problems of the Graphene infrastructure: the aggregation cluster.
This is a cluster of data continuously synchronized with the node, which makes it possible to scale the read model independently of the blockchain.
The Graphene framework processes each operation in a single thread and stores all data in memory. Thanks to this, its throughput reaches several thousand transactions per second, with significant potential for further tuning and optimization. Graphene's prototype was the LMAX architecture, one of the world's fastest trading engines, capable of handling 6 million orders per second.
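The single-thread, in-memory pattern described above can be sketched roughly as follows. This is a minimal illustration, not Graphene's actual code; the `BusinessProcessor` class, the operation tuples, and the balance-transfer rules are all hypothetical.

```python
import queue
import threading

class BusinessProcessor:
    """LMAX-style processor: all state lives in memory and is mutated
    by exactly one thread, so no per-operation locking is needed."""

    def __init__(self):
        self.balances = {}          # in-memory state (write model)
        self.inbox = queue.Queue()  # incoming operations
        self.worker = threading.Thread(target=self._run, daemon=True)
        self.worker.start()

    def submit(self, op):
        self.inbox.put(op)

    def _run(self):
        # The ONLY thread that ever touches self.balances.
        while True:
            op = self.inbox.get()
            if op is None:          # sentinel: stop processing
                break
            self._apply(op)

    def _apply(self, op):
        # Validate and apply against in-memory state only.
        kind, account, amount = op
        if kind == "credit":
            self.balances[account] = self.balances.get(account, 0) + amount
        elif kind == "debit" and self.balances.get(account, 0) >= amount:
            self.balances[account] -= amount

    def stop(self):
        self.inbox.put(None)
        self.worker.join()
```

Because one thread owns the state, there is no lock contention and each operation is a few dictionary updates, which is what makes throughput of thousands of operations per second attainable.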
The Graphene framework is an optimal choice for blockchain projects, in particular decentralized exchanges. However, some of Graphene's features hamper scalability and integration.
In Graphene, all data is kept in strict consistency, which is very important for a blockchain project. However, in addition to the data required for domain validation, Graphene also stores information intended only for display in the UI.
The LMAX business logic processor, by contrast, does not store information that is not needed by the domain validation rules. In effect, it splits the information model into a read model and a write model. The write model lives in memory and is processed in a single thread, which allows it to maintain strict data consistency. The read model, meanwhile, is built with a slight delay from the stream of events emitted by the write model, and is eventually consistent with the business logic processor.
"Eventual consistency" means that the data will become consistent if we stop producing events, but at any single point in time it may be stale. This does not break the system, because the eventually consistent model is read-only. Any read is inherently eventually consistent: as soon as data is shown to the user, it is already outdated for them (and for the user interface, which may submit a transaction based on this stale model). However, if such a transaction is invalid, it will never be accepted by the strictly consistent business processor.
This separation lets us optimize and scale the read model and the write model independently, and provision dedicated infrastructure for the specific needs of each. Such a split of the data model is quite common in distributed systems; it is also the core idea of CQRS (Command Query Responsibility Segregation).
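The read/write split and the eventual consistency of the read model can be illustrated with a minimal sketch. The class and event names here are hypothetical, not part of Graphene or LMAX; the point is that the write model stays strictly consistent while the read model only catches up when it consumes new events.

```python
class WriteModel:
    """Strictly consistent in-memory state; emits an event per command."""
    def __init__(self):
        self.balances = {}
        self.events = []  # event log consumed by read models

    def handle(self, cmd):
        kind, account, amount = cmd
        if kind == "credit":
            self.balances[account] = self.balances.get(account, 0) + amount
            self.events.append(("credited", account, amount))
        elif kind == "debit":
            if self.balances.get(account, 0) < amount:
                return  # invalid command: rejected, no event emitted
            self.balances[account] -= amount
            self.events.append(("debited", account, amount))

class ReadModel:
    """Eventually consistent view: lags until it replays new events."""
    def __init__(self):
        self.view = {}
        self.cursor = 0  # position in the event log already applied

    def catch_up(self, events):
        for kind, account, amount in events[self.cursor:]:
            if kind == "credited":
                self.view[account] = self.view.get(account, 0) + amount
            elif kind == "debited":
                self.view[account] -= amount
        self.cursor = len(events)
```

Between calls to `catch_up`, the view is stale but never corrupt, which is exactly the "eventual consistency" trade-off the read side accepts.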
The CQRS principle underlies the aggregation cluster, which in essence provides the read-model infrastructure for any Graphene-based blockchain. The cluster lets you move all data that is not needed to validate domain rules into a self-balancing, self-scaling, consistent aggregation cluster and store it there.
The Graphene business processor will then hold only the data the blockchain logic needs and nothing more. The cluster can reduce the size of the main chain and improve the performance and throughput of the business processor: far less data will be stored, and fewer updates will be needed in the single processing thread.
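One way to picture moving display data off the chain is for the chain state to keep only a content hash (which is all validation needs), while the full payload lives in the cluster's read-side store. This is an illustrative sketch with hypothetical names (`ChainState`, `AggregationCluster`), not the actual Graphene mechanism.

```python
import hashlib

class ChainState:
    """Write model keeps only what validation needs: a content hash."""
    def __init__(self):
        self.post_hashes = {}

    def apply_post(self, post_id, body):
        self.post_hashes[post_id] = hashlib.sha256(body.encode()).hexdigest()
        return ("post_created", post_id, body)  # event for the cluster

class AggregationCluster:
    """Read-side store holding the full UI data (body text, etc.)."""
    def __init__(self):
        self.posts = {}

    def on_event(self, event):
        kind, post_id, body = event
        if kind == "post_created":
            self.posts[post_id] = {"body": body}

    def verify(self, chain, post_id):
        # Anyone can check the off-chain body against the on-chain hash.
        body = self.posts[post_id]["body"]
        return hashlib.sha256(body.encode()).hexdigest() == chain.post_hashes[post_id]
```

The single-threaded processor now updates one small hash per post instead of a large document, while the cluster carries the bulky, display-only data.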
For any blockchain project, it is very expensive to build a continuously synchronized data storage system that functions in near real time, especially if you need to use classic databases or analytical tools. Such tools are not designed to deal with patterns such as chain forks and rollbacks of recent transactions. This greatly limits what blockchain applications we can create and makes the user experience differ from that of classic applications.
With an aggregation cluster, however, we can integrate any classic data store or analytical tool with a blockchain project. We designed the aggregation cluster to manage classic centralized tools inside the cluster: it automatically balances the load and handles chain forks on their behalf.
The cluster provides enough data redundancy to roll the state back to any point in time. This is achieved through an event-centric design, the CQRS principle, and eventual consistency for the read model, while the write model remains strictly consistent.
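Because the read model is derived from an event log, rolling back to any point in time amounts to replaying the log up to that point. A minimal sketch of this idea, with hypothetical event tuples of the form `(kind, account, amount)`:

```python
def rebuild_view(events, up_to):
    """Rebuild a read-model view from scratch using events[0:up_to].

    Rolling back after a chain fork is the same operation: replay
    the event log only up to the fork point and discard the rest.
    """
    view = {}
    for kind, account, amount in events[:up_to]:
        if kind == "credited":
            view[account] = view.get(account, 0) + amount
        elif kind == "debited":
            view[account] = view.get(account, 0) - amount
    return view
```

Since events are never mutated, any historical state can be reconstructed; the redundancy the cluster keeps is exactly this log.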
An aggregation cluster makes it easy to plug any kind of tool or database into the read model: full-text or geospatial indices, graph databases, and even neural-network training clusters.
Together with the teams of the research platform DEIP and the blogging platform Scorum, we want to help further improve blockchain systems and continuously refine the Graphene framework. However, this also requires an environment that sustains continuous improvement and promotes quality.
Ideally, this should be a decentralized system that provides open access to all results of blockchain research projects, their fair decentralized evaluation, and sustainable funding. In such a system, the automatic distribution of rewards to everyone involved in research projects will minimize operational costs.
Our goal is to improve existing systems, promote their smooth integration into society and create a solid foundation for future blockchain protocols and applications.