Implementation
The Ledgera v0.1 prototype is implemented in Rust.
Architecture
The diagram below, generated from the code itself, shows how the code is organized into sub-libraries (square nodes) and their external dependencies (rounded nodes).
The sub-libraries are the following:
- “ledgera-pki” implements interfaces and backends for the notions of Public/Private keys, signatures and quorums
- “ledgera-types” provides:
  - a template for defining application use cases
  - a generic implementation of digests
  - types for computation specifications
  - types for all Ledgera messages
  - functions for the validation of Ledgera messages
- “ledgera-api” provides an external API for communicating with Ledgera clients
- “ledgera-node” implements the logic of Ledgera nodes, which includes:
  - an interface for how they communicate among themselves
  - the logic of each role (client, voter, storage, logger)
- “ledgera-zenoh” implements a Zenoh-based backend for the communication interface defined in “ledgera-node”
- “ledgera-test-use-cases” defines the application use case used in the demo
- “ledgera-test-tui” implements the Text User Interface used in the demo
- “ledgera-test” implements executables and provides scripts to launch the demo
Zenoh as a communications layer
In the specification, we have presented the Ledgera protocol diagrammatically, with horizontal arrows representing asynchronous message exchanges.
In practice, we use the Zenoh Publish/Subscribe/Query protocol as a backend for communications.
Each node in the Ledgera network is also a node in a Zenoh network.
At the initialization of the system, nodes subscribe to specific topics according to the roles they play:
- voters subscribe to the “client messages” (“M”) and “votes” (“V”) topics
- storage nodes subscribe to a “storage requests” (“S”) topic and expose a queryable to retrieve stored values indexed by their digest
- secure log nodes subscribe to a “transactions” (“T”) topic
- clients subscribe to a “notifications” (“N”) topic
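The role-to-topic assignment above can be sketched as a simple mapping. This is a minimal illustration only: the concrete key expressions (“ledgera/messages”, etc.) are assumptions for the sketch, not the topic names used by the prototype, and the actual subscriptions go through the Zenoh API rather than a plain function.

```rust
/// The four roles a Ledgera node can play.
#[derive(Debug, Clone, Copy)]
enum Role {
    Client,
    Voter,
    Storage,
    Logger,
}

/// Topics a node subscribes to at initialization, by role.
/// The key expressions are illustrative placeholders.
fn subscriptions(role: Role) -> Vec<&'static str> {
    match role {
        // voters receive client messages (M) and votes (V)
        Role::Voter => vec!["ledgera/messages", "ledgera/votes"],
        // storage nodes receive storage requests (S); Q_{val} data
        // queries are served through a queryable, not a subscription
        Role::Storage => vec!["ledgera/storage"],
        // secure log nodes receive transactions (T)
        Role::Logger => vec!["ledgera/transactions"],
        // clients receive notifications (N)
        Role::Client => vec!["ledgera/notifications"],
    }
}

fn main() {
    for role in [Role::Client, Role::Voter, Role::Storage, Role::Logger] {
        println!("{:?} -> {:?}", role, subscriptions(role));
    }
}
```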
The publish/subscribe pattern allows (as per the specification):
- voters to receive the M_{arch}, M_{comp} and M_{arg} message requests from clients
- voters to receive the V_{arch}, V_{stored}, V_{comp}, V_{arg} and V_{loc} votes from the other voters
- storage nodes to receive S_{val} storage requests and to receive and respond to Q_{val} data queries
- secure log nodes to receive the T_{arch}, T_{comp}, T_{args} and T_{res} transactions from the voters
- clients to receive N_{res} notifications from voters
Implementation of the storage replicas
In this prototype implementation, each node that plays the storage role maintains, in memory, a hash table that represents the local copy of the storage.
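The in-memory replica can be sketched as a hash table keyed by digest. The `Digest` alias and the byte-blob value type below are assumptions for the sketch; the prototype's actual digest and value types are defined in “ledgera-types”.

```rust
use std::collections::HashMap;

/// Hypothetical digest type; the prototype defines digests in
/// `ledgera-types`, here we stand in with a fixed-size byte array.
type Digest = [u8; 32];

/// In-memory local copy of the storage, indexed by digest.
#[derive(Default)]
struct StorageReplica {
    table: HashMap<Digest, Vec<u8>>,
}

impl StorageReplica {
    /// Handle an S_{val} storage request: keep the value under its digest.
    fn store(&mut self, digest: Digest, value: Vec<u8>) {
        self.table.insert(digest, value);
    }

    /// Serve a Q_{val} query: look a value up by its digest.
    fn get(&self, digest: &Digest) -> Option<&Vec<u8>> {
        self.table.get(digest)
    }
}

fn main() {
    let mut replica = StorageReplica::default();
    let digest = [0u8; 32];
    replica.store(digest, b"payload".to_vec());
    assert_eq!(replica.get(&digest), Some(&b"payload".to_vec()));
}
```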
Implementation of the secure log
In this prototype implementation, the distributed ledger is a centralized mockup.
We require that there is exactly one node that plays the logger role.
This node maintains a simple list of transactions.
Upon receiving a new transaction from any voter, it appends it to the list iff no equivalent transaction was already included. This guarantees that:
- there is at most one T_{arch} transaction per data digest and it must be valid (i.e., contain a valid Proof of Storage for this specific digest)
- there is at most one T_{comp} transaction per computation instance digest and it must be valid (i.e., contain a valid quorum of V_{comp} votes for the corresponding digest)
- there is at most one T_{args} transaction per computation instance digest and it must be valid (i.e., it contains, for all the missing arguments in the corresponding computation specification, a valid quorum of V_{arg} votes)
- there is at most one T_{res} transaction per computation instance digest and it must be valid (i.e., it contains a valid Proof of Integrity for this specific digest)
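The at-most-one guarantee amounts to deduplicating on the (transaction kind, digest) pair before appending. The sketch below shows that invariant only; the transaction types are stand-ins for the real ones in “ledgera-types”, and the validity checks on proofs and quorums are elided.

```rust
use std::collections::HashSet;

/// Stand-in for the four Ledgera transaction kinds.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum TxKind {
    Arch,
    Comp,
    Args,
    Res,
}

/// Stand-in digest type (data digest or computation instance digest).
type Digest = [u8; 32];

#[derive(Clone, Debug)]
struct Transaction {
    kind: TxKind,
    digest: Digest,
}

/// Centralized secure-log mockup: an append-only list with
/// at-most-one semantics per (kind, digest) pair.
#[derive(Default)]
struct SecureLog {
    list: Vec<Transaction>,
    seen: HashSet<(TxKind, Digest)>,
}

impl SecureLog {
    /// Append iff no equivalent transaction was already included.
    /// Returns true when the transaction was actually appended.
    /// (Proof and quorum validation is elided in this sketch.)
    fn append(&mut self, tx: Transaction) -> bool {
        if self.seen.insert((tx.kind, tx.digest)) {
            self.list.push(tx);
            true // a real node would now notify the other nodes
        } else {
            false
        }
    }
}

fn main() {
    let mut log = SecureLog::default();
    let tx = Transaction { kind: TxKind::Res, digest: [1u8; 32] };
    assert!(log.append(tx.clone()));
    assert!(!log.append(tx)); // equivalent transaction is ignored
    assert_eq!(log.list.len(), 1);
}
```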
Upon appending a new transaction to its local list, the secure log node mockup notifies every other node of the newly delivered transaction by publishing a message on a dedicated Zenoh topic to which all nodes are subscribed.