Why do we need consensus?
Every blockchain system has its own consensus algorithm: the mechanism by which the nodes of a decentralized network stay in sync with each other, come to agreement, and produce new blocks. To keep the network secure, blocks must be produced continuously; production should never stop. Today, let's take a close look at one of Polkadot's consensus algorithms, BABE.
Polkadot's Consensus
BABE Overview
One of the consensus protocols Polkadot/Kusama use is BABE (Blind Assignment for Blockchain Extension). It is a slot-based block production mechanism that uses a VRF (verifiable random function) to randomly allocate slots to validators (authorities).
Polkadot's block time is split into epochs and slots. Each epoch is broken into slots, and every slot is six seconds long.
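As a rough illustration of the epoch/slot relationship, the mapping from a slot number to its epoch can be sketched as below. The constants and the helper function are hypothetical values for illustration, not Substrate's actual API; real values come from the chain's runtime configuration.

```rust
// Hypothetical constants for illustration only.
const SLOT_DURATION_MS: u64 = 6_000;   // each slot is six seconds
const EPOCH_LENGTH_SLOTS: u64 = 2_400; // slots per epoch (example value)

// Given a slot number and the genesis slot, compute which epoch it falls in.
fn epoch_index(slot: u64, genesis_slot: u64) -> u64 {
    (slot - genesis_slot) / EPOCH_LENGTH_SLOTS
}

fn main() {
    let genesis_slot = 1_000;
    // Slots 1_000..3_399 fall in epoch 0; slot 3_400 starts epoch 1.
    println!("{}", epoch_index(3_400, genesis_slot)); // prints 1
    // With these example values, one epoch lasts 2_400 * 6 s = 4 hours.
    println!("{} s", EPOCH_LENGTH_SLOTS * SLOT_DURATION_MS / 1_000);
}
```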

In every slot, each validator (authority) generates a random number from its VRF. If that number is lower than a threshold value proportional to its weight/amount of stake (currently the threshold does not actually depend on the amount of stake in Polkadot/Kusama), the validator has the right to produce a new block. Because this mechanism, which Polkadot calls primary slots, is fully random and probabilistic, there is some chance that no validator is chosen for a given slot. To prevent block production from being delayed, Polkadot added secondary slots, which also use the VRF but in a deterministic way. Moreover, multiple authorities can win a single slot, which can lead to a temporary fork. BABE's fork-choice rule is weight-based: the chain with more primary slots is chosen by the system.
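The primary-slot lottery can be sketched numerically. Assuming the threshold formula from the BABE design, T = 1 - (1 - c)^θ with θ the validator's share of the total weight, a simplified f64 version looks like this (the real implementation uses exact big-rational arithmetic scaled to the VRF output range, so treat this as an approximation):

```rust
// Simplified sketch of the per-validator primary-slot winning probability.
// c is a fraction like (1, 4); theta is this validator's weight share.
fn primary_threshold(c: (u64, u64), own_weight: u64, total_weight: u64) -> f64 {
    let c = c.0 as f64 / c.1 as f64;                // e.g. (1, 4) -> 0.25
    let theta = own_weight as f64 / total_weight as f64;
    1.0 - (1.0 - c).powf(theta)                      // probability of winning a slot
}

fn main() {
    // With c = 1/4 and equal weights among 4 validators,
    // each validator wins a given slot with probability ~0.0694.
    let p = primary_threshold((1, 4), 1, 4);
    println!("{p:.4}");
}
```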
Code Overview
So let's look at how slot-based BABE is implemented.
Scheme
claim_slot
Tries to claim the given slot.
Params
- slot: pub struct Slot(u64)
- epoch: pub struct Epoch { /* some fields */ }
- keystore: an Arc pointer to the keystore (SyncCryptoStorePtr)
Returns
- Option<(PreDigest, AuthorityId)>
- PreDigest: enum type that contains all the data required to validate a block; its variants cover Primary, Secondary, and SecondaryVRF claims
- AuthorityId: the authority's public address
// Get the authorities for a specific epoch.
// authorities = Vec[(AuthorityId, index), ...]
// pub struct Slot(u64);
// epoch.authorities = [(authority's public address, weight)]
pub fn claim_slot(
    slot: Slot,
    epoch: &Epoch,
    keystore: &SyncCryptoStorePtr,
) -> Option<(PreDigest, AuthorityId)> {
    let authorities = epoch
        .authorities
        .iter()
        .enumerate()
        .map(|(index, a)| (a.0.clone(), index))
        .collect::<Vec<_>>();
    claim_slot_using_keys(slot, epoch, keystore, &authorities)
}
claim_slot_using_keys
Tries to claim a primary slot first, then falls back to claiming a secondary slot.
Params
- slot, epoch, keystore: as described above
- keys: [(authority's public address, index within the epoch)]
Returns
- Option<(PreDigest, AuthorityId)>
- In claim_primary_slot there is epoch.config.c. This is a pre-defined constant set when the node is built, namely pub const PRIMARY_PROBABILITY: (u64, u64) = (1, 4)
// Claim a primary slot. If that fails, claim a secondary slot.
pub fn claim_slot_using_keys(
    slot: Slot,
    epoch: &Epoch,
    keystore: &SyncCryptoStorePtr,
    keys: &[(AuthorityId, usize)],
) -> Option<(PreDigest, AuthorityId)> {
    claim_primary_slot(slot, epoch, epoch.config.c, keystore, keys).or_else(|| {
        if epoch.config.allowed_slots.is_secondary_plain_slots_allowed() ||
            epoch.config.allowed_slots.is_secondary_vrf_slots_allowed()
        {
            claim_secondary_slot(
                slot,
                epoch,
                keys,
                keystore,
                epoch.config.allowed_slots.is_secondary_vrf_slots_allowed(),
            )
        } else {
            None
        }
    })
}
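One useful consequence of c = (1, 4): because the per-validator winning probabilities combine as (1 - c)^(Σθ) = 1 - c, the chance that at least one validator holds a primary claim for a given slot is exactly c = 25%, independent of the validator count. The quick numerical check below assumes the T = 1 - (1 - c)^θ formula; the function names are illustrative, not Substrate's.

```rust
// Per-validator probability of winning a primary slot, given share theta.
fn per_validator_p(c: f64, theta: f64) -> f64 {
    1.0 - (1.0 - c).powf(theta)
}

fn main() {
    let c = 0.25;
    for n in [1u32, 4, 100] {
        let theta = 1.0 / n as f64;
        // Probability that no validator wins, then its complement.
        let none = (1.0 - per_validator_p(c, theta)).powi(n as i32);
        // Always ~0.25 regardless of n; secondary slots cover the other ~75%.
        println!("n = {n:>3}: P(at least one primary) = {:.4}", 1.0 - none);
    }
}
```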
claim_primary_slot
If the VRF result (here, inout) is less than the threshold, it returns Some((pre_digest, authority_id.clone())); otherwise it returns None.
fn claim_primary_slot(
    slot: Slot,
    epoch: &Epoch,
    c: (u64, u64),
    keystore: &SyncCryptoStorePtr,
    keys: &[(AuthorityId, usize)],
) -> Option<(PreDigest, AuthorityId)> {
    // ---snippet---
    let threshold = calculate_primary_threshold(c, authorities, *authority_index);
    if check_primary_threshold(&inout, threshold) {
        let pre_digest = PreDigest::Primary(PrimaryPreDigest {
            slot,
            vrf_output: VRFOutput(signature.output),
            vrf_proof: VRFProof(signature.proof),
            authority_index: *authority_index as u32,
        });
        return Some((pre_digest, authority_id.clone()))
    }
    None
}
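The threshold comparison itself boils down to interpreting the VRF output as a big integer and checking whether it falls below the threshold scaled into that range. The sketch below approximates with f64 what calculate_primary_threshold does with exact big rationals; all names here are illustrative, not Substrate's actual API.

```rust
// Approximate the threshold as a u128: p = 1 - (1 - c)^theta, scaled into
// the full u128 range of the VRF output.
fn primary_threshold(c: (u64, u64), own_weight: u64, total_weight: u64) -> u128 {
    let c = c.0 as f64 / c.1 as f64;
    let theta = own_weight as f64 / total_weight as f64;
    let p = 1.0 - (1.0 - c).powf(theta);
    (p * u128::MAX as f64) as u128
}

// A validator claims the slot iff its VRF output, read as a u128,
// falls below the threshold.
fn check_primary(vrf_output_as_u128: u128, threshold: u128) -> bool {
    vrf_output_as_u128 < threshold
}

fn main() {
    let t = primary_threshold((1, 4), 1, 4);
    // A tiny VRF output always wins; the maximum output never does.
    assert!(check_primary(0, t));
    assert!(!check_primary(u128::MAX, t));
}
```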
Secondary Slot
If no primary authority claims the slot, fall back to the backup plan.
Params
- slot, epoch, keys, keystore: same as above
- author_secondary_vrf: bool indicating whether VRF secondary slots are allowed
Returns
- Option<(PreDigest, AuthorityId)>
fn claim_secondary_slot(
    slot: Slot,
    epoch: &Epoch,
    keys: &[(AuthorityId, usize)],
    keystore: &SyncCryptoStorePtr,
    author_secondary_vrf: bool,
) -> Option<(PreDigest, AuthorityId)> {
    if authorities.is_empty() {
        // There are no authorities in this epoch at all; return immediately.
        return None
    }
    let expected_author = secondary_slot_author(slot, authorities, *randomness)?;
    // --snippet--
}
// This should always return some authorities.
pub(super) fn secondary_slot_author(
    slot: Slot,
    authorities: &[(AuthorityId, BabeAuthorityWeight)],
    randomness: [u8; 32],
) -> Option<&AuthorityId> {
    if authorities.is_empty() {
        return None
    }
    let rand = U256::from((randomness, slot).using_encoded(blake2_256));
    let authorities_len = U256::from(authorities.len());
    let idx = rand % authorities_len;
    let expected_author = authorities.get(idx.as_u32() as usize).expect(
        "authorities not empty; index constrained to list length; \
        this is a valid index; qed",
    );
    Some(&expected_author.0)
}
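The deterministic selection above can be mimicked with standard-library pieces. This is a sketch only: std's DefaultHasher stands in for blake2_256 and plain tuple hashing stands in for SCALE encoding, so the indices it produces will not match the real chain. The point it demonstrates is that every node hashing the same (randomness, slot) pair picks the same authority, with no VRF win required.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for blake2_256(SCALE(randomness, slot)) % authorities.len().
fn secondary_author_index(randomness: [u8; 32], slot: u64, n_authorities: usize) -> usize {
    let mut hasher = DefaultHasher::new();
    (randomness, slot).hash(&mut hasher);
    (hasher.finish() % n_authorities as u64) as usize
}

fn main() {
    let randomness = [7u8; 32];
    let n = 5;
    // The same inputs always map to the same authority index,
    // so all honest nodes agree on the secondary author.
    let a = secondary_author_index(randomness, 42, n);
    let b = secondary_author_index(randomness, 42, n);
    assert_eq!(a, b);
    assert!(a < n);
}
```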
- When BABE calculates the threshold value in calculate_primary_threshold, the denominator is the sum of all authorities' weights, and every validator's weight in Polkadot/Kusama is currently 1. So the threshold does not really end up proportional to (weight/stake) as described: since all validators have the same weight, the amount of stake does not matter in the current BABE system.