In the previous blog post, we discussed the mempool designs used by Solana and Aptos. In this post, we focus on how Solana and Aptos process transactions in parallel. Parallel execution is a core innovation of both chains for scaling transaction throughput: sequential execution is one of the major bottlenecks in network throughput, because it makes blocks take longer to execute and limits how many transactions can fit in a block. Parallelism divides transaction processing among multiple processor cores, increasing hardware utilization and enabling better scalability.
Comparison between Aptos and Solana: Parallel Execution
In simple words: on Solana, dependencies are declared when a smart contract is written, so the Solana runtime can, ideally, schedule parallel execution up front. On Aptos, dependencies are estimated and managed on the fly, so transactions are scheduled and executed in parallel optimistically.
More precisely, Solana locks all dependent memory locations before executing (because dependencies are known in advance), while Aptos executes transactions first, locks the dependent memory detected after execution, and repeats this process until all dependent transactions have executed successfully.
How does Solana process transactions?
The Solana Runtime is a concurrent transaction processor: it parallelizes incoming transactions by grouping them into batches, which increases throughput. The Solana programming model requires every account used by a contract call (and its inner calls) to be specified up front, along with whether each account is read or written. Transactions that only read a given account can therefore execute in parallel, whereas transactions that write to the same account are serialized.
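To make this concrete, here is a minimal sketch of how a client declares account access up front with the solana_sdk crate. The program id, account keys, and the one-byte instruction payload are placeholders for illustration only:

use solana_sdk::{
    instruction::{AccountMeta, Instruction},
    pubkey::Pubkey,
};

fn build_example_instruction(
    program_id: Pubkey,
    payer: Pubkey,
    recipient: Pubkey,
    config: Pubkey,
) -> Instruction {
    // Every account the program (and its inner calls) touches must be listed,
    // together with whether it will be written. The runtime uses exactly this
    // metadata to decide which transactions may run in parallel.
    let accounts = vec![
        AccountMeta::new(payer, true),            // writable, signer
        AccountMeta::new(recipient, false),       // writable, not a signer
        AccountMeta::new_readonly(config, false), // read-only
    ];
    Instruction::new_with_bytes(program_id, &[0u8], accounts)
}

Because this access list is static, the runtime never has to guess: it can detect conflicts before a single instruction runs.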
The parallel transaction processing flow in the TPU (Transaction Processing Unit) is as follows:
1. Transactions are fetched into multiple threads in the TPU’s banking stage, which attempt to process them in parallel. In each thread, the transactions are then checked for dependencies. This is done by the function prepare_sanitized_batch_with_results from the runtime:
/// Prepare a locked transaction batch from a list of sanitized transactions, and their cost
/// limited packing status
pub fn prepare_sanitized_batch_with_results<'a, 'b>(
    &'a self,
    transactions: &'b [SanitizedTransaction],
    transaction_results: impl Iterator<Item = &'b Result<()>>,
) -> TransactionBatch<'a, 'b> {
    // this lock_results could be: Ok, AccountInUse, WouldExceedBlockMaxLimit or WouldExceedAccountMaxLimit
    let tx_account_lock_limit = self.get_transaction_account_lock_limit();
    let lock_results = self.rc.accounts.lock_accounts_with_results(
        transactions.iter(),
        transaction_results,
        tx_account_lock_limit,
    );
    TransactionBatch::new(lock_results, self, Cow::Borrowed(transactions))
}
This function returns a batch of transactions together with their lock_results. The lock_results of a transaction is either Ok (no dependency; the transaction can proceed) or Err (there is a dependency). More specifically, the result is determined by the following lock_account function, which takes each transaction’s writable and read-only accounts and checks them for conflicts.
For a writable account (an account the transaction attempts to write), if the account is in use, meaning some other transaction is currently writing or reading it, the function returns an error and the transaction cannot be processed in parallel. This makes it impossible for two transactions to write to the same account, and it also prevents an account from being updated while it is being read.
For a read-only account, if some transaction is currently writing it, the function likewise returns an error, preventing the account from being read while it is being updated.
fn lock_account(
    &self,
    account_locks: &mut AccountLocks,
    writable_keys: Vec<&Pubkey>,
    readonly_keys: Vec<&Pubkey>,
) -> Result<()> {
    for k in writable_keys.iter() {
        if account_locks.is_locked_write(k) || account_locks.is_locked_readonly(k) {
            debug!("Writable account in use: {:?}", k);
            return Err(TransactionError::AccountInUse);
        }
    }
    for k in readonly_keys.iter() {
        if account_locks.is_locked_write(k) {
            debug!("Read-only account in use: {:?}", k);
            return Err(TransactionError::AccountInUse);
        }
    }
    for k in writable_keys {
        account_locks.write_locks.insert(*k);
    }
    for k in readonly_keys {
        if !account_locks.lock_readonly(k) {
            account_locks.insert_new_readonly(k);
        }
    }
    Ok(())
}
2. Transactions with a dependency are then pushed into the retryable_transaction_indexes batch to be processed later, once all prior conflicting transactions have been processed. Specifically, conflicting transactions that cannot be processed are put back into the “retryable” batch and processed again in the next iteration, until no conflicts remain. This implies that dependent transactions are processed sequentially.
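As a toy illustration of this lock-and-retry behavior, here is a self-contained, much-simplified model. The Tx struct and process_batch function are made up for this sketch and stand in for the real solana-runtime machinery:

use std::collections::HashSet;

struct Tx {
    id: u64,
    writable: Vec<&'static str>,
    readonly: Vec<&'static str>,
}

fn process_batch(mut pending: Vec<Tx>) {
    while !pending.is_empty() {
        let mut write_locks: HashSet<&str> = HashSet::new();
        let mut read_locks: HashSet<&str> = HashSet::new();
        let mut retryable: Vec<Tx> = Vec::new();

        for tx in pending {
            // Same rules as lock_account: a writable account conflicts with any
            // existing lock; a read-only account conflicts only with a write lock.
            let conflict = tx
                .writable
                .iter()
                .any(|k| write_locks.contains(k) || read_locks.contains(k))
                || tx.readonly.iter().any(|k| write_locks.contains(k));
            if conflict {
                retryable.push(tx); // "AccountInUse": retry in the next pass
            } else {
                write_locks.extend(tx.writable.iter().copied());
                read_locks.extend(tx.readonly.iter().copied());
                println!("executing tx {} in this parallel pass", tx.id);
            }
        }
        // Locks are dropped at the end of the pass; conflicting transactions
        // form the next batch, so dependent transactions run sequentially.
        pending = retryable;
    }
}

fn main() {
    process_batch(vec![
        Tx { id: 0, writable: vec!["alice"], readonly: vec!["config"] },
        Tx { id: 1, writable: vec!["alice"], readonly: vec![] }, // conflicts with tx 0
        Tx { id: 2, writable: vec!["bob"], readonly: vec!["config"] }, // runs alongside tx 0
    ]);
}

Note that read-read sharing is allowed: tx 0 and tx 2 both read "config" and still land in the same pass, while tx 1 must wait for tx 0’s write lock on "alice" to be released.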
How does Aptos process transactions?
Block-STM is a new technology that originated in the Diem project and was matured by Aptos. The challenge is that transaction dependencies are not known in advance, yet parallel execution must yield the same outcome as sequential execution. Block-STM takes an optimistic approach and discovers dependencies on the fly. Each transaction may be executed several times; the i-th execution is referred to as incarnation i of the transaction. Each incarnation can be simplified as follows:
1. Execution step: transactions are executed in parallel, and each execution records a read-set and a write-set. The read-set contains the memory locations read during the incarnation together with their versions; a version is a pair of the index of the transaction that wrote to the location and that transaction’s incarnation number. The write-set describes the updates made by the incarnation as (memory location, value) pairs.
pub fn read_set(&self, txn_idx: TxnIndex) -> Option<Arc<Vec<ReadDescriptor<K>>>> {
    self.inputs[txn_idx].load_full()
}

// Extracts a set of paths written during execution from transaction output.
pub fn write_set(
    &self,
    txn_idx: TxnIndex,
) -> HashSet<<<T as TransactionOutput>::T as Transaction>::Key> {
    match &self.outputs[txn_idx].load_full() {
        None => HashSet::new(),
        Some(txn_output) => match txn_output.as_ref() {
            ExecutionStatus::Success(t) | ExecutionStatus::SkipRest(t) => {
                t.get_writes().into_iter().map(|(k, _)| k).collect()
            }
            ExecutionStatus::Abort(_) => HashSet::new(),
        },
    }
}
2. Validation step: after execution, each transaction goes through validation, in which every location in its read-set is re-read and the version observed now is compared against the version recorded during the transaction’s latest execution:
let valid = read_set.iter().all(|r| {
    match versioned_data_cache.read(r.path(), idx_to_validate) {
        Ok((version, _)) => r.validate_version(version),
        Err(Some(_)) => false, // Dependency implies a validation failure.
        Err(None) => r.validate_storage(),
    }
});
If the comparison fails, the transaction aborts, and the entries in the multi-version data structure corresponding to its write-set are replaced with a special ESTIMATE marker. This marker plays a role similar to the lock in Solana: a transaction that reads an ESTIMATE entry registers a dependency and waits rather than consuming a stale value, until a re-execution of the aborted transaction overwrites the marker.
if aborted {
    // Not valid and successfully aborted, mark the latest write-set as estimates.
    for k in &last_input_output.write_set(idx_to_validate) {
        versioned_data_cache.mark_estimate(k, idx_to_validate);
    }
    scheduler.finish_abort(idx_to_validate, incarnation, guard)
} else {
    SchedulerTask::NoTask
}
Coming up…
In the third part of this three-part series, we will delve into:
Aptos’s BFT vs. Solana’s PoS consensus mechanisms
AptosNet vs. Solana’s UDP-based protocol