BilyZ
3 min read · Nov 5, 2020


my question:

how do they leverage the characteristics of NVM, e.g. non-volatility and byte-addressability?

  1. for write operations, it allocates storage at the requested operation granularity instead of doing block allocation.
  2. I don’t have another one for this question…
  3. use RDMA writes to remote NVM to accelerate I/O
the architecture of assise

🤗 Assise leverages the high read and write speed of NVM by integrating it into a distributed file system, which improves I/O performance.

Assise allocates NVM storage at a dynamic granularity instead of doing block allocation like a traditional block storage device.

For a write operation, there are 2 stages (a sketch follows the list):

  1. libfs writes directly to a process-local cache in NVM
  2. to outlive node failures, the update log is replicated to the reserved NVM of the next cache replica along the replication chain
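Here is a minimal Python sketch of the two-stage write path. The `LibFS` and `Replica` classes and their fields are my own illustration — plain lists stand in for NVM logs, and a method call stands in for an RDMA write; these are not Assise's actual interfaces.

```python
# Toy write path: stage 1 appends to a process-local "NVM" log,
# stage 2 replicates the record down a chain of cache replicas.

class Replica:
    def __init__(self, name, next_replica=None):
        self.name = name
        self.next_replica = next_replica
        self.reserved_log = []            # stands in for reserved NVM

    def replicate(self, record):
        self.reserved_log.append(record)  # conceptually an RDMA write into NVM
        if self.next_replica:             # forward along the replication chain
            self.next_replica.replicate(record)

class LibFS:
    def __init__(self, chain_head):
        self.local_log = []               # process-local update log in NVM
        self.chain_head = chain_head

    def write(self, inode, offset, data):
        record = (inode, offset, data)
        self.local_log.append(record)     # stage 1: local NVM cache
        self.chain_head.replicate(record) # stage 2: outlives node failures

fs = LibFS(Replica("r1", Replica("r2")))
fs.write(inode=42, offset=0, data=b"hello")
```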

For a read operation (see the sketch after this list):

  1. libfs first checks its local cache
  2. if not found, it checks the closest cache replica of the corresponding subtree
  3. if not found, it checks the reserve replica and, in parallel, cold storage

Reads from remote nodes are cached in local DRAM.
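The lookup order can be sketched like this. Dicts stand in for the caches, and Assise issues the reserve-replica and cold-storage lookups in parallel, which I have simplified to sequential tries:

```python
def read(path, local_cache, closest_replica, reserve_replica,
         cold_storage, local_dram):
    # 1. process-local NVM cache (and the DRAM cache of remote reads) first
    data = local_cache.get(path)
    if data is None:
        data = local_dram.get(path)
    if data is not None:
        return data
    # 2. then the closest cache replica of the corresponding subtree
    data = closest_replica.get(path)
    if data is None:
        # 3. finally the reserve replica and cold storage
        data = reserve_replica.get(path) or cold_storage.get(path)
    if data is not None:
        local_dram[path] = data           # cache remote reads in local DRAM
    return data

dram = {}
print(read("/a", {}, {}, {"/a": b"x"}, {}, dram))  # b'x', now cached in dram
```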

How to maintain cache coherence?

cc-nvm tracks write order via an update log in process-local NVM: each POSIX call that updates state is recorded. cc-nvm then leverages RDMA's ordering guarantee to replicate the log to replicas in order.
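As a toy illustration, each state-updating call could append a sequence-numbered record, and replication simply ships records in sequence order. The record format here is invented, not cc-nvm's:

```python
log, seq = [], 0

def log_op(op, **args):
    # record every POSIX call that updates state, in program order
    global seq
    seq += 1
    log.append({"seq": seq, "op": op, **args})

log_op("create", path="/a/file")
log_op("write", path="/a/file", offset=0, size=4096)
log_op("rename", old="/a/file", new="/a/file2")

# RDMA's ordering guarantee means shipping records in seq order
# preserves this write order at every replica.
for record in log:
    print(record)
```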

cc-nvm serializes concurrent access to shared state by untrusted libfses, and recovers the same serialization after a crash, via leases (leases behave like a reader-writer lock with an expiration time).
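A lease can be pictured as a reader-writer lock that expires. This toy version is my own construction, just to show the sharing rules:

```python
import time

class Lease:
    def __init__(self, ttl=5.0):
        self.ttl = ttl            # leases expire if not renewed
        self.mode = None          # None, "read", or "write"
        self.holders = set()
        self.expires_at = 0.0

    def acquire(self, libfs_id, mode):
        if time.time() >= self.expires_at:   # expired: reset ownership
            self.mode, self.holders = None, set()
        # readers may share the lease; a writer needs exclusive ownership
        if self.mode is None or (mode == "read" and self.mode == "read"):
            self.mode = mode
            self.holders.add(libfs_id)
            self.expires_at = time.time() + self.ttl
            return True
        return False

lease = Lease()
assert lease.acquire("libfs-1", "write")
assert not lease.acquire("libfs-2", "read")   # blocked until expiry
```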

hierarchical coherence

To localize coherence enforcement, leases are delegated hierarchically. A libfs requests leases from its local sharedfs, and that sharedfs may need to forward the request to the root cluster manager. Leases are recycled or expired by the cluster manager every 5 seconds.

This allows cc-nvm to migrate lease management to the sharedfs that is local to the libfses requesting them.

This hierarchical structure allows cc-nvm to minimize network communication and lease-delegation overhead.
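A sketch of the delegation, assuming hypothetical `SharedFS` and `ClusterManager` classes: a libfs asks its local sharedfs; on a miss the sharedfs asks the root cluster manager, which delegates the lease so later requests stay local.

```python
class ClusterManager:
    def __init__(self):
        self.delegated = {}   # path -> sharedfs currently managing its lease

    def delegate(self, path, sharedfs):
        owner = self.delegated.get(path)
        if owner is None or owner is sharedfs:
            self.delegated[path] = sharedfs
            return True
        return False          # held by another sharedfs; would need revocation

class SharedFS:
    def __init__(self, name, manager):
        self.name = name
        self.manager = manager
        self.local_leases = set()   # paths whose leases we manage locally

    def acquire(self, path):
        if path in self.local_leases:            # fast path: no network hop
            return True
        if self.manager.delegate(path, self):    # forward to cluster manager
            self.local_leases.add(path)
            return True
        return False

# libfses on the same node now hit the local fast path after the first request
mgr = ClusterManager()
sfs = SharedFS("node0", mgr)
assert sfs.acquire("/home/a")   # goes to the cluster manager once
assert sfs.acquire("/home/a")   # served locally afterwards
```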

crash recovery and fail-over

libfs recovery

The local sharedfs evicts the dead libfs's update log, recovering all completed writes, and then expires its leases.
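Roughly like this, with invented structures — a list of log records, a dict for shared state, and per-path holder sets for leases:

```python
def recover_libfs(update_log, shared_state, held_leases, dead_id):
    # replay every completed write from the dead libfs's update log
    for record in update_log:
        if record.get("complete"):        # drop torn/incomplete records
            shared_state[record["inode"]] = record["data"]
    # then expire every lease the dead libfs held
    for holders in held_leases.values():
        holders.discard(dead_id)

log = [{"inode": 7, "data": b"x", "complete": True},
       {"inode": 8, "data": b"y", "complete": False}]
state, leases = {}, {"/a": {"libfs-1"}}
recover_libfs(log, state, leases, "libfs-1")
print(state)    # {7: b'x'}; the torn write to inode 8 is not recovered
```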

sharedfs recovery

On an OS crash, Assise uses NVM to dramatically accelerate OS reboot by storing a checkpoint of a freshly booted OS. By examining the sharedfs log stored in NVM, it can then initiate recovery for all previously running libfs instances.

cache replica fail-over

To avoid waiting for node recovery after a power failure, Assise immediately fails over to a cache replica. Writes to the file system can invalidate cached data of the failed node. To track such writes, the cluster manager maintains an epoch number, which it increments on node failure and recovery. All sharedfs instances share a per-epoch bitmap in a sparse file indicating which inodes have been written during each epoch.
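In toy form, where a dict of sets stands in for the per-epoch bitmap stored in a sparse file:

```python
from collections import defaultdict

epoch = 0                              # maintained by the cluster manager
epoch_bitmaps = defaultdict(set)       # epoch -> inode numbers written in it

def bump_epoch():
    # incremented on every node failure and recovery
    global epoch
    epoch += 1

def record_write(inode_no):
    epoch_bitmaps[epoch].add(inode_no)

record_write(7)
bump_epoch()                           # a node fails
record_write(7); record_write(9)       # writes land in the failure epoch
```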

node recovery

When a node crashes, the cluster manager makes sure that all of the node's leases expire before the node can rejoin. A recovering sharedfs contacts an online sharedfs to collect the relevant epoch bitmaps, then invalidates every block of every file that has been written since its crash.
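Continuing the toy structures above, rejoin could look like this sketch:

```python
def recover_node(crash_epoch, current_epoch, epoch_bitmaps, local_cache):
    # gather the bitmaps of every epoch since the crash
    # (collected from an online sharedfs in the real system)
    stale = set()
    for e in range(crash_epoch, current_epoch + 1):
        stale |= epoch_bitmaps.get(e, set())
    # invalidate every cached block of every inode written since the crash
    for inode_no in stale:
        local_cache.pop(inode_no, None)
    return stale

cache = {7: b"old", 9: b"old", 3: b"ok"}
recover_node(crash_epoch=1, current_epoch=1,
             epoch_bitmaps={1: {7, 9}}, local_cache=cache)
print(cache)    # {3: b'ok'}; stale inodes 7 and 9 were invalidated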

strengths

  1. hierarchical lease management localizes lease acquisition, reducing network overhead
  2. quick fail-over to the reserve replica
  3. storage is allocated at the requested granularity

weaknesses

  1. the cluster manager may crash, making it a single point of failure.
  2. I don’t think this paper fully leverages the characteristics of NVM, because the overall system design does little that is specific to NVM, especially in the crash-consistency part of cc-nvm.
