1. 15 Jan, 2018 1 commit
    • raid5-ppl: PPL support for disks with write-back cache enabled · 1532d9e8
      Tomasz Majchrzak authored
      In order to provide data consistency with PPL for disks with write-back
      cache enabled, all data has to be flushed to the disks before the next PPL
      entry is written. The disks to be flushed are marked in a bitmap. The
      bitmap is modified under a mutex and is only read after the PPL io unit
      has been submitted.
      A limitation of 64 disks in the array has been introduced to keep the data
      structures and implementation simple. RAID5 arrays with so many disks are
      unlikely anyway, due to the high risk of multiple simultaneous disk
      failures, so this restriction should not be a real-life limitation.
      With write-back cache disabled, the next PPL entry is submitted when the
      data write for the current one completes. A data flush defers the next log
      submission, so trigger it when no stripes are found for handling.
      As PPL ensures all data is flushed to disk at request completion, simply
      acknowledge a flush request when PPL is enabled.
      Signed-off-by: Tomasz Majchrzak <tomasz.majchrzak@intel.com>
      Signed-off-by: Shaohua Li <sh.li@alibaba-inc.com>
  2. 02 Nov, 2017 1 commit
  3. 17 Mar, 2017 1 commit
  4. 16 Mar, 2017 2 commits
    • raid5-ppl: Partial Parity Log write logging implementation · 3418d036
      Artur Paszkiewicz authored
      Implement the calculation of partial parity for a stripe and PPL write
      logging functionality. The description of PPL is added to the
      documentation. More details can be found in the comments in raid5-ppl.c.
      Attach a page for holding the partial parity data to stripe_head.
      Allocate it only if mddev has the MD_HAS_PPL flag set.
      Partial parity is the xor of not modified data chunks of a stripe and is
      calculated as follows:
      - reconstruct-write case:
        xor data from all not updated disks in a stripe
      - read-modify-write case:
        xor old data and parity from all updated disks in a stripe
      Implement it using the async_tx API and integrate it into raid_run_ops().
      It must be called while we still have access to the old data, so do it
      when STRIPE_OP_BIODRAIN is set, but before ops_run_prexor5(). The result
      is stored into sh->ppl_page.
      Partial parity is not meaningful for a full stripe write and is not stored
      in the log or used for recovery, so don't attempt to calculate it when the
      stripe has STRIPE_FULL_WRITE set.
      Put the PPL metadata structures to md_p.h because userspace tools
      (mdadm) will also need to read/write PPL.
      Warn about using PPL with the disk's volatile write-back cache enabled for
      now. This warning can be removed once disk cache flushing before writing
      PPL is implemented.
      Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
    • md-cluster: add the support for resize · 818da59f
      Guoqing Jiang authored
      To update the size of a clustered raid array, we need to make sure all
      nodes can perform the change successfully. However, it is possible that
      some of them cannot do it due to a failure (bitmap_resize could fail). So
      we need to consider this before setting the capacity unconditionally, and
      we use the steps below as a sanity check.
      1. A changes the size, then broadcasts METADATA_UPDATED.
      2. B and C receive METADATA_UPDATED and change the size, except for
         calling set_capacity; sync_size is not updated if the change failed.
         They also call bitmap_update_sb to sync the superblock to disk.
      3. A checks the other nodes' sync_size; if sync_size has been updated on
         all nodes, it sends a CHANGE_CAPACITY message, otherwise it sends a
         message to revert the previous change.
      4. B and C call set_capacity if they receive the CHANGE_CAPACITY message;
         otherwise pers->resize is called to restore the old value.
      Reviewed-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
  5. 13 Feb, 2017 2 commits