1. 23 Feb, 2016 2 commits
    • f2fs: remove unneeded pointer conversion · 81ca7350
      Chao Yu authored
      
      
      There are redundant pointer conversions in the following call stack:
       - at position a, the inode is converted to f2fs_inode_info;
       - at position b, the f2fs_inode_info is converted back to an inode.
      
       - truncate_blocks(inode,..)
        - fi = F2FS_I(inode)		---a
        - ADDRS_PER_PAGE(node_page, fi)
         - addrs_per_inode(fi)
          - inode = &fi->vfs_inode	---b
          - f2fs_has_inline_xattr(inode)
           - fi = F2FS_I(inode)
           - is_inode_flag_set(fi,..)
      
      In order to avoid the unneeded conversion, alter ADDRS_PER_PAGE and
      addrs_per_inode to accept a parameter of type struct inode *.
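      A simplified, user-space model of the change is sketched below. The constants match the real f2fs values, but the types and helpers are illustrative stand-ins, not the actual kernel definitions; the point is that addrs_per_inode() now takes the inode directly, so the F2FS_I()/&fi->vfs_inode round trip disappears:

```c
#include <assert.h>

/* Stand-in types: the real struct inode and flag handling differ. */
#define DEF_ADDRS_PER_INODE 923
#define INLINE_XATTR_ADDRS   50
#define FI_INLINE_XATTR     0x1UL

struct inode { unsigned long i_flags; };

static int f2fs_has_inline_xattr(struct inode *inode)
{
	return (inode->i_flags & FI_INLINE_XATTR) != 0;
}

/* After the patch: the parameter is struct inode *, not f2fs_inode_info *,
 * so callers like ADDRS_PER_PAGE can pass the inode they already hold. */
static unsigned int addrs_per_inode(struct inode *inode)
{
	if (f2fs_has_inline_xattr(inode))
		return DEF_ADDRS_PER_INODE - INLINE_XATTR_ADDRS;
	return DEF_ADDRS_PER_INODE;
}
```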
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      81ca7350
    • f2fs: use wait_for_stable_page to avoid contention · fec1d657
      Jaegeuk Kim authored
      
      
      In write_begin, we only need to wait for writeback when the storage
      requires stable pages; otherwise the page contents can be updated
      right away. This patch uses wait_for_stable_page instead of
      wait_on_page_writeback.
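      A minimal single-threaded model of the behavioral difference, using stand-in types rather than the real mm API: wait_for_stable_page() blocks only when the backing device requires stable pages, while wait_on_page_writeback() always blocks on a page under writeback.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins, not the real kernel structures. */
struct backing_dev { bool stable_pages_required; };
struct page { bool under_writeback; struct backing_dev *bdi; };

static int waited; /* counts how often we actually had to wait */

static void wait_on_page_writeback(struct page *p)
{
	if (p->under_writeback) {
		waited++;
		p->under_writeback = false; /* stand-in for the real completion */
	}
}

/* Only wait when the device demands stable page contents during I/O. */
static void wait_for_stable_page(struct page *p)
{
	if (p->bdi->stable_pages_required)
		wait_on_page_writeback(p);
}
```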
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      fec1d657
  2. 22 Jan, 2016 1 commit
    • wrappers for ->i_mutex access · 5955102c
      Al Viro authored
      
      
      Parallel to mutex_{lock,unlock,trylock,is_locked,lock_nested}, this
      adds inode_foo(inode) wrappers for mutex_foo(&inode->i_mutex).
      
      Please use these for access to ->i_mutex; over the coming cycle
      ->i_mutex will become an rwsem, with ->lookup() done with it held
      only shared.
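      The shape of the wrappers can be sketched in user space with pthread in place of the kernel mutex API; in the kernel these are static inlines operating on &inode->i_mutex:

```c
#include <pthread.h>

/* User-space stand-in for struct inode; the kernel field is i_mutex. */
struct inode { pthread_mutex_t i_mutex; };

static inline void inode_lock(struct inode *inode)
{
	pthread_mutex_lock(&inode->i_mutex);
}

static inline void inode_unlock(struct inode *inode)
{
	pthread_mutex_unlock(&inode->i_mutex);
}

/* Returns 1 on success, 0 if the lock is already held, mirroring
 * the kernel's mutex_trylock() convention. */
static inline int inode_trylock(struct inode *inode)
{
	return pthread_mutex_trylock(&inode->i_mutex) == 0;
}
```

      Callers then say inode_lock(inode) instead of mutex_lock(&inode->i_mutex), so the later switch of ->i_mutex to an rwsem only has to change the wrapper bodies, not every call site.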
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      5955102c
  3. 11 Jan, 2016 2 commits
  4. 08 Jan, 2016 2 commits
  5. 31 Dec, 2015 1 commit
  6. 30 Dec, 2015 6 commits
  7. 16 Dec, 2015 2 commits
  8. 15 Dec, 2015 1 commit
  9. 14 Dec, 2015 2 commits
  10. 09 Dec, 2015 1 commit
  11. 04 Dec, 2015 4 commits
  12. 22 Oct, 2015 1 commit
  13. 13 Oct, 2015 1 commit
    • f2fs crypto: fix racing of accessing encrypted page among different competitors · 08b39fbd
      Chao Yu authored
      
      
      
      Since we use different page caches (normally the inode's page cache
      for R/W, and the meta inode's page cache for GC) to cache the same
      physical block belonging to an encrypted inode, writeback of these
      two page caches should be exclusive. But we didn't handle the
      writeback state well, so there is a potential race:
      
      a)
      kworker:				f2fs_gc:
       - f2fs_write_data_pages
        - f2fs_write_data_page
         - do_write_data_page
          - write_data_page
           - f2fs_submit_page_mbio
      (page#1 in inode's page cache was queued
      in f2fs bio cache, and be ready to write
      to new blkaddr)
      					 - gc_data_segment
      					  - move_encrypted_block
      					   - pagecache_get_page
					(page#2 in meta inode's page cache
					was cached with the invalid data
					of the physical block located at
					the new blkaddr)
      					   - f2fs_submit_page_mbio
      					(page#1 was submitted, later, page#2
      					with invalid data will be submitted)
      
      b)
      f2fs_gc:
       - gc_data_segment
        - move_encrypted_block
         - f2fs_submit_page_mbio
      (page#1 in meta inode's page cache was
      queued in f2fs bio cache, and be ready
      to write to new blkaddr)
      					user thread:
      					 - f2fs_write_begin
      					  - f2fs_submit_page_bio
					(we submit the request to the block
					layer to update page#2 in the inode's
					page cache with the physical block
					located at the new blkaddr, so we may
					read garbage data from the new blkaddr
					since GC hasn't written back page#1 yet)
      
      This patch fixes the above potential race for encrypted inodes.
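      The essence of the fix can be modeled in a few lines: before queueing a new write for a block, first drain any pending writeback of the copy cached in the other page cache. All names here are illustrative stand-ins, not the actual f2fs symbols:

```c
#include <assert.h>
#include <stdbool.h>

struct page { bool under_writeback; };

static int writeback_waits; /* counts how often we had to wait */

static void end_page_writeback(struct page *p)
{
	p->under_writeback = false;
}

static void wait_on_page_writeback(struct page *p)
{
	while (p->under_writeback) {
		writeback_waits++;
		end_page_writeback(p); /* stand-in for the real completion */
	}
}

/* The fix, in spirit: serialize against the same block's copy cached
 * elsewhere (e.g. the meta inode's page cache) before submitting. */
static void submit_write(struct page *shadow, struct page *p)
{
	wait_on_page_writeback(shadow);
	(void)p; /* ... queue the bio for p here ... */
}
```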
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      08b39fbd
  14. 12 Oct, 2015 3 commits
  15. 09 Oct, 2015 7 commits
  16. 26 Aug, 2015 1 commit
    • f2fs: update extent tree in batches · 19b2c30d
      Chao Yu authored
      
      
      This patch introduces a new helper, f2fs_update_extent_tree_range,
      which can update extent mappings over a specified range.
      
      The main idea is:
      1) punch all mapping info in extent node(s) which are at a specified range;
      2) try to merge new extent mapping with adjacent node, or failing that,
         insert the mapping into extent tree as a new node.
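      A toy sketch of those two steps, using a small sorted array in place of the real tree-based extent cache (names and the logical-range-only model are illustrative, not the f2fs implementation):

```c
#include <assert.h>

/* Extents are sorted and non-overlapping. An update first punches out
 * every mapping intersecting [start, start+len), then inserts the new
 * mapping, merging with contiguous neighbours. */
#define MAX_EXTENTS 16

struct extent { unsigned start, len; };

static struct extent tree[MAX_EXTENTS];
static int nr_extents;

static void remove_at(int i)
{
	for (; i < nr_extents - 1; i++)
		tree[i] = tree[i + 1];
	nr_extents--;
}

static void insert_at(int i, unsigned start, unsigned len)
{
	for (int j = nr_extents; j > i; j--)
		tree[j] = tree[j - 1];
	tree[i].start = start;
	tree[i].len = len;
	nr_extents++;
}

/* Step 1: punch all mappings intersecting [start, start+len). */
static void punch_range(unsigned start, unsigned len)
{
	unsigned end = start + len;

	for (int i = 0; i < nr_extents; ) {
		unsigned es = tree[i].start, ee = es + tree[i].len;

		if (ee <= start || es >= end) {
			i++;
			continue;
		}
		remove_at(i);
		if (es < start)
			insert_at(i++, es, start - es);	/* left remainder */
		if (ee > end)
			insert_at(i++, end, ee - end);	/* right remainder */
	}
}

/* Step 2: insert the new mapping, merging contiguous neighbours. */
static void update_range(unsigned start, unsigned len)
{
	int i = 0;

	punch_range(start, len);
	while (i < nr_extents && tree[i].start < start)
		i++;
	insert_at(i, start, len);
	if (i + 1 < nr_extents &&
	    tree[i].start + tree[i].len == tree[i + 1].start) {
		tree[i].len += tree[i + 1].len;
		remove_at(i + 1);
	}
	if (i > 0 && tree[i - 1].start + tree[i - 1].len == tree[i].start) {
		tree[i - 1].len += tree[i].len;
		remove_at(i);
	}
}
```

      The batching win is that one update_range() call over a truncated region replaces one punch-and-reinsert per block.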
      
      In order to see the benefit, I added a function that reads the CPU
      timestamp counter, as below:
      
      #include <stdint.h>
      
      /* read the x86 timestamp counter */
      uint64_t rdtsc(void)
      {
      	uint32_t lo, hi;
      	__asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
      	return (uint64_t)hi << 32 | lo;
      }
      
      My test environment is: ubuntu, intel i7-3770, 16G memory, 256g micron ssd.
      
      truncation path:	update extent cache from truncate_data_blocks_range
      non-truncation path:	update extent cache from other paths
      total:			all update paths
      
      a) Removing 128MB file which has one extent node mapping whole range of
      file:
      1. dd if=/dev/zero of=/mnt/f2fs/128M bs=1M count=128
      2. sync
      3. rm /mnt/f2fs/128M
      
      Before:
      		total		count		average
      truncation:	7651022		32768		233.49
      
      Patched:
      		total		count		average
      truncation:	3321		33		100.64
      
      b) fsstress:
      fsstress -d /mnt/f2fs -l 5 -n 100 -p 20
      Test times:		5 times.
      
      Before:
      		total		count		average
      truncation:	5812480.6	20911.6		277.95
      non-truncation:	7783845.6	13440.8		579.12
      total:		13596326.2	34352.4		395.79
      
      Patched:
      		total		count		average
      truncation:	1281283.0	3041.6		421.25
      non-truncation:	7355844.4	13662.8		538.38
      total:		8637127.4	16704.4		517.06
      
      1) For the updates in the truncation path:
       - updating in batches clearly reduces both the total tsc and the
         update count;
       - besides, a single batched update punches multiple extent nodes
         in a loop and so executes more operations, which is why the
         average tsc per update increases.
      2) For the updates in the non-truncation path:
       - there is a small improvement, because in the scenario where we
         only need to update the head or tail of an extent node, the new
         interface updates the info in the extent node directly, rather
         than removing the original extent node and then inserting the
         updated one back into the cache as a new node.
      Signed-off-by: Chao Yu <chao2.yu@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      19b2c30d
  17. 24 Aug, 2015 1 commit
  18. 20 Aug, 2015 1 commit
  19. 10 Aug, 2015 1 commit