1. 07 Mar, 2018 2 commits
  2. 01 Mar, 2018 12 commits
      xfs: don't block on the ilock for RWF_NOWAIT · ff3d8b9c
      Christoph Hellwig authored
      Fix xfs_file_iomap_begin to trylock the ilock if IOMAP_NOWAIT is passed,
      so that we don't block io_submit callers.
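For reference, a minimal sketch of the trylock pattern described above is shown below. The helper name is hypothetical; xfs_ilock_nowait(), xfs_ilock() and the IOMAP_NOWAIT flag are existing kernel symbols, and the exact placement inside xfs_file_iomap_begin differs in the real patch.

  /* Sketch: take the ilock, but only try (and fail with -EAGAIN) for NOWAIT callers. */
  static int example_ilock_for_iomap(struct xfs_inode *ip, unsigned flags, uint lockmode)
  {
          if (flags & IOMAP_NOWAIT) {
                  /* io_submit with RWF_NOWAIT must not sleep on a contended lock. */
                  if (!xfs_ilock_nowait(ip, lockmode))
                          return -EAGAIN;
          } else {
                  xfs_ilock(ip, lockmode);
          }
          return 0;
  }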
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      ff3d8b9c
      xfs: don't start out with the exclusive ilock for direct I/O · af5b5afe
      Christoph Hellwig authored
      There is no reason to take the ilock exclusively at the start of
      xfs_file_iomap_begin for direct I/O, given that it will be demoted
      just before calling xfs_iomap_write_direct anyway.
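An illustrative sketch of the two locking patterns (not the literal diff; the function name is hypothetical, the lock helpers are existing XFS ones):

  static void example_direct_io_locking(struct xfs_inode *ip)
  {
          /* Old pattern: take the ilock exclusive, then demote it. */
          xfs_ilock(ip, XFS_ILOCK_EXCL);
          xfs_ilock_demote(ip, XFS_ILOCK_EXCL);   /* now held shared */
          xfs_iunlock(ip, XFS_ILOCK_SHARED);

          /* New pattern: start out shared for direct I/O. */
          xfs_ilock(ip, XFS_ILOCK_SHARED);
          xfs_iunlock(ip, XFS_ILOCK_SHARED);
  }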
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      af5b5afe
      xfs: don't allocate COW blocks for zeroing holes or unwritten extents · 172ed391
      Christoph Hellwig authored
      The iomap zeroing interface is smart enough to skip zeroing holes or
      unwritten extents.  Don't subvert this logic for reflink files.
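The check this relies on can be expressed roughly as the helper below (a sketch; the helper name and exact placement in xfs_file_iomap_begin are an approximation of the real patch):

  /* Sketch: zeroing needs a COW allocation only if a real, written extent exists. */
  static bool needs_cow_for_zeroing(const struct xfs_bmbt_irec *imap, int nimaps)
  {
          return nimaps &&
                  imap->br_startblock != HOLESTARTBLOCK &&
                  imap->br_state != XFS_EXT_UNWRITTEN;
  }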
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      172ed391
      ceph: fix potential memory leak in init_caches() · 1c789249
      Chengguang Xu authored
The ceph_file_cachep cache is not destroyed when fscache registration
fails in init_caches().
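A minimal sketch of the missing cleanup on that error path (the label name is hypothetical; ceph_fscache_register(), ceph_file_cachep and kmem_cache_destroy() are existing kernel symbols):

          /* Sketch: undo the file cache if fscache registration fails. */
          error = ceph_fscache_register();
          if (error)
                  goto bad_fscache;
          return 0;

  bad_fscache:
          kmem_cache_destroy(ceph_file_cachep);
          /* ... followed by destroying the caches created before it ... */
          return error;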
Signed-off-by: Chengguang Xu <cgxu519@icloud.com>
Reviewed-by: Ilya Dryomov <idryomov@gmail.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
      1c789249
      Btrfs: fix log replay failure after unlink and link combination · 1f250e92
      Filipe Manana authored
      If we have a file with 2 (or more) hard links in the same directory,
      remove one of the hard links, create a new file (or link an existing file)
      in the same directory with the name of the removed hard link, and then
      finally fsync the new file, we end up with a log that fails to replay,
      causing a mount failure.
      
      Example:
      
        $ mkfs.btrfs -f /dev/sdb
        $ mount /dev/sdb /mnt
      
        $ mkdir /mnt/testdir
        $ touch /mnt/testdir/foo
        $ ln /mnt/testdir/foo /mnt/testdir/bar
      
        $ sync
      
        $ unlink /mnt/testdir/bar
        $ touch /mnt/testdir/bar
        $ xfs_io -c "fsync" /mnt/testdir/bar
      
        <power failure>
      
        $ mount /dev/sdb /mnt
        mount: mount(2) failed: /mnt: No such file or directory
      
      When replaying the log, for that example, we also see the following in
      dmesg/syslog:
      
        [71813.671307] BTRFS info (device dm-0): failed to delete reference to bar, inode 258 parent 257
        [71813.674204] ------------[ cut here ]------------
        [71813.675694] BTRFS: Transaction aborted (error -2)
        [71813.677236] WARNING: CPU: 1 PID: 13231 at fs/btrfs/inode.c:4128 __btrfs_unlink_inode+0x17b/0x355 [btrfs]
        [71813.679669] Modules linked in: btrfs xfs f2fs dm_flakey dm_mod dax ghash_clmulni_intel ppdev pcbc aesni_intel aes_x86_64 crypto_simd cryptd glue_helper evdev psmouse i2c_piix4 parport_pc i2c_core pcspkr sg serio_raw parport button sunrpc loop autofs4 ext4 crc16 mbcache jbd2 zstd_decompress zstd_compress xxhash raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c crc32c_generic raid1 raid0 multipath linear md_mod ata_generic sd_mod virtio_scsi ata_piix libata virtio_pci virtio_ring crc32c_intel floppy virtio e1000 scsi_mod [last unloaded: btrfs]
        [71813.679669] CPU: 1 PID: 13231 Comm: mount Tainted: G        W        4.15.0-rc9-btrfs-next-56+ #1
        [71813.679669] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.10.2-0-g5f4c7b1-prebuilt.qemu-project.org 04/01/2014
        [71813.679669] RIP: 0010:__btrfs_unlink_inode+0x17b/0x355 [btrfs]
        [71813.679669] RSP: 0018:ffffc90001cef738 EFLAGS: 00010286
        [71813.679669] RAX: 0000000000000025 RBX: ffff880217ce4708 RCX: 0000000000000001
        [71813.679669] RDX: 0000000000000000 RSI: ffffffff81c14bae RDI: 00000000ffffffff
        [71813.679669] RBP: ffffc90001cef7c0 R08: 0000000000000001 R09: 0000000000000001
        [71813.679669] R10: ffffc90001cef5e0 R11: ffffffff8343f007 R12: ffff880217d474c8
        [71813.679669] R13: 00000000fffffffe R14: ffff88021ccf1548 R15: 0000000000000101
        [71813.679669] FS:  00007f7cee84c480(0000) GS:ffff88023fc80000(0000) knlGS:0000000000000000
        [71813.679669] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        [71813.679669] CR2: 00007f7cedc1abf9 CR3: 00000002354b4003 CR4: 00000000001606e0
        [71813.679669] Call Trace:
        [71813.679669]  btrfs_unlink_inode+0x17/0x41 [btrfs]
        [71813.679669]  drop_one_dir_item+0xfa/0x131 [btrfs]
        [71813.679669]  add_inode_ref+0x71e/0x851 [btrfs]
        [71813.679669]  ? __lock_is_held+0x39/0x71
        [71813.679669]  ? replay_one_buffer+0x53/0x53a [btrfs]
        [71813.679669]  replay_one_buffer+0x4a4/0x53a [btrfs]
        [71813.679669]  ? rcu_read_unlock+0x3a/0x57
        [71813.679669]  ? __lock_is_held+0x39/0x71
        [71813.679669]  walk_up_log_tree+0x101/0x1d2 [btrfs]
        [71813.679669]  walk_log_tree+0xad/0x188 [btrfs]
        [71813.679669]  btrfs_recover_log_trees+0x1fa/0x31e [btrfs]
        [71813.679669]  ? replay_one_extent+0x544/0x544 [btrfs]
        [71813.679669]  open_ctree+0x1cf6/0x2209 [btrfs]
        [71813.679669]  btrfs_mount_root+0x368/0x482 [btrfs]
        [71813.679669]  ? trace_hardirqs_on_caller+0x14c/0x1a6
        [71813.679669]  ? __lockdep_init_map+0x176/0x1c2
        [71813.679669]  ? mount_fs+0x64/0x10b
        [71813.679669]  mount_fs+0x64/0x10b
        [71813.679669]  vfs_kern_mount+0x68/0xce
        [71813.679669]  btrfs_mount+0x13e/0x772 [btrfs]
        [71813.679669]  ? trace_hardirqs_on_caller+0x14c/0x1a6
        [71813.679669]  ? __lockdep_init_map+0x176/0x1c2
        [71813.679669]  ? mount_fs+0x64/0x10b
        [71813.679669]  mount_fs+0x64/0x10b
        [71813.679669]  vfs_kern_mount+0x68/0xce
        [71813.679669]  do_mount+0x6e5/0x973
        [71813.679669]  ? memdup_user+0x3e/0x5c
        [71813.679669]  SyS_mount+0x72/0x98
        [71813.679669]  entry_SYSCALL_64_fastpath+0x1e/0x8b
        [71813.679669] RIP: 0033:0x7f7cedf150ba
        [71813.679669] RSP: 002b:00007ffca71da688 EFLAGS: 00000206
        [71813.679669] Code: 7f a0 e8 51 0c fd ff 48 8b 43 50 f0 0f ba a8 30 2c 00 00 02 72 17 41 83 fd fb 74 11 44 89 ee 48 c7 c7 7d 11 7f a0 e8 38 f5 8d e0 <0f> ff 44 89 e9 ba 20 10 00 00 eb 4d 48 8b 4d b0 48 8b 75 88 4c
        [71813.679669] ---[ end trace 83bd473fc5b4663b ]---
        [71813.854764] BTRFS: error (device dm-0) in __btrfs_unlink_inode:4128: errno=-2 No such entry
        [71813.886994] BTRFS: error (device dm-0) in btrfs_replay_log:2307: errno=-2 No such entry (Failed to recover log tree)
        [71813.903357] BTRFS error (device dm-0): cleaner transaction attach returned -30
        [71814.128078] BTRFS error (device dm-0): open_ctree failed
      
      This happens because the log has inode reference items for both inode 258
      (the first file we created) and inode 259 (the second file created), and
      when processing the reference item for inode 258, we replace the
      corresponding item in the subvolume tree (which has two names, "foo" and
      "bar") witht he one in the log (which only has one name, "foo") without
      removing the corresponding dir index keys from the parent directory.
      Later, when processing the inode reference item for inode 259, which has
      a name of "bar" associated to it, we notice that dir index entries exist
      for that name and for a different inode, so we attempt to unlink that
      name, which fails because the inode reference item for inode 258 no longer
      has the name "bar" associated to it, making a call to btrfs_unlink_inode()
      fail with a -ENOENT error.
      
      Fix this by unlinking all the names in an inode reference item from a
      subvolume tree that are not present in the inode reference item found in
      the log tree, before overwriting it with the item from the log tree.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
      1f250e92
      Btrfs: fix log replay failure after linking special file and fsync · 9a6509c4
      Filipe Manana authored
If, in the same transaction, we rename a special file (fifo, character/block
device or symbolic link), create a hard link for it with its old name and
then sync the log, we end up with a log that cannot be replayed: when
attempting to replay it, an EEXIST error is returned and mounting the
filesystem fails. Example scenario:
      
        $ mkfs.btrfs -f /dev/sdc
        $ mount /dev/sdc /mnt
        $ mkdir /mnt/testdir
        $ mkfifo /mnt/testdir/foo
        # Make sure everything done so far is durably persisted.
        $ sync
      
        # Create some unrelated file and fsync it, this is just to create a log
        # tree. The file must be in the same directory as our special file.
        $ touch /mnt/testdir/f1
        $ xfs_io -c "fsync" /mnt/testdir/f1
      
        # Rename our special file and then create a hard link with its old name.
        $ mv /mnt/testdir/foo /mnt/testdir/bar
        $ ln /mnt/testdir/bar /mnt/testdir/foo
      
        # Create some other unrelated file and fsync it, this is just to persist
        # the log tree which was modified by the previous rename and link
        # operations. Alternatively we could have modified file f1 and fsync it.
        $ touch /mnt/f2
        $ xfs_io -c "fsync" /mnt/f2
      
        <power failure>
      
        $ mount /dev/sdc /mnt
        mount: mount /dev/sdc on /mnt failed: File exists
      
This happens because both the log tree and the subvolume's tree have
an entry in the directory "testdir" with the same name, that is, there
is one key (258 INODE_REF 257) in the subvolume tree and another one in
the log tree (where 258 is the inode number of our special file and 257
is the inode of the directory "testdir"). Only the data of those two keys
differs: in the subvolume tree the index field of the inode reference has
a value of 3, while in the log tree it has a value of 5. Because the same
key exists in both trees but with a different index, the log replay fails
with an -EEXIST error when attempting to replay the inode reference from
the log tree.
      
      Fix this by setting the last_unlink_trans field of the inode (our special
      file) to the current transaction id when a hard link is created, as this
      forces logging the parent directory inode, solving the conflict at log
      replay time.
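Per that description, the essence of the change in the hard link path is a single assignment along these lines (a sketch, not the literal diff; BTRFS_I() and the last_unlink_trans field exist in the btrfs code):

          /*
           * Sketch: remember that a link was created in this transaction so a
           * later fsync also logs the parent directory.
           */
          BTRFS_I(inode)->last_unlink_trans = trans->transid;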
      
      A new generic test case for fstests was also submitted.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
      9a6509c4
      Btrfs: send, fix issuing write op when processing hole in no data mode · d4dfc0f4
      Filipe Manana authored
      When doing an incremental send of a filesystem with the no-holes feature
      enabled, we end up issuing a write operation when using the no data mode
      send flag, instead of issuing an update extent operation. Fix this by
      issuing the update extent operation instead.
      
      Trivial reproducer:
      
        $ mkfs.btrfs -f -O no-holes /dev/sdc
        $ mkfs.btrfs -f /dev/sdd
        $ mount /dev/sdc /mnt/sdc
        $ mount /dev/sdd /mnt/sdd
      
        $ xfs_io -f -c "pwrite -S 0xab 0 32K" /mnt/sdc/foobar
        $ btrfs subvolume snapshot -r /mnt/sdc /mnt/sdc/snap1
      
        $ xfs_io -c "fpunch 8K 8K" /mnt/sdc/foobar
        $ btrfs subvolume snapshot -r /mnt/sdc /mnt/sdc/snap2
      
        $ btrfs send /mnt/sdc/snap1 | btrfs receive /mnt/sdd
        $ btrfs send --no-data -p /mnt/sdc/snap1 /mnt/sdc/snap2 \
             | btrfs receive -vv /mnt/sdd
      
      Before this change the output of the second receive command is:
      
        receiving snapshot snap2 uuid=f6922049-8c22-e544-9ff9-fc6755918447...
        utimes
        write foobar, offset 8192, len 8192
        utimes foobar
        BTRFS_IOC_SET_RECEIVED_SUBVOL uuid=f6922049-8c22-e544-9ff9-...
      
      After this change it is:
      
        receiving snapshot snap2 uuid=564d36a3-ebc8-7343-aec9-bf6fda278e64...
        utimes
        update_extent foobar: offset=8192, len=8192
        utimes foobar
        BTRFS_IOC_SET_RECEIVED_SUBVOL uuid=564d36a3-ebc8-7343-aec9-bf6fda278e64...
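In code terms the fix amounts to a check like the following sketch in the hole-processing path of send (BTRFS_SEND_FLAG_NO_FILE_DATA and send_update_extent() are existing symbols in fs/btrfs/send.c; the placement shown is simplified):

          /* Sketch: in no-data mode, describe the hole instead of writing zeroes. */
          if (sctx->flags & BTRFS_SEND_FLAG_NO_FILE_DATA)
                  return send_update_extent(sctx, offset, end - offset);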
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
      d4dfc0f4
      btrfs: use proper endianness accessors for super_copy · 3c181c12
      Anand Jain authored
      The fs_info::super_copy is a byte copy of the on-disk structure and all
      members must use the accessor macros/functions to obtain the right
      value.  This was missing in update_super_roots and in sysfs readers.
      
Moving the filesystem between hosts of opposite endianness will report
bogus numbers in sysfs, and mount may fail as the root will not be
restored correctly. If the filesystem is always used on hosts of the same
endianness, this is not a problem.
      
      Fix this by using the btrfs_set_super...() functions to set
      fs_info::super_copy values, and for the sysfs, use the cached
      fs_info::nodesize/sectorsize values.
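For illustration, the pattern of the fix looks roughly like the sketch below (not the literal diff; the local variables are hypothetical, the accessors are the existing btrfs helpers):

          /* super_copy is a byte copy of the on-disk, little-endian structure. */
          struct btrfs_super_block *super = fs_info->super_copy;

          /* Wrong: stores a CPU-endian value into the on-disk copy. */
          /* super->root = bytenr; */

          /* Right: the setter performs the byte-order conversion. */
          btrfs_set_super_root(super, bytenr);

          /* sysfs readers report the cached native-endian value instead: */
          snprintf(buf, PAGE_SIZE, "%u\n", fs_info->nodesize);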
      
      CC: stable@vger.kernel.org
      Fixes: df93589a ("btrfs: export more from FS_INFO to sysfs")
Signed-off-by: Anand Jain <anand.jain@oracle.com>
Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
Reviewed-by: David Sterba <dsterba@suse.com>
      [ update changelog ]
Signed-off-by: David Sterba <dsterba@suse.com>
      3c181c12
      btrfs: alloc_chunk: fix DUP stripe size handling · 92e222df
      Hans van Kranenburg authored
      In case of using DUP, we search for enough unallocated disk space on a
      device to hold two stripes.
      
      The devices_info[ndevs-1].max_avail that holds the amount of unallocated
      space found is directly assigned to stripe_size, while it's actually
      twice the stripe size.
      
Later on in the code, an unconditional division of stripe_size by
dev_stripes corrects the value, but in the meantime there is a check that
stripe_size does not exceed max_chunk_size. Since stripe_size is twice
the intended amount during this check, the check reduces stripe_size to
max_chunk_size whenever the correct stripe_size is more than half of
max_chunk_size.
      
      The unconditional division later tries to correct stripe_size, but will
      actually make sure we can't allocate more than half the max_chunk_size.
      
Fix this by moving the division by dev_stripes before the max chunk size
check, so stripe_size always contains the right value, instead of
applying a duct-tape division further on to fix it up again.
      
      Since in all other cases than DUP, dev_stripes is 1, this change only
      affects DUP.
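In code terms the fix is essentially this reordering (a sketch using the variable names of the chunk allocator; the clamping logic is simplified):

          /* max_avail covers dev_stripes stripes (2 for DUP), so divide first... */
          stripe_size = div_u64(devices_info[ndevs - 1].max_avail, dev_stripes);

          /* ...and only then clamp the per-stripe size against max_chunk_size. */
          if (stripe_size * data_stripes > max_chunk_size)
                  stripe_size = div_u64(max_chunk_size, data_stripes);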
      
      Other attempts in the past were made to fix this:
      * 37db63a4 "Btrfs: fix max chunk size check in chunk allocator" tried
      to fix the same problem, but still resulted in part of the code acting
      on a wrongly doubled stripe_size value.
      * 86db2578 "Btrfs: fix max chunk size on raid5/6" unintentionally
      broke this fix again.
      
      The real problem was already introduced with the rest of the code in
      73c5de00.
      
The user-visible result, however, will be that the max chunk size for DUP
suddenly doubles, while the code is actually acting according to its
limits again, as it did 5 years ago.
Reported-by: Naohiro Aota <naohiro.aota@wdc.com>
      Link: https://www.spinics.net/lists/linux-btrfs/msg69752.html
      Fixes: 73c5de00 ("btrfs: quasi-round-robin for chunk allocation")
      Fixes: 86db2578 ("Btrfs: fix max chunk size on raid5/6")
Signed-off-by: Hans van Kranenburg <hans.van.kranenburg@mendix.com>
Reviewed-by: David Sterba <dsterba@suse.com>
      [ update comment ]
Signed-off-by: David Sterba <dsterba@suse.com>
      92e222df
      btrfs: Handle btrfs_set_extent_delalloc failure in relocate_file_extent_cluster · 765f3ceb
      Nikolay Borisov authored
      Essentially duplicate the error handling from the above block which
      handles the !PageUptodate(page) case and additionally clear
      EXTENT_BOUNDARY.
Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
      765f3ceb
      btrfs: handle failure of add_pending_csums · ac01f26a
      Nikolay Borisov authored
      add_pending_csums was added as part of the new data=ordered
      implementation in e6dcd2dc ("Btrfs: New data=ordered
      implementation"). Even back then it called the btrfs_csum_file_blocks
      which can fail but it never bothered handling the failure. In ENOMEM
      situation this could lead to the filesystem failing to write the
      checksums for a particular extent and not detect this. On read this
      could lead to the filesystem erroring out due to crc mismatch. Fix it by
      propagating failure from add_pending_csums and handling them.
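A sketch of the caller-side handling this implies (roughly what the ordered-I/O completion path needs to do; simplified):

          ret = add_pending_csums(trans, inode, &ordered_extent->list);
          if (ret) {
                  /* A failed csum insertion now aborts the transaction instead
                   * of being silently ignored. */
                  btrfs_abort_transaction(trans, ret);
                  goto out;
          }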
Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
      ac01f26a
      btrfs: use kvzalloc to allocate btrfs_fs_info · a8fd1f71
      Jeff Mahoney authored
      The srcu_struct in btrfs_fs_info scales in size with NR_CPUS.  On
      kernels built with NR_CPUS=8192, this can result in kmalloc failures
      that prevent mounting.
      
      There is work in progress to try to resolve this for every user of
      srcu_struct but using kvzalloc will work around the failures until
      that is complete.
      
      As an example with NR_CPUS=512 on x86_64: the overall size of
      subvol_srcu is 3460 bytes, fs_info is 6496.
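The change itself is essentially a one-line allocation swap, roughly:

          /* Fall back to vmalloc when a large contiguous allocation is unavailable. */
          fs_info = kvzalloc(sizeof(struct btrfs_fs_info), GFP_KERNEL);   /* was kzalloc() */
          /* ...and the matching kfree(fs_info) on teardown becomes kvfree(fs_info). */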
Signed-off-by: Jeff Mahoney <jeffm@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
      a8fd1f71
  3. 26 Feb, 2018 10 commits
  4. 22 Feb, 2018 6 commits
  5. 21 Feb, 2018 1 commit
  6. 16 Feb, 2018 3 commits
      ovl: check ERR_PTR() return value from ovl_lookup_real() · 7168179f
      Amir Goldstein authored
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Fixes: 06170154 ("ovl: lookup indexed ancestor of lower dir")
Signed-off-by: Amir Goldstein <amir73il@gmail.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
      7168179f
      ovl: check lower ancestry on encode of lower dir file handle · 2ca3c148
      Amir Goldstein authored
      This change relaxes copy up on encode of merge dir with lower layer > 1
      and handles the case of encoding a merge dir with lower layer 1, where an
ancestor is a non-indexed merge dir. In that case, decoding the lower
file handle would not be possible if the non-indexed ancestor is
redirected before or after encode.
      
      Before encoding a non-upper directory file handle from real layer N, we
      need to check if it will be possible to reconnect an overlay dentry from
      the real lower decoded dentry. This is done by following the overlay
      ancestry up to a "layer N connected" ancestor and verifying that all
      parents along the way are "layer N connectable". If an ancestor that is
      NOT "layer N connectable" is found, we need to copy up an ancestor, which
      is "layer N connectable", thus making that ancestor "layer N connected".
      For example:
      
       layer 1: /a
       layer 2: /a/b/c
      
      The overlay dentry /a is NOT "layer 2 connectable", because if dir /a is
      copied up and renamed, upper dir /a will be indexed by lower dir /a from
      layer 1. The dir /a from layer 2 will never be indexed, so the algorithm
      in ovl_lookup_real_ancestor() (*) will not be able to lookup a connected
      overlay dentry from the connected lower dentry /a/b/c.
      
      To avoid this problem on decode time, we need to copy up an ancestor of
      /a/b/c, which is "layer 2 connectable", on encode time. That ancestor is
      /a/b. After copy up (and index) of /a/b, it will become "layer 2 connected"
      and when the time comes to decode the file handle from lower dentry /a/b/c,
      ovl_lookup_real_ancestor() will find the indexed ancestor /a/b and decoding
      a connected overlay dentry will be accomplished.
      
      (*) the algorithm in ovl_lookup_real_ancestor() can be improved to lookup
      an entry /a in the lower layers above layer N and find the indexed dir /a
      from layer 1. If that improvement is made, then the check for "layer N
      connected" will need to verify there are no redirects in lower layers above
      layer N. In the example above, /a will be "layer 2 connectable". However,
      if layer 2 dir /a is a target of a layer 1 redirect, then /a will NOT be
      "layer 2 connectable":
      
       layer 1: /A (redirect = /a)
       layer 2: /a/b/c
Signed-off-by: Amir Goldstein <amir73il@gmail.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
      2ca3c148
      ovl: hash non-dir by lower inode for fsnotify · 764baba8
      Amir Goldstein authored
Commit 31747eda ("ovl: hash directory inodes for fsnotify") fixed an
issue where an inotify watch on a directory stops getting events after
dentry caches are dropped.
      
      A similar issue exists for non-dir non-upper files, for example:
      
      $ mkdir -p lower upper work merged
      $ touch lower/foo
$ mount -t overlay -o lowerdir=lower,workdir=work,upperdir=upper none merged
      $ inotifywait merged/foo &
      $ echo 2 > /proc/sys/vm/drop_caches
      $ cat merged/foo
      
      inotifywait doesn't get the OPEN event, because ovl_lookup() called
      from 'cat' allocates a new overlay inode and does not reuse the
      watched inode.
      
      Fix this by hashing non-dir overlay inodes by lower real inode in
      the following cases that were not hashed before this change:
       - A non-upper overlay mount
       - A lower non-hardlink when index=off
      
      A helper ovl_hash_bylower() was added to put all the logic and
      documentation about which real inode an overlay inode is hashed by
      into one place.
      
      The issue dates back to initial version of overlayfs, but this
      patch depends on ovl_inode code that was introduced in kernel v4.13.
      
      Cc: <stable@vger.kernel.org> #v4.13
Signed-off-by: Amir Goldstein <amir73il@gmail.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
      764baba8
  7. 13 Feb, 2018 2 commits
      gfs2: Fixes to "Implement iomap for block_map" · 49edd5bf
      Andreas Gruenbacher authored
      It turns out that commit 3974320c "Implement iomap for block_map"
      introduced a few bugs that trigger occasional failures with xfstest
      generic/476:
      
      In gfs2_iomap_begin, we jump to do_alloc when we determine that we are
      beyond the end of the allocated metadata (height > ip->i_height).
      There, we can end up calling hole_size with a metapath that doesn't
      match the current metadata tree, which doesn't make sense.  After
      untangling the code at do_alloc, fix this by checking if the block we
      are looking for is within the range of allocated metadata.
      
      In addition, add a BUG() in case gfs2_iomap_begin is accidentally called
      for reading stuffed files: this is handled separately.  Make sure we
      don't truncate iomap->length for reads beyond the end of the file; in
      that case, the entire range counts as a hole.
      
      Finally, revert to taking a bitmap write lock when doing allocations.
      It's unclear why that change didn't lead to any failures during testing.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Signed-off-by: Bob Peterson <rpeterso@redhat.com>
      49edd5bf
      vfs/proc/kcore, x86/mm/kcore: Fix SMAP fault when dumping vsyscall user page · 595dd46e
      Jia Zhang authored
      Commit:
      
        df04abfd ("fs/proc/kcore.c: Add bounce buffer for ktext data")
      
      ... introduced a bounce buffer to work around CONFIG_HARDENED_USERCOPY=y.
      However, accessing the vsyscall user page will cause an SMAP fault.
      
Replacing memcpy() with copy_from_user() fixes this bug, but adding a
common way to handle this sort of user page may be useful in the future.

Currently, only the vsyscall page requires KCORE_USER.
Signed-off-by: Jia Zhang <zhang.jia@linux.alibaba.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: jolsa@redhat.com
Link: http://lkml.kernel.org/r/1518446694-21124-2-git-send-email-zhang.jia@linux.alibaba.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
      595dd46e
  8. 11 Feb, 2018 1 commit
      vfs: do bulk POLL* -> EPOLL* replacement · a9a08845
      Linus Torvalds authored
      This is the mindless scripted replacement of kernel use of POLL*
      variables as described by Al, done by this script:
      
          for V in IN OUT PRI ERR RDNORM RDBAND WRNORM WRBAND HUP RDHUP NVAL MSG; do
              L=`git grep -l -w POLL$V | grep -v '^t' | grep -v /um/ | grep -v '^sa' | grep -v '/poll.h$'|grep -v '^D'`
              for f in $L; do sed -i "-es/^\([^\"]*\)\(\<POLL$V\>\)/\\1E\\2/" $f; done
          done
      
      with de-mangling cleanups yet to come.
      
      NOTE! On almost all architectures, the EPOLL* constants have the same
values as the POLL* constants do.  But the keyword here is "almost".
      For various bad reasons they aren't the same, and epoll() doesn't
      actually work quite correctly in some cases due to this on Sparc et al.
      
      The next patch from Al will sort out the final differences, and we
      should be all done.
Scripted-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a9a08845
  9. 08 Feb, 2018 3 commits