1. 23 May, 2017 3 commits
    • mnt: Make propagate_umount less slow for overlapping mount propagation trees · 296990de
      Eric W. Biederman authored
      Andrei Vagin pointed out that the time to execute propagate_umount can go
      non-linear (and take a ludicrous amount of time) when the mount
      propagation trees of the mounts to be unmounted by a lazy unmount
      overlap.
      
      Make the walk of the mount propagation trees nearly linear by
      remembering which mounts have already been visited, allowing
      subsequent walks to detect when walking a mount propagation tree, or a
      subtree of a mount propagation tree, would be duplicate work and to skip
      it entirely.
      
      Walk the list of mounts whose propagation trees need to be traversed
      from the mount highest in the mount tree to mounts lower in the mount
      tree, so that the odds are higher that the code will walk the largest
      trees first, allowing later tree walks to be skipped entirely.
      
      Add cleanup_umount_visitation to remove the code's memory of which
      mounts have been visited.
      
      Add the functions last_slave and skip_propagation_subtree to allow
      skipping appropriate parts of the mount propagation tree without
      needing to change the logic of the rest of the code.
      
      A script to generate overlapping mount propagation trees:
      
      $ cat run.sh
      set -e
      mount -t tmpfs zdtm /mnt
      mkdir -p /mnt/1 /mnt/2
      mount -t tmpfs zdtm /mnt/1
      mount --make-shared /mnt/1
      mkdir /mnt/1/1
      
      iteration=10
      if [ -n "$1" ] ; then
      	iteration=$1
      fi
      
      for i in $(seq $iteration); do
      	mount --bind /mnt/1/1 /mnt/1/1
      done
      
      mount --rbind /mnt/1 /mnt/2
      
      TIMEFORMAT='%Rs'
      nr=$(( ( 2 ** ( $iteration + 1 ) ) + 1 ))
      echo -n "umount -l /mnt/1 -> $nr        "
      time umount -l /mnt/1
      
      nr=$(cat /proc/self/mountinfo | grep zdtm | wc -l )
      echo -n "umount -l /mnt/2 -> $nr        "
      time umount -l /mnt/2
      
      $ for i in $(seq 9 19); do echo $i; unshare -Urm bash ./run.sh $i; done
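      The mount counts in the table below follow the nr formula from run.sh,
      2^(iteration+1) + 1.  For example, for iteration=10 (a quick sanity check
      in the shell):
      
      $ echo $(( ( 2 ** ( 10 + 1 ) ) + 1 ))
      2049
      
      which matches the 2049-mount row below.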
      
      Here are the performance numbers with and without the patch:
      
           mhash |  8192   |  8192  | 1048576 | 1048576
          mounts | before  | after  |  before | after
          ------------------------------------------------
            1025 |  0.040s | 0.016s |  0.038s | 0.019s
            2049 |  0.094s | 0.017s |  0.080s | 0.018s
            4097 |  0.243s | 0.019s |  0.206s | 0.023s
            8193 |  1.202s | 0.028s |  1.562s | 0.032s
           16385 |  9.635s | 0.036s |  9.952s | 0.041s
           32769 | 60.928s | 0.063s | 44.321s | 0.064s
           65537 |         | 0.097s |         | 0.097s
          131073 |         | 0.233s |         | 0.176s
          262145 |         | 0.653s |         | 0.344s
          524289 |         | 2.305s |         | 0.735s
         1048577 |         | 7.107s |         | 2.603s
      
      Andrei Vagin reports that fixing this performance problem is part of the
      work to fix CVE-2016-6213.
      
      Cc: stable@vger.kernel.org
      Fixes: a05964f3 ("[PATCH] shared mounts handling: umount")
      Reported-by: Andrei Vagin <avagin@openvz.org>
      Reviewed-by: Andrei Vagin <avagin@virtuozzo.com>
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
    • mnt: In propgate_umount handle visiting mounts in any order · 99b19d16
      Eric W. Biederman authored
      While investigating some poor umount performance I realized that in
      the case of overlapping mount trees where some of the mounts are locked,
      the code has been failing to unmount all of the mounts it should
      have been unmounting.
      
      This failure to unmount all of the necessary
      mounts can be reproduced with:
      
      $ cat locked_mounts_test.sh
      
      mount -t tmpfs test-base /mnt
      mount --make-shared /mnt
      mkdir -p /mnt/b
      
      mount -t tmpfs test1 /mnt/b
      mount --make-shared /mnt/b
      mkdir -p /mnt/b/10
      
      mount -t tmpfs test2 /mnt/b/10
      mount --make-shared /mnt/b/10
      mkdir -p /mnt/b/10/20
      
      mount --rbind /mnt/b /mnt/b/10/20
      
      unshare -Urm --propagation unchanged /bin/sh -c 'sleep 5; if [ $(grep test /proc/self/mountinfo | wc -l) -eq 1 ] ; then echo SUCCESS ; else echo FAILURE ; fi' &
      sleep 1
      umount -l /mnt/b
      wait %%
      
      $ unshare -Urm ./locked_mounts_test.sh
      
      This failure is corrected by removing the prepass that marks mounts
      that may be umounted.
      
      A first pass is added that unmounts mounts if possible and, if not, sets
      the mount mark on mounts that could be unmounted if they weren't locked
      and adds them to a list of umount possibilities.  This first pass
      reconsiders a mount's parent if the parent is on the list of umount
      possibilities, ensuring that information about unmountability passes
      from child to parent.
      
      A second pass then walks through all mounts that are unmounted and processes
      their children, unmounting them or marking them for reparenting.
      
      A last pass cleans up the state on the mounts that could not be unmounted
      and, if applicable, reparents them to their first parent that remained
      mounted.
      
      While a bit longer than the old code, this code is much more robust,
      as it allows information to flow up from the leaves and down
      from the trunk, making the order in which mounts are encountered
      in the umount propagation tree irrelevant.
      
      Cc: stable@vger.kernel.org
      Fixes: 0c56fe31 ("mnt: Don't propagate unmounts to locked mounts")
      Reviewed-by: Andrei Vagin <avagin@virtuozzo.com>
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
    • mnt: In umount propagation reparent in a separate pass · 570487d3
      Eric W. Biederman authored
      It was observed that in some pathological cases the current code
      does not unmount everything it should.  After investigation it
      was determined that the issue is that mnt_change_mountpoint can
      change which mounts are available to be unmounted during mount
      propagation, which is wrong.
      
      The trivial reproducer is:
      $ cat ./pathological.sh
      
      mount -t tmpfs test-base /mnt
      cd /mnt
      mkdir 1 2 1/1
      mount --bind 1 1
      mount --make-shared 1
      mount --bind 1 2
      mount --bind 1/1 1/1
      mount --bind 1/1 1/1
      echo
      grep test-base /proc/self/mountinfo
      umount 1/1
      echo
      grep test-base /proc/self/mountinfo
      
      $ unshare -Urm ./pathological.sh
      
      The expected output looks like:
      46 31 0:25 / /mnt rw,relatime - tmpfs test-base rw,uid=1000,gid=1000
      47 46 0:25 /1 /mnt/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
      48 46 0:25 /1 /mnt/2 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
      49 54 0:25 /1/1 /mnt/1/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
      50 53 0:25 /1/1 /mnt/2/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
      51 49 0:25 /1/1 /mnt/1/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
      54 47 0:25 /1/1 /mnt/1/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
      53 48 0:25 /1/1 /mnt/2/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
      52 50 0:25 /1/1 /mnt/2/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
      
      46 31 0:25 / /mnt rw,relatime - tmpfs test-base rw,uid=1000,gid=1000
      47 46 0:25 /1 /mnt/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
      48 46 0:25 /1 /mnt/2 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
      
      The output without the fix looks like:
      46 31 0:25 / /mnt rw,relatime - tmpfs test-base rw,uid=1000,gid=1000
      47 46 0:25 /1 /mnt/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
      48 46 0:25 /1 /mnt/2 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
      49 54 0:25 /1/1 /mnt/1/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
      50 53 0:25 /1/1 /mnt/2/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
      51 49 0:25 /1/1 /mnt/1/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
      54 47 0:25 /1/1 /mnt/1/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
      53 48 0:25 /1/1 /mnt/2/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
      52 50 0:25 /1/1 /mnt/2/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
      
      46 31 0:25 / /mnt rw,relatime - tmpfs test-base rw,uid=1000,gid=1000
      47 46 0:25 /1 /mnt/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
      48 46 0:25 /1 /mnt/2 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
      52 48 0:25 /1/1 /mnt/2/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
      
      That last mount in the output was in the propagation tree to be unmounted but
      was missed because mnt_change_mountpoint changed its parent before the walk
      through the mount propagation tree observed it.
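      A quick way to compare kernels is to count the test-base lines in the two
      mountinfo dumps the script prints (my one-liner, not from the original
      commit); going by the dumps above, it should print 12 on a fixed kernel
      and 13 without the fix:
      
      $ unshare -Urm sh -c 'sh ./pathological.sh | grep -c test-base'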
      
      Cc: stable@vger.kernel.org
      Fixes: 1064f874 ("mnt: Tuck mounts under others instead of creating shadow/side mounts.")
      Acked-by: Andrei Vagin <avagin@virtuozzo.com>
      Reviewed-by: Ram Pai <linuxram@us.ibm.com>
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
  2. 03 Feb, 2017 1 commit
    • mnt: Tuck mounts under others instead of creating shadow/side mounts. · 1064f874
      Eric W. Biederman authored
      Ever since mount propagation was introduced, in cases where a mount is
      propagated to a parent mount and mountpoint pair that is already in use,
      the code has placed the new mount behind the old mount in the mount hash
      table.
      
      This implementation detail is problematic as it allows creating
      arbitrary length mount hash chains.
      
      Furthermore it invalidates the constraint maintained elsewhere in the
      mount code that a parent mount and mountpoint pair will have exactly
      one mount upon them, making this special case hard to deal with and to
      talk about in the mount code.
      
      Modify mount propagation to notice when there is already a mount at
      the parent mount and mountpoint where a new mount is propagating to
      and place that preexisting mount on top of the new mount.
      
      Modify unmount propagation to notice when a mount that is being
      unmounted has another mount on top of it (and no other children), and
      to replace the unmounted mount with the mount on top of it.
      
      Move the MNT_UMOUNT test from __lookup_mnt_last into
      __propagate_umount as that is the only call of __lookup_mnt_last where
      MNT_UMOUNT may be set on any mount visible in the mount hash table.
      
      These modifications allow:
       - __lookup_mnt_last to be removed.
       - attach_shadows to be renamed __attach_mnt and its shadow
         handling to be removed.
       - commit_tree to be simplified
       - copy_tree to be simplified
      
      The result is an easier to understand tree of mounts that does not
      allow creation of arbitrary length hash chains in the mount hash table.
      
      The result is also a very slight userspace visible difference in semantics.
      The following two cases now behave identically, where before order
      mattered:
      
      case 1: (explicit user action)
      	B is a slave of A
      	mount something on A/a, it will propagate to B/a
      	and then mount something on B/a
      
      case 2: (tucked mount)
      	B is a slave of A
      	mount something on B/a
      	and then mount something on A/a
      
      Historically umount A/a would fail in case 1 and succeed in case 2.
      Now umount A/a succeeds in both configurations.
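      A minimal shell sketch of the two orderings (my construction, not from the
      original patch), run under unshare -Urm with a throwaway tmpfs, where
      --make-slave makes B a slave of A:
      
      mount -t tmpfs scratch /mnt
      mkdir /mnt/A /mnt/B
      mount --bind /mnt/A /mnt/A
      mount --make-shared /mnt/A
      mount --bind /mnt/A /mnt/B
      mount --make-slave /mnt/B        # B is now a slave of A
      mkdir /mnt/A/a
      
      # case 1: mount on A/a first (propagates to B/a), then on B/a
      mount -t tmpfs m1 /mnt/A/a
      mount -t tmpfs m2 /mnt/B/a
      umount /mnt/A/a                  # used to fail, now succeeds
      
      # case 2: (in a fresh setup) mount on B/a first, then on A/a
      mount -t tmpfs m1 /mnt/B/a
      mount -t tmpfs m2 /mnt/A/a
      umount /mnt/A/a                  # succeeded before and still does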
      
      This very small change in semantics appears if anything to be a bug
      fix to me, and my survey of userspace leads me to believe that no programs
      will notice or care about this subtle semantic change.
      
      v2: Updated mnt_change_mountpoint to not call dput or mntput
      and instead to decrement the counts directly.  It is guaranteed
      that there will be other references when mnt_change_mountpoint is
      called so this is safe.
      
      v3: Moved put_mountpoint under mount_lock in attach_recursive_mnt,
          as the locking in fs/namespace.c changed between v2 and v3.
      
      v4: Reworked the logic in propagate_mount_busy and __propagate_umount
          that detects when a mount completely covers another mount.
      
      v5: Removed unnecessary tests whose result is always true in
          find_topper and attach_recursive_mnt.
      
      v6: Document the user space visible semantic difference.
      
      Cc: stable@vger.kernel.org
      Fixes: b90fa9ae ("[PATCH] shared mount handling: bind and rbind")
      Tested-by: Andrei Vagin <avagin@virtuozzo.com>
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
  3. 16 Dec, 2016 1 commit
  4. 30 Sep, 2016 1 commit
    • mnt: Add a per mount namespace limit on the number of mounts · d2921684
      Eric W. Biederman authored
      CAI Qian <caiqian@redhat.com> pointed out that the semantics
      of shared subtrees make it possible to create an exponentially
      increasing number of mounts in a mount namespace.
      
          mkdir /tmp/1 /tmp/2
          mount --make-rshared /
          for i in $(seq 1 20) ; do mount --bind /tmp/1 /tmp/2 ; done
      
      Will create 2^20 or 1048576 mounts, which is a practical problem
      as some people have managed to hit this by accident.
      
      As such CVE-2016-6213 was assigned.
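      The same growth can be reproduced in a throwaway mount namespace, leaving
      the host's mount table untouched (a sketch; /tmp/1 and /tmp/2 must already
      exist, and with the limit added by this patch the loop will start failing
      once the namespace reaches the default of 100,000 mounts):
      
          mkdir -p /tmp/1 /tmp/2
          unshare -Urm sh -c '
              mount --make-rshared /
              for i in $(seq 1 20) ; do mount --bind /tmp/1 /tmp/2 ; done
              wc -l < /proc/self/mountinfo
          '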
      
      Ian Kent <raven@themaw.net> described the situation for autofs users
      as follows:
      
      > The number of mounts for direct mount maps is usually not very large because of
      > the way they are implemented, large direct mount maps can have performance
      > problems. There can be anywhere from a few (likely case a few hundred) to less
      > than 10000, plus mounts that have been triggered and not yet expired.
      >
      > Indirect mounts have one autofs mount at the root plus the number of mounts that
      > have been triggered and not yet expired.
      >
      > The number of autofs indirect map entries can range from a few to the common
      > case of several thousand and in rare cases up to between 30000 and 50000. I've
      > not heard of people with maps larger than 50000 entries.
      >
      > The larger the number of map entries the greater the possibility for a large
      > number of active mounts so it's not hard to expect cases of a 1000 or somewhat
      > more active mounts.
      
      So I am setting the default number of mounts allowed per mount
      namespace at 100,000.  This is more than enough for any use case I
      know of, but small enough to quickly stop an exponential increase
      in mounts, which should be perfect for catching misconfigurations and
      malfunctioning programs.
      
      For anyone who needs a higher limit this can be changed by writing
      to the new /proc/sys/fs/mount-max sysctl.
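      For example, to inspect the limit and then raise it (as root; the default
      shown is the 100,000 chosen above, and the fs.mount-max spelling is simply
      the usual sysctl mapping of the /proc path):
      
          $ cat /proc/sys/fs/mount-max
          100000
          # raise the limit to one million
          $ echo 1000000 > /proc/sys/fs/mount-max
          # or equivalently
          $ sysctl -w fs.mount-max=1000000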
      Tested-by: CAI Qian <caiqian@redhat.com>
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
  5. 05 May, 2016 1 commit
    • propogate_mnt: Handle the first propogated copy being a slave · 5ec0811d
      Eric W. Biederman authored
      When the first propagated copy was a slave, the following oops would result:
      > BUG: unable to handle kernel NULL pointer dereference at 0000000000000010
      > IP: [<ffffffff811fba4e>] propagate_one+0xbe/0x1c0
      > PGD bacd4067 PUD bac66067 PMD 0
      > Oops: 0000 [#1] SMP
      > Modules linked in:
      > CPU: 1 PID: 824 Comm: mount Not tainted 4.6.0-rc5userns+ #1523
      > Hardware name: Bochs Bochs, BIOS Bochs 01/01/2007
      > task: ffff8800bb0a8000 ti: ffff8800bac3c000 task.ti: ffff8800bac3c000
      > RIP: 0010:[<ffffffff811fba4e>]  [<ffffffff811fba4e>] propagate_one+0xbe/0x1c0
      > RSP: 0018:ffff8800bac3fd38  EFLAGS: 00010283
      > RAX: 0000000000000000 RBX: ffff8800bb77ec00 RCX: 0000000000000010
      > RDX: 0000000000000000 RSI: ffff8800bb58c000 RDI: ffff8800bb58c480
      > RBP: ffff8800bac3fd48 R08: 0000000000000001 R09: 0000000000000000
      > R10: 0000000000001ca1 R11: 0000000000001c9d R12: 0000000000000000
      > R13: ffff8800ba713800 R14: ffff8800bac3fda0 R15: ffff8800bb77ec00
      > FS:  00007f3c0cd9b7e0(0000) GS:ffff8800bfb00000(0000) knlGS:0000000000000000
      > CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      > CR2: 0000000000000010 CR3: 00000000bb79d000 CR4: 00000000000006e0
      > Stack:
      >  ffff8800bb77ec00 0000000000000000 ffff8800bac3fd88 ffffffff811fbf85
      >  ffff8800bac3fd98 ffff8800bb77f080 ffff8800ba713800 ffff8800bb262b40
      >  0000000000000000 0000000000000000 ffff8800bac3fdd8 ffffffff811f1da0
      > Call Trace:
      >  [<ffffffff811fbf85>] propagate_mnt+0x105/0x140
      >  [<ffffffff811f1da0>] attach_recursive_mnt+0x120/0x1e0
      >  [<ffffffff811f1ec3>] graft_tree+0x63/0x70
      >  [<ffffffff811f1f6b>] do_add_mount+0x9b/0x100
      >  [<ffffffff811f2c1a>] do_mount+0x2aa/0xdf0
      >  [<ffffffff8117efbe>] ? strndup_user+0x4e/0x70
      >  [<ffffffff811f3a45>] SyS_mount+0x75/0xc0
      >  [<ffffffff8100242b>] do_syscall_64+0x4b/0xa0
      >  [<ffffffff81988f3c>] entry_SYSCALL64_slow_path+0x25/0x25
      > Code: 00 00 75 ec 48 89 0d 02 22 22 01 8b 89 10 01 00 00 48 89 05 fd 21 22 01 39 8e 10 01 00 00 0f 84 e0 00 00 00 48 8b 80 d8 00 00 00 <48> 8b 50 10 48 89 05 df 21 22 01 48 89 15 d0 21 22 01 8b 53 30
      > RIP  [<ffffffff811fba4e>] propagate_one+0xbe/0x1c0
      >  RSP <ffff8800bac3fd38>
      > CR2: 0000000000000010
      > ---[ end trace 2725ecd95164f217 ]---
      
      This oops happens with the namespace_sem held and can be triggered by
      non-root users.  An all-around unpleasant experience.
      
      To avoid this scenario, when finding the appropriate source mount to
      copy, stop the walk up the mnt_master chain when the first source mount
      is encountered.
      
      Further rewrite the walk up the last_source mnt_master chain so that
      it is clear what is going on.
      
      The reason why the first source mount is special is that its
      mnt_parent is not a mount in the dest_mnt propagation tree, and as
      such termination conditions based upon the dest_mnt mount propagation
      tree do not make sense.
      
      To avoid other kinds of confusion last_dest is not changed when
      computing last_source.  last_dest is only used once in propagate_one
      and that is above the point of the code being modified, so changing
      the global variable is meaningless and confusing.
      
      Cc: stable@vger.kernel.org
      fixes: f2ebb3a9 ("smarter propagate_mnt()")
      Reported-by: Tycho Andersen <tycho.andersen@canonical.com>
      Reviewed-by: Seth Forshee <seth.forshee@canonical.com>
      Tested-by: Seth Forshee <seth.forshee@canonical.com>
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
  6. 20 Feb, 2016 1 commit
    • fs/pnode.c: treat zero mnt_group_id-s as unequal · 7ae8fd03
      Maxim Patlasov authored
      propagate_one(m) calculates the "type" argument for copy_tree() like this:
      
      >    if (m->mnt_group_id == last_dest->mnt_group_id) {
      >        type = CL_MAKE_SHARED;
      >    } else {
      >        type = CL_SLAVE;
      >        if (IS_MNT_SHARED(m))
      >           type |= CL_MAKE_SHARED;
      >   }
      
      The "type" argument then governs clone_mnt() behavior with respect to flags
      and mnt_master of new mount. When we iterate through a slave group, it is
      possible that both the current "m" and "last_dest" are not shared (although
      both are slaves, i.e. have non-NULL mnt_master-s). Then the comparison
      above erroneously makes new mount shared and sets its mnt_master to
      last_source->mnt_master. The patch fixes the problem by handling zero
      mnt_group_id-s as though they are unequal.
      
      A similar problem exists in the implementation of the "else" clause above
      when we have to ascend upward in the master/slave tree by calling:
      
      >    last_source = last_source->mnt_master;
      >    last_dest = last_source->mnt_parent;
      
      the proper number of times. The last step is governed by the
      "n->mnt_group_id != last_dest->mnt_group_id" condition, which may lie if
      both are zero. The patch fixes this case in the same way as the former one.
      
      [AV: don't open-code an obvious helper...]
      Signed-off-by: Maxim Patlasov <mpatlasov@virtuozzo.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  7. 03 Apr, 2015 5 commits
    • mnt: Don't propagate unmounts to locked mounts · 0c56fe31
      Eric W. Biederman authored
      If the first mount in a shared subtree is locked, don't unmount the
      shared subtree.
      
      This is ensured by walking through the mounts, parents before children,
      and marking a mount as unmountable if it is not locked, or if it is locked
      but its parent is marked.
      
      This allows recursive mount detach to propagate through a set of
      mounts when unmounting them would not reveal what is under any locked
      mount.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
    • mnt: On an unmount propagate clearing of MNT_LOCKED · 5d88457e
      Eric W. Biederman authored
      A prerequisite of calling umount_tree is that the point where the tree
      is mounted is valid to unmount.
      
      If we are propagating the effect of the unmount clear MNT_LOCKED in
      every instance where the same filesystem is mounted on the same
      mountpoint in the mount tree, as we know (by virtue of the fact
      that umount_tree was called) that it is safe to reveal what
      is at that mountpoint.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
    • mnt: Delay removal from the mount hash. · 411a938b
      Eric W. Biederman authored
      - Modify __lookup_mnt_last to ignore mounts that have MNT_UMOUNT set.
      - Don't remove mounts from the mount hash table in propagate_umount
      - Don't remove mounts from the mount hash table in umount_tree before
        the entire list of mounts to be umounted is selected.
      - Remove mounts from the mount hash table as the last thing that
        happens in the case where a mount has a parent in umount_tree.
        Mounts without parents are not hashed (by definition).
      
      This paves the way for delaying removal from the mount hash table even
      farther and fixing the MNT_LOCKED vs MNT_DETACH issue.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
    • mnt: Add MNT_UMOUNT flag · 590ce4bc
      Eric W. Biederman authored
      In some instances it is necessary to know if the unmounting
      process has begun on a mount.  Add MNT_UMOUNT to make that reliably
      testable.
      
      This gets used in fixing locked mounts in MNT_DETACH.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
    • mnt: In umount_tree reuse mnt_list instead of mnt_hash · c003b26f
      Eric W. Biederman authored
      umount_tree builds a list of mounts that need to be unmounted.
      Utilize mnt_list for this purpose instead of mnt_hash.  This begins to
      allow keeping a mount on the mnt_hash after it is unmounted, which is
      necessary for a properly functioning MNT_LOCKED implementation.
      
      The fact that mnt_list is an ordinary list, making list_move available,
      is a nice bonus.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
  8. 02 Dec, 2014 1 commit
    • mnt: Move the clear of MNT_LOCKED from copy_tree to it's callers. · 8486a788
      Eric W. Biederman authored
      Clear MNT_LOCKED in the callers of copy_tree except copy_mnt_ns and
      collect_mounts.  In copy_mnt_ns it is necessary to create an exact
      copy of a mount tree, so not clearing MNT_LOCKED is important.
      Similarly, collect_mounts is used to take a snapshot of the mount tree
      for audit logging purposes, and auditing using a faithful copy of the
      tree is important.
      
      This becomes particularly significant when we start setting MNT_LOCKED
      on rootfs to prevent it from being unmounted.
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
  9. 30 Aug, 2014 1 commit
  10. 02 Apr, 2014 1 commit
    • smarter propagate_mnt() · f2ebb3a9
      Al Viro authored
      The current mainline has copies propagated to *all* nodes, then
      tears down the copies we made for nodes that do not contain
      counterparts of the desired mountpoint.  That sets the right
      propagation graph for the copies (at teardown time we move
      the slaves of removed node to a surviving peer or directly
      to master), but we end up paying a fairly steep price in
      useless allocations.  It's fairly easy to create a situation
      where N calls of mount(2) create exactly N bindings, with
      O(N^2) vfsmounts allocated and freed in the process.
      
      Fortunately, it is possible to avoid those allocations/freeings.
      The trick is to create copies in the right order and find which
      one would've eventually become a master with the current algorithm.
      It turns out to be possible in O(nodes getting propagation) time
      and with no extra allocations at all.
      
      One part is that we need to make sure that eventual master will be
      created before its slaves, so we need to walk the propagation
      tree in a different order - by peer groups.  And iterate through
      the peers before dealing with the next group.
      
      Another thing is finding the (earlier) copy that will be a master
      of the one we are about to create; to do that we are (temporarily) marking
      the masters of the mountpoints we are attaching the copies to.
      
      Either we are in a peer of the last mountpoint we'd dealt with,
      or we have the following situation: we are attaching to mountpoint M,
      the last copy S_0 had been attached to M_0 and there are sequences
      S_0...S_n, M_0...M_n such that S_{i+1} is a master of S_{i},
      S_{i} mounted on M_{i}, and we need to create a slave of the first S_{k}
      such that M is getting propagation from M_{k}.  It means that the master
      of M_{k} will be among the sequence of masters of M.  On the
      other hand, the nearest marked node in that sequence will either
      be the master of M_{k} or the master of M_{k-1} (the latter -
      in the case if M_{k-1} is a slave of something M gets propagation
      from, but in a wrong peer group).
      
      So we go through the sequence of masters of M until we find
      a marked one (P).  Let N be the one before it.  Then we go through
      the sequence of masters of S_0 until we find one (say, S) mounted
      on a node D that has P as master and check if D is a peer of N.
      If it is, S will be the master of new copy, if not - the master of S
      will be.
      
      That's it for the hard part; the rest is fairly simple.  Iterator
      is in next_group(), handling of one prospective mountpoint is
      propagate_one().
      
      It seems to survive all tests and gives noticeably better performance
      than the current mainline for setups that are seriously using shared
      subtrees.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  11. 30 Mar, 2014 1 commit
    • switch mnt_hash to hlist · 38129a13
      Al Viro authored
      Fixes an RCU bug - walking through an hlist is safe in the face of element
      moves, since it is self-terminating.  Cyclic lists are not - if we end up
      jumping to another hash chain, we'll loop infinitely without ever hitting
      the original list head.
      
      [fix for dumb braino folded]
      
      Spotted by: Max Kellermann <mk@cm4all.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  12. 25 Oct, 2013 3 commits
  13. 31 May, 2013 1 commit
  14. 09 Apr, 2013 2 commits
  15. 27 Mar, 2013 1 commit
  16. 14 Jul, 2012 1 commit
    • VFS: Make clone_mnt()/copy_tree()/collect_mounts() return errors · be34d1a3
      David Howells authored
      copy_tree() can theoretically fail in a case other than ENOMEM, but always
      returns NULL, which is interpreted by callers as -ENOMEM.  Change it to
      return an explicit error.
      
      Also change clone_mnt() for consistency and because union mounts will add new
      error cases.
      
      Thanks to Andreas Gruenbacher <agruen@suse.de> for a bug fix.
      [AV: folded braino fix by Dan Carpenter]
      
      Original-author: Valerie Aurora <vaurora@redhat.com>
      Signed-off-by: David Howells <dhowells@redhat.com>
      Cc: Valerie Aurora <valerie.aurora@gmail.com>
      Cc: Andreas Gruenbacher <agruen@suse.de>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  17. 30 May, 2012 1 commit
    • brlocks/lglocks: API cleanups · 962830df
      Andi Kleen authored
      lglocks and brlocks are currently generated with some complicated macros
      in lglock.h.  But there's no reason to not just use common utility
      functions and put all the data into a common data structure.
      
      In preparation, this patch changes the API to look more like normal
      function calls with pointers, not magic macros.
      
      The patch is rather large because I move over all users in one go to keep
      it bisectable.  This impacts the VFS somewhat in terms of lines changed,
      but there is no actual behaviour change.
      
      [akpm@linux-foundation.org: checkpatch fixes]
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  18. 04 Jan, 2012 14 commits