1. 06 May, 2012 2 commits
  2. 23 Mar, 2012 1 commit
    • coredump: remove VM_ALWAYSDUMP flag · 909af768
      Jason Baron authored
      The motivation for this patchset was that I was looking for a way for a
      qemu-kvm process to exclude the guest memory from its core dump, which
      can be quite large.  There are already a number of filter flags in
      /proc/<pid>/coredump_filter; however, these allow one to specify 'types'
      of kernel memory, not specific address ranges (which is needed in this
      case).
      
      Since there are no more vma flags available, the first patch eliminates
      the need for the 'VM_ALWAYSDUMP' flag.  The flag is used internally by
      the kernel to mark vdso and vsyscall pages.  However, it is simple
      enough to check if a vma covers a vdso or vsyscall page without the need
      for this flag.
      
      The second patch then replaces the 'VM_ALWAYSDUMP' flag with a new
      'VM_NODUMP' flag, which can be set by userspace via the new madvise flag
      'MADV_DONTDUMP' and unset via 'MADV_DODUMP'.  The core dump filters
      continue to work the same as before unless 'MADV_DONTDUMP' is set on the
      region.
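
      For illustration, a minimal userspace sketch of the new flags (this
      assumes a kernel with MADV_DONTDUMP/MADV_DODUMP; the 'guest_mem' and
      'guest_size' names are hypothetical, standing in for the guest-RAM
      mapping):

        #include <sys/mman.h>
        #include <stdio.h>

        /* exclude the (large) guest RAM region from core dumps */
        if (madvise(guest_mem, guest_size, MADV_DONTDUMP))
                perror("madvise(MADV_DONTDUMP)");

        /* later, opt the region back in if desired */
        if (madvise(guest_mem, guest_size, MADV_DODUMP))
                perror("madvise(MADV_DODUMP)");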
      
      The qemu code which implements this feature is at:
      
        http://people.redhat.com/~jbaron/qemu-dump/qemu-dump.patch
      
      In my testing the qemu core dump shrunk from 383MB -> 13MB with this
      patch.
      
      I also believe that the 'MADV_DONTDUMP' flag might be useful for
      security sensitive apps, which might want to select which areas are
      dumped.
      
      This patch:
      
      The VM_ALWAYSDUMP flag is currently used by the coredump code to
      indicate that a vma is part of a vsyscall or vdso section.  However, we
      can determine if a vma is in one of these sections by checking it
      against the gate_vma and checking for a non-NULL return value from
      arch_vma_name(), thus freeing a valuable vma bit.
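
      Roughly, the replacement check can look like the sketch below (the
      'always_dump_vma' helper name is used here for illustration):

        static bool always_dump_vma(struct vm_area_struct *vma)
        {
                /* The gate vma covers the vsyscall page. */
                if (vma == get_gate_vma(vma->vm_mm))
                        return true;
                /*
                 * arch_vma_name() returns non-NULL for special mappings
                 * such as the vdso, so dump those unconditionally too.
                 */
                if (arch_vma_name(vma))
                        return true;
                return false;
        }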
      Signed-off-by: Jason Baron <jbaron@redhat.com>
      Acked-by: Roland McGrath <roland@hack.frob.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Avi Kivity <avi@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 22 Mar, 2012 4 commits
    • mm, counters: fold __sync_task_rss_stat() into sync_mm_rss() · ea48cf78
      David Rientjes authored
      There's no difference between sync_mm_rss() and __sync_task_rss_stat(),
      so fold the latter into the former.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, counters: remove task argument to sync_mm_rss() and __sync_task_rss_stat() · 05af2e10
      David Rientjes authored
      sync_mm_rss() can only be used on current, to avoid race conditions in
      iterating and clearing its per-task counters.  Remove the task argument
      from it and its helper function, __sync_task_rss_stat(), to avoid
      suggesting it can be used safely for anything other than current.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: make get_mm_counter static-inline · 69c97823
      Konstantin Khlebnikov authored
      Make get_mm_counter() always static inline; it is simple enough for
      that.  Also remove the unused set_mm_counter().
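
      The resulting helper is roughly the sketch below (the
      SPLIT_RSS_COUNTING clamp mirrors what the out-of-line version did):

        static inline unsigned long get_mm_counter(struct mm_struct *mm, int member)
        {
                long val = atomic_long_read(&mm->rss_stat.count[member]);

        #ifdef SPLIT_RSS_COUNTING
                /*
                 * The counter is updated asynchronously and may go
                 * negative transiently; never report that to users.
                 */
                if (val < 0)
                        val = 0;
        #endif
                return (unsigned long)val;
        }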
      
      bloat-o-meter:
      
      add/remove: 0/1 grow/shrink: 4/12 up/down: 99/-341 (-242)
      function                                     old     new   delta
      try_to_unmap_one                             886     952     +66
      sys_remap_file_pages                        1214    1230     +16
      dup_mm                                      1684    1700     +16
      do_exit                                     2277    2278      +1
      zap_page_range                               208     205      -3
      unmap_region                                 304     296      -8
      static.oom_kill_process                      554     546      -8
      try_to_unmap_file                           1716    1700     -16
      getrusage                                    925     909     -16
      flush_old_exec                              1704    1688     -16
      static.dump_header                           416     390     -26
      acct_update_integrals                        218     187     -31
      do_task_stat                                2986    2954     -32
      get_mm_counter                                34       -     -34
      xacct_add_tsk                                371     334     -37
      task_statm                                   172     118     -54
      task_mem                                     383     323     -60
      
      try_to_unmap_one() grows because update_hiwater_rss() is now completely inlined.
      Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Acked-by: Kirill A. Shutemov <kirill@shutemov.name>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: thp: fix pmd_bad() triggering in code paths holding mmap_sem read mode · 1a5a9906
      Andrea Arcangeli authored
      In some cases it may happen that pmd_none_or_clear_bad() is called with
      the mmap_sem held in read mode.  In those cases the huge page faults can
      allocate hugepmds under pmd_none_or_clear_bad(), and that can trigger a
      false positive from pmd_bad(), which does not expect to see a pmd
      materializing as trans huge.
      
      It's not khugepaged causing the problem: khugepaged holds the mmap_sem
      in write mode (and all those sites must hold the mmap_sem in read mode
      to prevent page tables from going away under them; during code review it
      seems vm86 mode on 32-bit kernels requires that too, unless it's
      restricted to 1 thread per process or UP builds).  The race is only with
      the huge page faults that can convert a pmd_none() into a
      pmd_trans_huge().
      
      Effectively all these pmd_none_or_clear_bad() sites running with
      mmap_sem in read mode are somewhat speculative with the page faults, and
      the result is always undefined when they run simultaneously.  This is
      probably why it wasn't common to run into this.  For example, if
      madvise(MADV_DONTNEED) runs zap_page_range() shortly before the page
      fault, the hugepage will not be zapped; if the page fault runs first, it
      will be zapped.
      
      Altering pmd_bad() not to error out if it finds hugepmds won't be enough
      to fix this, because zap_pmd_range() would then proceed to call
      zap_pte_range() (which would be incorrect if the pmd became a
      pmd_trans_huge()).
      
      The simplest way to fix this is to read the pmd into a local on the
      stack (regardless of what we read, no actual CPU barriers are needed,
      only a compiler barrier), and be sure it is not changing under the code
      that computes its value.  Even if the real pmd is changing under the
      value we hold on the stack, we don't care.  If we actually end up in
      zap_pte_range() it means the pmd was not none already and it was not
      huge, and it can't become huge from under us (khugepaged locking
      explained above).
      
      All we need is to enforce that in a code path like the one below, there
      is no longer any way for pmd_trans_huge() to be false while
      pmd_none_or_clear_bad() then runs into a hugepmd.  The overhead of a
      barrier() is just a compiler tweak and should not be measurable (I only
      added it for THP builds).  I don't exclude that different compiler
      versions may have prevented the race too, by caching the value of *pmd
      on the stack (that hasn't been verified, but it wouldn't be impossible
      considering that pmd_none_or_clear_bad, pmd_bad, pmd_trans_huge and
      pmd_none are all inlines and there's no external function called in
      between pmd_trans_huge and pmd_none_or_clear_bad).
      
      		if (pmd_trans_huge(*pmd)) {
      			if (next-addr != HPAGE_PMD_SIZE) {
      				VM_BUG_ON(!rwsem_is_locked(&tlb->mm->mmap_sem));
      				split_huge_page_pmd(vma->vm_mm, pmd);
      			} else if (zap_huge_pmd(tlb, vma, pmd, addr))
      				continue;
      			/* fall through */
      		}
      		if (pmd_none_or_clear_bad(pmd))
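
      A sketch of the helper this patch introduces for those mmap_sem-read
      paths (reading the pmd into a local, with a compiler barrier to
      stabilize it):

        static inline int pmd_none_or_trans_huge_or_clear_bad(pmd_t *pmd)
        {
                /* depend on the compiler for a single atomic pmd read */
                pmd_t pmdval = *pmd;
        #ifdef CONFIG_TRANSPARENT_HUGEPAGE
                barrier();      /* only needed where hugepmds can appear */
        #endif
                if (pmd_none(pmdval))
                        return 1;
                if (unlikely(pmd_bad(pmdval))) {
                        if (!pmd_trans_huge(pmdval))
                                pmd_clear_bad(pmd);
                        return 1;
                }
                return 0;
        }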
      
      Because this race condition could be exercised without special
      privileges, it was reported as CVE-2012-1179.
      
      The race was identified and fully explained by Ulrich who debugged it.
      I'm quoting his accurate explanation below, for reference.
      
      ====== start quote =======
            mapcount 0 page_mapcount 1
            kernel BUG at mm/huge_memory.c:1384!
      
          At some point prior to the panic, a "bad pmd ..." message similar to the
          following is logged on the console:
      
            mm/memory.c:145: bad pmd ffff8800376e1f98(80000000314000e7).
      
          The "bad pmd ..." message is logged by pmd_clear_bad() before it clears
          the page's PMD table entry.
      
              143 void pmd_clear_bad(pmd_t *pmd)
              144 {
          ->  145         pmd_ERROR(*pmd);
              146         pmd_clear(pmd);
              147 }
      
          After the PMD table entry has been cleared, there is an inconsistency
          between the actual number of PMD table entries that are mapping the page
          and the page's map count (_mapcount field in struct page). When the page
          is subsequently reclaimed, __split_huge_page() detects this inconsistency.
      
             1381         if (mapcount != page_mapcount(page))
             1382                 printk(KERN_ERR "mapcount %d page_mapcount %d\n",
             1383                        mapcount, page_mapcount(page));
          -> 1384         BUG_ON(mapcount != page_mapcount(page));
      
          The root cause of the problem is a race of two threads in a multithreaded
          process. Thread B incurs a page fault on a virtual address that has never
          been accessed (PMD entry is zero) while Thread A is executing an madvise()
          system call on a virtual address within the same 2 MB (huge page) range.
      
                     virtual address space
                    .---------------------.
                    |                     |
                    |                     |
                  .-|---------------------|
                  | |                     |
                  | |                     |<-- B(fault)
                  | |                     |
            2 MB  | |/////////////////////|-.
            huge <  |/////////////////////|  > A(range)
            page  | |/////////////////////|-'
                  | |                     |
                  | |                     |
                  '-|---------------------|
                    |                     |
                    |                     |
                    '---------------------'
      
          - Thread A is executing an madvise(..., MADV_DONTNEED) system call
            on the virtual address range "A(range)" shown in the picture.
      
          sys_madvise
            // Acquire the semaphore in shared mode.
            down_read(&current->mm->mmap_sem)
            ...
            madvise_vma
              switch (behavior)
              case MADV_DONTNEED:
                   madvise_dontneed
                     zap_page_range
                       unmap_vmas
                         unmap_page_range
                           zap_pud_range
                             zap_pmd_range
                               //
                               // Assume that this huge page has never been accessed.
                               // I.e. content of the PMD entry is zero (not mapped).
                               //
                               if (pmd_trans_huge(*pmd)) {
                                   // We don't get here due to the above assumption.
                               }
                               //
                               // Assume that Thread B incurred a page fault and
                   .---------> // sneaks in here as shown below.
                   |           //
                   |           if (pmd_none_or_clear_bad(pmd))
                   |               {
                   |                 if (unlikely(pmd_bad(*pmd)))
                   |                     pmd_clear_bad
                   |                     {
                   |                       pmd_ERROR
                   |                         // Log "bad pmd ..." message here.
                   |                       pmd_clear
                   |                         // Clear the page's PMD entry.
                   |                         // Thread B incremented the map count
                   |                         // in page_add_new_anon_rmap(), but
                   |                         // now the page is no longer mapped
                   |                         // by a PMD entry (-> inconsistency).
                   |                     }
                   |               }
                   |
                   v
          - Thread B is handling a page fault on virtual address "B(fault)" shown
            in the picture.
      
          ...
          do_page_fault
            __do_page_fault
              // Acquire the semaphore in shared mode.
              down_read_trylock(&mm->mmap_sem)
              ...
              handle_mm_fault
                if (pmd_none(*pmd) && transparent_hugepage_enabled(vma))
                    // We get here due to the above assumption (PMD entry is zero).
                    do_huge_pmd_anonymous_page
                      alloc_hugepage_vma
                        // Allocate a new transparent huge page here.
                      ...
                      __do_huge_pmd_anonymous_page
                        ...
                        spin_lock(&mm->page_table_lock)
                        ...
                        page_add_new_anon_rmap
                          // Here we increment the page's map count (starts at -1).
                          atomic_set(&page->_mapcount, 0)
                        set_pmd_at
                          // Here we set the page's PMD entry which will be cleared
                          // when Thread A calls pmd_clear_bad().
                        ...
                        spin_unlock(&mm->page_table_lock)
      
          The mmap_sem does not prevent the race because both threads are acquiring
          it in shared mode (down_read).  Thread B holds the page_table_lock while
          the page's map count and PMD table entry are updated.  However, Thread A
          does not synchronize on that lock.
      
      ====== end quote =======
      
      [akpm@linux-foundation.org: checkpatch fixes]
      Reported-by: Ulrich Obergfell <uobergfe@redhat.com>
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Dave Jones <davej@redhat.com>
      Acked-by: Larry Woodman <lwoodman@redhat.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Cc: <stable@vger.kernel.org>		[2.6.38+]
      Cc: Mark Salter <msalter@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  4. 21 Mar, 2012 5 commits
  5. 20 Mar, 2012 1 commit
  6. 23 Jan, 2012 1 commit
  7. 13 Jan, 2012 1 commit
    • thp: add tlb_remove_pmd_tlb_entry · f21760b1
      Shaohua Li authored
      We have tlb_remove_tlb_entry() to indicate that a pte TLB entry should
      be flushed, but there is no corresponding API for a pmd entry.  This
      hasn't been a problem so far because THP is currently x86-only and
      tlb_flush() on x86 flushes the entire TLB.  But it is confusing and
      could be missed if THP is ported to another arch.
      
      Also convert tlb->need_flush = 1 to a VM_BUG_ON(!tlb->need_flush) in
      __tlb_remove_page(), as suggested by Andrea Arcangeli.  The
      __tlb_remove_page() function is supposed to be called after
      tlb_remove_xxx_tlb_entry(), so we can catch any misuse.
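
      The generic form is essentially a sketch like this, mirroring
      tlb_remove_tlb_entry() with the arch hook left to each architecture:

        #define tlb_remove_pmd_tlb_entry(tlb, pmdp, address)            \
                do {                                                    \
                        tlb->need_flush = 1;                            \
                        __tlb_remove_pmd_tlb_entry(tlb, pmdp, address); \
                } while (0)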
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Reviewed-by: Andrea Arcangeli <aarcange@redhat.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Johannes Weiner <jweiner@redhat.com>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  8. 02 Nov, 2011 1 commit
    • mm: thp: tail page refcounting fix · 70b50f94
      Andrea Arcangeli authored
      Michel, while working on the working set estimation code, noticed that
      calling get_page_unless_zero() on a random pfn_to_page(random_pfn)
      wasn't safe if the pfn ended up being a tail page of a transparent
      hugepage under splitting by __split_huge_page_refcount().
      
      He then found that the problem could also theoretically materialize with
      page_cache_get_speculative() during the speculative radix tree lookups
      that use get_page_unless_zero() in SMP, if the radix tree page is freed
      and reallocated and get_user_pages is called on it before
      page_cache_get_speculative has a chance to call get_page_unless_zero().
      
      So the best way to fix the problem is to keep page_tail->_count zero at
      all times.  This will guarantee that get_page_unless_zero() can never
      succeed on any tail page.  page_tail->_mapcount is guaranteed zero and
      is unused for all tail pages of a compound page, so we can simply
      account the tail page references there and transfer them to
      tail_page->_count in __split_huge_page_refcount() (in addition to the
      head_page->_mapcount).
      
      While debugging this s/_count/_mapcount/ change I also noticed that
      get_page() is called by direct-io.c on pages returned by
      get_user_pages().  That wasn't entirely safe because the two
      atomic_inc()s in get_page() weren't atomic.  Other get_user_pages()
      users, like the secondary-MMU page fault code that establishes shadow
      page tables, by contrast never call any superfluous get_page() after
      get_user_pages() returns.  It's safer to make get_page() universally
      safe for tail pages and to use get_page_foll() within follow_page()
      (inside get_user_pages()).  get_page_foll() is safe to do the
      refcounting for tail pages without taking any locks because it runs
      within PT-lock-protected critical sections (PT lock for pte and
      page_table_lock for pmd_trans_huge).
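
      A sketch of what get_page_foll() boils down to (tail pages are
      accounted in _mapcount; head and normal pages keep using _count):

        static inline void get_page_foll(struct page *page)
        {
                if (unlikely(PageTail(page)))
                        /*
                         * Safe without the compound_lock: the PT lock
                         * held by the caller keeps
                         * __split_huge_page_refcount() away.
                         */
                        __get_page_tail_foll(page, true);
                else {
                        /*
                         * Normal or head page: _count must already be
                         * elevated by the caller's reference.
                         */
                        VM_BUG_ON(atomic_read(&page->_count) <= 0);
                        atomic_inc(&page->_count);
                }
        }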
      
      The standard get_page() as invoked by direct-io instead will now take
      the compound_lock, but still only for tail pages.  The direct-io paths
      are usually I/O bound and the compound_lock is per THP, so very
      fine-grained; there's no risk of scalability issues with it.  A simple
      direct-io benchmark with the full lockdep locking-validation and
      spinlock debugging infrastructure enabled shows identical performance
      and no overhead.  So it's worth it.  Ideally direct-io should stop
      calling get_page() on pages returned by get_user_pages().  The spinlock
      in get_page() is already optimized away for no-THP builds, but doing
      get_page() on tail pages returned by GUP is generally a rare operation
      and usually only run in I/O paths.
      
      This new refcounting on page_tail->_mapcount, in addition to avoiding
      new RCU critical sections, will also allow the working set estimation
      code to work without any further complexity associated with tail page
      refcounting under THP.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Reported-by: Michel Lespinasse <walken@google.com>
      Reviewed-by: Michel Lespinasse <walken@google.com>
      Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Johannes Weiner <jweiner@redhat.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: David Gibson <david@gibson.dropbear.id.au>
      Cc: <stable@kernel.org>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  9. 31 Oct, 2011 1 commit
  10. 26 Jul, 2011 3 commits
    • mm/futex: fix futex writes on archs with SW tracking of dirty & young · 2efaca92
      Benjamin Herrenschmidt authored
      I haven't reproduced it myself but the fail scenario is that on such
      machines (notably ARM and some embedded powerpc), if you manage to hit
      that futex path on a writable page whose dirty bit has gone from the PTE,
      you'll livelock inside the kernel from what I can tell.
      
      It will go in a loop of trying the atomic access, failing, trying gup to
      "fix it up", getting success from gup, going back to the atomic access,
      failing again because dirty wasn't fixed, etc.
      
      So I think you essentially hang in the kernel.
      
      The scenario is probably rare-ish because the affected architectures are
      embedded and tend to not swap much (if at all), so we probably rarely
      hit the case where dirty or young is missing, but I think Shan has a
      piece of SW that can reliably reproduce it using a shared writable
      mapping & fork or something like that.
      
      On archs that use SW tracking of dirty & young, a page without dirty is
      effectively mapped read-only, and a page without young is effectively
      inaccessible in the PTE.
      
      Additionally, some architectures might lazily flush the TLB when relaxing
      write protection (by doing only a local flush), and expect a fault to
      invalidate the stale entry if it's still present on another processor.
      
      The futex code assumes that if the "in_atomic()" access -EFAULTs, it can
      "fix it up" by calling get_user_pages(), which would then be equivalent
      to taking the fault.
      
      However that isn't the case.  get_user_pages() will not call
      handle_mm_fault() in the case where the PTE seems to have the right
      permissions, regardless of the dirty and young state.  It will eventually
      update those bits ...  in the struct page, but not in the PTE.
      
      Additionally, it will not handle the lazy TLB flushing that can be
      required by some architectures in the fault case.
      
      Basically, gup is the wrong interface for the job.  The patch provides a
      more appropriate one which boils down to just calling handle_mm_fault()
      since what we are trying to do is simulate a real page fault.
      
      The futex code currently attempts to write to user memory within a
      pagefault disabled section, and if that fails, tries to fix it up using
      get_user_pages().
      
      This doesn't work on archs where the dirty and young bits are maintained
      by software, since they will gate access permission in the TLB, and will
      not be updated by gup().
      
      In addition, there's an expectation on some archs that a spurious write
      fault triggers a local TLB flush, and that is missing from the picture as
      well.
      
      I decided that adding those "features" to gup() would be too much for this
      already too complex function, and instead added a new simpler
      fixup_user_fault() which is essentially a wrapper around handle_mm_fault()
      which the futex code can call.
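
      The futex side then boils down to something like the sketch below,
      calling the new helper instead of get_user_pages():

        static int fault_in_user_writeable(u32 __user *uaddr)
        {
                struct mm_struct *mm = current->mm;
                int ret;

                down_read(&mm->mmap_sem);
                ret = fixup_user_fault(current, mm, (unsigned long)uaddr,
                                       FAULT_FLAG_WRITE);
                up_read(&mm->mmap_sem);

                return ret < 0 ? ret : 0;
        }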
      
      [akpm@linux-foundation.org: coding-style fixes]
      [akpm@linux-foundation.org: fix some nits Darren saw, fiddle comment layout]
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Reported-by: Shan Hai <haishan.bai@gmail.com>
      Tested-by: Shan Hai <haishan.bai@gmail.com>
      Cc: David Laight <David.Laight@ACULAB.COM>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Darren Hart <darren.hart@intel.com>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: preallocate page before lock_page() at filemap COW · 1d65f86d
      KAMEZAWA Hiroyuki authored
      Currently we keep the faulted page locked throughout the whole
      __do_fault() call (except for the page_mkwrite code path) after calling
      the file system's fault code.  If we do early COW, we allocate a new
      page which has to be charged to a memcg (mem_cgroup_newpage_charge).
      
      This function, however, might block for an unbounded amount of time if
      the memcg OOM killer is disabled or a fork-bomb is running, because the
      only way out of the OOM situation is either an external event or a fix
      of the OOM situation.
      
      In the end we keep the faulted page locked and block other processes
      from faulting it in, which is not good at all because we are basically
      punishing a potentially unrelated process for an OOM condition in a
      different group (I have seen a stuck system because of ld-2.11.1.so
      being locked).
      
      We can test this easily:
      
       % cgcreate -g memory:A
       % cgset -r memory.limit_in_bytes=64M A
       % cgset -r memory.memsw.limit_in_bytes=64M A
       % cd kernel_dir; cgexec -g memory:A make -j
      
      Then, the whole system will be live-locked until you kill 'make -j'
      by hand (or push reboot...).  This is because some important pages in
      a shared library are locked.
      
      On reflection, the new page does not need to be allocated with
      lock_page() held.  And the usual page allocation may dive into a long
      memory-reclaim loop while holding lock_page(), causing very long
      latency.
      
      There are 3 ways:
        1. do allocation/charge before lock_page()
           Pros. - simple and can handle page allocation in the same manner.
                   This will reduce holding time of lock_page() in general.
           Cons. - we do page allocation even if ->fault() returns error.
      
        2. do charge after unlock_page(). Even if charge fails, it's just OOM.
           Pros. - no impact to non-memcg path.
           Cons. - implementation requires special care of the LRU and we need to modify
                   page_add_new_anon_rmap()...
      
        3. do unlock->charge->lock again method.
           Pros. - no impact to non-memcg path.
           Cons. - This may kill LOCK_PAGE_RETRY optimization. We need to release
                   lock and get it again...
      
      This patch moves the charge and the memory allocation for the COW page
      before lock_page().  We can then avoid scanning the LRU while holding a
      page lock, and latency under lock_page() is reduced.
      
      Then, the above livelock disappears.
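
      A simplified sketch of the reordering in __do_fault() (the
      'cow_needed' condition name is made up for illustration):

        struct page *cow_page = NULL;

        /* allocate and charge the COW page before anything is locked */
        if (cow_needed) {
                cow_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address);
                if (!cow_page)
                        return VM_FAULT_OOM;
                if (mem_cgroup_newpage_charge(cow_page, mm, GFP_KERNEL)) {
                        page_cache_release(cow_page);
                        return VM_FAULT_OOM;
                }
        }

        ret = vma->vm_ops->fault(vma, &vmf);    /* may return a locked page */
        /*
         * ... copy into cow_page and map it; neither the allocation nor
         * the possibly long-blocking memcg charge now happens while
         * vmf.page is held locked.
         */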
      
      [akpm@linux-foundation.org: fix code layout]
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Reported-by: Lutz Vieweg <lvml@5t9.de>
      Original-idea-by: Michal Hocko <mhocko@suse.cz>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Ying Han <yinghan@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/memory.c: remove ZAP_BLOCK_SIZE · 6ac47520
      Andrew Morton authored
      ZAP_BLOCK_SIZE became unused in the preemptible-mmu_gather work ("mm:
      Remove i_mmap_lock lockbreak").  So zap it.
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  11. 09 Jul, 2011 1 commit
  12. 28 Jun, 2011 1 commit
  13. 16 Jun, 2011 2 commits
    • mm: fix wrong kunmap_atomic() pointer · 5f1a1907
      Steven Rostedt authored
      Running a ktest.pl test, I hit the following bug on x86_32:
      
        ------------[ cut here ]------------
        WARNING: at arch/x86/mm/highmem_32.c:81 __kunmap_atomic+0x64/0xc1()
         Hardware name:
        Modules linked in:
        Pid: 93, comm: sh Not tainted 2.6.39-test+ #1
        Call Trace:
         [<c04450da>] warn_slowpath_common+0x7c/0x91
         [<c042f5df>] ? __kunmap_atomic+0x64/0xc1
          [<c042f5df>] ? __kunmap_atomic+0x64/0xc1
         [<c0445111>] warn_slowpath_null+0x22/0x24
         [<c042f5df>] __kunmap_atomic+0x64/0xc1
         [<c04d4a22>] unmap_vmas+0x43a/0x4e0
         [<c04d9065>] exit_mmap+0x91/0xd2
         [<c0443057>] mmput+0x43/0xad
         [<c0448358>] exit_mm+0x111/0x119
         [<c044855f>] do_exit+0x1ff/0x5fa
         [<c0454ea2>] ? set_current_blocked+0x3c/0x40
         [<c0454f24>] ? sigprocmask+0x7e/0x8e
         [<c0448b55>] do_group_exit+0x65/0x88
         [<c0448b90>] sys_exit_group+0x18/0x1c
         [<c0c3915f>] sysenter_do_call+0x12/0x38
        ---[ end trace 8055f74ea3c0eb62 ]---
      
      Running a ktest.pl git bisect found the culprit: commit e303297e
      ("mm: extended batches for generic mmu_gather").
      
      But although this was the commit triggering the bug, it was not the one
      originally responsible for the bug.  That was commit d16dfc55 ("mm:
      mmu_gather rework").
      
      The code in zap_pte_range() has something that looks like the following:
      
      	pte =  pte_offset_map_lock(mm, pmd, addr, &ptl);
      	do {
      		[...]
      	} while (pte++, addr += PAGE_SIZE, addr != end);
      	pte_unmap_unlock(pte - 1, ptl);
      
      The pte starts off pointing at the first element in the page table
      directory that was returned by the pte_offset_map_lock().  When it's done
      with the page, pte will be pointing to anything between the next entry and
      the first entry of the next page inclusive.  By doing a pte - 1, this puts
      the pte back onto the original page, which is all that pte_unmap_unlock()
      needs.
      
      In most archs (64 bit), this is not an issue as the pte is ignored in
      pte_unmap_unlock().  But on 32-bit archs, where things may be kmapped,
      it is essential that the pte passed to pte_unmap_unlock() resides on the
      same page that was given by pte_offset_map_lock().
      
      The problem came in d16dfc55 ("mm: mmu_gather rework"), which introduced
      a "break;" from the while loop.  This alone did not seem to easily
      trigger the bug.  But the modifications made by e303297e caused that
      "break;" to be hit on the first iteration, before the pte++.
      
      The pte not being incremented will now cause pte_unmap_unlock(pte - 1)
      to point to the previous page.  This will cause the wrong page to be
      unmapped, and also trigger the warning above.
      
      The simple solution is to just save the pointer given by
      pte_offset_map_lock() and use it in the unlock.
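
      In zap_pte_range() terms, the fix is essentially this sketch:

        pte_t *start_pte;
        pte_t *pte;

        start_pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
        pte = start_pte;
        do {
                /* ... zap one pte ... */
        } while (pte++, addr += PAGE_SIZE, addr != end);
        /* unlock with the saved pointer, not with (pte - 1) */
        pte_unmap_unlock(start_pte, ptl);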
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Acked-by: Hugh Dickins <hughd@google.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/memory.c: fix kernel-doc notation · 0164f69d
      Randy Dunlap authored
      Fix new kernel-doc warnings in mm/memory.c:
      
        Warning(mm/memory.c:1327): No description found for parameter 'tlb'
        Warning(mm/memory.c:1327): Excess function parameter 'tlbp' description in 'unmap_vmas'
      Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  14. 27 May, 2011 1 commit
    • memcg: add the pagefault count into memcg stats · 456f998e
      Ying Han authored
      Add two new stats to per-memcg memory.stat, tracking the number of page
      faults and the number of major page faults:
      
        "pgfault"
        "pgmajfault"
      
      They are different from the "pgpgin"/"pgpgout" stats, which count the
      number of pages charged/discharged to the cgroup and say nothing about
      reading or writing pages to disk.
      
      It is valuable to track these two stats for measuring both application
      performance and the efficiency of the kernel page reclaim path.
      Counting page faults per process is useful, but we also need the
      aggregated value since memcg monitors and controls processes on a
      per-cgroup basis.
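
      The accounting hooks amount to a sketch like this, next to the
      existing vm_event counters in the fault paths:

        /* in handle_mm_fault(): */
        count_vm_event(PGFAULT);
        mem_cgroup_count_vm_event(mm, PGFAULT);

        /* and in a major-fault path such as do_swap_page(): */
        ret = VM_FAULT_MAJOR;
        count_vm_event(PGMAJFAULT);
        mem_cgroup_count_vm_event(mm, PGMAJFAULT);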
      
      Functional test: check the total number of pgfault/pgmajfault of all
      memcgs and compare with global vmstat value:
      
        $ cat /proc/vmstat | grep fault
        pgfault 1070751
        pgmajfault 553
      
        $ cat /dev/cgroup/memory.stat | grep fault
        pgfault 1071138
        pgmajfault 553
        total_pgfault 1071142
        total_pgmajfault 553
      
        $ cat /dev/cgroup/A/memory.stat | grep fault
        pgfault 199
        pgmajfault 0
        total_pgfault 199
        total_pgmajfault 0
      
      Performance test: run the page fault test (pft) with 16 threads,
      faulting in 15G of anon pages in a 16G container.  There is no
      regression noticed in the "flt/cpu/s" figure.
      
      Sample output from pft:
      
        TAG pft:anon-sys-default:
          Gb  Thr CLine   User     System     Wall    flt/cpu/s fault/wsec
          15   16   1     0.67s   233.41s    14.76s   16798.546 266356.260
      
        +-------------------------------------------------------------------------+
            N           Min           Max        Median           Avg        Stddev
        x  10     16682.962     17344.027     16913.524     16928.812      166.5362
        +  10     16695.568     16923.896     16820.604     16824.652     84.816568
        No difference proven at 95.0% confidence
      
      [akpm@linux-foundation.org: fix build]
      [hughd@google.com: shmem fix]
      Signed-off-by: Ying Han <yinghan@google.com>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  15. 26 May, 2011 1 commit
  16. 25 May, 2011 7 commits
    • mm: uninline large generic tlb.h functions · 9547d01b
      Peter Zijlstra authored
      Some of these functions have grown beyond inline sanity; move them
      out-of-line.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Requested-by: Andrew Morton <akpm@linux-foundation.org>
      Requested-by: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: Convert i_mmap_lock to a mutex · 3d48ae45
      Peter Zijlstra authored
      Straightforward conversion of i_mmap_lock to a mutex.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Hugh Dickins <hughd@google.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: David Miller <davem@davemloft.net>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: Remove i_mmap_lock lockbreak · 97a89413
      Peter Zijlstra authored
      Hugh says:
       "The only significant loser, I think, would be page reclaim (when
        concurrent with truncation): could spin for a long time waiting for
        the i_mmap_mutex it expects would soon be dropped? "
      
      Counterpoints:
       - cpu contention makes the spin stop (need_resched())
       - zap pages should be freeing pages at a higher rate than reclaim
         ever can
      
      I think the simplification of the truncate code is definitely worth it.
      
      Effectively reverts: 2aa15890 ("mm: prevent concurrent
      unmap_mapping_range() on the same inode") and takes out the code that
      caused its problem.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: David Miller <davem@davemloft.net>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: extended batches for generic mmu_gather · e303297e
      Peter Zijlstra authored
      Instead of using a single batch (the small on-stack one, or an allocated
      page), try to extend the batch every time it runs out, and only flush
      once either the extension fails or we're done.
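
      The batch-extension logic is roughly the sketch below (allocate a new
      batch page opportunistically; if that fails, the caller flushes):

        static bool tlb_next_batch(struct mmu_gather *tlb)
        {
                struct mmu_gather_batch *batch;

                batch = tlb->active;
                if (batch->next) {
                        tlb->active = batch->next;
                        return true;
                }

                /* try to extend; GFP_NOWAIT so we never block here */
                batch = (void *)__get_free_pages(GFP_NOWAIT | __GFP_NOWARN, 0);
                if (!batch)
                        return false;

                batch->next = NULL;
                batch->nr   = 0;
                batch->max  = MAX_GATHER_BATCH;

                tlb->active->next = batch;
                tlb->active = batch;
                return true;
        }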
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Requested-by: Nick Piggin <npiggin@kernel.dk>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Acked-by: Hugh Dickins <hughd@google.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: David Miller <davem@davemloft.net>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, powerpc: move the RCU page-table freeing into generic code · 26723911
      Peter Zijlstra authored
      In case other architectures require RCU freed page-tables to implement
      gup_fast() and software filled hashes and similar things, provide the
      means to do so by moving the logic into generic code.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Requested-by: David Miller <davem@davemloft.net>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: mmu_gather rework · d16dfc55
      Peter Zijlstra authored
      Rework the existing mmu_gather infrastructure.
      
      The direct purpose of these patches was to allow preemptible mmu_gather,
      but even without that I think these patches provide an improvement to the
      status quo.
      
      The first 9 patches rework the mmu_gather infrastructure.  For review
      purposes I've split them into generic and per-arch patches, with the
      last of those a generic cleanup.
      
      The next patch provides generic RCU page-table freeing, and the followup
      is a patch converting s390 to use this.  I've also got 4 patches from
      DaveM lined up (not included in this series) that uses this to implement
      gup_fast() for sparc64.
      
      Then there is one patch that extends the generic mmu_gather batching.
      
      After that follow the mm preemptibility patches, these make part of the mm
      a lot more preemptible.  It converts i_mmap_lock and anon_vma->lock to
      mutexes which together with the mmu_gather rework makes mmu_gather
      preemptible as well.
      
      Making i_mmap_lock a mutex also enables a clean-up of the truncate code.
      
      This also allows for preemptible mmu_notifiers, something that XPMEM I
      think wants.
      
      Furthermore, it removes the new and universally detested unmap_mutex.
      
      This patch:
      
      Remove the first obstacle towards a fully preemptible mmu_gather.
      
      The current scheme assumes mmu_gather is always done with preemption
      disabled and uses per-cpu storage for the page batches.  Change this to
      try to allocate a page for batching and, in case of failure, use a small
      on-stack array to make some progress.
      
      Preemptible mmu_gather is desired in general and usable once i_mmap_lock
      becomes a mutex.  Doing it before the mutex conversion saves us from
      having to rework the code by moving the mmu_gather bits inside the
      pte_lock.
      
      Also avoid flushing the TLB batches from under the pte lock; this is
      useful even without the i_mmap_lock conversion, as it significantly
      reduces pte lock hold times.
      
      [akpm@linux-foundation.org: fix comment typo]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: David Miller <davem@davemloft.net>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Tony Luck <tony.luck@intel.com>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Acked-by: Hugh Dickins <hughd@google.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: make expand_downwards() symmetrical with expand_upwards() · d05f3169
      Michal Hocko authored
      Currently we have expand_upwards exported while expand_downwards is
      accessible only via expand_stack or expand_stack_downwards.
      
      check_stack_guard_page() is a nice example of the asymmetry.  It uses
      expand_stack() for VM_GROWSDOWN, while expand_upwards() is called for
      the VM_GROWSUP case.
      
      Let's clean this up by exporting both functions and making those names
      consistent.  Let's use expand_{upwards,downwards} because expanding
      doesn't always involve stack manipulation (an example is
      ia64_do_page_fault, which uses expand_upwards for register backing-store
      expansion).  expand_downwards has to be defined for both
      CONFIG_STACK_GROWS{UP,DOWN} because get_arg_page calls the downwards
      version in the early process-initialization phase for the growsup
      configuration.
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: Hugh Dickins <hughd@google.com>
      Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  17. 09 May, 2011 1 commit
    • Don't lock guardpage if the stack is growing up · a09a79f6
      Mikulas Patocka authored
      The Linux kernel excludes the guard page when performing mlock on a VMA
      with a down-growing stack.  However, some architectures have an
      up-growing stack, and the guard page should be excluded in this case
      too.
      
      This patch fixes lvm2 on PA-RISC (and possibly other architectures with
      an up-growing stack).  lvm2 calculates the number of used pages when
      locking and when unlocking, and reports an internal error if the numbers
      mismatch.
      
      [ Patch changed fairly extensively to also fix /proc/<pid>/maps for the
        grows-up case, and to move things around a bit to clean it all up and
        share the infrastructure with the /proc bits.

        Tested on ia64, which has both grow-up and grow-down segments  - Linus ]
      Signed-off-by: Mikulas Patocka <mikulas@artax.karlin.mff.cuni.cz>
      Tested-by: Tony Luck <tony.luck@gmail.com>
      Cc: stable@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  18. 05 May, 2011 1 commit
    • VM: skip the stack guard page lookup in get_user_pages only for mlock · a1fde08c
      Linus Torvalds authored
      The logic in __get_user_pages() used to skip the stack guard page lookup
      whenever the caller wasn't interested in seeing what the actual page
      was.  But Michel Lespinasse points out that there are cases where we
      don't care about the physical page itself (so 'pages' may be NULL), but
      do want to make sure a page is mapped into the virtual address space.
      
      So using the existence of the "pages" array as an indication of whether
      to look up the guard page or not isn't actually so great, and we really
      should just use the FOLL_MLOCK bit.  But because that bit was only set
      for the VM_LOCKED case (and not all vma's necessarily have it, even for
      mlock()), we couldn't do that originally.
      
      Fix that by moving the VM_LOCKED check deeper into the call-chain, which
      actually simplifies many things.  Now mlock() gets simpler, and we can
      also check for FOLL_MLOCK in __get_user_pages() and the code ends up
      much more straightforward.
      Reported-and-reviewed-by: Michel Lespinasse <walken@google.com>
      Cc: stable@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  19. 28 Apr, 2011 1 commit
    • mm: check if PTE is already allocated during page fault · cc03638d
      Mel Gorman authored
      With transparent hugepage support, handle_mm_fault() has to be careful
      that a normal PMD has been established before handling a PTE fault.  To
      achieve this, it used __pte_alloc() directly instead of pte_alloc_map as
      pte_alloc_map is unsafe to run against a huge PMD.  pte_offset_map() is
      called once it is known the PMD is safe.
      
      pte_alloc_map() is smart enough to check if a PTE is already present
      before calling __pte_alloc(), but this check was lost.  As a
      consequence, PTEs may be allocated unnecessarily and the page table lock
      taken.  This useless PTE does get cleaned up, but it's a performance hit
      which is visible in page_test from aim9.
      
      This patch simply re-adds the check normally done by pte_alloc_map() to
      see if the PTE needs to be allocated before taking the page table lock.
      The effect is noticeable in page_test from aim9.
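
      The re-added check in handle_mm_fault() is roughly:

        /*
         * Use __pte_alloc() instead of pte_alloc_map(), because we can't
         * run pte_offset_map() on a pmd that a huge pmd could
         * materialize into from under us.
         */
        if (unlikely(pmd_none(*pmd)) && __pte_alloc(mm, vma, pmd, address))
                return VM_FAULT_OOM;
        /* if a huge pmd materialized from under us just retry later */
        if (unlikely(pmd_trans_huge(*pmd)))
                return 0;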
      
        AIM9
                        2.6.38-vanilla 2.6.38-checkptenone
        creat-clo      446.10 ( 0.00%)   424.47 (-5.10%)
        page_test       38.10 ( 0.00%)    42.04 ( 9.37%)
        brk_test        52.45 ( 0.00%)    51.57 (-1.71%)
        exec_test      382.00 ( 0.00%)   456.90 (16.39%)
        fork_test       60.11 ( 0.00%)    67.79 (11.34%)
        MMTests Statistics: duration
        Total Elapsed Time (seconds)                611.90    612.22
      
      (While this affects 2.6.38, it is a performance rather than a functional
      bug and normally outside the rules for -stable.  While the big
      performance differences are on a microbenchmark, the difference in fork
      and exec performance may be significant enough that -stable wants to
      consider the patch.)
      Reported-by: Raz Ben Yehuda <raziebe@gmail.com>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: Andrea Arcangeli <aarcange@redhat.com>
      Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: <stable@kernel.org>		[2.6.38.x]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  20. 14 Apr, 2011 1 commit
    • mm: check that we have the right vma in __access_remote_vm() · fe936dfc
      Michael Ellerman authored
      In __access_remote_vm() we need to check that we have found the right
      vma, not the following vma, before we try to access it.  Otherwise we
      might call the vma's access routine with an address which does not fall
      inside the vma.
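
      The fix is essentially this sketch in the __access_remote_vm() loop:

        /*
         * find_vma() returns the first vma whose vm_end is above 'addr';
         * that vma may start above 'addr' too, so check vm_start as well.
         */
        vma = find_vma(mm, addr);
        if (!vma || vma->vm_start > addr)
                break;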
      
      It was discovered on a current kernel, but with an unreleased driver;
      from memory, it was strace leading to a kernel bad access, but it
      obviously depends on what the access implementation does.
      
      Looking at other access implementations I only see:
      
        $ git grep -A 5 vm_operations|grep access
        arch/powerpc/platforms/cell/spufs/file.c-	.access = spufs_mem_mmap_access,
        arch/x86/pci/i386.c-	.access = generic_access_phys,
        drivers/char/mem.c-	.access = generic_access_phys
        fs/sysfs/bin.c-	.access		= bin_access,
      
      The spufs one looks like it might behave badly given the wrong vma: it
      assumes vma->vm_file->private_data is a spu_context, and looks like it
      would probably blow up pretty quickly if it wasn't.
      
      generic_access_phys() only uses the vma to check vm_flags and get the
      mm, and then walks page tables using the address.  So it should bail on
      the vm_flags check, or at worst let you access some other VM_IO mapping.
      
      And bin_access() just proxies to another access implementation.
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  21. 12 Apr, 2011 1 commit
    • vm: fix mlock() on stack guard page · 95042f9e
      Linus Torvalds authored
      Commit 53a7706d ("mlock: do not hold mmap_sem for extended periods
      of time") changed mlock() to care about the exact number of pages that
      __get_user_pages() had brought in.  Before, it would only care about
      errors.
      
      And that doesn't work, because we also handled one page specially in
      __mlock_vma_pages_range(), namely the stack guard page.  So when that
      case was handled, the number of pages that the function returned was off
      by one.  In particular, it could be zero, and then the caller would end
      up not making any progress at all.
      
      Rather than try to fix up that off-by-one error for the mlock case
      specially, this just moves the logic to handle the stack guard page
      into __get_user_pages() itself, thus making all the counts come out
      right automatically.
      Reported-by: Robert Święcki <robert@swiecki.net>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: stable@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  22. 28 Mar, 2011 1 commit
  23. 24 Mar, 2011 1 commit