  1. Aug 22, 2018
    • mm, oom: distinguish blockable mode for mmu notifiers · 93065ac7
      Michal Hocko authored
      There are several blockable mmu notifiers which might sleep in
      mmu_notifier_invalidate_range_start and that is a problem for the
      oom_reaper because it needs to guarantee a forward progress so it cannot
      depend on any sleepable locks.
      
      Currently we simply back off and mark an oom victim with blockable mmu
      notifiers as done after a short sleep.  That can result in selecting a new
      oom victim prematurely because the previous one still hasn't torn its
      memory down yet.
      
      We can do much better, though.  Even if mmu notifiers use sleepable locks,
      there is no reason to automatically assume those locks are held.  Moreover,
      the majority of notifiers only care about a portion of the address space,
      and there is no reason at all to fail when we are unmapping an unrelated
      range.  Many notifiers really do block and wait for HW, which is harder to
      handle; in those cases we have to bail out.
      
      This patch handles the low-hanging fruit:
      __mmu_notifier_invalidate_range_start gets a blockable flag, and callbacks
      are not allowed to sleep if the flag is set to false.  This is achieved by
      using a trylock instead of the sleepable lock for most callbacks and
      continuing only as long as nothing down the call chain blocks.
      
      I think we can improve this even further, because there is a common pattern
      of doing a range lookup first and then acting on the result.  The first part
      can be done without a sleeping lock in most cases AFAICS.
      
      The oom_reaper side then simply retries if there is at least one notifier
      which could not make any progress in !blockable mode.  A retry loop is
      already implemented to wait for the mmap_sem, and this is basically the
      same thing.
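
      A simplified sketch of that retry loop on the reaper side (loosely modelled
      on the oom_kill.c code; the function name and retry count here are
      approximations):

          /*
           * Sketch: retry a bounded number of times while the mm could not be
           * reaped (mmap_sem contended or a notifier returned busy).
           */
          #define EXAMPLE_REAP_RETRIES 10

          static void example_reap(struct task_struct *tsk, struct mm_struct *mm)
          {
                  int attempts = 0;

                  while (attempts++ < EXAMPLE_REAP_RETRIES &&
                         !oom_reap_task_mm(tsk, mm))
                          schedule_timeout_idle(HZ / 10);
          }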
      
      The simplest way for driver developers to test this code path is to wrap
      userspace code which uses these notifiers in a memcg and set the hard
      limit so that the OOM killer is hit: e.g. let the test fault in all of the
      mmu-notifier-managed memory and then set the hard limit to something really
      small.  The expected outcome is a proper process tear down.
      
      [akpm@linux-foundation.org: coding style fixes]
      [akpm@linux-foundation.org: minor code simplification]
      Link: http://lkml.kernel.org/r/20180716115058.5559-1-mhocko@kernel.org
      
      
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Christian König <christian.koenig@amd.com> # AMD notifiers
      Acked-by: Leon Romanovsky <leonro@mellanox.com> # mlx and umem_odp
      Reported-by: David Rientjes <rientjes@google.com>
      Cc: "David (ChunMing) Zhou" <David1.Zhou@amd.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Alex Deucher <alexander.deucher@amd.com>
      Cc: David Airlie <airlied@linux.ie>
      Cc: Jani Nikula <jani.nikula@linux.intel.com>
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
      Cc: Doug Ledford <dledford@redhat.com>
      Cc: Jason Gunthorpe <jgg@ziepe.ca>
      Cc: Mike Marciniszyn <mike.marciniszyn@intel.com>
      Cc: Dennis Dalessandro <dennis.dalessandro@intel.com>
      Cc: Sudeep Dutt <sudeep.dutt@intel.com>
      Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
      Cc: Dimitri Sivanich <sivanich@sgi.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: "Jérôme Glisse" <jglisse@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Felix Kuehling <felix.kuehling@amd.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      93065ac7
  2. Aug 13, 2018
  3. Aug 12, 2018
  4. Aug 06, 2018
  5. Jul 31, 2018
    • KVM: arm/arm64: Fix lost IRQs from emulated physical timer when blocked · 245715cb
      Christoffer Dall authored
      
      When the VCPU is blocked (for example from WFI) we don't inject the
      physical timer interrupt if it should fire while the CPU is blocked, but
      instead we just wake up the VCPU and expect kvm_timer_vcpu_load to take
      care of injecting the interrupt.
      
      Unfortunately, kvm_timer_vcpu_load() doesn't actually do that; it only
      knows how to schedule a soft timer if the emulated phys timer is
      expected to fire in the future.
      
      Follow the same pattern as kvm_timer_update_state() and update the irq
      state after potentially scheduling a soft timer.
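
      A sketch of the shape of the fix (not the literal diff; the function and
      helper names follow the arch timer code, with the vtimer side omitted):

          void kvm_timer_vcpu_load(struct kvm_vcpu *vcpu)
          {
                  struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);

                  /* vtimer restore and other load work omitted */

                  phys_timer_emulate(vcpu);

                  /* If the ptimer expired while blocked, raise the irq now. */
                  if (kvm_timer_should_fire(ptimer) != ptimer->irq.level)
                          kvm_timer_update_irq(vcpu, !ptimer->irq.level, ptimer);
          }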
      
      Reported-by: Andre Przywara <andre.przywara@arm.com>
      Cc: Stable <stable@vger.kernel.org> # 4.15+
      Fixes: bbdd52cf ("KVM: arm/arm64: Avoid phys timer emulation in vcpu entry/exit")
      Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      245715cb
    • KVM: arm/arm64: Fix potential loss of ptimer interrupts · 7afc4ddb
      Christoffer Dall authored
      
      kvm_timer_update_state() is called when changing the phys timer
      configuration registers, either via vcpu reset, as a result of a trap
      from the guest, or when userspace programs the registers.
      
      phys_timer_emulate() is in turn called by kvm_timer_update_state() to
      either cancel an existing software timer, or program a new software
      timer, to emulate the behavior of a real phys timer, based on the change
      in configuration registers.
      
      Unfortunately, the interaction between these two functions left a small
      race; if the conceptual emulated phys timer should actually fire, but
      the soft timer hasn't executed its callback yet, we cancel the timer in
      phys_timer_emulate without injecting an irq.  This only happens if the
      check in kvm_timer_update_state is called before the timer should fire,
      which is relatively unlikely, but possible.
      
      The solution is to update the state of the phys timer after calling
      phys_timer_emulate, which will pick up the pending timer state and
      update the interrupt value.
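
      Roughly, the resulting ordering in kvm_timer_update_state() looks like this
      (a sketch, not the exact diff; vtimer handling omitted):

          static void kvm_timer_update_state(struct kvm_vcpu *vcpu)
          {
                  struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);

                  /* vtimer level update omitted */

                  /* Cancel or (re)program the soft timer first ... */
                  phys_timer_emulate(vcpu);

                  /*
                   * ... then recompute the ptimer line, so an expiry that just
                   * happened is still injected rather than silently cancelled.
                   */
                  if (kvm_timer_should_fire(ptimer) != ptimer->irq.level)
                          kvm_timer_update_irq(vcpu, !ptimer->irq.level, ptimer);
          }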
      
      Note that this leaves open the possibility of raising the interrupt twice,
      once from the just-programmed soft timer and once in
      kvm_timer_update_state.  Since this always happens synchronously with
      VCPU execution, there is no harm in it, and the guest only ever sees a
      single timer interrupt.
      
      Cc: Stable <stable@vger.kernel.org> # 4.15+
      Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      7afc4ddb
  6. Jul 24, 2018
    • KVM: arm/arm64: vgic: Fix possible spectre-v1 write in vgic_mmio_write_apr() · 6b8b9a48
      Mark Rutland authored
      
      It's possible for userspace to control n. Sanitize n when using it as an
      array index, to inhibit the potential spectre-v1 write gadget.
      
      Note that while it appears that n must be bound to the interval [0,3]
      due to the way it is extracted from addr, we cannot guarantee that
      compiler transformations (and/or future refactoring) will ensure this is
      the case, and given this is a slow path it's better to always perform
      the masking.
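
      One way to do this kind of sanitization is the kernel's
      array_index_nospec() helper from linux/nospec.h; the array name and bound
      below are hypothetical stand-ins, not the actual vgic structures:

          #include <linux/nospec.h>

          #define EXAMPLE_NR_APRS 4                   /* hypothetical bound */

          static u32 example_aprs[EXAMPLE_NR_APRS];   /* hypothetical array */

          static void example_write_apr(u32 n, u32 val)
          {
                  if (n >= EXAMPLE_NR_APRS)
                          return;

                  /* Clamp n under speculation before it indexes the array. */
                  n = array_index_nospec(n, EXAMPLE_NR_APRS);
                  example_aprs[n] = val;
          }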
      
      Found by smatch.
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Christoffer Dall <christoffer.dall@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: kvmarm@lists.cs.columbia.edu
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      6b8b9a48
  7. Jul 21, 2018
  8. Jul 18, 2018
  9. Jul 13, 2018
  10. Jul 09, 2018
  11. Jul 05, 2018
  12. Jun 21, 2018
    • KVM: arm64: Prevent KVM_COMPAT from being selected · 37b65db8
      Marc Zyngier authored
      
      There is very little point in trying to support the 32bit KVM/arm API
      on arm64, and this was never an anticipated use case.
      
      Let's make it clear by not selecting KVM_COMPAT.
      
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      37b65db8
    • KVM: Enforce error in ioctl for compat tasks when !KVM_COMPAT · 7ddfd3e0
      Marc Zyngier authored
      
      The current behaviour of the compat ioctls is a bit odd.
      We provide a compat_ioctl method when KVM_COMPAT is set, and NULL
      otherwise. But NULL means that the normal, non-compat ioctl should
      be used directly for compat tasks, and there is no way to actually
      prevent a compat task from issuing KVM ioctls.
      
      This patch changes this behaviour, by always registering a compat_ioctl
      method, even if KVM_COMPAT is not selected. In that case, the callback
      will always return -EINVAL.
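
      A sketch of one way to always register a compat_ioctl handler (the macro
      name and stub below are illustrative, not necessarily the exact code):

          #ifdef CONFIG_KVM_COMPAT
          #define KVM_COMPAT(c)   .compat_ioctl = (c)
          #else
          /* Reject every compat ioctl when compat support is not built in. */
          static long kvm_no_compat_ioctl(struct file *file, unsigned int ioctl,
                                          unsigned long arg)
          {
                  return -EINVAL;
          }
          #define KVM_COMPAT(c)   .compat_ioctl = kvm_no_compat_ioctl
          #endif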
      
      Fixes: de8e5d74 ("KVM: Disable compat ioctl for s390")
      Reported-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Acked-by: Radim Krčmář <rkrcmar@redhat.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      7ddfd3e0
    • KVM: arm/arm64: add WARN_ON if size is not PAGE_SIZE aligned in unmap_stage2_range · 47a91b72
      Jia He authored
      There is a panic on an ARMv8-A server (QDF2400) under memory pressure tests
      (start 20 guests and run memhog in the host).
      
      ---------------------------------begin--------------------------------
      [35380.800950] BUG: Bad page state in process qemu-kvm  pfn:dd0b6
      [35380.805825] page:ffff7fe003742d80 count:-4871 mapcount:-2126053375
      mapping:          (null) index:0x0
      [35380.815024] flags: 0x1fffc00000000000()
      [35380.818845] raw: 1fffc00000000000 0000000000000000 0000000000000000
      ffffecf981470000
      [35380.826569] raw: dead000000000100 dead000000000200 ffff8017c001c000
      0000000000000000
      [35380.834294] page dumped because: nonzero _refcount
      [...]
      --------------------------------end--------------------------------------
      
      The root cause might be what was fixed at [1].  But from the KVM point of
      view, it would be better if the issue were caught earlier.
      
      If the size is not PAGE_SIZE aligned, unmap_stage2_range might unmap the
      wrong (larger or smaller) page range, hence the "BUG: Bad page state".
      
      Let's WARN in that case, so that the issue is obvious.
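
      A minimal sketch of such a check at the top of unmap_stage2_range() (the
      rest of the function is unchanged and omitted here):

          static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
          {
                  /* Complain loudly about callers passing unaligned sizes. */
                  WARN_ON(size & ~PAGE_MASK);

                  /* existing page-table walk and unmap logic follows */
          }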
      
      [1] https://lkml.org/lkml/2018/5/3/1042
      
      
      
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Jia He <jia.he@hxt-semitech.com>
      [maz: tidied up commit message]
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      47a91b72