1. 04 Dec, 2015 1 commit
    • arm64: KVM: Correctly handle zero register during MMIO · bc45a516
      Pavel Fedin authored

      On ARM64, register index 31 corresponds to both the zero register and
      SP. However, all memory access instructions use ZR as the transfer
      register. SP is used only as a base register in indirect memory
      addressing, or in register-register arithmetic, neither of which can
      be trapped here.
      
      Correct emulation is achieved by introducing new register accessor
      functions, which can do special handling for reg_num == 31. These new
      accessors intentionally do not rely on old vcpu_reg() on ARM64, because
      it is to be removed. Since the affected code is shared by both ARM
      flavours, implementations of these accessors are also added to ARM32 code.
      
      This patch fixes the MMIO register being set to a random value
      (actually SP) instead of zero by code such as:

       *((volatile int *)reg) = 0;

      for which compilers tend to generate "str wzr, [xx]".
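
      The new accessors, as a minimal sketch (assuming the vcpu_gp_regs()
      layout used by the arm64 KVM code; reads of register 31 yield zero
      and writes to it are discarded, matching ZR semantics):

       static inline unsigned long vcpu_get_reg(const struct kvm_vcpu *vcpu,
                                                u8 reg_num)
       {
               /* register 31 reads as the zero register here, never SP */
               return (reg_num == 31) ? 0 :
                       vcpu_gp_regs(vcpu)->regs.regs[reg_num];
       }

       static inline void vcpu_set_reg(struct kvm_vcpu *vcpu, u8 reg_num,
                                       unsigned long val)
       {
               /* writes to the zero register are silently dropped */
               if (reg_num != 31)
                       vcpu_gp_regs(vcpu)->regs.regs[reg_num] = val;
       }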
      
      [Marc: Fixed 32bit splat]
      Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  2. 24 Nov, 2015 6 commits
    • arm64: kvm: report original PAR_EL1 upon panic · fbb4574c
      Mark Rutland authored

      If we call __kvm_hyp_panic while a guest context is active, we call
      __restore_sysregs before acquiring the system register values for the
      panic, in the process throwing away the PAR_EL1 value at the point of
      the panic.
      
      This patch modifies __kvm_hyp_panic to stash the PAR_EL1 value prior to
      restoring host register values, enabling us to report the original
      values at the point of the panic.
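
      In C-flavoured pseudocode (the actual change is in the hyp panic
      assembly; read_sysreg() is used here purely for illustration), the
      ordering becomes:

       u64 par = read_sysreg(par_el1);  /* stash before it is clobbered */
       __restore_sysregs(host_ctxt);    /* restores the host's PAR_EL1 */
       /* later: report the stashed value in the panic output */
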
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    • arm64: kvm: avoid %p in __kvm_hyp_panic · 1d7a4e31
      Mark Rutland authored

      Currently __kvm_hyp_panic uses %p for values which are not pointers,
      such as the ESR value. This can confusingly lead to "(null)" being
      printed for the value.
      
      Use %x instead, and only use %p for host pointers.
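
      For illustration (hypothetical values, not the actual panic string):
      a zero non-pointer value printed through %p is rendered as "(null)"
      by the kernel's vsprintf:

       u64 esr = 0;
       printk("esr via %%p: %p\n", (void *)esr);  /* prints "(null)" */
       printk("esr via %%x: 0x%x\n", (u32)esr);   /* prints "0x0" */
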
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    • KVM: arm/arm64: Fix preemptible timer active state craziness · 7e16aa81
      Christoffer Dall authored

      We were setting the physical active state on the GIC distributor in a
      preemptible section, which could cause us to set the active state on a
      different physical CPU from the one we were actually going to run on;
      havoc ensues.

      Since we are no longer descheduling/scheduling soft timers in the
      flush/sync timer functions, simply move the timer flush into a
      non-preemptible section.
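
      A sketch of the resulting ordering in the vcpu run loop (simplified;
      only the placement of the timer flush is the point here):

       preempt_disable();
       /* the active state now lands on the CPU we will actually run on */
       kvm_timer_flush_hwstate(vcpu);
       /* ... guest entry, run, exit ... */
       kvm_timer_sync_hwstate(vcpu);
       preempt_enable();
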
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    • arm64: KVM: Add workaround for Cortex-A57 erratum 834220 · 498cd5c3
      Marc Zyngier authored

      Cortex-A57 parts up to r1p2 can misreport Stage 2 translation faults
      when a Stage 1 permission fault or device alignment fault should
      have been reported.
      
      This patch implements the workaround (which is to validate that the
      Stage-1 translation actually succeeds) by using code patching.
      
      Cc: stable@vger.kernel.org
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    • arm64: KVM: Fix AArch32 to AArch64 register mapping · c0f09634
      Marc Zyngier authored

      When running a 32bit guest under a 64bit hypervisor, the ARMv8
      architecture defines a mapping of the 32bit registers in the 64bit
      space. This includes banked registers that are being demultiplexed
      over the 64bit ones.
      
      On exceptions caused by an operation involving a 32bit register, the
      HW exposes the register number in the ESR_EL2 register. It was so
      far understood that SW had to distinguish between AArch32 and AArch64
      accesses (based on the current AArch32 mode and register number).
      
      It turns out that I misinterpreted the ARM ARM, and the clue is in
      D1.20.1: "For some exceptions, the exception syndrome given in the
      ESR_ELx identifies one or more register numbers from the issued
      instruction that generated the exception. Where the exception is
      taken from an Exception level using AArch32 these register numbers
      give the AArch64 view of the register."
      
      Which means that the HW is already giving us the translated version,
      and that we shouldn't try to interpret it at all (for example, doing
      an MMIO operation from the IRQ mode using the LR register leads to
      very unexpected behaviours).
      
      The fix is thus not to call vcpu_reg32() from vcpu_reg() at all, and
      to use whatever register number is supplied directly.
      The only case we need to find out about the mapping is when we
      actively generate a register access, which only occurs when injecting
      a fault in a guest.
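
      The shape of the fix, as a sketch (assuming the vcpu_gp_regs()
      layout; not a verbatim copy of the patch):

       /*
        * The ESR-supplied register number is already the AArch64 view, so
        * index the AArch64 GP register file directly, even for 32bit
        * guests. The AArch32 mapping (vcpu_reg32()) is only needed when we
        * generate an access ourselves, e.g. when injecting a fault.
        */
       static inline unsigned long *vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num)
       {
               return (unsigned long *)&vcpu_gp_regs(vcpu)->regs.regs[reg_num];
       }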
      
      Cc: stable@vger.kernel.org
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    • ARM/arm64: KVM: test properly for a PTE's uncachedness · e6fab544
      Ard Biesheuvel authored

      The open-coded tests for checking whether a PTE maps a page as
      uncached use a flawed '(pte_val(xxx) & CONST) != CONST' pattern,
      which is not guaranteed to work since the type of a mapping is
      not a set of mutually exclusive bits.

      For HYP mappings, the type is an index into the MAIR table (i.e., the
      index itself does not contain any information whatsoever about the
      type of the mapping), and for stage-2 mappings it is a bit field where
      normal memory and device types are defined as follows:
      
          #define MT_S2_NORMAL            0xf
          #define MT_S2_DEVICE_nGnRE      0x1
      
      I.e., masking *and* comparing with the latter matches on the former,
      and we have been getting lucky merely because the S2 device mappings
      also have the PTE_UXN bit set, or we would misidentify memory mappings
      as device mappings.
      
      Since the unmap_range() code path (which contains one instance of the
      flawed test) is used both for HYP mappings and stage-2 mappings, and
      considering the difference between the two, it is non-trivial to fix
      this by rewriting the tests in place, as it would involve passing
      down the type of mapping through all the functions.
      
      However, since HYP mappings and stage-2 mappings both deal with host
      physical addresses, we can simply check whether the mapping is backed
      by memory that is managed by the host kernel, and only perform the
      D-cache maintenance if this is the case.
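
      A sketch of that check (pfn_valid() is true exactly for pfns backed
      by memory that the host kernel manages):

       static bool kvm_is_device_pfn(unsigned long pfn)
       {
               /* not backed by host-managed memory => treat as device */
               return !pfn_valid(pfn);
       }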
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Pavel Fedin <p.fedin@samsung.com>
      Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
  3. 22 Nov, 2015 7 commits
    • parisc: Map kernel text and data on huge pages · 41b85a11
      Helge Deller authored

      Adjust the linker script and map_pages() to map kernel text and data on
      physical 1MB huge/large pages.
      Signed-off-by: Helge Deller <deller@gmx.de>
    • parisc: Add Huge Page and HUGETLBFS support · 736d2169
      Helge Deller authored

      This patch adds huge page support to allow userspace to allocate huge
      pages and to use hugetlbfs filesystem on 32- and 64-bit Linux kernels.
      A later patch will add kernel support to map kernel text and data on
      huge pages.
      
      The only requirement is that the kernel be compiled for a PA8X00 CPU
      (PA2.0 architecture). Older PA1.X CPUs do not support variable page
      sizes. 64bit kernels are compiled for PA2.0 by default.

      Technically, on parisc, multiple physical huge pages may be needed to
      emulate standard 2MB huge pages.
      Signed-off-by: Helge Deller <deller@gmx.de>
    • parisc: Use long branch to do_syscall_trace_exit · 337685e5
      Helge Deller authored

      Use the 22bit instead of the 17bit branch instruction on a 64bit kernel
      to reach the do_syscall_trace_exit function from the gateway page.
      A huge page enabled kernel may need the additional branch distance bits.
      Signed-off-by: Helge Deller <deller@gmx.de>
    • parisc: Increase initial kernel mapping to 32MB on 64bit kernel · 332b42e4
      Helge Deller authored

      For the 64bit kernel, the initial 16MB of kernel memory might become
      too small if you build a kernel with many modules built in and with
      kernel text and data areas mapped on huge pages.
      
      This patch increases the initial mapping to 32MB for 64bit kernels and
      keeps 16MB for 32bit kernels.
      Signed-off-by: Helge Deller <deller@gmx.de>
    • parisc: Initialize the fault vector earlier in the boot process. · 4182d0cd
      Helge Deller authored

      A fault vector on parisc needs to be 2K aligned.  Furthermore, the
      checksum of the fault vector needs to sum up to 0; it is calculated
      and written at runtime.
      
      Up to now we aligned both PA20 and PA11 fault vectors on the same 4K
      page in order to easily write the checksum after having mapped the
      kernel read-only (by mapping this page only as read-write).
      But when we want to map the kernel text and data on huge pages this
      makes things harder.
      So, simplify it by aligning both fault vectors on 2K boundaries and
      writing the checksum before we map the page read-only.
      Signed-off-by: Helge Deller <deller@gmx.de>
    • parisc: Add defines for Huge page support · 1f25ad26
      Helge Deller authored

      Huge pages on parisc will have the same size as one pmd table: on a
      64bit kernel that is 2MB with 4K kernel pages, and on a 32bit kernel
      4MB with 4K kernel pages.

      Since parisc does not physically support 2MB huge pages, emulate
      them with two consecutive 1MB pages instead. Keeping the same huge
      page size as one pmd will allow us to add transparent huge page support
      later on.
      
      Bit 21 in the pte flags was unused and will now be used to mark a page
      as a huge page (_PAGE_HPAGE_BIT).
      Signed-off-by: Helge Deller <deller@gmx.de>
    • parisc: Drop unused MADV_xxxK_PAGES flags from asm/mman.h · dcbf0d29
      Helge Deller authored

      Drop the MADV_xxK_PAGES flags, which were never used and came from a
      proposed API that was never integrated into the generic Linux kernel
      code.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Helge Deller <deller@gmx.de>
  4. 20 Nov, 2015 5 commits
  5. 19 Nov, 2015 3 commits
  6. 18 Nov, 2015 5 commits
    • arm64: barriers: fix smp_load_acquire to work with const arguments · c139aa60
      Will Deacon authored

      A newly introduced function in include/net/sock.h passes a const
      argument to smp_load_acquire:
      
        static inline int sk_state_load(const struct sock *sk)
        {
      	return smp_load_acquire(&sk->sk_state);
        }
      
      This causes an allmodconfig build failure, since our underlying
      load-acquire implementation does not handle const types correctly:
      
        include/net/sock.h: In function 'sk_state_load':
        ./arch/arm64/include/asm/barrier.h:71:3: error: read-only variable '___p1' used as 'asm' output
           asm volatile ("ldarb %w0, %1"    \
      
      This patch fixes the problem by reusing the trick in READ_ONCE that
      loads via a non-const member of an anonymous union. This has the
      advantage of allowing us to use smp_load_acquire on packed structures
      (e.g. arch_spinlock_t) as well as primitive types.
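
      A trimmed sketch of the trick (32-bit case only; the real macro
      switches on sizeof(*p) and handles all sizes):

       #define smp_load_acquire(p)                                      \
       ({                                                               \
               union { typeof(*p) __val; char __c[1]; } __u;            \
               asm volatile ("ldar %w0, %1"                             \
                       : "=r" (*(u32 *)__u.__c)                         \
                       : "Q" (*p) : "memory");                          \
               __u.__val;                                               \
       })

      The asm output operand now refers to the (non-const) union member
      rather than to a read-only variable, so const inputs compile.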
      
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: David Daney <david.daney@cavium.com>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Reported-by: Arnd Bergmann <arnd@arndb.de>
      Reported-by: David Daney <david.daney@cavium.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Fix R/O permissions in mark_rodata_ro · 0b2aa5b8
      Laura Abbott authored

      The permissions in mark_rodata_ro trigger a build error
      with STRICT_MM_TYPECHECKS. Fix this by introducing
      PAGE_KERNEL_ROX for the same reasons as PAGE_KERNEL_RO.
      From Ard:
      
      "PAGE_KERNEL_EXEC has PTE_WRITE set as well, making the range
      writeable under the ARMv8.1 DBM feature, that manages the
      dirty bit in hardware (writing to a page with the PTE_RDONLY
      and PTE_WRITE bits both set will clear the PTE_RDONLY bit in that case)"
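
      Sketched next to its neighbour (assuming the PTE_* attribute macros
      of that era), PAGE_KERNEL_ROX is PAGE_KERNEL_EXEC with PTE_WRITE
      traded for PTE_RDONLY:

       #define PAGE_KERNEL_EXEC __pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE)
       #define PAGE_KERNEL_ROX  __pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_RDONLY)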
      Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: crypto: reduce priority of core AES cipher · 08c6781c
      Ard Biesheuvel authored

      The asynchronous, merged implementations of AES in CBC, CTR and XTS
      modes are preferred when available (i.e., when instantiating ablkciphers
      explicitly). However, the synchronous core AES cipher combined with the
      generic CBC mode implementation will produce a 'cbc(aes)' blkcipher that
      is callable asynchronously as well. To prevent this implementation from
      being used when the accelerated asynchronous implementation is also
      available, lower its priority to 250 (i.e., below the asynchronous
      module's priority of 300).
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: use non-global mappings for UEFI runtime regions · 65da0a8e
      Ard Biesheuvel authored

      As pointed out by Russell King in response to the proposed ARM version
      of this code, the sequence to switch between the UEFI runtime mapping
      and current's actual userland mapping (and vice versa) is potentially
      unsafe, since it leaves a time window between the switch to the new
      page tables and the TLB flush where speculative accesses may hit on
      stale global TLB entries.
      
      So instead, use non-global mappings, and perform the switch via the
      ordinary ASID-aware context switch routines.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • ARM: dts: imx27.dtsi: change the clock information for usb · facf47ee
      Peter Chen authored
      On imx27, the controller needs three clocks to work; the old code is
      wrong, and usbmisc no longer includes clock handling code. Without
      this patch, the data abort below occurs when accessing usbmisc
      registers.
      
      usbcore: registered new interface driver usb-storage
      Unhandled fault: external abort on non-linefetch (0x008) at 0xf4424600
      pgd = c0004000
      [f4424600] *pgd=10000452(bad)
      Internal error: : 8 [#1] PREEMPT ARM
      Modules linked in:
      CPU: 0 PID: 1 Comm: swapper Not tainted 4.1.0-next-20150701-dirty #3089
      Hardware name: Freescale i.MX27 (Device Tree Support)
      task: c7832b60 ti: c783e000 task.ti: c783e000
      PC is at usbmisc_imx27_init+0x4c/0xbc
      LR is at usbmisc_imx27_init+0x40/0xbc
      pc : [<c03cb5c0>]    lr : [<c03cb5b4>]    psr: 60000093
      sp : c783fe08  ip : 00000000  fp : 00000000
      r10: c0576434  r9 : 0000009c  r8 : c7a773a0
      r7 : 01000000  r6 : 60000013  r5 : c7a776f0  r4 : c7a773f0
      r3 : f4424600  r2 : 00000000  r1 : 00000001  r0 : 00000001
      Flags: nZCv  IRQs off  FIQs on  Mode SVC_32  ISA ARM  Segment kernel
      Control: 0005317f  Table: a0004000  DAC: 00000017
      Process swapper (pid: 1, stack limit = 0xc783e190)
      Stack: (0xc783fe08 to 0xc7840000)
      Signed-off-by: Peter Chen <peter.chen@freescale.com>
      Reported-by: Fabio Estevam <fabio.estevam@freescale.com>
      Tested-by: Fabio Estevam <fabio.estevam@freescale.com>
      Cc: <stable@vger.kernel.org> #v4.1+
      Acked-by: Shawn Guo <shawnguo@kernel.org>
  7. 17 Nov, 2015 5 commits
    • arm64: bpf: make BPF prologue and epilogue align with ARM64 AAPCS · ec0738db
      Yang Shi authored

      Save and restore FP/LR in the BPF prog prologue and epilogue, and save
      SP to FP in the prologue in order to get the correct stack backtrace.
      
      However, the ARM64 JIT used FP (x29) as the eBPF fp register, and FP
      is subject to change during function calls, so the BPF prog stack
      base address may change too.

      Use x25 to replace FP as the BPF stack base register (fp). Since x25
      is a callee-saved register, it stays intact across function calls.
      It is initialized in the BPF prog prologue every time the BPF prog
      starts to run. Save and restore x25/x26 in the BPF prologue and
      epilogue to keep them intact for code outside of BPF. Strictly, x26
      is unnecessary, but SP requires 16-byte alignment.
      
      So, the BPF stack layout looks like:
      
                                       high
               original A64_SP =>   0:+-----+ BPF prologue
                                      |FP/LR|
               current A64_FP =>  -16:+-----+
                                      | ... | callee saved registers
                                      +-----+
                                      |     | x25/x26
               BPF fp register => -80:+-----+
                                      |     |
                                      | ... | BPF prog stack
                                      |     |
                                      |     |
               current A64_SP =>      +-----+
                                      |     |
                                      | ... | Function call stack
                                      |     |
                                      +-----+
                                        low
      
      CC: Zi Shen Lim <zlim.lnx@gmail.com>
      CC: Xi Wang <xi.wang@gmail.com>
      Signed-off-by: Yang Shi <yang.shi@linaro.org>
      Acked-by: Zi Shen Lim <zlim.lnx@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • arm64: kernel: pause/unpause function graph tracer in cpu_suspend() · de818bd4
      Lorenzo Pieralisi authored
      The function graph tracer adds instrumentation that is required to trace
      both entry and exit of a function. In particular the function graph
      tracer updates the "return address" of a function in order to insert
      a trace callback on function exit.
      
      Kernel power management functions like cpu_suspend() are called upon
      power down entry with "finisher" functions that are in turn called
      to trigger the power down sequence, but they may not return to the
      kernel through the normal return path.
      
      When the core resumes from low-power it returns to the cpu_suspend()
      function through the cpu_resume path, which leaves the trace stack frame
      set up by the function tracer in an inconsistent state upon return to the
      kernel when tracing is enabled.
      
      This patch fixes the issue by pausing/resuming the function graph
      tracer on the thread executing cpu_suspend() (i.e. the function call
      that subsequently triggers the "suspend finishers"), so that the
      function graph tracer state is kept consistent across functions that
      enter power down states and never return, by effectively disabling
      the graph tracer while they are executing.
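
      Roughly (a simplified sketch of the arm64 suspend path, not the exact
      diff; the pause/unpause helpers come from linux/ftrace.h):

       pause_graph_tracing();
       ret = __cpu_suspend_enter(arg, fn);  /* may come back via cpu_resume */
       unpause_graph_tracing();
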
      
      Fixes: 819e50e2 ("arm64: Add ftrace support")
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Reported-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Suggested-by: Steven Rostedt <rostedt@goodmis.org>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: <stable@vger.kernel.org> # 3.16+
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: do not include ptrace.h from compat.h · adc235af
      Arnd Bergmann authored

      Including ptrace.h brings a definition of BITS_PER_PAGE into device
      drivers and causes a build warning in allmodconfig builds:
      
      drivers/block/drbd/drbd_bitmap.c:482:0: warning: "BITS_PER_PAGE" redefined
       #define BITS_PER_PAGE  (1UL << (PAGE_SHIFT + 3))
      
      This uses a slightly different way to express current_pt_regs() that
      avoids the use of the header and gets by with the already included
      asm/ptrace.h.
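
      The replacement expression, roughly (task_pt_regs() and
      user_stack_pointer() come from the already included asm/ptrace.h):

       #define compat_user_stack_pointer() \
               (user_stack_pointer(task_pt_regs(current)))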
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: simplify dma_get_ops · 1dccb598
      Arnd Bergmann authored

      Including linux/acpi.h from asm/dma-mapping.h causes tons of compile-time
      warnings, e.g.
      
       drivers/isdn/mISDN/dsp_ecdis.h:43:0: warning: "FALSE" redefined
       drivers/isdn/mISDN/dsp_ecdis.h:44:0: warning: "TRUE" redefined
       drivers/net/fddi/skfp/h/targetos.h:62:0: warning: "TRUE" redefined
       drivers/net/fddi/skfp/h/targetos.h:63:0: warning: "FALSE" redefined
      
      However, it looks like the dependency should not even be there, as
      I do not see why __generic_dma_ops() cares about whether we have
      an ACPI based system or not.
      
      The current behavior is to fall back to the global dma_ops when
      a device has not set its own dma_ops, but only for DT based systems.
      This seems dangerous, as a random device might have different
      requirements regarding IOMMU or coherency, so we should really
      never have that fallback and just forbid DMA when we have not
      initialized DMA for a device.
      
      This removes the global dma_ops variable and the special-casing
      for ACPI, and just returns the dma ops that got set for the
      device, or the dummy_dma_ops if none were present.
      
      The original code has apparently been copied from arm32, where we
      rely on it for ISA devices like the floppy controller, but we
      should have no such devices on ARM64.
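
      The resulting lookup, as a sketch (matching the description above;
      dummy_dma_ops fails all DMA operations):

       static inline struct dma_map_ops *__generic_dma_ops(struct device *dev)
       {
               if (dev && dev->archdata.dma_ops)
                       return dev->archdata.dma_ops;

               /* no per-device ops: refuse DMA rather than guessing */
               return &dummy_dma_ops;
       }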
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      [catalin.marinas@arm.com: removed acpi_disabled check in arch_setup_dma_ops()]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: mm: use correct mapping granularity under DEBUG_RODATA · 4fee9f36
      Ard Biesheuvel authored
      When booting a 64k pages kernel that is built with CONFIG_DEBUG_RODATA
      and resides at an offset that is not a multiple of 512 MB, the rounding
      that occurs in __map_memblock() and fixup_executable() results in
      incorrect regions being mapped.
      
      The following snippet from /sys/kernel/debug/kernel_page_tables shows
      how, when the kernel is loaded 2 MB above the base of DRAM at 0x40000000,
      the first 2 MB of memory (which may be inaccessible from non-secure EL1
      or just reserved by the firmware) is inadvertently mapped into the end of
      the module region.
      
        ---[ Modules start ]---
        0xfffffdffffe00000-0xfffffe0000000000     2M RW NX ... UXN MEM/NORMAL
        ---[ Modules end ]---
        ---[ Kernel Mapping ]---
        0xfffffe0000000000-0xfffffe0000090000   576K RW NX ... UXN MEM/NORMAL
        0xfffffe0000090000-0xfffffe0000200000  1472K ro x  ... UXN MEM/NORMAL
        0xfffffe0000200000-0xfffffe0000800000     6M ro x  ... UXN MEM/NORMAL
        0xfffffe0000800000-0xfffffe0000810000    64K ro x  ... UXN MEM/NORMAL
        0xfffffe0000810000-0xfffffe0000a00000  1984K RW NX ... UXN MEM/NORMAL
        0xfffffe0000a00000-0xfffffe00ffe00000  4084M RW NX ... UXN MEM/NORMAL
      
      The same issue is likely to occur on 16k pages kernels whose load
      address is not a multiple of 32 MB (i.e., SECTION_SIZE). So round to
      SWAPPER_BLOCK_SIZE instead of SECTION_SIZE.
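
      A sketch of the adjusted rounding (identifier names follow the arm64
      mm code of that era):

       /*
        * Round to the granularity of the early swapper block mappings,
        * which equals SECTION_SIZE only on 4k pages kernels.
        */
       unsigned long kernel_x_start = round_down(__pa(_stext), SWAPPER_BLOCK_SIZE);
       unsigned long kernel_x_end = round_up(__pa(__init_end), SWAPPER_BLOCK_SIZE);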
      
      Fixes: da141706 ("arm64: add better page protections to arm64")
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Laura Abbott <labbott@redhat.com>
      Cc: <stable@vger.kernel.org> # 4.0+
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  8. 16 Nov, 2015 8 commits