1. 30 Jun, 2020 2 commits
  2. 10 Jun, 2020 2 commits
  3. 24 Mar, 2020 1 commit
  4. 27 Feb, 2020 1 commit
    • x86/pkeys: Manually set X86_FEATURE_OSPKE to preserve existing changes · 735a6dd0
      Sean Christopherson authored
      Explicitly set X86_FEATURE_OSPKE via set_cpu_cap() instead of calling
      get_cpu_cap() to pull the feature bit from CPUID after enabling CR4.PKE.
      Invoking get_cpu_cap() effectively wipes out any {set,clear}_cpu_cap()
      changes that were made between this_cpu->c_init() and setup_pku(), as
      all non-synthetic feature words are reinitialized from the CPU's CPUID
      values.
      
      Blasting away capability updates manifests most visibly when running
      on a VMX-capable CPU, but with VMX disabled by BIOS.  To indicate that
      VMX is disabled, init_ia32_feat_ctl() clears X86_FEATURE_VMX, using
      clear_cpu_cap() instead of setup_clear_cpu_cap() so that KVM can report
      which CPU is misconfigured (KVM needs to probe every CPU anyway).
      Restoring X86_FEATURE_VMX from CPUID causes KVM to think VMX is enabled,
      ultimately leading to an unexpected #GP when KVM attempts to do VMXON.
      
      Arguably, init_ia32_feat_ctl() should use setup_clear_cpu_cap() and let
      KVM figure out a different way to report the misconfigured CPU, but VMX
      is not the only feature bit that is affected, i.e. there is precedent
      that tweaking feature bits via {set,clear}_cpu_cap() after ->c_init()
      is expected to work.  Most notably, x86_init_rdrand()'s clearing of
      X86_FEATURE_RDRAND when RDRAND malfunctions is also overwritten.
      
      Fixes: 06976945 ("x86/mm/pkeys: Actually enable Memory Protection Keys in the CPU")
      Reported-by: Jacob Keller <jacob.e.keller@intel.com>
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
      Tested-by: Jacob Keller <jacob.e.keller@intel.com>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20200226231615.13664-1-sean.j.christopherson@intel.com
      735a6dd0
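
      A sketch of the resulting setup_pku() flow (early-return checks elided;
      this is not the literal diff):

        static void setup_pku(struct cpuinfo_x86 *c)
        {
                if (!cpu_has(c, X86_FEATURE_PKU))
                        return;

                cr4_set_bits(X86_CR4_PKE);
                /*
                 * CR4.PKE makes CPUID report OSPKE, but re-reading it via
                 * get_cpu_cap() would wipe out earlier {set,clear}_cpu_cap()
                 * adjustments, so set the single bit explicitly.
                 */
                set_cpu_cap(c, X86_FEATURE_OSPKE);
        }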
  5. 20 Feb, 2020 1 commit
    • x86/split_lock: Enable split lock detection by kernel · 6650cdd9
      Peter Zijlstra (Intel) authored
      
      
      A split-lock occurs when an atomic instruction operates on data that spans
      two cache lines. In order to maintain atomicity the core takes a global bus
      lock.
      
      This is typically >1000 cycles slower than an atomic operation within a
      cache line. It also disrupts performance on other cores (which must wait
      for the bus lock to be released before their memory operations can
      complete). For real-time systems this may mean missing deadlines. For other
      systems it may just be very annoying.
      
      Some CPUs have the capability to raise an #AC trap when a split lock is
      attempted.
      
      Provide a command line option to give the user choices on how to handle
      this:
      
      split_lock_detect=
      	off	- not enabled (no traps for split locks)
      	warn	- warn once when an application does a
      		  split lock, but allow it to continue
      		  running.
      	fatal	- Send SIGBUS to applications that cause split lock
      
      On systems that support split lock detection the default is "warn". Note
      that if the kernel hits a split lock in any mode other than "off", it will
      oops. (A minimal user-space trigger is sketched after this entry.)
      
      One implementation wrinkle is that the MSR to control the split lock
      detection is per-core, not per thread. This might result in some short
      lived races on HT systems in "warn" mode if Linux tries to enable on one
      thread while disabling on the other. Race analysis by Sean Christopherson:
      
        - Toggling of split-lock is only done in "warn" mode.  Worst case
          scenario of a race is that a misbehaving task will generate multiple
          #AC exceptions on the same instruction.  And this race will only occur
          if both siblings are running tasks that generate split-lock #ACs, e.g.
          a race where sibling threads are writing different values will only
          occur if CPUx is disabling split-lock after an #AC and CPUy is
          re-enabling split-lock after *its* previous task generated an #AC.
        - Transitioning between off/warn/fatal modes at runtime isn't supported
          and disabling is tracked per task, so hardware will always reach a steady
          state that matches the configured mode.  I.e. split-lock is guaranteed to
          be enabled in hardware once all _TIF_SLD threads have been scheduled out.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Co-developed-by: Fenghua Yu <fenghua.yu@intel.com>
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      Co-developed-by: Tony Luck <tony.luck@intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Link: https://lore.kernel.org/r/20200126200535.GB30377@agluck-desk2.amr.corp.intel.com
      6650cdd9
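
      For illustration, a hypothetical user-space trigger (not part of the
      patch): a locked read-modify-write on a 32-bit value that straddles a
      64-byte cache line. With split_lock_detect=warn this is logged once and
      the program continues; with split_lock_detect=fatal it receives SIGBUS.

        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
                /* 64-byte aligned buffer; a 4-byte word at offset 62 spans two
                 * cache lines, so the locked add below is a split lock. */
                char *buf = aligned_alloc(64, 128);
                volatile uint32_t *val;

                if (!buf)
                        return 1;
                val = (volatile uint32_t *)(buf + 62);

                *val = 0;
                __atomic_fetch_add(val, 1, __ATOMIC_SEQ_CST);
                printf("split-locked increment survived: %u\n", *val);
                free(buf);
                return 0;
        }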
  6. 23 Jan, 2020 1 commit
    • x86/mpx: remove MPX from arch/x86 · 45fc24e8
      Dave Hansen authored
      
      
      From: Dave Hansen <dave.hansen@linux.intel.com>
      
      MPX is being removed from the kernel due to a lack of support
      in the toolchain going forward (gcc).

      This removes all of the (at this point dead) MPX handling code
      remaining in the tree.  The only code left is the XSAVE support
      for MPX state, which is currently needed for KVM to handle VMs
      which might use MPX.
      
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: x86@kernel.org
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      45fc24e8
  7. 17 Jan, 2020 2 commits
  8. 13 Jan, 2020 1 commit
  9. 09 Jan, 2020 1 commit
  10. 10 Dec, 2019 1 commit
  11. 26 Nov, 2019 1 commit
    • x86/doublefault/32: Move #DF stack and TSS to cpu_entry_area · dc4e0021
      Andy Lutomirski authored
      
      
      There are three problems with the current layout of the doublefault
      stack and TSS.  First, the TSS is only cacheline-aligned, which is
      not enough -- if the hardware portion of the TSS (struct x86_hw_tss)
      crosses a page boundary, horrible things happen [0].  Second, the
      stack and TSS are global, so simultaneous double faults on different
      CPUs will cause massive corruption.  Third, the whole mechanism
      won't work if user CR3 is loaded, resulting in a triple fault [1].
      
      Let the doublefault stack and TSS share a page (which prevents the
      TSS from spanning a page boundary), make it percpu, and move it into
      cpu_entry_area.  Teach the stack dump code about the doublefault
      stack.
      
      [0] Real hardware will read past the end of the page onto the next
          *physical* page if a task switch happens.  Virtual machines may
          have any number of bugs, and I would consider it reasonable for
          a VM to summarily kill the guest if it tries to task-switch to
          a page-spanning TSS.
      
      [1] Real hardware triple faults.  At least some VMs seem to hang.
          I'm not sure what's going on.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      dc4e0021
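
      The resulting layout is roughly the following sketch (names follow the
      cpu_entry_area conventions; not the literal diff). The stack fills the
      remainder of a page whose tail is the hardware TSS, so the TSS can never
      straddle a page boundary, and one instance exists per CPU:

        struct doublefault_stack {
                char stack[PAGE_SIZE - sizeof(struct x86_hw_tss)];
                struct x86_hw_tss tss;
        } __aligned(PAGE_SIZE);

        /*
         * Per-CPU and placed in cpu_entry_area, so it is mapped even when the
         * user (PTI) CR3 is loaded.
         */
        DECLARE_PER_CPU_PAGE_ALIGNED(struct doublefault_stack, doublefault_stack);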
  12. 16 Nov, 2019 6 commits
    • x86/ioperm: Extend IOPL config to control ioperm() as well · 111e7b15
      Thomas Gleixner authored
      
      
      If iopl() is disabled, then providing ioperm() does not make much sense.
      
      Rename the config option and disable/enable both syscalls with it. Guard
      the code with #ifdefs where appropriate.
      Suggested-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      111e7b15
    • x86/iopl: Restrict iopl() permission scope · c8137ace
      Thomas Gleixner authored
      
      
      Access to the full I/O port range can also be provided by the TSS I/O
      bitmap, but that would require copying 8K of data when scheduling in the
      task. As shown with the sched-out optimization, TSS.io_bitmap_base can be
      used to switch the incoming task to a preallocated I/O bitmap which has all
      bits zero, i.e. allows access to all I/O ports.

      Implementing this allows providing an iopl() emulation mode which restricts
      the IOPL level 3 permissions to I/O port access but removes the STI/CLI
      permission which comes with the hardware IOPL mechanism. (The core idea is
      sketched after this entry.)

      Provide a config option to switch IOPL to emulation mode, make it the
      default, and while at it also provide an option to disable IOPL completely.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Andy Lutomirski <luto@kernel.org>
      c8137ace
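
      A sketch of that core idea (the helper name below is made up for
      illustration; the offset constant matches the one used by the final
      code): granting access to all ports means pointing io_bitmap_base at a
      preallocated bitmap of all zeroes, while EFLAGS.IOPL stays at 0 so
      STI/CLI from user space still raise #GP.

        static inline void tss_allow_all_ioports(struct tss_struct *tss)
        {
                /* All-zero bitmap => every port permitted, no IOPL raise. */
                tss->x86_tss.io_bitmap_base = IO_BITMAP_OFFSET_VALID_ALL;
        }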
    • x86/ioperm: Add bitmap sequence number · 060aa16f
      Thomas Gleixner authored
      
      
      Add a globally unique sequence number which is incremented when ioperm() is
      changing the I/O bitmap of a task. Store the new sequence number in the
      io_bitmap structure and compare it with the sequence number of the I/O
      bitmap which was last loaded on a CPU. Only update the bitmap if the
      sequence is different.
      
      That should further reduce the overhead of I/O bitmap scheduling when there
      are only a few I/O bitmap users on the system.
      
      The 64bit sequence counter is sufficient. A wraparound of the sequence
      counter assuming an ioperm() call every nanosecond would require about 584
      years of uptime.
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      060aa16f
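
      A sketch of the bookkeeping (structure and field names approximate the
      merged code): every ioperm() change bumps a 64-bit sequence number, and
      the copy into the TSS is skipped when the CPU already holds that
      sequence.

        struct io_bitmap {
                u64             sequence;       /* bumped on every ioperm() change */
                unsigned int    max;            /* bytes of the bitmap in use */
                unsigned long   bitmap[IO_BITMAP_LONGS];
        };

        static void tss_update_io_bitmap_sketch(struct tss_struct *tss,
                                                struct io_bitmap *iobm)
        {
                /* Skip the expensive copy if this CPU already has this bitmap. */
                if (tss->io_bitmap.prev_sequence == iobm->sequence)
                        return;

                memcpy(tss->io_bitmap.bitmap, iobm->bitmap, iobm->max);
                tss->io_bitmap.prev_sequence = iobm->sequence;
        }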
    • x86/tss: Move I/O bitmap data into a separate struct · f5848e5f
      Thomas Gleixner authored
      
      
      Move the non-hardware portion of the I/O bitmap data into a separate
      struct for readability's sake.
      Originally-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      f5848e5f
    • x86/io: Speedup schedule out of I/O bitmap user · ecc7e37d
      Thomas Gleixner authored
      
      
      There is no requirement to update the TSS I/O bitmap when a thread using it is
      scheduled out and the incoming thread does not use it.
      
      For the permission check based on the TSS I/O bitmap the CPU calculates the memory
      location of the I/O bitmap by the address of the TSS and the io_bitmap_base member
      of the tss_struct. The easiest way to invalidate the I/O bitmap is to switch the
      offset to an address outside of the TSS limit.
      
      If an I/O instruction is issued from user space, the TSS limit causes a #GP
      to be raised in the same way as a valid I/O bitmap with all bits set to 1
      would do.

      This removes the extra work when an I/O bitmap using task is scheduled out
      and puts the burden on the rare I/O bitmap users when they are scheduled
      in. (A sketch of the invalidation follows this entry.)
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      ecc7e37d
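
      The invalidation on schedule-out is then a single store; a sketch using
      the offset constant from the merged code:

        static inline void tss_invalidate_io_bitmap(struct tss_struct *tss)
        {
                /*
                 * An offset beyond the TSS limit makes the CPU raise #GP for
                 * any user-space I/O instruction, without touching 8K of
                 * bitmap data.
                 */
                tss->x86_tss.io_bitmap_base = IO_BITMAP_OFFSET_INVALID;
        }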
    • x86/cpu: Unify cpu_init() · 505b7899
      Thomas Gleixner authored
      
      
      As with copy_thread_tls(), the 32-bit and 64-bit implementations of
      cpu_init() are very similar, and unifying them avoids duplicate changes in
      the future.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Andy Lutomirski <luto@kernel.org>
      505b7899
  13. 15 Nov, 2019 1 commit
  14. 04 Nov, 2019 2 commits
  15. 28 Oct, 2019 3 commits
    • x86/speculation/taa: Add mitigation for TSX Async Abort · 1b42f017
      Pawan Gupta authored
      TSX Async Abort (TAA) is a side channel vulnerability to the internal
      buffers in some Intel processors, similar to Microarchitectural Data
      Sampling (MDS). In this case, certain loads may speculatively pass
      invalid data to dependent operations when an asynchronous abort
      condition is pending in a TSX transaction.
      
      This includes loads with no fault or assist condition. Such loads may
      speculatively expose stale data from the uarch data structures as in
      MDS. The scope of exposure covers both same-thread and cross-thread
      cases. This issue affects all current processors that support TSX but
      do not have ARCH_CAP_TAA_NO (bit 8) set in MSR_IA32_ARCH_CAPABILITIES.
      
      On CPUs which have their IA32_ARCH_CAPABILITIES MSR bit MDS_NO=0,
      CPUID.MD_CLEAR=1 and the MDS mitigation is clearing the CPU buffers
      using VERW or L1D_FLUSH, there is no additional mitigation needed for
      TAA. On affected CPUs with MDS_NO=1 this issue can be mitigated by
      disabling the Transactional Synchronization Extensions (TSX) feature.
      
      A new MSR, IA32_TSX_CTRL, available on future processors and on current
      processors after a microcode update, can be used to control the TSX
      feature. There are two bits in that MSR:
      
      * TSX_CTRL_RTM_DISABLE disables the TSX sub-feature Restricted
      Transactional Memory (RTM).
      
      * TSX_CTRL_CPUID_CLEAR clears the RTM enumeration in CPUID. The other
      TSX sub-feature, Hardware Lock Elision (HLE), is unconditionally
      disabled with updated microcode but still enumerated as present by
      CPUID(EAX=7).EBX{bit4}.
      
      The second mitigation approach is similar to MDS which is clearing the
      affected CPU buffers on return to user space and when entering a guest.
      Relevant microcode update is required for the mitigation to work.  More
      details on this approach can be found here:
      
        https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html
      
      The TSX feature can be controlled by the "tsx" command line parameter.
      If it is force-enabled then "Clear CPU buffers" (MDS mitigation) is
      deployed. The effective mitigation state can be read from sysfs.
      
       [ bp:
         - massage + comments cleanup
         - s/TAA_MITIGATION_TSX_DISABLE/TAA_MITIGATION_TSX_DISABLED/g - Josh.
         - remove partial TAA mitigation in update_mds_branch_idle() - Josh.
         - s/tsx_async_abort_cmdline/tsx_async_abort_parse_cmdline/g
       ]
      Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
      1b42f017
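
      The TSX-disable leg of the mitigation boils down to one MSR update; a
      sketch using the MSR and bit names this series introduces (enumeration
      checks elided):

        static void tsx_disable(void)
        {
                u64 tsx;

                rdmsrl(MSR_IA32_TSX_CTRL, tsx);
                /* Force-abort RTM and hide it from CPUID enumeration. */
                tsx |= TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR;
                wrmsrl(MSR_IA32_TSX_CTRL, tsx);
        }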
    • x86/cpu: Add a "tsx=" cmdline option with TSX disabled by default · 95c5824f
      Pawan Gupta authored
      
      
      Add a kernel cmdline parameter "tsx" to control the Transactional
      Synchronization Extensions (TSX) feature. On CPUs that support TSX
      control, use "tsx=on|off" to enable or disable TSX. Not specifying this
      option is equivalent to "tsx=off". This is because on certain processors
      TSX may be used as a part of a speculative side channel attack.
      
      Carve out the TSX controlling functionality into a separate compilation
      unit because TSX is a CPU feature while the TSX async abort control
      machinery will go to cpu/bugs.c.
      
       [ bp: - Massage, shorten and clear the arg buffer.
             - Clarifications of the tsx= possible options - Josh.
             - Expand on TSX_CTRL availability - Pawan. ]
      Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
      95c5824f
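
      A sketch of the "tsx=" parsing described above (simplified; the real
      code also handles CPUs without TSX control): anything other than an
      explicit "on" leaves TSX disabled.

        enum tsx_ctrl_states { TSX_CTRL_ENABLE, TSX_CTRL_DISABLE };
        static enum tsx_ctrl_states tsx_ctrl_state;

        void __init tsx_init_sketch(void)
        {
                char arg[5];
                int ret;

                ret = cmdline_find_option(boot_command_line, "tsx", arg, sizeof(arg));
                if (ret >= 0 && !strcmp(arg, "on"))
                        tsx_ctrl_state = TSX_CTRL_ENABLE;
                else
                        tsx_ctrl_state = TSX_CTRL_DISABLE;
        }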
    • x86/cpu: Add a helper function x86_read_arch_cap_msr() · 286836a7
      Pawan Gupta authored
      
      
      Add a helper function to read the IA32_ARCH_CAPABILITIES MSR.
      Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
      Reviewed-by: Mark Gross <mgross@linux.intel.com>
      Reviewed-by: Tony Luck <tony.luck@intel.com>
      Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
      286836a7
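
      The helper is essentially a guarded MSR read; a sketch matching the
      description above:

        u64 x86_read_arch_cap_msr(void)
        {
                u64 ia32_cap = 0;

                /* The MSR only exists when the CPU enumerates it. */
                if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
                        rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);

                return ia32_cap;
        }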
  16. 06 Sep, 2019 1 commit
  17. 28 Aug, 2019 1 commit
  18. 28 Jul, 2019 1 commit
  19. 25 Jul, 2019 2 commits
  20. 10 Jul, 2019 1 commit
  21. 03 Jul, 2019 1 commit
    • x86/fsgsbase: Revert FSGSBASE support · 049331f2
      Thomas Gleixner authored
      The FSGSBASE series turned out to have serious bugs and there is still an
      open issue which is not fully understood yet.
      
      The confidence in those changes has become close to zero especially as the
      test cases which have been shipped with that series were obviously never
      run before sending the final series out to LKML.
      
        ./fsgsbase_64 >/dev/null
        Segmentation fault
      
      As the merge window is close, the only sane decision is to revert FSGSBASE
      support. The revert is necessary as this branch has been merged into
      perf/core already and rebasing all of that a few days before the merge
      window is not the most brilliant idea.
      
      I could definitely slap myself for not noticing the test case fail when
      merging that series, but TBH my expectations weren't that low back
      then. Won't happen again.
      
      Revert the following commits:
      539bca53 ("x86/entry/64: Fix and clean up paranoid_exit")
      2c7b5ac5 ("Documentation/x86/64: Add documentation for GS/FS addressing mode")
      f987c955 ("x86/elf: Enumerate kernel FSGSBASE capability in AT_HWCAP2")
      2032f1f9 ("x86/cpu: Enable FSGSBASE on 64bit by default and add a chicken bit")
      5bf0cab6 ("x86/entry/64: Document GSBASE handling in the paranoid path")
      708078f6 ("x86/entry/64: Handle FSGSBASE enabled paranoid entry/exit")
      79e1932f ("x86/entry/64: Introduce the FIND_PERCPU_BASE macro")
      1d07316b ("x86/entry/64: Switch CR3 before SWAPGS in paranoid entry")
      f60a83df ("x86/process/64: Use FSGSBASE instructions on thread copy and ptrace")
      1ab5f3f7 ("x86/process/64: Use FSBSBASE in switch_to() if available")
      a86b4625 ("x86/fsgsbase/64: Enable FSGSBASE instructions in helper functions")
      8b71340d ("x86/fsgsbase/64: Add intrinsics for FSGSBASE instructions")
      b64ed19b ("x86/cpu: Add 'unsafe_fsgsbase' to enable CR4.FSGSBASE")
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Cc: Chang S. Bae <chang.seok.bae@intel.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Ravi Shankar <ravi.v.shankar@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      049331f2
  22. 22 Jun, 2019 4 commits
  23. 20 Jun, 2019 3 commits
    • x86/cpufeatures: Enumerate the new AVX512 BFLOAT16 instructions · b302e4b1
      Fenghua Yu authored
      
      
      AVX512 BFLOAT16 instructions support 16-bit BFLOAT16 floating-point
      format (BF16) for deep learning optimization.
      
      BF16 is a short version of 32-bit single-precision floating-point
      format (FP32) and has several advantages over 16-bit half-precision
      floating-point format (FP16). BF16 keeps FP32 accumulation after
      multiplication without loss of precision, offers more than enough
      range for deep learning training tasks, and doesn't need to handle
      hardware exceptions.
      
      AVX512 BFLOAT16 instructions are enumerated in CPUID.7.1:EAX[bit 5]
      AVX512_BF16.
      
      CPUID.7.1:EAX contains only feature bits. Reuse the currently empty
      word 12 as a pure features word to hold the feature bits including
      AVX512_BF16.
      
      Detailed information of the CPUID bit and AVX512 BFLOAT16 instructions
      can be found in the latest Intel Architecture Instruction Set Extensions
      and Future Features Programming Reference.
      
       [ bp: Check CPUID(7) subleaf validity before accessing subleaf 1. ]
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: "Chang S. Bae" <chang.seok.bae@intel.com>
      Cc: Frederic Weisbecker <frederic@kernel.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jann Horn <jannh@google.com>
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Nadav Amit <namit@vmware.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Pavel Tatashin <pasha.tatashin@oracle.com>
      Cc: Peter Feiner <pfeiner@google.com>
      Cc: Radim Krcmar <rkrcmar@redhat.com>
      Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
      Cc: "Ravi V Shankar" <ravi.v.shankar@intel.com>
      Cc: Robert Hoo <robert.hu@linux.intel.com>
      Cc: "Sean J Christopherson" <sean.j.christopherson@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Thomas Lendacky <Thomas.Lendacky@amd.com>
      Cc: x86 <x86@kernel.org>
      Link: https://lkml.kernel.org/r/1560794416-217638-3-git-send-email-fenghua.yu@intel.com
      b302e4b1
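
      For illustration, a hypothetical user-space probe for the new bit (not
      from the patch), including the subleaf-validity check mentioned in the
      bp note:

        #include <cpuid.h>
        #include <stdio.h>

        int main(void)
        {
                unsigned int eax, ebx, ecx, edx;

                /* CPUID.7.0:EAX reports the highest valid subleaf of leaf 7. */
                if (__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx) && eax >= 1) {
                        __get_cpuid_count(7, 1, &eax, &ebx, &ecx, &edx);
                        printf("AVX512_BF16: %s\n", (eax & (1u << 5)) ? "yes" : "no");
                } else {
                        printf("CPUID leaf 7, subleaf 1 not available\n");
                }
                return 0;
        }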
    • x86/cpufeatures: Combine word 11 and 12 into a new scattered features word · acec0ce0
      Fenghua Yu authored
      
      
      It's a waste for the four X86_FEATURE_CQM_* feature bits to occupy two
      whole feature bits words. To better utilize feature words, re-define
      word 11 to host scattered features and move the four X86_FEATURE_CQM_*
      features into Linux defined word 11. More scattered features can be
      added in word 11 in the future.
      
      Rename leaf 11 in cpuid_leafs to CPUID_LNX_4 to reflect it's a
      Linux-defined leaf.
      
      Rename leaf 12 as CPUID_DUMMY which will be replaced by a meaningful
      name in the next patch when CPUID.7.1:EAX occupies word 12.

      The maximum number of RMIDs and the cache occupancy scale are retrieved
      from CPUID.0xf.1 after the scattered CQM features are enumerated. Carve
      that code out into a separate function.

      KVM doesn't support resctrl now, so it's safe to move the
      X86_FEATURE_CQM_* features to scattered features word 11 for KVM.
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Aaron Lewis <aaronlewis@google.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Babu Moger <babu.moger@amd.com>
      Cc: "Chang S. Bae" <chang.seok.bae@intel.com>
      Cc: "Sean J Christopherson" <sean.j.christopherson@intel.com>
      Cc: Frederic Weisbecker <frederic@kernel.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jann Horn <jannh@google.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: kvm ML <kvm@vger.kernel.org>
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Nadav Amit <namit@vmware.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Pavel Tatashin <pasha.tatashin@oracle.com>
      Cc: Peter Feiner <pfeiner@google.com>
      Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
      Cc: Ravi V Shankar <ravi.v.shankar@intel.com>
      Cc: Sherry Hurwitz <sherry.hurwitz@amd.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Thomas Lendacky <Thomas.Lendacky@amd.com>
      Cc: x86 <x86@kernel.org>
      Link: https://lkml.kernel.org/r/1560794416-217638-2-git-send-email-fenghua.yu@intel.com
      acec0ce0
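
      After the rework, a scattered feature such as the CQM bits is picked up
      from the table in cpu/scattered.c. A sketch of what such entries look
      like (the concrete values here are an assumption, not a quote of the
      patch):

        static const struct cpuid_bit cpuid_bits[] = {
                /* { feature, CPUID register, bit, leaf, subleaf }; other entries omitted */
                { X86_FEATURE_CQM_LLC,          CPUID_EDX, 1, 0x0000000f, 0 },
                { X86_FEATURE_CQM_OCCUP_LLC,    CPUID_EDX, 0, 0x0000000f, 1 },
                { X86_FEATURE_CQM_MBM_TOTAL,    CPUID_EDX, 1, 0x0000000f, 1 },
                { X86_FEATURE_CQM_MBM_LOCAL,    CPUID_EDX, 2, 0x0000000f, 1 },
        };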
    • x86/cpufeatures: Carve out CQM features retrieval · 45fc56e6
      Borislav Petkov authored
      
      
      ... into a separate function for better readability. Split out from a
      patch from Fenghua Yu <fenghua.yu@intel.com> to keep the purely
      mechanical code movement separate for easy review.
      
      No functional changes.
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: x86@kernel.org
      45fc56e6