Core x86 changes for v6.9:

- The biggest change is the rework of the percpu code,
   to support the 'Named Address Spaces' GCC feature,
   by Uros Bizjak:
 
    - This allows C code to access GS and FS segment relative
      memory via variables declared with such attributes,
      which allows the compiler to better optimize those accesses
      than the previous inline assembly code.
 
    - The series also includes a number of micro-optimizations
      for various percpu access methods, plus a number of
      cleanups of %gs accesses in assembly code.
 
    - These changes have been exposed to linux-next testing for
      the last ~5 months, with no known regressions in this area.
 
 - Fix/clean up __switch_to()'s broken but accidentally
   working handling of FPU switching - which also generates
   better code.
 
 - Propagate more RIP-relative addressing in assembly code,
   to generate slightly better code.
 
 - Rework the CPU mitigations Kconfig space to be less idiosyncratic,
   to make it easier for distros to follow & maintain these options.
 
 - Rework the x86 idle code to cure RCU violations and
   to clean up the logic.
 
 - Clean up the vDSO Makefile logic.
 
 - Misc cleanups and fixes.
 
 [ Please note that this branch contains more merge commits (three)
   than is usual for x86 topic trees. This happened due to the long
   testing lifecycle of the percpu changes, which spanned 3 merge
   windows and generated a longer history, along with various
   interactions with other core x86 changes that we felt were better
   carried in a single branch. ]
 
 Signed-off-by: Ingo Molnar <mingo@kernel.org>
 -----BEGIN PGP SIGNATURE-----
 
 iQJFBAABCgAvFiEEBpT5eoXrXCwVQwEKEnMQ0APhK1gFAmXvB0gRHG1pbmdvQGtl
 cm5lbC5vcmcACgkQEnMQ0APhK1jUqRAAqnEQPiabF5acQlHrwviX+cjSobDlqtH5
 9q2AQy9qaEHapzD0XMOxvFye6XIvehGOGxSPvk6CoviSxBND8rb56lvnsEZuLeBV
 Bo5QSIL2x42Zrvo11iPHwgXZfTIusU90sBuKDRFkYBAxY3HK2naMDZe8MAsYCUE9
 nwgHF8DDc/NYiSOXV8kosWoWpNIkoK/STyH5bvTQZMqZcwyZ49AIeP1jGZb/prbC
 e/rbnlrq5Eu6brpM7xo9kELO0Vhd34urV14KrrIpdkmUKytW2KIsyvW8D6fqgDBj
 NSaQLLcz0pCXbhF+8Nqvdh/1coR4L7Ymt08P1rfEjCsQgb/2WnSAGUQuC5JoGzaj
 ngkbFcZllIbD9gNzMQ1n4Aw5TiO+l9zxCqPC/r58Uuvstr+K9QKlwnp2+B3Q73Ft
 rojIJ04NJL6lCHdDgwAjTTks+TD2PT/eBWsDfJ/1pnUWttmv9IjMpnXD5sbHxoiU
 2RGGKnYbxXczYdq/ALYDWM6JXpfnJZcXL3jJi0IDcCSsb92xRvTANYFHnTfyzGfw
 EHkhbF4e4Vy9f6QOkSP3CvW5H26BmZS9DKG0J9Il5R3u2lKdfbb5vmtUmVTqHmAD
 Ulo5cWZjEznlWCAYSI/aIidmBsp9OAEvYd+X7Z5SBIgTfSqV7VWHGt0BfA1heiVv
 F/mednG0gGc=
 =3v4F
 -----END PGP SIGNATURE-----

Merge tag 'x86-core-2024-03-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull core x86 updates from Ingo Molnar:

 - The biggest change is the rework of the percpu code, to support the
   'Named Address Spaces' GCC feature, by Uros Bizjak:

      - This allows C code to access GS and FS segment relative memory
        via variables declared with such attributes, which allows the
        compiler to better optimize those accesses than the previous
        inline assembly code.

      - The series also includes a number of micro-optimizations for
        various percpu access methods, plus a number of cleanups of %gs
        accesses in assembly code.

      - These changes have been exposed to linux-next testing for the
        last ~5 months, with no known regressions in this area.

 - Fix/clean up __switch_to()'s broken but accidentally working handling
   of FPU switching - which also generates better code

 - Propagate more RIP-relative addressing in assembly code, to generate
   slightly better code

 - Rework the CPU mitigations Kconfig space to be less idiosyncratic, to
   make it easier for distros to follow & maintain these options

 - Rework the x86 idle code to cure RCU violations and to clean up the
   logic

 - Clean up the vDSO Makefile logic

 - Misc cleanups and fixes

* tag 'x86-core-2024-03-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (52 commits)
  x86/idle: Select idle routine only once
  x86/idle: Let prefer_mwait_c1_over_halt() return bool
  x86/idle: Cleanup idle_setup()
  x86/idle: Clean up idle selection
  x86/idle: Sanitize X86_BUG_AMD_E400 handling
  sched/idle: Conditionally handle tick broadcast in default_idle_call()
  x86: Increase brk randomness entropy for 64-bit systems
  x86/vdso: Move vDSO to mmap region
  x86/vdso/kbuild: Group non-standard build attributes and primary object file rules together
  x86/vdso: Fix rethunk patching for vdso-image-{32,64}.o
  x86/retpoline: Ensure default return thunk isn't used at runtime
  x86/vdso: Use CONFIG_COMPAT_32 to specify vdso32
  x86/vdso: Use $(addprefix ) instead of $(foreach )
  x86/vdso: Simplify obj-y addition
  x86/vdso: Consolidate targets and clean-files
  x86/bugs: Rename CONFIG_RETHUNK              => CONFIG_MITIGATION_RETHUNK
  x86/bugs: Rename CONFIG_CPU_SRSO             => CONFIG_MITIGATION_SRSO
  x86/bugs: Rename CONFIG_CPU_IBRS_ENTRY       => CONFIG_MITIGATION_IBRS_ENTRY
  x86/bugs: Rename CONFIG_CPU_UNRET_ENTRY      => CONFIG_MITIGATION_UNRET_ENTRY
  x86/bugs: Rename CONFIG_SLS                  => CONFIG_MITIGATION_SLS
  ...
Linus Torvalds 2024-03-11 19:53:15 -07:00
commit 685d982112
97 changed files with 668 additions and 563 deletions


@@ -473,8 +473,8 @@ Spectre variant 2
    -mindirect-branch=thunk-extern -mindirect-branch-register options.
    If the kernel is compiled with a Clang compiler, the compiler needs
    to support -mretpoline-external-thunk option.  The kernel config
-   CONFIG_RETPOLINE needs to be turned on, and the CPU needs to run with
-   the latest updated microcode.
+   CONFIG_MITIGATION_RETPOLINE needs to be turned on, and the CPU needs
+   to run with the latest updated microcode.
 
    On Intel Skylake-era systems the mitigation covers most, but not all,
    cases.  See :ref:`[3] <spec_ref3>` for more details.
@@ -609,8 +609,8 @@ kernel command line.
 			Selecting 'on' will, and 'auto' may, choose a
 			mitigation method at run time according to the
 			CPU, the available microcode, the setting of the
-			CONFIG_RETPOLINE configuration option, and the
-			compiler with which the kernel was built.
+			CONFIG_MITIGATION_RETPOLINE configuration option,
+			and the compiler with which the kernel was built.
 			Selecting 'on' will also enable the mitigation
 			against user space to user space task attacks.


@@ -6036,8 +6036,8 @@
 			Selecting 'on' will, and 'auto' may, choose a
 			mitigation method at run time according to the
 			CPU, the available microcode, the setting of the
-			CONFIG_RETPOLINE configuration option, and the
-			compiler with which the kernel was built.
+			CONFIG_MITIGATION_RETPOLINE configuration option,
+			and the compiler with which the kernel was built.
 			Selecting 'on' will also enable the mitigation
 			against user space to user space task attacks.
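
The two documentation hunks above only rename the build-time option that the runtime selection logic consults; the spectre_v2 mitigation that was actually chosen is still reported through sysfs. A minimal reader, assuming a kernel that exposes the standard vulnerabilities directory:

#include <stdio.h>

int main(void)
{
	/* Standard sysfs path on kernels with hardware-vulnerability reporting. */
	FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/spectre_v2", "r");
	char line[256];

	if (!f) {
		perror("spectre_v2");
		return 1;
	}
	if (fgets(line, sizeof(line), f))
		fputs(line, stdout);	/* e.g. "Mitigation: Retpolines; IBPB: ..." */
	fclose(f);
	return 0;
}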


@@ -26,9 +26,9 @@ comments in pti.c).
 This approach helps to ensure that side-channel attacks leveraging
 the paging structures do not function when PTI is enabled.  It can be
-enabled by setting CONFIG_PAGE_TABLE_ISOLATION=y at compile time.
-Once enabled at compile-time, it can be disabled at boot with the
-'nopti' or 'pti=' kernel parameters (see kernel-parameters.txt).
+enabled by setting CONFIG_MITIGATION_PAGE_TABLE_ISOLATION=y at compile
+time.  Once enabled at compile-time, it can be disabled at boot with
+the 'nopti' or 'pti=' kernel parameters (see kernel-parameters.txt).
 
 Page Table Management
 =====================


@ -147,6 +147,7 @@ config X86
select EDAC_ATOMIC_SCRUB select EDAC_ATOMIC_SCRUB
select EDAC_SUPPORT select EDAC_SUPPORT
select GENERIC_CLOCKEVENTS_BROADCAST if X86_64 || (X86_32 && X86_LOCAL_APIC) select GENERIC_CLOCKEVENTS_BROADCAST if X86_64 || (X86_32 && X86_LOCAL_APIC)
select GENERIC_CLOCKEVENTS_BROADCAST_IDLE if GENERIC_CLOCKEVENTS_BROADCAST
select GENERIC_CLOCKEVENTS_MIN_ADJUST select GENERIC_CLOCKEVENTS_MIN_ADJUST
select GENERIC_CMOS_UPDATE select GENERIC_CMOS_UPDATE
select GENERIC_CPU_AUTOPROBE select GENERIC_CPU_AUTOPROBE
@ -2430,6 +2431,18 @@ source "kernel/livepatch/Kconfig"
endmenu endmenu
config CC_HAS_NAMED_AS
def_bool CC_IS_GCC && GCC_VERSION >= 120100
config USE_X86_SEG_SUPPORT
def_bool y
depends on CC_HAS_NAMED_AS
#
# -fsanitize=kernel-address (KASAN) is at the moment incompatible
# with named address spaces - see GCC PR sanitizer/111736.
#
depends on !KASAN
config CC_HAS_SLS config CC_HAS_SLS
def_bool $(cc-option,-mharden-sls=all) def_bool $(cc-option,-mharden-sls=all)
@ -2461,12 +2474,12 @@ config CALL_PADDING
config FINEIBT config FINEIBT
def_bool y def_bool y
depends on X86_KERNEL_IBT && CFI_CLANG && RETPOLINE depends on X86_KERNEL_IBT && CFI_CLANG && MITIGATION_RETPOLINE
select CALL_PADDING select CALL_PADDING
config HAVE_CALL_THUNKS config HAVE_CALL_THUNKS
def_bool y def_bool y
depends on CC_HAS_ENTRY_PADDING && RETHUNK && OBJTOOL depends on CC_HAS_ENTRY_PADDING && MITIGATION_RETHUNK && OBJTOOL
config CALL_THUNKS config CALL_THUNKS
def_bool n def_bool n
@ -2488,7 +2501,7 @@ menuconfig SPECULATION_MITIGATIONS
if SPECULATION_MITIGATIONS if SPECULATION_MITIGATIONS
config PAGE_TABLE_ISOLATION config MITIGATION_PAGE_TABLE_ISOLATION
bool "Remove the kernel mapping in user mode" bool "Remove the kernel mapping in user mode"
default y default y
depends on (X86_64 || X86_PAE) depends on (X86_64 || X86_PAE)
@ -2499,7 +2512,7 @@ config PAGE_TABLE_ISOLATION
See Documentation/arch/x86/pti.rst for more details. See Documentation/arch/x86/pti.rst for more details.
config RETPOLINE config MITIGATION_RETPOLINE
bool "Avoid speculative indirect branches in kernel" bool "Avoid speculative indirect branches in kernel"
select OBJTOOL if HAVE_OBJTOOL select OBJTOOL if HAVE_OBJTOOL
default y default y
@ -2509,9 +2522,9 @@ config RETPOLINE
branches. Requires a compiler with -mindirect-branch=thunk-extern branches. Requires a compiler with -mindirect-branch=thunk-extern
support for full protection. The kernel may run slower. support for full protection. The kernel may run slower.
config RETHUNK config MITIGATION_RETHUNK
bool "Enable return-thunks" bool "Enable return-thunks"
depends on RETPOLINE && CC_HAS_RETURN_THUNK depends on MITIGATION_RETPOLINE && CC_HAS_RETURN_THUNK
select OBJTOOL if HAVE_OBJTOOL select OBJTOOL if HAVE_OBJTOOL
default y if X86_64 default y if X86_64
help help
@ -2520,14 +2533,14 @@ config RETHUNK
Requires a compiler with -mfunction-return=thunk-extern Requires a compiler with -mfunction-return=thunk-extern
support for full protection. The kernel may run slower. support for full protection. The kernel may run slower.
config CPU_UNRET_ENTRY config MITIGATION_UNRET_ENTRY
bool "Enable UNRET on kernel entry" bool "Enable UNRET on kernel entry"
depends on CPU_SUP_AMD && RETHUNK && X86_64 depends on CPU_SUP_AMD && MITIGATION_RETHUNK && X86_64
default y default y
help help
Compile the kernel with support for the retbleed=unret mitigation. Compile the kernel with support for the retbleed=unret mitigation.
config CALL_DEPTH_TRACKING config MITIGATION_CALL_DEPTH_TRACKING
bool "Mitigate RSB underflow with call depth tracking" bool "Mitigate RSB underflow with call depth tracking"
depends on CPU_SUP_INTEL && HAVE_CALL_THUNKS depends on CPU_SUP_INTEL && HAVE_CALL_THUNKS
select HAVE_DYNAMIC_FTRACE_NO_PATCHABLE select HAVE_DYNAMIC_FTRACE_NO_PATCHABLE
@ -2547,7 +2560,7 @@ config CALL_DEPTH_TRACKING
config CALL_THUNKS_DEBUG config CALL_THUNKS_DEBUG
bool "Enable call thunks and call depth tracking debugging" bool "Enable call thunks and call depth tracking debugging"
depends on CALL_DEPTH_TRACKING depends on MITIGATION_CALL_DEPTH_TRACKING
select FUNCTION_ALIGNMENT_32B select FUNCTION_ALIGNMENT_32B
default n default n
help help
@ -2558,14 +2571,14 @@ config CALL_THUNKS_DEBUG
Only enable this when you are debugging call thunks as this Only enable this when you are debugging call thunks as this
creates a noticeable runtime overhead. If unsure say N. creates a noticeable runtime overhead. If unsure say N.
config CPU_IBPB_ENTRY config MITIGATION_IBPB_ENTRY
bool "Enable IBPB on kernel entry" bool "Enable IBPB on kernel entry"
depends on CPU_SUP_AMD && X86_64 depends on CPU_SUP_AMD && X86_64
default y default y
help help
Compile the kernel with support for the retbleed=ibpb mitigation. Compile the kernel with support for the retbleed=ibpb mitigation.
config CPU_IBRS_ENTRY config MITIGATION_IBRS_ENTRY
bool "Enable IBRS on kernel entry" bool "Enable IBRS on kernel entry"
depends on CPU_SUP_INTEL && X86_64 depends on CPU_SUP_INTEL && X86_64
default y default y
@ -2574,14 +2587,14 @@ config CPU_IBRS_ENTRY
This mitigates both spectre_v2 and retbleed at great cost to This mitigates both spectre_v2 and retbleed at great cost to
performance. performance.
config CPU_SRSO config MITIGATION_SRSO
bool "Mitigate speculative RAS overflow on AMD" bool "Mitigate speculative RAS overflow on AMD"
depends on CPU_SUP_AMD && X86_64 && RETHUNK depends on CPU_SUP_AMD && X86_64 && MITIGATION_RETHUNK
default y default y
help help
Enable the SRSO mitigation needed on AMD Zen1-4 machines. Enable the SRSO mitigation needed on AMD Zen1-4 machines.
config SLS config MITIGATION_SLS
bool "Mitigate Straight-Line-Speculation" bool "Mitigate Straight-Line-Speculation"
depends on CC_HAS_SLS && X86_64 depends on CC_HAS_SLS && X86_64
select OBJTOOL if HAVE_OBJTOOL select OBJTOOL if HAVE_OBJTOOL
@ -2591,7 +2604,7 @@ config SLS
against straight line speculation. The kernel image might be slightly against straight line speculation. The kernel image might be slightly
larger. larger.
config GDS_FORCE_MITIGATION config MITIGATION_GDS_FORCE
bool "Force GDS Mitigation" bool "Force GDS Mitigation"
depends on CPU_SUP_INTEL depends on CPU_SUP_INTEL
default n default n
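
The new CC_HAS_NAMED_AS and USE_X86_SEG_SUPPORT options above gate the percpu conversion to GCC's x86 named address spaces. A minimal user-space sketch of the underlying compiler feature, assuming x86-64 Linux and a compiler that defines __SEG_GS (the kernel requires GCC >= 12.1); the structure and the GS_FIELD() helper are illustrative stand-ins, not kernel code:

#ifndef __SEG_GS
#error "This sketch needs a compiler with x86 named address space support"
#endif

#include <asm/prctl.h>		/* ARCH_SET_GS */
#include <stddef.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

struct pcpu_demo {
	long	counter;
	long	cpu_number;
};

static struct pcpu_demo demo_area = { .counter = 41, .cpu_number = 7 };

/*
 * %gs-relative view of the same layout: dereferences compile to plain
 * "%gs:"-prefixed memory accesses, which is the kind of code the
 * converted percpu accessors now let the compiler emit directly.
 */
#define GS_FIELD(type, member) \
	(*(volatile __seg_gs type *)offsetof(struct pcpu_demo, member))

int main(void)
{
	/* Point the GS segment base at our fake per-CPU area. */
	if (syscall(SYS_arch_prctl, ARCH_SET_GS, &demo_area))
		return 1;

	GS_FIELD(long, counter)++;	/* a single %gs-relative increment */
	printf("counter=%ld cpu=%ld\n",
	       GS_FIELD(long, counter), GS_FIELD(long, cpu_number));
	return 0;
}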


@@ -22,7 +22,7 @@ RETPOLINE_VDSO_CFLAGS := -mretpoline
 endif
 RETPOLINE_CFLAGS	+= $(call cc-option,-mindirect-branch-cs-prefix)
 
-ifdef CONFIG_RETHUNK
+ifdef CONFIG_MITIGATION_RETHUNK
 RETHUNK_CFLAGS		:= -mfunction-return=thunk-extern
 RETPOLINE_CFLAGS	+= $(RETHUNK_CFLAGS)
 endif
@@ -195,7 +195,7 @@ KBUILD_CFLAGS += -Wno-sign-compare
 KBUILD_CFLAGS += -fno-asynchronous-unwind-tables
 
 # Avoid indirect branches in kernel to deal with Spectre
-ifdef CONFIG_RETPOLINE
+ifdef CONFIG_MITIGATION_RETPOLINE
 KBUILD_CFLAGS += $(RETPOLINE_CFLAGS)
 # Additionally, avoid generating expensive indirect jumps which
 # are subject to retpolines for small number of switch cases.
@@ -208,7 +208,7 @@ ifdef CONFIG_RETPOLINE
 endif
 endif
 
-ifdef CONFIG_SLS
+ifdef CONFIG_MITIGATION_SLS
 KBUILD_CFLAGS += -mharden-sls=all
 endif
@@ -299,12 +299,11 @@ install:
 vdso-install-$(CONFIG_X86_64)		+= arch/x86/entry/vdso/vdso64.so.dbg
 vdso-install-$(CONFIG_X86_X32_ABI)	+= arch/x86/entry/vdso/vdsox32.so.dbg
-vdso-install-$(CONFIG_X86_32)		+= arch/x86/entry/vdso/vdso32.so.dbg
-vdso-install-$(CONFIG_IA32_EMULATION)	+= arch/x86/entry/vdso/vdso32.so.dbg
+vdso-install-$(CONFIG_COMPAT_32)	+= arch/x86/entry/vdso/vdso32.so.dbg
 
 archprepare: checkbin
 checkbin:
-ifdef CONFIG_RETPOLINE
+ifdef CONFIG_MITIGATION_RETPOLINE
 ifeq ($(RETPOLINE_CFLAGS),)
 	@echo "You are building kernel with non-retpoline compiler." >&2
 	@echo "Please update your compiler." >&2


@@ -8,8 +8,8 @@
  * Copyright (C) 2016  Kees Cook
  */
 
-/* No PAGE_TABLE_ISOLATION support needed either: */
-#undef CONFIG_PAGE_TABLE_ISOLATION
+/* No MITIGATION_PAGE_TABLE_ISOLATION support needed either: */
+#undef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
 
 #include "error.h"
 #include "misc.h"


@@ -42,7 +42,7 @@ CONFIG_EFI_STUB=y
 CONFIG_HZ_1000=y
 CONFIG_KEXEC=y
 CONFIG_CRASH_DUMP=y
-# CONFIG_RETHUNK is not set
+# CONFIG_MITIGATION_RETHUNK is not set
 CONFIG_HIBERNATION=y
 CONFIG_PM_DEBUG=y
 CONFIG_PM_TRACE_RTC=y


@ -147,10 +147,10 @@ For 32-bit we have the following conventions - kernel is built with
.endif .endif
.endm .endm
#ifdef CONFIG_PAGE_TABLE_ISOLATION #ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
/* /*
* PAGE_TABLE_ISOLATION PGDs are 8k. Flip bit 12 to switch between the two * MITIGATION_PAGE_TABLE_ISOLATION PGDs are 8k. Flip bit 12 to switch between the two
* halves: * halves:
*/ */
#define PTI_USER_PGTABLE_BIT PAGE_SHIFT #define PTI_USER_PGTABLE_BIT PAGE_SHIFT
@ -165,7 +165,7 @@ For 32-bit we have the following conventions - kernel is built with
.macro ADJUST_KERNEL_CR3 reg:req .macro ADJUST_KERNEL_CR3 reg:req
ALTERNATIVE "", "SET_NOFLUSH_BIT \reg", X86_FEATURE_PCID ALTERNATIVE "", "SET_NOFLUSH_BIT \reg", X86_FEATURE_PCID
/* Clear PCID and "PAGE_TABLE_ISOLATION bit", point CR3 at kernel pagetables: */ /* Clear PCID and "MITIGATION_PAGE_TABLE_ISOLATION bit", point CR3 at kernel pagetables: */
andq $(~PTI_USER_PGTABLE_AND_PCID_MASK), \reg andq $(~PTI_USER_PGTABLE_AND_PCID_MASK), \reg
.endm .endm
@ -178,7 +178,7 @@ For 32-bit we have the following conventions - kernel is built with
.endm .endm
#define THIS_CPU_user_pcid_flush_mask \ #define THIS_CPU_user_pcid_flush_mask \
PER_CPU_VAR(cpu_tlbstate) + TLB_STATE_user_pcid_flush_mask PER_CPU_VAR(cpu_tlbstate + TLB_STATE_user_pcid_flush_mask)
.macro SWITCH_TO_USER_CR3 scratch_reg:req scratch_reg2:req .macro SWITCH_TO_USER_CR3 scratch_reg:req scratch_reg2:req
mov %cr3, \scratch_reg mov %cr3, \scratch_reg
@ -274,7 +274,7 @@ For 32-bit we have the following conventions - kernel is built with
.Lend_\@: .Lend_\@:
.endm .endm
#else /* CONFIG_PAGE_TABLE_ISOLATION=n: */ #else /* CONFIG_MITIGATION_PAGE_TABLE_ISOLATION=n: */
.macro SWITCH_TO_KERNEL_CR3 scratch_reg:req .macro SWITCH_TO_KERNEL_CR3 scratch_reg:req
.endm .endm
@ -302,7 +302,7 @@ For 32-bit we have the following conventions - kernel is built with
* Assumes x86_spec_ctrl_{base,current} to have SPEC_CTRL_IBRS set. * Assumes x86_spec_ctrl_{base,current} to have SPEC_CTRL_IBRS set.
*/ */
.macro IBRS_ENTER save_reg .macro IBRS_ENTER save_reg
#ifdef CONFIG_CPU_IBRS_ENTRY #ifdef CONFIG_MITIGATION_IBRS_ENTRY
ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_KERNEL_IBRS ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_KERNEL_IBRS
movl $MSR_IA32_SPEC_CTRL, %ecx movl $MSR_IA32_SPEC_CTRL, %ecx
@ -331,7 +331,7 @@ For 32-bit we have the following conventions - kernel is built with
* regs. Must be called after the last RET. * regs. Must be called after the last RET.
*/ */
.macro IBRS_EXIT save_reg .macro IBRS_EXIT save_reg
#ifdef CONFIG_CPU_IBRS_ENTRY #ifdef CONFIG_MITIGATION_IBRS_ENTRY
ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_KERNEL_IBRS ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_KERNEL_IBRS
movl $MSR_IA32_SPEC_CTRL, %ecx movl $MSR_IA32_SPEC_CTRL, %ecx
@ -425,3 +425,63 @@ For 32-bit we have the following conventions - kernel is built with
.endm .endm
#endif /* CONFIG_SMP */ #endif /* CONFIG_SMP */
#ifdef CONFIG_X86_64
/* rdi: arg1 ... normal C conventions. rax is saved/restored. */
.macro THUNK name, func
SYM_FUNC_START(\name)
pushq %rbp
movq %rsp, %rbp
pushq %rdi
pushq %rsi
pushq %rdx
pushq %rcx
pushq %rax
pushq %r8
pushq %r9
pushq %r10
pushq %r11
call \func
popq %r11
popq %r10
popq %r9
popq %r8
popq %rax
popq %rcx
popq %rdx
popq %rsi
popq %rdi
popq %rbp
RET
SYM_FUNC_END(\name)
_ASM_NOKPROBE(\name)
.endm
#else /* CONFIG_X86_32 */
/* put return address in eax (arg1) */
.macro THUNK name, func, put_ret_addr_in_eax=0
SYM_CODE_START_NOALIGN(\name)
pushl %eax
pushl %ecx
pushl %edx
.if \put_ret_addr_in_eax
/* Place EIP in the arg1 */
movl 3*4(%esp), %eax
.endif
call \func
popl %edx
popl %ecx
popl %eax
RET
_ASM_NOKPROBE(\name)
SYM_CODE_END(\name)
.endm
#endif


@@ -10,6 +10,8 @@
 #include <asm/segment.h>
 #include <asm/cache.h>
 
+#include "calling.h"
+
 .pushsection .noinstr.text, "ax"
 
 SYM_FUNC_START(entry_ibpb)
@@ -43,3 +45,4 @@ EXPORT_SYMBOL_GPL(mds_verw_sel);
 .popsection
 
+THUNK warn_thunk_thunk, __warn_thunk


@@ -305,7 +305,7 @@
 .macro CHECK_AND_APPLY_ESPFIX
 #ifdef CONFIG_X86_ESPFIX32
 #define GDT_ESPFIX_OFFSET (GDT_ENTRY_ESPFIX_SS * 8)
-#define GDT_ESPFIX_SS PER_CPU_VAR(gdt_page) + GDT_ESPFIX_OFFSET
+#define GDT_ESPFIX_SS PER_CPU_VAR(gdt_page + GDT_ESPFIX_OFFSET)
 
 	ALTERNATIVE	"jmp .Lend_\@", "", X86_BUG_ESPFIX


@@ -191,7 +191,7 @@ SYM_FUNC_START(__switch_to_asm)
 
 #ifdef CONFIG_STACKPROTECTOR
 	movq	TASK_stack_canary(%rsi), %rbx
-	movq	%rbx, PER_CPU_VAR(fixed_percpu_data) + FIXED_stack_canary
+	movq	%rbx, PER_CPU_VAR(fixed_percpu_data + FIXED_stack_canary)
 #endif
 
 	/*
@@ -561,7 +561,7 @@ SYM_INNER_LABEL(swapgs_restore_regs_and_return_to_usermode, SYM_L_GLOBAL)
 #ifdef CONFIG_XEN_PV
 	ALTERNATIVE "", "jmp xenpv_restore_regs_and_return_to_usermode", X86_FEATURE_XENPV
 #endif
 
-#ifdef CONFIG_PAGE_TABLE_ISOLATION
+#ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
 	ALTERNATIVE "", "jmp .Lpti_restore_regs_and_return_to_usermode", X86_FEATURE_PTI
 #endif
@@ -578,7 +578,7 @@ SYM_INNER_LABEL(swapgs_restore_regs_and_return_to_usermode, SYM_L_GLOBAL)
 	jnz	.Lnative_iret
 	ud2
 
-#ifdef CONFIG_PAGE_TABLE_ISOLATION
+#ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
 .Lpti_restore_regs_and_return_to_usermode:
 	POP_REGS pop_rdi=0
@@ -1098,7 +1098,7 @@ SYM_CODE_END(error_return)
  *
  * Registers:
  *	%r14: Used to save/restore the CR3 of the interrupted context
- *	      when PAGE_TABLE_ISOLATION is in use.  Do not clobber.
+ *	      when MITIGATION_PAGE_TABLE_ISOLATION is in use.  Do not clobber.
  */
 SYM_CODE_START(asm_exc_nmi)
 	UNWIND_HINT_IRET_ENTRY


@ -4,33 +4,15 @@
* Copyright 2008 by Steven Rostedt, Red Hat, Inc * Copyright 2008 by Steven Rostedt, Red Hat, Inc
* (inspired by Andi Kleen's thunk_64.S) * (inspired by Andi Kleen's thunk_64.S)
*/ */
#include <linux/export.h>
#include <linux/linkage.h>
#include <asm/asm.h>
/* put return address in eax (arg1) */ #include <linux/export.h>
.macro THUNK name, func, put_ret_addr_in_eax=0 #include <linux/linkage.h>
SYM_CODE_START_NOALIGN(\name) #include <asm/asm.h>
pushl %eax
pushl %ecx
pushl %edx
.if \put_ret_addr_in_eax #include "calling.h"
/* Place EIP in the arg1 */
movl 3*4(%esp), %eax
.endif
call \func THUNK preempt_schedule_thunk, preempt_schedule
popl %edx THUNK preempt_schedule_notrace_thunk, preempt_schedule_notrace
popl %ecx EXPORT_SYMBOL(preempt_schedule_thunk)
popl %eax EXPORT_SYMBOL(preempt_schedule_notrace_thunk)
RET
_ASM_NOKPROBE(\name)
SYM_CODE_END(\name)
.endm
THUNK preempt_schedule_thunk, preempt_schedule
THUNK preempt_schedule_notrace_thunk, preempt_schedule_notrace
EXPORT_SYMBOL(preempt_schedule_thunk)
EXPORT_SYMBOL(preempt_schedule_notrace_thunk)


@ -9,39 +9,6 @@
#include "calling.h" #include "calling.h"
#include <asm/asm.h> #include <asm/asm.h>
/* rdi: arg1 ... normal C conventions. rax is saved/restored. */
.macro THUNK name, func
SYM_FUNC_START(\name)
pushq %rbp
movq %rsp, %rbp
pushq %rdi
pushq %rsi
pushq %rdx
pushq %rcx
pushq %rax
pushq %r8
pushq %r9
pushq %r10
pushq %r11
call \func
popq %r11
popq %r10
popq %r9
popq %r8
popq %rax
popq %rcx
popq %rdx
popq %rsi
popq %rdi
popq %rbp
RET
SYM_FUNC_END(\name)
_ASM_NOKPROBE(\name)
.endm
THUNK preempt_schedule_thunk, preempt_schedule THUNK preempt_schedule_thunk, preempt_schedule
THUNK preempt_schedule_notrace_thunk, preempt_schedule_notrace THUNK preempt_schedule_notrace_thunk, preempt_schedule_notrace
EXPORT_SYMBOL(preempt_schedule_thunk) EXPORT_SYMBOL(preempt_schedule_thunk)


@ -3,7 +3,7 @@
# Building vDSO images for x86. # Building vDSO images for x86.
# #
# Include the generic Makefile to check the built vdso. # Include the generic Makefile to check the built vDSO:
include $(srctree)/lib/vdso/Makefile include $(srctree)/lib/vdso/Makefile
# Sanitizer runtimes are unavailable and cannot be linked here. # Sanitizer runtimes are unavailable and cannot be linked here.
@ -18,48 +18,39 @@ OBJECT_FILES_NON_STANDARD := y
# Prevents link failures: __sanitizer_cov_trace_pc() is not linked in. # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
KCOV_INSTRUMENT := n KCOV_INSTRUMENT := n
VDSO64-$(CONFIG_X86_64) := y # Files to link into the vDSO:
VDSOX32-$(CONFIG_X86_X32_ABI) := y
VDSO32-$(CONFIG_X86_32) := y
VDSO32-$(CONFIG_IA32_EMULATION) := y
# files to link into the vdso
vobjs-y := vdso-note.o vclock_gettime.o vgetcpu.o vobjs-y := vdso-note.o vclock_gettime.o vgetcpu.o
vobjs32-y := vdso32/note.o vdso32/system_call.o vdso32/sigreturn.o vobjs32-y := vdso32/note.o vdso32/system_call.o vdso32/sigreturn.o
vobjs32-y += vdso32/vclock_gettime.o vdso32/vgetcpu.o vobjs32-y += vdso32/vclock_gettime.o vdso32/vgetcpu.o
vobjs-$(CONFIG_X86_SGX) += vsgx.o vobjs-$(CONFIG_X86_SGX) += vsgx.o
# files to link into kernel # Files to link into the kernel:
obj-y += vma.o extable.o obj-y += vma.o extable.o
KASAN_SANITIZE_vma.o := y KASAN_SANITIZE_vma.o := y
UBSAN_SANITIZE_vma.o := y UBSAN_SANITIZE_vma.o := y
KCSAN_SANITIZE_vma.o := y KCSAN_SANITIZE_vma.o := y
OBJECT_FILES_NON_STANDARD_vma.o := n
OBJECT_FILES_NON_STANDARD_extable.o := n
# vDSO images to build OBJECT_FILES_NON_STANDARD_vma.o := n
vdso_img-$(VDSO64-y) += 64 OBJECT_FILES_NON_STANDARD_extable.o := n
vdso_img-$(VDSOX32-y) += x32
vdso_img-$(VDSO32-y) += 32
obj-$(VDSO32-y) += vdso32-setup.o # vDSO images to build:
OBJECT_FILES_NON_STANDARD_vdso32-setup.o := n obj-$(CONFIG_X86_64) += vdso-image-64.o
obj-$(CONFIG_X86_X32_ABI) += vdso-image-x32.o
obj-$(CONFIG_COMPAT_32) += vdso-image-32.o vdso32-setup.o
vobjs := $(foreach F,$(vobjs-y),$(obj)/$F) OBJECT_FILES_NON_STANDARD_vdso-image-32.o := n
vobjs32 := $(foreach F,$(vobjs32-y),$(obj)/$F) OBJECT_FILES_NON_STANDARD_vdso-image-64.o := n
OBJECT_FILES_NON_STANDARD_vdso32-setup.o := n
vobjs := $(addprefix $(obj)/, $(vobjs-y))
vobjs32 := $(addprefix $(obj)/, $(vobjs32-y))
$(obj)/vdso.o: $(obj)/vdso.so $(obj)/vdso.o: $(obj)/vdso.so
targets += vdso.lds $(vobjs-y) targets += vdso.lds $(vobjs-y)
targets += vdso32/vdso32.lds $(vobjs32-y) targets += vdso32/vdso32.lds $(vobjs32-y)
# Build the vDSO image C files and link them in. targets += $(foreach x, 64 x32 32, vdso-image-$(x).c vdso$(x).so vdso$(x).so.dbg)
vdso_img_objs := $(vdso_img-y:%=vdso-image-%.o)
vdso_img_cfiles := $(vdso_img-y:%=vdso-image-%.c)
vdso_img_sodbg := $(vdso_img-y:%=vdso%.so.dbg)
obj-y += $(vdso_img_objs)
targets += $(vdso_img_cfiles)
targets += $(vdso_img_sodbg) $(vdso_img-y:%=vdso%.so)
CPPFLAGS_vdso.lds += -P -C CPPFLAGS_vdso.lds += -P -C
@ -87,7 +78,7 @@ CFL := $(PROFILING) -mcmodel=small -fPIC -O2 -fasynchronous-unwind-tables -m64 \
-fno-omit-frame-pointer -foptimize-sibling-calls \ -fno-omit-frame-pointer -foptimize-sibling-calls \
-DDISABLE_BRANCH_PROFILING -DBUILD_VDSO -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
ifdef CONFIG_RETPOLINE ifdef CONFIG_MITIGATION_RETPOLINE
ifneq ($(RETPOLINE_VDSO_CFLAGS),) ifneq ($(RETPOLINE_VDSO_CFLAGS),)
CFL += $(RETPOLINE_VDSO_CFLAGS) CFL += $(RETPOLINE_VDSO_CFLAGS)
endif endif
@ -123,7 +114,7 @@ VDSO_LDFLAGS_vdsox32.lds = -m elf32_x86_64 -soname linux-vdso.so.1 \
vobjx32s-y := $(vobjs-y:.o=-x32.o) vobjx32s-y := $(vobjs-y:.o=-x32.o)
# same thing, but in the output directory # same thing, but in the output directory
vobjx32s := $(foreach F,$(vobjx32s-y),$(obj)/$F) vobjx32s := $(addprefix $(obj)/, $(vobjx32s-y))
# Convert 64bit object file to x32 for x32 vDSO. # Convert 64bit object file to x32 for x32 vDSO.
quiet_cmd_x32 = X32 $@ quiet_cmd_x32 = X32 $@
@ -164,7 +155,7 @@ KBUILD_CFLAGS_32 += $(call cc-option, -foptimize-sibling-calls)
KBUILD_CFLAGS_32 += -fno-omit-frame-pointer KBUILD_CFLAGS_32 += -fno-omit-frame-pointer
KBUILD_CFLAGS_32 += -DDISABLE_BRANCH_PROFILING KBUILD_CFLAGS_32 += -DDISABLE_BRANCH_PROFILING
ifdef CONFIG_RETPOLINE ifdef CONFIG_MITIGATION_RETPOLINE
ifneq ($(RETPOLINE_VDSO_CFLAGS),) ifneq ($(RETPOLINE_VDSO_CFLAGS),)
KBUILD_CFLAGS_32 += $(RETPOLINE_VDSO_CFLAGS) KBUILD_CFLAGS_32 += $(RETPOLINE_VDSO_CFLAGS)
endif endif
@ -190,5 +181,3 @@ GCOV_PROFILE := n
quiet_cmd_vdso_and_check = VDSO $@ quiet_cmd_vdso_and_check = VDSO $@
cmd_vdso_and_check = $(cmd_vdso); $(cmd_vdso_check) cmd_vdso_and_check = $(cmd_vdso); $(cmd_vdso_check)
clean-files := vdso32.so vdso32.so.dbg vdso64* vdso-image-*.c vdsox32.so*


@ -274,59 +274,6 @@ static int map_vdso(const struct vdso_image *image, unsigned long addr)
return ret; return ret;
} }
#ifdef CONFIG_X86_64
/*
* Put the vdso above the (randomized) stack with another randomized
* offset. This way there is no hole in the middle of address space.
* To save memory make sure it is still in the same PTE as the stack
* top. This doesn't give that many random bits.
*
* Note that this algorithm is imperfect: the distribution of the vdso
* start address within a PMD is biased toward the end.
*
* Only used for the 64-bit and x32 vdsos.
*/
static unsigned long vdso_addr(unsigned long start, unsigned len)
{
unsigned long addr, end;
unsigned offset;
/*
* Round up the start address. It can start out unaligned as a result
* of stack start randomization.
*/
start = PAGE_ALIGN(start);
/* Round the lowest possible end address up to a PMD boundary. */
end = (start + len + PMD_SIZE - 1) & PMD_MASK;
if (end >= DEFAULT_MAP_WINDOW)
end = DEFAULT_MAP_WINDOW;
end -= len;
if (end > start) {
offset = get_random_u32_below(((end - start) >> PAGE_SHIFT) + 1);
addr = start + (offset << PAGE_SHIFT);
} else {
addr = start;
}
/*
* Forcibly align the final address in case we have a hardware
* issue that requires alignment for performance reasons.
*/
addr = align_vdso_addr(addr);
return addr;
}
static int map_vdso_randomized(const struct vdso_image *image)
{
unsigned long addr = vdso_addr(current->mm->start_stack, image->size-image->sym_vvar_start);
return map_vdso(image, addr);
}
#endif
int map_vdso_once(const struct vdso_image *image, unsigned long addr) int map_vdso_once(const struct vdso_image *image, unsigned long addr)
{ {
struct mm_struct *mm = current->mm; struct mm_struct *mm = current->mm;
@ -369,7 +316,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
if (!vdso64_enabled) if (!vdso64_enabled)
return 0; return 0;
return map_vdso_randomized(&vdso_image_64); return map_vdso(&vdso_image_64, 0);
} }
#ifdef CONFIG_COMPAT #ifdef CONFIG_COMPAT
@ -380,7 +327,7 @@ int compat_arch_setup_additional_pages(struct linux_binprm *bprm,
if (x32) { if (x32) {
if (!vdso64_enabled) if (!vdso64_enabled)
return 0; return 0;
return map_vdso_randomized(&vdso_image_x32); return map_vdso(&vdso_image_x32, 0);
} }
#endif #endif
#ifdef CONFIG_IA32_EMULATION #ifdef CONFIG_IA32_EMULATION
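
The hunk above drops the bespoke vdso_addr()/map_vdso_randomized() placement and maps the 64-bit and x32 vDSO via map_vdso(image, 0), i.e. it lets the kernel pick a regular mmap-region address. Where the vDSO ends up on a running system can be checked from user space through the auxiliary vector; a small probe, assuming glibc's getauxval():

#include <elf.h>
#include <stdio.h>
#include <sys/auxv.h>

int main(void)
{
	/* AT_SYSINFO_EHDR points at the ELF header of the mapped vDSO. */
	unsigned long vdso = getauxval(AT_SYSINFO_EHDR);

	if (!vdso) {
		fprintf(stderr, "no vDSO reported in the auxiliary vector\n");
		return 1;
	}
	/* Compare this address with the mmap and stack ranges in /proc/self/maps. */
	printf("vDSO ELF header at %#lx\n", vdso);
	return 0;
}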


@@ -18,7 +18,7 @@ struct pcpu_hot {
 			struct task_struct	*current_task;
 			int			preempt_count;
 			int			cpu_number;
-#ifdef CONFIG_CALL_DEPTH_TRACKING
+#ifdef CONFIG_MITIGATION_CALL_DEPTH_TRACKING
 			u64			call_depth;
 #endif
 			unsigned long		top_of_stack;
@@ -37,8 +37,15 @@ static_assert(sizeof(struct pcpu_hot) == 64);
 
 DECLARE_PER_CPU_ALIGNED(struct pcpu_hot, pcpu_hot);
 
+/* const-qualified alias to pcpu_hot, aliased by linker. */
+DECLARE_PER_CPU_ALIGNED(const struct pcpu_hot __percpu_seg_override,
+			const_pcpu_hot);
+
 static __always_inline struct task_struct *get_current(void)
 {
+	if (IS_ENABLED(CONFIG_USE_X86_SEG_SUPPORT))
+		return this_cpu_read_const(const_pcpu_hot.current_task);
+
 	return this_cpu_read_stable(pcpu_hot.current_task);
 }


@@ -44,32 +44,32 @@
 # define DISABLE_LA57	(1<<(X86_FEATURE_LA57 & 31))
 #endif
 
-#ifdef CONFIG_PAGE_TABLE_ISOLATION
+#ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
 # define DISABLE_PTI		0
 #else
 # define DISABLE_PTI		(1 << (X86_FEATURE_PTI & 31))
 #endif
 
-#ifdef CONFIG_RETPOLINE
+#ifdef CONFIG_MITIGATION_RETPOLINE
 # define DISABLE_RETPOLINE	0
 #else
 # define DISABLE_RETPOLINE	((1 << (X86_FEATURE_RETPOLINE & 31)) | \
 				 (1 << (X86_FEATURE_RETPOLINE_LFENCE & 31)))
 #endif
 
-#ifdef CONFIG_RETHUNK
+#ifdef CONFIG_MITIGATION_RETHUNK
 # define DISABLE_RETHUNK	0
 #else
 # define DISABLE_RETHUNK	(1 << (X86_FEATURE_RETHUNK & 31))
 #endif
 
-#ifdef CONFIG_CPU_UNRET_ENTRY
+#ifdef CONFIG_MITIGATION_UNRET_ENTRY
 # define DISABLE_UNRET		0
 #else
 # define DISABLE_UNRET		(1 << (X86_FEATURE_UNRET & 31))
 #endif
 
-#ifdef CONFIG_CALL_DEPTH_TRACKING
+#ifdef CONFIG_MITIGATION_CALL_DEPTH_TRACKING
 # define DISABLE_CALL_DEPTH_TRACKING	0
 #else
 # define DISABLE_CALL_DEPTH_TRACKING	(1 << (X86_FEATURE_CALL_DEPTH & 31))
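
The DISABLE_* constants above exist so that feature bits belonging to compiled-out mitigations are masked off at build time, letting the corresponding cpu_feature_enabled() checks fold to constant false. A stand-alone illustration of that pattern, using made-up feature bits rather than the kernel's macros:

#include <stdio.h>

#define FEATURE_RETPOLINE	(1u << 3)
#define FEATURE_RETHUNK		(1u << 7)

/* Pretend the rethunk mitigation was configured out: */
#define DISABLED_MASK		FEATURE_RETHUNK

static inline int feature_enabled(unsigned int cpu_mask, unsigned int bit)
{
	if (DISABLED_MASK & bit)	/* compile-time constant, folds to 0 */
		return 0;
	return !!(cpu_mask & bit);
}

int main(void)
{
	unsigned int cpu_has = FEATURE_RETPOLINE | FEATURE_RETHUNK;

	printf("retpoline: %d\n", feature_enabled(cpu_has, FEATURE_RETPOLINE));
	printf("rethunk:   %d (masked off at build time)\n",
	       feature_enabled(cpu_has, FEATURE_RETHUNK));
	return 0;
}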


@@ -392,5 +392,4 @@ struct va_alignment {
 } ____cacheline_aligned;
 
 extern struct va_alignment va_align;
-extern unsigned long align_vdso_addr(unsigned long);
 
 #endif /* _ASM_X86_ELF_H */


@@ -37,10 +37,12 @@ extern void fpu_flush_thread(void);
  * The FPU context is only stored/restored for a user task and
  * PF_KTHREAD is used to distinguish between kernel and user threads.
  */
-static inline void switch_fpu_prepare(struct fpu *old_fpu, int cpu)
+static inline void switch_fpu_prepare(struct task_struct *old, int cpu)
 {
 	if (cpu_feature_enabled(X86_FEATURE_FPU) &&
-	    !(current->flags & (PF_KTHREAD | PF_USER_WORKER))) {
+	    !(old->flags & (PF_KTHREAD | PF_USER_WORKER))) {
+		struct fpu *old_fpu = &old->thread.fpu;
+
 		save_fpregs_to_fpstate(old_fpu);
 
 		/*
 		 * The save operation preserved register state, so the
@@ -60,10 +62,10 @@ static inline void switch_fpu_prepare(struct fpu *old_fpu, int cpu)
  * Delay loading of the complete FPU state until the return to userland.
  * PKRU is handled separately.
  */
-static inline void switch_fpu_finish(void)
+static inline void switch_fpu_finish(struct task_struct *new)
 {
 	if (cpu_feature_enabled(X86_FEATURE_FPU))
-		set_thread_flag(TIF_NEED_FPU_LOAD);
+		set_tsk_thread_flag(new, TIF_NEED_FPU_LOAD);
 }
 
 #endif /* _ASM_X86_FPU_SCHED_H */
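
With both helpers above now taking a task_struct, the context-switch path can save the outgoing task's FPU registers and set TIF_NEED_FPU_LOAD on the incoming task directly, instead of always operating on 'current'. A compile-checkable mock of that calling pattern; the stub types and functions are stand-ins for illustration, not the kernel API:

#include <stdio.h>

struct task_struct {
	int		need_fpu_load;
	const char	*comm;
};

/* Stand-in for switch_fpu_prepare(): save the *outgoing* task's FPU state. */
static void switch_fpu_prepare_stub(struct task_struct *old, int cpu)
{
	printf("saving FPU registers of '%s' on CPU %d\n", old->comm, cpu);
}

/* Stand-in for switch_fpu_finish(): flag the *incoming* task, not current. */
static void switch_fpu_finish_stub(struct task_struct *new)
{
	new->need_fpu_load = 1;	/* kernel: set_tsk_thread_flag(new, TIF_NEED_FPU_LOAD) */
}

int main(void)
{
	struct task_struct prev = { .comm = "prev" };
	struct task_struct next = { .comm = "next" };

	switch_fpu_prepare_stub(&prev, 0);
	/* ...the real __switch_to() switches stacks and segments here... */
	switch_fpu_finish_stub(&next);

	printf("next.need_fpu_load = %d\n", next.need_fpu_load);
	return 0;
}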


@@ -40,27 +40,27 @@
 
 #ifdef __ASSEMBLY__
 
-#if defined(CONFIG_RETHUNK) && !defined(__DISABLE_EXPORTS) && !defined(BUILD_VDSO)
+#if defined(CONFIG_MITIGATION_RETHUNK) && !defined(__DISABLE_EXPORTS) && !defined(BUILD_VDSO)
 #define RET	jmp __x86_return_thunk
-#else /* CONFIG_RETPOLINE */
-#ifdef CONFIG_SLS
+#else /* CONFIG_MITIGATION_RETPOLINE */
+#ifdef CONFIG_MITIGATION_SLS
 #define RET	ret; int3
 #else
 #define RET	ret
 #endif
-#endif /* CONFIG_RETPOLINE */
+#endif /* CONFIG_MITIGATION_RETPOLINE */
 
 #else /* __ASSEMBLY__ */
 
-#if defined(CONFIG_RETHUNK) && !defined(__DISABLE_EXPORTS) && !defined(BUILD_VDSO)
+#if defined(CONFIG_MITIGATION_RETHUNK) && !defined(__DISABLE_EXPORTS) && !defined(BUILD_VDSO)
 #define ASM_RET	"jmp __x86_return_thunk\n\t"
-#else /* CONFIG_RETPOLINE */
-#ifdef CONFIG_SLS
+#else /* CONFIG_MITIGATION_RETPOLINE */
+#ifdef CONFIG_MITIGATION_SLS
 #define ASM_RET	"ret; int3\n\t"
 #else
 #define ASM_RET	"ret\n\t"
 #endif
-#endif /* CONFIG_RETPOLINE */
+#endif /* CONFIG_MITIGATION_RETPOLINE */
 
 #endif /* __ASSEMBLY__ */


@ -59,13 +59,13 @@
#ifdef CONFIG_CALL_THUNKS_DEBUG #ifdef CONFIG_CALL_THUNKS_DEBUG
# define CALL_THUNKS_DEBUG_INC_CALLS \ # define CALL_THUNKS_DEBUG_INC_CALLS \
incq %gs:__x86_call_count; incq PER_CPU_VAR(__x86_call_count);
# define CALL_THUNKS_DEBUG_INC_RETS \ # define CALL_THUNKS_DEBUG_INC_RETS \
incq %gs:__x86_ret_count; incq PER_CPU_VAR(__x86_ret_count);
# define CALL_THUNKS_DEBUG_INC_STUFFS \ # define CALL_THUNKS_DEBUG_INC_STUFFS \
incq %gs:__x86_stuffs_count; incq PER_CPU_VAR(__x86_stuffs_count);
# define CALL_THUNKS_DEBUG_INC_CTXSW \ # define CALL_THUNKS_DEBUG_INC_CTXSW \
incq %gs:__x86_ctxsw_count; incq PER_CPU_VAR(__x86_ctxsw_count);
#else #else
# define CALL_THUNKS_DEBUG_INC_CALLS # define CALL_THUNKS_DEBUG_INC_CALLS
# define CALL_THUNKS_DEBUG_INC_RETS # define CALL_THUNKS_DEBUG_INC_RETS
@ -73,16 +73,13 @@
# define CALL_THUNKS_DEBUG_INC_CTXSW # define CALL_THUNKS_DEBUG_INC_CTXSW
#endif #endif
#if defined(CONFIG_CALL_DEPTH_TRACKING) && !defined(COMPILE_OFFSETS) #if defined(CONFIG_MITIGATION_CALL_DEPTH_TRACKING) && !defined(COMPILE_OFFSETS)
#include <asm/asm-offsets.h> #include <asm/asm-offsets.h>
#define CREDIT_CALL_DEPTH \ #define CREDIT_CALL_DEPTH \
movq $-1, PER_CPU_VAR(pcpu_hot + X86_call_depth); movq $-1, PER_CPU_VAR(pcpu_hot + X86_call_depth);
#define ASM_CREDIT_CALL_DEPTH \
movq $-1, PER_CPU_VAR(pcpu_hot + X86_call_depth);
#define RESET_CALL_DEPTH \ #define RESET_CALL_DEPTH \
xor %eax, %eax; \ xor %eax, %eax; \
bts $63, %rax; \ bts $63, %rax; \
@ -95,20 +92,14 @@
CALL_THUNKS_DEBUG_INC_CALLS CALL_THUNKS_DEBUG_INC_CALLS
#define INCREMENT_CALL_DEPTH \ #define INCREMENT_CALL_DEPTH \
sarq $5, %gs:pcpu_hot + X86_call_depth; \
CALL_THUNKS_DEBUG_INC_CALLS
#define ASM_INCREMENT_CALL_DEPTH \
sarq $5, PER_CPU_VAR(pcpu_hot + X86_call_depth); \ sarq $5, PER_CPU_VAR(pcpu_hot + X86_call_depth); \
CALL_THUNKS_DEBUG_INC_CALLS CALL_THUNKS_DEBUG_INC_CALLS
#else #else
#define CREDIT_CALL_DEPTH #define CREDIT_CALL_DEPTH
#define ASM_CREDIT_CALL_DEPTH
#define RESET_CALL_DEPTH #define RESET_CALL_DEPTH
#define INCREMENT_CALL_DEPTH
#define ASM_INCREMENT_CALL_DEPTH
#define RESET_CALL_DEPTH_FROM_CALL #define RESET_CALL_DEPTH_FROM_CALL
#define INCREMENT_CALL_DEPTH
#endif #endif
/* /*
@ -158,7 +149,7 @@
jnz 771b; \ jnz 771b; \
/* barrier for jnz misprediction */ \ /* barrier for jnz misprediction */ \
lfence; \ lfence; \
ASM_CREDIT_CALL_DEPTH \ CREDIT_CALL_DEPTH \
CALL_THUNKS_DEBUG_INC_CTXSW CALL_THUNKS_DEBUG_INC_CTXSW
#else #else
/* /*
@ -212,7 +203,7 @@
*/ */
.macro VALIDATE_UNRET_END .macro VALIDATE_UNRET_END
#if defined(CONFIG_NOINSTR_VALIDATION) && \ #if defined(CONFIG_NOINSTR_VALIDATION) && \
(defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO)) (defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_MITIGATION_SRSO))
ANNOTATE_RETPOLINE_SAFE ANNOTATE_RETPOLINE_SAFE
nop nop
#endif #endif
@ -241,7 +232,7 @@
* instruction irrespective of kCFI. * instruction irrespective of kCFI.
*/ */
.macro JMP_NOSPEC reg:req .macro JMP_NOSPEC reg:req
#ifdef CONFIG_RETPOLINE #ifdef CONFIG_MITIGATION_RETPOLINE
__CS_PREFIX \reg __CS_PREFIX \reg
jmp __x86_indirect_thunk_\reg jmp __x86_indirect_thunk_\reg
#else #else
@ -251,7 +242,7 @@
.endm .endm
.macro CALL_NOSPEC reg:req .macro CALL_NOSPEC reg:req
#ifdef CONFIG_RETPOLINE #ifdef CONFIG_MITIGATION_RETPOLINE
__CS_PREFIX \reg __CS_PREFIX \reg
call __x86_indirect_thunk_\reg call __x86_indirect_thunk_\reg
#else #else
@ -271,7 +262,7 @@
.Lskip_rsb_\@: .Lskip_rsb_\@:
.endm .endm
#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO) #if defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_MITIGATION_SRSO)
#define CALL_UNTRAIN_RET "call entry_untrain_ret" #define CALL_UNTRAIN_RET "call entry_untrain_ret"
#else #else
#define CALL_UNTRAIN_RET "" #define CALL_UNTRAIN_RET ""
@ -289,7 +280,7 @@
* where we have a stack but before any RET instruction. * where we have a stack but before any RET instruction.
*/ */
.macro __UNTRAIN_RET ibpb_feature, call_depth_insns .macro __UNTRAIN_RET ibpb_feature, call_depth_insns
#if defined(CONFIG_RETHUNK) || defined(CONFIG_CPU_IBPB_ENTRY) #if defined(CONFIG_MITIGATION_RETHUNK) || defined(CONFIG_MITIGATION_IBPB_ENTRY)
VALIDATE_UNRET_END VALIDATE_UNRET_END
ALTERNATIVE_3 "", \ ALTERNATIVE_3 "", \
CALL_UNTRAIN_RET, X86_FEATURE_UNRET, \ CALL_UNTRAIN_RET, X86_FEATURE_UNRET, \
@ -309,9 +300,9 @@
.macro CALL_DEPTH_ACCOUNT .macro CALL_DEPTH_ACCOUNT
#ifdef CONFIG_CALL_DEPTH_TRACKING #ifdef CONFIG_MITIGATION_CALL_DEPTH_TRACKING
ALTERNATIVE "", \ ALTERNATIVE "", \
__stringify(ASM_INCREMENT_CALL_DEPTH), X86_FEATURE_CALL_DEPTH __stringify(INCREMENT_CALL_DEPTH), X86_FEATURE_CALL_DEPTH
#endif #endif
.endm .endm
@ -339,19 +330,19 @@ extern retpoline_thunk_t __x86_indirect_thunk_array[];
extern retpoline_thunk_t __x86_indirect_call_thunk_array[]; extern retpoline_thunk_t __x86_indirect_call_thunk_array[];
extern retpoline_thunk_t __x86_indirect_jump_thunk_array[]; extern retpoline_thunk_t __x86_indirect_jump_thunk_array[];
#ifdef CONFIG_RETHUNK #ifdef CONFIG_MITIGATION_RETHUNK
extern void __x86_return_thunk(void); extern void __x86_return_thunk(void);
#else #else
static inline void __x86_return_thunk(void) {} static inline void __x86_return_thunk(void) {}
#endif #endif
#ifdef CONFIG_CPU_UNRET_ENTRY #ifdef CONFIG_MITIGATION_UNRET_ENTRY
extern void retbleed_return_thunk(void); extern void retbleed_return_thunk(void);
#else #else
static inline void retbleed_return_thunk(void) {} static inline void retbleed_return_thunk(void) {}
#endif #endif
#ifdef CONFIG_CPU_SRSO #ifdef CONFIG_MITIGATION_SRSO
extern void srso_return_thunk(void); extern void srso_return_thunk(void);
extern void srso_alias_return_thunk(void); extern void srso_alias_return_thunk(void);
#else #else
@ -368,7 +359,9 @@ extern void entry_ibpb(void);
extern void (*x86_return_thunk)(void); extern void (*x86_return_thunk)(void);
#ifdef CONFIG_CALL_DEPTH_TRACKING extern void __warn_thunk(void);
#ifdef CONFIG_MITIGATION_CALL_DEPTH_TRACKING
extern void call_depth_return_thunk(void); extern void call_depth_return_thunk(void);
#define CALL_DEPTH_ACCOUNT \ #define CALL_DEPTH_ACCOUNT \
@ -382,14 +375,14 @@ DECLARE_PER_CPU(u64, __x86_ret_count);
DECLARE_PER_CPU(u64, __x86_stuffs_count); DECLARE_PER_CPU(u64, __x86_stuffs_count);
DECLARE_PER_CPU(u64, __x86_ctxsw_count); DECLARE_PER_CPU(u64, __x86_ctxsw_count);
#endif #endif
#else /* !CONFIG_CALL_DEPTH_TRACKING */ #else /* !CONFIG_MITIGATION_CALL_DEPTH_TRACKING */
static inline void call_depth_return_thunk(void) {} static inline void call_depth_return_thunk(void) {}
#define CALL_DEPTH_ACCOUNT "" #define CALL_DEPTH_ACCOUNT ""
#endif /* CONFIG_CALL_DEPTH_TRACKING */ #endif /* CONFIG_MITIGATION_CALL_DEPTH_TRACKING */
#ifdef CONFIG_RETPOLINE #ifdef CONFIG_MITIGATION_RETPOLINE
#define GEN(reg) \ #define GEN(reg) \
extern retpoline_thunk_t __x86_indirect_thunk_ ## reg; extern retpoline_thunk_t __x86_indirect_thunk_ ## reg;
@ -410,7 +403,7 @@ static inline void call_depth_return_thunk(void) {}
/* /*
* Inline asm uses the %V modifier which is only in newer GCC * Inline asm uses the %V modifier which is only in newer GCC
* which is ensured when CONFIG_RETPOLINE is defined. * which is ensured when CONFIG_MITIGATION_RETPOLINE is defined.
*/ */
# define CALL_NOSPEC \ # define CALL_NOSPEC \
ALTERNATIVE_2( \ ALTERNATIVE_2( \


@ -4,17 +4,21 @@
#ifdef CONFIG_X86_64 #ifdef CONFIG_X86_64
#define __percpu_seg gs #define __percpu_seg gs
#define __percpu_rel (%rip)
#else #else
#define __percpu_seg fs #define __percpu_seg fs
#define __percpu_rel
#endif #endif
#ifdef __ASSEMBLY__ #ifdef __ASSEMBLY__
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
#define PER_CPU_VAR(var) %__percpu_seg:var #define __percpu %__percpu_seg:
#else /* ! SMP */ #else
#define PER_CPU_VAR(var) var #define __percpu
#endif /* SMP */ #endif
#define PER_CPU_VAR(var) __percpu(var)__percpu_rel
#ifdef CONFIG_X86_64_SMP #ifdef CONFIG_X86_64_SMP
#define INIT_PER_CPU_VAR(var) init_per_cpu__##var #define INIT_PER_CPU_VAR(var) init_per_cpu__##var
@ -24,30 +28,84 @@
#else /* ...!ASSEMBLY */ #else /* ...!ASSEMBLY */
#include <linux/build_bug.h>
#include <linux/stringify.h> #include <linux/stringify.h>
#include <asm/asm.h> #include <asm/asm.h>
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
#ifdef CONFIG_CC_HAS_NAMED_AS
#ifdef __CHECKER__
#define __seg_gs __attribute__((address_space(__seg_gs)))
#define __seg_fs __attribute__((address_space(__seg_fs)))
#endif
#ifdef CONFIG_X86_64
#define __percpu_seg_override __seg_gs
#else
#define __percpu_seg_override __seg_fs
#endif
#define __percpu_prefix ""
#else /* CONFIG_CC_HAS_NAMED_AS */
#define __percpu_seg_override
#define __percpu_prefix "%%"__stringify(__percpu_seg)":" #define __percpu_prefix "%%"__stringify(__percpu_seg)":"
#endif /* CONFIG_CC_HAS_NAMED_AS */
#define __force_percpu_prefix "%%"__stringify(__percpu_seg)":"
#define __my_cpu_offset this_cpu_read(this_cpu_off) #define __my_cpu_offset this_cpu_read(this_cpu_off)
#ifdef CONFIG_USE_X86_SEG_SUPPORT
/*
* Efficient implementation for cases in which the compiler supports
* named address spaces. Allows the compiler to perform additional
* optimizations that can save more instructions.
*/
#define arch_raw_cpu_ptr(ptr) \
({ \
unsigned long tcp_ptr__; \
tcp_ptr__ = __raw_cpu_read(, this_cpu_off); \
\
tcp_ptr__ += (unsigned long)(ptr); \
(typeof(*(ptr)) __kernel __force *)tcp_ptr__; \
})
#else /* CONFIG_USE_X86_SEG_SUPPORT */
/* /*
* Compared to the generic __my_cpu_offset version, the following * Compared to the generic __my_cpu_offset version, the following
* saves one instruction and avoids clobbering a temp register. * saves one instruction and avoids clobbering a temp register.
*/ */
#define arch_raw_cpu_ptr(ptr) \ #define arch_raw_cpu_ptr(ptr) \
({ \ ({ \
unsigned long tcp_ptr__; \ unsigned long tcp_ptr__; \
asm ("add " __percpu_arg(1) ", %0" \ asm ("mov " __percpu_arg(1) ", %0" \
: "=r" (tcp_ptr__) \ : "=r" (tcp_ptr__) \
: "m" (this_cpu_off), "0" (ptr)); \ : "m" (__my_cpu_var(this_cpu_off))); \
(typeof(*(ptr)) __kernel __force *)tcp_ptr__; \ \
tcp_ptr__ += (unsigned long)(ptr); \
(typeof(*(ptr)) __kernel __force *)tcp_ptr__; \
}) })
#else #endif /* CONFIG_USE_X86_SEG_SUPPORT */
#define __percpu_prefix ""
#endif
#define PER_CPU_VAR(var) %__percpu_seg:(var)__percpu_rel
#else /* CONFIG_SMP */
#define __percpu_seg_override
#define __percpu_prefix ""
#define __force_percpu_prefix ""
#define PER_CPU_VAR(var) (var)__percpu_rel
#endif /* CONFIG_SMP */
#define __my_cpu_type(var) typeof(var) __percpu_seg_override
#define __my_cpu_ptr(ptr) (__my_cpu_type(*ptr) *)(uintptr_t)(ptr)
#define __my_cpu_var(var) (*__my_cpu_ptr(&var))
#define __percpu_arg(x) __percpu_prefix "%" #x #define __percpu_arg(x) __percpu_prefix "%" #x
#define __force_percpu_arg(x) __force_percpu_prefix "%" #x
/* /*
* Initialized pointers to per-cpu variables needed for the boot * Initialized pointers to per-cpu variables needed for the boot
@ -107,14 +165,14 @@ do { \
(void)pto_tmp__; \ (void)pto_tmp__; \
} \ } \
asm qual(__pcpu_op2_##size(op, "%[val]", __percpu_arg([var])) \ asm qual(__pcpu_op2_##size(op, "%[val]", __percpu_arg([var])) \
: [var] "+m" (_var) \ : [var] "+m" (__my_cpu_var(_var)) \
: [val] __pcpu_reg_imm_##size(pto_val__)); \ : [val] __pcpu_reg_imm_##size(pto_val__)); \
} while (0) } while (0)
#define percpu_unary_op(size, qual, op, _var) \ #define percpu_unary_op(size, qual, op, _var) \
({ \ ({ \
asm qual (__pcpu_op1_##size(op, __percpu_arg([var])) \ asm qual (__pcpu_op1_##size(op, __percpu_arg([var])) \
: [var] "+m" (_var)); \ : [var] "+m" (__my_cpu_var(_var))); \
}) })
/* /*
@ -144,16 +202,16 @@ do { \
__pcpu_type_##size pfo_val__; \ __pcpu_type_##size pfo_val__; \
asm qual (__pcpu_op2_##size(op, __percpu_arg([var]), "%[val]") \ asm qual (__pcpu_op2_##size(op, __percpu_arg([var]), "%[val]") \
: [val] __pcpu_reg_##size("=", pfo_val__) \ : [val] __pcpu_reg_##size("=", pfo_val__) \
: [var] "m" (_var)); \ : [var] "m" (__my_cpu_var(_var))); \
(typeof(_var))(unsigned long) pfo_val__; \ (typeof(_var))(unsigned long) pfo_val__; \
}) })
#define percpu_stable_op(size, op, _var) \ #define percpu_stable_op(size, op, _var) \
({ \ ({ \
__pcpu_type_##size pfo_val__; \ __pcpu_type_##size pfo_val__; \
asm(__pcpu_op2_##size(op, __percpu_arg(P[var]), "%[val]") \ asm(__pcpu_op2_##size(op, __force_percpu_arg(a[var]), "%[val]") \
: [val] __pcpu_reg_##size("=", pfo_val__) \ : [val] __pcpu_reg_##size("=", pfo_val__) \
: [var] "p" (&(_var))); \ : [var] "i" (&(_var))); \
(typeof(_var))(unsigned long) pfo_val__; \ (typeof(_var))(unsigned long) pfo_val__; \
}) })
@ -166,7 +224,7 @@ do { \
asm qual (__pcpu_op2_##size("xadd", "%[tmp]", \ asm qual (__pcpu_op2_##size("xadd", "%[tmp]", \
__percpu_arg([var])) \ __percpu_arg([var])) \
: [tmp] __pcpu_reg_##size("+", paro_tmp__), \ : [tmp] __pcpu_reg_##size("+", paro_tmp__), \
[var] "+m" (_var) \ [var] "+m" (__my_cpu_var(_var)) \
: : "memory"); \ : : "memory"); \
(typeof(_var))(unsigned long) (paro_tmp__ + _val); \ (typeof(_var))(unsigned long) (paro_tmp__ + _val); \
}) })
@ -187,7 +245,7 @@ do { \
__percpu_arg([var])) \ __percpu_arg([var])) \
"\n\tjnz 1b" \ "\n\tjnz 1b" \
: [oval] "=&a" (pxo_old__), \ : [oval] "=&a" (pxo_old__), \
[var] "+m" (_var) \ [var] "+m" (__my_cpu_var(_var)) \
: [nval] __pcpu_reg_##size(, pxo_new__) \ : [nval] __pcpu_reg_##size(, pxo_new__) \
: "memory"); \ : "memory"); \
(typeof(_var))(unsigned long) pxo_old__; \ (typeof(_var))(unsigned long) pxo_old__; \
@ -204,7 +262,7 @@ do { \
asm qual (__pcpu_op2_##size("cmpxchg", "%[nval]", \ asm qual (__pcpu_op2_##size("cmpxchg", "%[nval]", \
__percpu_arg([var])) \ __percpu_arg([var])) \
: [oval] "+a" (pco_old__), \ : [oval] "+a" (pco_old__), \
[var] "+m" (_var) \ [var] "+m" (__my_cpu_var(_var)) \
: [nval] __pcpu_reg_##size(, pco_new__) \ : [nval] __pcpu_reg_##size(, pco_new__) \
: "memory"); \ : "memory"); \
(typeof(_var))(unsigned long) pco_old__; \ (typeof(_var))(unsigned long) pco_old__; \
@ -221,7 +279,7 @@ do { \
CC_SET(z) \ CC_SET(z) \
: CC_OUT(z) (success), \ : CC_OUT(z) (success), \
[oval] "+a" (pco_old__), \ [oval] "+a" (pco_old__), \
[var] "+m" (_var) \ [var] "+m" (__my_cpu_var(_var)) \
: [nval] __pcpu_reg_##size(, pco_new__) \ : [nval] __pcpu_reg_##size(, pco_new__) \
: "memory"); \ : "memory"); \
if (unlikely(!success)) \ if (unlikely(!success)) \
@ -244,7 +302,7 @@ do { \
\ \
asm qual (ALTERNATIVE("call this_cpu_cmpxchg8b_emu", \ asm qual (ALTERNATIVE("call this_cpu_cmpxchg8b_emu", \
"cmpxchg8b " __percpu_arg([var]), X86_FEATURE_CX8) \ "cmpxchg8b " __percpu_arg([var]), X86_FEATURE_CX8) \
: [var] "+m" (_var), \ : [var] "+m" (__my_cpu_var(_var)), \
"+a" (old__.low), \ "+a" (old__.low), \
"+d" (old__.high) \ "+d" (old__.high) \
: "b" (new__.low), \ : "b" (new__.low), \
@ -276,7 +334,7 @@ do { \
"cmpxchg8b " __percpu_arg([var]), X86_FEATURE_CX8) \ "cmpxchg8b " __percpu_arg([var]), X86_FEATURE_CX8) \
CC_SET(z) \ CC_SET(z) \
: CC_OUT(z) (success), \ : CC_OUT(z) (success), \
[var] "+m" (_var), \ [var] "+m" (__my_cpu_var(_var)), \
"+a" (old__.low), \ "+a" (old__.low), \
"+d" (old__.high) \ "+d" (old__.high) \
: "b" (new__.low), \ : "b" (new__.low), \
@ -313,7 +371,7 @@ do { \
\ \
asm qual (ALTERNATIVE("call this_cpu_cmpxchg16b_emu", \ asm qual (ALTERNATIVE("call this_cpu_cmpxchg16b_emu", \
"cmpxchg16b " __percpu_arg([var]), X86_FEATURE_CX16) \ "cmpxchg16b " __percpu_arg([var]), X86_FEATURE_CX16) \
: [var] "+m" (_var), \ : [var] "+m" (__my_cpu_var(_var)), \
"+a" (old__.low), \ "+a" (old__.low), \
"+d" (old__.high) \ "+d" (old__.high) \
: "b" (new__.low), \ : "b" (new__.low), \
@ -345,7 +403,7 @@ do { \
"cmpxchg16b " __percpu_arg([var]), X86_FEATURE_CX16) \ "cmpxchg16b " __percpu_arg([var]), X86_FEATURE_CX16) \
CC_SET(z) \ CC_SET(z) \
: CC_OUT(z) (success), \ : CC_OUT(z) (success), \
[var] "+m" (_var), \ [var] "+m" (__my_cpu_var(_var)), \
"+a" (old__.low), \ "+a" (old__.low), \
"+d" (old__.high) \ "+d" (old__.high) \
: "b" (new__.low), \ : "b" (new__.low), \
@ -366,9 +424,9 @@ do { \
* accessed while this_cpu_read_stable() allows the value to be cached. * accessed while this_cpu_read_stable() allows the value to be cached.
* this_cpu_read_stable() is more efficient and can be used if its value * this_cpu_read_stable() is more efficient and can be used if its value
* is guaranteed to be valid across cpus. The current users include * is guaranteed to be valid across cpus. The current users include
* get_current() and get_thread_info() both of which are actually * pcpu_hot.current_task and pcpu_hot.top_of_stack, both of which are
* per-thread variables implemented as per-cpu variables and thus * actually per-thread variables implemented as per-CPU variables and
* stable for the duration of the respective task. * thus stable for the duration of the respective task.
*/ */
#define this_cpu_read_stable_1(pcp) percpu_stable_op(1, "mov", pcp) #define this_cpu_read_stable_1(pcp) percpu_stable_op(1, "mov", pcp)
#define this_cpu_read_stable_2(pcp) percpu_stable_op(2, "mov", pcp) #define this_cpu_read_stable_2(pcp) percpu_stable_op(2, "mov", pcp)
@ -376,13 +434,72 @@ do { \
#define this_cpu_read_stable_8(pcp) percpu_stable_op(8, "mov", pcp) #define this_cpu_read_stable_8(pcp) percpu_stable_op(8, "mov", pcp)
#define this_cpu_read_stable(pcp) __pcpu_size_call_return(this_cpu_read_stable_, pcp) #define this_cpu_read_stable(pcp) __pcpu_size_call_return(this_cpu_read_stable_, pcp)
#ifdef CONFIG_USE_X86_SEG_SUPPORT
#define __raw_cpu_read(qual, pcp) \
({ \
*(qual __my_cpu_type(pcp) *)__my_cpu_ptr(&(pcp)); \
})
#define __raw_cpu_write(qual, pcp, val) \
do { \
*(qual __my_cpu_type(pcp) *)__my_cpu_ptr(&(pcp)) = (val); \
} while (0)
#define raw_cpu_read_1(pcp) __raw_cpu_read(, pcp)
#define raw_cpu_read_2(pcp) __raw_cpu_read(, pcp)
#define raw_cpu_read_4(pcp) __raw_cpu_read(, pcp)
#define raw_cpu_write_1(pcp, val) __raw_cpu_write(, pcp, val)
#define raw_cpu_write_2(pcp, val) __raw_cpu_write(, pcp, val)
#define raw_cpu_write_4(pcp, val) __raw_cpu_write(, pcp, val)
#define this_cpu_read_1(pcp) __raw_cpu_read(volatile, pcp)
#define this_cpu_read_2(pcp) __raw_cpu_read(volatile, pcp)
#define this_cpu_read_4(pcp) __raw_cpu_read(volatile, pcp)
#define this_cpu_write_1(pcp, val) __raw_cpu_write(volatile, pcp, val)
#define this_cpu_write_2(pcp, val) __raw_cpu_write(volatile, pcp, val)
#define this_cpu_write_4(pcp, val) __raw_cpu_write(volatile, pcp, val)
#ifdef CONFIG_X86_64
#define raw_cpu_read_8(pcp) __raw_cpu_read(, pcp)
#define raw_cpu_write_8(pcp, val) __raw_cpu_write(, pcp, val)
#define this_cpu_read_8(pcp) __raw_cpu_read(volatile, pcp)
#define this_cpu_write_8(pcp, val) __raw_cpu_write(volatile, pcp, val)
#endif
#define this_cpu_read_const(pcp) __raw_cpu_read(, pcp)
#else /* CONFIG_USE_X86_SEG_SUPPORT */
#define raw_cpu_read_1(pcp) percpu_from_op(1, , "mov", pcp) #define raw_cpu_read_1(pcp) percpu_from_op(1, , "mov", pcp)
#define raw_cpu_read_2(pcp) percpu_from_op(2, , "mov", pcp) #define raw_cpu_read_2(pcp) percpu_from_op(2, , "mov", pcp)
#define raw_cpu_read_4(pcp) percpu_from_op(4, , "mov", pcp) #define raw_cpu_read_4(pcp) percpu_from_op(4, , "mov", pcp)
#define raw_cpu_write_1(pcp, val) percpu_to_op(1, , "mov", (pcp), val) #define raw_cpu_write_1(pcp, val) percpu_to_op(1, , "mov", (pcp), val)
#define raw_cpu_write_2(pcp, val) percpu_to_op(2, , "mov", (pcp), val) #define raw_cpu_write_2(pcp, val) percpu_to_op(2, , "mov", (pcp), val)
#define raw_cpu_write_4(pcp, val) percpu_to_op(4, , "mov", (pcp), val) #define raw_cpu_write_4(pcp, val) percpu_to_op(4, , "mov", (pcp), val)
#define this_cpu_read_1(pcp) percpu_from_op(1, volatile, "mov", pcp)
#define this_cpu_read_2(pcp) percpu_from_op(2, volatile, "mov", pcp)
#define this_cpu_read_4(pcp) percpu_from_op(4, volatile, "mov", pcp)
#define this_cpu_write_1(pcp, val) percpu_to_op(1, volatile, "mov", (pcp), val)
#define this_cpu_write_2(pcp, val) percpu_to_op(2, volatile, "mov", (pcp), val)
#define this_cpu_write_4(pcp, val) percpu_to_op(4, volatile, "mov", (pcp), val)
#ifdef CONFIG_X86_64
#define raw_cpu_read_8(pcp) percpu_from_op(8, , "mov", pcp)
#define raw_cpu_write_8(pcp, val) percpu_to_op(8, , "mov", (pcp), val)
#define this_cpu_read_8(pcp) percpu_from_op(8, volatile, "mov", pcp)
#define this_cpu_write_8(pcp, val) percpu_to_op(8, volatile, "mov", (pcp), val)
#endif
/*
* The generic per-cpu infrastructure is not suitable for
* reading const-qualified variables.
*/
#define this_cpu_read_const(pcp) ({ BUILD_BUG(); (typeof(pcp))0; })
#endif /* CONFIG_USE_X86_SEG_SUPPORT */
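
For reference, a minimal stand-alone sketch (not kernel code; the function name is made up) of the GCC named-address-space feature that CONFIG_USE_X86_SEG_SUPPORT builds on:

	/*
	 * Dereferencing a __seg_gs-qualified pointer compiles into a plain
	 * %gs-relative load/store, so no asm() wrapper is needed and the
	 * compiler can optimize such accesses like ordinary memory accesses.
	 */
	static inline unsigned long read_gs_ulong(unsigned long offset)
	{
		return *(volatile unsigned long __seg_gs *)offset;
	}
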
#define raw_cpu_add_1(pcp, val) percpu_add_op(1, , (pcp), val) #define raw_cpu_add_1(pcp, val) percpu_add_op(1, , (pcp), val)
#define raw_cpu_add_2(pcp, val) percpu_add_op(2, , (pcp), val) #define raw_cpu_add_2(pcp, val) percpu_add_op(2, , (pcp), val)
#define raw_cpu_add_4(pcp, val) percpu_add_op(4, , (pcp), val) #define raw_cpu_add_4(pcp, val) percpu_add_op(4, , (pcp), val)
@ -408,12 +525,6 @@ do { \
#define raw_cpu_xchg_2(pcp, val) raw_percpu_xchg_op(pcp, val) #define raw_cpu_xchg_2(pcp, val) raw_percpu_xchg_op(pcp, val)
#define raw_cpu_xchg_4(pcp, val) raw_percpu_xchg_op(pcp, val) #define raw_cpu_xchg_4(pcp, val) raw_percpu_xchg_op(pcp, val)
#define this_cpu_read_1(pcp) percpu_from_op(1, volatile, "mov", pcp)
#define this_cpu_read_2(pcp) percpu_from_op(2, volatile, "mov", pcp)
#define this_cpu_read_4(pcp) percpu_from_op(4, volatile, "mov", pcp)
#define this_cpu_write_1(pcp, val) percpu_to_op(1, volatile, "mov", (pcp), val)
#define this_cpu_write_2(pcp, val) percpu_to_op(2, volatile, "mov", (pcp), val)
#define this_cpu_write_4(pcp, val) percpu_to_op(4, volatile, "mov", (pcp), val)
#define this_cpu_add_1(pcp, val) percpu_add_op(1, volatile, (pcp), val) #define this_cpu_add_1(pcp, val) percpu_add_op(1, volatile, (pcp), val)
#define this_cpu_add_2(pcp, val) percpu_add_op(2, volatile, (pcp), val) #define this_cpu_add_2(pcp, val) percpu_add_op(2, volatile, (pcp), val)
#define this_cpu_add_4(pcp, val) percpu_add_op(4, volatile, (pcp), val) #define this_cpu_add_4(pcp, val) percpu_add_op(4, volatile, (pcp), val)
@ -452,8 +563,6 @@ do { \
* 32 bit must fall back to generic operations. * 32 bit must fall back to generic operations.
*/ */
#ifdef CONFIG_X86_64 #ifdef CONFIG_X86_64
#define raw_cpu_read_8(pcp) percpu_from_op(8, , "mov", pcp)
#define raw_cpu_write_8(pcp, val) percpu_to_op(8, , "mov", (pcp), val)
#define raw_cpu_add_8(pcp, val) percpu_add_op(8, , (pcp), val) #define raw_cpu_add_8(pcp, val) percpu_add_op(8, , (pcp), val)
#define raw_cpu_and_8(pcp, val) percpu_to_op(8, , "and", (pcp), val) #define raw_cpu_and_8(pcp, val) percpu_to_op(8, , "and", (pcp), val)
#define raw_cpu_or_8(pcp, val) percpu_to_op(8, , "or", (pcp), val) #define raw_cpu_or_8(pcp, val) percpu_to_op(8, , "or", (pcp), val)
@ -462,8 +571,6 @@ do { \
#define raw_cpu_cmpxchg_8(pcp, oval, nval) percpu_cmpxchg_op(8, , pcp, oval, nval) #define raw_cpu_cmpxchg_8(pcp, oval, nval) percpu_cmpxchg_op(8, , pcp, oval, nval)
#define raw_cpu_try_cmpxchg_8(pcp, ovalp, nval) percpu_try_cmpxchg_op(8, , pcp, ovalp, nval) #define raw_cpu_try_cmpxchg_8(pcp, ovalp, nval) percpu_try_cmpxchg_op(8, , pcp, ovalp, nval)
#define this_cpu_read_8(pcp) percpu_from_op(8, volatile, "mov", pcp)
#define this_cpu_write_8(pcp, val) percpu_to_op(8, volatile, "mov", (pcp), val)
#define this_cpu_add_8(pcp, val) percpu_add_op(8, volatile, (pcp), val) #define this_cpu_add_8(pcp, val) percpu_add_op(8, volatile, (pcp), val)
#define this_cpu_and_8(pcp, val) percpu_to_op(8, volatile, "and", (pcp), val) #define this_cpu_and_8(pcp, val) percpu_to_op(8, volatile, "and", (pcp), val)
#define this_cpu_or_8(pcp, val) percpu_to_op(8, volatile, "or", (pcp), val) #define this_cpu_or_8(pcp, val) percpu_to_op(8, volatile, "or", (pcp), val)
@ -494,7 +601,7 @@ static inline bool x86_this_cpu_variable_test_bit(int nr,
asm volatile("btl "__percpu_arg(2)",%1" asm volatile("btl "__percpu_arg(2)",%1"
CC_SET(c) CC_SET(c)
: CC_OUT(c) (oldbit) : CC_OUT(c) (oldbit)
: "m" (*(unsigned long __percpu *)addr), "Ir" (nr)); : "m" (*__my_cpu_ptr((unsigned long __percpu *)(addr))), "Ir" (nr));
return oldbit; return oldbit;
} }


@ -34,7 +34,7 @@ static inline void paravirt_release_p4d(unsigned long pfn) {}
*/ */
extern gfp_t __userpte_alloc_gfp; extern gfp_t __userpte_alloc_gfp;
#ifdef CONFIG_PAGE_TABLE_ISOLATION #ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
/* /*
* Instead of one PGD, we acquire two PGDs. Being order-1, it is * Instead of one PGD, we acquire two PGDs. Being order-1, it is
* both 8k in size and 8k-aligned. That lets us just flip bit 12 * both 8k in size and 8k-aligned. That lets us just flip bit 12


@ -52,7 +52,7 @@ static inline void native_set_pmd(pmd_t *pmdp, pmd_t pmd)
static inline void native_set_pud(pud_t *pudp, pud_t pud) static inline void native_set_pud(pud_t *pudp, pud_t pud)
{ {
#ifdef CONFIG_PAGE_TABLE_ISOLATION #ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
pud.p4d.pgd = pti_set_user_pgtbl(&pudp->p4d.pgd, pud.p4d.pgd); pud.p4d.pgd = pti_set_user_pgtbl(&pudp->p4d.pgd, pud.p4d.pgd);
#endif #endif
pxx_xchg64(pud, pudp, native_pud_val(pud)); pxx_xchg64(pud, pudp, native_pud_val(pud));


@ -909,7 +909,7 @@ static inline int is_new_memtype_allowed(u64 paddr, unsigned long size,
pmd_t *populate_extra_pmd(unsigned long vaddr); pmd_t *populate_extra_pmd(unsigned long vaddr);
pte_t *populate_extra_pte(unsigned long vaddr); pte_t *populate_extra_pte(unsigned long vaddr);
#ifdef CONFIG_PAGE_TABLE_ISOLATION #ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
pgd_t __pti_set_user_pgtbl(pgd_t *pgdp, pgd_t pgd); pgd_t __pti_set_user_pgtbl(pgd_t *pgdp, pgd_t pgd);
/* /*
@ -923,12 +923,12 @@ static inline pgd_t pti_set_user_pgtbl(pgd_t *pgdp, pgd_t pgd)
return pgd; return pgd;
return __pti_set_user_pgtbl(pgdp, pgd); return __pti_set_user_pgtbl(pgdp, pgd);
} }
#else /* CONFIG_PAGE_TABLE_ISOLATION */ #else /* CONFIG_MITIGATION_PAGE_TABLE_ISOLATION */
static inline pgd_t pti_set_user_pgtbl(pgd_t *pgdp, pgd_t pgd) static inline pgd_t pti_set_user_pgtbl(pgd_t *pgdp, pgd_t pgd)
{ {
return pgd; return pgd;
} }
#endif /* CONFIG_PAGE_TABLE_ISOLATION */ #endif /* CONFIG_MITIGATION_PAGE_TABLE_ISOLATION */
#endif /* __ASSEMBLY__ */ #endif /* __ASSEMBLY__ */
@ -1131,7 +1131,7 @@ static inline int p4d_bad(p4d_t p4d)
{ {
unsigned long ignore_flags = _KERNPG_TABLE | _PAGE_USER; unsigned long ignore_flags = _KERNPG_TABLE | _PAGE_USER;
if (IS_ENABLED(CONFIG_PAGE_TABLE_ISOLATION)) if (IS_ENABLED(CONFIG_MITIGATION_PAGE_TABLE_ISOLATION))
ignore_flags |= _PAGE_NX; ignore_flags |= _PAGE_NX;
return (p4d_flags(p4d) & ~ignore_flags) != 0; return (p4d_flags(p4d) & ~ignore_flags) != 0;
@ -1177,7 +1177,7 @@ static inline int pgd_bad(pgd_t pgd)
if (!pgtable_l5_enabled()) if (!pgtable_l5_enabled())
return 0; return 0;
if (IS_ENABLED(CONFIG_PAGE_TABLE_ISOLATION)) if (IS_ENABLED(CONFIG_MITIGATION_PAGE_TABLE_ISOLATION))
ignore_flags |= _PAGE_NX; ignore_flags |= _PAGE_NX;
return (pgd_flags(pgd) & ~ignore_flags) != _KERNPG_TABLE; return (pgd_flags(pgd) & ~ignore_flags) != _KERNPG_TABLE;
@ -1422,9 +1422,9 @@ static inline bool pgdp_maps_userspace(void *__ptr)
#define pgd_leaf pgd_large #define pgd_leaf pgd_large
static inline int pgd_large(pgd_t pgd) { return 0; } static inline int pgd_large(pgd_t pgd) { return 0; }
#ifdef CONFIG_PAGE_TABLE_ISOLATION #ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
/* /*
* All top-level PAGE_TABLE_ISOLATION page tables are order-1 pages * All top-level MITIGATION_PAGE_TABLE_ISOLATION page tables are order-1 pages
* (8k-aligned and 8k in size). The kernel one is at the beginning 4k and * (8k-aligned and 8k in size). The kernel one is at the beginning 4k and
* the user one is in the last 4k. To switch between them, you * the user one is in the last 4k. To switch between them, you
* just need to flip the 12th bit in their addresses. * just need to flip the 12th bit in their addresses.
@ -1469,7 +1469,7 @@ static inline p4d_t *user_to_kernel_p4dp(p4d_t *p4dp)
{ {
return ptr_clear_bit(p4dp, PTI_PGTABLE_SWITCH_BIT); return ptr_clear_bit(p4dp, PTI_PGTABLE_SWITCH_BIT);
} }
#endif /* CONFIG_PAGE_TABLE_ISOLATION */ #endif /* CONFIG_MITIGATION_PAGE_TABLE_ISOLATION */
/* /*
* clone_pgd_range(pgd_t *dst, pgd_t *src, int count); * clone_pgd_range(pgd_t *dst, pgd_t *src, int count);
@ -1484,7 +1484,7 @@ static inline p4d_t *user_to_kernel_p4dp(p4d_t *p4dp)
static inline void clone_pgd_range(pgd_t *dst, pgd_t *src, int count) static inline void clone_pgd_range(pgd_t *dst, pgd_t *src, int count)
{ {
memcpy(dst, src, count * sizeof(pgd_t)); memcpy(dst, src, count * sizeof(pgd_t));
#ifdef CONFIG_PAGE_TABLE_ISOLATION #ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
if (!static_cpu_has(X86_FEATURE_PTI)) if (!static_cpu_has(X86_FEATURE_PTI))
return; return;
/* Clone the user space pgd as well */ /* Clone the user space pgd as well */


@ -143,7 +143,8 @@ static inline void native_set_p4d(p4d_t *p4dp, p4d_t p4d)
{ {
pgd_t pgd; pgd_t pgd;
if (pgtable_l5_enabled() || !IS_ENABLED(CONFIG_PAGE_TABLE_ISOLATION)) { if (pgtable_l5_enabled() ||
!IS_ENABLED(CONFIG_MITIGATION_PAGE_TABLE_ISOLATION)) {
WRITE_ONCE(*p4dp, p4d); WRITE_ONCE(*p4dp, p4d);
return; return;
} }


@ -91,7 +91,7 @@ static __always_inline void __preempt_count_sub(int val)
*/ */
static __always_inline bool __preempt_count_dec_and_test(void) static __always_inline bool __preempt_count_dec_and_test(void)
{ {
return GEN_UNARY_RMWcc("decl", pcpu_hot.preempt_count, e, return GEN_UNARY_RMWcc("decl", __my_cpu_var(pcpu_hot.preempt_count), e,
__percpu_arg([var])); __percpu_arg([var]));
} }


@ -51,7 +51,7 @@
#define CR3_NOFLUSH 0 #define CR3_NOFLUSH 0
#endif #endif
#ifdef CONFIG_PAGE_TABLE_ISOLATION #ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
# define X86_CR3_PTI_PCID_USER_BIT 11 # define X86_CR3_PTI_PCID_USER_BIT 11
#endif #endif


@ -526,6 +526,9 @@ static __always_inline unsigned long current_top_of_stack(void)
* and around vm86 mode and sp0 on x86_64 is special because of the * and around vm86 mode and sp0 on x86_64 is special because of the
* entry trampoline. * entry trampoline.
*/ */
if (IS_ENABLED(CONFIG_USE_X86_SEG_SUPPORT))
return this_cpu_read_const(const_pcpu_hot.top_of_stack);
return this_cpu_read_stable(pcpu_hot.top_of_stack); return this_cpu_read_stable(pcpu_hot.top_of_stack);
} }
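
A hedged illustration of what the const form buys: this_cpu_read_const() carries no volatile qualifier and const_pcpu_hot is a const-qualified alias of pcpu_hot, so the compiler may merge repeated reads:

	/* Illustration only: both loads below may be folded into a single
	 * %gs-relative access when CONFIG_USE_X86_SEG_SUPPORT=y. */
	unsigned long a = this_cpu_read_const(const_pcpu_hot.top_of_stack);
	unsigned long b = this_cpu_read_const(const_pcpu_hot.top_of_stack);
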
@ -548,7 +551,7 @@ static inline void load_sp0(unsigned long sp0)
unsigned long __get_wchan(struct task_struct *p); unsigned long __get_wchan(struct task_struct *p);
extern void select_idle_routine(const struct cpuinfo_x86 *c); extern void select_idle_routine(void);
extern void amd_e400_c1e_apic_setup(void); extern void amd_e400_c1e_apic_setup(void);
extern unsigned long boot_option_idle_override; extern unsigned long boot_option_idle_override;


@ -3,7 +3,7 @@
#define _ASM_X86_PTI_H #define _ASM_X86_PTI_H
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
#ifdef CONFIG_PAGE_TABLE_ISOLATION #ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
extern void pti_init(void); extern void pti_init(void);
extern void pti_check_boottime_disable(void); extern void pti_check_boottime_disable(void);
extern void pti_finalize(void); extern void pti_finalize(void);


@ -46,7 +46,7 @@
#define ARCH_DEFINE_STATIC_CALL_TRAMP(name, func) \ #define ARCH_DEFINE_STATIC_CALL_TRAMP(name, func) \
__ARCH_DEFINE_STATIC_CALL_TRAMP(name, ".byte 0xe9; .long " #func " - (. + 4)") __ARCH_DEFINE_STATIC_CALL_TRAMP(name, ".byte 0xe9; .long " #func " - (. + 4)")
#ifdef CONFIG_RETHUNK #ifdef CONFIG_MITIGATION_RETHUNK
#define ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name) \ #define ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name) \
__ARCH_DEFINE_STATIC_CALL_TRAMP(name, "jmp __x86_return_thunk") __ARCH_DEFINE_STATIC_CALL_TRAMP(name, "jmp __x86_return_thunk")
#else #else


@ -15,6 +15,8 @@
extern void text_poke_early(void *addr, const void *opcode, size_t len); extern void text_poke_early(void *addr, const void *opcode, size_t len);
extern void apply_relocation(u8 *buf, size_t len, u8 *dest, u8 *src, size_t src_len);
/* /*
* Clear and restore the kernel write-protection flag on the local CPU. * Clear and restore the kernel write-protection flag on the local CPU.
* Allows the kernel to edit read-only pages. * Allows the kernel to edit read-only pages.


@ -11,6 +11,7 @@
#include <asm/alternative.h> #include <asm/alternative.h>
#include <asm/cpufeatures.h> #include <asm/cpufeatures.h>
#include <asm/page.h> #include <asm/page.h>
#include <asm/percpu.h>
#ifdef CONFIG_ADDRESS_MASKING #ifdef CONFIG_ADDRESS_MASKING
/* /*
@ -18,14 +19,10 @@
*/ */
static inline unsigned long __untagged_addr(unsigned long addr) static inline unsigned long __untagged_addr(unsigned long addr)
{ {
/*
* Refer tlbstate_untag_mask directly to avoid RIP-relative relocation
* in alternative instructions. The relocation gets wrong when gets
* copied to the target place.
*/
asm (ALTERNATIVE("", asm (ALTERNATIVE("",
"and %%gs:tlbstate_untag_mask, %[addr]\n\t", X86_FEATURE_LAM) "and " __percpu_arg([mask]) ", %[addr]", X86_FEATURE_LAM)
: [addr] "+r" (addr) : "m" (tlbstate_untag_mask)); : [addr] "+r" (addr)
: [mask] "m" (__my_cpu_var(tlbstate_untag_mask)));
return addr; return addr;
} }


@ -17,7 +17,7 @@
* Hooray, we are in Long 64-bit mode (but still running in low memory) * Hooray, we are in Long 64-bit mode (but still running in low memory)
*/ */
SYM_FUNC_START(wakeup_long64) SYM_FUNC_START(wakeup_long64)
movq saved_magic, %rax movq saved_magic(%rip), %rax
movq $0x123456789abcdef0, %rdx movq $0x123456789abcdef0, %rdx
cmpq %rdx, %rax cmpq %rdx, %rax
je 2f je 2f
@ -33,14 +33,14 @@ SYM_FUNC_START(wakeup_long64)
movw %ax, %es movw %ax, %es
movw %ax, %fs movw %ax, %fs
movw %ax, %gs movw %ax, %gs
movq saved_rsp, %rsp movq saved_rsp(%rip), %rsp
movq saved_rbx, %rbx movq saved_rbx(%rip), %rbx
movq saved_rdi, %rdi movq saved_rdi(%rip), %rdi
movq saved_rsi, %rsi movq saved_rsi(%rip), %rsi
movq saved_rbp, %rbp movq saved_rbp(%rip), %rbp
movq saved_rip, %rax movq saved_rip(%rip), %rax
ANNOTATE_RETPOLINE_SAFE ANNOTATE_RETPOLINE_SAFE
jmp *%rax jmp *%rax
SYM_FUNC_END(wakeup_long64) SYM_FUNC_END(wakeup_long64)
@ -72,11 +72,11 @@ SYM_FUNC_START(do_suspend_lowlevel)
movq $.Lresume_point, saved_rip(%rip) movq $.Lresume_point, saved_rip(%rip)
movq %rsp, saved_rsp movq %rsp, saved_rsp(%rip)
movq %rbp, saved_rbp movq %rbp, saved_rbp(%rip)
movq %rbx, saved_rbx movq %rbx, saved_rbx(%rip)
movq %rdi, saved_rdi movq %rdi, saved_rdi(%rip)
movq %rsi, saved_rsi movq %rsi, saved_rsi(%rip)
addq $8, %rsp addq $8, %rsp
movl $3, %edi movl $3, %edi
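
Illustrative encodings (not taken from the patch) of what the (%rip) conversion above changes:

	movq	saved_magic, %rax	# absolute address: not position-independent,
					#   needs a full-address relocation
	movq	saved_magic(%rip), %rax	# RIP-relative: 32-bit displacement from the
					#   next instruction, position-independent
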


@ -45,7 +45,7 @@ EXPORT_SYMBOL_GPL(alternatives_patched);
#define DA_ENDBR 0x08 #define DA_ENDBR 0x08
#define DA_SMP 0x10 #define DA_SMP 0x10
static unsigned int __initdata_or_module debug_alternative; static unsigned int debug_alternative;
static int __init debug_alt(char *str) static int __init debug_alt(char *str)
{ {
@ -133,7 +133,7 @@ const unsigned char * const x86_nops[ASM_NOP_MAX+1] =
* each single-byte NOPs). If @len to fill out is > ASM_NOP_MAX, pad with INT3 and * each single-byte NOPs). If @len to fill out is > ASM_NOP_MAX, pad with INT3 and
* *jump* over instead of executing long and daft NOPs. * *jump* over instead of executing long and daft NOPs.
*/ */
static void __init_or_module add_nop(u8 *instr, unsigned int len) static void add_nop(u8 *instr, unsigned int len)
{ {
u8 *target = instr + len; u8 *target = instr + len;
@ -206,7 +206,7 @@ static int skip_nops(u8 *instr, int offset, int len)
* Optimize a sequence of NOPs, possibly preceded by an unconditional jump * Optimize a sequence of NOPs, possibly preceded by an unconditional jump
* to the end of the NOP sequence into a single NOP. * to the end of the NOP sequence into a single NOP.
*/ */
static bool __init_or_module static bool
__optimize_nops(u8 *instr, size_t len, struct insn *insn, int *next, int *prev, int *target) __optimize_nops(u8 *instr, size_t len, struct insn *insn, int *next, int *prev, int *target)
{ {
int i = *next - insn->length; int i = *next - insn->length;
@ -335,8 +335,7 @@ bool need_reloc(unsigned long offset, u8 *src, size_t src_len)
return (target < src || target > src + src_len); return (target < src || target > src + src_len);
} }
static void __init_or_module noinline void apply_relocation(u8 *buf, size_t len, u8 *dest, u8 *src, size_t src_len)
apply_relocation(u8 *buf, size_t len, u8 *dest, u8 *src, size_t src_len)
{ {
int prev, target = 0; int prev, target = 0;
@ -545,7 +544,7 @@ static inline bool is_jcc32(struct insn *insn)
return insn->opcode.bytes[0] == 0x0f && (insn->opcode.bytes[1] & 0xf0) == 0x80; return insn->opcode.bytes[0] == 0x0f && (insn->opcode.bytes[1] & 0xf0) == 0x80;
} }
#if defined(CONFIG_RETPOLINE) && defined(CONFIG_OBJTOOL) #if defined(CONFIG_MITIGATION_RETPOLINE) && defined(CONFIG_OBJTOOL)
/* /*
* CALL/JMP *%\reg * CALL/JMP *%\reg
@ -709,8 +708,8 @@ static int patch_retpoline(void *addr, struct insn *insn, u8 *bytes)
/* /*
* The compiler is supposed to EMIT an INT3 after every unconditional * The compiler is supposed to EMIT an INT3 after every unconditional
* JMP instruction due to AMD BTC. However, if the compiler is too old * JMP instruction due to AMD BTC. However, if the compiler is too old
* or SLS isn't enabled, we still need an INT3 after indirect JMPs * or MITIGATION_SLS isn't enabled, we still need an INT3 after
* even on Intel. * indirect JMPs even on Intel.
*/ */
if (op == JMP32_INSN_OPCODE && i < insn->length) if (op == JMP32_INSN_OPCODE && i < insn->length)
bytes[i++] = INT3_INSN_OPCODE; bytes[i++] = INT3_INSN_OPCODE;
@ -770,7 +769,7 @@ void __init_or_module noinline apply_retpolines(s32 *start, s32 *end)
} }
} }
#ifdef CONFIG_RETHUNK #ifdef CONFIG_MITIGATION_RETHUNK
/* /*
* Rewrite the compiler generated return thunk tail-calls. * Rewrite the compiler generated return thunk tail-calls.
@ -843,14 +842,14 @@ void __init_or_module noinline apply_returns(s32 *start, s32 *end)
} }
#else #else
void __init_or_module noinline apply_returns(s32 *start, s32 *end) { } void __init_or_module noinline apply_returns(s32 *start, s32 *end) { }
#endif /* CONFIG_RETHUNK */ #endif /* CONFIG_MITIGATION_RETHUNK */
#else /* !CONFIG_RETPOLINE || !CONFIG_OBJTOOL */ #else /* !CONFIG_MITIGATION_RETPOLINE || !CONFIG_OBJTOOL */
void __init_or_module noinline apply_retpolines(s32 *start, s32 *end) { } void __init_or_module noinline apply_retpolines(s32 *start, s32 *end) { }
void __init_or_module noinline apply_returns(s32 *start, s32 *end) { } void __init_or_module noinline apply_returns(s32 *start, s32 *end) { }
#endif /* CONFIG_RETPOLINE && CONFIG_OBJTOOL */ #endif /* CONFIG_MITIGATION_RETPOLINE && CONFIG_OBJTOOL */
#ifdef CONFIG_X86_KERNEL_IBT #ifdef CONFIG_X86_KERNEL_IBT


@ -109,7 +109,7 @@ static void __used common(void)
OFFSET(TSS_sp2, tss_struct, x86_tss.sp2); OFFSET(TSS_sp2, tss_struct, x86_tss.sp2);
OFFSET(X86_top_of_stack, pcpu_hot, top_of_stack); OFFSET(X86_top_of_stack, pcpu_hot, top_of_stack);
OFFSET(X86_current_task, pcpu_hot, current_task); OFFSET(X86_current_task, pcpu_hot, current_task);
#ifdef CONFIG_CALL_DEPTH_TRACKING #ifdef CONFIG_MITIGATION_CALL_DEPTH_TRACKING
OFFSET(X86_call_depth, pcpu_hot, call_depth); OFFSET(X86_call_depth, pcpu_hot, call_depth);
#endif #endif
#if IS_ENABLED(CONFIG_CRYPTO_ARIA_AESNI_AVX_X86_64) #if IS_ENABLED(CONFIG_CRYPTO_ARIA_AESNI_AVX_X86_64)


@ -24,6 +24,8 @@
static int __initdata_or_module debug_callthunks; static int __initdata_or_module debug_callthunks;
#define MAX_PATCH_LEN (255-1)
#define prdbg(fmt, args...) \ #define prdbg(fmt, args...) \
do { \ do { \
if (debug_callthunks) \ if (debug_callthunks) \
@ -179,10 +181,15 @@ static const u8 nops[] = {
static void *patch_dest(void *dest, bool direct) static void *patch_dest(void *dest, bool direct)
{ {
unsigned int tsize = SKL_TMPL_SIZE; unsigned int tsize = SKL_TMPL_SIZE;
u8 insn_buff[MAX_PATCH_LEN];
u8 *pad = dest - tsize; u8 *pad = dest - tsize;
memcpy(insn_buff, skl_call_thunk_template, tsize);
apply_relocation(insn_buff, tsize, pad,
skl_call_thunk_template, tsize);
/* Already patched? */ /* Already patched? */
if (!bcmp(pad, skl_call_thunk_template, tsize)) if (!bcmp(pad, insn_buff, tsize))
return pad; return pad;
/* Ensure there are nops */ /* Ensure there are nops */
@ -192,9 +199,9 @@ static void *patch_dest(void *dest, bool direct)
} }
if (direct) if (direct)
memcpy(pad, skl_call_thunk_template, tsize); memcpy(pad, insn_buff, tsize);
else else
text_poke_copy_locked(pad, skl_call_thunk_template, tsize, true); text_poke_copy_locked(pad, insn_buff, tsize, true);
return pad; return pad;
} }
@ -290,20 +297,27 @@ void *callthunks_translate_call_dest(void *dest)
static bool is_callthunk(void *addr) static bool is_callthunk(void *addr)
{ {
unsigned int tmpl_size = SKL_TMPL_SIZE; unsigned int tmpl_size = SKL_TMPL_SIZE;
void *tmpl = skl_call_thunk_template; u8 insn_buff[MAX_PATCH_LEN];
unsigned long dest; unsigned long dest;
u8 *pad;
dest = roundup((unsigned long)addr, CONFIG_FUNCTION_ALIGNMENT); dest = roundup((unsigned long)addr, CONFIG_FUNCTION_ALIGNMENT);
if (!thunks_initialized || skip_addr((void *)dest)) if (!thunks_initialized || skip_addr((void *)dest))
return false; return false;
return !bcmp((void *)(dest - tmpl_size), tmpl, tmpl_size); pad = (void *)(dest - tmpl_size);
memcpy(insn_buff, skl_call_thunk_template, tmpl_size);
apply_relocation(insn_buff, tmpl_size, pad,
skl_call_thunk_template, tmpl_size);
return !bcmp(pad, insn_buff, tmpl_size);
} }
int x86_call_depth_emit_accounting(u8 **pprog, void *func) int x86_call_depth_emit_accounting(u8 **pprog, void *func)
{ {
unsigned int tmpl_size = SKL_TMPL_SIZE; unsigned int tmpl_size = SKL_TMPL_SIZE;
void *tmpl = skl_call_thunk_template; u8 insn_buff[MAX_PATCH_LEN];
if (!thunks_initialized) if (!thunks_initialized)
return 0; return 0;
@ -312,7 +326,11 @@ int x86_call_depth_emit_accounting(u8 **pprog, void *func)
if (func && is_callthunk(func)) if (func && is_callthunk(func))
return 0; return 0;
memcpy(*pprog, tmpl, tmpl_size); memcpy(insn_buff, skl_call_thunk_template, tmpl_size);
apply_relocation(insn_buff, tmpl_size, *pprog,
skl_call_thunk_template, tmpl_size);
memcpy(*pprog, insn_buff, tmpl_size);
*pprog += tmpl_size; *pprog += tmpl_size;
return tmpl_size; return tmpl_size;
} }


@ -817,7 +817,7 @@ static void fix_erratum_1386(struct cpuinfo_x86 *c)
void init_spectral_chicken(struct cpuinfo_x86 *c) void init_spectral_chicken(struct cpuinfo_x86 *c)
{ {
#ifdef CONFIG_CPU_UNRET_ENTRY #ifdef CONFIG_MITIGATION_UNRET_ENTRY
u64 value; u64 value;
/* /*


@ -668,7 +668,7 @@ enum gds_mitigations {
GDS_MITIGATION_HYPERVISOR, GDS_MITIGATION_HYPERVISOR,
}; };
#if IS_ENABLED(CONFIG_GDS_FORCE_MITIGATION) #if IS_ENABLED(CONFIG_MITIGATION_GDS_FORCE)
static enum gds_mitigations gds_mitigation __ro_after_init = GDS_MITIGATION_FORCE; static enum gds_mitigations gds_mitigation __ro_after_init = GDS_MITIGATION_FORCE;
#else #else
static enum gds_mitigations gds_mitigation __ro_after_init = GDS_MITIGATION_FULL; static enum gds_mitigations gds_mitigation __ro_after_init = GDS_MITIGATION_FULL;
@ -979,10 +979,10 @@ static void __init retbleed_select_mitigation(void)
return; return;
case RETBLEED_CMD_UNRET: case RETBLEED_CMD_UNRET:
if (IS_ENABLED(CONFIG_CPU_UNRET_ENTRY)) { if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY)) {
retbleed_mitigation = RETBLEED_MITIGATION_UNRET; retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
} else { } else {
pr_err("WARNING: kernel not compiled with CPU_UNRET_ENTRY.\n"); pr_err("WARNING: kernel not compiled with MITIGATION_UNRET_ENTRY.\n");
goto do_cmd_auto; goto do_cmd_auto;
} }
break; break;
@ -991,24 +991,24 @@ static void __init retbleed_select_mitigation(void)
if (!boot_cpu_has(X86_FEATURE_IBPB)) { if (!boot_cpu_has(X86_FEATURE_IBPB)) {
pr_err("WARNING: CPU does not support IBPB.\n"); pr_err("WARNING: CPU does not support IBPB.\n");
goto do_cmd_auto; goto do_cmd_auto;
} else if (IS_ENABLED(CONFIG_CPU_IBPB_ENTRY)) { } else if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
retbleed_mitigation = RETBLEED_MITIGATION_IBPB; retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
} else { } else {
pr_err("WARNING: kernel not compiled with CPU_IBPB_ENTRY.\n"); pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n");
goto do_cmd_auto; goto do_cmd_auto;
} }
break; break;
case RETBLEED_CMD_STUFF: case RETBLEED_CMD_STUFF:
if (IS_ENABLED(CONFIG_CALL_DEPTH_TRACKING) && if (IS_ENABLED(CONFIG_MITIGATION_CALL_DEPTH_TRACKING) &&
spectre_v2_enabled == SPECTRE_V2_RETPOLINE) { spectre_v2_enabled == SPECTRE_V2_RETPOLINE) {
retbleed_mitigation = RETBLEED_MITIGATION_STUFF; retbleed_mitigation = RETBLEED_MITIGATION_STUFF;
} else { } else {
if (IS_ENABLED(CONFIG_CALL_DEPTH_TRACKING)) if (IS_ENABLED(CONFIG_MITIGATION_CALL_DEPTH_TRACKING))
pr_err("WARNING: retbleed=stuff depends on spectre_v2=retpoline\n"); pr_err("WARNING: retbleed=stuff depends on spectre_v2=retpoline\n");
else else
pr_err("WARNING: kernel not compiled with CALL_DEPTH_TRACKING.\n"); pr_err("WARNING: kernel not compiled with MITIGATION_CALL_DEPTH_TRACKING.\n");
goto do_cmd_auto; goto do_cmd_auto;
} }
@ -1018,9 +1018,10 @@ static void __init retbleed_select_mitigation(void)
case RETBLEED_CMD_AUTO: case RETBLEED_CMD_AUTO:
if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD || if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) { boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
if (IS_ENABLED(CONFIG_CPU_UNRET_ENTRY)) if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY))
retbleed_mitigation = RETBLEED_MITIGATION_UNRET; retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
else if (IS_ENABLED(CONFIG_CPU_IBPB_ENTRY) && boot_cpu_has(X86_FEATURE_IBPB)) else if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY) &&
boot_cpu_has(X86_FEATURE_IBPB))
retbleed_mitigation = RETBLEED_MITIGATION_IBPB; retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
} }
@ -1099,7 +1100,7 @@ static enum spectre_v2_user_mitigation spectre_v2_user_stibp __ro_after_init =
static enum spectre_v2_user_mitigation spectre_v2_user_ibpb __ro_after_init = static enum spectre_v2_user_mitigation spectre_v2_user_ibpb __ro_after_init =
SPECTRE_V2_USER_NONE; SPECTRE_V2_USER_NONE;
#ifdef CONFIG_RETPOLINE #ifdef CONFIG_MITIGATION_RETPOLINE
static bool spectre_v2_bad_module; static bool spectre_v2_bad_module;
bool retpoline_module_ok(bool has_retpoline) bool retpoline_module_ok(bool has_retpoline)
@ -1412,7 +1413,7 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
cmd == SPECTRE_V2_CMD_RETPOLINE_GENERIC || cmd == SPECTRE_V2_CMD_RETPOLINE_GENERIC ||
cmd == SPECTRE_V2_CMD_EIBRS_LFENCE || cmd == SPECTRE_V2_CMD_EIBRS_LFENCE ||
cmd == SPECTRE_V2_CMD_EIBRS_RETPOLINE) && cmd == SPECTRE_V2_CMD_EIBRS_RETPOLINE) &&
!IS_ENABLED(CONFIG_RETPOLINE)) { !IS_ENABLED(CONFIG_MITIGATION_RETPOLINE)) {
pr_err("%s selected but not compiled in. Switching to AUTO select\n", pr_err("%s selected but not compiled in. Switching to AUTO select\n",
mitigation_options[i].option); mitigation_options[i].option);
return SPECTRE_V2_CMD_AUTO; return SPECTRE_V2_CMD_AUTO;
@ -1435,7 +1436,7 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
return SPECTRE_V2_CMD_AUTO; return SPECTRE_V2_CMD_AUTO;
} }
if (cmd == SPECTRE_V2_CMD_IBRS && !IS_ENABLED(CONFIG_CPU_IBRS_ENTRY)) { if (cmd == SPECTRE_V2_CMD_IBRS && !IS_ENABLED(CONFIG_MITIGATION_IBRS_ENTRY)) {
pr_err("%s selected but not compiled in. Switching to AUTO select\n", pr_err("%s selected but not compiled in. Switching to AUTO select\n",
mitigation_options[i].option); mitigation_options[i].option);
return SPECTRE_V2_CMD_AUTO; return SPECTRE_V2_CMD_AUTO;
@ -1466,7 +1467,7 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
static enum spectre_v2_mitigation __init spectre_v2_select_retpoline(void) static enum spectre_v2_mitigation __init spectre_v2_select_retpoline(void)
{ {
if (!IS_ENABLED(CONFIG_RETPOLINE)) { if (!IS_ENABLED(CONFIG_MITIGATION_RETPOLINE)) {
pr_err("Kernel not compiled with retpoline; no mitigation available!"); pr_err("Kernel not compiled with retpoline; no mitigation available!");
return SPECTRE_V2_NONE; return SPECTRE_V2_NONE;
} }
@ -1561,7 +1562,7 @@ static void __init spectre_v2_select_mitigation(void)
break; break;
} }
if (IS_ENABLED(CONFIG_CPU_IBRS_ENTRY) && if (IS_ENABLED(CONFIG_MITIGATION_IBRS_ENTRY) &&
boot_cpu_has_bug(X86_BUG_RETBLEED) && boot_cpu_has_bug(X86_BUG_RETBLEED) &&
retbleed_cmd != RETBLEED_CMD_OFF && retbleed_cmd != RETBLEED_CMD_OFF &&
retbleed_cmd != RETBLEED_CMD_STUFF && retbleed_cmd != RETBLEED_CMD_STUFF &&
@ -2454,7 +2455,7 @@ static void __init srso_select_mitigation(void)
break; break;
case SRSO_CMD_SAFE_RET: case SRSO_CMD_SAFE_RET:
if (IS_ENABLED(CONFIG_CPU_SRSO)) { if (IS_ENABLED(CONFIG_MITIGATION_SRSO)) {
/* /*
* Enable the return thunk for generated code * Enable the return thunk for generated code
* like ftrace, static_call, etc. * like ftrace, static_call, etc.
@ -2474,29 +2475,29 @@ static void __init srso_select_mitigation(void)
else else
srso_mitigation = SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED; srso_mitigation = SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED;
} else { } else {
pr_err("WARNING: kernel not compiled with CPU_SRSO.\n"); pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");
} }
break; break;
case SRSO_CMD_IBPB: case SRSO_CMD_IBPB:
if (IS_ENABLED(CONFIG_CPU_IBPB_ENTRY)) { if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
if (has_microcode) { if (has_microcode) {
setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB); setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB);
srso_mitigation = SRSO_MITIGATION_IBPB; srso_mitigation = SRSO_MITIGATION_IBPB;
} }
} else { } else {
pr_err("WARNING: kernel not compiled with CPU_IBPB_ENTRY.\n"); pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n");
} }
break; break;
case SRSO_CMD_IBPB_ON_VMEXIT: case SRSO_CMD_IBPB_ON_VMEXIT:
if (IS_ENABLED(CONFIG_CPU_SRSO)) { if (IS_ENABLED(CONFIG_MITIGATION_SRSO)) {
if (!boot_cpu_has(X86_FEATURE_ENTRY_IBPB) && has_microcode) { if (!boot_cpu_has(X86_FEATURE_ENTRY_IBPB) && has_microcode) {
setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT); setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT; srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
} }
} else { } else {
pr_err("WARNING: kernel not compiled with CPU_SRSO.\n"); pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");
} }
break; break;
} }
@ -2846,3 +2847,8 @@ ssize_t cpu_show_gds(struct device *dev, struct device_attribute *attr, char *bu
return cpu_show_common(dev, attr, buf, X86_BUG_GDS); return cpu_show_common(dev, attr, buf, X86_BUG_GDS);
} }
#endif #endif
void __warn_thunk(void)
{
WARN_ONCE(1, "Unpatched return thunk in use. This should not happen!\n");
}


@ -1856,8 +1856,6 @@ static void identify_cpu(struct cpuinfo_x86 *c)
/* Init Machine Check Exception if available. */ /* Init Machine Check Exception if available. */
mcheck_cpu_init(c); mcheck_cpu_init(c);
select_idle_routine(c);
#ifdef CONFIG_NUMA #ifdef CONFIG_NUMA
numa_add_cpu(smp_processor_id()); numa_add_cpu(smp_processor_id());
#endif #endif
@ -1967,6 +1965,7 @@ DEFINE_PER_CPU_ALIGNED(struct pcpu_hot, pcpu_hot) = {
.top_of_stack = TOP_OF_INIT_STACK, .top_of_stack = TOP_OF_INIT_STACK,
}; };
EXPORT_PER_CPU_SYMBOL(pcpu_hot); EXPORT_PER_CPU_SYMBOL(pcpu_hot);
EXPORT_PER_CPU_SYMBOL(const_pcpu_hot);
#ifdef CONFIG_X86_64 #ifdef CONFIG_X86_64
DEFINE_PER_CPU_FIRST(struct fixed_percpu_data, DEFINE_PER_CPU_FIRST(struct fixed_percpu_data,
@ -2278,6 +2277,8 @@ void __init arch_cpu_finalize_init(void)
{ {
identify_boot_cpu(); identify_boot_cpu();
select_idle_routine();
/* /*
* identify_boot_cpu() initialized SMT support information, let the * identify_boot_cpu() initialized SMT support information, let the
* core code know. * core code know.


@ -410,7 +410,7 @@ static void __die_header(const char *str, struct pt_regs *regs, long err)
IS_ENABLED(CONFIG_SMP) ? " SMP" : "", IS_ENABLED(CONFIG_SMP) ? " SMP" : "",
debug_pagealloc_enabled() ? " DEBUG_PAGEALLOC" : "", debug_pagealloc_enabled() ? " DEBUG_PAGEALLOC" : "",
IS_ENABLED(CONFIG_KASAN) ? " KASAN" : "", IS_ENABLED(CONFIG_KASAN) ? " KASAN" : "",
IS_ENABLED(CONFIG_PAGE_TABLE_ISOLATION) ? IS_ENABLED(CONFIG_MITIGATION_PAGE_TABLE_ISOLATION) ?
(boot_cpu_has(X86_FEATURE_PTI) ? " PTI" : " NOPTI") : ""); (boot_cpu_has(X86_FEATURE_PTI) ? " PTI" : " NOPTI") : "");
} }
NOKPROBE_SYMBOL(__die_header); NOKPROBE_SYMBOL(__die_header);


@ -307,7 +307,8 @@ union ftrace_op_code_union {
} __attribute__((packed)); } __attribute__((packed));
}; };
#define RET_SIZE (IS_ENABLED(CONFIG_RETPOLINE) ? 5 : 1 + IS_ENABLED(CONFIG_SLS)) #define RET_SIZE \
(IS_ENABLED(CONFIG_MITIGATION_RETPOLINE) ? 5 : 1 + IS_ENABLED(CONFIG_MITIGATION_SLS))
static unsigned long static unsigned long
create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size) create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)


@ -414,7 +414,7 @@ __REFDATA
.align 4 .align 4
SYM_DATA(initial_code, .long i386_start_kernel) SYM_DATA(initial_code, .long i386_start_kernel)
#ifdef CONFIG_PAGE_TABLE_ISOLATION #ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
#define PGD_ALIGN (2 * PAGE_SIZE) #define PGD_ALIGN (2 * PAGE_SIZE)
#define PTI_USER_PGD_FILL 1024 #define PTI_USER_PGD_FILL 1024
#else #else
@ -474,7 +474,7 @@ SYM_DATA_START(initial_page_table)
# endif # endif
.align PAGE_SIZE /* needs to be page-sized too */ .align PAGE_SIZE /* needs to be page-sized too */
#ifdef CONFIG_PAGE_TABLE_ISOLATION #ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
/* /*
* PTI needs another page so sync_initial_pagetable() works correctly * PTI needs another page so sync_initial_pagetable() works correctly
* and does not scribble over the data which is placed behind the * and does not scribble over the data which is placed behind the


@ -478,7 +478,7 @@ SYM_CODE_START(soft_restart_cpu)
UNWIND_HINT_END_OF_STACK UNWIND_HINT_END_OF_STACK
/* Find the idle task stack */ /* Find the idle task stack */
movq PER_CPU_VAR(pcpu_hot) + X86_current_task, %rcx movq PER_CPU_VAR(pcpu_hot + X86_current_task), %rcx
movq TASK_threadsp(%rcx), %rsp movq TASK_threadsp(%rcx), %rsp
jmp .Ljump_to_C_code jmp .Ljump_to_C_code
@ -623,7 +623,7 @@ SYM_CODE_END(vc_no_ghcb)
#define SYM_DATA_START_PAGE_ALIGNED(name) \ #define SYM_DATA_START_PAGE_ALIGNED(name) \
SYM_START(name, SYM_L_GLOBAL, .balign PAGE_SIZE) SYM_START(name, SYM_L_GLOBAL, .balign PAGE_SIZE)
#ifdef CONFIG_PAGE_TABLE_ISOLATION #ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
/* /*
* Each PGD needs to be 8k long and 8k aligned. We do not * Each PGD needs to be 8k long and 8k aligned. We do not
* ever go out to userspace with these, so we do not * ever go out to userspace with these, so we do not


@ -324,7 +324,7 @@ static int can_optimize(unsigned long paddr)
* However, the kernel built with retpolines or IBT has jump * However, the kernel built with retpolines or IBT has jump
* tables disabled so the check can be skipped altogether. * tables disabled so the check can be skipped altogether.
*/ */
if (!IS_ENABLED(CONFIG_RETPOLINE) && if (!IS_ENABLED(CONFIG_MITIGATION_RETPOLINE) &&
!IS_ENABLED(CONFIG_X86_KERNEL_IBT) && !IS_ENABLED(CONFIG_X86_KERNEL_IBT) &&
insn_is_indirect_jump(&insn)) insn_is_indirect_jump(&insn))
return 0; return 0;


@ -184,7 +184,7 @@ static struct ldt_struct *alloc_ldt_struct(unsigned int num_entries)
return new_ldt; return new_ldt;
} }
#ifdef CONFIG_PAGE_TABLE_ISOLATION #ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
static void do_sanity_check(struct mm_struct *mm, static void do_sanity_check(struct mm_struct *mm,
bool had_kernel_mapping, bool had_kernel_mapping,
@ -377,7 +377,7 @@ static void unmap_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt)
flush_tlb_mm_range(mm, va, va + nr_pages * PAGE_SIZE, PAGE_SHIFT, false); flush_tlb_mm_range(mm, va, va + nr_pages * PAGE_SIZE, PAGE_SHIFT, false);
} }
#else /* !CONFIG_PAGE_TABLE_ISOLATION */ #else /* !CONFIG_MITIGATION_PAGE_TABLE_ISOLATION */
static int static int
map_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt, int slot) map_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt, int slot)
@ -388,11 +388,11 @@ map_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt, int slot)
static void unmap_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt) static void unmap_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt)
{ {
} }
#endif /* CONFIG_PAGE_TABLE_ISOLATION */ #endif /* CONFIG_MITIGATION_PAGE_TABLE_ISOLATION */
static void free_ldt_pgtables(struct mm_struct *mm) static void free_ldt_pgtables(struct mm_struct *mm)
{ {
#ifdef CONFIG_PAGE_TABLE_ISOLATION #ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
struct mmu_gather tlb; struct mmu_gather tlb;
unsigned long start = LDT_BASE_ADDR; unsigned long start = LDT_BASE_ADDR;
unsigned long end = LDT_END_ADDR; unsigned long end = LDT_END_ADDR;


@ -845,31 +845,6 @@ void __noreturn stop_this_cpu(void *dummy)
} }
} }
/*
* AMD Erratum 400 aware idle routine. We handle it the same way as C3 power
* states (local apic timer and TSC stop).
*
* XXX this function is completely buggered vs RCU and tracing.
*/
static void amd_e400_idle(void)
{
/*
* We cannot use static_cpu_has_bug() here because X86_BUG_AMD_APIC_C1E
* gets set after static_cpu_has() places have been converted via
* alternatives.
*/
if (!boot_cpu_has_bug(X86_BUG_AMD_APIC_C1E)) {
default_idle();
return;
}
tick_broadcast_enter();
default_idle();
tick_broadcast_exit();
}
/* /*
* Prefer MWAIT over HALT if MWAIT is supported, MWAIT_CPUID leaf * Prefer MWAIT over HALT if MWAIT is supported, MWAIT_CPUID leaf
* exists and whenever MONITOR/MWAIT extensions are present there is at * exists and whenever MONITOR/MWAIT extensions are present there is at
@ -878,21 +853,22 @@ static void amd_e400_idle(void)
* Do not prefer MWAIT if MONITOR instruction has a bug or idle=nomwait * Do not prefer MWAIT if MONITOR instruction has a bug or idle=nomwait
* is passed to kernel commandline parameter. * is passed to kernel commandline parameter.
*/ */
static int prefer_mwait_c1_over_halt(const struct cpuinfo_x86 *c) static __init bool prefer_mwait_c1_over_halt(void)
{ {
const struct cpuinfo_x86 *c = &boot_cpu_data;
u32 eax, ebx, ecx, edx; u32 eax, ebx, ecx, edx;
/* User has disallowed the use of MWAIT. Fallback to HALT */ /* If override is enforced on the command line, fall back to HALT. */
if (boot_option_idle_override == IDLE_NOMWAIT) if (boot_option_idle_override != IDLE_NO_OVERRIDE)
return 0; return false;
/* MWAIT is not supported on this platform. Fallback to HALT */ /* MWAIT is not supported on this platform. Fallback to HALT */
if (!cpu_has(c, X86_FEATURE_MWAIT)) if (!cpu_has(c, X86_FEATURE_MWAIT))
return 0; return false;
/* Monitor has a bug. Fallback to HALT */ /* Monitor has a bug or APIC stops in C1E. Fallback to HALT */
if (boot_cpu_has_bug(X86_BUG_MONITOR)) if (boot_cpu_has_bug(X86_BUG_MONITOR) || boot_cpu_has_bug(X86_BUG_AMD_APIC_C1E))
return 0; return false;
cpuid(CPUID_MWAIT_LEAF, &eax, &ebx, &ecx, &edx); cpuid(CPUID_MWAIT_LEAF, &eax, &ebx, &ecx, &edx);
@ -901,13 +877,13 @@ static int prefer_mwait_c1_over_halt(const struct cpuinfo_x86 *c)
* with EAX=0, ECX=0. * with EAX=0, ECX=0.
*/ */
if (!(ecx & CPUID5_ECX_EXTENSIONS_SUPPORTED)) if (!(ecx & CPUID5_ECX_EXTENSIONS_SUPPORTED))
return 1; return true;
/* /*
* If MWAIT extensions are available, there should be at least one * If MWAIT extensions are available, there should be at least one
* MWAIT C1 substate present. * MWAIT C1 substate present.
*/ */
return (edx & MWAIT_C1_SUBSTATE_MASK); return !!(edx & MWAIT_C1_SUBSTATE_MASK);
} }
/* /*
@ -933,26 +909,27 @@ static __cpuidle void mwait_idle(void)
__current_clr_polling(); __current_clr_polling();
} }
void select_idle_routine(const struct cpuinfo_x86 *c) void __init select_idle_routine(void)
{ {
#ifdef CONFIG_SMP if (boot_option_idle_override == IDLE_POLL) {
if (boot_option_idle_override == IDLE_POLL && __max_threads_per_core > 1) if (IS_ENABLED(CONFIG_SMP) && __max_threads_per_core > 1)
pr_warn_once("WARNING: polling idle and HT enabled, performance may degrade\n"); pr_warn_once("WARNING: polling idle and HT enabled, performance may degrade\n");
#endif return;
if (x86_idle_set() || boot_option_idle_override == IDLE_POLL) }
/* Required to guard against xen_set_default_idle() */
if (x86_idle_set())
return; return;
if (boot_cpu_has_bug(X86_BUG_AMD_E400)) { if (prefer_mwait_c1_over_halt()) {
pr_info("using AMD E400 aware idle routine\n");
static_call_update(x86_idle, amd_e400_idle);
} else if (prefer_mwait_c1_over_halt(c)) {
pr_info("using mwait in idle threads\n"); pr_info("using mwait in idle threads\n");
static_call_update(x86_idle, mwait_idle); static_call_update(x86_idle, mwait_idle);
} else if (cpu_feature_enabled(X86_FEATURE_TDX_GUEST)) { } else if (cpu_feature_enabled(X86_FEATURE_TDX_GUEST)) {
pr_info("using TDX aware idle routine\n"); pr_info("using TDX aware idle routine\n");
static_call_update(x86_idle, tdx_safe_halt); static_call_update(x86_idle, tdx_safe_halt);
} else } else {
static_call_update(x86_idle, default_idle); static_call_update(x86_idle, default_idle);
}
} }
void amd_e400_c1e_apic_setup(void) void amd_e400_c1e_apic_setup(void)
@ -985,7 +962,10 @@ void __init arch_post_acpi_subsys_init(void)
if (!boot_cpu_has(X86_FEATURE_NONSTOP_TSC)) if (!boot_cpu_has(X86_FEATURE_NONSTOP_TSC))
mark_tsc_unstable("TSC halt in AMD C1E"); mark_tsc_unstable("TSC halt in AMD C1E");
pr_info("System has AMD C1E enabled\n");
if (IS_ENABLED(CONFIG_GENERIC_CLOCKEVENTS_BROADCAST_IDLE))
static_branch_enable(&arch_needs_tick_broadcast);
pr_info("System has AMD C1E erratum E400. Workaround enabled.\n");
} }
static int __init idle_setup(char *str) static int __init idle_setup(char *str)
@ -998,24 +978,14 @@ static int __init idle_setup(char *str)
boot_option_idle_override = IDLE_POLL; boot_option_idle_override = IDLE_POLL;
cpu_idle_poll_ctrl(true); cpu_idle_poll_ctrl(true);
} else if (!strcmp(str, "halt")) { } else if (!strcmp(str, "halt")) {
/* /* 'idle=halt' HALT for idle. C-states are disabled. */
* When the boot option of idle=halt is added, halt is
* forced to be used for CPU idle. In such case CPU C2/C3
* won't be used again.
* To continue to load the CPU idle driver, don't touch
* the boot_option_idle_override.
*/
static_call_update(x86_idle, default_idle);
boot_option_idle_override = IDLE_HALT; boot_option_idle_override = IDLE_HALT;
} else if (!strcmp(str, "nomwait")) { } else if (!strcmp(str, "nomwait")) {
/* /* 'idle=nomwait' disables MWAIT for idle */
* If the boot option of "idle=nomwait" is added,
* it means that mwait will be disabled for CPU C1/C2/C3
* states.
*/
boot_option_idle_override = IDLE_NOMWAIT; boot_option_idle_override = IDLE_NOMWAIT;
} else } else {
return -1; return -EINVAL;
}
return 0; return 0;
} }
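
For reference, the values accepted by the 'idle=' parameter parsed above (behaviour as described by the comments):

	idle=poll	# busy-poll in the idle loop			(IDLE_POLL)
	idle=halt	# HLT only, C-states are disabled		(IDLE_HALT)
	idle=nomwait	# do not use MWAIT for idle			(IDLE_NOMWAIT)
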
@ -1030,7 +1000,10 @@ unsigned long arch_align_stack(unsigned long sp)
unsigned long arch_randomize_brk(struct mm_struct *mm) unsigned long arch_randomize_brk(struct mm_struct *mm)
{ {
return randomize_page(mm->brk, 0x02000000); if (mmap_is_ia32())
return randomize_page(mm->brk, SZ_32M);
return randomize_page(mm->brk, SZ_1G);
} }
/* /*


@ -156,13 +156,12 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
{ {
struct thread_struct *prev = &prev_p->thread, struct thread_struct *prev = &prev_p->thread,
*next = &next_p->thread; *next = &next_p->thread;
struct fpu *prev_fpu = &prev->fpu;
int cpu = smp_processor_id(); int cpu = smp_processor_id();
/* never put a printk in __switch_to... printk() calls wake_up*() indirectly */ /* never put a printk in __switch_to... printk() calls wake_up*() indirectly */
if (!test_thread_flag(TIF_NEED_FPU_LOAD)) if (!test_tsk_thread_flag(prev_p, TIF_NEED_FPU_LOAD))
switch_fpu_prepare(prev_fpu, cpu); switch_fpu_prepare(prev_p, cpu);
/* /*
* Save away %gs. No need to save %fs, as it was saved on the * Save away %gs. No need to save %fs, as it was saved on the
@ -209,7 +208,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
raw_cpu_write(pcpu_hot.current_task, next_p); raw_cpu_write(pcpu_hot.current_task, next_p);
switch_fpu_finish(); switch_fpu_finish(next_p);
/* Load the Intel cache allocation PQR MSR. */ /* Load the Intel cache allocation PQR MSR. */
resctrl_sched_in(next_p); resctrl_sched_in(next_p);


@ -611,14 +611,13 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
{ {
struct thread_struct *prev = &prev_p->thread; struct thread_struct *prev = &prev_p->thread;
struct thread_struct *next = &next_p->thread; struct thread_struct *next = &next_p->thread;
struct fpu *prev_fpu = &prev->fpu;
int cpu = smp_processor_id(); int cpu = smp_processor_id();
WARN_ON_ONCE(IS_ENABLED(CONFIG_DEBUG_ENTRY) && WARN_ON_ONCE(IS_ENABLED(CONFIG_DEBUG_ENTRY) &&
this_cpu_read(pcpu_hot.hardirq_stack_inuse)); this_cpu_read(pcpu_hot.hardirq_stack_inuse));
if (!test_thread_flag(TIF_NEED_FPU_LOAD)) if (!test_tsk_thread_flag(prev_p, TIF_NEED_FPU_LOAD))
switch_fpu_prepare(prev_fpu, cpu); switch_fpu_prepare(prev_p, cpu);
/* We must save %fs and %gs before load_TLS() because /* We must save %fs and %gs before load_TLS() because
* %fs and %gs may be cleared by load_TLS(). * %fs and %gs may be cleared by load_TLS().
@ -672,7 +671,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
raw_cpu_write(pcpu_hot.current_task, next_p); raw_cpu_write(pcpu_hot.current_task, next_p);
raw_cpu_write(pcpu_hot.top_of_stack, task_top_of_stack(next_p)); raw_cpu_write(pcpu_hot.top_of_stack, task_top_of_stack(next_p));
switch_fpu_finish(); switch_fpu_finish(next_p);
/* Reload sp0. */ /* Reload sp0. */
update_task_stack(next_p); update_task_stack(next_p);


@ -148,14 +148,16 @@ static int register_stop_handler(void)
static void native_stop_other_cpus(int wait) static void native_stop_other_cpus(int wait)
{ {
unsigned int cpu = smp_processor_id(); unsigned int old_cpu, this_cpu;
unsigned long flags, timeout; unsigned long flags, timeout;
if (reboot_force) if (reboot_force)
return; return;
/* Only proceed if this is the first CPU to reach this code */ /* Only proceed if this is the first CPU to reach this code */
if (atomic_cmpxchg(&stopping_cpu, -1, cpu) != -1) old_cpu = -1;
this_cpu = smp_processor_id();
if (!atomic_try_cmpxchg(&stopping_cpu, &old_cpu, this_cpu))
return; return;
/* For kexec, ensure that offline CPUs are out of MWAIT and in HLT */ /* For kexec, ensure that offline CPUs are out of MWAIT and in HLT */
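
The two idioms are equivalent; a short sketch of the difference (variable names loosely follow the new code):

	int old_cpu = -1;

	/* atomic_cmpxchg() returns the previous value, which then has to be
	 * compared separately: */
	if (atomic_cmpxchg(&stopping_cpu, -1, this_cpu) != -1)
		return;

	/* atomic_try_cmpxchg() returns success/failure directly and updates
	 * 'old_cpu' on failure, so the compiler can reuse the result the
	 * CMPXCHG instruction already produced: */
	if (!atomic_try_cmpxchg(&stopping_cpu, &old_cpu, this_cpu))
		return;
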
@ -186,7 +188,7 @@ static void native_stop_other_cpus(int wait)
* NMIs. * NMIs.
*/ */
cpumask_copy(&cpus_stop_mask, cpu_online_mask); cpumask_copy(&cpus_stop_mask, cpu_online_mask);
cpumask_clear_cpu(cpu, &cpus_stop_mask); cpumask_clear_cpu(this_cpu, &cpus_stop_mask);
if (!cpumask_empty(&cpus_stop_mask)) { if (!cpumask_empty(&cpus_stop_mask)) {
apic_send_IPI_allbutself(REBOOT_VECTOR); apic_send_IPI_allbutself(REBOOT_VECTOR);
@ -210,6 +212,8 @@ static void native_stop_other_cpus(int wait)
* CPUs to stop. * CPUs to stop.
*/ */
if (!smp_no_nmi_ipi && !register_stop_handler()) { if (!smp_no_nmi_ipi && !register_stop_handler()) {
unsigned int cpu;
pr_emerg("Shutting down cpus with NMI\n"); pr_emerg("Shutting down cpus with NMI\n");
for_each_cpu(cpu, &cpus_stop_mask) for_each_cpu(cpu, &cpus_stop_mask)


@ -172,7 +172,7 @@ void arch_static_call_transform(void *site, void *tramp, void *func, bool tail)
} }
EXPORT_SYMBOL_GPL(arch_static_call_transform); EXPORT_SYMBOL_GPL(arch_static_call_transform);
#ifdef CONFIG_RETHUNK #ifdef CONFIG_MITIGATION_RETHUNK
/* /*
* This is called by apply_returns() to fix up static call trampolines, * This is called by apply_returns() to fix up static call trampolines,
* specifically ARCH_DEFINE_STATIC_CALL_NULL_TRAMP which is recorded as * specifically ARCH_DEFINE_STATIC_CALL_NULL_TRAMP which is recorded as


@ -52,13 +52,6 @@ static unsigned long get_align_bits(void)
return va_align.bits & get_align_mask(); return va_align.bits & get_align_mask();
} }
unsigned long align_vdso_addr(unsigned long addr)
{
unsigned long align_mask = get_align_mask();
addr = (addr + align_mask) & ~align_mask;
return addr | get_align_bits();
}
static int __init control_va_addr_alignment(char *str) static int __init control_va_addr_alignment(char *str)
{ {
/* guard against enabling this on other CPU families */ /* guard against enabling this on other CPU families */


@ -774,7 +774,7 @@ DEFINE_IDTENTRY_RAW(exc_int3)
*/ */
asmlinkage __visible noinstr struct pt_regs *sync_regs(struct pt_regs *eregs) asmlinkage __visible noinstr struct pt_regs *sync_regs(struct pt_regs *eregs)
{ {
struct pt_regs *regs = (struct pt_regs *)this_cpu_read(pcpu_hot.top_of_stack) - 1; struct pt_regs *regs = (struct pt_regs *)current_top_of_stack() - 1;
if (regs != eregs) if (regs != eregs)
*regs = *eregs; *regs = *eregs;
return regs; return regs;
@ -792,7 +792,7 @@ asmlinkage __visible noinstr struct pt_regs *vc_switch_off_ist(struct pt_regs *r
* trust it and switch to the current kernel stack * trust it and switch to the current kernel stack
*/ */
if (ip_within_syscall_gap(regs)) { if (ip_within_syscall_gap(regs)) {
sp = this_cpu_read(pcpu_hot.top_of_stack); sp = current_top_of_stack();
goto sync; goto sync;
} }


@ -46,6 +46,7 @@ ENTRY(phys_startup_64)
#endif #endif
jiffies = jiffies_64; jiffies = jiffies_64;
const_pcpu_hot = pcpu_hot;
#if defined(CONFIG_X86_64) #if defined(CONFIG_X86_64)
/* /*
@ -132,7 +133,7 @@ SECTIONS
LOCK_TEXT LOCK_TEXT
KPROBES_TEXT KPROBES_TEXT
SOFTIRQENTRY_TEXT SOFTIRQENTRY_TEXT
#ifdef CONFIG_RETPOLINE #ifdef CONFIG_MITIGATION_RETPOLINE
*(.text..__x86.indirect_thunk) *(.text..__x86.indirect_thunk)
*(.text..__x86.return_thunk) *(.text..__x86.return_thunk)
#endif #endif
@ -142,7 +143,7 @@ SECTIONS
*(.text..__x86.rethunk_untrain) *(.text..__x86.rethunk_untrain)
ENTRY_TEXT ENTRY_TEXT
#ifdef CONFIG_CPU_SRSO #ifdef CONFIG_MITIGATION_SRSO
/* /*
* See the comment above srso_alias_untrain_ret()'s * See the comment above srso_alias_untrain_ret()'s
* definition. * definition.
@ -267,7 +268,7 @@ SECTIONS
} }
#endif #endif
#ifdef CONFIG_RETPOLINE #ifdef CONFIG_MITIGATION_RETPOLINE
/* /*
* List of instructions that call/jmp/jcc to retpoline thunks * List of instructions that call/jmp/jcc to retpoline thunks
* __x86_indirect_thunk_*(). These instructions can be patched along * __x86_indirect_thunk_*(). These instructions can be patched along
@ -504,11 +505,11 @@ INIT_PER_CPU(irq_stack_backing_store);
"fixed_percpu_data is not at start of per-cpu area"); "fixed_percpu_data is not at start of per-cpu area");
#endif #endif
#ifdef CONFIG_CPU_UNRET_ENTRY #ifdef CONFIG_MITIGATION_UNRET_ENTRY
. = ASSERT((retbleed_return_thunk & 0x3f) == 0, "retbleed_return_thunk not cacheline-aligned"); . = ASSERT((retbleed_return_thunk & 0x3f) == 0, "retbleed_return_thunk not cacheline-aligned");
#endif #endif
#ifdef CONFIG_CPU_SRSO #ifdef CONFIG_MITIGATION_SRSO
. = ASSERT((srso_safe_ret & 0x3f) == 0, "srso_safe_ret not cacheline-aligned"); . = ASSERT((srso_safe_ret & 0x3f) == 0, "srso_safe_ret not cacheline-aligned");
/* /*
* GNU ld cannot do XOR until 2.41. * GNU ld cannot do XOR until 2.41.


@ -262,7 +262,7 @@ static unsigned long get_guest_cr3(struct kvm_vcpu *vcpu)
static inline unsigned long kvm_mmu_get_guest_pgd(struct kvm_vcpu *vcpu, static inline unsigned long kvm_mmu_get_guest_pgd(struct kvm_vcpu *vcpu,
struct kvm_mmu *mmu) struct kvm_mmu *mmu)
{ {
if (IS_ENABLED(CONFIG_RETPOLINE) && mmu->get_guest_pgd == get_guest_cr3) if (IS_ENABLED(CONFIG_MITIGATION_RETPOLINE) && mmu->get_guest_pgd == get_guest_cr3)
return kvm_read_cr3(vcpu); return kvm_read_cr3(vcpu);
return mmu->get_guest_pgd(vcpu); return mmu->get_guest_pgd(vcpu);
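
Only the Kconfig symbol changes here, but the reason for the guard is worth spelling out (the comments below are illustrative, not from the source):

	if (IS_ENABLED(CONFIG_MITIGATION_RETPOLINE) && mmu->get_guest_pgd == get_guest_cr3)
		return kvm_read_cr3(vcpu);	/* direct call: avoids a retpoline */
	return mmu->get_guest_pgd(vcpu);	/* indirect call: goes through a retpoline */
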


@ -315,7 +315,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
if (!prefetch) if (!prefetch)
vcpu->stat.pf_taken++; vcpu->stat.pf_taken++;
if (IS_ENABLED(CONFIG_RETPOLINE) && fault.is_tdp) if (IS_ENABLED(CONFIG_MITIGATION_RETPOLINE) && fault.is_tdp)
r = kvm_tdp_page_fault(vcpu, &fault); r = kvm_tdp_page_fault(vcpu, &fault);
else else
r = vcpu->arch.mmu->page_fault(vcpu, &fault); r = vcpu->arch.mmu->page_fault(vcpu, &fault);


@ -3455,7 +3455,7 @@ int svm_invoke_exit_handler(struct kvm_vcpu *vcpu, u64 exit_code)
if (!svm_check_exit_valid(exit_code)) if (!svm_check_exit_valid(exit_code))
return svm_handle_invalid_exit(vcpu, exit_code); return svm_handle_invalid_exit(vcpu, exit_code);
#ifdef CONFIG_RETPOLINE #ifdef CONFIG_MITIGATION_RETPOLINE
if (exit_code == SVM_EXIT_MSR) if (exit_code == SVM_EXIT_MSR)
return msr_interception(vcpu); return msr_interception(vcpu);
else if (exit_code == SVM_EXIT_VINTR) else if (exit_code == SVM_EXIT_VINTR)


@ -207,7 +207,7 @@ SYM_FUNC_START(__svm_vcpu_run)
7: vmload %_ASM_AX 7: vmload %_ASM_AX
8: 8:
#ifdef CONFIG_RETPOLINE #ifdef CONFIG_MITIGATION_RETPOLINE
/* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */ /* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */
FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE
#endif #endif
@ -344,7 +344,7 @@ SYM_FUNC_START(__svm_sev_es_vcpu_run)
/* Pop @svm to RDI, guest registers have been saved already. */ /* Pop @svm to RDI, guest registers have been saved already. */
pop %_ASM_DI pop %_ASM_DI
#ifdef CONFIG_RETPOLINE #ifdef CONFIG_MITIGATION_RETPOLINE
/* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */ /* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */
FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE
#endif #endif


@ -6553,7 +6553,7 @@ static int __vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
if (exit_reason.basic >= kvm_vmx_max_exit_handlers) if (exit_reason.basic >= kvm_vmx_max_exit_handlers)
goto unexpected_vmexit; goto unexpected_vmexit;
#ifdef CONFIG_RETPOLINE #ifdef CONFIG_MITIGATION_RETPOLINE
if (exit_reason.basic == EXIT_REASON_MSR_WRITE) if (exit_reason.basic == EXIT_REASON_MSR_WRITE)
return kvm_emulate_wrmsr(vcpu); return kvm_emulate_wrmsr(vcpu);
else if (exit_reason.basic == EXIT_REASON_PREEMPTION_TIMER) else if (exit_reason.basic == EXIT_REASON_PREEMPTION_TIMER)


@ -49,7 +49,7 @@ lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc.o copy_mc_64.o
lib-$(CONFIG_INSTRUCTION_DECODER) += insn.o inat.o insn-eval.o lib-$(CONFIG_INSTRUCTION_DECODER) += insn.o inat.o insn-eval.o
lib-$(CONFIG_RANDOMIZE_BASE) += kaslr.o lib-$(CONFIG_RANDOMIZE_BASE) += kaslr.o
lib-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o lib-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
lib-$(CONFIG_RETPOLINE) += retpoline.o lib-$(CONFIG_MITIGATION_RETPOLINE) += retpoline.o
obj-y += msr.o msr-reg.o msr-reg-export.o hweight.o obj-y += msr.o msr-reg.o msr-reg-export.o hweight.o
obj-y += iomem.o obj-y += iomem.o


@@ -23,14 +23,14 @@ SYM_FUNC_START(this_cpu_cmpxchg16b_emu)
 	cli

 	/* if (*ptr == old) */
-	cmpq PER_CPU_VAR(0(%rsi)), %rax
+	cmpq __percpu (%rsi), %rax
 	jne .Lnot_same
-	cmpq PER_CPU_VAR(8(%rsi)), %rdx
+	cmpq __percpu 8(%rsi), %rdx
 	jne .Lnot_same

 	/* *ptr = new */
-	movq %rbx, PER_CPU_VAR(0(%rsi))
-	movq %rcx, PER_CPU_VAR(8(%rsi))
+	movq %rbx, __percpu (%rsi)
+	movq %rcx, __percpu 8(%rsi)

 	/* set ZF in EFLAGS to indicate success */
 	orl $X86_EFLAGS_ZF, (%rsp)
@@ -42,8 +42,8 @@ SYM_FUNC_START(this_cpu_cmpxchg16b_emu)
 	/* *ptr != old */
 	/* old = *ptr */
-	movq PER_CPU_VAR(0(%rsi)), %rax
-	movq PER_CPU_VAR(8(%rsi)), %rdx
+	movq __percpu (%rsi), %rax
+	movq __percpu 8(%rsi), %rdx

 	/* clear ZF in EFLAGS to indicate failure */
 	andl $(~X86_EFLAGS_ZF), (%rsp)


@@ -24,12 +24,12 @@ SYM_FUNC_START(cmpxchg8b_emu)
 	pushfl
 	cli

-	cmpl 0(%esi), %eax
+	cmpl (%esi), %eax
 	jne .Lnot_same
 	cmpl 4(%esi), %edx
 	jne .Lnot_same

-	movl %ebx, 0(%esi)
+	movl %ebx, (%esi)
 	movl %ecx, 4(%esi)

 	orl $X86_EFLAGS_ZF, (%esp)
@@ -38,7 +38,7 @@ SYM_FUNC_START(cmpxchg8b_emu)
 	RET

 .Lnot_same:
-	movl 0(%esi), %eax
+	movl (%esi), %eax
 	movl 4(%esi), %edx

 	andl $(~X86_EFLAGS_ZF), (%esp)
@@ -53,18 +53,30 @@ EXPORT_SYMBOL(cmpxchg8b_emu)
 #ifndef CONFIG_UML

+/*
+ * Emulate 'cmpxchg8b %fs:(%rsi)'
+ *
+ * Inputs:
+ * %esi : memory location to compare
+ * %eax : low 32 bits of old value
+ * %edx : high 32 bits of old value
+ * %ebx : low 32 bits of new value
+ * %ecx : high 32 bits of new value
+ *
+ * Notably this is not LOCK prefixed and is not safe against NMIs
+ */
 SYM_FUNC_START(this_cpu_cmpxchg8b_emu)

 	pushfl
 	cli

-	cmpl PER_CPU_VAR(0(%esi)), %eax
+	cmpl __percpu (%esi), %eax
 	jne .Lnot_same2
-	cmpl PER_CPU_VAR(4(%esi)), %edx
+	cmpl __percpu 4(%esi), %edx
 	jne .Lnot_same2

-	movl %ebx, PER_CPU_VAR(0(%esi))
-	movl %ecx, PER_CPU_VAR(4(%esi))
+	movl %ebx, __percpu (%esi)
+	movl %ecx, __percpu 4(%esi)

 	orl $X86_EFLAGS_ZF, (%esp)
@@ -72,8 +84,8 @@ SYM_FUNC_START(this_cpu_cmpxchg8b_emu)
 	RET

 .Lnot_same2:
-	movl PER_CPU_VAR(0(%esi)), %eax
-	movl PER_CPU_VAR(4(%esi)), %edx
+	movl __percpu (%esi), %eax
+	movl __percpu 4(%esi), %edx

 	andl $(~X86_EFLAGS_ZF), (%esp)

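For orientation, here is a minimal C sketch of the kind of caller that can end up in this emulation path on 32-bit kernels, going through the generic this_cpu_cmpxchg() accessor. The percpu variable and helper below are hypothetical illustrations, not code from this series:

    #include <linux/percpu-defs.h>
    #include <linux/types.h>

    static DEFINE_PER_CPU(u64, demo_pcp_value);   /* hypothetical percpu variable */

    /* Returns true if the per-CPU value was 'old_val' and has been replaced. */
    static bool demo_pcp_try_update(u64 old_val, u64 new_val)
    {
            /*
             * this_cpu_cmpxchg() returns the previous value; on 32-bit x86 a
             * 64-bit operation like this one may be routed to the
             * this_cpu_cmpxchg8b_emu helper above, which (per its comment)
             * is not LOCK prefixed and not NMI safe.
             */
            return this_cpu_cmpxchg(demo_pcp_value, old_val, new_val) == old_val;
    }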

@@ -71,7 +71,7 @@ SYM_CODE_END(__x86_indirect_thunk_array)
 #include <asm/GEN-for-each-reg.h>
 #undef GEN

-#ifdef CONFIG_CALL_DEPTH_TRACKING
+#ifdef CONFIG_MITIGATION_CALL_DEPTH_TRACKING
 .macro CALL_THUNK reg
 	.align RETPOLINE_THUNK_SIZE
@@ -127,7 +127,7 @@ SYM_CODE_END(__x86_indirect_jump_thunk_array)
 #undef GEN
 #endif

-#ifdef CONFIG_RETHUNK
+#ifdef CONFIG_MITIGATION_RETHUNK

 /*
  * Be careful here: that label cannot really be removed because in
@@ -138,7 +138,7 @@ SYM_CODE_END(__x86_indirect_jump_thunk_array)
  */
 	.section .text..__x86.return_thunk

-#ifdef CONFIG_CPU_SRSO
+#ifdef CONFIG_MITIGATION_SRSO

 /*
  * srso_alias_untrain_ret() and srso_alias_safe_ret() are placed at
@@ -225,12 +225,12 @@ SYM_CODE_END(srso_return_thunk)
 #define JMP_SRSO_UNTRAIN_RET "jmp srso_untrain_ret"
 #define JMP_SRSO_ALIAS_UNTRAIN_RET "jmp srso_alias_untrain_ret"

-#else /* !CONFIG_CPU_SRSO */
+#else /* !CONFIG_MITIGATION_SRSO */
 #define JMP_SRSO_UNTRAIN_RET "ud2"
 #define JMP_SRSO_ALIAS_UNTRAIN_RET "ud2"
-#endif /* CONFIG_CPU_SRSO */
+#endif /* CONFIG_MITIGATION_SRSO */

-#ifdef CONFIG_CPU_UNRET_ENTRY
+#ifdef CONFIG_MITIGATION_UNRET_ENTRY

 /*
  * Some generic notes on the untraining sequences:
@@ -312,11 +312,11 @@ SYM_CODE_END(retbleed_return_thunk)
 SYM_FUNC_END(retbleed_untrain_ret)

 #define JMP_RETBLEED_UNTRAIN_RET "jmp retbleed_untrain_ret"
-#else /* !CONFIG_CPU_UNRET_ENTRY */
+#else /* !CONFIG_MITIGATION_UNRET_ENTRY */
 #define JMP_RETBLEED_UNTRAIN_RET "ud2"
-#endif /* CONFIG_CPU_UNRET_ENTRY */
+#endif /* CONFIG_MITIGATION_UNRET_ENTRY */

-#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO)
+#if defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_MITIGATION_SRSO)

 SYM_FUNC_START(entry_untrain_ret)
 	ALTERNATIVE_2 JMP_RETBLEED_UNTRAIN_RET, \
@@ -325,9 +325,9 @@ SYM_FUNC_START(entry_untrain_ret)
 SYM_FUNC_END(entry_untrain_ret)
 __EXPORT_THUNK(entry_untrain_ret)

-#endif /* CONFIG_CPU_UNRET_ENTRY || CONFIG_CPU_SRSO */
+#endif /* CONFIG_MITIGATION_UNRET_ENTRY || CONFIG_MITIGATION_SRSO */

-#ifdef CONFIG_CALL_DEPTH_TRACKING
+#ifdef CONFIG_MITIGATION_CALL_DEPTH_TRACKING

 	.align 64
 SYM_FUNC_START(call_depth_return_thunk)
@@ -359,7 +359,7 @@ SYM_FUNC_START(call_depth_return_thunk)
 	int3
 SYM_FUNC_END(call_depth_return_thunk)

-#endif /* CONFIG_CALL_DEPTH_TRACKING */
+#endif /* CONFIG_MITIGATION_CALL_DEPTH_TRACKING */

 /*
  * This function name is magical and is used by -mfunction-return=thunk-extern
@@ -369,21 +369,18 @@ SYM_FUNC_END(call_depth_return_thunk)
  * 'JMP __x86_return_thunk' sites are changed to something else by
  * apply_returns().
  *
- * This should be converted eventually to call a warning function which
- * should scream loudly when the default return thunk is called after
- * alternatives have been applied.
- *
- * That warning function cannot BUG() because the bug splat cannot be
- * displayed in all possible configurations, leading to users not really
- * knowing why the machine froze.
+ * The ALTERNATIVE below adds a really loud warning to catch the case
+ * where the insufficient default return thunk ends up getting used for
+ * whatever reason like miscompilation or failure of
+ * objtool/alternatives/etc to patch all the return sites.
  */
 SYM_CODE_START(__x86_return_thunk)
 	UNWIND_HINT_FUNC
 	ANNOTATE_NOENDBR
-	ANNOTATE_UNRET_SAFE
-	ret
+	ALTERNATIVE __stringify(ANNOTATE_UNRET_SAFE; ret), \
+		   "jmp warn_thunk_thunk", X86_FEATURE_ALWAYS
 	int3
 SYM_CODE_END(__x86_return_thunk)
 EXPORT_SYMBOL(__x86_return_thunk)

-#endif /* CONFIG_RETHUNK */
+#endif /* CONFIG_MITIGATION_RETHUNK */


@@ -61,7 +61,7 @@ obj-$(CONFIG_NUMA_EMU) += numa_emulation.o
 obj-$(CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS) += pkeys.o
 obj-$(CONFIG_RANDOMIZE_MEMORY) += kaslr.o
-obj-$(CONFIG_PAGE_TABLE_ISOLATION) += pti.o
+obj-$(CONFIG_MITIGATION_PAGE_TABLE_ISOLATION) += pti.o

 obj-$(CONFIG_X86_MEM_ENCRYPT) += mem_encrypt.o
 obj-$(CONFIG_AMD_MEM_ENCRYPT) += mem_encrypt_amd.o


@@ -22,7 +22,7 @@ static int ptdump_curknl_show(struct seq_file *m, void *v)

 DEFINE_SHOW_ATTRIBUTE(ptdump_curknl);

-#ifdef CONFIG_PAGE_TABLE_ISOLATION
+#ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
 static int ptdump_curusr_show(struct seq_file *m, void *v)
 {
 	if (current->mm->pgd)
@@ -54,7 +54,7 @@ static int __init pt_dump_debug_init(void)
 	debugfs_create_file("current_kernel", 0400, dir, NULL,
 			    &ptdump_curknl_fops);

-#ifdef CONFIG_PAGE_TABLE_ISOLATION
+#ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
 	debugfs_create_file("current_user", 0400, dir, NULL,
 			    &ptdump_curusr_fops);
 #endif


@@ -408,7 +408,7 @@ void ptdump_walk_pgd_level_debugfs(struct seq_file *m, struct mm_struct *mm,
 				   bool user)
 {
 	pgd_t *pgd = mm->pgd;
-#ifdef CONFIG_PAGE_TABLE_ISOLATION
+#ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
 	if (user && boot_cpu_has(X86_FEATURE_PTI))
 		pgd = kernel_to_user_pgdp(pgd);
 #endif
@@ -418,7 +418,7 @@ EXPORT_SYMBOL_GPL(ptdump_walk_pgd_level_debugfs);

 void ptdump_walk_user_pgd_level_checkwx(void)
 {
-#ifdef CONFIG_PAGE_TABLE_ISOLATION
+#ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
 	pgd_t *pgd = INIT_PGD;

 	if (!(__supported_pte_mask & _PAGE_NX) ||


@@ -293,7 +293,7 @@ static void pgd_mop_up_pmds(struct mm_struct *mm, pgd_t *pgdp)
 	for (i = 0; i < PREALLOCATED_PMDS; i++)
 		mop_up_one_pmd(mm, &pgdp[i]);

-#ifdef CONFIG_PAGE_TABLE_ISOLATION
+#ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION

 	if (!boot_cpu_has(X86_FEATURE_PTI))
 		return;
@@ -325,7 +325,7 @@ static void pgd_prepopulate_pmd(struct mm_struct *mm, pgd_t *pgd, pmd_t *pmds[])
 	}
 }

-#ifdef CONFIG_PAGE_TABLE_ISOLATION
+#ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
 static void pgd_prepopulate_user_pmd(struct mm_struct *mm,
 				     pgd_t *k_pgd, pmd_t *pmds[])
 {


@@ -89,10 +89,10 @@
 #define CR3_HW_ASID_BITS 12

 /*
- * When enabled, PAGE_TABLE_ISOLATION consumes a single bit for
+ * When enabled, MITIGATION_PAGE_TABLE_ISOLATION consumes a single bit for
  * user/kernel switches
  */
-#ifdef CONFIG_PAGE_TABLE_ISOLATION
+#ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
 # define PTI_CONSUMED_PCID_BITS 1
 #else
 # define PTI_CONSUMED_PCID_BITS 0
@@ -114,7 +114,7 @@ static inline u16 kern_pcid(u16 asid)
 {
 	VM_WARN_ON_ONCE(asid > MAX_ASID_AVAILABLE);

-#ifdef CONFIG_PAGE_TABLE_ISOLATION
+#ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
 	/*
 	 * Make sure that the dynamic ASID space does not conflict with the
 	 * bit we are using to switch between user and kernel ASIDs.
@@ -149,7 +149,7 @@ static inline u16 kern_pcid(u16 asid)
 static inline u16 user_pcid(u16 asid)
 {
 	u16 ret = kern_pcid(asid);
-#ifdef CONFIG_PAGE_TABLE_ISOLATION
+#ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
 	ret |= 1 << X86_CR3_PTI_PCID_USER_BIT;
 #endif
 	return ret;
@@ -262,7 +262,7 @@ static void choose_new_asid(struct mm_struct *next, u64 next_tlb_gen,
 static inline void invalidate_user_asid(u16 asid)
 {
 	/* There is no user ASID if address space separation is off */
-	if (!IS_ENABLED(CONFIG_PAGE_TABLE_ISOLATION))
+	if (!IS_ENABLED(CONFIG_MITIGATION_PAGE_TABLE_ISOLATION))
 		return;

 	/*


@@ -553,7 +553,7 @@ static void emit_indirect_jump(u8 **pprog, int reg, u8 *ip)
 		emit_jump(&prog, &__x86_indirect_thunk_array[reg], ip);
 	} else {
 		EMIT2(0xFF, 0xE0 + reg);	/* jmp *%\reg */
-		if (IS_ENABLED(CONFIG_RETPOLINE) || IS_ENABLED(CONFIG_SLS))
+		if (IS_ENABLED(CONFIG_MITIGATION_RETPOLINE) || IS_ENABLED(CONFIG_MITIGATION_SLS))
 			EMIT1(0xCC);		/* int3 */
 	}
@@ -568,7 +568,7 @@ static void emit_return(u8 **pprog, u8 *ip)
 		emit_jump(&prog, x86_return_thunk, ip);
 	} else {
 		EMIT1(0xC3);		/* ret */
-		if (IS_ENABLED(CONFIG_SLS))
+		if (IS_ENABLED(CONFIG_MITIGATION_SLS))
 			EMIT1(0xCC);	/* int3 */
 	}


@@ -1273,7 +1273,7 @@ static int emit_jmp_edx(u8 **pprog, u8 *ip)
 	u8 *prog = *pprog;
 	int cnt = 0;

-#ifdef CONFIG_RETPOLINE
+#ifdef CONFIG_MITIGATION_RETPOLINE
 	EMIT1_off32(0xE9, (u8 *)__x86_indirect_thunk_edx - (ip + 5));
 #else
 	EMIT2(0xFF, 0xE2);


@@ -61,7 +61,7 @@ ifdef CONFIG_STACKPROTECTOR_STRONG
 PURGATORY_CFLAGS_REMOVE += -fstack-protector-strong
 endif

-ifdef CONFIG_RETPOLINE
+ifdef CONFIG_MITIGATION_RETPOLINE
 PURGATORY_CFLAGS_REMOVE += $(RETPOLINE_CFLAGS)
 endif


@@ -28,7 +28,7 @@
  * non-zero.
  */
 SYM_FUNC_START(xen_irq_disable_direct)
-	movb $1, PER_CPU_VAR(xen_vcpu_info) + XEN_vcpu_info_mask
+	movb $1, PER_CPU_VAR(xen_vcpu_info + XEN_vcpu_info_mask)
 	RET
 SYM_FUNC_END(xen_irq_disable_direct)
@@ -69,7 +69,7 @@ SYM_FUNC_END(check_events)
 SYM_FUNC_START(xen_irq_enable_direct)
 	FRAME_BEGIN
 	/* Unmask events */
-	movb $0, PER_CPU_VAR(xen_vcpu_info) + XEN_vcpu_info_mask
+	movb $0, PER_CPU_VAR(xen_vcpu_info + XEN_vcpu_info_mask)

 	/*
 	 * Preempt here doesn't matter because that will deal with any
@@ -78,7 +78,7 @@ SYM_FUNC_START(xen_irq_enable_direct)
 	 */

 	/* Test for pending */
-	testb $0xff, PER_CPU_VAR(xen_vcpu_info) + XEN_vcpu_info_pending
+	testb $0xff, PER_CPU_VAR(xen_vcpu_info + XEN_vcpu_info_pending)
 	jz 1f

 	call check_events
@@ -97,7 +97,7 @@ SYM_FUNC_END(xen_irq_enable_direct)
  * x86 use opposite senses (mask vs enable).
  */
 SYM_FUNC_START(xen_save_fl_direct)
-	testb $0xff, PER_CPU_VAR(xen_vcpu_info) + XEN_vcpu_info_mask
+	testb $0xff, PER_CPU_VAR(xen_vcpu_info + XEN_vcpu_info_mask)
 	setz %ah
 	addb %ah, %ah
 	RET
@@ -113,7 +113,7 @@ SYM_FUNC_END(xen_read_cr2);

 SYM_FUNC_START(xen_read_cr2_direct)
 	FRAME_BEGIN
-	_ASM_MOV PER_CPU_VAR(xen_vcpu_info) + XEN_vcpu_info_arch_cr2, %_ASM_AX
+	_ASM_MOV PER_CPU_VAR(xen_vcpu_info + XEN_vcpu_info_arch_cr2), %_ASM_AX
 	FRAME_END
 	RET
 SYM_FUNC_END(xen_read_cr2_direct);


@@ -35,7 +35,7 @@
 	(typeof(ptr)) (__ptr + (off)); \
 })

-#ifdef CONFIG_RETPOLINE
+#ifdef CONFIG_MITIGATION_RETPOLINE
 #define __noretpoline __attribute__((__indirect_branch__("keep")))
 #endif


@@ -209,7 +209,7 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
  */
 #define ___ADDRESSABLE(sym, __attrs) \
 	static void * __used __attrs \
-	__UNIQUE_ID(__PASTE(__addressable_,sym)) = (void *)&sym;
+	__UNIQUE_ID(__PASTE(__addressable_,sym)) = (void *)(uintptr_t)&sym;
 #define __ADDRESSABLE(sym) \
 	___ADDRESSABLE(sym, __section(".discard.addressable"))


@@ -196,6 +196,8 @@ void arch_cpu_idle(void);
 void arch_cpu_idle_prepare(void);
 void arch_cpu_idle_enter(void);
 void arch_cpu_idle_exit(void);
+void arch_tick_broadcast_enter(void);
+void arch_tick_broadcast_exit(void);
 void __noreturn arch_cpu_idle_dead(void);

 #ifdef CONFIG_ARCH_HAS_CPU_FINALIZE_INIT


@@ -2,7 +2,7 @@
 #ifndef _LINUX_INDIRECT_CALL_WRAPPER_H
 #define _LINUX_INDIRECT_CALL_WRAPPER_H

-#ifdef CONFIG_RETPOLINE
+#ifdef CONFIG_MITIGATION_RETPOLINE

 /*
  * INDIRECT_CALL_$NR - wrapper for indirect calls with $NR known builtin


@@ -885,7 +885,7 @@ static inline void module_bug_finalize(const Elf_Ehdr *hdr,
 static inline void module_bug_cleanup(struct module *mod) {}
 #endif	/* CONFIG_GENERIC_BUG */

-#ifdef CONFIG_RETPOLINE
+#ifdef CONFIG_MITIGATION_RETPOLINE
 extern bool retpoline_module_ok(bool has_retpoline);
 #else
 static inline bool retpoline_module_ok(bool has_retpoline)


@@ -131,7 +131,7 @@
  */
 .macro VALIDATE_UNRET_BEGIN
 #if defined(CONFIG_NOINSTR_VALIDATION) && \
-	(defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO))
+	(defined(CONFIG_MITIGATION_UNRET_ENTRY) || defined(CONFIG_MITIGATION_SRSO))
 .Lhere_\@:
 	.pushsection .discard.validate_unret
 	.long	.Lhere_\@ - .


@@ -2,7 +2,7 @@
 #ifndef _INCLUDE_PTI_H
 #define _INCLUDE_PTI_H

-#ifdef CONFIG_PAGE_TABLE_ISOLATION
+#ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
 #include <asm/pti.h>
 #else
 static inline void pti_init(void) { }


@@ -12,6 +12,7 @@
 #include <linux/cpumask.h>
 #include <linux/sched.h>
 #include <linux/rcupdate.h>
+#include <linux/static_key.h>

 #ifdef CONFIG_GENERIC_CLOCKEVENTS
 extern void __init tick_init(void);
@@ -69,6 +70,8 @@ enum tick_broadcast_state {
 	TICK_BROADCAST_ENTER,
 };

+extern struct static_key_false arch_needs_tick_broadcast;
+
 #ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
 extern void tick_broadcast_control(enum tick_broadcast_mode mode);
 #else


@@ -93,7 +93,7 @@ extern const struct nft_set_type nft_set_bitmap_type;
 extern const struct nft_set_type nft_set_pipapo_type;
 extern const struct nft_set_type nft_set_pipapo_avx2_type;

-#ifdef CONFIG_RETPOLINE
+#ifdef CONFIG_MITIGATION_RETPOLINE
 bool nft_rhash_lookup(const struct net *net, const struct nft_set *set,
 		      const u32 *key, const struct nft_set_ext **ext);
 bool nft_rbtree_lookup(const struct net *net, const struct nft_set *set,


@@ -4,7 +4,7 @@

 #include <net/pkt_cls.h>

-#if IS_ENABLED(CONFIG_RETPOLINE)
+#if IS_ENABLED(CONFIG_MITIGATION_RETPOLINE)

 #include <linux/cpufeature.h>
 #include <linux/static_key.h>


@@ -81,6 +81,25 @@ void __weak arch_cpu_idle(void)
 	cpu_idle_force_poll = 1;
 }

+#ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST_IDLE
+DEFINE_STATIC_KEY_FALSE(arch_needs_tick_broadcast);
+
+static inline void cond_tick_broadcast_enter(void)
+{
+	if (static_branch_unlikely(&arch_needs_tick_broadcast))
+		tick_broadcast_enter();
+}
+
+static inline void cond_tick_broadcast_exit(void)
+{
+	if (static_branch_unlikely(&arch_needs_tick_broadcast))
+		tick_broadcast_exit();
+}
+#else
+static inline void cond_tick_broadcast_enter(void) { }
+static inline void cond_tick_broadcast_exit(void) { }
+#endif
+
 /**
  * default_idle_call - Default CPU idle routine.
  *
@@ -90,6 +109,7 @@ void __cpuidle default_idle_call(void)
 {
 	instrumentation_begin();
 	if (!current_clr_polling_and_test()) {
+		cond_tick_broadcast_enter();
 		trace_cpu_idle(1, smp_processor_id());
 		stop_critical_timings();
@@ -99,6 +119,7 @@ void __cpuidle default_idle_call(void)
 		start_critical_timings();
 		trace_cpu_idle(PWR_EVENT_EXIT, smp_processor_id());
+		cond_tick_broadcast_exit();
 	}
 	local_irq_enable();
 	instrumentation_end();

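A rough sketch (not taken from this series) of how an architecture could hook into this: select GENERIC_CLOCKEVENTS_BROADCAST_IDLE in Kconfig and enable the new static key once it knows its local timer stops in deep idle states. The demo_* names below are made up for illustration:

    #include <linux/init.h>
    #include <linux/static_key.h>
    #include <linux/tick.h>

    /* Placeholder for a real platform/CPUID check. */
    static bool demo_local_timer_stops_in_idle(void)
    {
            return true;
    }

    static int __init demo_idle_broadcast_init(void)
    {
            /*
             * Makes cond_tick_broadcast_enter()/_exit() take effect in
             * default_idle_call() on this platform.
             */
            if (demo_local_timer_stops_in_idle())
                    static_branch_enable(&arch_needs_tick_broadcast);
            return 0;
    }
    early_initcall(demo_idle_broadcast_init);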

@@ -39,6 +39,11 @@ config GENERIC_CLOCKEVENTS_BROADCAST
 	bool
 	depends on GENERIC_CLOCKEVENTS

+# Handle broadcast in default_idle_call()
+config GENERIC_CLOCKEVENTS_BROADCAST_IDLE
+	bool
+	depends on GENERIC_CLOCKEVENTS_BROADCAST
+
 # Automatically adjust the min. reprogramming time for
 # clock event device
 config GENERIC_CLOCKEVENTS_MIN_ADJUST


@@ -1022,7 +1022,7 @@ static inline u64 rb_time_stamp(struct trace_buffer *buffer)
 	u64 ts;

 	/* Skip retpolines :-( */
-	if (IS_ENABLED(CONFIG_RETPOLINE) && likely(buffer->clock == trace_clock_local))
+	if (IS_ENABLED(CONFIG_MITIGATION_RETPOLINE) && likely(buffer->clock == trace_clock_local))
 		ts = trace_clock_local();
 	else
 		ts = buffer->clock();


@@ -101,7 +101,7 @@ endif
 endif

 ifdef CONFIG_NFT_CT
-ifdef CONFIG_RETPOLINE
+ifdef CONFIG_MITIGATION_RETPOLINE
 nf_tables-objs += nft_ct_fast.o
 endif
 endif


@@ -21,7 +21,7 @@
 #include <net/netfilter/nf_log.h>
 #include <net/netfilter/nft_meta.h>

-#if defined(CONFIG_RETPOLINE) && defined(CONFIG_X86)
+#if defined(CONFIG_MITIGATION_RETPOLINE) && defined(CONFIG_X86)

 static struct static_key_false nf_tables_skip_direct_calls;
@@ -207,7 +207,7 @@ static void expr_call_ops_eval(const struct nft_expr *expr,
 			       struct nft_regs *regs,
 			       struct nft_pktinfo *pkt)
 {
-#ifdef CONFIG_RETPOLINE
+#ifdef CONFIG_MITIGATION_RETPOLINE
 	unsigned long e;

 	if (nf_skip_indirect_calls())
@@ -236,7 +236,7 @@ static void expr_call_ops_eval(const struct nft_expr *expr,
 	X(e, nft_objref_map_eval);
 #undef X
 indirect_call:
-#endif /* CONFIG_RETPOLINE */
+#endif /* CONFIG_MITIGATION_RETPOLINE */

 	expr->ops->eval(expr, regs, pkt);
 }

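The retpoline-avoidance trick used above, reduced to a toy dispatcher (all demo_* names are hypothetical, not from this series): compare the ops pointer against the known handlers and call them directly, so MITIGATION_RETPOLINE builds avoid the indirect-branch thunk on the hot path:

    typedef void (*demo_eval_fn)(int pkt);

    static void demo_eval_a(int pkt) { (void)pkt; }
    static void demo_eval_b(int pkt) { (void)pkt; }

    static void demo_dispatch(demo_eval_fn fn, int pkt)
    {
    #ifdef CONFIG_MITIGATION_RETPOLINE
            /* Known handlers become direct calls the compiler can emit
             * without a retpoline thunk. */
            if (fn == demo_eval_a) {
                    demo_eval_a(pkt);
                    return;
            }
            if (fn == demo_eval_b) {
                    demo_eval_b(pkt);
                    return;
            }
    #endif
            fn(pkt);        /* fallback: indirect call, thunked if retpolines are on */
    }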

@@ -754,7 +754,7 @@ static bool nft_ct_set_reduce(struct nft_regs_track *track,
 	return false;
 }

-#ifdef CONFIG_RETPOLINE
+#ifdef CONFIG_MITIGATION_RETPOLINE
 static const struct nft_expr_ops nft_ct_get_fast_ops = {
 	.type = &nft_ct_type,
 	.size = NFT_EXPR_SIZE(sizeof(struct nft_ct)),
@@ -799,7 +799,7 @@ nft_ct_select_ops(const struct nft_ctx *ctx,
 		return ERR_PTR(-EINVAL);

 	if (tb[NFTA_CT_DREG]) {
-#ifdef CONFIG_RETPOLINE
+#ifdef CONFIG_MITIGATION_RETPOLINE
 		u32 k = ntohl(nla_get_be32(tb[NFTA_CT_KEY]));

 		switch (k) {


@@ -24,7 +24,7 @@ struct nft_lookup {
 	struct nft_set_binding binding;
 };

-#ifdef CONFIG_RETPOLINE
+#ifdef CONFIG_MITIGATION_RETPOLINE
 bool nft_set_do_lookup(const struct net *net, const struct nft_set *set,
 		       const u32 *key, const struct nft_set_ext **ext)
 {


@@ -2410,7 +2410,7 @@ static struct pernet_operations psched_net_ops = {
 	.exit = psched_net_exit,
 };

-#if IS_ENABLED(CONFIG_RETPOLINE)
+#if IS_ENABLED(CONFIG_MITIGATION_RETPOLINE)
 DEFINE_STATIC_KEY_FALSE(tc_skip_wrapper);
 #endif


@@ -254,7 +254,7 @@ objtool := $(objtree)/tools/objtool/objtool

 objtool-args-$(CONFIG_HAVE_JUMP_LABEL_HACK) += --hacks=jump_label
 objtool-args-$(CONFIG_HAVE_NOINSTR_HACK) += --hacks=noinstr
-objtool-args-$(CONFIG_CALL_DEPTH_TRACKING) += --hacks=skylake
+objtool-args-$(CONFIG_MITIGATION_CALL_DEPTH_TRACKING) += --hacks=skylake
 objtool-args-$(CONFIG_X86_KERNEL_IBT) += --ibt
 objtool-args-$(CONFIG_FINEIBT) += --cfi
 objtool-args-$(CONFIG_FTRACE_MCOUNT_USE_OBJTOOL) += --mcount
@@ -262,9 +262,9 @@ ifdef CONFIG_FTRACE_MCOUNT_USE_OBJTOOL
 objtool-args-$(CONFIG_HAVE_OBJTOOL_NOP_MCOUNT) += --mnop
 endif
 objtool-args-$(CONFIG_UNWINDER_ORC) += --orc
-objtool-args-$(CONFIG_RETPOLINE) += --retpoline
-objtool-args-$(CONFIG_RETHUNK) += --rethunk
-objtool-args-$(CONFIG_SLS) += --sls
+objtool-args-$(CONFIG_MITIGATION_RETPOLINE) += --retpoline
+objtool-args-$(CONFIG_MITIGATION_RETHUNK) += --rethunk
+objtool-args-$(CONFIG_MITIGATION_SLS) += --sls
 objtool-args-$(CONFIG_STACK_VALIDATION) += --stackval
 objtool-args-$(CONFIG_HAVE_STATIC_CALL_INLINE) += --static-call
 objtool-args-$(CONFIG_HAVE_UACCESS_VALIDATION) += --uaccess


@@ -38,7 +38,7 @@ objtool-enabled := $(or $(delay-objtool),$(CONFIG_NOINSTR_VALIDATION))
 vmlinux-objtool-args-$(delay-objtool) += $(objtool-args-y)
 vmlinux-objtool-args-$(CONFIG_GCOV_KERNEL) += --no-unreachable
 vmlinux-objtool-args-$(CONFIG_NOINSTR_VALIDATION) += --noinstr \
-	$(if $(or $(CONFIG_CPU_UNRET_ENTRY),$(CONFIG_CPU_SRSO)), --unret)
+	$(if $(or $(CONFIG_MITIGATION_UNRET_ENTRY),$(CONFIG_MITIGATION_SRSO)), --unret)

 objtool-args = $(vmlinux-objtool-args-y) --link


@@ -155,7 +155,7 @@ fn main() {
             "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128",
         );
         let mut features = "-3dnow,-3dnowa,-mmx,+soft-float".to_string();
-        if cfg.has("RETPOLINE") {
+        if cfg.has("MITIGATION_RETPOLINE") {
             features += ",+retpoline-external-thunk";
         }
         ts.push("features", features);


@@ -1848,7 +1848,7 @@ static void add_header(struct buffer *b, struct module *mod)
 	buf_printf(b,
 		   "\n"
-		   "#ifdef CONFIG_RETPOLINE\n"
+		   "#ifdef CONFIG_MITIGATION_RETPOLINE\n"
 		   "MODULE_INFO(retpoline, \"Y\");\n"
 		   "#endif\n");


@@ -44,32 +44,32 @@
 # define DISABLE_LA57	(1<<(X86_FEATURE_LA57 & 31))
 #endif

-#ifdef CONFIG_PAGE_TABLE_ISOLATION
+#ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
 # define DISABLE_PTI		0
 #else
 # define DISABLE_PTI		(1 << (X86_FEATURE_PTI & 31))
 #endif

-#ifdef CONFIG_RETPOLINE
+#ifdef CONFIG_MITIGATION_RETPOLINE
 # define DISABLE_RETPOLINE	0
 #else
 # define DISABLE_RETPOLINE	((1 << (X86_FEATURE_RETPOLINE & 31)) | \
 				 (1 << (X86_FEATURE_RETPOLINE_LFENCE & 31)))
 #endif

-#ifdef CONFIG_RETHUNK
+#ifdef CONFIG_MITIGATION_RETHUNK
 # define DISABLE_RETHUNK	0
 #else
 # define DISABLE_RETHUNK	(1 << (X86_FEATURE_RETHUNK & 31))
 #endif

-#ifdef CONFIG_CPU_UNRET_ENTRY
+#ifdef CONFIG_MITIGATION_UNRET_ENTRY
 # define DISABLE_UNRET		0
 #else
 # define DISABLE_UNRET		(1 << (X86_FEATURE_UNRET & 31))
 #endif

-#ifdef CONFIG_CALL_DEPTH_TRACKING
+#ifdef CONFIG_MITIGATION_CALL_DEPTH_TRACKING
 # define DISABLE_CALL_DEPTH_TRACKING	0
 #else
 # define DISABLE_CALL_DEPTH_TRACKING	(1 << (X86_FEATURE_CALL_DEPTH & 31))

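For context, a hedged sketch of the usual consumer of these masks -- they feed the DISABLED_MASK* tables so feature tests can be folded away at compile time; the wrapper function below is hypothetical:

    #include <asm/cpufeature.h>
    #include <linux/types.h>

    static bool demo_pti_active(void)
    {
            /*
             * With CONFIG_MITIGATION_PAGE_TABLE_ISOLATION=n, DISABLE_PTI sets
             * the matching DISABLED_MASK bit and this check collapses to
             * 'false' at build time, so PTI-only code paths are compiled out.
             */
            return cpu_feature_enabled(X86_FEATURE_PTI);
    }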

@@ -83,7 +83,7 @@ bool arch_support_alt_relocation(struct special_alt *special_alt,
  * TODO: Once we have DWARF CFI and smarter instruction decoding logic,
  * ensure the same register is used in the mov and jump instructions.
  *
- * NOTE: RETPOLINE made it harder still to decode dynamic jumps.
+ * NOTE: MITIGATION_RETPOLINE made it harder still to decode dynamic jumps.
  */
 struct reloc *arch_find_switch_table(struct objtool_file *file,
 				     struct instruction *insn)


@@ -3980,11 +3980,11 @@ static int validate_retpoline(struct objtool_file *file)

 		if (insn->type == INSN_RETURN) {
 			if (opts.rethunk) {
-				WARN_INSN(insn, "'naked' return found in RETHUNK build");
+				WARN_INSN(insn, "'naked' return found in MITIGATION_RETHUNK build");
 			} else
 				continue;
 		} else {
-			WARN_INSN(insn, "indirect %s found in RETPOLINE build",
+			WARN_INSN(insn, "indirect %s found in MITIGATION_RETPOLINE build",
 				  insn->type == INSN_JUMP_DYNAMIC ? "jump" : "call");
 		}