powerpc updates for 6.6

  - Add HOTPLUG_SMT support (/sys/devices/system/cpu/smt) and honour the
    configured SMT state when hotplugging CPUs into the system.
 
  - Combine final TLB flush and lazy TLB mm shootdown IPIs when using the Radix
    MMU to avoid a broadcast TLBIE flush on exit.
 
  - Drop the exclusion between ptrace/perf watchpoints, and drop the now unused
    associated arch hooks.
 
  - Add support for the "nohlt" command line option to disable CPU idle.
 
  - Add support for -fpatchable-function-entry for ftrace, with GCC >= 13.1.
 
  - Rework memory block size determination, and support 256MB size on systems
    with GPUs that have hotpluggable memory.
 
  - Various other small features and fixes.
 
 Thanks to: Andrew Donnellan, Aneesh Kumar K.V, Arnd Bergmann, Athira Rajeev,
 Benjamin Gray, Christophe Leroy, Frederic Barrat, Gautam Menghani, Geoff Levand,
 Hari Bathini, Immad Mir, Jialin Zhang, Joel Stanley, Jordan Niethe, Justin
 Stitt, Kajol Jain, Kees Cook, Krzysztof Kozlowski, Laurent Dufour, Liang He,
 Linus Walleij, Mahesh Salgaonkar, Masahiro Yamada, Michal Suchanek, Nageswara
 R Sastry, Nathan Chancellor, Nathan Lynch, Naveen N Rao, Nicholas Piggin, Nick
 Desaulniers, Omar Sandoval, Randy Dunlap, Reza Arbab, Rob Herring, Russell
 Currey, Sourabh Jain, Thomas Gleixner, Trevor Woerner, Uwe Kleine-König, Vaibhav
 Jain, Xiongfeng Wang, Yuan Tan, Zhang Rui, Zheng Zengkai.
 -----BEGIN PGP SIGNATURE-----
 
 iQJHBAABCAAxFiEEJFGtCPCthwEv2Y/bUevqPMjhpYAFAmTwgbwTHG1wZUBlbGxl
 cm1hbi5pZC5hdQAKCRBR6+o8yOGlgFmpD/432vipeoqvkAYsyK0xi/Y3GcY0wcyd
 WJApLXXadEbtKQrgXQ6sowWqalg5thYnQCRarg/tXKK/po3KfgwkPjGDpOL+cIdr
 12QVN2XJm9VmJ1wYJxzk+yXx4F43AdmMdr94qWAGufbTHezwb4UpzVR1NxtFrOE/
 X5TNsC2+2mdZY/ZaNHS5vsTIFv3EhQfqgjZPlIAdLn6CGc8xWT514Q/uHA8+ytM/
 HL7Hqs33DoPSvgTa5TT/2E0d0k5nO3P5KObzAjpYlireTPaBi51mpKGewcrtm0o2
 v3cBlbfx3C7pe9ZhKBK9BH8cjynfiqsVZ9/lCw/7eBNdm9tHuzG0jeS7Db9tCZXS
 fM7G2R7SoIusPTqxlBmkU5DpYslwrHiVgCyy3ijxkoA/fakVwh/GgTcMsRt73IY6
 n6DsUvWwuYHCIeIiHmHQJqCqCRtV+aMzU3AbbBHOjtdIanhlW16M686dEsgCirh7
 akRVRD5VqKaqXs34PpkRL89Xv3wZRjl6XZ3hZFfCjSYXfpXDXhgSToIskpHYhKL8
 gpY7WtG9YQP05Xz5HRCx6EluaZVeKe0lZi6fezX7Mi9AygJQO8FfXqP1mHBlEq40
 ThWtvL9D89RV6lADqqFN20XepgvKNOyAXcE4szvsnIZYUSPmZQZSPxx+DHtROaLP
 jX3ifxtxJp92pQ==
 =5g7K
 -----END PGP SIGNATURE-----

Merge tag 'powerpc-6.6-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc updates from Michael Ellerman:

 - Add HOTPLUG_SMT support (/sys/devices/system/cpu/smt) and honour the
   configured SMT state when hotplugging CPUs into the system (a usage
   sketch follows this list)

 - Combine final TLB flush and lazy TLB mm shootdown IPIs when using the
   Radix MMU to avoid a broadcast TLBIE flush on exit

 - Drop the exclusion between ptrace/perf watchpoints, and drop the now
   unused associated arch hooks

 - Add support for the "nohlt" command line option to disable CPU idle

 - Add support for -fpatchable-function-entry for ftrace, with GCC >=
   13.1

 - Rework memory block size determination, and support 256MB size on
   systems with GPUs that have hotpluggable memory

 - Various other small features and fixes
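
As a quick illustration of the first item, the hypothetical user-space
snippet below (not part of the series) reads the generic HOTPLUG_SMT files
that powerpc now wires up; the paths and value strings follow the existing
/sys/devices/system/cpu/smt ABI:

    /* Minimal sketch: query the generic SMT controls from user space. */
    #include <stdio.h>

    static void show(const char *path)
    {
            char line[64];
            FILE *f = fopen(path, "r");

            if (!f) {
                    printf("%s: not available (kernel without HOTPLUG_SMT)\n", path);
                    return;
            }
            if (fgets(line, sizeof(line), f))
                    printf("%s: %s", path, line);
            fclose(f);
    }

    int main(void)
    {
            /* "on", "off", "forceoff" or "notsupported"; writable by root */
            show("/sys/devices/system/cpu/smt/control");
            /* "1" while any sibling threads are currently online */
            show("/sys/devices/system/cpu/smt/active");
            return 0;
    }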

Thanks to Andrew Donnellan, Aneesh Kumar K.V, Arnd Bergmann, Athira
Rajeev, Benjamin Gray, Christophe Leroy, Frederic Barrat, Gautam
Menghani, Geoff Levand, Hari Bathini, Immad Mir, Jialin Zhang, Joel
Stanley, Jordan Niethe, Justin Stitt, Kajol Jain, Kees Cook, Krzysztof
Kozlowski, Laurent Dufour, Liang He, Linus Walleij, Mahesh Salgaonkar,
Masahiro Yamada, Michal Suchanek, Nageswara R Sastry, Nathan Chancellor,
Nathan Lynch, Naveen N Rao, Nicholas Piggin, Nick Desaulniers, Omar
Sandoval, Randy Dunlap, Reza Arbab, Rob Herring, Russell Currey, Sourabh
Jain, Thomas Gleixner, Trevor Woerner, Uwe Kleine-König, Vaibhav Jain,
Xiongfeng Wang, Yuan Tan, Zhang Rui, and Zheng Zengkai.

* tag 'powerpc-6.6-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (135 commits)
  macintosh/ams: linux/platform_device.h is needed
  powerpc/xmon: Reapply "Relax frame size for clang"
  powerpc/mm/book3s64: Use 256M as the upper limit with coherent device memory attached
  powerpc/mm/book3s64: Fix build error with SPARSEMEM disabled
  powerpc/iommu: Fix notifiers being shared by PCI and VIO buses
  powerpc/mpc5xxx: Add missing fwnode_handle_put()
  powerpc/config: Disable SLAB_DEBUG_ON in skiroot
  powerpc/pseries: Remove unused hcall tracing instruction
  powerpc/pseries: Fix hcall tracepoints with JUMP_LABEL=n
  powerpc: dts: add missing space before {
  powerpc/eeh: Use pci_dev_id() to simplify the code
  powerpc/64s: Move CPU -mtune options into Kconfig
  powerpc/powermac: Fix unused function warning
  powerpc/pseries: Rework lppaca_shared_proc() to avoid DEBUG_PREEMPT
  powerpc: Don't include lppaca.h in paca.h
  powerpc/pseries: Move hcall_vphn() prototype into vphn.h
  powerpc/pseries: Move VPHN constants into vphn.h
  cxl: Drop unused detach_spa()
  powerpc: Drop zalloc_maybe_bootmem()
  powerpc/powernv: Use struct opal_prd_msg in more places
  ...
commit 4ad0a4c234
Author: Linus Torvalds
Date:   2023-08-31 12:43:10 -07:00

305 changed files with 4121 additions and 3369 deletions


@ -80,3 +80,163 @@ Contact: Linux on PowerPC Developer List <linuxppc-dev@lists.ozlabs.org>
Description: read only
This sysfs file exposes the cpumask which is designated to make
HCALLs to retrieve hv-gpci pmu event counter data.
What: /sys/devices/hv_gpci/interface/processor_bus_topology
Date: July 2023
Contact: Linux on PowerPC Developer List <linuxppc-dev@lists.ozlabs.org>
Description: admin read only
This sysfs file exposes the system topology information by making HCALL
H_GET_PERF_COUNTER_INFO. The HCALL is made with counter request value
PROCESSOR_BUS_TOPOLOGY(0xD0).
* This sysfs file will be created only for power10 and above platforms.
* User needs root privileges to read data from this sysfs file.
* This sysfs file will be created, only when the HCALL returns "H_SUCCESS",
"H_AUTHORITY" or "H_PARAMETER" as the return type.
HCALL with return error type "H_AUTHORITY" can be resolved during
runtime by setting "Enable Performance Information Collection" option.
* The end user reading this sysfs file must decode the content as per
underlying platform/firmware.
Possible error codes while reading this sysfs file:
* "-EPERM" : Partition is not permitted to retrieve performance information,
required to set "Enable Performance Information Collection" option.
* "-EIO" : Can't retrieve system information because of invalid buffer length/invalid address
or because of some hardware error. Refer to getPerfCountInfo documentation for
more information.
* "-EFBIG" : System information exceeds PAGE_SIZE.
What: /sys/devices/hv_gpci/interface/processor_config
Date: July 2023
Contact: Linux on PowerPC Developer List <linuxppc-dev@lists.ozlabs.org>
Description: admin read only
This sysfs file exposes the system topology information by making HCALL
H_GET_PERF_COUNTER_INFO. The HCALL is made with counter request value
PROCESSOR_CONFIG(0x90).
* This sysfs file will be created only for power10 and above platforms.
* User needs root privileges to read data from this sysfs file.
* This sysfs file will be created, only when the HCALL returns "H_SUCCESS",
"H_AUTHORITY" or "H_PARAMETER" as the return type.
HCALL with return error type "H_AUTHORITY" can be resolved during
runtime by setting "Enable Performance Information Collection" option.
* The end user reading this sysfs file must decode the content as per
underlying platform/firmware.
Possible error codes while reading this sysfs file:
* "-EPERM" : Partition is not permitted to retrieve performance information,
required to set "Enable Performance Information Collection" option.
* "-EIO" : Can't retrieve system information because of invalid buffer length/invalid address
or because of some hardware error. Refer to getPerfCountInfo documentation for
more information.
* "-EFBIG" : System information exceeds PAGE_SIZE.
What: /sys/devices/hv_gpci/interface/affinity_domain_via_virtual_processor
Date: July 2023
Contact: Linux on PowerPC Developer List <linuxppc-dev@lists.ozlabs.org>
Description: admin read only
This sysfs file exposes the system topology information by making HCALL
H_GET_PERF_COUNTER_INFO. The HCALL is made with counter request value
AFFINITY_DOMAIN_INFORMATION_BY_VIRTUAL_PROCESSOR(0xA0).
* This sysfs file will be created only for power10 and above platforms.
* User needs root privileges to read data from this sysfs file.
* This sysfs file will be created, only when the HCALL returns "H_SUCCESS",
"H_AUTHORITY" or "H_PARAMETER" as the return type.
HCALL with return error type "H_AUTHORITY" can be resolved during
runtime by setting "Enable Performance Information Collection" option.
* The end user reading this sysfs file must decode the content as per
underlying platform/firmware.
Possible error codes while reading this sysfs file:
* "-EPERM" : Partition is not permitted to retrieve performance information,
required to set "Enable Performance Information Collection" option.
* "-EIO" : Can't retrieve system information because of invalid buffer length/invalid address
or because of some hardware error. Refer to getPerfCountInfo documentation for
more information.
* "-EFBIG" : System information exceeds PAGE_SIZE.
What: /sys/devices/hv_gpci/interface/affinity_domain_via_domain
Date: July 2023
Contact: Linux on PowerPC Developer List <linuxppc-dev@lists.ozlabs.org>
Description: admin read only
This sysfs file exposes the system topology information by making HCALL
H_GET_PERF_COUNTER_INFO. The HCALL is made with counter request value
AFFINITY_DOMAIN_INFORMATION_BY_DOMAIN(0xB0).
* This sysfs file will be created only for power10 and above platforms.
* User needs root privileges to read data from this sysfs file.
* This sysfs file will be created, only when the HCALL returns "H_SUCCESS",
"H_AUTHORITY" or "H_PARAMETER" as the return type.
HCALL with return error type "H_AUTHORITY" can be resolved during
runtime by setting "Enable Performance Information Collection" option.
* The end user reading this sysfs file must decode the content as per
underlying platform/firmware.
Possible error codes while reading this sysfs file:
* "-EPERM" : Partition is not permitted to retrieve performance information,
required to set "Enable Performance Information Collection" option.
* "-EIO" : Can't retrieve system information because of invalid buffer length/invalid address
or because of some hardware error. Refer to getPerfCountInfo documentation for
more information.
* "-EFBIG" : System information exceeds PAGE_SIZE.
What: /sys/devices/hv_gpci/interface/affinity_domain_via_partition
Date: July 2023
Contact: Linux on PowerPC Developer List <linuxppc-dev@lists.ozlabs.org>
Description: admin read only
This sysfs file exposes the system topology information by making HCALL
H_GET_PERF_COUNTER_INFO. The HCALL is made with counter request value
AFFINITY_DOMAIN_INFORMATION_BY_PARTITION(0xB1).
* This sysfs file will be created only for power10 and above platforms.
* User needs root privileges to read data from this sysfs file.
* This sysfs file will be created, only when the HCALL returns "H_SUCCESS",
"H_AUTHORITY" or "H_PARAMETER" as the return type.
HCALL with return error type "H_AUTHORITY" can be resolved during
runtime by setting "Enable Performance Information Collection" option.
* The end user reading this sysfs file must decode the content as per
underlying platform/firmware.
Possible error codes while reading this sysfs file:
* "-EPERM" : Partition is not permitted to retrieve performance information,
required to set "Enable Performance Information Collection" option.
* "-EIO" : Can't retrieve system information because of invalid buffer length/invalid address
or because of some hardware error. Refer to getPerfCountInfo documentation for
more information.
* "-EFBIG" : System information exceeds PAGE_SIZE.


@ -3753,7 +3753,7 @@
nohibernate [HIBERNATION] Disable hibernation and resume.
nohlt [ARM,ARM64,MICROBLAZE,MIPS,SH] Forces the kernel to
nohlt [ARM,ARM64,MICROBLAZE,MIPS,PPC,SH] Forces the kernel to
busy wait in do_idle() and not use the arch_cpu_idle()
implementation; requires CONFIG_GENERIC_IDLE_POLL_SETUP
to be effective. This is useful on platforms where the
@ -3889,10 +3889,10 @@
nosmp [SMP] Tells an SMP kernel to act as a UP kernel,
and disable the IO APIC. legacy for "maxcpus=0".
nosmt [KNL,MIPS,S390] Disable symmetric multithreading (SMT).
nosmt [KNL,MIPS,PPC,S390] Disable symmetric multithreading (SMT).
Equivalent to smt=1.
[KNL,X86] Disable symmetric multithreading (SMT).
[KNL,X86,PPC] Disable symmetric multithreading (SMT).
nosmt=force: Force disable SMT, cannot be undone
via the sysfs control file.


@ -15,7 +15,7 @@ that's extendable and that covers both BookE and server processors, so
that GDB doesn't need to special-case each of them. We added the
following 3 new ptrace requests.
1. PTRACE_PPC_GETHWDEBUGINFO
1. PPC_PTRACE_GETHWDBGINFO
============================
Query for GDB to discover the hardware debug features. The main info to
@ -48,7 +48,7 @@ features will have bits indicating whether there is support for::
#define PPC_DEBUG_FEATURE_DATA_BP_DAWR 0x10
#define PPC_DEBUG_FEATURE_DATA_BP_ARCH_31 0x20
2. PTRACE_SETHWDEBUG
2. PPC_PTRACE_SETHWDEBUG
Sets a hardware breakpoint or watchpoint, according to the provided structure::
@ -88,7 +88,7 @@ that the BookE supports. COMEFROM breakpoints available in server processors
are not contemplated, but that is out of the scope of this work.
ptrace will return an integer (handle) uniquely identifying the breakpoint or
watchpoint just created. This integer will be used in the PTRACE_DELHWDEBUG
watchpoint just created. This integer will be used in the PPC_PTRACE_DELHWDEBUG
request to ask for its removal. Return -ENOSPC if the requested breakpoint
can't be allocated on the registers.
@ -150,7 +150,7 @@ Some examples of using the structure to:
p.addr2 = (uint64_t) end_range;
p.condition_value = 0;
3. PTRACE_DELHWDEBUG
3. PPC_PTRACE_DELHWDEBUG
Takes an integer which identifies an existing breakpoint or watchpoint
(i.e., the value returned from PTRACE_SETHWDEBUG), and deletes the
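
For illustration (not part of this patch), a debugger drives the renamed
requests roughly as below.  The sketch assumes the UAPI definitions from
<asm/ptrace.h> and a tracee that is already attached and stopped, and it
mirrors the structure examples later in this file:

    #include <sys/types.h>
    #include <sys/ptrace.h>
    #include <asm/ptrace.h>  /* struct ppc_hw_breakpoint, PPC_PTRACE_* */
    #include <stdint.h>

    /* Set a hardware watchpoint on writes to 'address'; returns a handle. */
    static long set_write_watchpoint(pid_t pid, void *address)
    {
            struct ppc_hw_breakpoint p = {
                    .version         = PPC_DEBUG_CURRENT_VERSION,
                    .trigger_type    = PPC_BREAKPOINT_TRIGGER_WRITE,
                    .addr_mode       = PPC_BREAKPOINT_MODE_EXACT,
                    .condition_mode  = PPC_BREAKPOINT_CONDITION_NONE,
                    .addr            = (uintptr_t)address,
            };

            return ptrace(PPC_PTRACE_SETHWDEBUG, pid, 0, &p);
    }

    /* Delete it again using the handle returned by PPC_PTRACE_SETHWDEBUG. */
    static long clear_watchpoint(pid_t pid, long handle)
    {
            return ptrace(PPC_PTRACE_DELHWDEBUG, pid, 0, (void *)handle);
    }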


@ -188,6 +188,7 @@ config PPC
select DYNAMIC_FTRACE if FUNCTION_TRACER
select EDAC_ATOMIC_SCRUB
select EDAC_SUPPORT
select FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY if ARCH_USING_PATCHABLE_FUNCTION_ENTRY
select GENERIC_ATOMIC64 if PPC32
select GENERIC_CLOCKEVENTS_BROADCAST if SMP
select GENERIC_CMOS_UPDATE
@ -195,6 +196,7 @@ config PPC
select GENERIC_CPU_VULNERABILITIES if PPC_BARRIER_NOSPEC
select GENERIC_EARLY_IOREMAP
select GENERIC_GETTIMEOFDAY
select GENERIC_IDLE_POLL_SETUP
select GENERIC_IOREMAP
select GENERIC_IRQ_SHOW
select GENERIC_IRQ_SHOW_LEVEL
@ -229,8 +231,8 @@ config PPC
select HAVE_DEBUG_KMEMLEAK
select HAVE_DEBUG_STACKOVERFLOW
select HAVE_DYNAMIC_FTRACE
select HAVE_DYNAMIC_FTRACE_WITH_ARGS if MPROFILE_KERNEL || PPC32
select HAVE_DYNAMIC_FTRACE_WITH_REGS if MPROFILE_KERNEL || PPC32
select HAVE_DYNAMIC_FTRACE_WITH_ARGS if ARCH_USING_PATCHABLE_FUNCTION_ENTRY || MPROFILE_KERNEL || PPC32
select HAVE_DYNAMIC_FTRACE_WITH_REGS if ARCH_USING_PATCHABLE_FUNCTION_ENTRY || MPROFILE_KERNEL || PPC32
select HAVE_EBPF_JIT
select HAVE_EFFICIENT_UNALIGNED_ACCESS
select HAVE_FAST_GUP
@ -258,7 +260,7 @@ config PPC
select HAVE_MOD_ARCH_SPECIFIC
select HAVE_NMI if PERF_EVENTS || (PPC64 && PPC_BOOK3S)
select HAVE_OPTPROBES
select HAVE_OBJTOOL if PPC32 || MPROFILE_KERNEL
select HAVE_OBJTOOL if ARCH_USING_PATCHABLE_FUNCTION_ENTRY || MPROFILE_KERNEL || PPC32
select HAVE_OBJTOOL_MCOUNT if HAVE_OBJTOOL
select HAVE_PERF_EVENTS
select HAVE_PERF_EVENTS_NMI if PPC64
@ -275,6 +277,8 @@ config PPC
select HAVE_SYSCALL_TRACEPOINTS
select HAVE_VIRT_CPU_ACCOUNTING
select HAVE_VIRT_CPU_ACCOUNTING_GEN
select HOTPLUG_SMT if HOTPLUG_CPU
select SMT_NUM_THREADS_DYNAMIC
select HUGETLB_PAGE_SIZE_VARIABLE if PPC_BOOK3S_64 && HUGETLB_PAGE
select IOMMU_HELPER if PPC64
select IRQ_DOMAIN
@ -554,6 +558,13 @@ config MPROFILE_KERNEL
def_bool $(success,$(srctree)/arch/powerpc/tools/gcc-check-mprofile-kernel.sh $(CC) -mlittle-endian) if CPU_LITTLE_ENDIAN
def_bool $(success,$(srctree)/arch/powerpc/tools/gcc-check-mprofile-kernel.sh $(CC) -mbig-endian) if CPU_BIG_ENDIAN
config ARCH_USING_PATCHABLE_FUNCTION_ENTRY
depends on FUNCTION_TRACER && (PPC32 || PPC64_ELF_ABI_V2)
depends on $(cc-option,-fpatchable-function-entry=2)
def_bool y if PPC32
def_bool $(success,$(srctree)/arch/powerpc/tools/gcc-check-fpatchable-function-entry.sh $(CC) -mlittle-endian) if PPC64 && CPU_LITTLE_ENDIAN
def_bool $(success,$(srctree)/arch/powerpc/tools/gcc-check-fpatchable-function-entry.sh $(CC) -mbig-endian) if PPC64 && CPU_BIG_ENDIAN
config HOTPLUG_CPU
bool "Support for enabling/disabling CPUs"
depends on SMP && (PPC_PSERIES || \
@ -1126,12 +1137,6 @@ config FSL_GTM
help
Freescale General-purpose Timers support
config PCI_8260
bool
depends on PCI && 8260
select PPC_INDIRECT_PCI
default y
config FSL_RIO
bool "Freescale Embedded SRIO Controller support"
depends on RAPIDIO = y && HAVE_RAPIDIO


@ -143,18 +143,21 @@ CFLAGS-$(CONFIG_PPC32) += $(call cc-option, $(MULTIPLEWORD))
CFLAGS-$(CONFIG_PPC32) += $(call cc-option,-mno-readonly-in-sdata)
ifdef CONFIG_FUNCTION_TRACER
ifdef CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY
KBUILD_CPPFLAGS += -DCC_USING_PATCHABLE_FUNCTION_ENTRY
CC_FLAGS_FTRACE := -fpatchable-function-entry=2
else
CC_FLAGS_FTRACE := -pg
ifdef CONFIG_MPROFILE_KERNEL
CC_FLAGS_FTRACE += -mprofile-kernel
endif
endif
endif
CFLAGS-$(CONFIG_TARGET_CPU_BOOL) += -mcpu=$(CONFIG_TARGET_CPU)
AFLAGS-$(CONFIG_TARGET_CPU_BOOL) += -mcpu=$(CONFIG_TARGET_CPU)
CFLAGS-$(CONFIG_POWERPC64_CPU) += $(call cc-option,-mtune=power10, \
$(call cc-option,-mtune=power9, \
$(call cc-option,-mtune=power8)))
CFLAGS-y += $(CONFIG_TUNE_CPU)
asinstr := $(call as-instr,lis 9$(comma)foo@high,-DHAVE_AS_ATHIGH=1)
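
Purely for illustration (not part of the patch): with the flags above,
-fpatchable-function-entry=2 makes the compiler emit two NOPs at every
function entry, which ftrace later patches into a branch to its trampoline.
The equivalent per-function attribute shows the same code generation for a
single function:

    /* Hypothetical example; the kernel build applies this via CC_FLAGS_FTRACE. */
    __attribute__((patchable_function_entry(2)))
    void traced_function(void)
    {
            /* Disassembly shows two "nop" instructions ahead of this body. */
    }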


@ -124,10 +124,10 @@ crypto@80000 {
reg = <0x80000 0x20000>;
ranges = <0x0 0x80000 0x20000>;
jr@1000{
jr@1000 {
interrupts = <45 2 0 0>;
};
jr@2000{
jr@2000 {
interrupts = <57 2 0 0>;
};
};
@ -140,10 +140,10 @@ crypto@a0000 {
reg = <0xa0000 0x20000>;
ranges = <0x0 0xa0000 0x20000>;
jr@1000{
jr@1000 {
interrupts = <49 2 0 0>;
};
jr@2000{
jr@2000 {
interrupts = <50 2 0 0>;
};
};
@ -156,10 +156,10 @@ crypto@c0000 {
reg = <0xc0000 0x20000>;
ranges = <0x0 0xc0000 0x20000>;
jr@1000{
jr@1000 {
interrupts = <55 2 0 0>;
};
jr@2000{
jr@2000 {
interrupts = <56 2 0 0>;
};
};


@ -60,23 +60,23 @@ rtc@68 {
compatible = "st,m41t62";
reg = <0x68>;
};
adt7461@4c{
adt7461@4c {
compatible = "adi,adt7461";
reg = <0x4c>;
};
zl6100@21{
zl6100@21 {
compatible = "isil,zl6100";
reg = <0x21>;
};
zl6100@24{
zl6100@24 {
compatible = "isil,zl6100";
reg = <0x24>;
};
zl6100@26{
zl6100@26 {
compatible = "isil,zl6100";
reg = <0x26>;
};
zl6100@29{
zl6100@29 {
compatible = "isil,zl6100";
reg = <0x29>;
};


@ -238,7 +238,7 @@ global-utilities@e0000 {
fsl,has-rstcr;
};
power@e0070{
power@e0070 {
compatible = "fsl,mpc8536-pmc", "fsl,mpc8548-pmc";
reg = <0xe0070 0x20>;
};


@ -41,7 +41,7 @@ / {
#size-cells = <2>;
interrupt-parent = <&mpic>;
aliases{
aliases {
phy_rgmii_0 = &phy_rgmii_0;
phy_rgmii_1 = &phy_rgmii_1;
phy_sgmii_1c = &phy_sgmii_1c;
@ -165,7 +165,7 @@ adt7461@4c {
};
};
fman@400000{
fman@400000 {
ethernet@e0000 {
phy-handle = <&phy_sgmii_1c>;
phy-connection-type = "sgmii";


@ -41,7 +41,7 @@ / {
#size-cells = <2>;
interrupt-parent = <&mpic>;
aliases{
aliases {
phy_sgmii_slot2_1c = &phy_sgmii_slot2_1c;
phy_sgmii_slot2_1d = &phy_sgmii_slot2_1d;
phy_sgmii_slot2_1e = &phy_sgmii_slot2_1e;


@ -41,7 +41,7 @@ / {
#size-cells = <2>;
interrupt-parent = <&mpic>;
aliases{
aliases {
phy_rgmii1 = &phyrgmii1;
phy_rgmii2 = &phyrgmii2;
phy_sgmii3 = &phy3;


@ -140,7 +140,7 @@ clks: clock@f00 {
};
/* Power Management Controller */
pmc@1000{
pmc@1000 {
compatible = "fsl,mpc5121-pmc";
reg = <0x1000 0x100>;
interrupts = <83 0x8>;


@ -104,7 +104,7 @@ clks: clock@f00 { // Clock control
clock-names = "osc";
};
pmc@1000{ // Power Management Controller
pmc@1000 { // Power Management Controller
compatible = "fsl,mpc5121-pmc";
reg = <0x1000 0x100>;
interrupts = <83 0x2>;


@ -176,8 +176,9 @@ CONFIG_MOUSE_APPLETOUCH=y
# CONFIG_SERIO_I8042 is not set
# CONFIG_SERIO_SERPORT is not set
CONFIG_SERIAL_8250=m
CONFIG_SERIAL_PMACZILOG=m
CONFIG_SERIAL_PMACZILOG=y
CONFIG_SERIAL_PMACZILOG_TTYS=y
CONFIG_SERIAL_PMACZILOG_CONSOLE=y
CONFIG_NVRAM=y
CONFIG_I2C_CHARDEV=m
CONFIG_APM_POWER=y


@ -390,8 +390,11 @@ CONFIG_CRYPTO_SHA256=y
CONFIG_CRYPTO_WP512=m
CONFIG_CRYPTO_LZO=m
CONFIG_CRYPTO_CRC32C_VPMSUM=m
CONFIG_CRYPTO_CRCT10DIF_VPMSUM=m
CONFIG_CRYPTO_VPMSUM_TESTER=m
CONFIG_CRYPTO_MD5_PPC=m
CONFIG_CRYPTO_SHA1_PPC=m
CONFIG_CRYPTO_AES_GCM_P10=m
CONFIG_CRYPTO_DEV_NX=y
CONFIG_CRYPTO_DEV_NX_ENCRYPT=m
CONFIG_CRYPTO_DEV_VMX=y


@ -183,7 +183,6 @@ CONFIG_IP_NF_MATCH_TTL=m
CONFIG_IP_NF_FILTER=m
CONFIG_IP_NF_TARGET_REJECT=m
CONFIG_IP_NF_MANGLE=m
CONFIG_IP_NF_TARGET_CLUSTERIP=m
CONFIG_IP_NF_TARGET_ECN=m
CONFIG_IP_NF_TARGET_TTL=m
CONFIG_IP_NF_RAW=m


@ -289,7 +289,6 @@ CONFIG_LIBCRC32C=y
# CONFIG_XZ_DEC_SPARC is not set
CONFIG_PRINTK_TIME=y
CONFIG_MAGIC_SYSRQ=y
CONFIG_SLUB_DEBUG_ON=y
CONFIG_SCHED_STACK_END_CHECK=y
CONFIG_DEBUG_STACKOVERFLOW=y
CONFIG_PANIC_ON_OOPS=y


@ -100,7 +100,7 @@ config CRYPTO_AES_GCM_P10
select CRYPTO_LIB_AES
select CRYPTO_ALGAPI
select CRYPTO_AEAD
default m
select CRYPTO_SKCIPHER
help
AEAD cipher: AES cipher algorithms (FIPS-197)
GCM (Galois/Counter Mode) authenticated encryption mode (NIST SP800-38D)


@ -560,5 +560,7 @@ typedef struct immap {
cpm8xx_t im_cpm; /* Communication processor */
} immap_t;
extern immap_t __iomem *mpc8xx_immr;
#endif /* __IMMAP_8XX__ */
#endif /* __KERNEL__ */


@ -3,7 +3,6 @@ generated-y += syscall_table_32.h
generated-y += syscall_table_64.h
generated-y += syscall_table_spu.h
generic-y += agp.h
generic-y += export.h
generic-y += kvm_types.h
generic-y += mcs_spinlock.h
generic-y += qrwlock.h


@ -9,79 +9,53 @@
#ifndef __ASSEMBLY__
#include <linux/jump_label.h>
extern struct static_key_false disable_kuap_key;
static __always_inline bool kuep_is_disabled(void)
{
return !IS_ENABLED(CONFIG_PPC_KUEP);
}
#ifdef CONFIG_PPC_KUAP
#include <linux/sched.h>
#define KUAP_NONE (~0UL)
#define KUAP_ALL (~1UL)
static __always_inline bool kuap_is_disabled(void)
{
return static_branch_unlikely(&disable_kuap_key);
}
static inline void kuap_lock_one(unsigned long addr)
static __always_inline void kuap_lock_one(unsigned long addr)
{
mtsr(mfsr(addr) | SR_KS, addr);
isync(); /* Context sync required after mtsr() */
}
static inline void kuap_unlock_one(unsigned long addr)
static __always_inline void kuap_unlock_one(unsigned long addr)
{
mtsr(mfsr(addr) & ~SR_KS, addr);
isync(); /* Context sync required after mtsr() */
}
static inline void kuap_lock_all(void)
static __always_inline void uaccess_begin_32s(unsigned long addr)
{
update_user_segments(mfsr(0) | SR_KS);
isync(); /* Context sync required after mtsr() */
unsigned long tmp;
asm volatile(ASM_MMU_FTR_IFSET(
"mfsrin %0, %1;"
"rlwinm %0, %0, 0, %2;"
"mtsrin %0, %1;"
"isync", "", %3)
: "=&r"(tmp)
: "r"(addr), "i"(~SR_KS), "i"(MMU_FTR_KUAP)
: "memory");
}
static inline void kuap_unlock_all(void)
static __always_inline void uaccess_end_32s(unsigned long addr)
{
update_user_segments(mfsr(0) & ~SR_KS);
isync(); /* Context sync required after mtsr() */
unsigned long tmp;
asm volatile(ASM_MMU_FTR_IFSET(
"mfsrin %0, %1;"
"oris %0, %0, %2;"
"mtsrin %0, %1;"
"isync", "", %3)
: "=&r"(tmp)
: "r"(addr), "i"(SR_KS >> 16), "i"(MMU_FTR_KUAP)
: "memory");
}
void kuap_lock_all_ool(void);
void kuap_unlock_all_ool(void);
static inline void kuap_lock_addr(unsigned long addr, bool ool)
{
if (likely(addr != KUAP_ALL))
kuap_lock_one(addr);
else if (!ool)
kuap_lock_all();
else
kuap_lock_all_ool();
}
static inline void kuap_unlock(unsigned long addr, bool ool)
{
if (likely(addr != KUAP_ALL))
kuap_unlock_one(addr);
else if (!ool)
kuap_unlock_all();
else
kuap_unlock_all_ool();
}
static inline void __kuap_lock(void)
{
}
static inline void __kuap_save_and_lock(struct pt_regs *regs)
static __always_inline void __kuap_save_and_lock(struct pt_regs *regs)
{
unsigned long kuap = current->thread.kuap;
@ -90,18 +64,19 @@ static inline void __kuap_save_and_lock(struct pt_regs *regs)
return;
current->thread.kuap = KUAP_NONE;
kuap_lock_addr(kuap, false);
kuap_lock_one(kuap);
}
#define __kuap_save_and_lock __kuap_save_and_lock
static inline void kuap_user_restore(struct pt_regs *regs)
static __always_inline void kuap_user_restore(struct pt_regs *regs)
{
}
static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kuap)
static __always_inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kuap)
{
if (unlikely(kuap != KUAP_NONE)) {
current->thread.kuap = KUAP_NONE;
kuap_lock_addr(kuap, false);
kuap_lock_one(kuap);
}
if (likely(regs->kuap == KUAP_NONE))
@ -109,10 +84,10 @@ static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kua
current->thread.kuap = regs->kuap;
kuap_unlock(regs->kuap, false);
kuap_unlock_one(regs->kuap);
}
static inline unsigned long __kuap_get_and_assert_locked(void)
static __always_inline unsigned long __kuap_get_and_assert_locked(void)
{
unsigned long kuap = current->thread.kuap;
@ -120,9 +95,10 @@ static inline unsigned long __kuap_get_and_assert_locked(void)
return kuap;
}
#define __kuap_get_and_assert_locked __kuap_get_and_assert_locked
static __always_inline void __allow_user_access(void __user *to, const void __user *from,
u32 size, unsigned long dir)
static __always_inline void allow_user_access(void __user *to, const void __user *from,
u32 size, unsigned long dir)
{
BUILD_BUG_ON(!__builtin_constant_p(dir));
@ -130,10 +106,10 @@ static __always_inline void __allow_user_access(void __user *to, const void __us
return;
current->thread.kuap = (__force u32)to;
kuap_unlock_one((__force u32)to);
uaccess_begin_32s((__force u32)to);
}
static __always_inline void __prevent_user_access(unsigned long dir)
static __always_inline void prevent_user_access(unsigned long dir)
{
u32 kuap = current->thread.kuap;
@ -143,42 +119,51 @@ static __always_inline void __prevent_user_access(unsigned long dir)
return;
current->thread.kuap = KUAP_NONE;
kuap_lock_addr(kuap, true);
uaccess_end_32s(kuap);
}
static inline unsigned long __prevent_user_access_return(void)
static __always_inline unsigned long prevent_user_access_return(void)
{
unsigned long flags = current->thread.kuap;
if (flags != KUAP_NONE) {
current->thread.kuap = KUAP_NONE;
kuap_lock_addr(flags, true);
uaccess_end_32s(flags);
}
return flags;
}
static inline void __restore_user_access(unsigned long flags)
static __always_inline void restore_user_access(unsigned long flags)
{
if (flags != KUAP_NONE) {
current->thread.kuap = flags;
kuap_unlock(flags, true);
uaccess_begin_32s(flags);
}
}
static inline bool
static __always_inline bool
__bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
{
unsigned long kuap = regs->kuap;
if (!is_write || kuap == KUAP_ALL)
if (!is_write)
return false;
if (kuap == KUAP_NONE)
return true;
/* If faulting address doesn't match unlocked segment, unlock all */
if ((kuap ^ address) & 0xf0000000)
regs->kuap = KUAP_ALL;
/*
* If faulting address doesn't match unlocked segment, change segment.
* In case of unaligned store crossing two segments, emulate store.
*/
if ((kuap ^ address) & 0xf0000000) {
if (!(kuap & 0x0fffffff) && address > kuap - 4 && fix_alignment(regs)) {
regs_add_return_ip(regs, 4);
emulate_single_step(regs);
} else {
regs->kuap = address;
}
}
return false;
}


@ -536,58 +536,43 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
/* This low level function performs the actual PTE insertion
* Setting the PTE depends on the MMU type and other factors. It's
* an horrible mess that I'm not going to try to clean up now but
* I'm keeping it in one place rather than spread around
* Setting the PTE depends on the MMU type and other factors.
*
* First case is 32-bit in UP mode with 32-bit PTEs, we need to preserve
* the _PAGE_HASHPTE bit since we may not have invalidated the previous
* translation in the hash yet (done in a subsequent flush_tlb_xxx())
* and see we need to keep track that this PTE needs invalidating.
*
* Second case is 32-bit with 64-bit PTE. In this case, we
* can just store as long as we do the two halves in the right order
* with a barrier in between. This is possible because we take care,
* in the hash code, to pre-invalidate if the PTE was already hashed,
* which synchronizes us with any concurrent invalidation.
* In the percpu case, we fallback to the simple update preserving
* the hash bits (ie, same as the non-SMP case).
*
* Third case is 32-bit in SMP mode with 32-bit PTEs. We use the
* helper pte_update() which does an atomic update. We need to do that
* because a concurrent invalidation can clear _PAGE_HASHPTE. If it's a
* per-CPU PTE such as a kmap_atomic, we also do a simple update preserving
* the hash bits instead.
*/
static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, pte_t pte, int percpu)
{
#if defined(CONFIG_SMP) && !defined(CONFIG_PTE_64BIT)
/* First case is 32-bit Hash MMU in SMP mode with 32-bit PTEs. We use the
* helper pte_update() which does an atomic update. We need to do that
* because a concurrent invalidation can clear _PAGE_HASHPTE. If it's a
* per-CPU PTE such as a kmap_atomic, we do a simple update preserving
* the hash bits instead (ie, same as the non-SMP case)
*/
if (percpu)
*ptep = __pte((pte_val(*ptep) & _PAGE_HASHPTE)
| (pte_val(pte) & ~_PAGE_HASHPTE));
else
if ((!IS_ENABLED(CONFIG_SMP) && !IS_ENABLED(CONFIG_PTE_64BIT)) || percpu) {
*ptep = __pte((pte_val(*ptep) & _PAGE_HASHPTE) |
(pte_val(pte) & ~_PAGE_HASHPTE));
} else if (IS_ENABLED(CONFIG_PTE_64BIT)) {
if (pte_val(*ptep) & _PAGE_HASHPTE)
flush_hash_entry(mm, ptep, addr);
asm volatile("stw%X0 %2,%0; eieio; stw%X1 %L2,%1" :
"=m" (*ptep), "=m" (*((unsigned char *)ptep+4)) :
"r" (pte) : "memory");
} else {
pte_update(mm, addr, ptep, ~_PAGE_HASHPTE, pte_val(pte), 0);
#elif defined(CONFIG_PTE_64BIT)
/* Second case is 32-bit with 64-bit PTE. In this case, we
* can just store as long as we do the two halves in the right order
* with a barrier in between. This is possible because we take care,
* in the hash code, to pre-invalidate if the PTE was already hashed,
* which synchronizes us with any concurrent invalidation.
* In the percpu case, we also fallback to the simple update preserving
* the hash bits
*/
if (percpu) {
*ptep = __pte((pte_val(*ptep) & _PAGE_HASHPTE)
| (pte_val(pte) & ~_PAGE_HASHPTE));
return;
}
if (pte_val(*ptep) & _PAGE_HASHPTE)
flush_hash_entry(mm, ptep, addr);
__asm__ __volatile__("\
stw%X0 %2,%0\n\
eieio\n\
stw%X1 %L2,%1"
: "=m" (*ptep), "=m" (*((unsigned char *)ptep+4))
: "r" (pte) : "memory");
#else
/* Third case is 32-bit hash table in UP mode, we need to preserve
* the _PAGE_HASHPTE bit since we may not have invalidated the previous
* translation in the hash yet (done in a subsequent flush_tlb_xxx())
* and see we need to keep track that this PTE needs invalidating
*/
*ptep = __pte((pte_val(*ptep) & _PAGE_HASHPTE)
| (pte_val(pte) & ~_PAGE_HASHPTE));
#endif
}
/*


@ -24,7 +24,7 @@ static inline u64 pte_to_hpte_pkey_bits(u64 pteflags, unsigned long flags)
((pteflags & H_PTE_PKEY_BIT1) ? HPTE_R_KEY_BIT1 : 0x0UL) |
((pteflags & H_PTE_PKEY_BIT0) ? HPTE_R_KEY_BIT0 : 0x0UL));
if (mmu_has_feature(MMU_FTR_BOOK3S_KUAP) ||
if (mmu_has_feature(MMU_FTR_KUAP) ||
mmu_has_feature(MMU_FTR_BOOK3S_KUEP)) {
if ((pte_pkey == 0) && (flags & HPTE_USE_KERNEL_KEY))
return HASH_DEFAULT_KERNEL_KEY;


@ -31,7 +31,7 @@
mfspr \gpr2, SPRN_AMR
cmpd \gpr1, \gpr2
beq 99f
END_MMU_FTR_SECTION_NESTED_IFCLR(MMU_FTR_BOOK3S_KUAP, 68)
END_MMU_FTR_SECTION_NESTED_IFCLR(MMU_FTR_KUAP, 68)
isync
mtspr SPRN_AMR, \gpr1
@ -78,7 +78,7 @@
* No need to restore IAMR when returning to kernel space.
*/
100:
END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_BOOK3S_KUAP, 67)
END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 67)
#endif
.endm
@ -91,7 +91,7 @@
LOAD_REG_IMMEDIATE(\gpr2, AMR_KUAP_BLOCKED)
999: tdne \gpr1, \gpr2
EMIT_WARN_ENTRY 999b, __FILE__, __LINE__, (BUGFLAG_WARNING | BUGFLAG_ONCE)
END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_BOOK3S_KUAP, 67)
END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 67)
#endif
.endm
#endif
@ -130,7 +130,7 @@
*/
BEGIN_MMU_FTR_SECTION_NESTED(68)
b 100f // skip_save_amr
END_MMU_FTR_SECTION_NESTED_IFCLR(MMU_FTR_PKEY | MMU_FTR_BOOK3S_KUAP, 68)
END_MMU_FTR_SECTION_NESTED_IFCLR(MMU_FTR_PKEY | MMU_FTR_KUAP, 68)
/*
* if pkey is disabled and we are entering from userspace
@ -166,7 +166,7 @@
mtspr SPRN_AMR, \gpr2
isync
102:
END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_BOOK3S_KUAP, 69)
END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 69)
/*
* if entering from kernel we don't need save IAMR
@ -213,14 +213,14 @@ extern u64 __ro_after_init default_iamr;
* access restrictions. Because of this ignore AMR value when accessing
* userspace via kernel thread.
*/
static inline u64 current_thread_amr(void)
static __always_inline u64 current_thread_amr(void)
{
if (current->thread.regs)
return current->thread.regs->amr;
return default_amr;
}
static inline u64 current_thread_iamr(void)
static __always_inline u64 current_thread_iamr(void)
{
if (current->thread.regs)
return current->thread.regs->iamr;
@ -230,12 +230,7 @@ static inline u64 current_thread_iamr(void)
#ifdef CONFIG_PPC_KUAP
static __always_inline bool kuap_is_disabled(void)
{
return !mmu_has_feature(MMU_FTR_BOOK3S_KUAP);
}
static inline void kuap_user_restore(struct pt_regs *regs)
static __always_inline void kuap_user_restore(struct pt_regs *regs)
{
bool restore_amr = false, restore_iamr = false;
unsigned long amr, iamr;
@ -243,7 +238,7 @@ static inline void kuap_user_restore(struct pt_regs *regs)
if (!mmu_has_feature(MMU_FTR_PKEY))
return;
if (!mmu_has_feature(MMU_FTR_BOOK3S_KUAP)) {
if (!mmu_has_feature(MMU_FTR_KUAP)) {
amr = mfspr(SPRN_AMR);
if (amr != regs->amr)
restore_amr = true;
@ -274,7 +269,7 @@ static inline void kuap_user_restore(struct pt_regs *regs)
*/
}
static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long amr)
static __always_inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long amr)
{
if (likely(regs->amr == amr))
return;
@ -290,7 +285,7 @@ static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long amr
*/
}
static inline unsigned long __kuap_get_and_assert_locked(void)
static __always_inline unsigned long __kuap_get_and_assert_locked(void)
{
unsigned long amr = mfspr(SPRN_AMR);
@ -298,22 +293,16 @@ static inline unsigned long __kuap_get_and_assert_locked(void)
WARN_ON_ONCE(amr != AMR_KUAP_BLOCKED);
return amr;
}
#define __kuap_get_and_assert_locked __kuap_get_and_assert_locked
/* Do nothing, book3s/64 does that in ASM */
static inline void __kuap_lock(void)
{
}
static inline void __kuap_save_and_lock(struct pt_regs *regs)
{
}
/* __kuap_lock() not required, book3s/64 does that in ASM */
/*
* We support individually allowing read or write, but we don't support nesting
* because that would require an expensive read/modify write of the AMR.
*/
static inline unsigned long get_kuap(void)
static __always_inline unsigned long get_kuap(void)
{
/*
* We return AMR_KUAP_BLOCKED when we don't support KUAP because
@ -323,7 +312,7 @@ static inline unsigned long get_kuap(void)
* This has no effect in terms of actually blocking things on hash,
* so it doesn't break anything.
*/
if (!mmu_has_feature(MMU_FTR_BOOK3S_KUAP))
if (!mmu_has_feature(MMU_FTR_KUAP))
return AMR_KUAP_BLOCKED;
return mfspr(SPRN_AMR);
@ -331,7 +320,7 @@ static inline unsigned long get_kuap(void)
static __always_inline void set_kuap(unsigned long value)
{
if (!mmu_has_feature(MMU_FTR_BOOK3S_KUAP))
if (!mmu_has_feature(MMU_FTR_KUAP))
return;
/*
@ -343,7 +332,8 @@ static __always_inline void set_kuap(unsigned long value)
isync();
}
static inline bool __bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
static __always_inline bool
__bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
{
/*
* For radix this will be a storage protection fault (DSISR_PROTFAULT).
@ -386,12 +376,12 @@ static __always_inline void allow_user_access(void __user *to, const void __user
#else /* CONFIG_PPC_KUAP */
static inline unsigned long get_kuap(void)
static __always_inline unsigned long get_kuap(void)
{
return AMR_KUAP_BLOCKED;
}
static inline void set_kuap(unsigned long value) { }
static __always_inline void set_kuap(unsigned long value) { }
static __always_inline void allow_user_access(void __user *to, const void __user *from,
unsigned long size, unsigned long dir)
@ -406,7 +396,7 @@ static __always_inline void prevent_user_access(unsigned long dir)
do_uaccess_flush();
}
static inline unsigned long prevent_user_access_return(void)
static __always_inline unsigned long prevent_user_access_return(void)
{
unsigned long flags = get_kuap();
@ -417,7 +407,7 @@ static inline unsigned long prevent_user_access_return(void)
return flags;
}
static inline void restore_user_access(unsigned long flags)
static __always_inline void restore_user_access(unsigned long flags)
{
set_kuap(flags);
if (static_branch_unlikely(&uaccess_flush_key) && flags == AMR_KUAP_BLOCKED)


@ -71,10 +71,7 @@ extern unsigned int mmu_pid_bits;
/* Base PID to allocate from */
extern unsigned int mmu_base_pid;
/*
* memory block size used with radix translation.
*/
extern unsigned long __ro_after_init radix_mem_block_size;
extern unsigned long __ro_after_init memory_block_size;
#define PRTB_SIZE_SHIFT (mmu_pid_bits + 4)
#define PRTB_ENTRIES (1ul << mmu_pid_bits)
@ -261,7 +258,7 @@ static inline void radix_init_pseries(void) { }
#define arch_clear_mm_cpumask_cpu(cpu, mm) \
do { \
if (cpumask_test_cpu(cpu, mm_cpumask(mm))) { \
atomic_dec(&(mm)->context.active_cpus); \
dec_mm_active_cpus(mm); \
cpumask_clear_cpu(cpu, mm_cpumask(mm)); \
} \
} while (0)


@ -120,6 +120,7 @@
struct pt_regs;
void hash__do_page_fault(struct pt_regs *);
void bad_page_fault(struct pt_regs *, int);
void emulate_single_step(struct pt_regs *regs);
extern void _exception(int, struct pt_regs *, int, unsigned long);
extern void _exception_pkey(struct pt_regs *, unsigned long, int);
extern void die(const char *, struct pt_regs *, long);


@ -1080,6 +1080,9 @@ typedef struct im_idma {
#define FCC2_MEM_OFFSET FCC_MEM_OFFSET(1)
#define FCC3_MEM_OFFSET FCC_MEM_OFFSET(2)
/* Pipeline Maximum Depth */
#define MPC82XX_BCR_PLDP 0x00800000
/* Clocks and GRG's */
enum cpm_clk_dir {


@ -252,7 +252,7 @@ static inline void cpu_feature_keys_init(void) { }
* This is also required by 52xx family.
*/
#if defined(CONFIG_SMP) || defined(CONFIG_MPC10X_BRIDGE) \
|| defined(CONFIG_PPC_83xx) || defined(CONFIG_8260) \
|| defined(CONFIG_PPC_83xx) || defined(CONFIG_PPC_82xx) \
|| defined(CONFIG_PPC_MPC52xx)
#define CPU_FTR_COMMON CPU_FTR_NEED_COHERENT
#else


@ -39,6 +39,5 @@ extern rwlock_t dtl_access_lock;
extern void register_dtl_buffer(int cpu);
extern void alloc_dtl_buffers(unsigned long *time_limit);
extern long hcall_vphn(unsigned long cpu, u64 flags, __be32 *associativity);
#endif /* _ASM_POWERPC_DTL_H */


@ -292,6 +292,7 @@ extern long __start___barrier_nospec_fixup, __stop___barrier_nospec_fixup;
extern long __start__btb_flush_fixup, __stop__btb_flush_fixup;
void apply_feature_fixups(void);
void update_mmu_feature_fixups(unsigned long mask);
void setup_feature_keys(void);
#endif


@ -14,28 +14,6 @@
#include <sysdev/fsl_soc.h>
#include <asm/time.h>
#ifdef CONFIG_CPM2
#include <asm/cpm2.h>
#if defined(CONFIG_8260)
#include <asm/mpc8260.h>
#endif
#define cpm2_map(member) (&cpm2_immr->member)
#define cpm2_map_size(member, size) (&cpm2_immr->member)
#define cpm2_unmap(addr) do {} while(0)
#endif
#ifdef CONFIG_PPC_8xx
#include <asm/8xx_immap.h>
extern immap_t __iomem *mpc8xx_immr;
#define immr_map(member) (&mpc8xx_immr->member)
#define immr_map_size(member, size) (&mpc8xx_immr->member)
#define immr_unmap(addr) do {} while (0)
#endif
static inline int uart_baudrate(void)
{
return get_baudrate();


@ -11,8 +11,8 @@
#define HAVE_FUNCTION_GRAPH_RET_ADDR_PTR
/* Ignore unused weak functions which will have larger offsets */
#ifdef CONFIG_MPROFILE_KERNEL
#define FTRACE_MCOUNT_MAX_OFFSET 12
#if defined(CONFIG_MPROFILE_KERNEL) || defined(CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY)
#define FTRACE_MCOUNT_MAX_OFFSET 16
#elif defined(CONFIG_PPC32)
#define FTRACE_MCOUNT_MAX_OFFSET 8
#endif
@ -22,18 +22,26 @@ extern void _mcount(void);
static inline unsigned long ftrace_call_adjust(unsigned long addr)
{
/* relocation of mcount call site is the same as the address */
if (IS_ENABLED(CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY))
addr += MCOUNT_INSN_SIZE;
return addr;
}
unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip,
unsigned long sp);
struct module;
struct dyn_ftrace;
struct dyn_arch_ftrace {
struct module *mod;
};
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS
#define ftrace_need_init_nop() (true)
int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec);
#define ftrace_init_nop ftrace_init_nop
struct ftrace_regs {
struct pt_regs regs;
};
@ -124,15 +132,19 @@ static inline u8 this_cpu_get_ftrace_enabled(void)
{
return get_paca()->ftrace_enabled;
}
void ftrace_free_init_tramp(void);
#else /* CONFIG_PPC64 */
static inline void this_cpu_disable_ftrace(void) { }
static inline void this_cpu_enable_ftrace(void) { }
static inline void this_cpu_set_ftrace_enabled(u8 ftrace_enabled) { }
static inline u8 this_cpu_get_ftrace_enabled(void) { return 1; }
static inline void ftrace_free_init_tramp(void) { }
#endif /* CONFIG_PPC64 */
#ifdef CONFIG_FUNCTION_TRACER
extern unsigned int ftrace_tramp_text[], ftrace_tramp_init[];
void ftrace_free_init_tramp(void);
#else
static inline void ftrace_free_init_tramp(void) { }
#endif
#endif /* !__ASSEMBLY__ */
#endif /* _ASM_POWERPC_FTRACE */


@ -18,6 +18,7 @@ struct arch_hw_breakpoint {
u16 len; /* length of the target data symbol */
u16 hw_len; /* length programmed in hw */
u8 flags;
bool perf_single_step; /* temporarily uninstalled for a perf single step */
};
/* Note: Don't change the first 6 bits below as they are in the same order


@ -46,6 +46,8 @@
#include <linux/of_device.h>
#include <linux/of_platform.h>
struct platform_driver;
extern struct bus_type ibmebus_bus_type;
int ibmebus_register_driver(struct platform_driver *drv);


@ -28,6 +28,9 @@
#define IOMMU_PAGE_MASK(tblptr) (~((1 << (tblptr)->it_page_shift) - 1))
#define IOMMU_PAGE_ALIGN(addr, tblptr) ALIGN(addr, IOMMU_PAGE_SIZE(tblptr))
#define DIRECT64_PROPNAME "linux,direct64-ddr-window-info"
#define DMA64_PROPNAME "linux,dma64-ddr-window-info"
/* Boot time flags */
extern int iommu_is_off;
extern int iommu_force_on;


@ -23,7 +23,7 @@ static inline bool arch_kfence_init_pool(void)
#ifdef CONFIG_PPC64
static inline bool kfence_protect_page(unsigned long addr, bool protect)
{
struct page *page = virt_to_page(addr);
struct page *page = virt_to_page((void *)addr);
__kernel_map_pages(page, 1, !protect);


@ -6,6 +6,12 @@
#define KUAP_WRITE 2
#define KUAP_READ_WRITE (KUAP_READ | KUAP_WRITE)
#ifndef __ASSEMBLY__
#include <linux/types.h>
static __always_inline bool kuap_is_disabled(void);
#endif
#ifdef CONFIG_PPC_BOOK3S_64
#include <asm/book3s/64/kup.h>
#endif
@ -41,26 +47,24 @@ void setup_kuep(bool disabled);
#ifdef CONFIG_PPC_KUAP
void setup_kuap(bool disabled);
static __always_inline bool kuap_is_disabled(void)
{
return !mmu_has_feature(MMU_FTR_KUAP);
}
#else
static inline void setup_kuap(bool disabled) { }
static __always_inline bool kuap_is_disabled(void) { return true; }
static inline bool
static __always_inline bool
__bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
{
return false;
}
static inline void __kuap_lock(void) { }
static inline void __kuap_save_and_lock(struct pt_regs *regs) { }
static inline void kuap_user_restore(struct pt_regs *regs) { }
static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long amr) { }
static inline unsigned long __kuap_get_and_assert_locked(void)
{
return 0;
}
static __always_inline void kuap_user_restore(struct pt_regs *regs) { }
static __always_inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long amr) { }
/*
* book3s/64/kup-radix.h defines these functions for the !KUAP case to flush
@ -68,11 +72,11 @@ static inline unsigned long __kuap_get_and_assert_locked(void)
* platforms.
*/
#ifndef CONFIG_PPC_BOOK3S_64
static inline void __allow_user_access(void __user *to, const void __user *from,
unsigned long size, unsigned long dir) { }
static inline void __prevent_user_access(unsigned long dir) { }
static inline unsigned long __prevent_user_access_return(void) { return 0UL; }
static inline void __restore_user_access(unsigned long flags) { }
static __always_inline void allow_user_access(void __user *to, const void __user *from,
unsigned long size, unsigned long dir) { }
static __always_inline void prevent_user_access(unsigned long dir) { }
static __always_inline unsigned long prevent_user_access_return(void) { return 0UL; }
static __always_inline void restore_user_access(unsigned long flags) { }
#endif /* CONFIG_PPC_BOOK3S_64 */
#endif /* CONFIG_PPC_KUAP */
@ -85,29 +89,24 @@ bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
return __bad_kuap_fault(regs, address, is_write);
}
static __always_inline void kuap_assert_locked(void)
{
if (kuap_is_disabled())
return;
if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG))
__kuap_get_and_assert_locked();
}
static __always_inline void kuap_lock(void)
{
#ifdef __kuap_lock
if (kuap_is_disabled())
return;
__kuap_lock();
#endif
}
static __always_inline void kuap_save_and_lock(struct pt_regs *regs)
{
#ifdef __kuap_save_and_lock
if (kuap_is_disabled())
return;
__kuap_save_and_lock(regs);
#endif
}
static __always_inline void kuap_kernel_restore(struct pt_regs *regs, unsigned long amr)
@ -120,47 +119,19 @@ static __always_inline void kuap_kernel_restore(struct pt_regs *regs, unsigned l
static __always_inline unsigned long kuap_get_and_assert_locked(void)
{
if (kuap_is_disabled())
return 0;
return __kuap_get_and_assert_locked();
#ifdef __kuap_get_and_assert_locked
if (!kuap_is_disabled())
return __kuap_get_and_assert_locked();
#endif
return 0;
}
#ifndef CONFIG_PPC_BOOK3S_64
static __always_inline void allow_user_access(void __user *to, const void __user *from,
unsigned long size, unsigned long dir)
static __always_inline void kuap_assert_locked(void)
{
if (kuap_is_disabled())
return;
__allow_user_access(to, from, size, dir);
if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG))
kuap_get_and_assert_locked();
}
static __always_inline void prevent_user_access(unsigned long dir)
{
if (kuap_is_disabled())
return;
__prevent_user_access(dir);
}
static __always_inline unsigned long prevent_user_access_return(void)
{
if (kuap_is_disabled())
return 0;
return __prevent_user_access_return();
}
static __always_inline void restore_user_access(unsigned long flags)
{
if (kuap_is_disabled())
return;
__restore_user_access(flags);
}
#endif /* CONFIG_PPC_BOOK3S_64 */
static __always_inline void allow_read_from_user(const void __user *from, unsigned long size)
{
barrier_nospec();


@ -6,28 +6,6 @@
#ifndef _ASM_POWERPC_LPPACA_H
#define _ASM_POWERPC_LPPACA_H
/*
* The below VPHN macros are outside the __KERNEL__ check since these are
* used for compiling the vphn selftest in userspace
*/
/* The H_HOME_NODE_ASSOCIATIVITY h_call returns 6 64-bit registers. */
#define VPHN_REGISTER_COUNT 6
/*
* 6 64-bit registers unpacked into up to 24 be32 associativity values. To
* form the complete property we have to add the length in the first cell.
*/
#define VPHN_ASSOC_BUFSIZE (VPHN_REGISTER_COUNT*sizeof(u64)/sizeof(u16) + 1)
/*
* The H_HOME_NODE_ASSOCIATIVITY hcall takes two values for flags:
* 1 for retrieving associativity information for a guest cpu
* 2 for retrieving associativity information for a host/hypervisor cpu
*/
#define VPHN_FLAG_VCPU 1
#define VPHN_FLAG_PCPU 2
#ifdef __KERNEL__
/*
@ -45,6 +23,7 @@
#include <asm/types.h>
#include <asm/mmu.h>
#include <asm/firmware.h>
#include <asm/paca.h>
/*
* The lppaca is the "virtual processor area" registered with the hypervisor,
@ -127,13 +106,23 @@ struct lppaca {
*/
#define LPPACA_OLD_SHARED_PROC 2
static inline bool lppaca_shared_proc(struct lppaca *l)
#ifdef CONFIG_PPC_PSERIES
/*
* All CPUs should have the same shared proc value, so directly access the PACA
* to avoid false positives from DEBUG_PREEMPT.
*/
static inline bool lppaca_shared_proc(void)
{
struct lppaca *l = local_paca->lppaca_ptr;
if (!firmware_has_feature(FW_FEATURE_SPLPAR))
return false;
return !!(l->__old_status & LPPACA_OLD_SHARED_PROC);
}
#define get_lppaca() (get_paca()->lppaca_ptr)
#endif
/*
* SLB shadow buffer structure as defined in the PAPR. The save_area
* contains adjacent ESID and VSID pairs for each shadowed SLB. The
@ -149,8 +138,6 @@ struct slb_shadow {
} save_area[SLB_NUM_BOLTED];
} ____cacheline_aligned;
extern long hcall_vphn(unsigned long cpu, u64 flags, __be32 *associativity);
#endif /* CONFIG_PPC_BOOK3S */
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_LPPACA_H */


@ -3,7 +3,8 @@
#define __MACIO_ASIC_H__
#ifdef __KERNEL__
#include <linux/of_device.h>
#include <linux/of.h>
#include <linux/platform_device.h>
extern struct bus_type macio_bus_type;


@ -33,7 +33,7 @@
* key 0 controlling userspace addresses on radix
* Key 3 on hash
*/
#define MMU_FTR_BOOK3S_KUAP ASM_CONST(0x00000200)
#define MMU_FTR_KUAP ASM_CONST(0x00000200)
/*
* Supports KUEP feature
@ -144,11 +144,6 @@
typedef pte_t *pgtable_t;
#ifdef CONFIG_PPC_E500
#include <asm/percpu.h>
DECLARE_PER_CPU(int, next_tlbcam_idx);
#endif
enum {
MMU_FTRS_POSSIBLE =
#if defined(CONFIG_PPC_BOOK3S_604)
@ -188,7 +183,7 @@ enum {
#endif /* CONFIG_PPC_RADIX_MMU */
#endif
#ifdef CONFIG_PPC_KUAP
MMU_FTR_BOOK3S_KUAP |
MMU_FTR_KUAP |
#endif /* CONFIG_PPC_KUAP */
#ifdef CONFIG_PPC_MEM_KEYS
MMU_FTR_PKEY |


@ -127,6 +127,7 @@ static inline void inc_mm_active_cpus(struct mm_struct *mm)
static inline void dec_mm_active_cpus(struct mm_struct *mm)
{
VM_WARN_ON_ONCE(atomic_read(&mm->context.active_cpus) <= 0);
atomic_dec(&mm->context.active_cpus);
}

View file

@ -75,10 +75,6 @@ struct mod_arch_specific {
#endif
#ifdef CONFIG_DYNAMIC_FTRACE
# ifdef MODULE
asm(".section .ftrace.tramp,\"ax\",@nobits; .align 3; .previous");
# endif /* MODULE */
int module_trampoline_target(struct module *mod, unsigned long trampoline,
unsigned long *target);
int module_finalize_ftrace(struct module *mod, const Elf_Shdr *sechdrs);

View file

@ -1,22 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Since there are many different boards and no standard configuration,
* we have a unique include file for each. Rather than change every
* file that has to include MPC8260 configuration, they all include
* this one and the configuration switching is done here.
*/
#ifdef __KERNEL__
#ifndef __ASM_POWERPC_MPC8260_H__
#define __ASM_POWERPC_MPC8260_H__
#define MPC82XX_BCR_PLDP 0x00800000 /* Pipeline Maximum Depth */
#ifdef CONFIG_8260
#ifdef CONFIG_PCI_8260
#include <platforms/82xx/m82xx_pci.h>
#endif
#endif /* CONFIG_8260 */
#endif /* !__ASM_POWERPC_MPC8260_H__ */
#endif /* __KERNEL__ */


@ -9,76 +9,74 @@
#ifndef __ASSEMBLY__
#include <linux/jump_label.h>
#include <asm/reg.h>
extern struct static_key_false disable_kuap_key;
static __always_inline bool kuap_is_disabled(void)
{
return static_branch_unlikely(&disable_kuap_key);
}
static inline void __kuap_lock(void)
{
}
static inline void __kuap_save_and_lock(struct pt_regs *regs)
static __always_inline void __kuap_save_and_lock(struct pt_regs *regs)
{
regs->kuap = mfspr(SPRN_MD_AP);
mtspr(SPRN_MD_AP, MD_APG_KUAP);
}
#define __kuap_save_and_lock __kuap_save_and_lock
static inline void kuap_user_restore(struct pt_regs *regs)
static __always_inline void kuap_user_restore(struct pt_regs *regs)
{
}
static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kuap)
static __always_inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kuap)
{
mtspr(SPRN_MD_AP, regs->kuap);
}
static inline unsigned long __kuap_get_and_assert_locked(void)
#ifdef CONFIG_PPC_KUAP_DEBUG
static __always_inline unsigned long __kuap_get_and_assert_locked(void)
{
unsigned long kuap;
WARN_ON_ONCE(mfspr(SPRN_MD_AP) >> 16 != MD_APG_KUAP >> 16);
kuap = mfspr(SPRN_MD_AP);
return 0;
}
#define __kuap_get_and_assert_locked __kuap_get_and_assert_locked
#endif
if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG))
WARN_ON_ONCE(kuap >> 16 != MD_APG_KUAP >> 16);
return kuap;
static __always_inline void uaccess_begin_8xx(unsigned long val)
{
asm(ASM_MMU_FTR_IFSET("mtspr %0, %1", "", %2) : :
"i"(SPRN_MD_AP), "r"(val), "i"(MMU_FTR_KUAP) : "memory");
}
static inline void __allow_user_access(void __user *to, const void __user *from,
unsigned long size, unsigned long dir)
static __always_inline void uaccess_end_8xx(void)
{
mtspr(SPRN_MD_AP, MD_APG_INIT);
asm(ASM_MMU_FTR_IFSET("mtspr %0, %1", "", %2) : :
"i"(SPRN_MD_AP), "r"(MD_APG_KUAP), "i"(MMU_FTR_KUAP) : "memory");
}
static inline void __prevent_user_access(unsigned long dir)
static __always_inline void allow_user_access(void __user *to, const void __user *from,
unsigned long size, unsigned long dir)
{
mtspr(SPRN_MD_AP, MD_APG_KUAP);
uaccess_begin_8xx(MD_APG_INIT);
}
static inline unsigned long __prevent_user_access_return(void)
static __always_inline void prevent_user_access(unsigned long dir)
{
uaccess_end_8xx();
}
static __always_inline unsigned long prevent_user_access_return(void)
{
unsigned long flags;
flags = mfspr(SPRN_MD_AP);
mtspr(SPRN_MD_AP, MD_APG_KUAP);
uaccess_end_8xx();
return flags;
}
static inline void __restore_user_access(unsigned long flags)
static __always_inline void restore_user_access(unsigned long flags)
{
mtspr(SPRN_MD_AP, flags);
uaccess_begin_8xx(flags);
}
static inline bool
static __always_inline bool
__bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
{
return !((regs->kuap ^ MD_APG_KUAP) & 0xff000000);


@ -355,7 +355,7 @@ static inline int pte_young(pte_t pte)
#define pmd_pfn(pmd) (pmd_val(pmd) >> PAGE_SHIFT)
#else
#define pmd_page_vaddr(pmd) \
((unsigned long)(pmd_val(pmd) & ~(PTE_TABLE_SIZE - 1)))
((const void *)(pmd_val(pmd) & ~(PTE_TABLE_SIZE - 1)))
#define pmd_pfn(pmd) (__pa(pmd_val(pmd)) >> PAGE_SHIFT)
#endif


@ -127,7 +127,7 @@ static inline pte_t pmd_pte(pmd_t pmd)
#define pmd_bad(pmd) (!is_kernel_addr(pmd_val(pmd)) \
|| (pmd_val(pmd) & PMD_BAD_BITS))
#define pmd_present(pmd) (!pmd_none(pmd))
#define pmd_page_vaddr(pmd) (pmd_val(pmd) & ~PMD_MASKED_BITS)
#define pmd_page_vaddr(pmd) ((const void *)(pmd_val(pmd) & ~PMD_MASKED_BITS))
extern struct page *pmd_page(pmd_t pmd);
#define pmd_pfn(pmd) (page_to_pfn(pmd_page(pmd)))


@ -3,6 +3,7 @@
#define _ASM_POWERPC_KUP_BOOKE_H_
#include <asm/bug.h>
#include <asm/mmu.h>
#ifdef CONFIG_PPC_KUAP
@ -13,32 +14,26 @@
#else
#include <linux/jump_label.h>
#include <linux/sched.h>
#include <asm/reg.h>
extern struct static_key_false disable_kuap_key;
static __always_inline bool kuap_is_disabled(void)
{
return static_branch_unlikely(&disable_kuap_key);
}
static inline void __kuap_lock(void)
static __always_inline void __kuap_lock(void)
{
mtspr(SPRN_PID, 0);
isync();
}
#define __kuap_lock __kuap_lock
static inline void __kuap_save_and_lock(struct pt_regs *regs)
static __always_inline void __kuap_save_and_lock(struct pt_regs *regs)
{
regs->kuap = mfspr(SPRN_PID);
mtspr(SPRN_PID, 0);
isync();
}
#define __kuap_save_and_lock __kuap_save_and_lock
static inline void kuap_user_restore(struct pt_regs *regs)
static __always_inline void kuap_user_restore(struct pt_regs *regs)
{
if (kuap_is_disabled())
return;
@ -48,7 +43,7 @@ static inline void kuap_user_restore(struct pt_regs *regs)
/* Context synchronisation is performed by rfi */
}
static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kuap)
static __always_inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kuap)
{
if (regs->kuap)
mtspr(SPRN_PID, current->thread.pid);
@ -56,48 +51,55 @@ static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kua
/* Context synchronisation is performed by rfi */
}
static inline unsigned long __kuap_get_and_assert_locked(void)
#ifdef CONFIG_PPC_KUAP_DEBUG
static __always_inline unsigned long __kuap_get_and_assert_locked(void)
{
unsigned long kuap = mfspr(SPRN_PID);
WARN_ON_ONCE(mfspr(SPRN_PID));
if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG))
WARN_ON_ONCE(kuap);
return 0;
}
#define __kuap_get_and_assert_locked __kuap_get_and_assert_locked
#endif
return kuap;
static __always_inline void uaccess_begin_booke(unsigned long val)
{
asm(ASM_MMU_FTR_IFSET("mtspr %0, %1; isync", "", %2) : :
"i"(SPRN_PID), "r"(val), "i"(MMU_FTR_KUAP) : "memory");
}
static inline void __allow_user_access(void __user *to, const void __user *from,
unsigned long size, unsigned long dir)
static __always_inline void uaccess_end_booke(void)
{
mtspr(SPRN_PID, current->thread.pid);
isync();
asm(ASM_MMU_FTR_IFSET("mtspr %0, %1; isync", "", %2) : :
"i"(SPRN_PID), "r"(0), "i"(MMU_FTR_KUAP) : "memory");
}
static inline void __prevent_user_access(unsigned long dir)
static __always_inline void allow_user_access(void __user *to, const void __user *from,
unsigned long size, unsigned long dir)
{
mtspr(SPRN_PID, 0);
isync();
uaccess_begin_booke(current->thread.pid);
}
static inline unsigned long __prevent_user_access_return(void)
static __always_inline void prevent_user_access(unsigned long dir)
{
uaccess_end_booke();
}
static __always_inline unsigned long prevent_user_access_return(void)
{
unsigned long flags = mfspr(SPRN_PID);
mtspr(SPRN_PID, 0);
isync();
uaccess_end_booke();
return flags;
}
static inline void __restore_user_access(unsigned long flags)
static __always_inline void restore_user_access(unsigned long flags)
{
if (flags) {
mtspr(SPRN_PID, current->thread.pid);
isync();
}
if (flags)
uaccess_begin_booke(current->thread.pid);
}
static inline bool
static __always_inline bool
__bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
{
return !regs->kuap;


@ -319,6 +319,9 @@ extern int book3e_htw_mode;
#endif
#include <asm/percpu.h>
DECLARE_PER_CPU(int, next_tlbcam_idx);
#endif /* !__ASSEMBLY__ */
#endif /* _ASM_POWERPC_MMU_BOOK3E_H_ */


@ -15,7 +15,6 @@
#include <linux/cache.h>
#include <linux/string.h>
#include <asm/types.h>
#include <asm/lppaca.h>
#include <asm/mmu.h>
#include <asm/page.h>
#ifdef CONFIG_PPC_BOOK3E_64
@ -47,14 +46,11 @@ extern unsigned int debug_smp_processor_id(void); /* from linux/smp.h */
#define get_paca() local_paca
#endif
#ifdef CONFIG_PPC_PSERIES
#define get_lppaca() (get_paca()->lppaca_ptr)
#endif
#define get_slb_shadow() (get_paca()->slb_shadow_ptr)
struct task_struct;
struct rtas_args;
struct lppaca;
/*
* Defines the layout of the paca.


@ -9,6 +9,7 @@
#ifndef __ASSEMBLY__
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/bug.h>
#else
#include <asm/types.h>
#endif
@ -119,16 +120,6 @@ extern long long virt_phys_offset;
#define ARCH_PFN_OFFSET ((unsigned long)(MEMORY_START >> PAGE_SHIFT))
#endif
#define virt_to_pfn(kaddr) (__pa(kaddr) >> PAGE_SHIFT)
#define virt_to_page(kaddr) pfn_to_page(virt_to_pfn(kaddr))
#define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT)
#define virt_addr_valid(vaddr) ({ \
unsigned long _addr = (unsigned long)vaddr; \
_addr >= PAGE_OFFSET && _addr < (unsigned long)high_memory && \
pfn_valid(virt_to_pfn(_addr)); \
})
/*
* On Book-E parts we need __va to parse the device tree and we can't
* determine MEMORY_START until then. However we can determine PHYSICAL_START
@ -233,6 +224,25 @@ extern long long virt_phys_offset;
#endif
#endif
#ifndef __ASSEMBLY__
static inline unsigned long virt_to_pfn(const void *kaddr)
{
return __pa(kaddr) >> PAGE_SHIFT;
}
static inline const void *pfn_to_kaddr(unsigned long pfn)
{
return __va(pfn << PAGE_SHIFT);
}
#endif
#define virt_to_page(kaddr) pfn_to_page(virt_to_pfn(kaddr))
#define virt_addr_valid(vaddr) ({ \
unsigned long _addr = (unsigned long)vaddr; \
_addr >= PAGE_OFFSET && _addr < (unsigned long)high_memory && \
pfn_valid(virt_to_pfn((void *)_addr)); \
})
/*
* Unfortunately the PLT is in the BSS in the PPC32 ELF ABI,
* and needs to be executable. This means the whole heap ends


@ -6,6 +6,7 @@
#include <asm/smp.h>
#ifdef CONFIG_PPC64
#include <asm/paca.h>
#include <asm/lppaca.h>
#include <asm/hvcall.h>
#endif


@ -82,7 +82,8 @@ extern int pci_legacy_write(struct pci_bus *bus, loff_t port, u32 val,
extern int pci_mmap_legacy_page_range(struct pci_bus *bus,
struct vm_area_struct *vma,
enum pci_mmap_state mmap_state);
extern void pci_adjust_legacy_attr(struct pci_bus *bus,
enum pci_mmap_state mmap_type);
#define HAVE_PCI_LEGACY 1
extern void pcibios_claim_one_bus(struct pci_bus *b);


@ -72,9 +72,9 @@ static inline pgprot_t pte_pgprot(pte_t pte)
}
#ifndef pmd_page_vaddr
static inline unsigned long pmd_page_vaddr(pmd_t pmd)
static inline const void *pmd_page_vaddr(pmd_t pmd)
{
return ((unsigned long)__va(pmd_val(pmd) & ~PMD_MASKED_BITS));
return __va(pmd_val(pmd) & ~PMD_MASKED_BITS);
}
#define pmd_page_vaddr pmd_page_vaddr
#endif


@ -9,6 +9,7 @@
#include <asm/hvcall.h>
#include <asm/paca.h>
#include <asm/lppaca.h>
#include <asm/page.h>
static inline long poll_pending(void)


@ -397,6 +397,7 @@
#define PPC_RAW_RFCI (0x4c000066)
#define PPC_RAW_RFDI (0x4c00004e)
#define PPC_RAW_RFMCI (0x4c00004c)
#define PPC_RAW_TLBILX_LPID (0x7c000024)
#define PPC_RAW_TLBILX(t, a, b) (0x7c000024 | __PPC_T_TLB(t) | __PPC_RA0(a) | __PPC_RB(b))
#define PPC_RAW_WAIT_v203 (0x7c00007c)
#define PPC_RAW_WAIT(w, p) (0x7c00003c | __PPC_WC(w) | __PPC_PL(p))
@ -616,6 +617,7 @@
#define PPC_TLBILX(t, a, b) stringify_in_c(.long PPC_RAW_TLBILX(t, a, b))
#define PPC_TLBILX_ALL(a, b) PPC_TLBILX(0, a, b)
#define PPC_TLBILX_PID(a, b) PPC_TLBILX(1, a, b)
#define PPC_TLBILX_LPID stringify_in_c(.long PPC_RAW_TLBILX_LPID)
#define PPC_TLBILX_VA(a, b) PPC_TLBILX(3, a, b)
#define PPC_WAIT_v203 stringify_in_c(.long PPC_RAW_WAIT_v203)
#define PPC_WAIT(w, p) stringify_in_c(.long PPC_RAW_WAIT(w, p))


@ -172,11 +172,6 @@ struct thread_struct {
unsigned int align_ctl; /* alignment handling control */
#ifdef CONFIG_HAVE_HW_BREAKPOINT
struct perf_event *ptrace_bps[HBP_NUM_MAX];
/*
* Helps identify source of single-step exception and subsequent
* hw-breakpoint enablement
*/
struct perf_event *last_hit_ubp[HBP_NUM_MAX];
#endif /* CONFIG_HAVE_HW_BREAKPOINT */
struct arch_hw_breakpoint hw_brk[HBP_NUM_MAX]; /* hardware breakpoint info */
unsigned long trap_nr; /* last trap # on this thread */


@ -1414,11 +1414,9 @@ static inline void mtmsr_isync(unsigned long val)
#define mfspr(rn) ({unsigned long rval; \
asm volatile("mfspr %0," __stringify(rn) \
: "=r" (rval)); rval;})
#ifndef mtspr
#define mtspr(rn, v) asm volatile("mtspr " __stringify(rn) ",%0" : \
: "r" ((unsigned long)(v)) \
: "memory")
#endif
#define wrtspr(rn) asm volatile("mtspr " __stringify(rn) ",2" : : : "memory")
static inline void wrtee(unsigned long val)


@ -202,7 +202,9 @@ typedef struct {
#define RTAS_USER_REGION_SIZE (64 * 1024)
/* RTAS return status codes */
#define RTAS_HARDWARE_ERROR -1 /* Hardware Error */
#define RTAS_BUSY -2 /* RTAS Busy */
#define RTAS_INVALID_PARAMETER -3 /* Invalid indicator/domain/sensor etc. */
#define RTAS_EXTENDED_DELAY_MIN 9900
#define RTAS_EXTENDED_DELAY_MAX 9905
@ -425,6 +427,7 @@ extern int rtas_set_indicator(int indicator, int index, int new_value);
extern int rtas_set_indicator_fast(int indicator, int index, int new_value);
extern void rtas_progress(char *s, unsigned short hex);
int rtas_ibm_suspend_me(int *fw_status);
int rtas_error_rc(int rtas_rc);
struct rtc_time;
extern time64_t rtas_get_boot_time(void);
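
For context on the RTAS status constants added above: a status in the 9900..9905
range is an extended-delay hint, i.e. firmware asks the OS to retry after roughly
10^(status - 9900) milliseconds (the convention rtas_busy_delay_time() implements).
A minimal standalone sketch of that arithmetic, assuming nothing beyond the two
constants (illustrative userspace C, not kernel code):

#include <stdio.h>

#define RTAS_EXTENDED_DELAY_MIN 9900
#define RTAS_EXTENDED_DELAY_MAX 9905

/* Suggested wait, in milliseconds, for an extended-delay status code. */
static unsigned long extended_delay_ms(int status)
{
	unsigned long ms = 1;
	int order;

	for (order = status - RTAS_EXTENDED_DELAY_MIN; order > 0; order--)
		ms *= 10;
	return ms;
}

int main(void)
{
	int s;

	for (s = RTAS_EXTENDED_DELAY_MIN; s <= RTAS_EXTENDED_DELAY_MAX; s++)
		printf("status %d -> retry after ~%lu ms\n", s, extended_delay_ms(s));
	return 0;
}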


@ -74,6 +74,8 @@ static inline int overlaps_kernel_text(unsigned long start, unsigned long end)
(unsigned long)_stext < end;
}
#else
static inline unsigned long kernel_toc_addr(void) { BUILD_BUG(); return -1UL; }
#endif
#endif /* __KERNEL__ */


@ -8,7 +8,6 @@
extern void ppc_printk_progress(char *s, unsigned short hex);
extern unsigned long long memory_limit;
extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
struct device_node;


@ -143,5 +143,20 @@ static inline int cpu_to_coregroup_id(int cpu)
#endif
#endif
#ifdef CONFIG_HOTPLUG_SMT
#include <linux/cpu_smt.h>
#include <asm/cputhreads.h>
static inline bool topology_is_primary_thread(unsigned int cpu)
{
return cpu == cpu_first_thread_sibling(cpu);
}
static inline bool topology_smt_thread_allowed(unsigned int cpu)
{
return cpu_thread_in_core(cpu) < cpu_smt_num_threads;
}
#endif
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_TOPOLOGY_H */
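
The two helpers above are what hook powerpc into the generic HOTPLUG_SMT code:
the first thread of a core is treated as the primary thread, and a thread may be
brought online only if its index within the core is below the requested SMT level.
A standalone sketch of the arithmetic, assuming threads are numbered consecutively
within a core; the modulo helpers are illustrative stand-ins for the real
cpu_first_thread_sibling()/cpu_thread_in_core():

#include <stdbool.h>
#include <stdio.h>

/* Illustrative values; the kernel derives these from the device tree and
 * from the SMT level requested via /sys/devices/system/cpu/smt/control. */
static int threads_per_core = 8;
static int smt_num_threads = 4;

static int first_thread_sibling(int cpu) { return cpu - (cpu % threads_per_core); }
static int thread_in_core(int cpu)       { return cpu % threads_per_core; }

static bool is_primary_thread(int cpu)  { return cpu == first_thread_sibling(cpu); }
static bool smt_thread_allowed(int cpu) { return thread_in_core(cpu) < smt_num_threads; }

int main(void)
{
	int cpu;

	for (cpu = 0; cpu < 16; cpu++)
		printf("cpu %2d: primary=%d allowed=%d\n",
		       cpu, is_primary_thread(cpu), smt_thread_allowed(cpu));
	return 0;
}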


@ -386,7 +386,7 @@ copy_mc_to_user(void __user *to, const void *from, unsigned long n)
extern long __copy_from_user_flushcache(void *dst, const void __user *src,
unsigned size);
static __must_check inline bool user_access_begin(const void __user *ptr, size_t len)
static __must_check __always_inline bool user_access_begin(const void __user *ptr, size_t len)
{
if (unlikely(!access_ok(ptr, len)))
return false;
@ -401,7 +401,7 @@ static __must_check inline bool user_access_begin(const void __user *ptr, size_t
#define user_access_save prevent_user_access_return
#define user_access_restore restore_user_access
static __must_check inline bool
static __must_check __always_inline bool
user_read_access_begin(const void __user *ptr, size_t len)
{
if (unlikely(!access_ok(ptr, len)))
@ -415,7 +415,7 @@ user_read_access_begin(const void __user *ptr, size_t len)
#define user_read_access_begin user_read_access_begin
#define user_read_access_end prevent_current_read_from_user
static __must_check inline bool
static __must_check __always_inline bool
user_write_access_begin(const void __user *ptr, size_t len)
{
if (unlikely(!access_ok(ptr, len)))


@ -2,7 +2,9 @@
#ifndef _ASM_VERMAGIC_H
#define _ASM_VERMAGIC_H
#ifdef CONFIG_MPROFILE_KERNEL
#ifdef CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY
#define MODULE_ARCH_VERMAGIC_FTRACE "patchable-function-entry "
#elif defined(CONFIG_MPROFILE_KERNEL)
#define MODULE_ARCH_VERMAGIC_FTRACE "mprofile-kernel "
#else
#define MODULE_ARCH_VERMAGIC_FTRACE ""


@ -0,0 +1,24 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
#ifndef _ASM_POWERPC_VPHN_H
#define _ASM_POWERPC_VPHN_H
/* The H_HOME_NODE_ASSOCIATIVITY h_call returns 6 64-bit registers. */
#define VPHN_REGISTER_COUNT 6
/*
* 6 64-bit registers unpacked into up to 24 be32 associativity values. To
* form the complete property we have to add the length in the first cell.
*/
#define VPHN_ASSOC_BUFSIZE (VPHN_REGISTER_COUNT*sizeof(u64)/sizeof(u16) + 1)
/*
* The H_HOME_NODE_ASSOCIATIVITY hcall takes two values for flags:
* 1 for retrieving associativity information for a guest cpu
* 2 for retrieving associativity information for a host/hypervisor cpu
*/
#define VPHN_FLAG_VCPU 1
#define VPHN_FLAG_PCPU 2
long hcall_vphn(unsigned long cpu, u64 flags, __be32 *associativity);
#endif // _ASM_POWERPC_VPHN_H
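
A quick check of the buffer-size arithmetic above: six 64-bit registers carry at
most 6 * 64 / 16 = 24 packed associativity fields, each of which is unpacked into
one be32 cell, plus one leading cell for the property length, giving 25 cells in
total. A trivial standalone sketch (illustrative, not kernel code):

#include <stdio.h>
#include <stdint.h>

#define VPHN_REGISTER_COUNT 6
#define VPHN_ASSOC_BUFSIZE (VPHN_REGISTER_COUNT * sizeof(uint64_t) / sizeof(uint16_t) + 1)

int main(void)
{
	/* 6 * 8 / 2 + 1 == 25 cells: 24 associativity values plus the length cell */
	printf("VPHN_ASSOC_BUFSIZE = %zu\n", (size_t)VPHN_ASSOC_BUFSIZE);
	return 0;
}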


@ -4,6 +4,8 @@
#include <linux/audit.h>
#include <asm/unistd.h>
#include "audit_32.h"
static unsigned dir_class[] = {
#include <asm-generic/audit_dir_write.h>
~0U
@ -41,7 +43,6 @@ int audit_classify_arch(int arch)
int audit_classify_syscall(int abi, unsigned syscall)
{
#ifdef CONFIG_PPC64
extern int ppc32_classify_syscall(unsigned);
if (abi == AUDIT_ARCH_PPC)
return ppc32_classify_syscall(syscall);
#endif


@ -0,0 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
#ifndef __AUDIT_32_H__
#define __AUDIT_32_H__
extern int ppc32_classify_syscall(unsigned);
#endif


@ -3,6 +3,8 @@
#include <linux/audit_arch.h>
#include <asm/unistd.h>
#include "audit_32.h"
unsigned ppc32_dir_class[] = {
#include <asm-generic/audit_dir_write.h>
~0U


@ -75,6 +75,10 @@ static struct cpu_spec * __init setup_cpu_spec(unsigned long offset,
t->cpu_features |= old.cpu_features & CPU_FTR_PMAO_BUG;
}
/* Set kuap ON at startup, will be disabled later if cmdline has 'nosmap' */
if (IS_ENABLED(CONFIG_PPC_KUAP) && IS_ENABLED(CONFIG_PPC32))
t->mmu_features |= MMU_FTR_KUAP;
*PTRRELOC(&cur_cpu_spec) = &the_cpu_spec;
/*


@ -29,7 +29,6 @@
#include <asm/asm-offsets.h>
#include <asm/unistd.h>
#include <asm/ptrace.h>
#include <asm/export.h>
#include <asm/feature-fixups.h>
#include <asm/barrier.h>
#include <asm/kup.h>


@ -3,6 +3,7 @@
* Copyright (C) 2012 Freescale Semiconductor, Inc.
*/
#include <linux/export.h>
#include <linux/threads.h>
#include <asm/epapr_hcalls.h>
#include <asm/reg.h>
@ -12,7 +13,6 @@
#include <asm/ppc_asm.h>
#include <asm/asm-compat.h>
#include <asm/asm-offsets.h>
#include <asm/export.h>
#ifndef CONFIG_PPC64
/* epapr_ev_idle() was derived from e500_idle() */


@ -654,6 +654,7 @@ int __init fadump_reserve_mem(void)
return ret;
error_out:
fw_dump.fadump_enabled = 0;
fw_dump.reserve_dump_area_size = 0;
return 0;
}


@ -9,6 +9,7 @@
* Copyright (C) 1997 Dan Malek (dmalek@jlc.net).
*/
#include <linux/export.h>
#include <asm/reg.h>
#include <asm/page.h>
#include <asm/mmu.h>
@ -18,7 +19,6 @@
#include <asm/ppc_asm.h>
#include <asm/asm-offsets.h>
#include <asm/ptrace.h>
#include <asm/export.h>
#include <asm/asm-compat.h>
#include <asm/feature-fixups.h>


@ -38,7 +38,6 @@
#include <asm/ppc_asm.h>
#include <asm/asm-offsets.h>
#include <asm/ptrace.h>
#include <asm/export.h>
#include "head_32.h"


@ -35,7 +35,6 @@
#include <asm/asm-offsets.h>
#include <asm/ptrace.h>
#include <asm/synch.h>
#include <asm/export.h>
#include <asm/code-patching-asm.h>
#include "head_booke.h"


@ -40,7 +40,6 @@
#include <asm/hw_irq.h>
#include <asm/cputhreads.h>
#include <asm/ppc-opcode.h>
#include <asm/export.h>
#include <asm/feature-fixups.h>
#ifdef CONFIG_PPC_BOOK3S
#include <asm/exception-64s.h>


@ -40,7 +40,6 @@
#include <asm/asm-offsets.h>
#include <asm/cache.h>
#include <asm/ptrace.h>
#include <asm/export.h>
#include <asm/feature-fixups.h>
#include "head_booke.h"


@ -29,7 +29,6 @@
#include <asm/ppc_asm.h>
#include <asm/asm-offsets.h>
#include <asm/ptrace.h>
#include <asm/export.h>
#include <asm/code-patching-asm.h>
#include <asm/interrupt.h>


@ -31,7 +31,6 @@
#include <asm/ptrace.h>
#include <asm/bug.h>
#include <asm/kvm_book3s_asm.h>
#include <asm/export.h>
#include <asm/feature-fixups.h>
#include <asm/interrupt.h>


@ -43,16 +43,6 @@ int hw_breakpoint_slots(int type)
return 0; /* no instruction breakpoints available */
}
static bool single_step_pending(void)
{
int i;
for (i = 0; i < nr_wp_slots(); i++) {
if (current->thread.last_hit_ubp[i])
return true;
}
return false;
}
/*
* Install a perf counter breakpoint.
@ -84,7 +74,7 @@ int arch_install_hw_breakpoint(struct perf_event *bp)
* Do not install DABR values if the instruction must be single-stepped.
* If so, DABR will be populated in single_step_dabr_instruction().
*/
if (!single_step_pending())
if (!info->perf_single_step)
__set_breakpoint(i, info);
return 0;
@ -124,275 +114,6 @@ static bool is_ptrace_bp(struct perf_event *bp)
return bp->overflow_handler == ptrace_triggered;
}
struct breakpoint {
struct list_head list;
struct perf_event *bp;
bool ptrace_bp;
};
/*
* While kernel/events/hw_breakpoint.c does its own synchronization, we cannot
* rely on it safely synchronizing internals here; however, we can rely on it
* not requesting more breakpoints than available.
*/
static DEFINE_SPINLOCK(cpu_bps_lock);
static DEFINE_PER_CPU(struct breakpoint *, cpu_bps[HBP_NUM_MAX]);
static DEFINE_SPINLOCK(task_bps_lock);
static LIST_HEAD(task_bps);
static struct breakpoint *alloc_breakpoint(struct perf_event *bp)
{
struct breakpoint *tmp;
tmp = kzalloc(sizeof(*tmp), GFP_KERNEL);
if (!tmp)
return ERR_PTR(-ENOMEM);
tmp->bp = bp;
tmp->ptrace_bp = is_ptrace_bp(bp);
return tmp;
}
static bool bp_addr_range_overlap(struct perf_event *bp1, struct perf_event *bp2)
{
__u64 bp1_saddr, bp1_eaddr, bp2_saddr, bp2_eaddr;
bp1_saddr = ALIGN_DOWN(bp1->attr.bp_addr, HW_BREAKPOINT_SIZE);
bp1_eaddr = ALIGN(bp1->attr.bp_addr + bp1->attr.bp_len, HW_BREAKPOINT_SIZE);
bp2_saddr = ALIGN_DOWN(bp2->attr.bp_addr, HW_BREAKPOINT_SIZE);
bp2_eaddr = ALIGN(bp2->attr.bp_addr + bp2->attr.bp_len, HW_BREAKPOINT_SIZE);
return (bp1_saddr < bp2_eaddr && bp1_eaddr > bp2_saddr);
}
static bool alternate_infra_bp(struct breakpoint *b, struct perf_event *bp)
{
return is_ptrace_bp(bp) ? !b->ptrace_bp : b->ptrace_bp;
}
static bool can_co_exist(struct breakpoint *b, struct perf_event *bp)
{
return !(alternate_infra_bp(b, bp) && bp_addr_range_overlap(b->bp, bp));
}
static int task_bps_add(struct perf_event *bp)
{
struct breakpoint *tmp;
tmp = alloc_breakpoint(bp);
if (IS_ERR(tmp))
return PTR_ERR(tmp);
spin_lock(&task_bps_lock);
list_add(&tmp->list, &task_bps);
spin_unlock(&task_bps_lock);
return 0;
}
static void task_bps_remove(struct perf_event *bp)
{
struct list_head *pos, *q;
spin_lock(&task_bps_lock);
list_for_each_safe(pos, q, &task_bps) {
struct breakpoint *tmp = list_entry(pos, struct breakpoint, list);
if (tmp->bp == bp) {
list_del(&tmp->list);
kfree(tmp);
break;
}
}
spin_unlock(&task_bps_lock);
}
/*
* If any task has breakpoint from alternate infrastructure,
* return true. Otherwise return false.
*/
static bool all_task_bps_check(struct perf_event *bp)
{
struct breakpoint *tmp;
bool ret = false;
spin_lock(&task_bps_lock);
list_for_each_entry(tmp, &task_bps, list) {
if (!can_co_exist(tmp, bp)) {
ret = true;
break;
}
}
spin_unlock(&task_bps_lock);
return ret;
}
/*
* If same task has breakpoint from alternate infrastructure,
* return true. Otherwise return false.
*/
static bool same_task_bps_check(struct perf_event *bp)
{
struct breakpoint *tmp;
bool ret = false;
spin_lock(&task_bps_lock);
list_for_each_entry(tmp, &task_bps, list) {
if (tmp->bp->hw.target == bp->hw.target &&
!can_co_exist(tmp, bp)) {
ret = true;
break;
}
}
spin_unlock(&task_bps_lock);
return ret;
}
static int cpu_bps_add(struct perf_event *bp)
{
struct breakpoint **cpu_bp;
struct breakpoint *tmp;
int i = 0;
tmp = alloc_breakpoint(bp);
if (IS_ERR(tmp))
return PTR_ERR(tmp);
spin_lock(&cpu_bps_lock);
cpu_bp = per_cpu_ptr(cpu_bps, bp->cpu);
for (i = 0; i < nr_wp_slots(); i++) {
if (!cpu_bp[i]) {
cpu_bp[i] = tmp;
break;
}
}
spin_unlock(&cpu_bps_lock);
return 0;
}
static void cpu_bps_remove(struct perf_event *bp)
{
struct breakpoint **cpu_bp;
int i = 0;
spin_lock(&cpu_bps_lock);
cpu_bp = per_cpu_ptr(cpu_bps, bp->cpu);
for (i = 0; i < nr_wp_slots(); i++) {
if (!cpu_bp[i])
continue;
if (cpu_bp[i]->bp == bp) {
kfree(cpu_bp[i]);
cpu_bp[i] = NULL;
break;
}
}
spin_unlock(&cpu_bps_lock);
}
static bool cpu_bps_check(int cpu, struct perf_event *bp)
{
struct breakpoint **cpu_bp;
bool ret = false;
int i;
spin_lock(&cpu_bps_lock);
cpu_bp = per_cpu_ptr(cpu_bps, cpu);
for (i = 0; i < nr_wp_slots(); i++) {
if (cpu_bp[i] && !can_co_exist(cpu_bp[i], bp)) {
ret = true;
break;
}
}
spin_unlock(&cpu_bps_lock);
return ret;
}
static bool all_cpu_bps_check(struct perf_event *bp)
{
int cpu;
for_each_online_cpu(cpu) {
if (cpu_bps_check(cpu, bp))
return true;
}
return false;
}
int arch_reserve_bp_slot(struct perf_event *bp)
{
int ret;
/* ptrace breakpoint */
if (is_ptrace_bp(bp)) {
if (all_cpu_bps_check(bp))
return -ENOSPC;
if (same_task_bps_check(bp))
return -ENOSPC;
return task_bps_add(bp);
}
/* perf breakpoint */
if (is_kernel_addr(bp->attr.bp_addr))
return 0;
if (bp->hw.target && bp->cpu == -1) {
if (same_task_bps_check(bp))
return -ENOSPC;
return task_bps_add(bp);
} else if (!bp->hw.target && bp->cpu != -1) {
if (all_task_bps_check(bp))
return -ENOSPC;
return cpu_bps_add(bp);
}
if (same_task_bps_check(bp))
return -ENOSPC;
ret = cpu_bps_add(bp);
if (ret)
return ret;
ret = task_bps_add(bp);
if (ret)
cpu_bps_remove(bp);
return ret;
}
void arch_release_bp_slot(struct perf_event *bp)
{
if (!is_kernel_addr(bp->attr.bp_addr)) {
if (bp->hw.target)
task_bps_remove(bp);
if (bp->cpu != -1)
cpu_bps_remove(bp);
}
}
/*
* Perform cleanup of arch-specific counters during unregistration
* of the perf-event
*/
void arch_unregister_hw_breakpoint(struct perf_event *bp)
{
/*
* If the breakpoint is unregistered between a hw_breakpoint_handler()
* and the single_step_dabr_instruction(), then cleanup the breakpoint
* restoration variables to prevent dangling pointers.
* FIXME, this should not be using bp->ctx at all! Sayeth peterz.
*/
if (bp->ctx && bp->ctx->task && bp->ctx->task != ((void *)-1L)) {
int i;
for (i = 0; i < nr_wp_slots(); i++) {
if (bp->ctx->task->thread.last_hit_ubp[i] == bp)
bp->ctx->task->thread.last_hit_ubp[i] = NULL;
}
}
}
/*
* Check for virtual address in kernel space.
*/
@ -499,6 +220,10 @@ int hw_breakpoint_arch_parse(struct perf_event *bp,
* Restores the breakpoint on the debug registers.
* Invoke this function if it is known that the execution context is
* about to change to cause loss of MSR_SE settings.
*
* The perf watchpoint will simply re-trigger once the thread is started again,
* and the watchpoint handler will set up MSR_SE and perf_single_step as
* needed.
*/
void thread_change_pc(struct task_struct *tsk, struct pt_regs *regs)
{
@ -506,7 +231,9 @@ void thread_change_pc(struct task_struct *tsk, struct pt_regs *regs)
int i;
for (i = 0; i < nr_wp_slots(); i++) {
if (unlikely(tsk->thread.last_hit_ubp[i]))
struct perf_event *bp = __this_cpu_read(bp_per_reg[i]);
if (unlikely(bp && counter_arch_bp(bp)->perf_single_step))
goto reset;
}
return;
@ -516,7 +243,7 @@ void thread_change_pc(struct task_struct *tsk, struct pt_regs *regs)
for (i = 0; i < nr_wp_slots(); i++) {
info = counter_arch_bp(__this_cpu_read(bp_per_reg[i]));
__set_breakpoint(i, info);
tsk->thread.last_hit_ubp[i] = NULL;
info->perf_single_step = false;
}
}
@ -534,23 +261,22 @@ static bool is_octword_vsx_instr(int type, int size)
* We've failed in reliably handling the hw-breakpoint. Unregister
* it and throw a warning message to let the user know about it.
*/
static void handler_error(struct perf_event *bp, struct arch_hw_breakpoint *info)
static void handler_error(struct perf_event *bp)
{
WARN(1, "Unable to handle hardware breakpoint. Breakpoint at 0x%lx will be disabled.",
info->address);
counter_arch_bp(bp)->address);
perf_event_disable_inatomic(bp);
}
static void larx_stcx_err(struct perf_event *bp, struct arch_hw_breakpoint *info)
static void larx_stcx_err(struct perf_event *bp)
{
printk_ratelimited("Breakpoint hit on instruction that can't be emulated. Breakpoint at 0x%lx will be disabled.\n",
info->address);
counter_arch_bp(bp)->address);
perf_event_disable_inatomic(bp);
}
static bool stepping_handler(struct pt_regs *regs, struct perf_event **bp,
struct arch_hw_breakpoint **info, int *hit,
ppc_inst_t instr)
int *hit, ppc_inst_t instr)
{
int i;
int stepped;
@ -560,8 +286,9 @@ static bool stepping_handler(struct pt_regs *regs, struct perf_event **bp,
for (i = 0; i < nr_wp_slots(); i++) {
if (!hit[i])
continue;
current->thread.last_hit_ubp[i] = bp[i];
info[i] = NULL;
counter_arch_bp(bp[i])->perf_single_step = true;
bp[i] = NULL;
}
regs_set_return_msr(regs, regs->msr | MSR_SE);
return false;
@ -572,15 +299,15 @@ static bool stepping_handler(struct pt_regs *regs, struct perf_event **bp,
for (i = 0; i < nr_wp_slots(); i++) {
if (!hit[i])
continue;
handler_error(bp[i], info[i]);
info[i] = NULL;
handler_error(bp[i]);
bp[i] = NULL;
}
return false;
}
return true;
}
static void handle_p10dd1_spurious_exception(struct arch_hw_breakpoint **info,
static void handle_p10dd1_spurious_exception(struct perf_event **bp,
int *hit, unsigned long ea)
{
int i;
@ -592,10 +319,14 @@ static void handle_p10dd1_spurious_exception(struct arch_hw_breakpoint **info,
* spurious exception.
*/
for (i = 0; i < nr_wp_slots(); i++) {
if (!info[i])
struct arch_hw_breakpoint *info;
if (!bp[i])
continue;
hw_end_addr = ALIGN(info[i]->address + info[i]->len, HW_BREAKPOINT_SIZE);
info = counter_arch_bp(bp[i]);
hw_end_addr = ALIGN(info->address + info->len, HW_BREAKPOINT_SIZE);
/*
* Ending address of DAWR range is less than starting
@ -625,9 +356,9 @@ static void handle_p10dd1_spurious_exception(struct arch_hw_breakpoint **info,
return;
for (i = 0; i < nr_wp_slots(); i++) {
if (info[i]) {
if (bp[i]) {
hit[i] = 1;
info[i]->type |= HW_BRK_TYPE_EXTRANEOUS_IRQ;
counter_arch_bp(bp[i])->type |= HW_BRK_TYPE_EXTRANEOUS_IRQ;
}
}
}
@ -638,7 +369,6 @@ int hw_breakpoint_handler(struct die_args *args)
int rc = NOTIFY_STOP;
struct perf_event *bp[HBP_NUM_MAX] = { NULL };
struct pt_regs *regs = args->regs;
struct arch_hw_breakpoint *info[HBP_NUM_MAX] = { NULL };
int i;
int hit[HBP_NUM_MAX] = {0};
int nr_hit = 0;
@ -663,18 +393,20 @@ int hw_breakpoint_handler(struct die_args *args)
wp_get_instr_detail(regs, &instr, &type, &size, &ea);
for (i = 0; i < nr_wp_slots(); i++) {
struct arch_hw_breakpoint *info;
bp[i] = __this_cpu_read(bp_per_reg[i]);
if (!bp[i])
continue;
info[i] = counter_arch_bp(bp[i]);
info[i]->type &= ~HW_BRK_TYPE_EXTRANEOUS_IRQ;
info = counter_arch_bp(bp[i]);
info->type &= ~HW_BRK_TYPE_EXTRANEOUS_IRQ;
if (wp_check_constraints(regs, instr, ea, type, size, info[i])) {
if (wp_check_constraints(regs, instr, ea, type, size, info)) {
if (!IS_ENABLED(CONFIG_PPC_8xx) &&
ppc_inst_equal(instr, ppc_inst(0))) {
handler_error(bp[i], info[i]);
info[i] = NULL;
handler_error(bp[i]);
bp[i] = NULL;
err = 1;
continue;
}
@ -693,7 +425,7 @@ int hw_breakpoint_handler(struct die_args *args)
/* Workaround for Power10 DD1 */
if (!IS_ENABLED(CONFIG_PPC_8xx) && mfspr(SPRN_PVR) == 0x800100 &&
is_octword_vsx_instr(type, size)) {
handle_p10dd1_spurious_exception(info, hit, ea);
handle_p10dd1_spurious_exception(bp, hit, ea);
} else {
rc = NOTIFY_DONE;
goto out;
@ -708,10 +440,10 @@ int hw_breakpoint_handler(struct die_args *args)
*/
if (ptrace_bp) {
for (i = 0; i < nr_wp_slots(); i++) {
if (!hit[i])
if (!hit[i] || !is_ptrace_bp(bp[i]))
continue;
perf_bp_event(bp[i], regs);
info[i] = NULL;
bp[i] = NULL;
}
rc = NOTIFY_DONE;
goto reset;
@ -722,13 +454,13 @@ int hw_breakpoint_handler(struct die_args *args)
for (i = 0; i < nr_wp_slots(); i++) {
if (!hit[i])
continue;
larx_stcx_err(bp[i], info[i]);
info[i] = NULL;
larx_stcx_err(bp[i]);
bp[i] = NULL;
}
goto reset;
}
if (!stepping_handler(regs, bp, info, hit, instr))
if (!stepping_handler(regs, bp, hit, instr))
goto reset;
}
@ -739,15 +471,15 @@ int hw_breakpoint_handler(struct die_args *args)
for (i = 0; i < nr_wp_slots(); i++) {
if (!hit[i])
continue;
if (!(info[i]->type & HW_BRK_TYPE_EXTRANEOUS_IRQ))
if (!(counter_arch_bp(bp[i])->type & HW_BRK_TYPE_EXTRANEOUS_IRQ))
perf_bp_event(bp[i], regs);
}
reset:
for (i = 0; i < nr_wp_slots(); i++) {
if (!info[i])
if (!bp[i])
continue;
__set_breakpoint(i, info[i]);
__set_breakpoint(i, counter_arch_bp(bp[i]));
}
out:
@ -762,24 +494,28 @@ NOKPROBE_SYMBOL(hw_breakpoint_handler);
static int single_step_dabr_instruction(struct die_args *args)
{
struct pt_regs *regs = args->regs;
struct perf_event *bp = NULL;
struct arch_hw_breakpoint *info;
int i;
bool found = false;
/*
* Check if we are single-stepping as a result of a
* previous HW Breakpoint exception
*/
for (i = 0; i < nr_wp_slots(); i++) {
bp = current->thread.last_hit_ubp[i];
for (int i = 0; i < nr_wp_slots(); i++) {
struct perf_event *bp;
struct arch_hw_breakpoint *info;
bp = __this_cpu_read(bp_per_reg[i]);
if (!bp)
continue;
found = true;
info = counter_arch_bp(bp);
if (!info->perf_single_step)
continue;
found = true;
/*
* We shall invoke the user-defined callback function in the
* single stepping handler to confirm to 'trigger-after-execute'
@ -787,26 +523,16 @@ static int single_step_dabr_instruction(struct die_args *args)
*/
if (!(info->type & HW_BRK_TYPE_EXTRANEOUS_IRQ))
perf_bp_event(bp, regs);
current->thread.last_hit_ubp[i] = NULL;
}
if (!found)
return NOTIFY_DONE;
for (i = 0; i < nr_wp_slots(); i++) {
bp = __this_cpu_read(bp_per_reg[i]);
if (!bp)
continue;
info = counter_arch_bp(bp);
__set_breakpoint(i, info);
info->perf_single_step = false;
__set_breakpoint(i, counter_arch_bp(bp));
}
/*
* If the process was being single-stepped by ptrace, let the
* other single-step actions occur (e.g. generate SIGTRAP).
*/
if (test_thread_flag(TIF_SINGLESTEP))
if (!found || test_thread_flag(TIF_SINGLESTEP))
return NOTIFY_DONE;
return NOTIFY_STOP;


@ -172,17 +172,28 @@ static int fail_iommu_bus_notify(struct notifier_block *nb,
return 0;
}
static struct notifier_block fail_iommu_bus_notifier = {
/*
* PCI and VIO buses need separate notifier_block structs, since they're linked
* list nodes. Sharing a notifier_block would mean that any notifiers later
* registered for PCI buses would also get called by VIO buses and vice versa.
*/
static struct notifier_block fail_iommu_pci_bus_notifier = {
.notifier_call = fail_iommu_bus_notify
};
#ifdef CONFIG_IBMVIO
static struct notifier_block fail_iommu_vio_bus_notifier = {
.notifier_call = fail_iommu_bus_notify
};
#endif
static int __init fail_iommu_setup(void)
{
#ifdef CONFIG_PCI
bus_register_notifier(&pci_bus_type, &fail_iommu_bus_notifier);
bus_register_notifier(&pci_bus_type, &fail_iommu_pci_bus_notifier);
#endif
#ifdef CONFIG_IBMVIO
bus_register_notifier(&vio_bus_type, &fail_iommu_bus_notifier);
bus_register_notifier(&vio_bus_type, &fail_iommu_vio_bus_notifier);
#endif
return 0;
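
The comment above is the crux of this fix: a notifier_block is itself the list
node, so a single struct cannot sit on two notifier chains without the chains
bleeding into each other. A small standalone illustration of the failure mode,
using plain singly-linked lists as stand-ins for the PCI and VIO notifier chains:

#include <stdio.h>

struct node {
	const char *name;
	struct node *next;
};

/* Append without clearing n->next, roughly like registering the same
 * notifier_block on a second notifier chain. */
static void append(struct node **head, struct node *n)
{
	while (*head)
		head = &(*head)->next;
	*head = n;
}

static void walk(const char *chain, struct node *head)
{
	struct node *n;

	printf("%s:", chain);
	for (n = head; n; n = n->next)
		printf(" %s", n->name);
	printf("\n");
}

int main(void)
{
	struct node shared   = { "fail_iommu", NULL };
	struct node pci_only = { "pci-only",   NULL };
	struct node *pci = NULL, *vio = NULL;

	append(&pci, &shared);		/* registered on the PCI chain       */
	append(&pci, &pci_only);	/* a later, PCI-only notifier        */
	append(&vio, &shared);		/* same struct registered on VIO too */

	walk("pci", pci);		/* pci: fail_iommu pci-only          */
	walk("vio", vio);		/* vio: fail_iommu pci-only (leaked) */
	return 0;
}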


@ -5,8 +5,8 @@
#include <linux/serial_core.h>
#include <linux/console.h>
#include <linux/pci.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_device.h>
#include <linux/of_irq.h>
#include <linux/serial_reg.h>
#include <asm/io.h>


@ -10,11 +10,11 @@
*
* setjmp/longjmp code by Paul Mackerras.
*/
#include <linux/export.h>
#include <asm/ppc_asm.h>
#include <asm/unistd.h>
#include <asm/asm-compat.h>
#include <asm/asm-offsets.h>
#include <asm/export.h>
.text


@ -8,6 +8,7 @@
*
*/
#include <linux/export.h>
#include <linux/sys.h>
#include <asm/unistd.h>
#include <asm/errno.h>
@ -22,7 +23,6 @@
#include <asm/processor.h>
#include <asm/bug.h>
#include <asm/ptrace.h>
#include <asm/export.h>
#include <asm/feature-fixups.h>
.text


@ -9,6 +9,7 @@
* PPC64 updates by Dave Engebretsen (engebret@us.ibm.com)
*/
#include <linux/export.h>
#include <linux/linkage.h>
#include <linux/sys.h>
#include <asm/unistd.h>
@ -23,7 +24,6 @@
#include <asm/kexec.h>
#include <asm/ptrace.h>
#include <asm/mmu.h>
#include <asm/export.h>
#include <asm/feature-fixups.h>
.text


@ -465,7 +465,7 @@ int module_frob_arch_sections(Elf64_Ehdr *hdr,
return 0;
}
#ifdef CONFIG_MPROFILE_KERNEL
#if defined(CONFIG_MPROFILE_KERNEL) || defined(CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY)
static u32 stub_insns[] = {
#ifdef CONFIG_PPC_KERNEL_PCREL


@ -13,9 +13,7 @@
#include <linux/export.h>
#include <linux/mod_devicetable.h>
#include <linux/pci.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/atomic.h>
#include <asm/errno.h>


@ -125,7 +125,7 @@ struct pci_controller *pcibios_alloc_controller(struct device_node *dev)
{
struct pci_controller *phb;
phb = zalloc_maybe_bootmem(sizeof(struct pci_controller), GFP_KERNEL);
phb = kzalloc(sizeof(struct pci_controller), GFP_KERNEL);
if (phb == NULL)
return NULL;


@ -74,7 +74,7 @@ void release_pmc_hardware(void)
}
EXPORT_SYMBOL_GPL(release_pmc_hardware);
#ifdef CONFIG_PPC64
#ifdef CONFIG_PPC_BOOK3S_64
void power4_enable_pmcs(void)
{
unsigned long hid0;


@ -716,69 +716,86 @@ int gpr32_get_common(struct task_struct *target,
return membuf_zero(&to, (ELF_NGREG - PT_REGS_COUNT) * sizeof(u32));
}
int gpr32_set_common(struct task_struct *target,
const struct user_regset *regset,
unsigned int pos, unsigned int count,
const void *kbuf, const void __user *ubuf,
unsigned long *regs)
static int gpr32_set_common_kernel(struct task_struct *target,
const struct user_regset *regset,
unsigned int pos, unsigned int count,
const void *kbuf, unsigned long *regs)
{
const compat_ulong_t *k = kbuf;
pos /= sizeof(compat_ulong_t);
count /= sizeof(compat_ulong_t);
for (; count > 0 && pos < PT_MSR; --count)
regs[pos++] = *k++;
if (count > 0 && pos == PT_MSR) {
set_user_msr(target, *k++);
++pos;
--count;
}
for (; count > 0 && pos <= PT_MAX_PUT_REG; --count)
regs[pos++] = *k++;
for (; count > 0 && pos < PT_TRAP; --count, ++pos)
++k;
if (count > 0 && pos == PT_TRAP) {
set_user_trap(target, *k++);
++pos;
--count;
}
kbuf = k;
pos *= sizeof(compat_ulong_t);
count *= sizeof(compat_ulong_t);
user_regset_copyin_ignore(&pos, &count, &kbuf, NULL,
(PT_TRAP + 1) * sizeof(compat_ulong_t), -1);
return 0;
}
static int gpr32_set_common_user(struct task_struct *target,
const struct user_regset *regset,
unsigned int pos, unsigned int count,
const void __user *ubuf, unsigned long *regs)
{
const compat_ulong_t __user *u = ubuf;
const void *kbuf = NULL;
compat_ulong_t reg;
if (!kbuf && !user_read_access_begin(u, count))
if (!user_read_access_begin(u, count))
return -EFAULT;
pos /= sizeof(reg);
count /= sizeof(reg);
if (kbuf)
for (; count > 0 && pos < PT_MSR; --count)
regs[pos++] = *k++;
else
for (; count > 0 && pos < PT_MSR; --count) {
unsafe_get_user(reg, u++, Efault);
regs[pos++] = reg;
}
for (; count > 0 && pos < PT_MSR; --count) {
unsafe_get_user(reg, u++, Efault);
regs[pos++] = reg;
}
if (count > 0 && pos == PT_MSR) {
if (kbuf)
reg = *k++;
else
unsafe_get_user(reg, u++, Efault);
unsafe_get_user(reg, u++, Efault);
set_user_msr(target, reg);
++pos;
--count;
}
if (kbuf) {
for (; count > 0 && pos <= PT_MAX_PUT_REG; --count)
regs[pos++] = *k++;
for (; count > 0 && pos < PT_TRAP; --count, ++pos)
++k;
} else {
for (; count > 0 && pos <= PT_MAX_PUT_REG; --count) {
unsafe_get_user(reg, u++, Efault);
regs[pos++] = reg;
}
for (; count > 0 && pos < PT_TRAP; --count, ++pos)
unsafe_get_user(reg, u++, Efault);
for (; count > 0 && pos <= PT_MAX_PUT_REG; --count) {
unsafe_get_user(reg, u++, Efault);
regs[pos++] = reg;
}
for (; count > 0 && pos < PT_TRAP; --count, ++pos)
unsafe_get_user(reg, u++, Efault);
if (count > 0 && pos == PT_TRAP) {
if (kbuf)
reg = *k++;
else
unsafe_get_user(reg, u++, Efault);
unsafe_get_user(reg, u++, Efault);
set_user_trap(target, reg);
++pos;
--count;
}
if (!kbuf)
user_read_access_end();
user_read_access_end();
kbuf = k;
ubuf = u;
pos *= sizeof(reg);
count *= sizeof(reg);
@ -791,6 +808,18 @@ int gpr32_set_common(struct task_struct *target,
return -EFAULT;
}
int gpr32_set_common(struct task_struct *target,
const struct user_regset *regset,
unsigned int pos, unsigned int count,
const void *kbuf, const void __user *ubuf,
unsigned long *regs)
{
if (kbuf)
return gpr32_set_common_kernel(target, regset, pos, count, kbuf, regs);
else
return gpr32_set_common_user(target, regset, pos, count, ubuf, regs);
}
static int gpr32_get(struct task_struct *target,
const struct user_regset *regset,
struct membuf to)


@ -1330,33 +1330,34 @@ bool __ref rtas_busy_delay(int status)
}
EXPORT_SYMBOL_GPL(rtas_busy_delay);
static int rtas_error_rc(int rtas_rc)
int rtas_error_rc(int rtas_rc)
{
int rc;
switch (rtas_rc) {
case -1: /* Hardware Error */
rc = -EIO;
break;
case -3: /* Bad indicator/domain/etc */
rc = -EINVAL;
break;
case -9000: /* Isolation error */
rc = -EFAULT;
break;
case -9001: /* Outstanding TCE/PTE */
rc = -EEXIST;
break;
case -9002: /* No usable slot */
rc = -ENODEV;
break;
default:
pr_err("%s: unexpected error %d\n", __func__, rtas_rc);
rc = -ERANGE;
break;
case RTAS_HARDWARE_ERROR: /* Hardware Error */
rc = -EIO;
break;
case RTAS_INVALID_PARAMETER: /* Bad indicator/domain/etc */
rc = -EINVAL;
break;
case -9000: /* Isolation error */
rc = -EFAULT;
break;
case -9001: /* Outstanding TCE/PTE */
rc = -EEXIST;
break;
case -9002: /* No usable slot */
rc = -ENODEV;
break;
default:
pr_err("%s: unexpected error %d\n", __func__, rtas_rc);
rc = -ERANGE;
break;
}
return rc;
}
EXPORT_SYMBOL_GPL(rtas_error_rc);
int rtas_get_power_level(int powerdomain, int *level)
{
@ -1587,6 +1588,7 @@ static bool ibm_extended_os_term;
void rtas_os_term(char *str)
{
s32 token = rtas_function_token(RTAS_FN_IBM_OS_TERM);
static struct rtas_args args;
int status;
/*
@ -1607,7 +1609,8 @@ void rtas_os_term(char *str)
* schedules.
*/
do {
status = rtas_call(token, 1, 1, NULL, __pa(rtas_os_term_buf));
rtas_call_unlocked(&args, token, 1, 1, NULL, __pa(rtas_os_term_buf));
status = be32_to_cpu(args.rets[0]);
} while (rtas_busy_delay_time(status));
if (status != 0)


@ -31,9 +31,9 @@
#include <linux/serial_8250.h>
#include <linux/percpu.h>
#include <linux/memblock.h>
#include <linux/of_irq.h>
#include <linux/of.h>
#include <linux/of_fdt.h>
#include <linux/of_platform.h>
#include <linux/of_irq.h>
#include <linux/hugetlb.h>
#include <linux/pgtable.h>
#include <asm/io.h>
@ -969,8 +969,12 @@ void __init setup_arch(char **cmdline_p)
klp_init_thread_info(&init_task);
setup_initial_init_mm(_stext, _etext, _edata, _end);
/* sched_init() does the mmgrab(&init_mm) for the primary CPU */
VM_WARN_ON(cpumask_test_cpu(smp_processor_id(), mm_cpumask(&init_mm)));
cpumask_set_cpu(smp_processor_id(), mm_cpumask(&init_mm));
inc_mm_active_cpus(&init_mm);
mm_iommu_init(&init_mm);
irqstack_early_init();
exc_lvl_early_init();
emergency_stack_init();


@ -47,6 +47,7 @@
#include <asm/smp.h>
#include <asm/time.h>
#include <asm/machdep.h>
#include <asm/mmu_context.h>
#include <asm/cputhreads.h>
#include <asm/cputable.h>
#include <asm/mpic.h>
@ -1087,7 +1088,7 @@ static int __init init_big_cores(void)
void __init smp_prepare_cpus(unsigned int max_cpus)
{
unsigned int cpu;
unsigned int cpu, num_threads;
DBG("smp_prepare_cpus\n");
@ -1154,6 +1155,12 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
if (smp_ops && smp_ops->probe)
smp_ops->probe();
// Initalise the generic SMT topology support
num_threads = 1;
if (smt_enabled_at_boot)
num_threads = smt_enabled_at_boot;
cpu_smt_set_num_threads(num_threads, threads_per_core);
}
void smp_prepare_boot_cpu(void)
@ -1616,6 +1623,9 @@ void start_secondary(void *unused)
mmgrab_lazy_tlb(&init_mm);
current->active_mm = &init_mm;
VM_WARN_ON(cpumask_test_cpu(smp_processor_id(), mm_cpumask(&init_mm)));
cpumask_set_cpu(cpu, mm_cpumask(&init_mm));
inc_mm_active_cpus(&init_mm);
smp_store_cpu_info(cpu);
set_dec(tb_ticks_per_jiffy);
@ -1751,6 +1761,14 @@ int __cpu_disable(void)
void __cpu_die(unsigned int cpu)
{
/*
* This could perhaps be a generic call in idlea_task_dead(), but
* that requires testing from all archs, so first put it here to
*/
VM_WARN_ON_ONCE(!cpumask_test_cpu(cpu, mm_cpumask(&init_mm)));
dec_mm_active_cpus(&init_mm);
cpumask_clear_cpu(cpu, mm_cpumask(&init_mm));
if (smp_ops->cpu_die)
smp_ops->cpu_die(cpu);
}


@ -46,7 +46,7 @@ notrace long system_call_exception(struct pt_regs *regs, unsigned long r0)
iamr = mfspr(SPRN_IAMR);
regs->amr = amr;
regs->iamr = iamr;
if (mmu_has_feature(MMU_FTR_BOOK3S_KUAP)) {
if (mmu_has_feature(MMU_FTR_KUAP)) {
mtspr(SPRN_AMR, AMR_KUAP_BLOCKED);
flush_needed = true;
}


@ -6,13 +6,13 @@
* Copyright 2012 Matt Evans & Michael Neuling, IBM Corporation.
*/
#include <linux/export.h>
#include <asm/asm-offsets.h>
#include <asm/ppc_asm.h>
#include <asm/ppc-opcode.h>
#include <asm/ptrace.h>
#include <asm/reg.h>
#include <asm/bug.h>
#include <asm/export.h>
#include <asm/feature-fixups.h>
#ifdef CONFIG_VSX


@ -6,15 +6,15 @@
ifdef CONFIG_FUNCTION_TRACER
# do not trace tracer code
CFLAGS_REMOVE_ftrace.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_ftrace_64_pg.o = $(CC_FLAGS_FTRACE)
endif
obj32-$(CONFIG_FUNCTION_TRACER) += ftrace_mprofile.o
obj32-$(CONFIG_FUNCTION_TRACER) += ftrace.o ftrace_entry.o
ifdef CONFIG_MPROFILE_KERNEL
obj64-$(CONFIG_FUNCTION_TRACER) += ftrace_mprofile.o
obj64-$(CONFIG_FUNCTION_TRACER) += ftrace.o ftrace_entry.o
else
obj64-$(CONFIG_FUNCTION_TRACER) += ftrace_64_pg.o
obj64-$(CONFIG_FUNCTION_TRACER) += ftrace_64_pg.o ftrace_64_pg_entry.o
endif
obj-$(CONFIG_FUNCTION_TRACER) += ftrace_low.o ftrace.o
obj-$(CONFIG_TRACING) += trace_clock.o
obj-$(CONFIG_PPC64) += $(obj64-y)
@ -25,3 +25,7 @@ GCOV_PROFILE_ftrace.o := n
KCOV_INSTRUMENT_ftrace.o := n
KCSAN_SANITIZE_ftrace.o := n
UBSAN_SANITIZE_ftrace.o := n
GCOV_PROFILE_ftrace_64_pg.o := n
KCOV_INSTRUMENT_ftrace_64_pg.o := n
KCSAN_SANITIZE_ftrace_64_pg.o := n
UBSAN_SANITIZE_ftrace_64_pg.o := n

File diff suppressed because it is too large


@ -1,67 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Split from ftrace_64.S
*/
#include <linux/magic.h>
#include <asm/ppc_asm.h>
#include <asm/asm-offsets.h>
#include <asm/ftrace.h>
#include <asm/ppc-opcode.h>
#include <asm/export.h>
_GLOBAL_TOC(ftrace_caller)
lbz r3, PACA_FTRACE_ENABLED(r13)
cmpdi r3, 0
beqlr
/* Taken from output of objdump from lib64/glibc */
mflr r3
ld r11, 0(r1)
stdu r1, -112(r1)
std r3, 128(r1)
ld r4, 16(r11)
subi r3, r3, MCOUNT_INSN_SIZE
.globl ftrace_call
ftrace_call:
bl ftrace_stub
nop
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
.globl ftrace_graph_call
ftrace_graph_call:
b ftrace_graph_stub
_GLOBAL(ftrace_graph_stub)
#endif
ld r0, 128(r1)
mtlr r0
addi r1, r1, 112
_GLOBAL(ftrace_stub)
blr
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
_GLOBAL(ftrace_graph_caller)
addi r5, r1, 112
/* load r4 with local address */
ld r4, 128(r1)
subi r4, r4, MCOUNT_INSN_SIZE
/* Grab the LR out of the caller stack frame */
ld r11, 112(r1)
ld r3, 16(r11)
bl prepare_ftrace_return
nop
/*
* prepare_ftrace_return gives us the address we divert to.
* Change the LR in the callers stack frame to this.
*/
ld r11, 112(r1)
std r3, 16(r11)
ld r0, 128(r1)
mtlr r0
addi r1, r1, 112
blr
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */


@ -0,0 +1,846 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Code for replacing ftrace calls with jumps.
*
* Copyright (C) 2007-2008 Steven Rostedt <srostedt@redhat.com>
*
* Thanks goes out to P.A. Semi, Inc for supplying me with a PPC64 box.
*
* Added function graph tracer code, taken from x86 that was written
* by Frederic Weisbecker, and ported to PPC by Steven Rostedt.
*
*/
#define pr_fmt(fmt) "ftrace-powerpc: " fmt
#include <linux/spinlock.h>
#include <linux/hardirq.h>
#include <linux/uaccess.h>
#include <linux/module.h>
#include <linux/ftrace.h>
#include <linux/percpu.h>
#include <linux/init.h>
#include <linux/list.h>
#include <asm/cacheflush.h>
#include <asm/code-patching.h>
#include <asm/ftrace.h>
#include <asm/syscall.h>
#include <asm/inst.h>
/*
* We generally only have a single long_branch tramp and at most 2 or 3 plt
* tramps generated. But, we don't use the plt tramps currently. We also allot
* 2 tramps after .text and .init.text. So, we only end up with around 3 usable
* tramps in total. Set aside 8 just to be sure.
*/
#define NUM_FTRACE_TRAMPS 8
static unsigned long ftrace_tramps[NUM_FTRACE_TRAMPS];
static ppc_inst_t
ftrace_call_replace(unsigned long ip, unsigned long addr, int link)
{
ppc_inst_t op;
addr = ppc_function_entry((void *)addr);
/* if (link) set op to 'bl' else 'b' */
create_branch(&op, (u32 *)ip, addr, link ? BRANCH_SET_LINK : 0);
return op;
}
static inline int
ftrace_modify_code(unsigned long ip, ppc_inst_t old, ppc_inst_t new)
{
ppc_inst_t replaced;
/*
* Note:
* We are paranoid about modifying text, as if a bug was to happen, it
* could cause us to read or write to someplace that could cause harm.
* Carefully read and modify the code with probe_kernel_*(), and make
* sure what we read is what we expected it to be before modifying it.
*/
/* read the text we want to modify */
if (copy_inst_from_kernel_nofault(&replaced, (void *)ip))
return -EFAULT;
/* Make sure it is what we expect it to be */
if (!ppc_inst_equal(replaced, old)) {
pr_err("%p: replaced (%08lx) != old (%08lx)", (void *)ip,
ppc_inst_as_ulong(replaced), ppc_inst_as_ulong(old));
return -EINVAL;
}
/* replace the text with the new text */
return patch_instruction((u32 *)ip, new);
}
/*
* Helper functions that are the same for both PPC64 and PPC32.
*/
static int test_24bit_addr(unsigned long ip, unsigned long addr)
{
addr = ppc_function_entry((void *)addr);
return is_offset_in_branch_range(addr - ip);
}
static int is_bl_op(ppc_inst_t op)
{
return (ppc_inst_val(op) & ~PPC_LI_MASK) == PPC_RAW_BL(0);
}
static int is_b_op(ppc_inst_t op)
{
return (ppc_inst_val(op) & ~PPC_LI_MASK) == PPC_RAW_BRANCH(0);
}
static unsigned long find_bl_target(unsigned long ip, ppc_inst_t op)
{
int offset;
offset = PPC_LI(ppc_inst_val(op));
/* make it signed */
if (offset & 0x02000000)
offset |= 0xfe000000;
return ip + (long)offset;
}
#ifdef CONFIG_MODULES
static int
__ftrace_make_nop(struct module *mod,
struct dyn_ftrace *rec, unsigned long addr)
{
unsigned long entry, ptr, tramp;
unsigned long ip = rec->ip;
ppc_inst_t op, pop;
/* read where this goes */
if (copy_inst_from_kernel_nofault(&op, (void *)ip)) {
pr_err("Fetching opcode failed.\n");
return -EFAULT;
}
/* Make sure that this is still a 24bit jump */
if (!is_bl_op(op)) {
pr_err("Not expected bl: opcode is %08lx\n", ppc_inst_as_ulong(op));
return -EINVAL;
}
/* lets find where the pointer goes */
tramp = find_bl_target(ip, op);
pr_devel("ip:%lx jumps to %lx", ip, tramp);
if (module_trampoline_target(mod, tramp, &ptr)) {
pr_err("Failed to get trampoline target\n");
return -EFAULT;
}
pr_devel("trampoline target %lx", ptr);
entry = ppc_global_function_entry((void *)addr);
/* This should match what was called */
if (ptr != entry) {
pr_err("addr %lx does not match expected %lx\n", ptr, entry);
return -EINVAL;
}
if (IS_ENABLED(CONFIG_MPROFILE_KERNEL)) {
if (copy_inst_from_kernel_nofault(&op, (void *)(ip - 4))) {
pr_err("Fetching instruction at %lx failed.\n", ip - 4);
return -EFAULT;
}
/* We expect either a mflr r0, or a std r0, LRSAVE(r1) */
if (!ppc_inst_equal(op, ppc_inst(PPC_RAW_MFLR(_R0))) &&
!ppc_inst_equal(op, ppc_inst(PPC_INST_STD_LR))) {
pr_err("Unexpected instruction %08lx around bl _mcount\n",
ppc_inst_as_ulong(op));
return -EINVAL;
}
} else if (IS_ENABLED(CONFIG_PPC64)) {
/*
* Check what is in the next instruction. We can see ld r2,40(r1), but
* on first pass after boot we will see mflr r0.
*/
if (copy_inst_from_kernel_nofault(&op, (void *)(ip + 4))) {
pr_err("Fetching op failed.\n");
return -EFAULT;
}
if (!ppc_inst_equal(op, ppc_inst(PPC_INST_LD_TOC))) {
pr_err("Expected %08lx found %08lx\n", PPC_INST_LD_TOC,
ppc_inst_as_ulong(op));
return -EINVAL;
}
}
/*
* When using -mprofile-kernel or PPC32 there is no load to jump over.
*
* Otherwise our original call site looks like:
*
* bl <tramp>
* ld r2,XX(r1)
*
* Milton Miller pointed out that we can not simply nop the branch.
* If a task was preempted when calling a trace function, the nops
* will remove the way to restore the TOC in r2 and the r2 TOC will
* get corrupted.
*
* Use a b +8 to jump over the load.
*/
if (IS_ENABLED(CONFIG_MPROFILE_KERNEL) || IS_ENABLED(CONFIG_PPC32))
pop = ppc_inst(PPC_RAW_NOP());
else
pop = ppc_inst(PPC_RAW_BRANCH(8)); /* b +8 */
if (patch_instruction((u32 *)ip, pop)) {
pr_err("Patching NOP failed.\n");
return -EPERM;
}
return 0;
}
#else
static int __ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec, unsigned long addr)
{
return 0;
}
#endif /* CONFIG_MODULES */
static unsigned long find_ftrace_tramp(unsigned long ip)
{
int i;
/*
* We have the compiler generated long_branch tramps at the end
* and we prefer those
*/
for (i = NUM_FTRACE_TRAMPS - 1; i >= 0; i--)
if (!ftrace_tramps[i])
continue;
else if (is_offset_in_branch_range(ftrace_tramps[i] - ip))
return ftrace_tramps[i];
return 0;
}
static int add_ftrace_tramp(unsigned long tramp)
{
int i;
for (i = 0; i < NUM_FTRACE_TRAMPS; i++)
if (!ftrace_tramps[i]) {
ftrace_tramps[i] = tramp;
return 0;
}
return -1;
}
/*
* If this is a compiler generated long_branch trampoline (essentially, a
* trampoline that has a branch to _mcount()), we re-write the branch to
* instead go to ftrace_[regs_]caller() and note down the location of this
* trampoline.
*/
static int setup_mcount_compiler_tramp(unsigned long tramp)
{
int i;
ppc_inst_t op;
unsigned long ptr;
/* Is this a known long jump tramp? */
for (i = 0; i < NUM_FTRACE_TRAMPS; i++)
if (ftrace_tramps[i] == tramp)
return 0;
/* New trampoline -- read where this goes */
if (copy_inst_from_kernel_nofault(&op, (void *)tramp)) {
pr_debug("Fetching opcode failed.\n");
return -1;
}
/* Is this a 24 bit branch? */
if (!is_b_op(op)) {
pr_debug("Trampoline is not a long branch tramp.\n");
return -1;
}
/* lets find where the pointer goes */
ptr = find_bl_target(tramp, op);
if (ptr != ppc_global_function_entry((void *)_mcount)) {
pr_debug("Trampoline target %p is not _mcount\n", (void *)ptr);
return -1;
}
/* Let's re-write the tramp to go to ftrace_[regs_]caller */
if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS))
ptr = ppc_global_function_entry((void *)ftrace_regs_caller);
else
ptr = ppc_global_function_entry((void *)ftrace_caller);
if (patch_branch((u32 *)tramp, ptr, 0)) {
pr_debug("REL24 out of range!\n");
return -1;
}
if (add_ftrace_tramp(tramp)) {
pr_debug("No tramp locations left\n");
return -1;
}
return 0;
}
static int __ftrace_make_nop_kernel(struct dyn_ftrace *rec, unsigned long addr)
{
unsigned long tramp, ip = rec->ip;
ppc_inst_t op;
/* Read where this goes */
if (copy_inst_from_kernel_nofault(&op, (void *)ip)) {
pr_err("Fetching opcode failed.\n");
return -EFAULT;
}
/* Make sure that this is still a 24bit jump */
if (!is_bl_op(op)) {
pr_err("Not expected bl: opcode is %08lx\n", ppc_inst_as_ulong(op));
return -EINVAL;
}
/* Let's find where the pointer goes */
tramp = find_bl_target(ip, op);
pr_devel("ip:%lx jumps to %lx", ip, tramp);
if (setup_mcount_compiler_tramp(tramp)) {
/* Are other trampolines reachable? */
if (!find_ftrace_tramp(ip)) {
pr_err("No ftrace trampolines reachable from %ps\n",
(void *)ip);
return -EINVAL;
}
}
if (patch_instruction((u32 *)ip, ppc_inst(PPC_RAW_NOP()))) {
pr_err("Patching NOP failed.\n");
return -EPERM;
}
return 0;
}
int ftrace_make_nop(struct module *mod,
struct dyn_ftrace *rec, unsigned long addr)
{
unsigned long ip = rec->ip;
ppc_inst_t old, new;
/*
* If the calling address is more that 24 bits away,
* then we had to use a trampoline to make the call.
* Otherwise just update the call site.
*/
if (test_24bit_addr(ip, addr)) {
/* within range */
old = ftrace_call_replace(ip, addr, 1);
new = ppc_inst(PPC_RAW_NOP());
return ftrace_modify_code(ip, old, new);
} else if (core_kernel_text(ip)) {
return __ftrace_make_nop_kernel(rec, addr);
} else if (!IS_ENABLED(CONFIG_MODULES)) {
return -EINVAL;
}
/*
* Out of range jumps are called from modules.
* We should either already have a pointer to the module
* or it has been passed in.
*/
if (!rec->arch.mod) {
if (!mod) {
pr_err("No module loaded addr=%lx\n", addr);
return -EFAULT;
}
rec->arch.mod = mod;
} else if (mod) {
if (mod != rec->arch.mod) {
pr_err("Record mod %p not equal to passed in mod %p\n",
rec->arch.mod, mod);
return -EINVAL;
}
/* nothing to do if mod == rec->arch.mod */
} else
mod = rec->arch.mod;
return __ftrace_make_nop(mod, rec, addr);
}
#ifdef CONFIG_MODULES
/*
* Examine the existing instructions for __ftrace_make_call.
* They should effectively be a NOP, and follow formal constraints,
* depending on the ABI. Return false if they don't.
*/
static bool expected_nop_sequence(void *ip, ppc_inst_t op0, ppc_inst_t op1)
{
if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS))
return ppc_inst_equal(op0, ppc_inst(PPC_RAW_NOP()));
else
return ppc_inst_equal(op0, ppc_inst(PPC_RAW_BRANCH(8))) &&
ppc_inst_equal(op1, ppc_inst(PPC_INST_LD_TOC));
}
static int
__ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
{
ppc_inst_t op[2];
void *ip = (void *)rec->ip;
unsigned long entry, ptr, tramp;
struct module *mod = rec->arch.mod;
/* read where this goes */
if (copy_inst_from_kernel_nofault(op, ip))
return -EFAULT;
if (!IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS) &&
copy_inst_from_kernel_nofault(op + 1, ip + 4))
return -EFAULT;
if (!expected_nop_sequence(ip, op[0], op[1])) {
pr_err("Unexpected call sequence at %p: %08lx %08lx\n", ip,
ppc_inst_as_ulong(op[0]), ppc_inst_as_ulong(op[1]));
return -EINVAL;
}
/* If we never set up ftrace trampoline(s), then bail */
if (!mod->arch.tramp ||
(IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS) && !mod->arch.tramp_regs)) {
pr_err("No ftrace trampoline\n");
return -EINVAL;
}
if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS) && rec->flags & FTRACE_FL_REGS)
tramp = mod->arch.tramp_regs;
else
tramp = mod->arch.tramp;
if (module_trampoline_target(mod, tramp, &ptr)) {
pr_err("Failed to get trampoline target\n");
return -EFAULT;
}
pr_devel("trampoline target %lx", ptr);
entry = ppc_global_function_entry((void *)addr);
/* This should match what was called */
if (ptr != entry) {
pr_err("addr %lx does not match expected %lx\n", ptr, entry);
return -EINVAL;
}
if (patch_branch(ip, tramp, BRANCH_SET_LINK)) {
pr_err("REL24 out of range!\n");
return -EINVAL;
}
return 0;
}
#else
static int __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
{
return 0;
}
#endif /* CONFIG_MODULES */
static int __ftrace_make_call_kernel(struct dyn_ftrace *rec, unsigned long addr)
{
ppc_inst_t op;
void *ip = (void *)rec->ip;
unsigned long tramp, entry, ptr;
/* Make sure we're being asked to patch branch to a known ftrace addr */
entry = ppc_global_function_entry((void *)ftrace_caller);
ptr = ppc_global_function_entry((void *)addr);
if (ptr != entry && IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS))
entry = ppc_global_function_entry((void *)ftrace_regs_caller);
if (ptr != entry) {
pr_err("Unknown ftrace addr to patch: %ps\n", (void *)ptr);
return -EINVAL;
}
/* Make sure we have a nop */
if (copy_inst_from_kernel_nofault(&op, ip)) {
pr_err("Unable to read ftrace location %p\n", ip);
return -EFAULT;
}
if (!ppc_inst_equal(op, ppc_inst(PPC_RAW_NOP()))) {
pr_err("Unexpected call sequence at %p: %08lx\n",
ip, ppc_inst_as_ulong(op));
return -EINVAL;
}
tramp = find_ftrace_tramp((unsigned long)ip);
if (!tramp) {
pr_err("No ftrace trampolines reachable from %ps\n", ip);
return -EINVAL;
}
if (patch_branch(ip, tramp, BRANCH_SET_LINK)) {
pr_err("Error patching branch to ftrace tramp!\n");
return -EINVAL;
}
return 0;
}
int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
{
unsigned long ip = rec->ip;
ppc_inst_t old, new;
/*
* If the calling address is more that 24 bits away,
* then we had to use a trampoline to make the call.
* Otherwise just update the call site.
*/
if (test_24bit_addr(ip, addr)) {
/* within range */
old = ppc_inst(PPC_RAW_NOP());
new = ftrace_call_replace(ip, addr, 1);
return ftrace_modify_code(ip, old, new);
} else if (core_kernel_text(ip)) {
return __ftrace_make_call_kernel(rec, addr);
} else if (!IS_ENABLED(CONFIG_MODULES)) {
/* We should not get here without modules */
return -EINVAL;
}
/*
* Out of range jumps are called from modules.
* Being that we are converting from nop, it had better
* already have a module defined.
*/
if (!rec->arch.mod) {
pr_err("No module loaded\n");
return -EINVAL;
}
return __ftrace_make_call(rec, addr);
}
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
#ifdef CONFIG_MODULES
static int
__ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
unsigned long addr)
{
ppc_inst_t op;
unsigned long ip = rec->ip;
unsigned long entry, ptr, tramp;
struct module *mod = rec->arch.mod;
/* If we never set up ftrace trampolines, then bail */
if (!mod->arch.tramp || !mod->arch.tramp_regs) {
pr_err("No ftrace trampoline\n");
return -EINVAL;
}
/* read where this goes */
if (copy_inst_from_kernel_nofault(&op, (void *)ip)) {
pr_err("Fetching opcode failed.\n");
return -EFAULT;
}
/* Make sure that this is still a 24bit jump */
if (!is_bl_op(op)) {
pr_err("Not expected bl: opcode is %08lx\n", ppc_inst_as_ulong(op));
return -EINVAL;
}
/* lets find where the pointer goes */
tramp = find_bl_target(ip, op);
entry = ppc_global_function_entry((void *)old_addr);
pr_devel("ip:%lx jumps to %lx", ip, tramp);
if (tramp != entry) {
/* old_addr is not within range, so we must have used a trampoline */
if (module_trampoline_target(mod, tramp, &ptr)) {
pr_err("Failed to get trampoline target\n");
return -EFAULT;
}
pr_devel("trampoline target %lx", ptr);
/* This should match what was called */
if (ptr != entry) {
pr_err("addr %lx does not match expected %lx\n", ptr, entry);
return -EINVAL;
}
}
/* The new target may be within range */
if (test_24bit_addr(ip, addr)) {
/* within range */
if (patch_branch((u32 *)ip, addr, BRANCH_SET_LINK)) {
pr_err("REL24 out of range!\n");
return -EINVAL;
}
return 0;
}
if (rec->flags & FTRACE_FL_REGS)
tramp = mod->arch.tramp_regs;
else
tramp = mod->arch.tramp;
if (module_trampoline_target(mod, tramp, &ptr)) {
pr_err("Failed to get trampoline target\n");
return -EFAULT;
}
pr_devel("trampoline target %lx", ptr);
entry = ppc_global_function_entry((void *)addr);
/* This should match what was called */
if (ptr != entry) {
pr_err("addr %lx does not match expected %lx\n", ptr, entry);
return -EINVAL;
}
if (patch_branch((u32 *)ip, tramp, BRANCH_SET_LINK)) {
pr_err("REL24 out of range!\n");
return -EINVAL;
}
return 0;
}
#else
static int __ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr, unsigned long addr)
{
return 0;
}
#endif
int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
unsigned long addr)
{
unsigned long ip = rec->ip;
ppc_inst_t old, new;
/*
* If the calling address is more that 24 bits away,
* then we had to use a trampoline to make the call.
* Otherwise just update the call site.
*/
if (test_24bit_addr(ip, addr) && test_24bit_addr(ip, old_addr)) {
/* within range */
old = ftrace_call_replace(ip, old_addr, 1);
new = ftrace_call_replace(ip, addr, 1);
return ftrace_modify_code(ip, old, new);
} else if (core_kernel_text(ip)) {
/*
* We always patch out of range locations to go to the regs
* variant, so there is nothing to do here
*/
return 0;
} else if (!IS_ENABLED(CONFIG_MODULES)) {
/* We should not get here without modules */
return -EINVAL;
}
/*
* Out of range jumps are called from modules.
*/
if (!rec->arch.mod) {
pr_err("No module loaded\n");
return -EINVAL;
}
return __ftrace_modify_call(rec, old_addr, addr);
}
#endif
int ftrace_update_ftrace_func(ftrace_func_t func)
{
unsigned long ip = (unsigned long)(&ftrace_call);
ppc_inst_t old, new;
int ret;
old = ppc_inst_read((u32 *)&ftrace_call);
new = ftrace_call_replace(ip, (unsigned long)func, 1);
ret = ftrace_modify_code(ip, old, new);
/* Also update the regs callback function */
if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS) && !ret) {
ip = (unsigned long)(&ftrace_regs_call);
old = ppc_inst_read((u32 *)&ftrace_regs_call);
new = ftrace_call_replace(ip, (unsigned long)func, 1);
ret = ftrace_modify_code(ip, old, new);
}
return ret;
}
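For reference, the instruction being swapped in at these call sites is a relative "bl". A rough sketch of its encoding, assuming the standard I-form branch layout (the real ftrace_call_replace() builds the instruction through create_branch() and returns a ppc_inst_t):

/*
 * Sketch only: I-form branch, opcode 18, 24-bit word-aligned
 * displacement, AA=0, LK=1 ("bl").  Assumes the target is in range.
 */
static u32 bl_insn(unsigned long ip, unsigned long addr)
{
	long offset = (long)addr - (long)ip;

	return (18u << 26) | ((u32)offset & 0x03fffffcu) | 1u;
}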
/*
* Use the default ftrace_modify_all_code, but without
* stop_machine().
*/
void arch_ftrace_update_code(int command)
{
ftrace_modify_all_code(command);
}
#ifdef CONFIG_PPC64
#define PACATOC offsetof(struct paca_struct, kernel_toc)
extern unsigned int ftrace_tramp_text[], ftrace_tramp_init[];
void ftrace_free_init_tramp(void)
{
	int i;

	for (i = 0; i < NUM_FTRACE_TRAMPS && ftrace_tramps[i]; i++)
		if (ftrace_tramps[i] == (unsigned long)ftrace_tramp_init) {
			ftrace_tramps[i] = 0;
			return;
		}
}
int __init ftrace_dyn_arch_init(void)
{
int i;
unsigned int *tramp[] = { ftrace_tramp_text, ftrace_tramp_init };
u32 stub_insns[] = {
PPC_RAW_LD(_R12, _R13, PACATOC),
PPC_RAW_ADDIS(_R12, _R12, 0),
PPC_RAW_ADDI(_R12, _R12, 0),
PPC_RAW_MTCTR(_R12),
PPC_RAW_BCTR()
};
unsigned long addr;
long reladdr;
if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS))
addr = ppc_global_function_entry((void *)ftrace_regs_caller);
else
addr = ppc_global_function_entry((void *)ftrace_caller);
reladdr = addr - kernel_toc_addr();
if (reladdr >= SZ_2G || reladdr < -(long)SZ_2G) {
pr_err("Address of %ps out of range of kernel_toc.\n",
(void *)addr);
return -1;
}
for (i = 0; i < 2; i++) {
memcpy(tramp[i], stub_insns, sizeof(stub_insns));
tramp[i][1] |= PPC_HA(reladdr);
tramp[i][2] |= PPC_LO(reladdr);
add_ftrace_tramp((unsigned long)tramp[i]);
}
return 0;
}
#endif
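The stub patching above splits the TOC-relative offset across the addis/addi pair using the usual high-adjusted/low decomposition. A small sketch of that arithmetic (the kernel expresses the same thing with its PPC_HA()/PPC_LO() macros):

/*
 * Sketch only: the high half is rounded up ("high adjusted") whenever
 * the low half is negative as a signed 16-bit value, so that
 * (ha << 16) + (s16)lo reconstructs reladdr.
 */
static inline unsigned int ha16(long reladdr)
{
	return ((reladdr + 0x8000) >> 16) & 0xffff;
}

static inline unsigned int lo16(long reladdr)
{
	return reladdr & 0xffff;
}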
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
extern void ftrace_graph_call(void);
extern void ftrace_graph_stub(void);
static int ftrace_modify_ftrace_graph_caller(bool enable)
{
unsigned long ip = (unsigned long)(&ftrace_graph_call);
unsigned long addr = (unsigned long)(&ftrace_graph_caller);
unsigned long stub = (unsigned long)(&ftrace_graph_stub);
ppc_inst_t old, new;
if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_ARGS))
return 0;
old = ftrace_call_replace(ip, enable ? stub : addr, 0);
new = ftrace_call_replace(ip, enable ? addr : stub, 0);
return ftrace_modify_code(ip, old, new);
}
int ftrace_enable_ftrace_graph_caller(void)
{
return ftrace_modify_ftrace_graph_caller(true);
}
int ftrace_disable_ftrace_graph_caller(void)
{
return ftrace_modify_ftrace_graph_caller(false);
}
/*
* Hook the return address and push it onto the stack of return
* addresses in the current thread info. Return the address we want to
* divert to.
*/
static unsigned long
__prepare_ftrace_return(unsigned long parent, unsigned long ip, unsigned long sp)
{
unsigned long return_hooker;
int bit;
if (unlikely(ftrace_graph_is_dead()))
goto out;
if (unlikely(atomic_read(&current->tracing_graph_pause)))
goto out;
bit = ftrace_test_recursion_trylock(ip, parent);
if (bit < 0)
goto out;
return_hooker = ppc_function_entry(return_to_handler);
if (!function_graph_enter(parent, ip, 0, (unsigned long *)sp))
parent = return_hooker;
ftrace_test_recursion_unlock(bit);
out:
return parent;
}
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS
void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct ftrace_regs *fregs)
{
fregs->regs.link = __prepare_ftrace_return(parent_ip, ip, fregs->regs.gpr[1]);
}
#else
unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip,
unsigned long sp)
{
return __prepare_ftrace_return(parent, ip, sp);
}
#endif
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
#ifdef CONFIG_PPC64_ELF_ABI_V1
char *arch_ftrace_match_adjust(char *str, const char *search)
{
if (str[0] == '.' && search[0] != '.')
return str + 1;
else
return str;
}
#endif /* CONFIG_PPC64_ELF_ABI_V1 */
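Under the ELFv1 ABI the text symbol for a function carries a leading dot (the plain name belongs to the function descriptor), so a filter such as "schedule" would otherwise never match the ".schedule" entry. A simplified usage illustration, assuming a kernel context; the real ftrace filter code also supports glob patterns rather than a plain string compare:

/*
 * Illustration only: exact-match check showing why the leading dot is
 * skipped when the search string does not start with one.
 */
static bool filter_matches(char *sym, const char *search)
{
	return strcmp(arch_ftrace_match_adjust(sym, search), search) == 0;
}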

@@ -1,28 +1,82 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Split from entry_64.S
* Split from ftrace_64.S
*/
#include <linux/export.h>
#include <linux/magic.h>
#include <asm/ppc_asm.h>
#include <asm/asm-offsets.h>
#include <asm/ftrace.h>
#include <asm/ppc-opcode.h>
#include <asm/export.h>
#ifdef CONFIG_PPC64
_GLOBAL_TOC(ftrace_caller)
lbz r3, PACA_FTRACE_ENABLED(r13)
cmpdi r3, 0
beqlr
/* Taken from output of objdump from lib64/glibc */
mflr r3
ld r11, 0(r1)
stdu r1, -112(r1)
std r3, 128(r1)
ld r4, 16(r11)
subi r3, r3, MCOUNT_INSN_SIZE
.globl ftrace_call
ftrace_call:
bl ftrace_stub
nop
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
.globl ftrace_graph_call
ftrace_graph_call:
b ftrace_graph_stub
_GLOBAL(ftrace_graph_stub)
#endif
ld r0, 128(r1)
mtlr r0
addi r1, r1, 112
_GLOBAL(ftrace_stub)
blr
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
_GLOBAL(ftrace_graph_caller)
addi r5, r1, 112
/* load r4 with local address */
ld r4, 128(r1)
subi r4, r4, MCOUNT_INSN_SIZE
/* Grab the LR out of the caller stack frame */
ld r11, 112(r1)
ld r3, 16(r11)
bl prepare_ftrace_return
nop
/*
* prepare_ftrace_return gives us the address we divert to.
* Change the LR in the caller's stack frame to this.
*/
ld r11, 112(r1)
std r3, 16(r11)
ld r0, 128(r1)
mtlr r0
addi r1, r1, 112
blr
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
.pushsection ".tramp.ftrace.text","aw",@progbits;
.globl ftrace_tramp_text
ftrace_tramp_text:
.space 64
.space 32
.popsection
.pushsection ".tramp.ftrace.init","aw",@progbits;
.globl ftrace_tramp_init
ftrace_tramp_init:
.space 64
.space 32
.popsection
#endif
_GLOBAL(mcount)
_GLOBAL(_mcount)

@@ -3,12 +3,12 @@
* Split from ftrace_64.S
*/
#include <linux/export.h>
#include <linux/magic.h>
#include <asm/ppc_asm.h>
#include <asm/asm-offsets.h>
#include <asm/ftrace.h>
#include <asm/ppc-opcode.h>
#include <asm/export.h>
#include <asm/thread_info.h>
#include <asm/bug.h>
#include <asm/ptrace.h>
@@ -254,3 +254,70 @@ livepatch_handler:
/* Return to original caller of live patched function */
blr
#endif /* CONFIG_LIVEPATCH */
#ifndef CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY
_GLOBAL(mcount)
_GLOBAL(_mcount)
EXPORT_SYMBOL(_mcount)
mflr r12
mtctr r12
mtlr r0
bctr
#endif
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
_GLOBAL(return_to_handler)
/* need to save return values */
#ifdef CONFIG_PPC64
std r4, -32(r1)
std r3, -24(r1)
/* save TOC */
std r2, -16(r1)
std r31, -8(r1)
mr r31, r1
stdu r1, -112(r1)
/*
* We might be called from a module.
* Switch to our TOC to run inside the core kernel.
*/
LOAD_PACA_TOC()
#else
stwu r1, -16(r1)
stw r3, 8(r1)
stw r4, 12(r1)
#endif
bl ftrace_return_to_handler
nop
/* return value has real return address */
mtlr r3
#ifdef CONFIG_PPC64
ld r1, 0(r1)
ld r4, -32(r1)
ld r3, -24(r1)
ld r2, -16(r1)
ld r31, -8(r1)
#else
lwz r3, 8(r1)
lwz r4, 12(r1)
addi r1, r1, 16
#endif
/* Jump back to real return address */
blr
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
.pushsection ".tramp.ftrace.text","aw",@progbits;
.globl ftrace_tramp_text
ftrace_tramp_text:
.space 32
.popsection
.pushsection ".tramp.ftrace.init","aw",@progbits;
.globl ftrace_tramp_init
ftrace_tramp_init:
.space 32
.popsection
