Long Li says:

====================
Introduce Microsoft Azure Network Adapter (MANA) RDMA driver [netdev prep]

The first 11 patches modify the MANA Ethernet driver to support the
RDMA driver.

* 'mana-shared-6.2' of https://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma:
  net: mana: Define data structures for protection domain and memory registration
  net: mana: Define data structures for allocating doorbell page from GDMA
  net: mana: Define and process GDMA response code GDMA_STATUS_MORE_ENTRIES
  net: mana: Define max values for SGL entries
  net: mana: Move header files to a common location
  net: mana: Record port number in netdev
  net: mana: Export Work Queue functions for use by RDMA driver
  net: mana: Set the DMA device max segment size
  net: mana: Handle vport sharing between devices
  net: mana: Record the physical address for doorbell page region
  net: mana: Add support for auxiliary device
====================

Link: https://lore.kernel.org/all/1667502990-2559-1-git-send-email-longli@linuxonhyperv.com/
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Commit 1ec5617432 by Leon Romanovsky, 2022-11-11 11:35:12 +02:00
850 changed files with 10053 additions and 6989 deletions
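
The series ends by exposing an auxiliary device that the RDMA driver binds to
("net: mana: Add support for auxiliary device" above). For orientation only, a
minimal sketch of the generic Linux auxiliary-bus pattern such a hookup
typically follows; struct mana_adev, mana_publish_rdma_device() and the "rdma"
device name are illustrative assumptions, not code taken from these patches.

#include <linux/auxiliary_bus.h>
#include <linux/slab.h>

struct mana_adev {
	struct auxiliary_device adev;	/* embedded auxiliary device */
	void *shared_ctx;		/* state shared with the netdev side */
};

static void mana_adev_release(struct device *dev)
{
	struct auxiliary_device *adev = to_auxiliary_dev(dev);

	kfree(container_of(adev, struct mana_adev, adev));
}

/* Called by the Ethernet driver once its own probe has completed. */
static int mana_publish_rdma_device(struct device *parent, void *shared_ctx)
{
	struct mana_adev *madev;
	int ret;

	madev = kzalloc(sizeof(*madev), GFP_KERNEL);
	if (!madev)
		return -ENOMEM;

	madev->shared_ctx = shared_ctx;
	madev->adev.name = "rdma";	/* consumer matches "<module>.rdma" */
	madev->adev.dev.parent = parent;
	madev->adev.dev.release = mana_adev_release;

	ret = auxiliary_device_init(&madev->adev);
	if (ret) {
		kfree(madev);
		return ret;
	}

	ret = auxiliary_device_add(&madev->adev);
	if (ret)
		auxiliary_device_uninit(&madev->adev);	/* release() frees madev */

	return ret;
}

On the consumer side, an auxiliary_driver with a matching id table gets its
probe() callback invoked for this device, which is how the RDMA driver picks
up the shared context published here.
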


@ -104,6 +104,7 @@ Christoph Hellwig <hch@lst.de>
Colin Ian King <colin.i.king@gmail.com> <colin.king@canonical.com>
Corey Minyard <minyard@acm.org>
Damian Hobson-Garcia <dhobsong@igel.co.jp>
Dan Carpenter <error27@gmail.com> <dan.carpenter@oracle.com>
Daniel Borkmann <daniel@iogearbox.net> <danborkmann@googlemail.com>
Daniel Borkmann <daniel@iogearbox.net> <danborkmann@iogearbox.net>
Daniel Borkmann <daniel@iogearbox.net> <daniel.borkmann@tik.ee.ethz.ch>
@ -353,7 +354,8 @@ Peter Oruba <peter@oruba.de>
Pratyush Anand <pratyush.anand@gmail.com> <pratyush.anand@st.com>
Praveen BP <praveenbp@ti.com>
Punit Agrawal <punitagrawal@gmail.com> <punit.agrawal@arm.com>
Qais Yousef <qsyousef@gmail.com> <qais.yousef@imgtec.com>
Qais Yousef <qyousef@layalina.io> <qais.yousef@imgtec.com>
Qais Yousef <qyousef@layalina.io> <qais.yousef@arm.com>
Quentin Monnet <quentin@isovalent.com> <quentin.monnet@netronome.com>
Quentin Perret <qperret@qperret.net> <quentin.perret@arm.com>
Rafael J. Wysocki <rjw@rjwysocki.net> <rjw@sisk.pl>


@ -10,7 +10,7 @@ Description: A collection of all the memory tiers allocated.
What: /sys/devices/virtual/memory_tiering/memory_tierN/
/sys/devices/virtual/memory_tiering/memory_tierN/nodes
/sys/devices/virtual/memory_tiering/memory_tierN/nodelist
Date: August 2022
Contact: Linux memory management mailing list <linux-mm@kvack.org>
Description: Directory with details of a specific memory tier
@ -21,5 +21,5 @@ Description: Directory with details of a specific memory tier
A smaller value of N implies a higher (faster) memory tier in the
hierarchy.
nodes: NUMA nodes that are part of this memory tier.
nodelist: NUMA nodes that are part of this memory tier.


@ -9,7 +9,6 @@ the Linux ACPI support.
:maxdepth: 1
initrd_table_override
dsdt-override
ssdt-overlays
cppc_sysfs
fan_performance_states


@ -141,6 +141,10 @@ root_hash_sig_key_desc <key_description>
also gain new certificates at run time if they are signed by a certificate
already in the secondary trusted keyring.
try_verify_in_tasklet
If verity hashes are in cache, verify data blocks in kernel tasklet instead
of workqueue. This option can reduce IO latency.
Theory of operation
===================


@ -1318,7 +1318,7 @@ instance. This setup would require the following commands:
$ v4l2-ctl -d2 -i2
$ v4l2-ctl -d2 -c horizontal_movement=4
$ v4l2-ctl -d1 --overlay=1
$ v4l2-ctl -d1 -c loop_video=1
$ v4l2-ctl -d0 -c loop_video=1
$ v4l2-ctl -d2 --stream-mmap --overlay=1
And from another console:


@ -144,6 +144,42 @@ managing and controlling ublk devices with help of several control commands:
For retrieving device info via ``ublksrv_ctrl_dev_info``. It is the server's
responsibility to save IO target specific info in userspace.
- ``UBLK_CMD_START_USER_RECOVERY``
This command is valid if ``UBLK_F_USER_RECOVERY`` feature is enabled. This
command is accepted after the old process has exited, ublk device is quiesced
and ``/dev/ublkc*`` is released. User should send this command before he starts
a new process which re-opens ``/dev/ublkc*``. When this command returns, the
ublk device is ready for the new process.
- ``UBLK_CMD_END_USER_RECOVERY``
This command is valid if ``UBLK_F_USER_RECOVERY`` feature is enabled. This
command is accepted after ublk device is quiesced and a new process has
opened ``/dev/ublkc*`` and get all ublk queues be ready. When this command
returns, ublk device is unquiesced and new I/O requests are passed to the
new process.
- user recovery feature description
Two new features are added for user recovery: ``UBLK_F_USER_RECOVERY`` and
``UBLK_F_USER_RECOVERY_REISSUE``.
With ``UBLK_F_USER_RECOVERY`` set, after one ubq_daemon(ublk server's io
handler) is dying, ublk does not delete ``/dev/ublkb*`` during the whole
recovery stage and ublk device ID is kept. It is ublk server's
responsibility to recover the device context by its own knowledge.
Requests which have not been issued to userspace are requeued. Requests
which have been issued to userspace are aborted.
With ``UBLK_F_USER_RECOVERY_REISSUE`` set, after one ubq_daemon(ublk
server's io handler) is dying, contrary to ``UBLK_F_USER_RECOVERY``,
requests which have been issued to userspace are requeued and will be
re-issued to the new process after handling ``UBLK_CMD_END_USER_RECOVERY``.
``UBLK_F_USER_RECOVERY_REISSUE`` is designed for backends who tolerate
double-write since the driver may issue the same I/O request twice. It
might be useful to a read-only FS or a VM backend.
Data plane
----------


@ -118,6 +118,12 @@ Text Searching
CRC and Math Functions in Linux
===============================
Arithmetic Overflow Checking
----------------------------
.. kernel-doc:: include/linux/overflow.h
:internal:
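
For reference, a hedged usage sketch of the overflow helpers documented above;
alloc_table() is an illustrative, made-up caller, not part of the patch. The
check_*_overflow() helpers return true when the operation would wrap.

#include <linux/overflow.h>
#include <linux/slab.h>

/* Refuse the allocation instead of silently wrapping. */
static void *alloc_table(size_t nmemb, size_t entry_size)
{
	size_t bytes;

	if (check_mul_overflow(nmemb, entry_size, &bytes))
		return NULL;	/* nmemb * entry_size would overflow size_t */

	return kzalloc(bytes, GFP_KERNEL);
}
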
CRC Functions
-------------


@ -1,9 +0,0 @@
Dongwoon Anatech DW9714 camera voice coil lens driver
DW9174 is a 10-bit DAC with current sink capability. It is intended
for driving voice coil lenses in camera modules.
Mandatory properties:
- compatible: "dongwoon,dw9714"
- reg: I²C slave address


@ -0,0 +1,47 @@
# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/media/i2c/dongwoon,dw9714.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Dongwoon Anatech DW9714 camera voice coil lens driver
maintainers:
- Krzysztof Kozlowski <krzk@kernel.org>
description:
DW9174 is a 10-bit DAC with current sink capability. It is intended for
driving voice coil lenses in camera modules.
properties:
compatible:
const: dongwoon,dw9714
reg:
maxItems: 1
powerdown-gpios:
description:
XSD pin for shutdown (active low)
vcc-supply:
description: VDD power supply
required:
- compatible
- reg
additionalProperties: false
examples:
- |
i2c {
#address-cells = <1>;
#size-cells = <0>;
camera-lens@c {
compatible = "dongwoon,dw9714";
reg = <0x0c>;
vcc-supply = <&reg_csi_1v8>;
};
};


@ -8,7 +8,6 @@ title: Samsung S3FWRN5 NCI NFC Controller
maintainers:
- Krzysztof Kozlowski <krzk@kernel.org>
- Krzysztof Opasiak <k.opasiak@samsung.com>
properties:
compatible:


@ -274,10 +274,6 @@ patternProperties:
slew-rate:
enum: [0, 1]
output-enable:
description:
This will internally disable the tri-state for MIO pins.
drive-strength:
description:
Selects the drive strength for MIO pins, in mA.


@ -107,9 +107,6 @@ Kernel utility functions
.. kernel-doc:: kernel/panic.c
:export:
.. kernel-doc:: include/linux/overflow.h
:internal:
Device Resource Management
--------------------------


@ -214,18 +214,29 @@ Link properties can be modified at runtime by calling
Pipelines and media streams
^^^^^^^^^^^^^^^^^^^^^^^^^^^
A media stream is a stream of pixels or metadata originating from one or more
source devices (such as a sensors) and flowing through media entity pads
towards the final sinks. The stream can be modified on the route by the
devices (e.g. scaling or pixel format conversions), or it can be split into
multiple branches, or multiple branches can be merged.
A media pipeline is a set of media streams which are interdependent. This
interdependency can be caused by the hardware (e.g. configuration of a second
stream cannot be changed if the first stream has been enabled) or by the driver
due to the software design. Most commonly a media pipeline consists of a single
stream which does not branch.
When starting streaming, drivers must notify all entities in the pipeline to
prevent link states from being modified during streaming by calling
:c:func:`media_pipeline_start()`.
The function will mark all entities connected to the given entity through
enabled links, either directly or indirectly, as streaming.
The function will mark all the pads which are part of the pipeline as streaming.
The struct media_pipeline instance pointed to by
the pipe argument will be stored in every entity in the pipeline.
the pipe argument will be stored in every pad in the pipeline.
Drivers should embed the struct media_pipeline
in higher-level pipeline structures and can then access the
pipeline through the struct media_entity
pipeline through the struct media_pad
pipe field.
Calls to :c:func:`media_pipeline_start()` can be nested.


@ -19,6 +19,8 @@ Supported devices:
Corsair HX1200i
Corsair HX1500i
Corsair RM550i
Corsair RM650i


@ -319,3 +319,13 @@ unpatched tree to confirm infrastructure didn't mangle it.
Finally, go back and read
:ref:`Documentation/process/submitting-patches.rst <submittingpatches>`
to be sure you are not repeating some common mistake documented there.
My company uses peer feedback in employee performance reviews. Can I ask netdev maintainers for feedback?
---------------------------------------------------------------------------------------------------------
Yes, especially if you spend significant amount of time reviewing code
and go out of your way to improve shared infrastructure.
The feedback must be requested by you, the contributor, and will always
be shared with you (even if you request for it to be submitted to your
manager).


@ -239,6 +239,7 @@ ignore define CEC_OP_FEAT_DEV_HAS_DECK_CONTROL
ignore define CEC_OP_FEAT_DEV_HAS_SET_AUDIO_RATE
ignore define CEC_OP_FEAT_DEV_SINK_HAS_ARC_TX
ignore define CEC_OP_FEAT_DEV_SOURCE_HAS_ARC_RX
ignore define CEC_OP_FEAT_DEV_HAS_SET_AUDIO_VOLUME_LEVEL
ignore define CEC_MSG_GIVE_FEATURES
@ -487,6 +488,7 @@ ignore define CEC_OP_SYS_AUD_STATUS_ON
ignore define CEC_MSG_SYSTEM_AUDIO_MODE_REQUEST
ignore define CEC_MSG_SYSTEM_AUDIO_MODE_STATUS
ignore define CEC_MSG_SET_AUDIO_VOLUME_LEVEL
ignore define CEC_OP_AUD_FMT_ID_CEA861
ignore define CEC_OP_AUD_FMT_ID_CEA861_CXT


@ -136,9 +136,9 @@ V4L2 functions
operates like the :c:func:`read()` function.
.. c:function:: void v4l2_mmap(void *start, size_t length, int prot, int flags, int fd, int64_t offset);
.. c:function:: void *v4l2_mmap(void *start, size_t length, int prot, int flags, int fd, int64_t offset);
operates like the :c:func:`munmap()` function.
operates like the :c:func:`mmap()` function.
.. c:function:: int v4l2_munmap(void *_start, size_t length);


@ -4101,6 +4101,7 @@ N: bcm7038
N: bcm7120
BROADCOM BDC DRIVER
M: Justin Chen <justinpopo6@gmail.com>
M: Al Cooper <alcooperx@gmail.com>
L: linux-usb@vger.kernel.org
R: Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
@ -4207,6 +4208,7 @@ F: Documentation/devicetree/bindings/serial/brcm,bcm7271-uart.yaml
F: drivers/tty/serial/8250/8250_bcm7271.c
BROADCOM BRCMSTB USB EHCI DRIVER
M: Justin Chen <justinpopo6@gmail.com>
M: Al Cooper <alcooperx@gmail.com>
R: Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
L: linux-usb@vger.kernel.org
@ -4223,6 +4225,7 @@ F: Documentation/devicetree/bindings/usb/brcm,usb-pinmap.yaml
F: drivers/usb/misc/brcmstb-usb-pinmap.c
BROADCOM BRCMSTB USB2 and USB3 PHY DRIVER
M: Justin Chen <justinpopo6@gmail.com>
M: Al Cooper <alcooperx@gmail.com>
R: Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
L: linux-kernel@vger.kernel.org
@ -4459,13 +4462,15 @@ M: Josef Bacik <josef@toxicpanda.com>
M: David Sterba <dsterba@suse.com>
L: linux-btrfs@vger.kernel.org
S: Maintained
W: http://btrfs.wiki.kernel.org/
Q: http://patchwork.kernel.org/project/linux-btrfs/list/
W: https://btrfs.readthedocs.io
W: https://btrfs.wiki.kernel.org/
Q: https://patchwork.kernel.org/project/linux-btrfs/list/
C: irc://irc.libera.chat/btrfs
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux.git
F: Documentation/filesystems/btrfs.rst
F: fs/btrfs/
F: include/linux/btrfs*
F: include/trace/events/btrfs.h
F: include/uapi/linux/btrfs*
BTTV VIDEO4LINUX DRIVER
@ -5266,6 +5271,7 @@ F: tools/testing/selftests/cgroup/
CONTROL GROUP - BLOCK IO CONTROLLER (BLKIO)
M: Tejun Heo <tj@kernel.org>
M: Josef Bacik <josef@toxicpanda.com>
M: Jens Axboe <axboe@kernel.dk>
L: cgroups@vger.kernel.org
L: linux-block@vger.kernel.org
@ -5273,6 +5279,7 @@ T: git git://git.kernel.dk/linux-block
F: Documentation/admin-guide/cgroup-v1/blkio-controller.rst
F: block/bfq-cgroup.c
F: block/blk-cgroup.c
F: block/blk-iocost.c
F: block/blk-iolatency.c
F: block/blk-throttle.c
F: include/linux/blk-cgroup.h
@ -6280,7 +6287,7 @@ M: Sakari Ailus <sakari.ailus@linux.intel.com>
L: linux-media@vger.kernel.org
S: Maintained
T: git git://linuxtv.org/media_tree.git
F: Documentation/devicetree/bindings/media/i2c/dongwoon,dw9714.txt
F: Documentation/devicetree/bindings/media/i2c/dongwoon,dw9714.yaml
F: drivers/media/i2c/dw9714.c
DONGWOON DW9768 LENS VOICE COIL DRIVER
@ -9534,6 +9541,7 @@ F: include/asm-generic/hyperv-tlfs.h
F: include/asm-generic/mshyperv.h
F: include/clocksource/hyperv_timer.h
F: include/linux/hyperv.h
F: include/net/mana
F: include/uapi/linux/hyperv.h
F: net/vmw_vsock/hyperv_transport.c
F: tools/hv/
@ -11254,7 +11262,6 @@ M: Claudio Imbrenda <imbrenda@linux.ibm.com>
R: David Hildenbrand <david@redhat.com>
L: kvm@vger.kernel.org
S: Supported
W: http://www.ibm.com/developerworks/linux/linux390/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux.git
F: Documentation/virt/kvm/s390*
F: arch/s390/include/asm/gmap.h
@ -14520,7 +14527,7 @@ L: linux-nilfs@vger.kernel.org
S: Supported
W: https://nilfs.sourceforge.io/
W: https://nilfs.osdn.jp/
T: git git://github.com/konis/nilfs2.git
T: git https://github.com/konis/nilfs2.git
F: Documentation/filesystems/nilfs2.rst
F: fs/nilfs2/
F: include/trace/events/nilfs2.h
@ -14709,6 +14716,12 @@ F: drivers/nvme/target/auth.c
F: drivers/nvme/target/fabrics-cmd-auth.c
F: include/linux/nvme-auth.h
NVM EXPRESS HARDWARE MONITORING SUPPORT
M: Guenter Roeck <linux@roeck-us.net>
L: linux-nvme@lists.infradead.org
S: Supported
F: drivers/nvme/host/hwmon.c
NVM EXPRESS FC TRANSPORT DRIVERS
M: James Smart <james.smart@broadcom.com>
L: linux-nvme@lists.infradead.org
@ -15426,6 +15439,7 @@ S: Maintained
W: http://openvswitch.org
F: include/uapi/linux/openvswitch.h
F: net/openvswitch/
F: tools/testing/selftests/net/openvswitch/
OPERATING PERFORMANCE POINTS (OPP)
M: Viresh Kumar <vireshk@kernel.org>
@ -15839,7 +15853,7 @@ F: Documentation/devicetree/bindings/pci/snps,dw-pcie-ep.yaml
F: drivers/pci/controller/dwc/*designware*
PCI DRIVER FOR TI DRA7XX/J721E
M: Kishon Vijay Abraham I <kishon@ti.com>
M: Vignesh Raghavendra <vigneshr@ti.com>
L: linux-omap@vger.kernel.org
L: linux-pci@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
@ -15856,10 +15870,10 @@ F: Documentation/devicetree/bindings/pci/v3-v360epc-pci.txt
F: drivers/pci/controller/pci-v3-semi.c
PCI ENDPOINT SUBSYSTEM
M: Kishon Vijay Abraham I <kishon@ti.com>
M: Lorenzo Pieralisi <lpieralisi@kernel.org>
R: Krzysztof Wilczyński <kw@linux.com>
R: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
R: Kishon Vijay Abraham I <kishon@kernel.org>
L: linux-pci@vger.kernel.org
S: Supported
Q: https://patchwork.kernel.org/project/linux-pci/list/
@ -16665,6 +16679,7 @@ F: Documentation/driver-api/ptp.rst
F: drivers/net/phy/dp83640*
F: drivers/ptp/*
F: include/linux/ptp_cl*
K: (?:\b|_)ptp(?:\b|_)
PTP VIRTUAL CLOCK SUPPORT
M: Yangbo Lu <yangbo.lu@nxp.com>
@ -17801,7 +17816,7 @@ S: Odd Fixes
F: drivers/tty/serial/rp2.*
ROHM BD99954 CHARGER IC
R: Matti Vaittinen <mazziesaccount@gmail.com>
M: Matti Vaittinen <mazziesaccount@gmail.com>
S: Supported
F: drivers/power/supply/bd99954-charger.c
F: drivers/power/supply/bd99954-charger.h
@ -17824,7 +17839,7 @@ F: drivers/regulator/bd9571mwv-regulator.c
F: include/linux/mfd/bd9571mwv.h
ROHM POWER MANAGEMENT IC DEVICE DRIVERS
R: Matti Vaittinen <mazziesaccount@gmail.com>
M: Matti Vaittinen <mazziesaccount@gmail.com>
S: Supported
F: drivers/clk/clk-bd718x7.c
F: drivers/gpio/gpio-bd71815.c
@ -17986,7 +18001,6 @@ R: Christian Borntraeger <borntraeger@linux.ibm.com>
R: Sven Schnelle <svens@linux.ibm.com>
L: linux-s390@vger.kernel.org
S: Supported
W: http://www.ibm.com/developerworks/linux/linux390/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux.git
F: Documentation/driver-api/s390-drivers.rst
F: Documentation/s390/
@ -17998,7 +18012,6 @@ M: Vineeth Vijayan <vneethv@linux.ibm.com>
M: Peter Oberparleiter <oberpar@linux.ibm.com>
L: linux-s390@vger.kernel.org
S: Supported
W: http://www.ibm.com/developerworks/linux/linux390/
F: drivers/s390/cio/
S390 DASD DRIVER
@ -18006,7 +18019,6 @@ M: Stefan Haberland <sth@linux.ibm.com>
M: Jan Hoeppner <hoeppner@linux.ibm.com>
L: linux-s390@vger.kernel.org
S: Supported
W: http://www.ibm.com/developerworks/linux/linux390/
F: block/partitions/ibm.c
F: drivers/s390/block/dasd*
F: include/linux/dasd_mod.h
@ -18016,7 +18028,6 @@ M: Matthew Rosato <mjrosato@linux.ibm.com>
M: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
L: linux-s390@vger.kernel.org
S: Supported
W: http://www.ibm.com/developerworks/linux/linux390/
F: drivers/iommu/s390-iommu.c
S390 IUCV NETWORK LAYER
@ -18025,7 +18036,6 @@ M: Wenjia Zhang <wenjia@linux.ibm.com>
L: linux-s390@vger.kernel.org
L: netdev@vger.kernel.org
S: Supported
W: http://www.ibm.com/developerworks/linux/linux390/
F: drivers/s390/net/*iucv*
F: include/net/iucv/
F: net/iucv/
@ -18036,7 +18046,6 @@ M: Wenjia Zhang <wenjia@linux.ibm.com>
L: linux-s390@vger.kernel.org
L: netdev@vger.kernel.org
S: Supported
W: http://www.ibm.com/developerworks/linux/linux390/
F: drivers/s390/net/
S390 PCI SUBSYSTEM
@ -18044,7 +18053,6 @@ M: Niklas Schnelle <schnelle@linux.ibm.com>
M: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
L: linux-s390@vger.kernel.org
S: Supported
W: http://www.ibm.com/developerworks/linux/linux390/
F: arch/s390/pci/
F: drivers/pci/hotplug/s390_pci_hpc.c
F: Documentation/s390/pci.rst
@ -18055,7 +18063,6 @@ M: Halil Pasic <pasic@linux.ibm.com>
M: Jason Herne <jjherne@linux.ibm.com>
L: linux-s390@vger.kernel.org
S: Supported
W: http://www.ibm.com/developerworks/linux/linux390/
F: Documentation/s390/vfio-ap*
F: drivers/s390/crypto/vfio_ap*
@ -18084,7 +18091,6 @@ S390 ZCRYPT DRIVER
M: Harald Freudenberger <freude@linux.ibm.com>
L: linux-s390@vger.kernel.org
S: Supported
W: http://www.ibm.com/developerworks/linux/linux390/
F: drivers/s390/crypto/
S390 ZFCP DRIVER
@ -18092,7 +18098,6 @@ M: Steffen Maier <maier@linux.ibm.com>
M: Benjamin Block <bblock@linux.ibm.com>
L: linux-s390@vger.kernel.org
S: Supported
W: http://www.ibm.com/developerworks/linux/linux390/
F: drivers/s390/scsi/zfcp_*
S3C ADC BATTERY DRIVER
@ -18131,7 +18136,6 @@ L: linux-media@vger.kernel.org
S: Maintained
T: git git://linuxtv.org/media_tree.git
F: drivers/staging/media/deprecated/saa7146/
F: include/media/drv-intf/saa7146*
SAFESETID SECURITY MODULE
M: Micah Morton <mortonm@chromium.org>
@ -18211,7 +18215,6 @@ F: include/media/drv-intf/s3c_camif.h
SAMSUNG S3FWRN5 NFC DRIVER
M: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
M: Krzysztof Opasiak <k.opasiak@samsung.com>
L: linux-nfc@lists.01.org (subscribers-only)
S: Maintained
F: Documentation/devicetree/bindings/net/nfc/samsung,s3fwrn5.yaml
@ -18666,7 +18669,6 @@ M: Wenjia Zhang <wenjia@linux.ibm.com>
M: Jan Karcher <jaka@linux.ibm.com>
L: linux-s390@vger.kernel.org
S: Supported
W: http://www.ibm.com/developerworks/linux/linux390/
F: net/smc/
SHARP GP2AP002A00F/GP2AP002S00F SENSOR DRIVER
@ -18777,7 +18779,7 @@ M: Palmer Dabbelt <palmer@dabbelt.com>
M: Paul Walmsley <paul.walmsley@sifive.com>
L: linux-riscv@lists.infradead.org
S: Supported
T: git git://github.com/sifive/riscv-linux.git
T: git https://github.com/sifive/riscv-linux.git
N: sifive
K: [^@]sifive
@ -21181,15 +21183,6 @@ S: Maintained
F: Documentation/usb/ehci.rst
F: drivers/usb/host/ehci*
USB GADGET/PERIPHERAL SUBSYSTEM
M: Felipe Balbi <balbi@kernel.org>
L: linux-usb@vger.kernel.org
S: Maintained
W: http://www.linux-usb.org/gadget
T: git git://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb.git
F: drivers/usb/gadget/
F: include/linux/usb/gadget*
USB HID/HIDBP DRIVERS (USB KEYBOARDS, MICE, REMOTE CONTROLS, ...)
M: Jiri Kosina <jikos@kernel.org>
M: Benjamin Tissoires <benjamin.tissoires@redhat.com>
@ -21293,16 +21286,9 @@ L: linux-usb@vger.kernel.org
L: netdev@vger.kernel.org
S: Maintained
W: https://github.com/petkan/pegasus
T: git git://github.com/petkan/pegasus.git
T: git https://github.com/petkan/pegasus.git
F: drivers/net/usb/pegasus.*
USB PHY LAYER
M: Felipe Balbi <balbi@kernel.org>
L: linux-usb@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb.git
F: drivers/usb/phy/
USB PRINTER DRIVER (usblp)
M: Pete Zaitcev <zaitcev@redhat.com>
L: linux-usb@vger.kernel.org
@ -21330,7 +21316,7 @@ L: linux-usb@vger.kernel.org
L: netdev@vger.kernel.org
S: Maintained
W: https://github.com/petkan/rtl8150
T: git git://github.com/petkan/rtl8150.git
T: git https://github.com/petkan/rtl8150.git
F: drivers/net/usb/rtl8150.c
USB SERIAL SUBSYSTEM
@ -22121,6 +22107,7 @@ F: Documentation/watchdog/
F: drivers/watchdog/
F: include/linux/watchdog.h
F: include/uapi/linux/watchdog.h
F: include/trace/events/watchdog.h
WHISKEYCOVE PMIC GPIO DRIVER
M: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
@ -22761,7 +22748,7 @@ S: Maintained
W: http://mjpeg.sourceforge.net/driver-zoran/
Q: https://patchwork.linuxtv.org/project/linux-media/list/
F: Documentation/driver-api/media/drivers/zoran.rst
F: drivers/staging/media/zoran/
F: drivers/media/pci/zoran/
ZRAM COMPRESSED RAM BLOCK DEVICE DRVIER
M: Minchan Kim <minchan@kernel.org>


@ -2,7 +2,7 @@
VERSION = 6
PATCHLEVEL = 1
SUBLEVEL = 0
EXTRAVERSION = -rc1
EXTRAVERSION = -rc3
NAME = Hurr durr I'ma ninja sloth
# *DOCUMENTATION*


@ -103,11 +103,11 @@ ethernet@18000 {
dma-coherent;
};
ehci@40000 {
usb@40000 {
dma-coherent;
};
ohci@60000 {
usb@60000 {
dma-coherent;
};


@ -110,11 +110,11 @@ ethernet@18000 {
dma-coherent;
};
ehci@40000 {
usb@40000 {
dma-coherent;
};
ohci@60000 {
usb@60000 {
dma-coherent;
};


@ -87,13 +87,13 @@ gmac: ethernet@18000 {
mac-address = [00 00 00 00 00 00]; /* Filled in by U-Boot */
};
ehci@40000 {
usb@40000 {
compatible = "generic-ehci";
reg = < 0x40000 0x100 >;
interrupts = < 8 >;
};
ohci@60000 {
usb@60000 {
compatible = "generic-ohci";
reg = < 0x60000 0x100 >;
interrupts = < 8 >;


@ -234,7 +234,7 @@ phy0: ethernet-phy@0 { /* Micrel KSZ9031 */
};
};
ohci@60000 {
usb@60000 {
compatible = "snps,hsdk-v1.0-ohci", "generic-ohci";
reg = <0x60000 0x100>;
interrupts = <15>;
@ -242,7 +242,7 @@ ohci@60000 {
dma-coherent;
};
ehci@40000 {
usb@40000 {
compatible = "snps,hsdk-v1.0-ehci", "generic-ehci";
reg = <0x40000 0x100>;
interrupts = <15>;


@ -46,7 +46,7 @@ ethernet@18000 {
clock-names = "stmmaceth";
};
ehci@40000 {
usb@40000 {
compatible = "generic-ehci";
reg = < 0x40000 0x100 >;
interrupts = < 8 >;


@ -35,9 +35,6 @@ CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
CONFIG_IP_PNP_RARP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_DEVTMPFS=y
# CONFIG_STANDALONE is not set
@ -99,7 +96,6 @@ CONFIG_NFS_FS=y
CONFIG_NFS_V3_ACL=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_STRIP_ASM_SYMS=y
CONFIG_SOFTLOCKUP_DETECTOR=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=10


@ -34,9 +34,6 @@ CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
CONFIG_IP_PNP_RARP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_DEVTMPFS=y
# CONFIG_STANDALONE is not set
@ -97,7 +94,6 @@ CONFIG_NFS_FS=y
CONFIG_NFS_V3_ACL=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_STRIP_ASM_SYMS=y
CONFIG_SOFTLOCKUP_DETECTOR=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=10


@ -35,9 +35,6 @@ CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
CONFIG_IP_PNP_RARP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
CONFIG_DEVTMPFS=y
# CONFIG_STANDALONE is not set
@ -100,7 +97,6 @@ CONFIG_NFS_FS=y
CONFIG_NFS_V3_ACL=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_STRIP_ASM_SYMS=y
CONFIG_SOFTLOCKUP_DETECTOR=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=10


@ -59,6 +59,5 @@ CONFIG_EXT2_FS_XATTR=y
CONFIG_TMPFS=y
# CONFIG_MISC_FILESYSTEMS is not set
CONFIG_NFS_FS=y
# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_DEBUG_MEMORY_INIT=y
# CONFIG_DEBUG_PREEMPT is not set


@ -59,6 +59,5 @@ CONFIG_EXT2_FS_XATTR=y
CONFIG_TMPFS=y
# CONFIG_MISC_FILESYSTEMS is not set
CONFIG_NFS_FS=y
# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_SOFTLOCKUP_DETECTOR=y
# CONFIG_DEBUG_PREEMPT is not set


@ -85,7 +85,6 @@ CONFIG_NFS_FS=y
CONFIG_NFS_V3_ACL=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_STRIP_ASM_SYMS=y
CONFIG_SOFTLOCKUP_DETECTOR=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=10


@ -56,5 +56,4 @@ CONFIG_EXT2_FS_XATTR=y
CONFIG_TMPFS=y
# CONFIG_MISC_FILESYSTEMS is not set
CONFIG_NFS_FS=y
# CONFIG_ENABLE_MUST_CHECK is not set
# CONFIG_DEBUG_PREEMPT is not set


@ -65,4 +65,3 @@ CONFIG_TMPFS=y
# CONFIG_MISC_FILESYSTEMS is not set
CONFIG_NFS_FS=y
CONFIG_NFS_V3_ACL=y
# CONFIG_ENABLE_MUST_CHECK is not set


@ -63,4 +63,3 @@ CONFIG_TMPFS=y
# CONFIG_MISC_FILESYSTEMS is not set
CONFIG_NFS_FS=y
CONFIG_NFS_V3_ACL=y
# CONFIG_ENABLE_MUST_CHECK is not set


@ -26,9 +26,6 @@ CONFIG_UNIX=y
CONFIG_UNIX_DIAG=y
CONFIG_NET_KEY=y
CONFIG_INET=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
# CONFIG_WIRELESS is not set
CONFIG_DEVTMPFS=y
@ -37,7 +34,6 @@ CONFIG_DEVTMPFS=y
# CONFIG_BLK_DEV is not set
CONFIG_NETDEVICES=y
# CONFIG_NET_VENDOR_ARC is not set
# CONFIG_NET_CADENCE is not set
# CONFIG_NET_VENDOR_BROADCOM is not set
CONFIG_EZCHIP_NPS_MANAGEMENT_ENET=y
# CONFIG_NET_VENDOR_INTEL is not set
@ -74,5 +70,5 @@ CONFIG_TMPFS=y
# CONFIG_MISC_FILESYSTEMS is not set
CONFIG_NFS_FS=y
CONFIG_NFS_V3_ACL=y
# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_FTRACE=y
# CONFIG_NET_VENDOR_CADENCE is not set


@ -35,15 +35,11 @@ CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_INET_DIAG is not set
# CONFIG_IPV6 is not set
# CONFIG_WIRELESS is not set
CONFIG_DEVTMPFS=y
CONFIG_NETDEVICES=y
# CONFIG_NET_CADENCE is not set
# CONFIG_NET_VENDOR_BROADCOM is not set
# CONFIG_NET_VENDOR_INTEL is not set
# CONFIG_NET_VENDOR_MARVELL is not set
@ -94,12 +90,11 @@ CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
CONFIG_STRIP_ASM_SYMS=y
CONFIG_DEBUG_FS=y
CONFIG_HEADERS_INSTALL=y
CONFIG_HEADERS_CHECK=y
CONFIG_DEBUG_SECTION_MISMATCH=y
CONFIG_MAGIC_SYSRQ=y
CONFIG_DEBUG_MEMORY_INIT=y
CONFIG_DEBUG_STACKOVERFLOW=y
CONFIG_DETECT_HUNG_TASK=y
CONFIG_SCHEDSTATS=y
CONFIG_TIMER_STATS=y
# CONFIG_CRYPTO_HW is not set
# CONFIG_NET_VENDOR_CADENCE is not set


@ -58,8 +58,6 @@ CONFIG_SERIAL_OF_PLATFORM=y
# CONFIG_HW_RANDOM is not set
# CONFIG_HWMON is not set
CONFIG_FB=y
CONFIG_ARCPGU_RGB888=y
CONFIG_ARCPGU_DISPTYPE=0
# CONFIG_VGA_CONSOLE is not set
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
@ -87,7 +85,6 @@ CONFIG_NFS_FS=y
CONFIG_NFS_V3_ACL=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_STRIP_ASM_SYMS=y
CONFIG_DEBUG_SHIRQ=y
CONFIG_SOFTLOCKUP_DETECTOR=y


@ -91,7 +91,6 @@ CONFIG_NFS_FS=y
CONFIG_NFS_V3_ACL=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_STRIP_ASM_SYMS=y
CONFIG_DEBUG_SHIRQ=y
CONFIG_SOFTLOCKUP_DETECTOR=y


@ -82,7 +82,7 @@ static inline __attribute__ ((const)) int fls(unsigned int x)
/*
* __fls: Similar to fls, but zero based (0-31)
*/
static inline __attribute__ ((const)) int __fls(unsigned long x)
static inline __attribute__ ((const)) unsigned long __fls(unsigned long x)
{
if (!x)
return 0;
@ -131,7 +131,7 @@ static inline __attribute__ ((const)) int fls(unsigned int x)
/*
* __fls: Similar to fls, but zero based (0-31). Also 0 if no bit set
*/
static inline __attribute__ ((const)) int __fls(unsigned long x)
static inline __attribute__ ((const)) unsigned long __fls(unsigned long x)
{
/* FLS insn has exactly same semantics as the API */
return __builtin_arc_fls(x);


@ -21,7 +21,7 @@
* r25 contains the kernel current task ptr
* - Defined Stack Switching Macro to be reused in all intr/excp hdlrs
* - Shaved off 11 instructions from RESTORE_ALL_INT1 by using the
* address Write back load ld.ab instead of seperate ld/add instn
* address Write back load ld.ab instead of separate ld/add instn
*
* Amit Bhor, Sameer Dhavale: Codito Technologies 2004
*/


@ -32,7 +32,7 @@ static inline void ioport_unmap(void __iomem *addr)
{
}
extern void iounmap(const void __iomem *addr);
extern void iounmap(const volatile void __iomem *addr);
/*
* io{read,write}{16,32}be() macros


@ -161,7 +161,7 @@
#define pmd_pfn(pmd) ((pmd_val(pmd) & PAGE_MASK) >> PAGE_SHIFT)
#define pmd_page(pmd) virt_to_page(pmd_page_vaddr(pmd))
#define set_pmd(pmdp, pmd) (*(pmdp) = pmd)
#define pmd_pgtable(pmd) ((pgtable_t) pmd_page_vaddr(pmd))
#define pmd_pgtable(pmd) ((pgtable_t) pmd_page(pmd))
/*
* 4th level paging: pte


@ -385,7 +385,7 @@ irqreturn_t do_IPI(int irq, void *dev_id)
* API called by platform code to hookup arch-common ISR to their IPI IRQ
*
* Note: If IPI is provided by platform (vs. say ARC MCIP), their intc setup/map
* function needs to call call irq_set_percpu_devid() for IPI IRQ, otherwise
* function needs to call irq_set_percpu_devid() for IPI IRQ, otherwise
* request_percpu_irq() below will fail
*/
static DEFINE_PER_CPU(int, ipi_dev);


@ -750,7 +750,7 @@ static inline void arc_slc_enable(void)
* -In SMP, if hardware caches are coherent
*
* There's a corollary case, where kernel READs from a userspace mapped page.
* If the U-mapping is not congruent to to K-mapping, former needs flushing.
* If the U-mapping is not congruent to K-mapping, former needs flushing.
*/
void flush_dcache_page(struct page *page)
{
@ -910,7 +910,7 @@ EXPORT_SYMBOL(flush_icache_range);
* @vaddr is typically user vaddr (breakpoint) or kernel vaddr (vmalloc)
* However in one instance, when called by kprobe (for a breakpt in
* builtin kernel code) @vaddr will be paddr only, meaning CDU operation will
* use a paddr to index the cache (despite VIPT). This is fine since since a
* use a paddr to index the cache (despite VIPT). This is fine since a
* builtin kernel page will not have any virtual mappings.
* kprobe on loadable module will be kernel vaddr.
*/


@ -94,7 +94,7 @@ void __iomem *ioremap_prot(phys_addr_t paddr, unsigned long size,
EXPORT_SYMBOL(ioremap_prot);
void iounmap(const void __iomem *addr)
void iounmap(const volatile void __iomem *addr)
{
/* weird double cast to handle phys_addr_t > 32 bits */
if (arc_uncached_addr_space((phys_addr_t)(u32)addr))


@ -13,6 +13,18 @@
#define KVM_PGTABLE_MAX_LEVELS 4U
/*
* The largest supported block sizes for KVM (no 52-bit PA support):
* - 4K (level 1): 1GB
* - 16K (level 2): 32MB
* - 64K (level 2): 512MB
*/
#ifdef CONFIG_ARM64_4K_PAGES
#define KVM_PGTABLE_MIN_BLOCK_LEVEL 1U
#else
#define KVM_PGTABLE_MIN_BLOCK_LEVEL 2U
#endif
static inline u64 kvm_get_parange(u64 mmfr0)
{
u64 parange = cpuid_feature_extract_unsigned_field(mmfr0,
@ -58,11 +70,7 @@ static inline u64 kvm_granule_size(u32 level)
static inline bool kvm_level_supports_block_mapping(u32 level)
{
/*
* Reject invalid block mappings and don't bother with 4TB mappings for
* 52-bit PAs.
*/
return !(level == 0 || (PAGE_SIZE != SZ_4K && level == 1));
return level >= KVM_PGTABLE_MIN_BLOCK_LEVEL;
}
/**


@ -10,13 +10,6 @@
#include <linux/pgtable.h>
/*
* PGDIR_SHIFT determines the size a top-level page table entry can map
* and depends on the number of levels in the page table. Compute the
* PGDIR_SHIFT for a given number of levels.
*/
#define pt_levels_pgdir_shift(lvls) ARM64_HW_PGTABLE_LEVEL_SHIFT(4 - (lvls))
/*
* The hardware supports concatenation of up to 16 tables at stage2 entry
* level and we use the feature whenever possible, which means we resolve 4
@ -30,11 +23,6 @@
#define stage2_pgtable_levels(ipa) ARM64_HW_PGTABLE_LEVELS((ipa) - 4)
#define kvm_stage2_levels(kvm) VTCR_EL2_LVLS(kvm->arch.vtcr)
/* stage2_pgdir_shift() is the size mapped by top-level stage2 entry for the VM */
#define stage2_pgdir_shift(kvm) pt_levels_pgdir_shift(kvm_stage2_levels(kvm))
#define stage2_pgdir_size(kvm) (1ULL << stage2_pgdir_shift(kvm))
#define stage2_pgdir_mask(kvm) ~(stage2_pgdir_size(kvm) - 1)
/*
* kvm_mmmu_cache_min_pages() is the number of pages required to install
* a stage-2 translation. We pre-allocate the entry level page table at
@ -42,12 +30,4 @@
*/
#define kvm_mmu_cache_min_pages(kvm) (kvm_stage2_levels(kvm) - 1)
static inline phys_addr_t
stage2_pgd_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
{
phys_addr_t boundary = (addr + stage2_pgdir_size(kvm)) & stage2_pgdir_mask(kvm);
return (boundary - 1 < end - 1) ? boundary : end;
}
#endif /* __ARM64_S2_PGTABLE_H_ */


@ -7,6 +7,7 @@
*/
#include <linux/linkage.h>
#include <linux/cfi_types.h>
#include <asm/asm-offsets.h>
#include <asm/assembler.h>
#include <asm/ftrace.h>
@ -294,10 +295,14 @@ SYM_FUNC_END(ftrace_graph_caller)
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
#endif /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */
SYM_FUNC_START(ftrace_stub)
SYM_TYPED_FUNC_START(ftrace_stub)
ret
SYM_FUNC_END(ftrace_stub)
SYM_TYPED_FUNC_START(ftrace_stub_graph)
ret
SYM_FUNC_END(ftrace_stub_graph)
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
/*
* void return_to_handler(void)


@ -5,9 +5,6 @@
incdir := $(srctree)/$(src)/include
subdir-asflags-y := -I$(incdir)
subdir-ccflags-y := -I$(incdir) \
-fno-stack-protector \
-DDISABLE_BRANCH_PROFILING \
$(DISABLE_STACKLEAK_PLUGIN)
subdir-ccflags-y := -I$(incdir)
obj-$(CONFIG_KVM) += vhe/ nvhe/ pgtable.o


@ -10,6 +10,9 @@ asflags-y := -D__KVM_NVHE_HYPERVISOR__ -D__DISABLE_EXPORTS
# will explode instantly (Words of Marc Zyngier). So introduce a generic flag
# __DISABLE_TRACE_MMIO__ to disable MMIO tracing for nVHE KVM.
ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -D__DISABLE_EXPORTS -D__DISABLE_TRACE_MMIO__
ccflags-y += -fno-stack-protector \
-DDISABLE_BRANCH_PROFILING \
$(DISABLE_STACKLEAK_PLUGIN)
hostprogs := gen-hyprel
HOST_EXTRACFLAGS += -I$(objtree)/include
@ -89,6 +92,10 @@ quiet_cmd_hypcopy = HYPCOPY $@
# Remove ftrace, Shadow Call Stack, and CFI CFLAGS.
# This is equivalent to the 'notrace', '__noscs', and '__nocfi' annotations.
KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_FTRACE) $(CC_FLAGS_SCS) $(CC_FLAGS_CFI), $(KBUILD_CFLAGS))
# Starting from 13.0.0 llvm emits SHT_REL section '.llvm.call-graph-profile'
# when profile optimization is applied. gen-hyprel does not support SHT_REL and
# causes a build failure. Remove profile optimization flags.
KBUILD_CFLAGS := $(filter-out -fprofile-sample-use=% -fprofile-use=%, $(KBUILD_CFLAGS))
# KVM nVHE code is run at a different exception code with a different map, so
# compiler instrumentation that inserts callbacks or checks into the code may


@ -31,6 +31,13 @@ static phys_addr_t hyp_idmap_vector;
static unsigned long io_map_base;
static phys_addr_t stage2_range_addr_end(phys_addr_t addr, phys_addr_t end)
{
phys_addr_t size = kvm_granule_size(KVM_PGTABLE_MIN_BLOCK_LEVEL);
phys_addr_t boundary = ALIGN_DOWN(addr + size, size);
return (boundary - 1 < end - 1) ? boundary : end;
}
/*
* Release kvm_mmu_lock periodically if the memory region is large. Otherwise,
@ -52,7 +59,7 @@ static int stage2_apply_range(struct kvm *kvm, phys_addr_t addr,
if (!pgt)
return -EINVAL;
next = stage2_pgd_addr_end(kvm, addr, end);
next = stage2_range_addr_end(addr, end);
ret = fn(pgt, addr, next - addr);
if (ret)
break;


@ -2149,7 +2149,7 @@ static int scan_its_table(struct vgic_its *its, gpa_t base, int size, u32 esz,
memset(entry, 0, esz);
while (len > 0) {
while (true) {
int next_offset;
size_t byte_offset;
@ -2162,6 +2162,9 @@ static int scan_its_table(struct vgic_its *its, gpa_t base, int size, u32 esz,
return next_offset;
byte_offset = next_offset * esz;
if (byte_offset >= len)
break;
id += next_offset;
gpa += byte_offset;
len -= byte_offset;


@ -191,7 +191,7 @@ static inline void flush_thread(void)
unsigned long __get_wchan(struct task_struct *p);
#define __KSTK_TOS(tsk) ((unsigned long)task_stack_page(tsk) + \
THREAD_SIZE - 32 - sizeof(struct pt_regs))
THREAD_SIZE - sizeof(struct pt_regs))
#define task_pt_regs(tsk) ((struct pt_regs *)__KSTK_TOS(tsk))
#define KSTK_EIP(tsk) (task_pt_regs(tsk)->csr_era)
#define KSTK_ESP(tsk) (task_pt_regs(tsk)->regs[3])


@ -29,7 +29,7 @@ struct pt_regs {
unsigned long csr_euen;
unsigned long csr_ecfg;
unsigned long csr_estat;
unsigned long __last[0];
unsigned long __last[];
} __aligned(8);
static inline int regs_irqs_disabled(struct pt_regs *regs)
@ -133,7 +133,7 @@ static inline void die_if_kernel(const char *str, struct pt_regs *regs)
#define current_pt_regs() \
({ \
unsigned long sp = (unsigned long)__builtin_frame_address(0); \
(struct pt_regs *)((sp | (THREAD_SIZE - 1)) + 1 - 32) - 1; \
(struct pt_regs *)((sp | (THREAD_SIZE - 1)) + 1) - 1; \
})
/* Helpers for working with the user stack pointer */


@ -84,10 +84,9 @@ SYM_CODE_START(kernel_entry) # kernel entry point
la.pcrel tp, init_thread_union
/* Set the SP after an empty pt_regs. */
PTR_LI sp, (_THREAD_SIZE - 32 - PT_SIZE)
PTR_LI sp, (_THREAD_SIZE - PT_SIZE)
PTR_ADD sp, sp, tp
set_saved_sp sp, t0, t1
PTR_ADDI sp, sp, -4 * SZREG # init stack pointer
bl start_kernel
ASM_BUG()


@ -129,7 +129,7 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
unsigned long clone_flags = args->flags;
struct pt_regs *childregs, *regs = current_pt_regs();
childksp = (unsigned long)task_stack_page(p) + THREAD_SIZE - 32;
childksp = (unsigned long)task_stack_page(p) + THREAD_SIZE;
/* set up new TSS. */
childregs = (struct pt_regs *) childksp - 1;
@ -236,7 +236,7 @@ bool in_task_stack(unsigned long stack, struct task_struct *task,
struct stack_info *info)
{
unsigned long begin = (unsigned long)task_stack_page(task);
unsigned long end = begin + THREAD_SIZE - 32;
unsigned long end = begin + THREAD_SIZE;
if (stack < begin || stack >= end)
return false;


@ -26,7 +26,7 @@ SYM_FUNC_START(__switch_to)
move tp, a2
cpu_restore_nonscratch a1
li.w t0, _THREAD_SIZE - 32
li.w t0, _THREAD_SIZE
PTR_ADD t0, t0, tp
set_saved_sp t0, t1, t2


@ -279,6 +279,7 @@ static void emit_atomic(const struct bpf_insn *insn, struct jit_ctx *ctx)
const u8 t1 = LOONGARCH_GPR_T1;
const u8 t2 = LOONGARCH_GPR_T2;
const u8 t3 = LOONGARCH_GPR_T3;
const u8 r0 = regmap[BPF_REG_0];
const u8 src = regmap[insn->src_reg];
const u8 dst = regmap[insn->dst_reg];
const s16 off = insn->off;
@ -359,8 +360,6 @@ static void emit_atomic(const struct bpf_insn *insn, struct jit_ctx *ctx)
break;
/* r0 = atomic_cmpxchg(dst + off, r0, src); */
case BPF_CMPXCHG:
u8 r0 = regmap[BPF_REG_0];
move_reg(ctx, t2, r0);
if (isdw) {
emit_insn(ctx, lld, r0, t1, 0);
@ -390,8 +389,11 @@ static bool is_signed_bpf_cond(u8 cond)
static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool extra_pass)
{
const bool is32 = BPF_CLASS(insn->code) == BPF_ALU ||
BPF_CLASS(insn->code) == BPF_JMP32;
u8 tm = -1;
u64 func_addr;
bool func_addr_fixed;
int i = insn - ctx->prog->insnsi;
int ret, jmp_offset;
const u8 code = insn->code;
const u8 cond = BPF_OP(code);
const u8 t1 = LOONGARCH_GPR_T1;
@ -400,8 +402,8 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
const u8 dst = regmap[insn->dst_reg];
const s16 off = insn->off;
const s32 imm = insn->imm;
int jmp_offset;
int i = insn - ctx->prog->insnsi;
const u64 imm64 = (u64)(insn + 1)->imm << 32 | (u32)insn->imm;
const bool is32 = BPF_CLASS(insn->code) == BPF_ALU || BPF_CLASS(insn->code) == BPF_JMP32;
switch (code) {
/* dst = src */
@ -724,24 +726,23 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
case BPF_JMP32 | BPF_JSGE | BPF_K:
case BPF_JMP32 | BPF_JSLT | BPF_K:
case BPF_JMP32 | BPF_JSLE | BPF_K:
u8 t7 = -1;
jmp_offset = bpf2la_offset(i, off, ctx);
if (imm) {
move_imm(ctx, t1, imm, false);
t7 = t1;
tm = t1;
} else {
/* If imm is 0, simply use zero register. */
t7 = LOONGARCH_GPR_ZERO;
tm = LOONGARCH_GPR_ZERO;
}
move_reg(ctx, t2, dst);
if (is_signed_bpf_cond(BPF_OP(code))) {
emit_sext_32(ctx, t7, is32);
emit_sext_32(ctx, tm, is32);
emit_sext_32(ctx, t2, is32);
} else {
emit_zext_32(ctx, t7, is32);
emit_zext_32(ctx, tm, is32);
emit_zext_32(ctx, t2, is32);
}
if (emit_cond_jmp(ctx, cond, t2, t7, jmp_offset) < 0)
if (emit_cond_jmp(ctx, cond, t2, tm, jmp_offset) < 0)
goto toofar;
break;
@ -775,10 +776,6 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
/* function call */
case BPF_JMP | BPF_CALL:
int ret;
u64 func_addr;
bool func_addr_fixed;
mark_call(ctx);
ret = bpf_jit_get_func_addr(ctx->prog, insn, extra_pass,
&func_addr, &func_addr_fixed);
@ -811,8 +808,6 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
/* dst = imm64 */
case BPF_LD | BPF_IMM | BPF_DW:
u64 imm64 = (u64)(insn + 1)->imm << 32 | (u32)insn->imm;
move_imm(ctx, dst, imm64, is32);
return 1;


@ -32,6 +32,11 @@ static inline void arch_enter_lazy_mmu_mode(void)
if (radix_enabled())
return;
/*
* apply_to_page_range can call us this preempt enabled when
* operating on kernel page tables.
*/
preempt_disable();
batch = this_cpu_ptr(&ppc64_tlb_batch);
batch->active = 1;
}
@ -47,6 +52,7 @@ static inline void arch_leave_lazy_mmu_mode(void)
if (batch->index)
__flush_tlb_pending(batch);
batch->active = 0;
preempt_enable();
}
#define arch_flush_lazy_mmu_mode() do {} while (0)


@ -813,6 +813,13 @@ kernel_dbg_exc:
EXCEPTION_COMMON(0x260)
CHECK_NAPPING()
addi r3,r1,STACK_FRAME_OVERHEAD
/*
* XXX: Returning from performance_monitor_exception taken as a
* soft-NMI (Linux irqs disabled) may be risky to use interrupt_return
* and could cause bugs in return or elsewhere. That case should just
* restore registers and return. There is a workaround for one known
* problem in interrupt_exit_kernel_prepare().
*/
bl performance_monitor_exception
b interrupt_return


@ -2357,9 +2357,21 @@ EXC_VIRT_END(performance_monitor, 0x4f00, 0x20)
EXC_COMMON_BEGIN(performance_monitor_common)
GEN_COMMON performance_monitor
addi r3,r1,STACK_FRAME_OVERHEAD
bl performance_monitor_exception
lbz r4,PACAIRQSOFTMASK(r13)
cmpdi r4,IRQS_ENABLED
bne 1f
bl performance_monitor_exception_async
b interrupt_return_srr
1:
bl performance_monitor_exception_nmi
/* Clear MSR_RI before setting SRR0 and SRR1. */
li r9,0
mtmsrd r9,1
kuap_kernel_restore r9, r10
EXCEPTION_RESTORE_REGS hsrr=0
RFI_TO_KERNEL
/**
* Interrupt 0xf20 - Vector Unavailable Interrupt.


@ -374,10 +374,18 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs)
if (regs_is_unrecoverable(regs))
unrecoverable_exception(regs);
/*
* CT_WARN_ON comes here via program_check_exception,
* so avoid recursion.
* CT_WARN_ON comes here via program_check_exception, so avoid
* recursion.
*
* Skip the assertion on PMIs on 64e to work around a problem caused
* by NMI PMIs incorrectly taking this interrupt return path, it's
* possible for this to hit after interrupt exit to user switches
* context to user. See also the comment in the performance monitor
* handler in exceptions-64e.S
*/
if (TRAP(regs) != INTERRUPT_PROGRAM)
if (!IS_ENABLED(CONFIG_PPC_BOOK3E_64) &&
TRAP(regs) != INTERRUPT_PROGRAM &&
TRAP(regs) != INTERRUPT_PERFMON)
CT_WARN_ON(ct_state() == CONTEXT_USER);
kuap = kuap_get_and_assert_locked();


@ -532,15 +532,24 @@ _ASM_NOKPROBE_SYMBOL(interrupt_return_\srr\()_kernel)
* Returning to soft-disabled context.
* Check if a MUST_HARD_MASK interrupt has become pending, in which
* case we need to disable MSR[EE] in the return context.
*
* The MSR[EE] check catches among other things the short incoherency
* in hard_irq_disable() between clearing MSR[EE] and setting
* PACA_IRQ_HARD_DIS.
*/
ld r12,_MSR(r1)
andi. r10,r12,MSR_EE
beq .Lfast_kernel_interrupt_return_\srr\() // EE already disabled
lbz r11,PACAIRQHAPPENED(r13)
andi. r10,r11,PACA_IRQ_MUST_HARD_MASK
beq .Lfast_kernel_interrupt_return_\srr\() // No HARD_MASK pending
bne 1f // HARD_MASK is pending
// No HARD_MASK pending, clear possible HARD_DIS set by interrupt
andi. r11,r11,(~PACA_IRQ_HARD_DIS)@l
stb r11,PACAIRQHAPPENED(r13)
b .Lfast_kernel_interrupt_return_\srr\()
/* Must clear MSR_EE from _MSR */
1: /* Must clear MSR_EE from _MSR */
#ifdef CONFIG_PPC_BOOK3S
li r10,0
/* Clear valid before changing _MSR */


@ -51,6 +51,7 @@ config KVM_BOOK3S_HV_POSSIBLE
config KVM_BOOK3S_32
tristate "KVM support for PowerPC book3s_32 processors"
depends on PPC_BOOK3S_32 && !SMP && !PTE_64BIT
depends on !CONTEXT_TRACKING_USER
select KVM
select KVM_BOOK3S_32_HANDLER
select KVM_BOOK3S_PR_POSSIBLE
@ -105,6 +106,7 @@ config KVM_BOOK3S_64_HV
config KVM_BOOK3S_64_PR
tristate "KVM support without using hypervisor mode in host"
depends on KVM_BOOK3S_64
depends on !CONTEXT_TRACKING_USER
select KVM_BOOK3S_PR_POSSIBLE
help
Support running guest kernels in virtual machines on processors
@ -190,6 +192,7 @@ config KVM_EXIT_TIMING
config KVM_E500V2
bool "KVM support for PowerPC E500v2 processors"
depends on PPC_E500 && !PPC_E500MC
depends on !CONTEXT_TRACKING_USER
select KVM
select KVM_MMIO
select MMU_NOTIFIER
@ -205,6 +208,7 @@ config KVM_E500V2
config KVM_E500MC
bool "KVM support for PowerPC E500MC/E5500/E6500 processors"
depends on PPC_E500MC
depends on !CONTEXT_TRACKING_USER
select KVM
select KVM_MMIO
select KVM_BOOKE_HV


@ -36,7 +36,17 @@ int exit_vmx_usercopy(void)
{
disable_kernel_altivec();
pagefault_enable();
preempt_enable();
preempt_enable_no_resched();
/*
* Must never explicitly call schedule (including preempt_enable())
* while in a kuap-unlocked user copy, because the AMR register will
* not be saved and restored across context switch. However preempt
* kernels need to be preempted as soon as possible if need_resched is
* set and we are preemptible. The hack here is to schedule a
* decrementer to fire here and reschedule for us if necessary.
*/
if (IS_ENABLED(CONFIG_PREEMPT) && need_resched())
set_dec(1);
return 0;
}


@ -43,6 +43,29 @@
static DEFINE_RAW_SPINLOCK(native_tlbie_lock);
#ifdef CONFIG_LOCKDEP
static struct lockdep_map hpte_lock_map =
STATIC_LOCKDEP_MAP_INIT("hpte_lock", &hpte_lock_map);
static void acquire_hpte_lock(void)
{
lock_map_acquire(&hpte_lock_map);
}
static void release_hpte_lock(void)
{
lock_map_release(&hpte_lock_map);
}
#else
static void acquire_hpte_lock(void)
{
}
static void release_hpte_lock(void)
{
}
#endif
static inline unsigned long ___tlbie(unsigned long vpn, int psize,
int apsize, int ssize)
{
@ -220,6 +243,7 @@ static inline void native_lock_hpte(struct hash_pte *hptep)
{
unsigned long *word = (unsigned long *)&hptep->v;
acquire_hpte_lock();
while (1) {
if (!test_and_set_bit_lock(HPTE_LOCK_BIT, word))
break;
@ -234,6 +258,7 @@ static inline void native_unlock_hpte(struct hash_pte *hptep)
{
unsigned long *word = (unsigned long *)&hptep->v;
release_hpte_lock();
clear_bit_unlock(HPTE_LOCK_BIT, word);
}
@ -243,8 +268,11 @@ static long native_hpte_insert(unsigned long hpte_group, unsigned long vpn,
{
struct hash_pte *hptep = htab_address + hpte_group;
unsigned long hpte_v, hpte_r;
unsigned long flags;
int i;
local_irq_save(flags);
if (!(vflags & HPTE_V_BOLTED)) {
DBG_LOW(" insert(group=%lx, vpn=%016lx, pa=%016lx,"
" rflags=%lx, vflags=%lx, psize=%d)\n",
@ -263,8 +291,10 @@ static long native_hpte_insert(unsigned long hpte_group, unsigned long vpn,
hptep++;
}
if (i == HPTES_PER_GROUP)
if (i == HPTES_PER_GROUP) {
local_irq_restore(flags);
return -1;
}
hpte_v = hpte_encode_v(vpn, psize, apsize, ssize) | vflags | HPTE_V_VALID;
hpte_r = hpte_encode_r(pa, psize, apsize) | rflags;
@ -286,10 +316,13 @@ static long native_hpte_insert(unsigned long hpte_group, unsigned long vpn,
* Now set the first dword including the valid bit
* NOTE: this also unlocks the hpte
*/
release_hpte_lock();
hptep->v = cpu_to_be64(hpte_v);
__asm__ __volatile__ ("ptesync" : : : "memory");
local_irq_restore(flags);
return i | (!!(vflags & HPTE_V_SECONDARY) << 3);
}
@ -327,6 +360,7 @@ static long native_hpte_remove(unsigned long hpte_group)
return -1;
/* Invalidate the hpte. NOTE: this also unlocks it */
release_hpte_lock();
hptep->v = 0;
return i;
@ -339,6 +373,9 @@ static long native_hpte_updatepp(unsigned long slot, unsigned long newpp,
struct hash_pte *hptep = htab_address + slot;
unsigned long hpte_v, want_v;
int ret = 0, local = 0;
unsigned long irqflags;
local_irq_save(irqflags);
want_v = hpte_encode_avpn(vpn, bpsize, ssize);
@ -382,6 +419,8 @@ static long native_hpte_updatepp(unsigned long slot, unsigned long newpp,
if (!(flags & HPTE_NOHPTE_UPDATE))
tlbie(vpn, bpsize, apsize, ssize, local);
local_irq_restore(irqflags);
return ret;
}
@ -445,6 +484,9 @@ static void native_hpte_updateboltedpp(unsigned long newpp, unsigned long ea,
unsigned long vsid;
long slot;
struct hash_pte *hptep;
unsigned long flags;
local_irq_save(flags);
vsid = get_kernel_vsid(ea, ssize);
vpn = hpt_vpn(ea, vsid, ssize);
@ -463,6 +505,8 @@ static void native_hpte_updateboltedpp(unsigned long newpp, unsigned long ea,
* actual page size will be same.
*/
tlbie(vpn, psize, psize, ssize, 0);
local_irq_restore(flags);
}
/*
@ -476,6 +520,9 @@ static int native_hpte_removebolted(unsigned long ea, int psize, int ssize)
unsigned long vsid;
long slot;
struct hash_pte *hptep;
unsigned long flags;
local_irq_save(flags);
vsid = get_kernel_vsid(ea, ssize);
vpn = hpt_vpn(ea, vsid, ssize);
@ -493,6 +540,9 @@ static int native_hpte_removebolted(unsigned long ea, int psize, int ssize)
/* Invalidate the TLB */
tlbie(vpn, psize, psize, ssize, 0);
local_irq_restore(flags);
return 0;
}
@ -517,10 +567,11 @@ static void native_hpte_invalidate(unsigned long slot, unsigned long vpn,
/* recheck with locks held */
hpte_v = hpte_get_old_v(hptep);
if (HPTE_V_COMPARE(hpte_v, want_v) && (hpte_v & HPTE_V_VALID))
if (HPTE_V_COMPARE(hpte_v, want_v) && (hpte_v & HPTE_V_VALID)) {
/* Invalidate the hpte. NOTE: this also unlocks it */
release_hpte_lock();
hptep->v = 0;
else
} else
native_unlock_hpte(hptep);
}
/*
@ -580,10 +631,8 @@ static void native_hugepage_invalidate(unsigned long vsid,
hpte_v = hpte_get_old_v(hptep);
if (HPTE_V_COMPARE(hpte_v, want_v) && (hpte_v & HPTE_V_VALID)) {
/*
* Invalidate the hpte. NOTE: this also unlocks it
*/
/* Invalidate the hpte. NOTE: this also unlocks it */
release_hpte_lock();
hptep->v = 0;
} else
native_unlock_hpte(hptep);
@ -765,8 +814,10 @@ static void native_flush_hash_range(unsigned long number, int local)
if (!HPTE_V_COMPARE(hpte_v, want_v) || !(hpte_v & HPTE_V_VALID))
native_unlock_hpte(hptep);
else
else {
release_hpte_lock();
hptep->v = 0;
}
} pte_iterate_hashed_end();
}


@ -404,7 +404,8 @@ EXPORT_SYMBOL_GPL(hash__has_transparent_hugepage);
struct change_memory_parms {
unsigned long start, end, newpp;
unsigned int step, nr_cpus, master_cpu;
unsigned int step, nr_cpus;
atomic_t master_cpu;
atomic_t cpu_counter;
};
@ -478,7 +479,8 @@ static int change_memory_range_fn(void *data)
{
struct change_memory_parms *parms = data;
if (parms->master_cpu != smp_processor_id())
// First CPU goes through, all others wait.
if (atomic_xchg(&parms->master_cpu, 1) == 1)
return chmem_secondary_loop(parms);
// Wait for all but one CPU (this one) to call-in
@ -516,7 +518,7 @@ static bool hash__change_memory_range(unsigned long start, unsigned long end,
chmem_parms.end = end;
chmem_parms.step = step;
chmem_parms.newpp = newpp;
chmem_parms.master_cpu = smp_processor_id();
atomic_set(&chmem_parms.master_cpu, 0);
cpus_read_lock();


@ -1981,7 +1981,7 @@ long hpte_insert_repeating(unsigned long hash, unsigned long vpn,
}
#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KFENCE)
static DEFINE_SPINLOCK(linear_map_hash_lock);
static DEFINE_RAW_SPINLOCK(linear_map_hash_lock);
static void kernel_map_linear_page(unsigned long vaddr, unsigned long lmi)
{
@ -2005,10 +2005,10 @@ static void kernel_map_linear_page(unsigned long vaddr, unsigned long lmi)
mmu_linear_psize, mmu_kernel_ssize);
BUG_ON (ret < 0);
spin_lock(&linear_map_hash_lock);
raw_spin_lock(&linear_map_hash_lock);
BUG_ON(linear_map_hash_slots[lmi] & 0x80);
linear_map_hash_slots[lmi] = ret | 0x80;
spin_unlock(&linear_map_hash_lock);
raw_spin_unlock(&linear_map_hash_lock);
}
static void kernel_unmap_linear_page(unsigned long vaddr, unsigned long lmi)
@ -2018,14 +2018,14 @@ static void kernel_unmap_linear_page(unsigned long vaddr, unsigned long lmi)
unsigned long vpn = hpt_vpn(vaddr, vsid, mmu_kernel_ssize);
hash = hpt_hash(vpn, PAGE_SHIFT, mmu_kernel_ssize);
spin_lock(&linear_map_hash_lock);
raw_spin_lock(&linear_map_hash_lock);
if (!(linear_map_hash_slots[lmi] & 0x80)) {
spin_unlock(&linear_map_hash_lock);
raw_spin_unlock(&linear_map_hash_lock);
return;
}
hidx = linear_map_hash_slots[lmi] & 0x7f;
linear_map_hash_slots[lmi] = 0;
spin_unlock(&linear_map_hash_lock);
raw_spin_unlock(&linear_map_hash_lock);
if (hidx & _PTEIDX_SECONDARY)
hash = ~hash;
slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;


@ -35,6 +35,7 @@
#include <asm/drmem.h>
#include "pseries.h"
#include "vas.h" /* pseries_vas_dlpar_cpu() */
/*
* This isn't a module but we expose that to userspace
@ -748,6 +749,16 @@ static ssize_t lparcfg_write(struct file *file, const char __user * buf,
return -EINVAL;
retval = update_ppp(new_entitled_ptr, NULL);
if (retval == H_SUCCESS || retval == H_CONSTRAINED) {
/*
* The hypervisor assigns VAS resources based
* on entitled capacity for shared mode.
* Reconfig VAS windows based on DLPAR CPU events.
*/
if (pseries_vas_dlpar_cpu() != 0)
retval = H_HARDWARE;
}
} else if (!strcmp(kbuf, "capacity_weight")) {
char *endp;
*new_weight_ptr = (u8) simple_strtoul(tmp, &endp, 10);


@ -200,16 +200,41 @@ static irqreturn_t pseries_vas_fault_thread_fn(int irq, void *data)
struct vas_user_win_ref *tsk_ref;
int rc;
rc = h_get_nx_fault(txwin->vas_win.winid, (u64)virt_to_phys(&crb));
if (!rc) {
tsk_ref = &txwin->vas_win.task_ref;
vas_dump_crb(&crb);
vas_update_csb(&crb, tsk_ref);
while (atomic_read(&txwin->pending_faults)) {
rc = h_get_nx_fault(txwin->vas_win.winid, (u64)virt_to_phys(&crb));
if (!rc) {
tsk_ref = &txwin->vas_win.task_ref;
vas_dump_crb(&crb);
vas_update_csb(&crb, tsk_ref);
}
atomic_dec(&txwin->pending_faults);
}
return IRQ_HANDLED;
}
/*
* irq_default_primary_handler() can be used only with IRQF_ONESHOT
* which disables IRQ before executing the thread handler and enables
* it after. But this disabling interrupt sets the VAS IRQ OFF
* state in the hypervisor. If the NX generates fault interrupt
* during this window, the hypervisor will not deliver this
* interrupt to the LPAR. So use VAS specific IRQ handler instead
* of calling the default primary handler.
*/
static irqreturn_t pseries_vas_irq_handler(int irq, void *data)
{
struct pseries_vas_window *txwin = data;
/*
* The thread hanlder will process this interrupt if it is
* already running.
*/
atomic_inc(&txwin->pending_faults);
return IRQ_WAKE_THREAD;
}
/*
* Allocate window and setup IRQ mapping.
*/
@ -240,8 +265,9 @@ static int allocate_setup_window(struct pseries_vas_window *txwin,
goto out_irq;
}
rc = request_threaded_irq(txwin->fault_virq, NULL,
pseries_vas_fault_thread_fn, IRQF_ONESHOT,
rc = request_threaded_irq(txwin->fault_virq,
pseries_vas_irq_handler,
pseries_vas_fault_thread_fn, 0,
txwin->name, txwin);
if (rc) {
pr_err("VAS-Window[%d]: Request IRQ(%u) failed with %d\n",
@ -826,6 +852,25 @@ int vas_reconfig_capabilties(u8 type, int new_nr_creds)
mutex_unlock(&vas_pseries_mutex);
return rc;
}
int pseries_vas_dlpar_cpu(void)
{
int new_nr_creds, rc;
rc = h_query_vas_capabilities(H_QUERY_VAS_CAPABILITIES,
vascaps[VAS_GZIP_DEF_FEAT_TYPE].feat,
(u64)virt_to_phys(&hv_cop_caps));
if (!rc) {
new_nr_creds = be16_to_cpu(hv_cop_caps.target_lpar_creds);
rc = vas_reconfig_capabilties(VAS_GZIP_DEF_FEAT_TYPE, new_nr_creds);
}
if (rc)
pr_err("Failed reconfig VAS capabilities with DLPAR\n");
return rc;
}
/*
* Total number of default credits available (target_credits)
* in LPAR depends on number of cores configured. It varies based on
@ -840,7 +885,15 @@ static int pseries_vas_notifier(struct notifier_block *nb,
struct of_reconfig_data *rd = data;
struct device_node *dn = rd->dn;
const __be32 *intserv = NULL;
int new_nr_creds, len, rc = 0;
int len;
/*
* For a shared CPU partition, the hypervisor assigns total credits
* based on entitled core capacity, so VAS window updates are driven
* from lparcfg_write() rather than from this notifier.
*/
if (is_shared_processor())
return NOTIFY_OK;
if ((action == OF_RECONFIG_ATTACH_NODE) ||
(action == OF_RECONFIG_DETACH_NODE))
@ -852,19 +905,7 @@ static int pseries_vas_notifier(struct notifier_block *nb,
if (!intserv)
return NOTIFY_OK;
rc = h_query_vas_capabilities(H_QUERY_VAS_CAPABILITIES,
vascaps[VAS_GZIP_DEF_FEAT_TYPE].feat,
(u64)virt_to_phys(&hv_cop_caps));
if (!rc) {
new_nr_creds = be16_to_cpu(hv_cop_caps.target_lpar_creds);
rc = vas_reconfig_capabilties(VAS_GZIP_DEF_FEAT_TYPE,
new_nr_creds);
}
if (rc)
pr_err("Failed reconfig VAS capabilities with DLPAR\n");
return rc;
return pseries_vas_dlpar_cpu();
}
static struct notifier_block pseries_vas_nb = {

View File

@ -132,6 +132,7 @@ struct pseries_vas_window {
u64 flags;
char *name;
int fault_virq;
atomic_t pending_faults; /* Number of pending faults */
};
int sysfs_add_vas_caps(struct vas_cop_feat_caps *caps);
@ -140,10 +141,15 @@ int __init sysfs_pseries_vas_init(struct vas_all_caps *vas_caps);
#ifdef CONFIG_PPC_VAS
int vas_migration_handler(int action);
int pseries_vas_dlpar_cpu(void);
#else
static inline int vas_migration_handler(int action)
{
return 0;
}
static inline int pseries_vas_dlpar_cpu(void)
{
return 0;
}
#endif
#endif /* _VAS_H */

View File

@ -411,14 +411,16 @@ config RISCV_ISA_SVPBMT
If you don't know what to do here, say Y.
config CC_HAS_ZICBOM
config TOOLCHAIN_HAS_ZICBOM
bool
default y if 64BIT && $(cc-option,-mabi=lp64 -march=rv64ima_zicbom)
default y if 32BIT && $(cc-option,-mabi=ilp32 -march=rv32ima_zicbom)
default y
depends on !64BIT || $(cc-option,-mabi=lp64 -march=rv64ima_zicbom)
depends on !32BIT || $(cc-option,-mabi=ilp32 -march=rv32ima_zicbom)
depends on LLD_VERSION >= 150000 || LD_VERSION >= 23800
config RISCV_ISA_ZICBOM
bool "Zicbom extension support for non-coherent DMA operation"
depends on CC_HAS_ZICBOM
depends on TOOLCHAIN_HAS_ZICBOM
depends on !XIP_KERNEL && MMU
select RISCV_DMA_NONCOHERENT
select RISCV_ALTERNATIVE
@ -433,6 +435,13 @@ config RISCV_ISA_ZICBOM
If you don't know what to do here, say Y.
config TOOLCHAIN_HAS_ZIHINTPAUSE
bool
default y
depends on !64BIT || $(cc-option,-mabi=lp64 -march=rv64ima_zihintpause)
depends on !32BIT || $(cc-option,-mabi=ilp32 -march=rv32ima_zihintpause)
depends on LLD_VERSION >= 150000 || LD_VERSION >= 23600
config FPU
bool "FPU support"
default y

View File

@ -59,12 +59,10 @@ toolchain-need-zicsr-zifencei := $(call cc-option-yn, -march=$(riscv-march-y)_zi
riscv-march-$(toolchain-need-zicsr-zifencei) := $(riscv-march-y)_zicsr_zifencei
# Check if the toolchain supports Zicbom extension
toolchain-supports-zicbom := $(call cc-option-yn, -march=$(riscv-march-y)_zicbom)
riscv-march-$(toolchain-supports-zicbom) := $(riscv-march-y)_zicbom
riscv-march-$(CONFIG_TOOLCHAIN_HAS_ZICBOM) := $(riscv-march-y)_zicbom
# Check if the toolchain supports Zihintpause extension
toolchain-supports-zihintpause := $(call cc-option-yn, -march=$(riscv-march-y)_zihintpause)
riscv-march-$(toolchain-supports-zihintpause) := $(riscv-march-y)_zihintpause
riscv-march-$(CONFIG_TOOLCHAIN_HAS_ZIHINTPAUSE) := $(riscv-march-y)_zihintpause
KBUILD_CFLAGS += -march=$(subst fd,,$(riscv-march-y))
KBUILD_AFLAGS += -march=$(riscv-march-y)

View File

@ -42,16 +42,8 @@ void flush_icache_mm(struct mm_struct *mm, bool local);
#endif /* CONFIG_SMP */
/*
* The T-Head CMO errata internally probe the CBOM block size, but otherwise
* don't depend on Zicbom.
*/
extern unsigned int riscv_cbom_block_size;
#ifdef CONFIG_RISCV_ISA_ZICBOM
void riscv_init_cbom_blocksize(void);
#else
static inline void riscv_init_cbom_blocksize(void) { }
#endif
#ifdef CONFIG_RISCV_DMA_NONCOHERENT
void riscv_noncoherent_supported(void);

View File

@ -14,8 +14,8 @@
#define JUMP_LABEL_NOP_SIZE 4
static __always_inline bool arch_static_branch(struct static_key *key,
bool branch)
static __always_inline bool arch_static_branch(struct static_key * const key,
const bool branch)
{
asm_volatile_goto(
" .option push \n\t"
@ -35,8 +35,8 @@ static __always_inline bool arch_static_branch(struct static_key *key,
return true;
}
static __always_inline bool arch_static_branch_jump(struct static_key *key,
bool branch)
static __always_inline bool arch_static_branch_jump(struct static_key * const key,
const bool branch)
{
asm_volatile_goto(
" .option push \n\t"

View File

@ -45,6 +45,7 @@ int kvm_riscv_vcpu_timer_deinit(struct kvm_vcpu *vcpu);
int kvm_riscv_vcpu_timer_reset(struct kvm_vcpu *vcpu);
void kvm_riscv_vcpu_timer_restore(struct kvm_vcpu *vcpu);
void kvm_riscv_guest_timer_init(struct kvm *kvm);
void kvm_riscv_vcpu_timer_sync(struct kvm_vcpu *vcpu);
void kvm_riscv_vcpu_timer_save(struct kvm_vcpu *vcpu);
bool kvm_riscv_vcpu_timer_pending(struct kvm_vcpu *vcpu);

View File

@ -21,7 +21,7 @@ static inline void cpu_relax(void)
* Reduce instruction retirement.
* This assumes the PC changes.
*/
#ifdef __riscv_zihintpause
#ifdef CONFIG_TOOLCHAIN_HAS_ZIHINTPAUSE
__asm__ __volatile__ ("pause");
#else
/* Encoding of the pause instruction */

View File

@ -213,6 +213,9 @@ static void print_mmu(struct seq_file *f)
static void *c_start(struct seq_file *m, loff_t *pos)
{
if (*pos == nr_cpu_ids)
return NULL;
*pos = cpumask_next(*pos - 1, cpu_online_mask);
if ((*pos) < nr_cpu_ids)
return (void *)(uintptr_t)(1 + *pos);

View File

@ -708,6 +708,9 @@ void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu)
clear_bit(IRQ_VS_SOFT, &v->irqs_pending);
}
}
/* Sync-up timer CSRs */
kvm_riscv_vcpu_timer_sync(vcpu);
}
int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq)

View File

@ -320,6 +320,21 @@ void kvm_riscv_vcpu_timer_restore(struct kvm_vcpu *vcpu)
kvm_riscv_vcpu_timer_unblocking(vcpu);
}
void kvm_riscv_vcpu_timer_sync(struct kvm_vcpu *vcpu)
{
struct kvm_vcpu_timer *t = &vcpu->arch.timer;
if (!t->sstc_enabled)
return;
#if defined(CONFIG_32BIT)
t->next_cycles = csr_read(CSR_VSTIMECMP);
t->next_cycles |= (u64)csr_read(CSR_VSTIMECMPH) << 32;
#else
t->next_cycles = csr_read(CSR_VSTIMECMP);
#endif
}
void kvm_riscv_vcpu_timer_save(struct kvm_vcpu *vcpu)
{
struct kvm_vcpu_timer *t = &vcpu->arch.timer;
@ -327,13 +342,11 @@ void kvm_riscv_vcpu_timer_save(struct kvm_vcpu *vcpu)
if (!t->sstc_enabled)
return;
t = &vcpu->arch.timer;
#if defined(CONFIG_32BIT)
t->next_cycles = csr_read(CSR_VSTIMECMP);
t->next_cycles |= (u64)csr_read(CSR_VSTIMECMPH) << 32;
#else
t->next_cycles = csr_read(CSR_VSTIMECMP);
#endif
/*
* The vstimecmp CSRs are saved by kvm_riscv_vcpu_timer_sync()
* upon every VM exit so no need to save here.
*/
/* timer should be enabled for the remaining operations */
if (unlikely(!t->init_done))
return;

View File

@ -3,6 +3,7 @@
* Copyright (C) 2017 SiFive
*/
#include <linux/of.h>
#include <asm/cacheflush.h>
#ifdef CONFIG_SMP
@ -86,3 +87,40 @@ void flush_icache_pte(pte_t pte)
flush_icache_all();
}
#endif /* CONFIG_MMU */
unsigned int riscv_cbom_block_size;
EXPORT_SYMBOL_GPL(riscv_cbom_block_size);
void riscv_init_cbom_blocksize(void)
{
struct device_node *node;
unsigned long cbom_hartid;
u32 val, probed_block_size;
int ret;
probed_block_size = 0;
for_each_of_cpu_node(node) {
unsigned long hartid;
ret = riscv_of_processor_hartid(node, &hartid);
if (ret)
continue;
/* set block-size for cbom extension if available */
ret = of_property_read_u32(node, "riscv,cbom-block-size", &val);
if (ret)
continue;
if (!probed_block_size) {
probed_block_size = val;
cbom_hartid = hartid;
} else {
if (probed_block_size != val)
pr_warn("cbom-block-size mismatched between harts %lu and %lu\n",
cbom_hartid, hartid);
}
}
if (probed_block_size)
riscv_cbom_block_size = probed_block_size;
}

View File

@ -8,13 +8,8 @@
#include <linux/dma-direct.h>
#include <linux/dma-map-ops.h>
#include <linux/mm.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <asm/cacheflush.h>
unsigned int riscv_cbom_block_size;
EXPORT_SYMBOL_GPL(riscv_cbom_block_size);
static bool noncoherent_supported;
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
@ -77,42 +72,6 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
dev->dma_coherent = coherent;
}
#ifdef CONFIG_RISCV_ISA_ZICBOM
void riscv_init_cbom_blocksize(void)
{
struct device_node *node;
unsigned long cbom_hartid;
u32 val, probed_block_size;
int ret;
probed_block_size = 0;
for_each_of_cpu_node(node) {
unsigned long hartid;
ret = riscv_of_processor_hartid(node, &hartid);
if (ret)
continue;
/* set block-size for cbom extension if available */
ret = of_property_read_u32(node, "riscv,cbom-block-size", &val);
if (ret)
continue;
if (!probed_block_size) {
probed_block_size = val;
cbom_hartid = hartid;
} else {
if (probed_block_size != val)
pr_warn("cbom-block-size mismatched between harts %lu and %lu\n",
cbom_hartid, hartid);
}
}
if (probed_block_size)
riscv_cbom_block_size = probed_block_size;
}
#endif
void riscv_noncoherent_supported(void)
{
WARN(!riscv_cbom_block_size,

View File

@ -113,6 +113,8 @@ static void __init kasan_populate_pud(pgd_t *pgd,
base_pud = pt_ops.get_pud_virt(pfn_to_phys(_pgd_pfn(*pgd)));
} else if (pgd_none(*pgd)) {
base_pud = memblock_alloc(PTRS_PER_PUD * sizeof(pud_t), PAGE_SIZE);
memcpy(base_pud, (void *)kasan_early_shadow_pud,
sizeof(pud_t) * PTRS_PER_PUD);
} else {
base_pud = (pud_t *)pgd_page_vaddr(*pgd);
if (base_pud == lm_alias(kasan_early_shadow_pud)) {
@ -173,8 +175,11 @@ static void __init kasan_populate_p4d(pgd_t *pgd,
base_p4d = pt_ops.get_p4d_virt(pfn_to_phys(_pgd_pfn(*pgd)));
} else {
base_p4d = (p4d_t *)pgd_page_vaddr(*pgd);
if (base_p4d == lm_alias(kasan_early_shadow_p4d))
if (base_p4d == lm_alias(kasan_early_shadow_p4d)) {
base_p4d = memblock_alloc(PTRS_PER_PUD * sizeof(p4d_t), PAGE_SIZE);
memcpy(base_p4d, (void *)kasan_early_shadow_p4d,
sizeof(p4d_t) * PTRS_PER_P4D);
}
}
p4dp = base_p4d + p4d_index(vaddr);

View File

@ -102,8 +102,17 @@ SECTIONS
_compressed_start = .;
*(.vmlinux.bin.compressed)
_compressed_end = .;
FILL(0xff);
. = ALIGN(4096);
}
#define SB_TRAILER_SIZE 32
/* Trailer needed for Secure Boot */
. += SB_TRAILER_SIZE; /* make sure .sb.trailer does not overwrite the previous section */
. = ALIGN(4096) - SB_TRAILER_SIZE;
.sb.trailer : {
QUAD(0)
QUAD(0)
QUAD(0)
QUAD(0x000000207a49504c)
}
_end = .;
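A quick worked example of the two location-counter statements above may help; the addresses are made up. If the compressed image ends at 0x212345, then ". += SB_TRAILER_SIZE" moves the counter to 0x212365, "ALIGN(4096) - SB_TRAILER_SIZE" places the trailer at 0x212fe0, and its four quadwords end exactly on the 0x213000 page boundary. The initial bump is what protects the corner case: had the image ended at 0x212ff0, within 32 bytes of a boundary, the alignment alone would have put the trailer at 0x212fe0 on top of the image; with the bump the counter first moves past 0x213000 and the trailer lands at 0x213fe0 instead.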

View File

@ -17,7 +17,8 @@
"3: jl 1b\n" \
" lhi %0,0\n" \
"4: sacf 768\n" \
EX_TABLE(0b,4b) EX_TABLE(2b,4b) EX_TABLE(3b,4b) \
EX_TABLE(0b,4b) EX_TABLE(1b,4b) \
EX_TABLE(2b,4b) EX_TABLE(3b,4b) \
: "=d" (ret), "=&d" (oldval), "=&d" (newval), \
"=m" (*uaddr) \
: "0" (-EFAULT), "d" (oparg), "a" (uaddr), \

View File

@ -459,6 +459,7 @@ static int paiext_push_sample(void)
raw.frag.data = cpump->save;
raw.size = raw.frag.size;
data.raw = &raw;
data.sample_flags |= PERF_SAMPLE_RAW;
}
overflow = perf_event_overflow(event, &data, &regs);

View File

@ -157,7 +157,7 @@ unsigned long __clear_user(void __user *to, unsigned long size)
asm volatile(
" lr 0,%[spec]\n"
"0: mvcos 0(%1),0(%4),%0\n"
" jz 4f\n"
"6: jz 4f\n"
"1: algr %0,%2\n"
" slgr %1,%2\n"
" j 0b\n"
@ -167,11 +167,11 @@ unsigned long __clear_user(void __user *to, unsigned long size)
" clgr %0,%3\n" /* copy crosses next page boundary? */
" jnh 5f\n"
"3: mvcos 0(%1),0(%4),%3\n"
" slgr %0,%3\n"
"7: slgr %0,%3\n"
" j 5f\n"
"4: slgr %0,%0\n"
"5:\n"
EX_TABLE(0b,2b) EX_TABLE(3b,5b)
EX_TABLE(0b,2b) EX_TABLE(6b,2b) EX_TABLE(3b,5b) EX_TABLE(7b,5b)
: "+a" (size), "+a" (to), "+a" (tmp1), "=a" (tmp2)
: "a" (empty_zero_page), [spec] "d" (spec.val)
: "cc", "memory", "0");

View File

@ -64,7 +64,7 @@ static inline int __pcistg_mio_inuser(
asm volatile (
" sacf 256\n"
"0: llgc %[tmp],0(%[src])\n"
" sllg %[val],%[val],8\n"
"4: sllg %[val],%[val],8\n"
" aghi %[src],1\n"
" ogr %[val],%[tmp]\n"
" brctg %[cnt],0b\n"
@ -72,7 +72,7 @@ static inline int __pcistg_mio_inuser(
"2: ipm %[cc]\n"
" srl %[cc],28\n"
"3: sacf 768\n"
EX_TABLE(0b, 3b) EX_TABLE(1b, 3b) EX_TABLE(2b, 3b)
EX_TABLE(0b, 3b) EX_TABLE(4b, 3b) EX_TABLE(1b, 3b) EX_TABLE(2b, 3b)
:
[src] "+a" (src), [cnt] "+d" (cnt),
[val] "+d" (val), [tmp] "=d" (tmp),
@ -215,10 +215,10 @@ static inline int __pcilg_mio_inuser(
"2: ahi %[shift],-8\n"
" srlg %[tmp],%[val],0(%[shift])\n"
"3: stc %[tmp],0(%[dst])\n"
" aghi %[dst],1\n"
"5: aghi %[dst],1\n"
" brctg %[cnt],2b\n"
"4: sacf 768\n"
EX_TABLE(0b, 4b) EX_TABLE(1b, 4b) EX_TABLE(3b, 4b)
EX_TABLE(0b, 4b) EX_TABLE(1b, 4b) EX_TABLE(3b, 4b) EX_TABLE(5b, 4b)
:
[ioaddr_len] "+&d" (ioaddr_len.pair),
[cc] "+d" (cc), [val] "=d" (val),

View File

@ -1973,7 +1973,6 @@ config EFI
config EFI_STUB
bool "EFI stub support"
depends on EFI
depends on $(cc-option,-mabi=ms) || X86_32
select RELOCATABLE
help
This kernel feature allows a bzImage to be loaded directly

View File

@ -27,13 +27,17 @@
#include <asm/cpu_device_id.h>
#include <asm/simd.h>
#define POLYVAL_ALIGN 16
#define POLYVAL_ALIGN_ATTR __aligned(POLYVAL_ALIGN)
#define POLYVAL_ALIGN_EXTRA ((POLYVAL_ALIGN - 1) & ~(CRYPTO_MINALIGN - 1))
#define POLYVAL_CTX_SIZE (sizeof(struct polyval_tfm_ctx) + POLYVAL_ALIGN_EXTRA)
#define NUM_KEY_POWERS 8
struct polyval_tfm_ctx {
/*
* These powers must be in the order h^8, ..., h^1.
*/
u8 key_powers[NUM_KEY_POWERS][POLYVAL_BLOCK_SIZE];
u8 key_powers[NUM_KEY_POWERS][POLYVAL_BLOCK_SIZE] POLYVAL_ALIGN_ATTR;
};
struct polyval_desc_ctx {
@ -45,6 +49,11 @@ asmlinkage void clmul_polyval_update(const struct polyval_tfm_ctx *keys,
const u8 *in, size_t nblocks, u8 *accumulator);
asmlinkage void clmul_polyval_mul(u8 *op1, const u8 *op2);
static inline struct polyval_tfm_ctx *polyval_tfm_ctx(struct crypto_shash *tfm)
{
return PTR_ALIGN(crypto_shash_ctx(tfm), POLYVAL_ALIGN);
}
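The helper just above is an instance of a common trick: the crypto API only guarantees CRYPTO_MINALIGN alignment for the shash context, so the context size is padded and every access is funnelled through PTR_ALIGN(). A simplified, hypothetical sketch of the same technique follows; example_ctx and the EXAMPLE_* names are illustrative, and the padding here is the plain worst case rather than the patch's tighter POLYVAL_ALIGN_EXTRA expression.

#include <linux/kernel.h>
#include <linux/types.h>

#define EXAMPLE_ALIGN           16
/* worst-case padding so a 16-byte-aligned object always fits inside the
 * raw, minimally aligned buffer
 */
#define EXAMPLE_ALIGN_EXTRA     (EXAMPLE_ALIGN - 1)

struct example_ctx {
        u8 key[32];
};

/* Size to advertise to whoever allocates the raw context buffer. */
#define EXAMPLE_CTX_SIZE        (sizeof(struct example_ctx) + EXAMPLE_ALIGN_EXTRA)

/* Return the aligned view of a raw buffer of EXAMPLE_CTX_SIZE bytes. */
static inline struct example_ctx *example_ctx(void *raw_ctx)
{
        return PTR_ALIGN(raw_ctx, EXAMPLE_ALIGN);
}

Callers only ever report EXAMPLE_CTX_SIZE to the allocator and only touch the context through example_ctx(), mirroring POLYVAL_CTX_SIZE and polyval_tfm_ctx() above.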
static void internal_polyval_update(const struct polyval_tfm_ctx *keys,
const u8 *in, size_t nblocks, u8 *accumulator)
{
@ -72,7 +81,7 @@ static void internal_polyval_mul(u8 *op1, const u8 *op2)
static int polyval_x86_setkey(struct crypto_shash *tfm,
const u8 *key, unsigned int keylen)
{
struct polyval_tfm_ctx *tctx = crypto_shash_ctx(tfm);
struct polyval_tfm_ctx *tctx = polyval_tfm_ctx(tfm);
int i;
if (keylen != POLYVAL_BLOCK_SIZE)
@ -102,7 +111,7 @@ static int polyval_x86_update(struct shash_desc *desc,
const u8 *src, unsigned int srclen)
{
struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
const struct polyval_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
const struct polyval_tfm_ctx *tctx = polyval_tfm_ctx(desc->tfm);
u8 *pos;
unsigned int nblocks;
unsigned int n;
@ -143,7 +152,7 @@ static int polyval_x86_update(struct shash_desc *desc,
static int polyval_x86_final(struct shash_desc *desc, u8 *dst)
{
struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
const struct polyval_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
const struct polyval_tfm_ctx *tctx = polyval_tfm_ctx(desc->tfm);
if (dctx->bytes) {
internal_polyval_mul(dctx->buffer,
@ -167,7 +176,7 @@ static struct shash_alg polyval_alg = {
.cra_driver_name = "polyval-clmulni",
.cra_priority = 200,
.cra_blocksize = POLYVAL_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct polyval_tfm_ctx),
.cra_ctxsize = POLYVAL_CTX_SIZE,
.cra_module = THIS_MODULE,
},
};

View File

@ -801,7 +801,7 @@ static void perf_ibs_get_mem_lvl(union ibs_op_data2 *op_data2,
/* Extension Memory */
if (ibs_caps & IBS_CAPS_ZEN4 &&
ibs_data_src == IBS_DATA_SRC_EXT_EXT_MEM) {
data_src->mem_lvl_num = PERF_MEM_LVLNUM_EXTN_MEM;
data_src->mem_lvl_num = PERF_MEM_LVLNUM_CXL;
if (op_data2->rmt_node) {
data_src->mem_remote = PERF_MEM_REMOTE_REMOTE;
/* IBS doesn't provide Remote socket detail */

View File

@ -1596,7 +1596,7 @@ void __init intel_pmu_arch_lbr_init(void)
return;
clear_arch_lbr:
clear_cpu_cap(&boot_cpu_data, X86_FEATURE_ARCH_LBR);
setup_clear_cpu_cap(X86_FEATURE_ARCH_LBR);
}
/**

View File

@ -806,7 +806,11 @@ static const struct x86_cpu_id rapl_model_match[] __initconst = {
X86_MATCH_INTEL_FAM6_MODEL(COMETLAKE, &model_skl),
X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE, &model_skl),
X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_L, &model_skl),
X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_N, &model_skl),
X86_MATCH_INTEL_FAM6_MODEL(SAPPHIRERAPIDS_X, &model_spr),
X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE, &model_skl),
X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE_P, &model_skl),
X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE_S, &model_skl),
{},
};
MODULE_DEVICE_TABLE(x86cpu, rapl_model_match);

View File

@ -25,8 +25,10 @@ arch_rmrr_sanity_check(struct acpi_dmar_reserved_memory *rmrr)
{
u64 start = rmrr->base_address;
u64 end = rmrr->end_address + 1;
int entry_type;
if (e820__mapped_all(start, end, E820_TYPE_RESERVED))
entry_type = e820__get_entry_type(start, end);
if (entry_type == E820_TYPE_RESERVED || entry_type == E820_TYPE_NVS)
return 0;
pr_err(FW_BUG "No firmware reserved region can cover this RMRR [%#018Lx-%#018Lx], contact BIOS vendor for fixes\n",

View File

@ -10,10 +10,13 @@
/* Even with __builtin_ the compiler may decide to use the out of line
function. */
#if defined(__SANITIZE_MEMORY__) && defined(__NO_FORTIFY)
#include <linux/kmsan_string.h>
#endif
#define __HAVE_ARCH_MEMCPY 1
#if defined(__SANITIZE_MEMORY__)
#if defined(__SANITIZE_MEMORY__) && defined(__NO_FORTIFY)
#undef memcpy
void *__msan_memcpy(void *dst, const void *src, size_t size);
#define memcpy __msan_memcpy
#else
extern void *memcpy(void *to, const void *from, size_t len);
@ -21,7 +24,7 @@ extern void *memcpy(void *to, const void *from, size_t len);
extern void *__memcpy(void *to, const void *from, size_t len);
#define __HAVE_ARCH_MEMSET
#if defined(__SANITIZE_MEMORY__)
#if defined(__SANITIZE_MEMORY__) && defined(__NO_FORTIFY)
extern void *__msan_memset(void *s, int c, size_t n);
#undef memset
#define memset __msan_memset
@ -67,7 +70,7 @@ static inline void *memset64(uint64_t *s, uint64_t v, size_t n)
}
#define __HAVE_ARCH_MEMMOVE
#if defined(__SANITIZE_MEMORY__)
#if defined(__SANITIZE_MEMORY__) && defined(__NO_FORTIFY)
#undef memmove
void *__msan_memmove(void *dest, const void *src, size_t len);
#define memmove __msan_memmove

View File

@ -254,24 +254,25 @@ extern void __put_user_nocheck_8(void);
#define __put_user_size(x, ptr, size, label) \
do { \
__typeof__(*(ptr)) __x = (x); /* eval x once */ \
__chk_user_ptr(ptr); \
__typeof__(ptr) __ptr = (ptr); /* eval ptr once */ \
__chk_user_ptr(__ptr); \
switch (size) { \
case 1: \
__put_user_goto(__x, ptr, "b", "iq", label); \
__put_user_goto(__x, __ptr, "b", "iq", label); \
break; \
case 2: \
__put_user_goto(__x, ptr, "w", "ir", label); \
__put_user_goto(__x, __ptr, "w", "ir", label); \
break; \
case 4: \
__put_user_goto(__x, ptr, "l", "ir", label); \
__put_user_goto(__x, __ptr, "l", "ir", label); \
break; \
case 8: \
__put_user_goto_u64(__x, ptr, label); \
__put_user_goto_u64(__x, __ptr, label); \
break; \
default: \
__put_user_bad(); \
} \
instrument_put_user(__x, ptr, size); \
instrument_put_user(__x, __ptr, size); \
} while (0)
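The fix above is the classic evaluate-macro-arguments-once rule: without the local __ptr, an invocation such as __put_user_size(v, p++, ...) would expand ptr several times and advance p more than once. A hypothetical stand-alone illustration of the rule (STORE_ONCE is not a kernel macro):

/* Evaluate both macro arguments exactly once, then use only the locals. */
#define STORE_ONCE(x, ptr)                                      \
do {                                                            \
        __typeof__(*(ptr)) __val = (x); /* eval x once   */     \
        __typeof__(ptr) __p = (ptr);    /* eval ptr once */     \
        *__p = __val;                                           \
} while (0)

With this shape, STORE_ONCE(0, p++) stores through the original p and increments it exactly once; __typeof__ only inspects the type of its operand, so the occurrences inside it do not evaluate p++ again.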
#ifdef CONFIG_CC_HAS_ASM_GOTO_OUTPUT

View File

@ -440,7 +440,13 @@ apply_microcode_early_amd(u32 cpuid_1_eax, void *ucode, size_t size, bool save_p
return ret;
native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
if (rev >= mc->hdr.patch_id)
/*
* Allow application of the same revision to pick up SMT-specific
* changes even if the revision of the other SMT thread is already
* up-to-date.
*/
if (rev > mc->hdr.patch_id)
return ret;
if (!__apply_microcode_amd(mc)) {
@ -528,8 +534,12 @@ void load_ucode_amd_ap(unsigned int cpuid_1_eax)
native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
/* Check whether we have saved a new patch already: */
if (*new_rev && rev < mc->hdr.patch_id) {
/*
* Check whether a new patch has been saved already. Also, allow application of
* the same revision in order to pick up SMT-thread-specific configuration even
* if the sibling SMT thread already has an up-to-date revision.
*/
if (*new_rev && rev <= mc->hdr.patch_id) {
if (!__apply_microcode_amd(mc)) {
*new_rev = mc->hdr.patch_id;
return;
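To make the relaxed comparisons concrete with a made-up revision: if both SMT threads of a core already report patch level 0x0a201210 and the blob carries that same 0x0a201210 revision, the old "rev >= mc->hdr.patch_id" test made the sibling return without calling __apply_microcode_amd(), while the new "rev > mc->hdr.patch_id" check (and the matching "rev <= mc->hdr.patch_id" in the AP path) lets the sibling re-apply the image and pick up any thread-local state the patch carries.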

View File

@ -66,9 +66,6 @@ struct rdt_hw_resource rdt_resources_all[] = {
.rid = RDT_RESOURCE_L3,
.name = "L3",
.cache_level = 3,
.cache = {
.min_cbm_bits = 1,
},
.domains = domain_init(RDT_RESOURCE_L3),
.parse_ctrlval = parse_cbm,
.format_str = "%d=%0*x",
@ -83,9 +80,6 @@ struct rdt_hw_resource rdt_resources_all[] = {
.rid = RDT_RESOURCE_L2,
.name = "L2",
.cache_level = 2,
.cache = {
.min_cbm_bits = 1,
},
.domains = domain_init(RDT_RESOURCE_L2),
.parse_ctrlval = parse_cbm,
.format_str = "%d=%0*x",
@ -836,6 +830,7 @@ static __init void rdt_init_res_defs_intel(void)
r->cache.arch_has_sparse_bitmaps = false;
r->cache.arch_has_empty_bitmaps = false;
r->cache.arch_has_per_cpu_cfg = false;
r->cache.min_cbm_bits = 1;
} else if (r->rid == RDT_RESOURCE_MBA) {
hw_res->msr_base = MSR_IA32_MBA_THRTL_BASE;
hw_res->msr_update = mba_wrmsr_intel;
@ -856,6 +851,7 @@ static __init void rdt_init_res_defs_amd(void)
r->cache.arch_has_sparse_bitmaps = true;
r->cache.arch_has_empty_bitmaps = true;
r->cache.arch_has_per_cpu_cfg = true;
r->cache.min_cbm_bits = 0;
} else if (r->rid == RDT_RESOURCE_MBA) {
hw_res->msr_base = MSR_IA32_MBA_BW_BASE;
hw_res->msr_update = mba_wrmsr_amd;

View File

@ -96,6 +96,7 @@ int detect_extended_topology(struct cpuinfo_x86 *c)
unsigned int ht_mask_width, core_plus_mask_width, die_plus_mask_width;
unsigned int core_select_mask, core_level_siblings;
unsigned int die_select_mask, die_level_siblings;
unsigned int pkg_mask_width;
bool die_level_present = false;
int leaf;
@ -111,10 +112,10 @@ int detect_extended_topology(struct cpuinfo_x86 *c)
core_level_siblings = smp_num_siblings = LEVEL_MAX_SIBLINGS(ebx);
core_plus_mask_width = ht_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
die_level_siblings = LEVEL_MAX_SIBLINGS(ebx);
die_plus_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
pkg_mask_width = die_plus_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
sub_index = 1;
do {
while (true) {
cpuid_count(leaf, sub_index, &eax, &ebx, &ecx, &edx);
/*
@ -132,10 +133,15 @@ int detect_extended_topology(struct cpuinfo_x86 *c)
die_plus_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
}
sub_index++;
} while (LEAFB_SUBTYPE(ecx) != INVALID_TYPE);
if (LEAFB_SUBTYPE(ecx) != INVALID_TYPE)
pkg_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
else
break;
core_select_mask = (~(-1 << core_plus_mask_width)) >> ht_mask_width;
sub_index++;
}
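The reworked loop above walks the extended-topology CPUID sub-leaves until the level type reads as invalid and keeps the last valid APIC-ID shift as the package mask width. Below is a small userspace sketch of the same enumeration using GCC's cpuid.h; the choice of leaf 0x1f and the field extraction shown are illustrative of the general technique, not lifted from the patch.

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
        unsigned int eax, ebx, ecx, edx, sub = 0;

        if (__get_cpuid_max(0, NULL) < 0x1f)
                return 0;       /* extended topology leaf not supported */

        while (1) {
                __cpuid_count(0x1f, sub, eax, ebx, ecx, edx);
                if (((ecx >> 8) & 0xff) == 0)   /* invalid level type: stop */
                        break;
                printf("sub-leaf %u: level type %u, APIC-ID shift %u\n",
                       sub, (ecx >> 8) & 0xff, eax & 0x1f);
                sub++;
        }
        return 0;
}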
core_select_mask = (~(-1 << pkg_mask_width)) >> ht_mask_width;
die_select_mask = (~(-1 << die_plus_mask_width)) >>
core_plus_mask_width;
@ -148,7 +154,7 @@ int detect_extended_topology(struct cpuinfo_x86 *c)
}
c->phys_proc_id = apic->phys_pkg_id(c->initial_apicid,
die_plus_mask_width);
pkg_mask_width);
/*
* Reinit the apicid, now that we have extended initial_apicid.
*/

View File

@ -210,13 +210,6 @@ static void __init fpu__init_system_xstate_size_legacy(void)
fpstate_reset(&current->thread.fpu);
}
static void __init fpu__init_init_fpstate(void)
{
/* Bring init_fpstate size and features up to date */
init_fpstate.size = fpu_kernel_cfg.max_size;
init_fpstate.xfeatures = fpu_kernel_cfg.max_features;
}
/*
* Called on the boot CPU once per system bootup, to set up the initial
* FPU state that is later cloned into all processes:
@ -236,5 +229,4 @@ void __init fpu__init_system(struct cpuinfo_x86 *c)
fpu__init_system_xstate_size_legacy();
fpu__init_system_xstate(fpu_kernel_cfg.max_size);
fpu__init_task_struct_size();
fpu__init_init_fpstate();
}

View File

@ -360,7 +360,7 @@ static void __init setup_init_fpu_buf(void)
print_xstate_features();
xstate_init_xcomp_bv(&init_fpstate.regs.xsave, fpu_kernel_cfg.max_features);
xstate_init_xcomp_bv(&init_fpstate.regs.xsave, init_fpstate.xfeatures);
/*
* Init all the features state with header.xfeatures being 0x0
@ -678,20 +678,6 @@ static unsigned int __init get_xsave_size_user(void)
return ebx;
}
/*
* Will the runtime-enumerated 'xstate_size' fit in the init
* task's statically-allocated buffer?
*/
static bool __init is_supported_xstate_size(unsigned int test_xstate_size)
{
if (test_xstate_size <= sizeof(init_fpstate.regs))
return true;
pr_warn("x86/fpu: xstate buffer too small (%zu < %d), disabling xsave\n",
sizeof(init_fpstate.regs), test_xstate_size);
return false;
}
static int __init init_xstate_size(void)
{
/* Recompute the context size for enabled features: */
@ -717,10 +703,6 @@ static int __init init_xstate_size(void)
kernel_default_size =
xstate_calculate_size(fpu_kernel_cfg.default_features, compacted);
/* Ensure we have the space to store all default enabled features. */
if (!is_supported_xstate_size(kernel_default_size))
return -EINVAL;
if (!paranoid_xstate_size_valid(kernel_size))
return -EINVAL;
@ -875,6 +857,19 @@ void __init fpu__init_system_xstate(unsigned int legacy_size)
update_regset_xstate_info(fpu_user_cfg.max_size,
fpu_user_cfg.max_features);
/*
* init_fpstate excludes dynamic states because they are large but
* their init state is all zeros.
*/
init_fpstate.size = fpu_kernel_cfg.default_size;
init_fpstate.xfeatures = fpu_kernel_cfg.default_features;
if (init_fpstate.size > sizeof(init_fpstate.regs)) {
pr_warn("x86/fpu: init_fpstate buffer too small (%zu < %d), disabling XSAVE\n",
sizeof(init_fpstate.regs), init_fpstate.size);
goto out_disable;
}
setup_init_fpu_buf();
/*
@ -1130,6 +1125,15 @@ void __copy_xstate_to_uabi_buf(struct membuf to, struct fpstate *fpstate,
*/
mask = fpstate->user_xfeatures;
/*
* Dynamic features are not present in init_fpstate. When they are
* in an all zeros init state, remove those from 'mask' to zero
* those features in the user buffer instead of retrieving them
* from init_fpstate.
*/
if (fpu_state_size_dynamic())
mask &= (header.xfeatures | xinit->header.xcomp_bv);
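As a hypothetical worked example of the new masking step, with feature bits made up for illustration: suppose fpstate->user_xfeatures is 0b111 where bit 2 is a dynamic state such as AMX tile data, the saved header.xfeatures is 0b011, and xinit->header.xcomp_bv is 0b011. Then mask &= (0b011 | 0b011) clears bit 2, so the copy loop below skips that state and the user buffer receives zeros for it, rather than an attempted read from init_fpstate, which no longer carries dynamic states at all.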
for_each_extended_xfeature(i, mask) {
/*
* If there was a feature or alignment gap, zero the space

Some files were not shown because too many files have changed in this diff.