pci-v5.13-changes

-----BEGIN PGP SIGNATURE-----
 
 iQJIBAABCgAyFiEEgMe7l+5h9hnxdsnuWYigwDrT+vwFAmCRp48UHGJoZWxnYWFz
 QGdvb2dsZS5jb20ACgkQWYigwDrT+vwsVRAAsIYueNKzZczpkeQwHigYzf4HLdKm
 yyT2c/Zlj9REAUOe7ApkowVAJWiMGDJP0J361KIluAGvAxnkMP1V6WlVdByorYd0
 CrXc/UhD//cs+3QDo4SmJRHyL8q5QQTDa8Z/8seVJUYTR/t5OhSpMOuEJPhpeQ1s
 nqUk0yWNJRoN6wn6T/7KqgYEvPhARXo9epuWy5MNPZ5f8E7SRi/QG/6hP8/YOLpK
 A+8beIOX5LAvUJaXxEovwv5UQnSUkeZTGDyRietQYE6xXNeHPKCvZ7vDjjSE7NOW
 mIodD6JcG3n/riYV3sMA5PKDZgsPI3P/qJU6Y6vWBBYOaO/kQX/c7CZ+M2bcZay4
 mh1dW0vOqoTy/pAVwQB2aq08Rrg2SAskpNdeyzduXllmuTyuwCMPXzG4RKmbQ8I1
 qMFb8qOyNulRAWcTKgSMKByEQYASQsFA5yShtaba6h0+vqrseuP6hchBKKOEan8F
 9THTI3ZflKwRvGjkI0MDbp0z0+wPYmNhrcZDpAJ3bEltw58E8TL/9aBtuhajmo8+
 wJ64mZclFuMmSyhsfkAXOvjeKXMlEBaw7vinZGbcACmv4ZGI0MV7r4vVYQbQltcy
 myzB6xJxcWB8N07UpKpUbsGMb9JjTUPlaT36eZNvUZQDntrE1ljt8RSq3nphDrcD
 KmBRU8ru74I2RE0=
 =WvTD
 -----END PGP SIGNATURE-----

Merge tag 'pci-v5.13-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
 "Enumeration:
   - Release OF node when pci_scan_device() fails (Dmitry Baryshkov)
   - Add pci_disable_parity() (Bjorn Helgaas)
   - Disable Mellanox Tavor parity reporting (Heiner Kallweit)
   - Disable N2100 r8169 parity reporting (Heiner Kallweit)
   - Fix RCiEP device to RCEC association (Qiuxu Zhuo)
   - Convert sysfs "config", "rom", "reset", "label", "index",
     "acpi_index" to static attributes to help fix races in device
     enumeration (Krzysztof Wilczyński)
   - Convert sysfs "vpd" to static attribute (Heiner Kallweit, Krzysztof
     Wilczyński)
   - Use sysfs_emit() in "show" functions (Krzysztof Wilczyński)
   - Remove unused alloc_pci_root_info() return value (Krzysztof
     Wilczyński)
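
     A minimal sketch of what the new pci_disable_parity() helper could
     look like, assuming it does no more than clear the Parity Error
     Response bit in the Command register (the in-tree implementation may
     differ in detail):

        #include <linux/pci.h>

        void pci_disable_parity(struct pci_dev *dev)
        {
                u16 cmd;

                pci_read_config_word(dev, PCI_COMMAND, &cmd);
                if (cmd & PCI_COMMAND_PARITY) {
                        /* Stop the device from reporting parity errors */
                        cmd &= ~PCI_COMMAND_PARITY;
                        pci_write_config_word(dev, PCI_COMMAND, cmd);
                }
        }

     The sysfs_emit() conversions follow the usual pattern of replacing
     sprintf(buf, ...) with sysfs_emit(buf, ...) in "show" callbacks, which
     also bounds the output to PAGE_SIZE.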

  PCI device hotplug:
   - Fix acpiphp reference count leak (Feilong Lin)

  Power management:
   - Fix acpi_pci_set_power_state() debug message (Rafael J. Wysocki)
   - Fix runtime PM imbalance (Dinghao Liu)

  Virtualization:
   - Increase delay after FLR to work around Intel DC P4510 NVMe erratum
     (Raphael Norwitz)

  MSI:
   - Convert rcar, tegra, xilinx to MSI domains (Marc Zyngier)
   - For rcar, xilinx, use controller address as MSI doorbell (Marc
     Zyngier)
   - Remove unused hv msi_controller struct (Marc Zyngier)
   - Remove unused PCI core msi_controller support (Marc Zyngier)
   - Remove struct msi_controller altogether (Marc Zyngier)
   - Remove unused default_teardown_msi_irqs() (Marc Zyngier)
   - Let host bridges declare their reliance on MSI domains (Marc
     Zyngier)
   - Make pci_host_common_probe() declare its reliance on MSI domains
     (Marc Zyngier)
   - Advertise mediatek lack of built-in MSI handling (Thomas Gleixner)
   - Document ways of ending up with NO_MSI (Marc Zyngier)
   - Refactor HT advertising of NO_MSI flag (Marc Zyngier)
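
     Declaring reliance on MSI domains amounts to setting a flag on the
     host bridge before probing it, as the pci_host_common_probe() hunk
     below shows. A rough sketch of a driver doing the same (example_*
     names are illustrative; a real driver also fills in bridge->ops,
     ->sysdata and its bus resources):

        #include <linux/pci.h>
        #include <linux/platform_device.h>

        static int example_host_probe(struct platform_device *pdev)
        {
                struct pci_host_bridge *bridge;

                bridge = devm_pci_alloc_host_bridge(&pdev->dev, 0);
                if (!bridge)
                        return -ENOMEM;

                /* MSIs come from an MSI irqdomain, not the arch fallbacks */
                bridge->msi_domain = true;

                return pci_host_probe(bridge);
        }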

  VPD:
   - Remove obsolete Broadcom NIC VPD length-limiting quirk (Heiner
     Kallweit)
   - Remove sysfs VPD size checking dead code (Heiner Kallweit)
   - Convert VPD sysfs file to static attribute (Heiner Kallweit)
   - Remove unnecessary pci_set_vpd_size() (Heiner Kallweit)
   - Tone down "missing VPD" message (Heiner Kallweit)
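
     After this series pci_vpd_find_tag() no longer takes a start-offset
     argument; callers pass only the buffer, its length and the tag, as the
     driver updates below show. A hypothetical caller (example_* names are
     illustrative):

        #include <linux/pci.h>

        /* Locate the read-only VPD section in a buffer already read from
         * the device; return its offset or a negative errno.
         */
        static int example_find_vpd_ro(const u8 *vpd, unsigned int len)
        {
                int ro_start = pci_vpd_find_tag(vpd, len, PCI_VPD_LRDT_RO_DATA);

                if (ro_start < 0)
                        return -ENODATA;

                return ro_start;
        }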

  Endpoint framework:
   - Fix NULL pointer dereference when epc_features not implemented
     (Shradha Todi)
   - Add missing destroy_workqueue() in endpoint test (Yang Yingliang)

  Amazon Annapurna Labs PCIe controller driver:
   - Fix compile testing without CONFIG_PCI_ECAM (Arnd Bergmann)
   - Fix "no symbols" warnings when compile testing with
     CONFIG_TRIM_UNUSED_KSYMS (Arnd Bergmann)

  APM X-Gene PCIe controller driver:
   - Fix cfg resource mapping regression (Dejin Zheng)

  Broadcom iProc PCIe controller driver:
   - Return zero for success of iproc_msi_irq_domain_alloc() (Pali
     Rohár)

  Broadcom STB PCIe controller driver:
   - Add reset_control_rearm() stub for !CONFIG_RESET_CONTROLLER (Jim
     Quinlan)
   - Fix use of BCM7216 reset controller (Jim Quinlan)
   - Use reset/rearm for Broadcom STB pulse reset instead of
     deassert/assert (Jim Quinlan)
   - Fix brcm_pcie_probe() error return for unsupported revision (Wei
     Yongjun)
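
     The reset_control_rearm() stub mentioned above presumably mirrors the
     other !CONFIG_RESET_CONTROLLER stubs in <linux/reset.h>: a static
     inline that succeeds without doing anything, so callers such as
     ahci_brcm need no #ifdefs. A sketch of the assumed form:

        /* In include/linux/reset.h, for !CONFIG_RESET_CONTROLLER builds */
        static inline int reset_control_rearm(struct reset_control *rstc)
        {
                return 0;       /* nothing to re-arm without a reset controller */
        }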

  Cavium ThunderX PCIe controller driver:
   - Fix compile testing (Arnd Bergmann)
   - Fix "no symbols" warnings when compile testing with
     CONFIG_TRIM_UNUSED_KSYMS (Arnd Bergmann)

  Freescale Layerscape PCIe controller driver:
   - Fix ls_pcie_ep_probe() syntax error (comma for semicolon)
     (Krzysztof Wilczyński)
   - Remove layerscape-gen4 dependencies on OF and ARM64, add dependency
     on ARCH_LAYERSCAPE (Geert Uytterhoeven)

  HiSilicon HIP PCIe controller driver:
   - Remove obsolete HiSilicon PCIe DT description (Dongdong Liu)

  Intel Gateway PCIe controller driver:
   - Remove unused pcie_app_rd() (Jiapeng Chong)

  Intel VMD host bridge driver:
   - Program IRTE with Requester ID of VMD endpoint, not child device
     (Jon Derrick)
   - Disable VMD MSI-X remapping when possible so children can use more
     MSI-X vectors (Jon Derrick)

  MediaTek PCIe controller driver:
   - Configure FC and FTS for functions other than 0 (Ryder Lee)
   - Add YAML schema for MediaTek (Jianjun Wang)
   - Export pci_pio_to_address() for module use (Jianjun Wang)
   - Add MediaTek MT8192 PCIe controller driver (Jianjun Wang)
   - Add MediaTek MT8192 INTx support (Jianjun Wang)
   - Add MediaTek MT8192 MSI support (Jianjun Wang)
   - Add MediaTek MT8192 system power management support (Jianjun Wang)
   - Add missing MODULE_DEVICE_TABLE (Qiheng Lin)

  Microchip PolarFire PCIe controller driver:
   - Make several symbols static (Wei Yongjun)

  NVIDIA Tegra PCIe controller driver:
   - Add MCFG quirks for Tegra194 ECAM errata (Vidya Sagar)
   - Make several symbols const (Rikard Falkeborn)
   - Fix Kconfig host/endpoint typo (Wesley Sheng)

  SiFive FU740 PCIe controller driver:
   - Add pcie_aux clock to prci driver (Greentime Hu)
   - Use reset-simple in prci driver for PCIe (Greentime Hu)
   - Add SiFive FU740 PCIe host controller driver and DT binding (Paul
     Walmsley, Greentime Hu)

  Synopsys DesignWare PCIe controller driver:
   - Move MSI Receiver init to dw_pcie_host_init() so it is
     re-initialized along with the RC in resume (Jisheng Zhang)
   - Move iATU detection earlier to fix regression (Hou Zhiqiang)

  TI J721E PCIe driver:
   - Add DT binding and TI j721e support for refclk to PCIe connector
     (Kishon Vijay Abraham I)
   - Add host mode and endpoint mode DT bindings for TI AM64 SoC (Kishon
     Vijay Abraham I)

  TI Keystone PCIe controller driver:
   - Use generic config accessors for TI AM65x (K3) to fix regression
     (Kishon Vijay Abraham I)

  Xilinx NWL PCIe controller driver:
   - Add support for coherent PCIe DMA traffic using CCI (Bharat Kumar
     Gogada)
   - Add optional "dma-coherent" DT property (Bharat Kumar Gogada)

  Miscellaneous:
   - Fix kernel-doc warnings (Krzysztof Wilczyński)
   - Remove unused MicroGate SyncLink device IDs (Jiri Slaby)
   - Remove redundant dev_err() for devm_ioremap_resource() failure
     (Chen Hui)
   - Remove redundant initialization (Colin Ian King)
   - Drop redundant dev_err() for platform_get_irq() errors (Krzysztof
     Wilczyński)"
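
The "redundant dev_err()" cleanups in the miscellaneous section follow a
common pattern: devm_ioremap_resource() (and its wrappers) and
platform_get_irq() already log an error on failure, so callers only need to
propagate the error code. A hypothetical probe() sketch (example_probe is
illustrative):

    #include <linux/err.h>
    #include <linux/io.h>
    #include <linux/platform_device.h>

    static int example_probe(struct platform_device *pdev)
    {
            void __iomem *base;
            int irq;

            /* devm_platform_ioremap_resource() logs on failure itself */
            base = devm_platform_ioremap_resource(pdev, 0);
            if (IS_ERR(base))
                    return PTR_ERR(base);

            /* platform_get_irq() logs on failure too (except probe deferral) */
            irq = platform_get_irq(pdev, 0);
            if (irq < 0)
                    return irq;

            return 0;
    }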

* tag 'pci-v5.13-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (98 commits)
  riscv: dts: Add PCIe support for the SiFive FU740-C000 SoC
  PCI: fu740: Add SiFive FU740 PCIe host controller driver
  dt-bindings: PCI: Add SiFive FU740 PCIe host controller
  MAINTAINERS: Add maintainers for SiFive FU740 PCIe driver
  clk: sifive: Use reset-simple in prci driver for PCIe driver
  clk: sifive: Add pcie_aux clock in prci driver for PCIe driver
  PCI: brcmstb: Use reset/rearm instead of deassert/assert
  ata: ahci_brcm: Fix use of BCM7216 reset controller
  reset: add missing empty function reset_control_rearm()
  PCI: Allow VPD access for QLogic ISP2722
  PCI/VPD: Add helper pci_get_func0_dev()
  PCI/VPD: Remove pci_vpd_find_tag() SRDT handling
  PCI/VPD: Remove pci_vpd_find_tag() 'offset' argument
  PCI/VPD: Change pci_vpd_init() return type to void
  PCI/VPD: Make missing VPD message less alarming
  PCI/VPD: Remove pci_set_vpd_size()
  x86/PCI: Remove unused alloc_pci_root_info() return value
  MAINTAINERS: Add Jianjun Wang as MediaTek PCI co-maintainer
  PCI: mediatek-gen3: Add system PM support
  PCI: mediatek-gen3: Add MSI support
  ...
Linus Torvalds 2021-05-05 13:24:11 -07:00
commit 57151b502c
88 changed files with 2965 additions and 1304 deletions


@ -1,43 +0,0 @@
HiSilicon Hip05 and Hip06 PCIe host bridge DT description
HiSilicon PCIe host controller is based on the Synopsys DesignWare PCI core.
It shares common functions with the PCIe DesignWare core driver and inherits
common properties defined in
Documentation/devicetree/bindings/pci/designware-pcie.txt.
Additional properties are described here:
Required properties
- compatible: Should contain "hisilicon,hip05-pcie" or "hisilicon,hip06-pcie".
- reg: Should contain rc_dbi, config registers location and length.
- reg-names: Must include the following entries:
"rc_dbi": controller configuration registers;
"config": PCIe configuration space registers.
- msi-parent: Should be its_pcie which is an ITS receiving MSI interrupts.
- port-id: Should be 0, 1, 2 or 3.
Optional properties:
- status: Either "ok" or "disabled".
- dma-coherent: Present if DMA operations are coherent.
Hip05 Example (note that Hip06 is the same except compatible):
pcie@b0080000 {
compatible = "hisilicon,hip05-pcie", "snps,dw-pcie";
reg = <0 0xb0080000 0 0x10000>, <0x220 0x00000000 0 0x2000>;
reg-names = "rc_dbi", "config";
bus-range = <0 15>;
msi-parent = <&its_pcie>;
#address-cells = <3>;
#size-cells = <2>;
device_type = "pci";
dma-coherent;
ranges = <0x82000000 0 0x00000000 0x220 0x00000000 0 0x10000000>;
num-lanes = <8>;
port-id = <1>;
#interrupt-cells = <1>;
interrupt-map-mask = <0xf800 0 0 7>;
interrupt-map = <0x0 0 0 1 &mbigen_pcie 1 10
0x0 0 0 2 &mbigen_pcie 2 11
0x0 0 0 3 &mbigen_pcie 3 12
0x0 0 0 4 &mbigen_pcie 4 13>;
};


@ -0,0 +1,181 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/mediatek-pcie-gen3.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Gen3 PCIe controller on MediaTek SoCs
maintainers:
- Jianjun Wang <jianjun.wang@mediatek.com>
description: |+
PCIe Gen3 MAC controller for MediaTek SoCs, it supports Gen3 speed
and compatible with Gen2, Gen1 speed.
This PCIe controller supports up to 256 MSI vectors, the MSI hardware
block diagram is as follows:
+-----+
| GIC |
+-----+
^
|
port->irq
|
+-+-+-+-+-+-+-+-+
|0|1|2|3|4|5|6|7| (PCIe intc)
+-+-+-+-+-+-+-+-+
^ ^ ^
| | ... |
+-------+ +------+ +-----------+
| | |
+-+-+---+--+--+ +-+-+---+--+--+ +-+-+---+--+--+
|0|1|...|30|31| |0|1|...|30|31| |0|1|...|30|31| (MSI sets)
+-+-+---+--+--+ +-+-+---+--+--+ +-+-+---+--+--+
^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^
| | | | | | | | | | | | (MSI vectors)
| | | | | | | | | | | |
(MSI SET0) (MSI SET1) ... (MSI SET7)
With 256 MSI vectors supported, the MSI vectors are composed of 8 sets,
each set has its own address for MSI message, and supports 32 MSI vectors
to generate interrupt.
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
properties:
compatible:
const: mediatek,mt8192-pcie
reg:
maxItems: 1
reg-names:
items:
- const: pcie-mac
interrupts:
maxItems: 1
ranges:
minItems: 1
maxItems: 8
resets:
minItems: 1
maxItems: 2
reset-names:
minItems: 1
maxItems: 2
items:
- const: phy
- const: mac
clocks:
maxItems: 6
clock-names:
items:
- const: pl_250m
- const: tl_26m
- const: tl_96m
- const: tl_32k
- const: peri_26m
- const: top_133m
assigned-clocks:
maxItems: 1
assigned-clock-parents:
maxItems: 1
phys:
maxItems: 1
'#interrupt-cells':
const: 1
interrupt-controller:
description: Interrupt controller node for handling legacy PCI interrupts.
type: object
properties:
'#address-cells':
const: 0
'#interrupt-cells':
const: 1
interrupt-controller: true
required:
- '#address-cells'
- '#interrupt-cells'
- interrupt-controller
additionalProperties: false
required:
- compatible
- reg
- reg-names
- interrupts
- ranges
- clocks
- '#interrupt-cells'
- interrupt-controller
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/interrupt-controller/irq.h>
bus {
#address-cells = <2>;
#size-cells = <2>;
pcie: pcie@11230000 {
compatible = "mediatek,mt8192-pcie";
device_type = "pci";
#address-cells = <3>;
#size-cells = <2>;
reg = <0x00 0x11230000 0x00 0x4000>;
reg-names = "pcie-mac";
interrupts = <GIC_SPI 251 IRQ_TYPE_LEVEL_HIGH 0>;
bus-range = <0x00 0xff>;
ranges = <0x82000000 0x00 0x12000000 0x00
0x12000000 0x00 0x1000000>;
clocks = <&infracfg 44>,
<&infracfg 40>,
<&infracfg 43>,
<&infracfg 97>,
<&infracfg 99>,
<&infracfg 111>;
clock-names = "pl_250m", "tl_26m", "tl_96m",
"tl_32k", "peri_26m", "top_133m";
assigned-clocks = <&topckgen 50>;
assigned-clock-parents = <&topckgen 91>;
phys = <&pciephy>;
phy-names = "pcie-phy";
resets = <&infracfg_rst 2>,
<&infracfg_rst 3>;
reset-names = "phy", "mac";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0x7>;
interrupt-map = <0 0 0 1 &pcie_intc 0>,
<0 0 0 2 &pcie_intc 1>,
<0 0 0 3 &pcie_intc 2>,
<0 0 0 4 &pcie_intc 3>;
pcie_intc: interrupt-controller {
#address-cells = <0>;
#interrupt-cells = <1>;
interrupt-controller;
};
};
};


@ -0,0 +1,113 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/sifive,fu740-pcie.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: SiFive FU740 PCIe host controller
description: |+
SiFive FU740 PCIe host controller is based on the Synopsys DesignWare
PCI core. It shares common features with the PCIe DesignWare core and
inherits common properties defined in
Documentation/devicetree/bindings/pci/designware-pcie.txt.
maintainers:
- Paul Walmsley <paul.walmsley@sifive.com>
- Greentime Hu <greentime.hu@sifive.com>
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
properties:
compatible:
const: sifive,fu740-pcie
reg:
maxItems: 3
reg-names:
items:
- const: dbi
- const: config
- const: mgmt
num-lanes:
const: 8
msi-parent: true
interrupt-names:
items:
- const: msi
- const: inta
- const: intb
- const: intc
- const: intd
resets:
description: A phandle to the PCIe power up reset line.
maxItems: 1
pwren-gpios:
description: Should specify the GPIO for controlling the PCI bus device power on.
maxItems: 1
reset-gpios:
maxItems: 1
required:
- dma-coherent
- num-lanes
- interrupts
- interrupt-names
- interrupt-parent
- interrupt-map-mask
- interrupt-map
- clock-names
- clocks
- resets
- pwren-gpios
- reset-gpios
unevaluatedProperties: false
examples:
- |
bus {
#address-cells = <2>;
#size-cells = <2>;
#include <dt-bindings/clock/sifive-fu740-prci.h>
pcie@e00000000 {
compatible = "sifive,fu740-pcie";
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
reg = <0xe 0x00000000 0x0 0x80000000>,
<0xd 0xf0000000 0x0 0x10000000>,
<0x0 0x100d0000 0x0 0x1000>;
reg-names = "dbi", "config", "mgmt";
device_type = "pci";
dma-coherent;
bus-range = <0x0 0xff>;
ranges = <0x81000000 0x0 0x60080000 0x0 0x60080000 0x0 0x10000>, /* I/O */
<0x82000000 0x0 0x60090000 0x0 0x60090000 0x0 0xff70000>, /* mem */
<0x82000000 0x0 0x70000000 0x0 0x70000000 0x0 0x1000000>, /* mem */
<0xc3000000 0x20 0x00000000 0x20 0x00000000 0x20 0x00000000>; /* mem prefetchable */
num-lanes = <0x8>;
interrupts = <56>, <57>, <58>, <59>, <60>, <61>, <62>, <63>, <64>;
interrupt-names = "msi", "inta", "intb", "intc", "intd";
interrupt-parent = <&plic0>;
interrupt-map-mask = <0x0 0x0 0x0 0x7>;
interrupt-map = <0x0 0x0 0x0 0x1 &plic0 57>,
<0x0 0x0 0x0 0x2 &plic0 58>,
<0x0 0x0 0x0 0x3 &plic0 59>,
<0x0 0x0 0x0 0x4 &plic0 60>;
clock-names = "pcie_aux";
clocks = <&prci PRCI_CLK_PCIE_AUX>;
resets = <&prci 4>;
pwren-gpios = <&gpio 5 0>;
reset-gpios = <&gpio 8 0>;
};
};


@ -16,13 +16,15 @@ allOf:
properties:
compatible:
oneOf:
- const: ti,j721e-pcie-ep
- description: PCIe EP controller in AM64
items:
- const: ti,am64-pcie-ep
- const: ti,j721e-pcie-ep
- description: PCIe EP controller in J7200
items:
- const: ti,j7200-pcie-ep
- const: ti,j721e-pcie-ep
- description: PCIe EP controller in J721E
items:
- const: ti,j721e-pcie-ep
reg:
maxItems: 4
@ -66,7 +68,6 @@ required:
- power-domains
- clocks
- clock-names
- dma-coherent
- max-functions
- phys
- phy-names


@ -16,13 +16,15 @@ allOf:
properties:
compatible:
oneOf:
- const: ti,j721e-pcie-host
- description: PCIe controller in AM64
items:
- const: ti,am64-pcie-host
- const: ti,j721e-pcie-host
- description: PCIe controller in J7200
items:
- const: ti,j7200-pcie-host
- const: ti,j721e-pcie-host
- description: PCIe controller in J721E
items:
- const: ti,j721e-pcie-host
reg:
maxItems: 4
@ -46,12 +48,17 @@ properties:
maxItems: 1
clocks:
maxItems: 1
description: clock-specifier to represent input to the PCIe
minItems: 1
maxItems: 2
description: |+
clock-specifier to represent input to the PCIe for 1 item.
2nd item if present represents reference clock to the connector.
clock-names:
minItems: 1
items:
- const: fck
- const: pcie_refclk
vendor-id:
const: 0x104c
@ -62,6 +69,8 @@ properties:
- const: 0xb00d
- items:
- const: 0xb00f
- items:
- const: 0xb010
msi-map: true
@ -78,7 +87,6 @@ required:
- vendor-id
- device-id
- msi-map
- dma-coherent
- dma-ranges
- ranges
- reset-gpios


@ -33,6 +33,8 @@ Required properties:
- #address-cells: specifies the number of cells needed to encode an
address. The value must be 0.
Optional properties:
- dma-coherent: present if DMA operations are coherent
Example:
++++++++


@ -13983,6 +13983,14 @@ S: Maintained
F: Documentation/devicetree/bindings/pci/fsl,imx6q-pcie.txt
F: drivers/pci/controller/dwc/*imx6*
PCI DRIVER FOR FU740
M: Paul Walmsley <paul.walmsley@sifive.com>
M: Greentime Hu <greentime.hu@sifive.com>
L: linux-pci@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/pci/sifive,fu740-pcie.yaml
F: drivers/pci/controller/dwc/pcie-fu740.c
PCI DRIVER FOR INTEL VOLUME MANAGEMENT DEVICE (VMD)
M: Jonathan Derrick <jonathan.derrick@intel.com>
L: linux-pci@vger.kernel.org
@ -14190,7 +14198,6 @@ PCIE DRIVER FOR HISILICON
M: Zhou Wang <wangzhou1@hisilicon.com>
L: linux-pci@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/pci/hisilicon-pcie.txt
F: drivers/pci/controller/dwc/pcie-hisi.c
PCIE DRIVER FOR HISILICON KIRIN
@ -14210,6 +14217,7 @@ F: drivers/pci/controller/dwc/pcie-histb.c
PCIE DRIVER FOR MEDIATEK
M: Ryder Lee <ryder.lee@mediatek.com>
M: Jianjun Wang <jianjun.wang@mediatek.com>
L: linux-pci@vger.kernel.org
L: linux-mediatek@lists.infradead.org
S: Supported


@ -116,16 +116,16 @@ static struct hw_pci n2100_pci __initdata = {
};
/*
* Both r8169 chips on the n2100 exhibit PCI parity problems. Set
* the ->broken_parity_status flag for both ports so that the r8169
* driver knows it should ignore error interrupts.
* Both r8169 chips on the n2100 exhibit PCI parity problems. Turn
* off parity reporting for both ports so we don't get error interrupts
* for them.
*/
static void n2100_fixup_r8169(struct pci_dev *dev)
{
if (dev->bus->number == 0 &&
(dev->devfn == PCI_DEVFN(1, 0) ||
dev->devfn == PCI_DEVFN(2, 0)))
dev->broken_parity_status = 1;
pci_disable_parity(dev);
}
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_REALTEK, PCI_ANY_ID, n2100_fixup_r8169);


@ -159,6 +159,7 @@ prci: clock-controller@10000000 {
reg = <0x0 0x10000000 0x0 0x1000>;
clocks = <&hfclk>, <&rtcclk>;
#clock-cells = <1>;
#reset-cells = <1>;
};
uart0: serial@10010000 {
compatible = "sifive,fu740-c000-uart", "sifive,uart0";
@ -289,5 +290,37 @@ gpio: gpio@10060000 {
clocks = <&prci PRCI_CLK_PCLK>;
status = "disabled";
};
pcie@e00000000 {
compatible = "sifive,fu740-pcie";
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
reg = <0xe 0x00000000 0x0 0x80000000>,
<0xd 0xf0000000 0x0 0x10000000>,
<0x0 0x100d0000 0x0 0x1000>;
reg-names = "dbi", "config", "mgmt";
device_type = "pci";
dma-coherent;
bus-range = <0x0 0xff>;
ranges = <0x81000000 0x0 0x60080000 0x0 0x60080000 0x0 0x10000>, /* I/O */
<0x82000000 0x0 0x60090000 0x0 0x60090000 0x0 0xff70000>, /* mem */
<0x82000000 0x0 0x70000000 0x0 0x70000000 0x0 0x1000000>, /* mem */
<0xc3000000 0x20 0x00000000 0x20 0x00000000 0x20 0x00000000>; /* mem prefetchable */
num-lanes = <0x8>;
interrupts = <56>, <57>, <58>, <59>, <60>, <61>, <62>, <63>, <64>;
interrupt-names = "msi", "inta", "intb", "intc", "intd";
interrupt-parent = <&plic0>;
interrupt-map-mask = <0x0 0x0 0x0 0x7>;
interrupt-map = <0x0 0x0 0x0 0x1 &plic0 57>,
<0x0 0x0 0x0 0x2 &plic0 58>,
<0x0 0x0 0x0 0x3 &plic0 59>,
<0x0 0x0 0x0 0x4 &plic0 60>;
clock-names = "pcie_aux";
clocks = <&prci PRCI_CLK_PCIE_AUX>;
pwren-gpios = <&gpio 5 0>;
reset-gpios = <&gpio 8 0>;
resets = <&prci 4>;
status = "okay";
};
};
};


@ -126,7 +126,7 @@ static int __init early_root_info_init(void)
node = (reg >> 4) & 0x07;
link = (reg >> 8) & 0x03;
info = alloc_pci_root_info(min_bus, max_bus, node, link);
alloc_pci_root_info(min_bus, max_bus, node, link);
}
/*


@ -116,6 +116,13 @@ static struct mcfg_fixup mcfg_quirks[] = {
THUNDER_ECAM_QUIRK(2, 12),
THUNDER_ECAM_QUIRK(2, 13),
{ "NVIDIA", "TEGRA194", 1, 0, MCFG_BUS_ANY, &tegra194_pcie_ops},
{ "NVIDIA", "TEGRA194", 1, 1, MCFG_BUS_ANY, &tegra194_pcie_ops},
{ "NVIDIA", "TEGRA194", 1, 2, MCFG_BUS_ANY, &tegra194_pcie_ops},
{ "NVIDIA", "TEGRA194", 1, 3, MCFG_BUS_ANY, &tegra194_pcie_ops},
{ "NVIDIA", "TEGRA194", 1, 4, MCFG_BUS_ANY, &tegra194_pcie_ops},
{ "NVIDIA", "TEGRA194", 1, 5, MCFG_BUS_ANY, &tegra194_pcie_ops},
#define XGENE_V1_ECAM_MCFG(rev, seg) \
{"APM ", "XGENE ", rev, seg, MCFG_BUS_ANY, \
&xgene_v1_pcie_ecam_ops }


@ -86,7 +86,8 @@ struct brcm_ahci_priv {
u32 port_mask;
u32 quirks;
enum brcm_ahci_version version;
struct reset_control *rcdev;
struct reset_control *rcdev_rescal;
struct reset_control *rcdev_ahci;
};
static inline u32 brcm_sata_readreg(void __iomem *addr)
@ -352,8 +353,8 @@ static int brcm_ahci_suspend(struct device *dev)
else
ret = 0;
if (priv->version != BRCM_SATA_BCM7216)
reset_control_assert(priv->rcdev);
reset_control_assert(priv->rcdev_ahci);
reset_control_rearm(priv->rcdev_rescal);
return ret;
}
@ -365,10 +366,10 @@ static int __maybe_unused brcm_ahci_resume(struct device *dev)
struct brcm_ahci_priv *priv = hpriv->plat_data;
int ret = 0;
if (priv->version == BRCM_SATA_BCM7216)
ret = reset_control_reset(priv->rcdev);
else
ret = reset_control_deassert(priv->rcdev);
ret = reset_control_deassert(priv->rcdev_ahci);
if (ret)
return ret;
ret = reset_control_reset(priv->rcdev_rescal);
if (ret)
return ret;
@ -434,7 +435,6 @@ static int brcm_ahci_probe(struct platform_device *pdev)
{
const struct of_device_id *of_id;
struct device *dev = &pdev->dev;
const char *reset_name = NULL;
struct brcm_ahci_priv *priv;
struct ahci_host_priv *hpriv;
struct resource *res;
@ -456,15 +456,15 @@ static int brcm_ahci_probe(struct platform_device *pdev)
if (IS_ERR(priv->top_ctrl))
return PTR_ERR(priv->top_ctrl);
/* Reset is optional depending on platform and named differently */
if (priv->version == BRCM_SATA_BCM7216)
reset_name = "rescal";
else
reset_name = "ahci";
priv->rcdev = devm_reset_control_get_optional(&pdev->dev, reset_name);
if (IS_ERR(priv->rcdev))
return PTR_ERR(priv->rcdev);
if (priv->version == BRCM_SATA_BCM7216) {
priv->rcdev_rescal = devm_reset_control_get_optional_shared(
&pdev->dev, "rescal");
if (IS_ERR(priv->rcdev_rescal))
return PTR_ERR(priv->rcdev_rescal);
}
priv->rcdev_ahci = devm_reset_control_get_optional(&pdev->dev, "ahci");
if (IS_ERR(priv->rcdev_ahci))
return PTR_ERR(priv->rcdev_ahci);
hpriv = ahci_platform_get_resources(pdev, 0);
if (IS_ERR(hpriv))
@ -485,10 +485,10 @@ static int brcm_ahci_probe(struct platform_device *pdev)
break;
}
if (priv->version == BRCM_SATA_BCM7216)
ret = reset_control_reset(priv->rcdev);
else
ret = reset_control_deassert(priv->rcdev);
ret = reset_control_reset(priv->rcdev_rescal);
if (ret)
return ret;
ret = reset_control_deassert(priv->rcdev_ahci);
if (ret)
return ret;
@ -539,8 +539,8 @@ static int brcm_ahci_probe(struct platform_device *pdev)
out_disable_clks:
ahci_platform_disable_clks(hpriv);
out_reset:
if (priv->version != BRCM_SATA_BCM7216)
reset_control_assert(priv->rcdev);
reset_control_assert(priv->rcdev_ahci);
reset_control_rearm(priv->rcdev_rescal);
return ret;
}


@ -10,6 +10,8 @@ if CLK_SIFIVE
config CLK_SIFIVE_PRCI
bool "PRCI driver for SiFive SoCs"
select RESET_CONTROLLER
select RESET_SIMPLE
select CLK_ANALOGBITS_WRPLL_CLN28HPC
help
Supports the Power Reset Clock interface (PRCI) IP block found in


@ -72,6 +72,12 @@ static const struct clk_ops sifive_fu740_prci_hfpclkplldiv_clk_ops = {
.recalc_rate = sifive_prci_hfpclkplldiv_recalc_rate,
};
static const struct clk_ops sifive_fu740_prci_pcie_aux_clk_ops = {
.enable = sifive_prci_pcie_aux_clock_enable,
.disable = sifive_prci_pcie_aux_clock_disable,
.is_enabled = sifive_prci_pcie_aux_clock_is_enabled,
};
/* List of clock controls provided by the PRCI */
struct __prci_clock __prci_init_clocks_fu740[] = {
[PRCI_CLK_COREPLL] = {
@ -120,4 +126,9 @@ struct __prci_clock __prci_init_clocks_fu740[] = {
.parent_name = "hfpclkpll",
.ops = &sifive_fu740_prci_hfpclkplldiv_clk_ops,
},
[PRCI_CLK_PCIE_AUX] = {
.name = "pcie_aux",
.parent_name = "hfclk",
.ops = &sifive_fu740_prci_pcie_aux_clk_ops,
},
};


@ -9,7 +9,7 @@
#include "sifive-prci.h"
#define NUM_CLOCK_FU740 8
#define NUM_CLOCK_FU740 9
extern struct __prci_clock __prci_init_clocks_fu740[NUM_CLOCK_FU740];


@ -453,6 +453,47 @@ void sifive_prci_hfpclkpllsel_use_hfpclkpll(struct __prci_data *pd)
r = __prci_readl(pd, PRCI_HFPCLKPLLSEL_OFFSET); /* barrier */
}
/* PCIE AUX clock APIs for enable, disable. */
int sifive_prci_pcie_aux_clock_is_enabled(struct clk_hw *hw)
{
struct __prci_clock *pc = clk_hw_to_prci_clock(hw);
struct __prci_data *pd = pc->pd;
u32 r;
r = __prci_readl(pd, PRCI_PCIE_AUX_OFFSET);
if (r & PRCI_PCIE_AUX_EN_MASK)
return 1;
else
return 0;
}
int sifive_prci_pcie_aux_clock_enable(struct clk_hw *hw)
{
struct __prci_clock *pc = clk_hw_to_prci_clock(hw);
struct __prci_data *pd = pc->pd;
u32 r __maybe_unused;
if (sifive_prci_pcie_aux_clock_is_enabled(hw))
return 0;
__prci_writel(1, PRCI_PCIE_AUX_OFFSET, pd);
r = __prci_readl(pd, PRCI_PCIE_AUX_OFFSET); /* barrier */
return 0;
}
void sifive_prci_pcie_aux_clock_disable(struct clk_hw *hw)
{
struct __prci_clock *pc = clk_hw_to_prci_clock(hw);
struct __prci_data *pd = pc->pd;
u32 r __maybe_unused;
__prci_writel(0, PRCI_PCIE_AUX_OFFSET, pd);
r = __prci_readl(pd, PRCI_PCIE_AUX_OFFSET); /* barrier */
}
/**
* __prci_register_clocks() - register clock controls in the PRCI
* @dev: Linux struct device
@ -547,6 +588,19 @@ static int sifive_prci_probe(struct platform_device *pdev)
if (IS_ERR(pd->va))
return PTR_ERR(pd->va);
pd->reset.rcdev.owner = THIS_MODULE;
pd->reset.rcdev.nr_resets = PRCI_RST_NR;
pd->reset.rcdev.ops = &reset_simple_ops;
pd->reset.rcdev.of_node = pdev->dev.of_node;
pd->reset.active_low = true;
pd->reset.membase = pd->va + PRCI_DEVICESRESETREG_OFFSET;
spin_lock_init(&pd->reset.lock);
r = devm_reset_controller_register(&pdev->dev, &pd->reset.rcdev);
if (r) {
dev_err(dev, "could not register reset controller: %d\n", r);
return r;
}
r = __prci_register_clocks(dev, pd, desc);
if (r) {
dev_err(dev, "could not register clocks: %d\n", r);


@ -11,6 +11,7 @@
#include <linux/clk/analogbits-wrpll-cln28hpc.h>
#include <linux/clk-provider.h>
#include <linux/reset/reset-simple.h>
#include <linux/platform_device.h>
/*
@ -67,6 +68,11 @@
#define PRCI_DDRPLLCFG1_CKE_SHIFT 31
#define PRCI_DDRPLLCFG1_CKE_MASK (0x1 << PRCI_DDRPLLCFG1_CKE_SHIFT)
/* PCIEAUX */
#define PRCI_PCIE_AUX_OFFSET 0x14
#define PRCI_PCIE_AUX_EN_SHIFT 0
#define PRCI_PCIE_AUX_EN_MASK (0x1 << PRCI_PCIE_AUX_EN_SHIFT)
/* GEMGXLPLLCFG0 */
#define PRCI_GEMGXLPLLCFG0_OFFSET 0x1c
#define PRCI_GEMGXLPLLCFG0_DIVR_SHIFT 0
@ -116,6 +122,8 @@
#define PRCI_DEVICESRESETREG_CHIPLINK_RST_N_MASK \
(0x1 << PRCI_DEVICESRESETREG_CHIPLINK_RST_N_SHIFT)
#define PRCI_RST_NR 7
/* CLKMUXSTATUSREG */
#define PRCI_CLKMUXSTATUSREG_OFFSET 0x2c
#define PRCI_CLKMUXSTATUSREG_TLCLKSEL_STATUS_SHIFT 1
@ -216,6 +224,7 @@
*/
struct __prci_data {
void __iomem *va;
struct reset_simple_data reset;
struct clk_hw_onecell_data hw_clks;
};
@ -296,4 +305,8 @@ unsigned long sifive_prci_tlclksel_recalc_rate(struct clk_hw *hw,
unsigned long sifive_prci_hfpclkplldiv_recalc_rate(struct clk_hw *hw,
unsigned long parent_rate);
int sifive_prci_pcie_aux_clock_is_enabled(struct clk_hw *hw);
int sifive_prci_pcie_aux_clock_enable(struct clk_hw *hw);
void sifive_prci_pcie_aux_clock_disable(struct clk_hw *hw);
#endif /* __SIFIVE_CLK_SIFIVE_PRCI_H */


@ -1280,7 +1280,8 @@ static void intel_irq_remapping_prepare_irte(struct intel_ir_data *data,
break;
case X86_IRQ_ALLOC_TYPE_PCI_MSI:
case X86_IRQ_ALLOC_TYPE_PCI_MSIX:
set_msi_sid(irte, msi_desc_to_pci_dev(info->desc));
set_msi_sid(irte,
pci_real_dma_dev(msi_desc_to_pci_dev(info->desc)));
break;
default:
BUG_ON(1);


@ -8057,7 +8057,7 @@ bnx2_read_vpd_fw_ver(struct bnx2 *bp)
data[i + 3] = data[i + BNX2_VPD_LEN];
}
i = pci_vpd_find_tag(data, 0, BNX2_VPD_LEN, PCI_VPD_LRDT_RO_DATA);
i = pci_vpd_find_tag(data, BNX2_VPD_LEN, PCI_VPD_LRDT_RO_DATA);
if (i < 0)
goto vpd_done;


@ -12206,8 +12206,7 @@ static void bnx2x_read_fwinfo(struct bnx2x *bp)
/* VPD RO tag should be first tag after identifier string, hence
* we should be able to find it in first BNX2X_VPD_LEN chars
*/
i = pci_vpd_find_tag(vpd_start, 0, BNX2X_VPD_LEN,
PCI_VPD_LRDT_RO_DATA);
i = pci_vpd_find_tag(vpd_start, BNX2X_VPD_LEN, PCI_VPD_LRDT_RO_DATA);
if (i < 0)
goto out_not_found;


@ -12794,7 +12794,7 @@ static void bnxt_vpd_read_info(struct bnxt *bp)
goto exit;
}
i = pci_vpd_find_tag(vpd_data, 0, vpd_size, PCI_VPD_LRDT_RO_DATA);
i = pci_vpd_find_tag(vpd_data, vpd_size, PCI_VPD_LRDT_RO_DATA);
if (i < 0) {
netdev_err(bp->dev, "VPD READ-Only not found\n");
goto exit;


@ -13016,7 +13016,7 @@ static int tg3_test_nvram(struct tg3 *tp)
if (!buf)
return -ENOMEM;
i = pci_vpd_find_tag((u8 *)buf, 0, len, PCI_VPD_LRDT_RO_DATA);
i = pci_vpd_find_tag((u8 *)buf, len, PCI_VPD_LRDT_RO_DATA);
if (i > 0) {
j = pci_vpd_lrdt_size(&((u8 *)buf)[i]);
if (j < 0)
@ -15629,7 +15629,7 @@ static void tg3_read_vpd(struct tg3 *tp)
if (!vpd_data)
goto out_no_vpd;
i = pci_vpd_find_tag(vpd_data, 0, vpdlen, PCI_VPD_LRDT_RO_DATA);
i = pci_vpd_find_tag(vpd_data, vpdlen, PCI_VPD_LRDT_RO_DATA);
if (i < 0)
goto out_not_found;


@ -2775,7 +2775,7 @@ int t4_get_raw_vpd_params(struct adapter *adapter, struct vpd_params *p)
if (id_len > ID_LEN)
id_len = ID_LEN;
i = pci_vpd_find_tag(vpd, 0, VPD_LEN, PCI_VPD_LRDT_RO_DATA);
i = pci_vpd_find_tag(vpd, VPD_LEN, PCI_VPD_LRDT_RO_DATA);
if (i < 0) {
dev_err(adapter->pdev_dev, "missing VPD-R section\n");
ret = -EINVAL;


@ -4398,20 +4398,6 @@ static void rtl8169_pcierr_interrupt(struct net_device *dev)
if (net_ratelimit())
netdev_err(dev, "PCI error (cmd = 0x%04x, status_errs = 0x%04x)\n",
pci_cmd, pci_status_errs);
/*
* The recovery sequence below admits a very elaborated explanation:
* - it seems to work;
* - I did not see what else could be done;
* - it makes iop3xx happy.
*
* Feel free to adjust to your needs.
*/
if (pdev->broken_parity_status)
pci_cmd &= ~PCI_COMMAND_PARITY;
else
pci_cmd |= PCI_COMMAND_SERR | PCI_COMMAND_PARITY;
pci_write_config_word(pdev, PCI_COMMAND, pci_cmd);
rtl_schedule_task(tp, RTL_FLAG_TASK_RESET_PENDING);
}


@ -920,7 +920,7 @@ static void efx_probe_vpd_strings(struct efx_nic *efx)
}
/* Get the Read only section */
ro_start = pci_vpd_find_tag(vpd_data, 0, vpd_size, PCI_VPD_LRDT_RO_DATA);
ro_start = pci_vpd_find_tag(vpd_data, vpd_size, PCI_VPD_LRDT_RO_DATA);
if (ro_start < 0) {
netif_err(efx, drv, efx->net_dev, "VPD Read-only not found\n");
return;


@ -2800,7 +2800,7 @@ static void ef4_probe_vpd_strings(struct ef4_nic *efx)
}
/* Get the Read only section */
ro_start = pci_vpd_find_tag(vpd_data, 0, vpd_size, PCI_VPD_LRDT_RO_DATA);
ro_start = pci_vpd_find_tag(vpd_data, vpd_size, PCI_VPD_LRDT_RO_DATA);
if (ro_start < 0) {
netif_err(efx, drv, efx->net_dev, "VPD Read-only not found\n");
return;


@ -480,7 +480,7 @@ EXPORT_SYMBOL_GPL(pci_pasid_features);
#define PASID_NUMBER_SHIFT 8
#define PASID_NUMBER_MASK (0x1f << PASID_NUMBER_SHIFT)
/**
* pci_max_pasid - Get maximum number of PASIDs supported by device
* pci_max_pasids - Get maximum number of PASIDs supported by device
* @pdev: PCI device structure
*
* Returns negative value when PASID capability is not present.


@ -41,7 +41,6 @@ config PCI_TEGRA
bool "NVIDIA Tegra PCIe controller"
depends on ARCH_TEGRA || COMPILE_TEST
depends on PCI_MSI_IRQ_DOMAIN
select PCI_MSI_ARCH_FALLBACKS
help
Say Y here if you want support for the PCIe host controller found
on NVIDIA Tegra SoCs.
@ -59,7 +58,6 @@ config PCIE_RCAR_HOST
bool "Renesas R-Car PCIe host controller"
depends on ARCH_RENESAS || COMPILE_TEST
depends on PCI_MSI_IRQ_DOMAIN
select PCI_MSI_ARCH_FALLBACKS
help
Say Y here if you want PCIe controller support on R-Car SoCs in host
mode.
@ -88,7 +86,7 @@ config PCI_HOST_GENERIC
config PCIE_XILINX
bool "Xilinx AXI PCIe host bridge support"
depends on OF || COMPILE_TEST
select PCI_MSI_ARCH_FALLBACKS
depends on PCI_MSI_IRQ_DOMAIN
help
Say 'Y' here if you want kernel to support the Xilinx AXI PCIe
Host Bridge driver.
@ -233,6 +231,19 @@ config PCIE_MEDIATEK
Say Y here if you want to enable PCIe controller support on
MediaTek SoCs.
config PCIE_MEDIATEK_GEN3
tristate "MediaTek Gen3 PCIe controller"
depends on ARCH_MEDIATEK || COMPILE_TEST
depends on PCI_MSI_IRQ_DOMAIN
help
Adds support for PCIe Gen3 MAC controller for MediaTek SoCs.
This PCIe controller is compatible with Gen3, Gen2 and Gen1 speed,
and support up to 256 MSI interrupt numbers for
multi-function devices.
Say Y here if you want to enable Gen3 PCIe controller support on
MediaTek SoCs.
config VMD
depends on PCI_MSI && X86_64 && SRCU
tristate "Intel Volume Management Device Driver"


@ -11,10 +11,13 @@ obj-$(CONFIG_PCIE_RCAR_HOST) += pcie-rcar.o pcie-rcar-host.o
obj-$(CONFIG_PCIE_RCAR_EP) += pcie-rcar.o pcie-rcar-ep.o
obj-$(CONFIG_PCI_HOST_COMMON) += pci-host-common.o
obj-$(CONFIG_PCI_HOST_GENERIC) += pci-host-generic.o
obj-$(CONFIG_PCI_HOST_THUNDER_ECAM) += pci-thunder-ecam.o
obj-$(CONFIG_PCI_HOST_THUNDER_PEM) += pci-thunder-pem.o
obj-$(CONFIG_PCIE_XILINX) += pcie-xilinx.o
obj-$(CONFIG_PCIE_XILINX_NWL) += pcie-xilinx-nwl.o
obj-$(CONFIG_PCIE_XILINX_CPM) += pcie-xilinx-cpm.o
obj-$(CONFIG_PCI_V3_SEMI) += pci-v3-semi.o
obj-$(CONFIG_PCI_XGENE) += pci-xgene.o
obj-$(CONFIG_PCI_XGENE_MSI) += pci-xgene-msi.o
obj-$(CONFIG_PCI_VERSATILE) += pci-versatile.o
obj-$(CONFIG_PCIE_IPROC) += pcie-iproc.o
@ -27,6 +30,7 @@ obj-$(CONFIG_PCIE_ROCKCHIP) += pcie-rockchip.o
obj-$(CONFIG_PCIE_ROCKCHIP_EP) += pcie-rockchip-ep.o
obj-$(CONFIG_PCIE_ROCKCHIP_HOST) += pcie-rockchip-host.o
obj-$(CONFIG_PCIE_MEDIATEK) += pcie-mediatek.o
obj-$(CONFIG_PCIE_MEDIATEK_GEN3) += pcie-mediatek-gen3.o
obj-$(CONFIG_PCIE_MICROCHIP_HOST) += pcie-microchip-host.o
obj-$(CONFIG_VMD) += vmd.o
obj-$(CONFIG_PCIE_BRCMSTB) += pcie-brcmstb.o
@ -47,8 +51,10 @@ obj-y += mobiveil/
# ARM64 and use internal ifdefs to only build the pieces we need
# depending on whether ACPI, the DT driver, or both are enabled.
ifdef CONFIG_PCI
ifdef CONFIG_ACPI
ifdef CONFIG_PCI_QUIRKS
obj-$(CONFIG_ARM64) += pci-thunder-ecam.o
obj-$(CONFIG_ARM64) += pci-thunder-pem.o
obj-$(CONFIG_ARM64) += pci-xgene.o
endif
endif


@ -1,11 +1,12 @@
// SPDX-License-Identifier: GPL-2.0
/**
/*
* pci-j721e - PCIe controller driver for TI's J721E SoCs
*
* Copyright (C) 2020 Texas Instruments Incorporated - http://www.ti.com
* Author: Kishon Vijay Abraham I <kishon@ti.com>
*/
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/gpio/consumer.h>
#include <linux/io.h>
@ -50,6 +51,7 @@ enum link_status {
struct j721e_pcie {
struct device *dev;
struct clk *refclk;
u32 mode;
u32 num_lanes;
struct cdns_pcie *cdns_pcie;
@ -312,6 +314,7 @@ static int j721e_pcie_probe(struct platform_device *pdev)
struct cdns_pcie_ep *ep;
struct gpio_desc *gpiod;
void __iomem *base;
struct clk *clk;
u32 num_lanes;
u32 mode;
int ret;
@ -411,6 +414,20 @@ static int j721e_pcie_probe(struct platform_device *pdev)
goto err_get_sync;
}
clk = devm_clk_get_optional(dev, "pcie_refclk");
if (IS_ERR(clk)) {
ret = PTR_ERR(clk);
dev_err(dev, "failed to get pcie_refclk\n");
goto err_pcie_setup;
}
ret = clk_prepare_enable(clk);
if (ret) {
dev_err(dev, "failed to enable pcie_refclk\n");
goto err_get_sync;
}
pcie->refclk = clk;
/*
* "Power Sequencing and Reset Signal Timings" table in
* PCI EXPRESS CARD ELECTROMECHANICAL SPECIFICATION, REV. 3.0
@ -425,8 +442,10 @@ static int j721e_pcie_probe(struct platform_device *pdev)
}
ret = cdns_pcie_host_setup(rc);
if (ret < 0)
if (ret < 0) {
clk_disable_unprepare(pcie->refclk);
goto err_pcie_setup;
}
break;
case PCI_MODE_EP:
@ -479,6 +498,7 @@ static int j721e_pcie_remove(struct platform_device *pdev)
struct cdns_pcie *cdns_pcie = pcie->cdns_pcie;
struct device *dev = &pdev->dev;
clk_disable_unprepare(pcie->refclk);
cdns_pcie_disable_phy(cdns_pcie);
pm_runtime_put(dev);
pm_runtime_disable(dev);


@ -280,7 +280,7 @@ config PCIE_TEGRA194_EP
select PCIE_TEGRA194
help
Enables support for the PCIe controller in the NVIDIA Tegra194 SoC to
work in host mode. There are two instances of PCIe controllers in
work in endpoint mode. There are two instances of PCIe controllers in
Tegra194. This controller can work either as EP or RC. In order to
enable host-specific features PCIE_TEGRA194_HOST must be selected and
in order to enable device-specific features PCIE_TEGRA194_EP must be
@ -311,6 +311,7 @@ config PCIE_AL
depends on OF && (ARM64 || COMPILE_TEST)
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_DW_HOST
select PCI_ECAM
help
Say Y here to enable support of the Amazon's Annapurna Labs PCIe
controller IP on Amazon SoCs. The PCIe controller uses the DesignWare
@ -318,4 +319,13 @@ config PCIE_AL
required only for DT-based platforms. ACPI platforms with the
Annapurna Labs PCIe controller don't need to enable this.
config PCIE_FU740
bool "SiFive FU740 PCIe host controller"
depends on PCI_MSI_IRQ_DOMAIN
depends on SOC_SIFIVE || COMPILE_TEST
select PCIE_DW_HOST
help
Say Y here if you want PCIe controller support for the SiFive
FU740.
endmenu


@ -5,6 +5,7 @@ obj-$(CONFIG_PCIE_DW_EP) += pcie-designware-ep.o
obj-$(CONFIG_PCIE_DW_PLAT) += pcie-designware-plat.o
obj-$(CONFIG_PCI_DRA7XX) += pci-dra7xx.o
obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o
obj-$(CONFIG_PCIE_FU740) += pcie-fu740.o
obj-$(CONFIG_PCI_IMX6) += pci-imx6.o
obj-$(CONFIG_PCIE_SPEAR13XX) += pcie-spear13xx.o
obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone.o
@ -17,7 +18,6 @@ obj-$(CONFIG_PCIE_INTEL_GW) += pcie-intel-gw.o
obj-$(CONFIG_PCIE_KIRIN) += pcie-kirin.o
obj-$(CONFIG_PCIE_HISI_STB) += pcie-histb.o
obj-$(CONFIG_PCI_MESON) += pci-meson.o
obj-$(CONFIG_PCIE_TEGRA194) += pcie-tegra194.o
obj-$(CONFIG_PCIE_UNIPHIER) += pcie-uniphier.o
obj-$(CONFIG_PCIE_UNIPHIER_EP) += pcie-uniphier-ep.o
@ -31,7 +31,13 @@ obj-$(CONFIG_PCIE_UNIPHIER_EP) += pcie-uniphier-ep.o
# ARM64 and use internal ifdefs to only build the pieces we need
# depending on whether ACPI, the DT driver, or both are enabled.
ifdef CONFIG_PCI
obj-$(CONFIG_PCIE_AL) += pcie-al.o
obj-$(CONFIG_PCI_HISI) += pcie-hisi.o
ifdef CONFIG_ACPI
ifdef CONFIG_PCI_QUIRKS
obj-$(CONFIG_ARM64) += pcie-al.o
obj-$(CONFIG_ARM64) += pcie-hisi.o
obj-$(CONFIG_ARM64) += pcie-tegra194.o
endif
endif


@ -346,8 +346,9 @@ static const struct irq_domain_ops ks_pcie_legacy_irq_domain_ops = {
};
/**
* ks_pcie_set_dbi_mode() - Set DBI mode to access overlaid BAR mask
* registers
* ks_pcie_set_dbi_mode() - Set DBI mode to access overlaid BAR mask registers
* @ks_pcie: A pointer to the keystone_pcie structure which holds the KeyStone
* PCIe host controller driver information.
*
* Since modification of dbi_cs2 involves different clock domain, read the
* status back to ensure the transition is complete.
@ -367,6 +368,8 @@ static void ks_pcie_set_dbi_mode(struct keystone_pcie *ks_pcie)
/**
* ks_pcie_clear_dbi_mode() - Disable DBI mode
* @ks_pcie: A pointer to the keystone_pcie structure which holds the KeyStone
* PCIe host controller driver information.
*
* Since modification of dbi_cs2 involves different clock domain, read the
* status back to ensure the transition is complete.
@ -449,6 +452,7 @@ static struct pci_ops ks_child_pcie_ops = {
/**
* ks_pcie_v3_65_add_bus() - keystone add_bus post initialization
* @bus: A pointer to the PCI bus structure.
*
* This sets BAR0 to enable inbound access for MSI_IRQ register
*/
@ -488,6 +492,8 @@ static struct pci_ops ks_pcie_ops = {
/**
* ks_pcie_link_up() - Check if link up
* @pci: A pointer to the dw_pcie structure which holds the DesignWare PCIe host
* controller driver information.
*/
static int ks_pcie_link_up(struct dw_pcie *pci)
{
@ -605,7 +611,6 @@ static void ks_pcie_msi_irq_handler(struct irq_desc *desc)
/**
* ks_pcie_legacy_irq_handler() - Handle legacy interrupt
* @irq: IRQ line for legacy interrupts
* @desc: Pointer to irq descriptor
*
* Traverse through pending legacy interrupts and invoke handler for each. Also
@ -798,7 +803,8 @@ static int __init ks_pcie_host_init(struct pcie_port *pp)
int ret;
pp->bridge->ops = &ks_pcie_ops;
pp->bridge->child_ops = &ks_child_pcie_ops;
if (!ks_pcie->is_am6)
pp->bridge->child_ops = &ks_child_pcie_ops;
ret = ks_pcie_config_legacy_irq(ks_pcie);
if (ret)


@ -154,7 +154,7 @@ static int __init ls_pcie_ep_probe(struct platform_device *pdev)
pci->dev = dev;
pci->ops = pcie->drvdata->dw_pcie_ops;
ls_epc->bar_fixed_64bit = (1 << BAR_2) | (1 << BAR_4),
ls_epc->bar_fixed_64bit = (1 << BAR_2) | (1 << BAR_4);
pcie->pci = pci;
pcie->ls_epc = ls_epc;


@ -705,6 +705,8 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
}
}
dw_pcie_iatu_detect(pci);
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "addr_space");
if (!res)
return -EINVAL;


@ -398,9 +398,9 @@ int dw_pcie_host_init(struct pcie_port *pp)
if (ret)
goto err_free_msi;
}
dw_pcie_iatu_detect(pci);
dw_pcie_setup_rc(pp);
dw_pcie_msi_init(pp);
if (!dw_pcie_link_up(pci) && pci->ops && pci->ops->start_link) {
ret = pci->ops->start_link(pci);
@ -551,6 +551,8 @@ void dw_pcie_setup_rc(struct pcie_port *pp)
}
}
dw_pcie_msi_init(pp);
/* Setup RC BARs */
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0x00000004);
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_1, 0x00000000);


@ -660,11 +660,9 @@ static void dw_pcie_iatu_detect_regions(struct dw_pcie *pci)
pci->num_ob_windows = ob;
}
void dw_pcie_setup(struct dw_pcie *pci)
void dw_pcie_iatu_detect(struct dw_pcie *pci)
{
u32 val;
struct device *dev = pci->dev;
struct device_node *np = dev->of_node;
struct platform_device *pdev = to_platform_device(dev);
if (pci->version >= 0x480A || (!pci->version &&
@ -693,6 +691,13 @@ void dw_pcie_setup(struct dw_pcie *pci)
dev_info(pci->dev, "Detected iATU regions: %u outbound, %u inbound",
pci->num_ob_windows, pci->num_ib_windows);
}
void dw_pcie_setup(struct dw_pcie *pci)
{
u32 val;
struct device *dev = pci->dev;
struct device_node *np = dev->of_node;
if (pci->link_gen > 0)
dw_pcie_link_set_max_speed(pci, pci->link_gen);


@ -306,6 +306,7 @@ int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, u8 func_no, int index,
void dw_pcie_disable_atu(struct dw_pcie *pci, int index,
enum dw_pcie_region_type type);
void dw_pcie_setup(struct dw_pcie *pci);
void dw_pcie_iatu_detect(struct dw_pcie *pci);
static inline void dw_pcie_writel_dbi(struct dw_pcie *pci, u32 reg, u32 val)
{


@ -0,0 +1,309 @@
// SPDX-License-Identifier: GPL-2.0
/*
* FU740 DesignWare PCIe Controller integration
* Copyright (C) 2019-2021 SiFive, Inc.
* Paul Walmsley
* Greentime Hu
*
* Based in part on the i.MX6 PCIe host controller shim which is:
*
* Copyright (C) 2013 Kosagi
* https://www.kosagi.com
*/
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/gpio.h>
#include <linux/gpio/consumer.h>
#include <linux/kernel.h>
#include <linux/mfd/syscon.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/regulator/consumer.h>
#include <linux/resource.h>
#include <linux/types.h>
#include <linux/interrupt.h>
#include <linux/iopoll.h>
#include <linux/reset.h>
#include "pcie-designware.h"
#define to_fu740_pcie(x) dev_get_drvdata((x)->dev)
struct fu740_pcie {
struct dw_pcie pci;
void __iomem *mgmt_base;
struct gpio_desc *reset;
struct gpio_desc *pwren;
struct clk *pcie_aux;
struct reset_control *rst;
};
#define SIFIVE_DEVICESRESETREG 0x28
#define PCIEX8MGMT_PERST_N 0x0
#define PCIEX8MGMT_APP_LTSSM_ENABLE 0x10
#define PCIEX8MGMT_APP_HOLD_PHY_RST 0x18
#define PCIEX8MGMT_DEVICE_TYPE 0x708
#define PCIEX8MGMT_PHY0_CR_PARA_ADDR 0x860
#define PCIEX8MGMT_PHY0_CR_PARA_RD_EN 0x870
#define PCIEX8MGMT_PHY0_CR_PARA_RD_DATA 0x878
#define PCIEX8MGMT_PHY0_CR_PARA_SEL 0x880
#define PCIEX8MGMT_PHY0_CR_PARA_WR_DATA 0x888
#define PCIEX8MGMT_PHY0_CR_PARA_WR_EN 0x890
#define PCIEX8MGMT_PHY0_CR_PARA_ACK 0x898
#define PCIEX8MGMT_PHY1_CR_PARA_ADDR 0x8a0
#define PCIEX8MGMT_PHY1_CR_PARA_RD_EN 0x8b0
#define PCIEX8MGMT_PHY1_CR_PARA_RD_DATA 0x8b8
#define PCIEX8MGMT_PHY1_CR_PARA_SEL 0x8c0
#define PCIEX8MGMT_PHY1_CR_PARA_WR_DATA 0x8c8
#define PCIEX8MGMT_PHY1_CR_PARA_WR_EN 0x8d0
#define PCIEX8MGMT_PHY1_CR_PARA_ACK 0x8d8
#define PCIEX8MGMT_PHY_CDR_TRACK_EN BIT(0)
#define PCIEX8MGMT_PHY_LOS_THRSHLD BIT(5)
#define PCIEX8MGMT_PHY_TERM_EN BIT(9)
#define PCIEX8MGMT_PHY_TERM_ACDC BIT(10)
#define PCIEX8MGMT_PHY_EN BIT(11)
#define PCIEX8MGMT_PHY_INIT_VAL (PCIEX8MGMT_PHY_CDR_TRACK_EN|\
PCIEX8MGMT_PHY_LOS_THRSHLD|\
PCIEX8MGMT_PHY_TERM_EN|\
PCIEX8MGMT_PHY_TERM_ACDC|\
PCIEX8MGMT_PHY_EN)
#define PCIEX8MGMT_PHY_LANEN_DIG_ASIC_RX_OVRD_IN_3 0x1008
#define PCIEX8MGMT_PHY_LANE_OFF 0x100
#define PCIEX8MGMT_PHY_LANE0_BASE (PCIEX8MGMT_PHY_LANEN_DIG_ASIC_RX_OVRD_IN_3 + 0x100 * 0)
#define PCIEX8MGMT_PHY_LANE1_BASE (PCIEX8MGMT_PHY_LANEN_DIG_ASIC_RX_OVRD_IN_3 + 0x100 * 1)
#define PCIEX8MGMT_PHY_LANE2_BASE (PCIEX8MGMT_PHY_LANEN_DIG_ASIC_RX_OVRD_IN_3 + 0x100 * 2)
#define PCIEX8MGMT_PHY_LANE3_BASE (PCIEX8MGMT_PHY_LANEN_DIG_ASIC_RX_OVRD_IN_3 + 0x100 * 3)
static void fu740_pcie_assert_reset(struct fu740_pcie *afp)
{
/* Assert PERST_N GPIO */
gpiod_set_value_cansleep(afp->reset, 0);
/* Assert controller PERST_N */
writel_relaxed(0x0, afp->mgmt_base + PCIEX8MGMT_PERST_N);
}
static void fu740_pcie_deassert_reset(struct fu740_pcie *afp)
{
/* Deassert controller PERST_N */
writel_relaxed(0x1, afp->mgmt_base + PCIEX8MGMT_PERST_N);
/* Deassert PERST_N GPIO */
gpiod_set_value_cansleep(afp->reset, 1);
}
static void fu740_pcie_power_on(struct fu740_pcie *afp)
{
gpiod_set_value_cansleep(afp->pwren, 1);
/*
* Ensure that PERST has been asserted for at least 100 ms.
* Section 2.2 of PCI Express Card Electromechanical Specification
* Revision 3.0
*/
msleep(100);
}
static void fu740_pcie_drive_reset(struct fu740_pcie *afp)
{
fu740_pcie_assert_reset(afp);
fu740_pcie_power_on(afp);
fu740_pcie_deassert_reset(afp);
}
static void fu740_phyregwrite(const uint8_t phy, const uint16_t addr,
const uint16_t wrdata, struct fu740_pcie *afp)
{
struct device *dev = afp->pci.dev;
void __iomem *phy_cr_para_addr;
void __iomem *phy_cr_para_wr_data;
void __iomem *phy_cr_para_wr_en;
void __iomem *phy_cr_para_ack;
int ret, val;
/* Setup */
if (phy) {
phy_cr_para_addr = afp->mgmt_base + PCIEX8MGMT_PHY1_CR_PARA_ADDR;
phy_cr_para_wr_data = afp->mgmt_base + PCIEX8MGMT_PHY1_CR_PARA_WR_DATA;
phy_cr_para_wr_en = afp->mgmt_base + PCIEX8MGMT_PHY1_CR_PARA_WR_EN;
phy_cr_para_ack = afp->mgmt_base + PCIEX8MGMT_PHY1_CR_PARA_ACK;
} else {
phy_cr_para_addr = afp->mgmt_base + PCIEX8MGMT_PHY0_CR_PARA_ADDR;
phy_cr_para_wr_data = afp->mgmt_base + PCIEX8MGMT_PHY0_CR_PARA_WR_DATA;
phy_cr_para_wr_en = afp->mgmt_base + PCIEX8MGMT_PHY0_CR_PARA_WR_EN;
phy_cr_para_ack = afp->mgmt_base + PCIEX8MGMT_PHY0_CR_PARA_ACK;
}
writel_relaxed(addr, phy_cr_para_addr);
writel_relaxed(wrdata, phy_cr_para_wr_data);
writel_relaxed(1, phy_cr_para_wr_en);
/* Wait for wait_idle */
ret = readl_poll_timeout(phy_cr_para_ack, val, val, 10, 5000);
if (ret)
dev_warn(dev, "Wait for wait_idle state failed!\n");
/* Clear */
writel_relaxed(0, phy_cr_para_wr_en);
/* Wait for ~wait_idle */
ret = readl_poll_timeout(phy_cr_para_ack, val, !val, 10, 5000);
if (ret)
dev_warn(dev, "Wait for !wait_idle state failed!\n");
}
static void fu740_pcie_init_phy(struct fu740_pcie *afp)
{
/* Enable phy cr_para_sel interfaces */
writel_relaxed(0x1, afp->mgmt_base + PCIEX8MGMT_PHY0_CR_PARA_SEL);
writel_relaxed(0x1, afp->mgmt_base + PCIEX8MGMT_PHY1_CR_PARA_SEL);
/*
* Wait 10 cr_para cycles to guarantee that the registers are ready
* to be edited.
*/
ndelay(10);
/* Set PHY AC termination mode */
fu740_phyregwrite(0, PCIEX8MGMT_PHY_LANE0_BASE, PCIEX8MGMT_PHY_INIT_VAL, afp);
fu740_phyregwrite(0, PCIEX8MGMT_PHY_LANE1_BASE, PCIEX8MGMT_PHY_INIT_VAL, afp);
fu740_phyregwrite(0, PCIEX8MGMT_PHY_LANE2_BASE, PCIEX8MGMT_PHY_INIT_VAL, afp);
fu740_phyregwrite(0, PCIEX8MGMT_PHY_LANE3_BASE, PCIEX8MGMT_PHY_INIT_VAL, afp);
fu740_phyregwrite(1, PCIEX8MGMT_PHY_LANE0_BASE, PCIEX8MGMT_PHY_INIT_VAL, afp);
fu740_phyregwrite(1, PCIEX8MGMT_PHY_LANE1_BASE, PCIEX8MGMT_PHY_INIT_VAL, afp);
fu740_phyregwrite(1, PCIEX8MGMT_PHY_LANE2_BASE, PCIEX8MGMT_PHY_INIT_VAL, afp);
fu740_phyregwrite(1, PCIEX8MGMT_PHY_LANE3_BASE, PCIEX8MGMT_PHY_INIT_VAL, afp);
}
static int fu740_pcie_start_link(struct dw_pcie *pci)
{
struct device *dev = pci->dev;
struct fu740_pcie *afp = dev_get_drvdata(dev);
/* Enable LTSSM */
writel_relaxed(0x1, afp->mgmt_base + PCIEX8MGMT_APP_LTSSM_ENABLE);
return 0;
}
static int fu740_pcie_host_init(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct fu740_pcie *afp = to_fu740_pcie(pci);
struct device *dev = pci->dev;
int ret;
/* Power on reset */
fu740_pcie_drive_reset(afp);
/* Enable pcieauxclk */
ret = clk_prepare_enable(afp->pcie_aux);
if (ret) {
dev_err(dev, "unable to enable pcie_aux clock\n");
return ret;
}
/*
* Assert hold_phy_rst (hold the controller LTSSM in reset after
* power_up_rst_n for register programming with cr_para)
*/
writel_relaxed(0x1, afp->mgmt_base + PCIEX8MGMT_APP_HOLD_PHY_RST);
/* Deassert power_up_rst_n */
ret = reset_control_deassert(afp->rst);
if (ret) {
dev_err(dev, "unable to deassert pcie_power_up_rst_n\n");
return ret;
}
fu740_pcie_init_phy(afp);
/* Disable pcieauxclk */
clk_disable_unprepare(afp->pcie_aux);
/* Clear hold_phy_rst */
writel_relaxed(0x0, afp->mgmt_base + PCIEX8MGMT_APP_HOLD_PHY_RST);
/* Enable pcieauxclk */
ret = clk_prepare_enable(afp->pcie_aux);
/* Set RC mode */
writel_relaxed(0x4, afp->mgmt_base + PCIEX8MGMT_DEVICE_TYPE);
return 0;
}
static const struct dw_pcie_host_ops fu740_pcie_host_ops = {
.host_init = fu740_pcie_host_init,
};
static const struct dw_pcie_ops dw_pcie_ops = {
.start_link = fu740_pcie_start_link,
};
static int fu740_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct dw_pcie *pci;
struct fu740_pcie *afp;
afp = devm_kzalloc(dev, sizeof(*afp), GFP_KERNEL);
if (!afp)
return -ENOMEM;
pci = &afp->pci;
pci->dev = dev;
pci->ops = &dw_pcie_ops;
pci->pp.ops = &fu740_pcie_host_ops;
/* SiFive specific region: mgmt */
afp->mgmt_base = devm_platform_ioremap_resource_byname(pdev, "mgmt");
if (IS_ERR(afp->mgmt_base))
return PTR_ERR(afp->mgmt_base);
/* Fetch GPIOs */
afp->reset = devm_gpiod_get_optional(dev, "reset-gpios", GPIOD_OUT_LOW);
if (IS_ERR(afp->reset))
return dev_err_probe(dev, PTR_ERR(afp->reset), "unable to get reset-gpios\n");
afp->pwren = devm_gpiod_get_optional(dev, "pwren-gpios", GPIOD_OUT_LOW);
if (IS_ERR(afp->pwren))
return dev_err_probe(dev, PTR_ERR(afp->pwren), "unable to get pwren-gpios\n");
/* Fetch clocks */
afp->pcie_aux = devm_clk_get(dev, "pcie_aux");
if (IS_ERR(afp->pcie_aux))
return dev_err_probe(dev, PTR_ERR(afp->pcie_aux),
"pcie_aux clock source missing or invalid\n");
/* Fetch reset */
afp->rst = devm_reset_control_get_exclusive(dev, NULL);
if (IS_ERR(afp->rst))
return dev_err_probe(dev, PTR_ERR(afp->rst), "unable to get reset\n");
platform_set_drvdata(pdev, afp);
return dw_pcie_host_init(&pci->pp);
}
static void fu740_pcie_shutdown(struct platform_device *pdev)
{
struct fu740_pcie *afp = platform_get_drvdata(pdev);
/* Bring down link, so bootloader gets clean state in case of reboot */
fu740_pcie_assert_reset(afp);
}
static const struct of_device_id fu740_pcie_of_match[] = {
{ .compatible = "sifive,fu740-pcie", },
{},
};
static struct platform_driver fu740_pcie_driver = {
.driver = {
.name = "fu740-pcie",
.of_match_table = fu740_pcie_of_match,
.suppress_bind_attrs = true,
},
.probe = fu740_pcie_probe,
.shutdown = fu740_pcie_shutdown,
};
builtin_platform_driver(fu740_pcie_driver);


@ -81,11 +81,6 @@ static void pcie_update_bits(void __iomem *base, u32 ofs, u32 mask, u32 val)
writel(val, base + ofs);
}
static inline u32 pcie_app_rd(struct intel_pcie_port *lpp, u32 ofs)
{
return readl(lpp->app_base + ofs);
}
static inline void pcie_app_wr(struct intel_pcie_port *lpp, u32 ofs, u32 val)
{
writel(val, lpp->app_base + ofs);


@ -22,6 +22,8 @@
#include <linux/of_irq.h>
#include <linux/of_pci.h>
#include <linux/pci.h>
#include <linux/pci-acpi.h>
#include <linux/pci-ecam.h>
#include <linux/phy/phy.h>
#include <linux/pinctrl/consumer.h>
#include <linux/platform_device.h>
@ -311,6 +313,104 @@ struct tegra_pcie_dw_of_data {
enum dw_pcie_device_mode mode;
};
#if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)
struct tegra194_pcie_ecam {
void __iomem *config_base;
void __iomem *iatu_base;
void __iomem *dbi_base;
};
static int tegra194_acpi_init(struct pci_config_window *cfg)
{
struct device *dev = cfg->parent;
struct tegra194_pcie_ecam *pcie_ecam;
pcie_ecam = devm_kzalloc(dev, sizeof(*pcie_ecam), GFP_KERNEL);
if (!pcie_ecam)
return -ENOMEM;
pcie_ecam->config_base = cfg->win;
pcie_ecam->iatu_base = cfg->win + SZ_256K;
pcie_ecam->dbi_base = cfg->win + SZ_512K;
cfg->priv = pcie_ecam;
return 0;
}
static void atu_reg_write(struct tegra194_pcie_ecam *pcie_ecam, int index,
u32 val, u32 reg)
{
u32 offset = PCIE_GET_ATU_OUTB_UNR_REG_OFFSET(index);
writel(val, pcie_ecam->iatu_base + offset + reg);
}
static void program_outbound_atu(struct tegra194_pcie_ecam *pcie_ecam,
int index, int type, u64 cpu_addr,
u64 pci_addr, u64 size)
{
atu_reg_write(pcie_ecam, index, lower_32_bits(cpu_addr),
PCIE_ATU_LOWER_BASE);
atu_reg_write(pcie_ecam, index, upper_32_bits(cpu_addr),
PCIE_ATU_UPPER_BASE);
atu_reg_write(pcie_ecam, index, lower_32_bits(pci_addr),
PCIE_ATU_LOWER_TARGET);
atu_reg_write(pcie_ecam, index, lower_32_bits(cpu_addr + size - 1),
PCIE_ATU_LIMIT);
atu_reg_write(pcie_ecam, index, upper_32_bits(pci_addr),
PCIE_ATU_UPPER_TARGET);
atu_reg_write(pcie_ecam, index, type, PCIE_ATU_CR1);
atu_reg_write(pcie_ecam, index, PCIE_ATU_ENABLE, PCIE_ATU_CR2);
}
static void __iomem *tegra194_map_bus(struct pci_bus *bus,
unsigned int devfn, int where)
{
struct pci_config_window *cfg = bus->sysdata;
struct tegra194_pcie_ecam *pcie_ecam = cfg->priv;
u32 busdev;
int type;
if (bus->number < cfg->busr.start || bus->number > cfg->busr.end)
return NULL;
if (bus->number == cfg->busr.start) {
if (PCI_SLOT(devfn) == 0)
return pcie_ecam->dbi_base + where;
else
return NULL;
}
busdev = PCIE_ATU_BUS(bus->number) | PCIE_ATU_DEV(PCI_SLOT(devfn)) |
PCIE_ATU_FUNC(PCI_FUNC(devfn));
if (bus->parent->number == cfg->busr.start) {
if (PCI_SLOT(devfn) == 0)
type = PCIE_ATU_TYPE_CFG0;
else
return NULL;
} else {
type = PCIE_ATU_TYPE_CFG1;
}
program_outbound_atu(pcie_ecam, 0, type, cfg->res.start, busdev,
SZ_256K);
return pcie_ecam->config_base + where;
}
const struct pci_ecam_ops tegra194_pcie_ops = {
.init = tegra194_acpi_init,
.pci_ops = {
.map_bus = tegra194_map_bus,
.read = pci_generic_config_read,
.write = pci_generic_config_write,
}
};
#endif /* defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS) */
#ifdef CONFIG_PCIE_TEGRA194
static inline struct tegra_pcie_dw *to_tegra_pcie(struct dw_pcie *pci)
{
return container_of(pci, struct tegra_pcie_dw, pci);
@ -1019,7 +1119,7 @@ static const struct dw_pcie_ops tegra_dw_pcie_ops = {
.stop_link = tegra_pcie_dw_stop_link,
};
static struct dw_pcie_host_ops tegra_pcie_dw_host_ops = {
static const struct dw_pcie_host_ops tegra_pcie_dw_host_ops = {
.host_init = tegra_pcie_dw_host_init,
};
@ -1645,7 +1745,7 @@ static void pex_ep_event_pex_rst_deassert(struct tegra_pcie_dw *pcie)
if (pcie->ep_state == EP_STATE_ENABLED)
return;
ret = pm_runtime_get_sync(dev);
ret = pm_runtime_resume_and_get(dev);
if (ret < 0) {
dev_err(dev, "Failed to get runtime sync for PCIe dev: %d\n",
ret);
@ -1881,7 +1981,7 @@ tegra_pcie_ep_get_features(struct dw_pcie_ep *ep)
return &tegra_pcie_epc_features;
}
static struct dw_pcie_ep_ops pcie_ep_ops = {
static const struct dw_pcie_ep_ops pcie_ep_ops = {
.raise_irq = tegra_pcie_ep_raise_irq,
.get_features = tegra_pcie_ep_get_features,
};
@ -2311,3 +2411,5 @@ MODULE_DEVICE_TABLE(of, tegra_pcie_dw_of_match);
MODULE_AUTHOR("Vidya Sagar <vidyas@nvidia.com>");
MODULE_DESCRIPTION("NVIDIA PCIe host controller driver");
MODULE_LICENSE("GPL v2");
#endif /* CONFIG_PCIE_TEGRA194 */
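For context on the ACPI boot quirk above: tegra194_map_bus() only hands back an already-mapped config-space address (programming an outbound ATU window for type 0/1 accesses on the way), while the actual register accesses are left to the generic helpers named in tegra194_pcie_ops. Below is a minimal, hedged sketch of how such a generic read consumes the returned pointer; the real implementation is pci_generic_config_read()/write() in drivers/pci/access.c.

/*
 * Hedged sketch of how a generic config read consumes .map_bus();
 * see pci_generic_config_read() for the real implementation.
 */
static int sketch_config_read(struct pci_bus *bus, unsigned int devfn,
			      int where, int size, u32 *val)
{
	void __iomem *addr;

	/* for the quirk above, this may program an outbound ATU window */
	addr = bus->ops->map_bus(bus, devfn, where);
	if (!addr) {
		*val = ~0;
		return PCIBIOS_DEVICE_NOT_FOUND;
	}

	if (size == 1)
		*val = readb(addr);
	else if (size == 2)
		*val = readw(addr);
	else
		*val = readl(addr);

	return PCIBIOS_SUCCESSFUL;
}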


@ -24,8 +24,7 @@ config PCIE_MOBIVEIL_PLAT
config PCIE_LAYERSCAPE_GEN4
bool "Freescale Layerscape PCIe Gen4 controller"
depends on PCI
depends on OF && (ARM64 || ARCH_LAYERSCAPE)
depends on ARCH_LAYERSCAPE || COMPILE_TEST
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_MOBIVEIL_HOST
help


@ -79,6 +79,7 @@ int pci_host_common_probe(struct platform_device *pdev)
bridge->sysdata = cfg;
bridge->ops = (struct pci_ops *)&ops->pci_ops;
bridge->msi_domain = true;
return pci_host_probe(bridge);
}


@ -473,7 +473,6 @@ struct hv_pcibus_device {
struct list_head dr_list;
struct msi_domain_info msi_info;
struct msi_controller msi_chip;
struct irq_domain *irq_domain;
spinlock_t retarget_msi_interrupt_lock;
@ -1866,9 +1865,6 @@ static int create_root_hv_pci_bus(struct hv_pcibus_device *hbus)
if (!hbus->pci_bus)
return -ENODEV;
hbus->pci_bus->msi = &hbus->msi_chip;
hbus->pci_bus->msi->dev = &hbus->hdev->device;
pci_lock_rescan_remove();
pci_scan_child_bus(hbus->pci_bus);
hv_pci_assign_numa_node(hbus);


@ -21,6 +21,7 @@
#include <linux/interrupt.h>
#include <linux/iopoll.h>
#include <linux/irq.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>
#include <linux/kernel.h>
#include <linux/init.h>
@ -78,23 +79,8 @@
#define AFI_MSI_FPCI_BAR_ST 0x64
#define AFI_MSI_AXI_BAR_ST 0x68
#define AFI_MSI_VEC0 0x6c
#define AFI_MSI_VEC1 0x70
#define AFI_MSI_VEC2 0x74
#define AFI_MSI_VEC3 0x78
#define AFI_MSI_VEC4 0x7c
#define AFI_MSI_VEC5 0x80
#define AFI_MSI_VEC6 0x84
#define AFI_MSI_VEC7 0x88
#define AFI_MSI_EN_VEC0 0x8c
#define AFI_MSI_EN_VEC1 0x90
#define AFI_MSI_EN_VEC2 0x94
#define AFI_MSI_EN_VEC3 0x98
#define AFI_MSI_EN_VEC4 0x9c
#define AFI_MSI_EN_VEC5 0xa0
#define AFI_MSI_EN_VEC6 0xa4
#define AFI_MSI_EN_VEC7 0xa8
#define AFI_MSI_VEC(x) (0x6c + ((x) * 4))
#define AFI_MSI_EN_VEC(x) (0x8c + ((x) * 4))
#define AFI_CONFIGURATION 0xac
#define AFI_CONFIGURATION_EN_FPCI (1 << 0)
@ -280,10 +266,10 @@
#define LINK_RETRAIN_TIMEOUT 100000 /* in usec */
struct tegra_msi {
struct msi_controller chip;
DECLARE_BITMAP(used, INT_PCI_MSI_NR);
struct irq_domain *domain;
struct mutex lock;
struct mutex map_lock;
spinlock_t mask_lock;
void *virt;
dma_addr_t phys;
int irq;
@ -333,11 +319,6 @@ struct tegra_pcie_soc {
} ectl;
};
static inline struct tegra_msi *to_tegra_msi(struct msi_controller *chip)
{
return container_of(chip, struct tegra_msi, chip);
}
struct tegra_pcie {
struct device *dev;
@ -372,6 +353,11 @@ struct tegra_pcie {
struct dentry *debugfs;
};
static inline struct tegra_pcie *msi_to_pcie(struct tegra_msi *msi)
{
return container_of(msi, struct tegra_pcie, msi);
}
struct tegra_pcie_port {
struct tegra_pcie *pcie;
struct device_node *np;
@ -1432,7 +1418,6 @@ static void tegra_pcie_phys_put(struct tegra_pcie *pcie)
}
}
static int tegra_pcie_get_resources(struct tegra_pcie *pcie)
{
struct device *dev = pcie->dev;
@ -1509,6 +1494,7 @@ static int tegra_pcie_get_resources(struct tegra_pcie *pcie)
phys_put:
if (soc->program_uphy)
tegra_pcie_phys_put(pcie);
return err;
}
@ -1551,161 +1537,227 @@ static void tegra_pcie_pme_turnoff(struct tegra_pcie_port *port)
afi_writel(pcie, val, AFI_PCIE_PME);
}
static int tegra_msi_alloc(struct tegra_msi *chip)
static void tegra_pcie_msi_irq(struct irq_desc *desc)
{
int msi;
mutex_lock(&chip->lock);
msi = find_first_zero_bit(chip->used, INT_PCI_MSI_NR);
if (msi < INT_PCI_MSI_NR)
set_bit(msi, chip->used);
else
msi = -ENOSPC;
mutex_unlock(&chip->lock);
return msi;
}
static void tegra_msi_free(struct tegra_msi *chip, unsigned long irq)
{
struct device *dev = chip->chip.dev;
mutex_lock(&chip->lock);
if (!test_bit(irq, chip->used))
dev_err(dev, "trying to free unused MSI#%lu\n", irq);
else
clear_bit(irq, chip->used);
mutex_unlock(&chip->lock);
}
static irqreturn_t tegra_pcie_msi_irq(int irq, void *data)
{
struct tegra_pcie *pcie = data;
struct device *dev = pcie->dev;
struct tegra_pcie *pcie = irq_desc_get_handler_data(desc);
struct irq_chip *chip = irq_desc_get_chip(desc);
struct tegra_msi *msi = &pcie->msi;
unsigned int i, processed = 0;
struct device *dev = pcie->dev;
unsigned int i;
chained_irq_enter(chip, desc);
for (i = 0; i < 8; i++) {
unsigned long reg = afi_readl(pcie, AFI_MSI_VEC0 + i * 4);
unsigned long reg = afi_readl(pcie, AFI_MSI_VEC(i));
while (reg) {
unsigned int offset = find_first_bit(&reg, 32);
unsigned int index = i * 32 + offset;
unsigned int irq;
/* clear the interrupt */
afi_writel(pcie, 1 << offset, AFI_MSI_VEC0 + i * 4);
irq = irq_find_mapping(msi->domain, index);
irq = irq_find_mapping(msi->domain->parent, index);
if (irq) {
if (test_bit(index, msi->used))
generic_handle_irq(irq);
else
dev_info(dev, "unhandled MSI\n");
generic_handle_irq(irq);
} else {
/*
* That's weird, who triggered this?
* Just clear it.
*/
dev_info(dev, "unexpected MSI\n");
afi_writel(pcie, BIT(index % 32), AFI_MSI_VEC(index));
}
/* see if there's any more pending in this vector */
reg = afi_readl(pcie, AFI_MSI_VEC0 + i * 4);
processed++;
reg = afi_readl(pcie, AFI_MSI_VEC(i));
}
}
return processed > 0 ? IRQ_HANDLED : IRQ_NONE;
chained_irq_exit(chip, desc);
}
static int tegra_msi_setup_irq(struct msi_controller *chip,
struct pci_dev *pdev, struct msi_desc *desc)
static void tegra_msi_top_irq_ack(struct irq_data *d)
{
struct tegra_msi *msi = to_tegra_msi(chip);
struct msi_msg msg;
unsigned int irq;
int hwirq;
hwirq = tegra_msi_alloc(msi);
if (hwirq < 0)
return hwirq;
irq = irq_create_mapping(msi->domain, hwirq);
if (!irq) {
tegra_msi_free(msi, hwirq);
return -EINVAL;
}
irq_set_msi_desc(irq, desc);
msg.address_lo = lower_32_bits(msi->phys);
msg.address_hi = upper_32_bits(msi->phys);
msg.data = hwirq;
pci_write_msi_msg(irq, &msg);
return 0;
irq_chip_ack_parent(d);
}
static void tegra_msi_teardown_irq(struct msi_controller *chip,
unsigned int irq)
static void tegra_msi_top_irq_mask(struct irq_data *d)
{
struct tegra_msi *msi = to_tegra_msi(chip);
struct irq_data *d = irq_get_irq_data(irq);
irq_hw_number_t hwirq = irqd_to_hwirq(d);
irq_dispose_mapping(irq);
tegra_msi_free(msi, hwirq);
pci_msi_mask_irq(d);
irq_chip_mask_parent(d);
}
static struct irq_chip tegra_msi_irq_chip = {
.name = "Tegra PCIe MSI",
.irq_enable = pci_msi_unmask_irq,
.irq_disable = pci_msi_mask_irq,
.irq_mask = pci_msi_mask_irq,
.irq_unmask = pci_msi_unmask_irq,
static void tegra_msi_top_irq_unmask(struct irq_data *d)
{
pci_msi_unmask_irq(d);
irq_chip_unmask_parent(d);
}
static struct irq_chip tegra_msi_top_chip = {
.name = "Tegra PCIe MSI",
.irq_ack = tegra_msi_top_irq_ack,
.irq_mask = tegra_msi_top_irq_mask,
.irq_unmask = tegra_msi_top_irq_unmask,
};
static int tegra_msi_map(struct irq_domain *domain, unsigned int irq,
irq_hw_number_t hwirq)
static void tegra_msi_irq_ack(struct irq_data *d)
{
irq_set_chip_and_handler(irq, &tegra_msi_irq_chip, handle_simple_irq);
irq_set_chip_data(irq, domain->host_data);
struct tegra_msi *msi = irq_data_get_irq_chip_data(d);
struct tegra_pcie *pcie = msi_to_pcie(msi);
unsigned int index = d->hwirq / 32;
/* clear the interrupt */
afi_writel(pcie, BIT(d->hwirq % 32), AFI_MSI_VEC(index));
}
static void tegra_msi_irq_mask(struct irq_data *d)
{
struct tegra_msi *msi = irq_data_get_irq_chip_data(d);
struct tegra_pcie *pcie = msi_to_pcie(msi);
unsigned int index = d->hwirq / 32;
unsigned long flags;
u32 value;
spin_lock_irqsave(&msi->mask_lock, flags);
value = afi_readl(pcie, AFI_MSI_EN_VEC(index));
value &= ~BIT(d->hwirq % 32);
afi_writel(pcie, value, AFI_MSI_EN_VEC(index));
spin_unlock_irqrestore(&msi->mask_lock, flags);
}
static void tegra_msi_irq_unmask(struct irq_data *d)
{
struct tegra_msi *msi = irq_data_get_irq_chip_data(d);
struct tegra_pcie *pcie = msi_to_pcie(msi);
unsigned int index = d->hwirq / 32;
unsigned long flags;
u32 value;
spin_lock_irqsave(&msi->mask_lock, flags);
value = afi_readl(pcie, AFI_MSI_EN_VEC(index));
value |= BIT(d->hwirq % 32);
afi_writel(pcie, value, AFI_MSI_EN_VEC(index));
spin_unlock_irqrestore(&msi->mask_lock, flags);
}
static int tegra_msi_set_affinity(struct irq_data *d, const struct cpumask *mask, bool force)
{
return -EINVAL;
}
static void tegra_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
{
struct tegra_msi *msi = irq_data_get_irq_chip_data(data);
msg->address_lo = lower_32_bits(msi->phys);
msg->address_hi = upper_32_bits(msi->phys);
msg->data = data->hwirq;
}
static struct irq_chip tegra_msi_bottom_chip = {
.name = "Tegra MSI",
.irq_ack = tegra_msi_irq_ack,
.irq_mask = tegra_msi_irq_mask,
.irq_unmask = tegra_msi_irq_unmask,
.irq_set_affinity = tegra_msi_set_affinity,
.irq_compose_msi_msg = tegra_compose_msi_msg,
};
static int tegra_msi_domain_alloc(struct irq_domain *domain, unsigned int virq,
unsigned int nr_irqs, void *args)
{
struct tegra_msi *msi = domain->host_data;
unsigned int i;
int hwirq;
mutex_lock(&msi->map_lock);
hwirq = bitmap_find_free_region(msi->used, INT_PCI_MSI_NR, order_base_2(nr_irqs));
mutex_unlock(&msi->map_lock);
if (hwirq < 0)
return -ENOSPC;
for (i = 0; i < nr_irqs; i++)
irq_domain_set_info(domain, virq + i, hwirq + i,
&tegra_msi_bottom_chip, domain->host_data,
handle_edge_irq, NULL, NULL);
tegra_cpuidle_pcie_irqs_in_use();
return 0;
}
static const struct irq_domain_ops msi_domain_ops = {
.map = tegra_msi_map,
static void tegra_msi_domain_free(struct irq_domain *domain, unsigned int virq,
unsigned int nr_irqs)
{
struct irq_data *d = irq_domain_get_irq_data(domain, virq);
struct tegra_msi *msi = domain->host_data;
mutex_lock(&msi->map_lock);
bitmap_release_region(msi->used, d->hwirq, order_base_2(nr_irqs));
mutex_unlock(&msi->map_lock);
}
static const struct irq_domain_ops tegra_msi_domain_ops = {
.alloc = tegra_msi_domain_alloc,
.free = tegra_msi_domain_free,
};
static struct msi_domain_info tegra_msi_info = {
.flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
MSI_FLAG_PCI_MSIX),
.chip = &tegra_msi_top_chip,
};
static int tegra_allocate_domains(struct tegra_msi *msi)
{
struct tegra_pcie *pcie = msi_to_pcie(msi);
struct fwnode_handle *fwnode = dev_fwnode(pcie->dev);
struct irq_domain *parent;
parent = irq_domain_create_linear(fwnode, INT_PCI_MSI_NR,
&tegra_msi_domain_ops, msi);
if (!parent) {
dev_err(pcie->dev, "failed to create IRQ domain\n");
return -ENOMEM;
}
irq_domain_update_bus_token(parent, DOMAIN_BUS_NEXUS);
msi->domain = pci_msi_create_irq_domain(fwnode, &tegra_msi_info, parent);
if (!msi->domain) {
dev_err(pcie->dev, "failed to create MSI domain\n");
irq_domain_remove(parent);
return -ENOMEM;
}
return 0;
}
static void tegra_free_domains(struct tegra_msi *msi)
{
struct irq_domain *parent = msi->domain->parent;
irq_domain_remove(msi->domain);
irq_domain_remove(parent);
}
static int tegra_pcie_msi_setup(struct tegra_pcie *pcie)
{
struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
struct platform_device *pdev = to_platform_device(pcie->dev);
struct tegra_msi *msi = &pcie->msi;
struct device *dev = pcie->dev;
int err;
mutex_init(&msi->lock);
mutex_init(&msi->map_lock);
spin_lock_init(&msi->mask_lock);
msi->chip.dev = dev;
msi->chip.setup_irq = tegra_msi_setup_irq;
msi->chip.teardown_irq = tegra_msi_teardown_irq;
msi->domain = irq_domain_add_linear(dev->of_node, INT_PCI_MSI_NR,
&msi_domain_ops, &msi->chip);
if (!msi->domain) {
dev_err(dev, "failed to create IRQ domain\n");
return -ENOMEM;
if (IS_ENABLED(CONFIG_PCI_MSI)) {
err = tegra_allocate_domains(msi);
if (err)
return err;
}
err = platform_get_irq_byname(pdev, "msi");
@ -1714,12 +1766,7 @@ static int tegra_pcie_msi_setup(struct tegra_pcie *pcie)
msi->irq = err;
err = request_irq(msi->irq, tegra_pcie_msi_irq, IRQF_NO_THREAD,
tegra_msi_irq_chip.name, pcie);
if (err < 0) {
dev_err(dev, "failed to request IRQ: %d\n", err);
goto free_irq_domain;
}
irq_set_chained_handler_and_data(msi->irq, tegra_pcie_msi_irq, pcie);
/* Though the PCIe controller can address >32-bit address space, to
* facilitate endpoints that support only 32-bit MSI target address,
@ -1740,14 +1787,14 @@ static int tegra_pcie_msi_setup(struct tegra_pcie *pcie)
goto free_irq;
}
host->msi = &msi->chip;
return 0;
free_irq:
free_irq(msi->irq, pcie);
irq_set_chained_handler_and_data(msi->irq, NULL, NULL);
free_irq_domain:
irq_domain_remove(msi->domain);
if (IS_ENABLED(CONFIG_PCI_MSI))
tegra_free_domains(msi);
return err;
}
@ -1755,22 +1802,18 @@ static void tegra_pcie_enable_msi(struct tegra_pcie *pcie)
{
const struct tegra_pcie_soc *soc = pcie->soc;
struct tegra_msi *msi = &pcie->msi;
u32 reg;
u32 reg, msi_state[INT_PCI_MSI_NR / 32];
int i;
afi_writel(pcie, msi->phys >> soc->msi_base_shift, AFI_MSI_FPCI_BAR_ST);
afi_writel(pcie, msi->phys, AFI_MSI_AXI_BAR_ST);
/* this register is in 4K increments */
afi_writel(pcie, 1, AFI_MSI_BAR_SZ);
/* enable all MSI vectors */
afi_writel(pcie, 0xffffffff, AFI_MSI_EN_VEC0);
afi_writel(pcie, 0xffffffff, AFI_MSI_EN_VEC1);
afi_writel(pcie, 0xffffffff, AFI_MSI_EN_VEC2);
afi_writel(pcie, 0xffffffff, AFI_MSI_EN_VEC3);
afi_writel(pcie, 0xffffffff, AFI_MSI_EN_VEC4);
afi_writel(pcie, 0xffffffff, AFI_MSI_EN_VEC5);
afi_writel(pcie, 0xffffffff, AFI_MSI_EN_VEC6);
afi_writel(pcie, 0xffffffff, AFI_MSI_EN_VEC7);
/* Restore the MSI allocation state */
bitmap_to_arr32(msi_state, msi->used, INT_PCI_MSI_NR);
for (i = 0; i < ARRAY_SIZE(msi_state); i++)
afi_writel(pcie, msi_state[i], AFI_MSI_EN_VEC(i));
/* and unmask the MSI interrupt */
reg = afi_readl(pcie, AFI_INTR_MASK);
@ -1786,16 +1829,16 @@ static void tegra_pcie_msi_teardown(struct tegra_pcie *pcie)
dma_free_attrs(pcie->dev, PAGE_SIZE, msi->virt, msi->phys,
DMA_ATTR_NO_KERNEL_MAPPING);
if (msi->irq > 0)
free_irq(msi->irq, pcie);
for (i = 0; i < INT_PCI_MSI_NR; i++) {
irq = irq_find_mapping(msi->domain, i);
if (irq > 0)
irq_dispose_mapping(irq);
irq_domain_free_irqs(irq, 1);
}
irq_domain_remove(msi->domain);
irq_set_chained_handler_and_data(msi->irq, NULL, NULL);
if (IS_ENABLED(CONFIG_PCI_MSI))
tegra_free_domains(msi);
}
static int tegra_pcie_disable_msi(struct tegra_pcie *pcie)
@ -1807,16 +1850,6 @@ static int tegra_pcie_disable_msi(struct tegra_pcie *pcie)
value &= ~AFI_INTR_MASK_MSI_MASK;
afi_writel(pcie, value, AFI_INTR_MASK);
/* disable all MSI vectors */
afi_writel(pcie, 0, AFI_MSI_EN_VEC0);
afi_writel(pcie, 0, AFI_MSI_EN_VEC1);
afi_writel(pcie, 0, AFI_MSI_EN_VEC2);
afi_writel(pcie, 0, AFI_MSI_EN_VEC3);
afi_writel(pcie, 0, AFI_MSI_EN_VEC4);
afi_writel(pcie, 0, AFI_MSI_EN_VEC5);
afi_writel(pcie, 0, AFI_MSI_EN_VEC6);
afi_writel(pcie, 0, AFI_MSI_EN_VEC7);
return 0;
}
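The conversion above (and the rcar and xilinx conversions further down) shares one structure: a driver-private bottom irq_domain that hands out hwirqs from a bitmap, a PCI/MSI domain stacked on top of it via pci_msi_create_irq_domain(), and the controller's MSI doorbell interrupt demultiplexed through a chained handler. The lookups go through msi->domain->parent because the hwirq mappings live in that bottom domain, not in the PCI/MSI domain. A condensed, hedged sketch of the demux path; the foo_* names and register helpers are illustrative, not taken from any one driver.

/*
 * Hedged sketch of the chained demux shared by these conversions.
 * foo_read_msi_status() and foo_ack_msi() are placeholder MMIO helpers.
 */
static void foo_msi_chained_handler(struct irq_desc *desc)
{
	struct foo_pcie *pcie = irq_desc_get_handler_data(desc);
	struct irq_chip *chip = irq_desc_get_chip(desc);
	unsigned long status;
	unsigned int bit, irq;

	chained_irq_enter(chip, desc);

	status = foo_read_msi_status(pcie);
	for_each_set_bit(bit, &status, FOO_NUM_MSI) {
		/* hwirqs are mapped in the bottom domain, hence ->parent */
		irq = irq_find_mapping(pcie->msi.domain->parent, bit);
		if (irq)
			generic_handle_irq(irq);
		else
			foo_ack_msi(pcie, bit);	/* spurious: just clear it */
	}

	chained_irq_exit(chip, desc);
}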


@ -116,7 +116,7 @@ static int thunder_ecam_p2_config_read(struct pci_bus *bus, unsigned int devfn,
* the config space access window. Since we are working with
* the high-order 32 bits, shift everything down by 32 bits.
*/
node_bits = (cfg->res.start >> 32) & (1 << 12);
node_bits = upper_32_bits(cfg->res.start) & (1 << 12);
v |= node_bits;
set_val(v, where, size, val);


@ -12,6 +12,7 @@
#include <linux/pci-acpi.h>
#include <linux/pci-ecam.h>
#include <linux/platform_device.h>
#include <linux/io-64-nonatomic-lo-hi.h>
#include "../pci.h"
#if defined(CONFIG_PCI_HOST_THUNDER_PEM) || (defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS))
@ -324,9 +325,9 @@ static int thunder_pem_init(struct device *dev, struct pci_config_window *cfg,
* structure here for the BAR.
*/
bar4_start = res_pem->start + 0xf00000;
pem_pci->ea_entry[0] = (u32)bar4_start | 2;
pem_pci->ea_entry[1] = (u32)(res_pem->end - bar4_start) & ~3u;
pem_pci->ea_entry[2] = (u32)(bar4_start >> 32);
pem_pci->ea_entry[0] = lower_32_bits(bar4_start) | 2;
pem_pci->ea_entry[1] = lower_32_bits(res_pem->end - bar4_start) & ~3u;
pem_pci->ea_entry[2] = upper_32_bits(bar4_start);
cfg->priv = pem_pci;
return 0;
@ -334,9 +335,9 @@ static int thunder_pem_init(struct device *dev, struct pci_config_window *cfg,
#if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)
#define PEM_RES_BASE 0x87e0c0000000UL
#define PEM_NODE_MASK GENMASK(45, 44)
#define PEM_INDX_MASK GENMASK(26, 24)
#define PEM_RES_BASE 0x87e0c0000000ULL
#define PEM_NODE_MASK GENMASK_ULL(45, 44)
#define PEM_INDX_MASK GENMASK_ULL(26, 24)
#define PEM_MIN_DOM_IN_NODE 4
#define PEM_MAX_DOM_IN_NODE 10


@ -354,7 +354,8 @@ static int xgene_pcie_map_reg(struct xgene_pcie_port *port,
if (IS_ERR(port->csr_base))
return PTR_ERR(port->csr_base);
port->cfg_base = devm_platform_ioremap_resource_byname(pdev, "cfg");
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "cfg");
port->cfg_base = devm_ioremap_resource(dev, res);
if (IS_ERR(port->cfg_base))
return PTR_ERR(port->cfg_base);
port->cfg_addr = res->start;


@ -236,10 +236,8 @@ static int altera_msi_probe(struct platform_device *pdev)
res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
"vector_slave");
msi->vector_base = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(msi->vector_base)) {
dev_err(&pdev->dev, "failed to map vector_slave memory\n");
if (IS_ERR(msi->vector_base))
return PTR_ERR(msi->vector_base);
}
msi->vector_phy = res->start;


@ -1148,6 +1148,7 @@ static int brcm_pcie_suspend(struct device *dev)
brcm_pcie_turn_off(pcie);
ret = brcm_phy_stop(pcie);
reset_control_rearm(pcie->rescal);
clk_disable_unprepare(pcie->clk);
return ret;
@ -1163,9 +1164,13 @@ static int brcm_pcie_resume(struct device *dev)
base = pcie->base;
clk_prepare_enable(pcie->clk);
ret = reset_control_reset(pcie->rescal);
if (ret)
goto err_disable_clk;
ret = brcm_phy_start(pcie);
if (ret)
goto err;
goto err_reset;
/* Take bridge out of reset so we can access the SERDES reg */
pcie->bridge_sw_init_set(pcie, 0);
@ -1180,14 +1185,16 @@ static int brcm_pcie_resume(struct device *dev)
ret = brcm_pcie_setup(pcie);
if (ret)
goto err;
goto err_reset;
if (pcie->msi)
brcm_msi_set_regs(pcie->msi);
return 0;
err:
err_reset:
reset_control_rearm(pcie->rescal);
err_disable_clk:
clk_disable_unprepare(pcie->clk);
return ret;
}
@ -1197,7 +1204,7 @@ static void __brcm_pcie_remove(struct brcm_pcie *pcie)
brcm_msi_remove(pcie);
brcm_pcie_turn_off(pcie);
brcm_phy_stop(pcie);
reset_control_assert(pcie->rescal);
reset_control_rearm(pcie->rescal);
clk_disable_unprepare(pcie->clk);
}
@ -1278,13 +1285,13 @@ static int brcm_pcie_probe(struct platform_device *pdev)
return PTR_ERR(pcie->perst_reset);
}
ret = reset_control_deassert(pcie->rescal);
ret = reset_control_reset(pcie->rescal);
if (ret)
dev_err(&pdev->dev, "failed to deassert 'rescal'\n");
ret = brcm_phy_start(pcie);
if (ret) {
reset_control_assert(pcie->rescal);
reset_control_rearm(pcie->rescal);
clk_disable_unprepare(pcie->clk);
return ret;
}
@ -1296,6 +1303,7 @@ static int brcm_pcie_probe(struct platform_device *pdev)
pcie->hw_rev = readl(pcie->base + PCIE_MISC_REVISION);
if (pcie->type == BCM4908 && pcie->hw_rev >= BRCM_PCIE_HW_REV_3_20) {
dev_err(pcie->dev, "hardware revision with unsupported PERST# setup\n");
ret = -ENODEV;
goto fail;
}
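The error-path rework above pairs every reset_control_reset() with reset_control_rearm() on teardown and drops the old assert/deassert calls. Assuming "rescal" is a shared, self-deasserting (pulsed) reset, rearm is what allows a later reset_control_reset() from this consumer to trigger the line again. A hedged sketch of the pairing in isolation; foo_phy_start() is a placeholder helper:

/*
 * Hedged sketch of the reset/rearm pairing adopted above, assuming a
 * shared, pulsed reset line.  foo_phy_start() is a placeholder helper.
 */
static int foo_power_up(struct foo_pcie *pcie)
{
	int ret;

	ret = clk_prepare_enable(pcie->clk);
	if (ret)
		return ret;

	ret = reset_control_reset(pcie->rescal);	/* trigger the pulse */
	if (ret)
		goto err_disable_clk;

	ret = foo_phy_start(pcie);
	if (ret)
		goto err_rearm;

	return 0;

err_rearm:
	reset_control_rearm(pcie->rescal);	/* let a later reset fire again */
err_disable_clk:
	clk_disable_unprepare(pcie->clk);
	return ret;
}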


@ -271,7 +271,7 @@ static int iproc_msi_irq_domain_alloc(struct irq_domain *domain,
NULL, NULL);
}
return hwirq;
return 0;
}
static void iproc_msi_irq_domain_free(struct irq_domain *domain,

File diff suppressed because it is too large.


@ -143,6 +143,7 @@ struct mtk_pcie_port;
* struct mtk_pcie_soc - differentiate between host generations
* @need_fix_class_id: whether this host's class ID needed to be fixed or not
* @need_fix_device_id: whether this host's device ID needed to be fixed or not
* @no_msi: Bridge has no MSI support, and relies on an external block
* @device_id: device ID which this host need to be fixed
* @ops: pointer to configuration access functions
* @startup: pointer to controller setting functions
@ -151,6 +152,7 @@ struct mtk_pcie_port;
struct mtk_pcie_soc {
bool need_fix_class_id;
bool need_fix_device_id;
bool no_msi;
unsigned int device_id;
struct pci_ops *ops;
int (*startup)(struct mtk_pcie_port *port);
@ -760,7 +762,7 @@ static struct pci_ops mtk_pcie_ops = {
static int mtk_pcie_startup_port(struct mtk_pcie_port *port)
{
struct mtk_pcie *pcie = port->pcie;
u32 func = PCI_FUNC(port->slot << 3);
u32 func = PCI_FUNC(port->slot);
u32 slot = PCI_SLOT(port->slot << 3);
u32 val;
int err;
@ -1087,6 +1089,7 @@ static int mtk_pcie_probe(struct platform_device *pdev)
host->ops = pcie->soc->ops;
host->sysdata = pcie;
host->msi_domain = pcie->soc->no_msi;
err = pci_host_probe(host);
if (err)
@ -1176,6 +1179,7 @@ static const struct dev_pm_ops mtk_pcie_pm_ops = {
};
static const struct mtk_pcie_soc mtk_pcie_soc_v1 = {
.no_msi = true,
.ops = &mtk_pcie_ops,
.startup = mtk_pcie_startup_port,
};
@ -1210,6 +1214,7 @@ static const struct of_device_id mtk_pcie_ids[] = {
{ .compatible = "mediatek,mt7629-pcie", .data = &mtk_pcie_soc_mt7629 },
{},
};
MODULE_DEVICE_TABLE(of, mtk_pcie_ids);
static struct platform_driver mtk_pcie_driver = {
.probe = mtk_pcie_probe,


@ -301,27 +301,27 @@ static const struct cause event_cause[NUM_EVENTS] = {
LOCAL_EVENT_CAUSE(PM_MSI_INT_SYS_ERR, "system error"),
};
struct event_map pcie_event_to_event[] = {
static struct event_map pcie_event_to_event[] = {
PCIE_EVENT_TO_EVENT_MAP(L2_EXIT),
PCIE_EVENT_TO_EVENT_MAP(HOTRST_EXIT),
PCIE_EVENT_TO_EVENT_MAP(DLUP_EXIT),
};
struct event_map sec_error_to_event[] = {
static struct event_map sec_error_to_event[] = {
SEC_ERROR_TO_EVENT_MAP(TX_RAM_SEC_ERR),
SEC_ERROR_TO_EVENT_MAP(RX_RAM_SEC_ERR),
SEC_ERROR_TO_EVENT_MAP(PCIE2AXI_RAM_SEC_ERR),
SEC_ERROR_TO_EVENT_MAP(AXI2PCIE_RAM_SEC_ERR),
};
struct event_map ded_error_to_event[] = {
static struct event_map ded_error_to_event[] = {
DED_ERROR_TO_EVENT_MAP(TX_RAM_DED_ERR),
DED_ERROR_TO_EVENT_MAP(RX_RAM_DED_ERR),
DED_ERROR_TO_EVENT_MAP(PCIE2AXI_RAM_DED_ERR),
DED_ERROR_TO_EVENT_MAP(AXI2PCIE_RAM_DED_ERR),
};
struct event_map local_status_to_event[] = {
static struct event_map local_status_to_event[] = {
LOCAL_STATUS_TO_EVENT_MAP(DMA_END_ENGINE_0),
LOCAL_STATUS_TO_EVENT_MAP(DMA_END_ENGINE_1),
LOCAL_STATUS_TO_EVENT_MAP(DMA_ERROR_ENGINE_0),
@ -1023,10 +1023,8 @@ static int mc_platform_init(struct pci_config_window *cfg)
}
irq = platform_get_irq(pdev, 0);
if (irq < 0) {
dev_err(dev, "unable to request IRQ%d\n", irq);
if (irq < 0)
return -ENODEV;
}
for (i = 0; i < NUM_EVENTS; i++) {
event_irq = irq_create_mapping(port->event_domain, i);


@ -35,18 +35,12 @@
struct rcar_msi {
DECLARE_BITMAP(used, INT_PCI_MSI_NR);
struct irq_domain *domain;
struct msi_controller chip;
unsigned long pages;
struct mutex lock;
struct mutex map_lock;
spinlock_t mask_lock;
int irq1;
int irq2;
};
static inline struct rcar_msi *to_rcar_msi(struct msi_controller *chip)
{
return container_of(chip, struct rcar_msi, chip);
}
/* Structure representing the PCIe interface */
struct rcar_pcie_host {
struct rcar_pcie pcie;
@ -56,6 +50,11 @@ struct rcar_pcie_host {
int (*phy_init_fn)(struct rcar_pcie_host *host);
};
static struct rcar_pcie_host *msi_to_host(struct rcar_msi *msi)
{
return container_of(msi, struct rcar_pcie_host, msi);
}
static u32 rcar_read_conf(struct rcar_pcie *pcie, int where)
{
unsigned int shift = BITS_PER_BYTE * (where & 3);
@ -292,8 +291,6 @@ static int rcar_pcie_enable(struct rcar_pcie_host *host)
bridge->sysdata = host;
bridge->ops = &rcar_pcie_ops;
if (IS_ENABLED(CONFIG_PCI_MSI))
bridge->msi = &host->msi.chip;
return pci_host_probe(bridge);
}
@ -473,42 +470,6 @@ static int rcar_pcie_phy_init_gen3(struct rcar_pcie_host *host)
return err;
}
static int rcar_msi_alloc(struct rcar_msi *chip)
{
int msi;
mutex_lock(&chip->lock);
msi = find_first_zero_bit(chip->used, INT_PCI_MSI_NR);
if (msi < INT_PCI_MSI_NR)
set_bit(msi, chip->used);
else
msi = -ENOSPC;
mutex_unlock(&chip->lock);
return msi;
}
static int rcar_msi_alloc_region(struct rcar_msi *chip, int no_irqs)
{
int msi;
mutex_lock(&chip->lock);
msi = bitmap_find_free_region(chip->used, INT_PCI_MSI_NR,
order_base_2(no_irqs));
mutex_unlock(&chip->lock);
return msi;
}
static void rcar_msi_free(struct rcar_msi *chip, unsigned long irq)
{
mutex_lock(&chip->lock);
clear_bit(irq, chip->used);
mutex_unlock(&chip->lock);
}
static irqreturn_t rcar_pcie_msi_irq(int irq, void *data)
{
struct rcar_pcie_host *host = data;
@ -527,18 +488,13 @@ static irqreturn_t rcar_pcie_msi_irq(int irq, void *data)
unsigned int index = find_first_bit(&reg, 32);
unsigned int msi_irq;
/* clear the interrupt */
rcar_pci_write_reg(pcie, 1 << index, PCIEMSIFR);
msi_irq = irq_find_mapping(msi->domain, index);
msi_irq = irq_find_mapping(msi->domain->parent, index);
if (msi_irq) {
if (test_bit(index, msi->used))
generic_handle_irq(msi_irq);
else
dev_info(dev, "unhandled MSI\n");
generic_handle_irq(msi_irq);
} else {
/* Unknown MSI, just clear it */
dev_dbg(dev, "unexpected MSI\n");
rcar_pci_write_reg(pcie, BIT(index), PCIEMSIFR);
}
/* see if there's any more pending in this vector */
@ -548,149 +504,169 @@ static irqreturn_t rcar_pcie_msi_irq(int irq, void *data)
return IRQ_HANDLED;
}
static int rcar_msi_setup_irq(struct msi_controller *chip, struct pci_dev *pdev,
struct msi_desc *desc)
static void rcar_msi_top_irq_ack(struct irq_data *d)
{
struct rcar_msi *msi = to_rcar_msi(chip);
struct rcar_pcie_host *host = container_of(chip, struct rcar_pcie_host,
msi.chip);
struct rcar_pcie *pcie = &host->pcie;
struct msi_msg msg;
unsigned int irq;
int hwirq;
hwirq = rcar_msi_alloc(msi);
if (hwirq < 0)
return hwirq;
irq = irq_find_mapping(msi->domain, hwirq);
if (!irq) {
rcar_msi_free(msi, hwirq);
return -EINVAL;
}
irq_set_msi_desc(irq, desc);
msg.address_lo = rcar_pci_read_reg(pcie, PCIEMSIALR) & ~MSIFE;
msg.address_hi = rcar_pci_read_reg(pcie, PCIEMSIAUR);
msg.data = hwirq;
pci_write_msi_msg(irq, &msg);
return 0;
irq_chip_ack_parent(d);
}
static int rcar_msi_setup_irqs(struct msi_controller *chip,
struct pci_dev *pdev, int nvec, int type)
static void rcar_msi_top_irq_mask(struct irq_data *d)
{
struct rcar_msi *msi = to_rcar_msi(chip);
struct rcar_pcie_host *host = container_of(chip, struct rcar_pcie_host,
msi.chip);
struct rcar_pcie *pcie = &host->pcie;
struct msi_desc *desc;
struct msi_msg msg;
unsigned int irq;
pci_msi_mask_irq(d);
irq_chip_mask_parent(d);
}
static void rcar_msi_top_irq_unmask(struct irq_data *d)
{
pci_msi_unmask_irq(d);
irq_chip_unmask_parent(d);
}
static struct irq_chip rcar_msi_top_chip = {
.name = "PCIe MSI",
.irq_ack = rcar_msi_top_irq_ack,
.irq_mask = rcar_msi_top_irq_mask,
.irq_unmask = rcar_msi_top_irq_unmask,
};
static void rcar_msi_irq_ack(struct irq_data *d)
{
struct rcar_msi *msi = irq_data_get_irq_chip_data(d);
struct rcar_pcie *pcie = &msi_to_host(msi)->pcie;
/* clear the interrupt */
rcar_pci_write_reg(pcie, BIT(d->hwirq), PCIEMSIFR);
}
static void rcar_msi_irq_mask(struct irq_data *d)
{
struct rcar_msi *msi = irq_data_get_irq_chip_data(d);
struct rcar_pcie *pcie = &msi_to_host(msi)->pcie;
unsigned long flags;
u32 value;
spin_lock_irqsave(&msi->mask_lock, flags);
value = rcar_pci_read_reg(pcie, PCIEMSIIER);
value &= ~BIT(d->hwirq);
rcar_pci_write_reg(pcie, value, PCIEMSIIER);
spin_unlock_irqrestore(&msi->mask_lock, flags);
}
static void rcar_msi_irq_unmask(struct irq_data *d)
{
struct rcar_msi *msi = irq_data_get_irq_chip_data(d);
struct rcar_pcie *pcie = &msi_to_host(msi)->pcie;
unsigned long flags;
u32 value;
spin_lock_irqsave(&msi->mask_lock, flags);
value = rcar_pci_read_reg(pcie, PCIEMSIIER);
value |= BIT(d->hwirq);
rcar_pci_write_reg(pcie, value, PCIEMSIIER);
spin_unlock_irqrestore(&msi->mask_lock, flags);
}
static int rcar_msi_set_affinity(struct irq_data *d, const struct cpumask *mask, bool force)
{
return -EINVAL;
}
static void rcar_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
{
struct rcar_msi *msi = irq_data_get_irq_chip_data(data);
struct rcar_pcie *pcie = &msi_to_host(msi)->pcie;
msg->address_lo = rcar_pci_read_reg(pcie, PCIEMSIALR) & ~MSIFE;
msg->address_hi = rcar_pci_read_reg(pcie, PCIEMSIAUR);
msg->data = data->hwirq;
}
static struct irq_chip rcar_msi_bottom_chip = {
.name = "Rcar MSI",
.irq_ack = rcar_msi_irq_ack,
.irq_mask = rcar_msi_irq_mask,
.irq_unmask = rcar_msi_irq_unmask,
.irq_set_affinity = rcar_msi_set_affinity,
.irq_compose_msi_msg = rcar_compose_msi_msg,
};
static int rcar_msi_domain_alloc(struct irq_domain *domain, unsigned int virq,
unsigned int nr_irqs, void *args)
{
struct rcar_msi *msi = domain->host_data;
unsigned int i;
int hwirq;
int i;
/* MSI-X interrupts are not supported */
if (type == PCI_CAP_ID_MSIX)
return -EINVAL;
mutex_lock(&msi->map_lock);
WARN_ON(!list_is_singular(&pdev->dev.msi_list));
desc = list_entry(pdev->dev.msi_list.next, struct msi_desc, list);
hwirq = bitmap_find_free_region(msi->used, INT_PCI_MSI_NR, order_base_2(nr_irqs));
mutex_unlock(&msi->map_lock);
hwirq = rcar_msi_alloc_region(msi, nvec);
if (hwirq < 0)
return -ENOSPC;
irq = irq_find_mapping(msi->domain, hwirq);
if (!irq)
return -ENOSPC;
for (i = 0; i < nvec; i++) {
/*
* irq_create_mapping() called from rcar_pcie_probe() pre-
* allocates descs, so there is no need to allocate descs here.
* We can therefore assume that if irq_find_mapping() above
* returns non-zero, then the descs are also successfully
* allocated.
*/
if (irq_set_msi_desc_off(irq, i, desc)) {
/* TODO: clear */
return -EINVAL;
}
}
desc->nvec_used = nvec;
desc->msi_attrib.multiple = order_base_2(nvec);
msg.address_lo = rcar_pci_read_reg(pcie, PCIEMSIALR) & ~MSIFE;
msg.address_hi = rcar_pci_read_reg(pcie, PCIEMSIAUR);
msg.data = hwirq;
pci_write_msi_msg(irq, &msg);
for (i = 0; i < nr_irqs; i++)
irq_domain_set_info(domain, virq + i, hwirq + i,
&rcar_msi_bottom_chip, domain->host_data,
handle_edge_irq, NULL, NULL);
return 0;
}
static void rcar_msi_teardown_irq(struct msi_controller *chip, unsigned int irq)
static void rcar_msi_domain_free(struct irq_domain *domain, unsigned int virq,
unsigned int nr_irqs)
{
struct rcar_msi *msi = to_rcar_msi(chip);
struct irq_data *d = irq_get_irq_data(irq);
struct irq_data *d = irq_domain_get_irq_data(domain, virq);
struct rcar_msi *msi = domain->host_data;
rcar_msi_free(msi, d->hwirq);
mutex_lock(&msi->map_lock);
bitmap_release_region(msi->used, d->hwirq, order_base_2(nr_irqs));
mutex_unlock(&msi->map_lock);
}
static struct irq_chip rcar_msi_irq_chip = {
.name = "R-Car PCIe MSI",
.irq_enable = pci_msi_unmask_irq,
.irq_disable = pci_msi_mask_irq,
.irq_mask = pci_msi_mask_irq,
.irq_unmask = pci_msi_unmask_irq,
static const struct irq_domain_ops rcar_msi_domain_ops = {
.alloc = rcar_msi_domain_alloc,
.free = rcar_msi_domain_free,
};
static int rcar_msi_map(struct irq_domain *domain, unsigned int irq,
irq_hw_number_t hwirq)
static struct msi_domain_info rcar_msi_info = {
.flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
MSI_FLAG_MULTI_PCI_MSI),
.chip = &rcar_msi_top_chip,
};
static int rcar_allocate_domains(struct rcar_msi *msi)
{
irq_set_chip_and_handler(irq, &rcar_msi_irq_chip, handle_simple_irq);
irq_set_chip_data(irq, domain->host_data);
struct rcar_pcie *pcie = &msi_to_host(msi)->pcie;
struct fwnode_handle *fwnode = dev_fwnode(pcie->dev);
struct irq_domain *parent;
parent = irq_domain_create_linear(fwnode, INT_PCI_MSI_NR,
&rcar_msi_domain_ops, msi);
if (!parent) {
dev_err(pcie->dev, "failed to create IRQ domain\n");
return -ENOMEM;
}
irq_domain_update_bus_token(parent, DOMAIN_BUS_NEXUS);
msi->domain = pci_msi_create_irq_domain(fwnode, &rcar_msi_info, parent);
if (!msi->domain) {
dev_err(pcie->dev, "failed to create MSI domain\n");
irq_domain_remove(parent);
return -ENOMEM;
}
return 0;
}
static const struct irq_domain_ops msi_domain_ops = {
.map = rcar_msi_map,
};
static void rcar_pcie_unmap_msi(struct rcar_pcie_host *host)
static void rcar_free_domains(struct rcar_msi *msi)
{
struct rcar_msi *msi = &host->msi;
int i, irq;
for (i = 0; i < INT_PCI_MSI_NR; i++) {
irq = irq_find_mapping(msi->domain, i);
if (irq > 0)
irq_dispose_mapping(irq);
}
struct irq_domain *parent = msi->domain->parent;
irq_domain_remove(msi->domain);
}
static void rcar_pcie_hw_enable_msi(struct rcar_pcie_host *host)
{
struct rcar_pcie *pcie = &host->pcie;
struct rcar_msi *msi = &host->msi;
unsigned long base;
/* setup MSI data target */
base = virt_to_phys((void *)msi->pages);
rcar_pci_write_reg(pcie, lower_32_bits(base) | MSIFE, PCIEMSIALR);
rcar_pci_write_reg(pcie, upper_32_bits(base), PCIEMSIAUR);
/* enable all MSI interrupts */
rcar_pci_write_reg(pcie, 0xffffffff, PCIEMSIIER);
irq_domain_remove(parent);
}
static int rcar_pcie_enable_msi(struct rcar_pcie_host *host)
@ -698,29 +674,24 @@ static int rcar_pcie_enable_msi(struct rcar_pcie_host *host)
struct rcar_pcie *pcie = &host->pcie;
struct device *dev = pcie->dev;
struct rcar_msi *msi = &host->msi;
int err, i;
struct resource res;
int err;
mutex_init(&msi->lock);
mutex_init(&msi->map_lock);
spin_lock_init(&msi->mask_lock);
msi->chip.dev = dev;
msi->chip.setup_irq = rcar_msi_setup_irq;
msi->chip.setup_irqs = rcar_msi_setup_irqs;
msi->chip.teardown_irq = rcar_msi_teardown_irq;
err = of_address_to_resource(dev->of_node, 0, &res);
if (err)
return err;
msi->domain = irq_domain_add_linear(dev->of_node, INT_PCI_MSI_NR,
&msi_domain_ops, &msi->chip);
if (!msi->domain) {
dev_err(dev, "failed to create IRQ domain\n");
return -ENOMEM;
}
for (i = 0; i < INT_PCI_MSI_NR; i++)
irq_create_mapping(msi->domain, i);
err = rcar_allocate_domains(msi);
if (err)
return err;
/* Two irqs are for MSI, but they are also used for non-MSI irqs */
err = devm_request_irq(dev, msi->irq1, rcar_pcie_msi_irq,
IRQF_SHARED | IRQF_NO_THREAD,
rcar_msi_irq_chip.name, host);
rcar_msi_bottom_chip.name, host);
if (err < 0) {
dev_err(dev, "failed to request IRQ: %d\n", err);
goto err;
@ -728,27 +699,32 @@ static int rcar_pcie_enable_msi(struct rcar_pcie_host *host)
err = devm_request_irq(dev, msi->irq2, rcar_pcie_msi_irq,
IRQF_SHARED | IRQF_NO_THREAD,
rcar_msi_irq_chip.name, host);
rcar_msi_bottom_chip.name, host);
if (err < 0) {
dev_err(dev, "failed to request IRQ: %d\n", err);
goto err;
}
/* setup MSI data target */
msi->pages = __get_free_pages(GFP_KERNEL | GFP_DMA32, 0);
rcar_pcie_hw_enable_msi(host);
/* disable all MSIs */
rcar_pci_write_reg(pcie, 0, PCIEMSIIER);
/*
* Setup MSI data target using the RC base address, which
* is guaranteed to be in the low 32-bit range on any R-Car HW.
*/
rcar_pci_write_reg(pcie, lower_32_bits(res.start) | MSIFE, PCIEMSIALR);
rcar_pci_write_reg(pcie, upper_32_bits(res.start), PCIEMSIAUR);
return 0;
err:
rcar_pcie_unmap_msi(host);
rcar_free_domains(msi);
return err;
}
static void rcar_pcie_teardown_msi(struct rcar_pcie_host *host)
{
struct rcar_pcie *pcie = &host->pcie;
struct rcar_msi *msi = &host->msi;
/* Disable all MSI interrupts */
rcar_pci_write_reg(pcie, 0, PCIEMSIIER);
@ -756,9 +732,7 @@ static void rcar_pcie_teardown_msi(struct rcar_pcie_host *host)
/* Disable address decoding of the MSI interrupt, MSIFE */
rcar_pci_write_reg(pcie, 0, PCIEMSIALR);
free_pages(msi->pages, 0);
rcar_pcie_unmap_msi(host);
rcar_free_domains(&host->msi);
}
static int rcar_pcie_get_resources(struct rcar_pcie_host *host)
@ -1011,8 +985,17 @@ static int __maybe_unused rcar_pcie_resume(struct device *dev)
dev_info(dev, "PCIe x%d: link up\n", (data >> 20) & 0x3f);
/* Enable MSI */
if (IS_ENABLED(CONFIG_PCI_MSI))
rcar_pcie_hw_enable_msi(host);
if (IS_ENABLED(CONFIG_PCI_MSI)) {
struct resource res;
u32 val;
of_address_to_resource(dev->of_node, 0, &res);
rcar_pci_write_reg(pcie, upper_32_bits(res.start), PCIEMSIAUR);
rcar_pci_write_reg(pcie, lower_32_bits(res.start) | MSIFE, PCIEMSIALR);
bitmap_to_arr32(&val, host->msi.used, INT_PCI_MSI_NR);
rcar_pci_write_reg(pcie, val, PCIEMSIIER);
}
rcar_pcie_hw_enable(host);
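Both this rcar resume path and the earlier tegra_pcie_enable_msi() change restore the per-vector enable registers from the allocation bitmap instead of unconditionally enabling every vector, so MSIs that were never allocated stay masked after resume. A hedged sketch of that idiom; FOO_MSI_EN(), foo_writel() and FOO_NUM_MSI are placeholders for the controller's 32-bit enable registers, MMIO write helper and vector count:

/* Hedged sketch: mirror the allocation bitmap into 32-bit enable regs. */
static void foo_restore_msi_state(struct foo_pcie *pcie)
{
	u32 enable[FOO_NUM_MSI / 32];
	unsigned int i;

	bitmap_to_arr32(enable, pcie->msi.used, FOO_NUM_MSI);
	for (i = 0; i < ARRAY_SIZE(enable); i++)
		foo_writel(pcie, enable[i], FOO_MSI_EN(i));
}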


@ -26,6 +26,7 @@
/* Bridge core config registers */
#define BRCFG_PCIE_RX0 0x00000000
#define BRCFG_PCIE_RX1 0x00000004
#define BRCFG_INTERRUPT 0x00000010
#define BRCFG_PCIE_RX_MSG_FILTER 0x00000020
@ -128,6 +129,7 @@
#define NWL_ECAM_VALUE_DEFAULT 12
#define CFG_DMA_REG_BAR GENMASK(2, 0)
#define CFG_PCIE_CACHE GENMASK(7, 0)
#define INT_PCI_MSI_NR (2 * 32)
@ -675,6 +677,11 @@ static int nwl_pcie_bridge_init(struct nwl_pcie *pcie)
nwl_bridge_writel(pcie, CFG_ENABLE_MSG_FILTER_MASK,
BRCFG_PCIE_RX_MSG_FILTER);
/* This routes the PCIe DMA traffic to go through the CCI path */
if (of_dma_is_coherent(dev->of_node))
nwl_bridge_writel(pcie, nwl_bridge_readl(pcie, BRCFG_PCIE_RX1) |
CFG_PCIE_CACHE, BRCFG_PCIE_RX1);
err = nwl_wait_for_link(pcie);
if (err)
return err;


@ -93,25 +93,23 @@
/**
* struct xilinx_pcie_port - PCIe port information
* @reg_base: IO Mapped Register Base
* @irq: Interrupt number
* @msi_pages: MSI pages
* @dev: Device pointer
* @msi_map: Bitmap of allocated MSIs
* @map_lock: Mutex protecting the MSI allocation
* @msi_domain: MSI IRQ domain pointer
* @leg_domain: Legacy IRQ domain pointer
* @resources: Bus Resources
*/
struct xilinx_pcie_port {
void __iomem *reg_base;
u32 irq;
unsigned long msi_pages;
struct device *dev;
unsigned long msi_map[BITS_TO_LONGS(XILINX_NUM_MSI_IRQS)];
struct mutex map_lock;
struct irq_domain *msi_domain;
struct irq_domain *leg_domain;
struct list_head resources;
};
static DECLARE_BITMAP(msi_irq_in_use, XILINX_NUM_MSI_IRQS);
static inline u32 pcie_read(struct xilinx_pcie_port *port, u32 reg)
{
return readl(port->reg_base + reg);
@ -196,151 +194,118 @@ static struct pci_ops xilinx_pcie_ops = {
/* MSI functions */
/**
* xilinx_pcie_destroy_msi - Free MSI number
* @irq: IRQ to be freed
*/
static void xilinx_pcie_destroy_msi(unsigned int irq)
static void xilinx_msi_top_irq_ack(struct irq_data *d)
{
struct msi_desc *msi;
struct xilinx_pcie_port *port;
struct irq_data *d = irq_get_irq_data(irq);
irq_hw_number_t hwirq = irqd_to_hwirq(d);
if (!test_bit(hwirq, msi_irq_in_use)) {
msi = irq_get_msi_desc(irq);
port = msi_desc_to_pci_sysdata(msi);
dev_err(port->dev, "Trying to free unused MSI#%d\n", irq);
} else {
clear_bit(hwirq, msi_irq_in_use);
}
/*
* xilinx_pcie_intr_handler() will have performed the Ack.
* Eventually, this should be fixed and the Ack be moved in
* the respective callbacks for INTx and MSI.
*/
}
/**
* xilinx_pcie_assign_msi - Allocate MSI number
*
* Return: A valid IRQ on success and error value on failure.
*/
static int xilinx_pcie_assign_msi(void)
{
int pos;
static struct irq_chip xilinx_msi_top_chip = {
.name = "PCIe MSI",
.irq_ack = xilinx_msi_top_irq_ack,
};
pos = find_first_zero_bit(msi_irq_in_use, XILINX_NUM_MSI_IRQS);
if (pos < XILINX_NUM_MSI_IRQS)
set_bit(pos, msi_irq_in_use);
else
static int xilinx_msi_set_affinity(struct irq_data *d, const struct cpumask *mask, bool force)
{
return -EINVAL;
}
static void xilinx_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
{
struct xilinx_pcie_port *pcie = irq_data_get_irq_chip_data(data);
phys_addr_t pa = ALIGN_DOWN(virt_to_phys(pcie), SZ_4K);
msg->address_lo = lower_32_bits(pa);
msg->address_hi = upper_32_bits(pa);
msg->data = data->hwirq;
}
static struct irq_chip xilinx_msi_bottom_chip = {
.name = "Xilinx MSI",
.irq_set_affinity = xilinx_msi_set_affinity,
.irq_compose_msi_msg = xilinx_compose_msi_msg,
};
static int xilinx_msi_domain_alloc(struct irq_domain *domain, unsigned int virq,
unsigned int nr_irqs, void *args)
{
struct xilinx_pcie_port *port = domain->host_data;
int hwirq, i;
mutex_lock(&port->map_lock);
hwirq = bitmap_find_free_region(port->msi_map, XILINX_NUM_MSI_IRQS, order_base_2(nr_irqs));
mutex_unlock(&port->map_lock);
if (hwirq < 0)
return -ENOSPC;
return pos;
}
/**
* xilinx_msi_teardown_irq - Destroy the MSI
* @chip: MSI Chip descriptor
* @irq: MSI IRQ to destroy
*/
static void xilinx_msi_teardown_irq(struct msi_controller *chip,
unsigned int irq)
{
xilinx_pcie_destroy_msi(irq);
irq_dispose_mapping(irq);
}
/**
* xilinx_pcie_msi_setup_irq - Setup MSI request
* @chip: MSI chip pointer
* @pdev: PCIe device pointer
* @desc: MSI descriptor pointer
*
* Return: '0' on success and error value on failure
*/
static int xilinx_pcie_msi_setup_irq(struct msi_controller *chip,
struct pci_dev *pdev,
struct msi_desc *desc)
{
struct xilinx_pcie_port *port = pdev->bus->sysdata;
unsigned int irq;
int hwirq;
struct msi_msg msg;
phys_addr_t msg_addr;
hwirq = xilinx_pcie_assign_msi();
if (hwirq < 0)
return hwirq;
irq = irq_create_mapping(port->msi_domain, hwirq);
if (!irq)
return -EINVAL;
irq_set_msi_desc(irq, desc);
msg_addr = virt_to_phys((void *)port->msi_pages);
msg.address_hi = 0;
msg.address_lo = msg_addr;
msg.data = irq;
pci_write_msi_msg(irq, &msg);
for (i = 0; i < nr_irqs; i++)
irq_domain_set_info(domain, virq + i, hwirq + i,
&xilinx_msi_bottom_chip, domain->host_data,
handle_edge_irq, NULL, NULL);
return 0;
}
/* MSI Chip Descriptor */
static struct msi_controller xilinx_pcie_msi_chip = {
.setup_irq = xilinx_pcie_msi_setup_irq,
.teardown_irq = xilinx_msi_teardown_irq,
};
/* HW Interrupt Chip Descriptor */
static struct irq_chip xilinx_msi_irq_chip = {
.name = "Xilinx PCIe MSI",
.irq_enable = pci_msi_unmask_irq,
.irq_disable = pci_msi_mask_irq,
.irq_mask = pci_msi_mask_irq,
.irq_unmask = pci_msi_unmask_irq,
};
/**
* xilinx_pcie_msi_map - Set the handler for the MSI and mark IRQ as valid
* @domain: IRQ domain
* @irq: Virtual IRQ number
* @hwirq: HW interrupt number
*
* Return: Always returns 0.
*/
static int xilinx_pcie_msi_map(struct irq_domain *domain, unsigned int irq,
irq_hw_number_t hwirq)
static void xilinx_msi_domain_free(struct irq_domain *domain, unsigned int virq,
unsigned int nr_irqs)
{
irq_set_chip_and_handler(irq, &xilinx_msi_irq_chip, handle_simple_irq);
irq_set_chip_data(irq, domain->host_data);
struct irq_data *d = irq_domain_get_irq_data(domain, virq);
struct xilinx_pcie_port *port = domain->host_data;
return 0;
mutex_lock(&port->map_lock);
bitmap_release_region(port->msi_map, d->hwirq, order_base_2(nr_irqs));
mutex_unlock(&port->map_lock);
}
/* IRQ Domain operations */
static const struct irq_domain_ops msi_domain_ops = {
.map = xilinx_pcie_msi_map,
static const struct irq_domain_ops xilinx_msi_domain_ops = {
.alloc = xilinx_msi_domain_alloc,
.free = xilinx_msi_domain_free,
};
/**
* xilinx_pcie_enable_msi - Enable MSI support
* @port: PCIe port information
*/
static int xilinx_pcie_enable_msi(struct xilinx_pcie_port *port)
{
phys_addr_t msg_addr;
static struct msi_domain_info xilinx_msi_info = {
.flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS),
.chip = &xilinx_msi_top_chip,
};
port->msi_pages = __get_free_pages(GFP_KERNEL, 0);
if (!port->msi_pages)
static int xilinx_allocate_msi_domains(struct xilinx_pcie_port *pcie)
{
struct fwnode_handle *fwnode = dev_fwnode(pcie->dev);
struct irq_domain *parent;
parent = irq_domain_create_linear(fwnode, XILINX_NUM_MSI_IRQS,
&xilinx_msi_domain_ops, pcie);
if (!parent) {
dev_err(pcie->dev, "failed to create IRQ domain\n");
return -ENOMEM;
}
irq_domain_update_bus_token(parent, DOMAIN_BUS_NEXUS);
msg_addr = virt_to_phys((void *)port->msi_pages);
pcie_write(port, 0x0, XILINX_PCIE_REG_MSIBASE1);
pcie_write(port, msg_addr, XILINX_PCIE_REG_MSIBASE2);
pcie->msi_domain = pci_msi_create_irq_domain(fwnode, &xilinx_msi_info, parent);
if (!pcie->msi_domain) {
dev_err(pcie->dev, "failed to create MSI domain\n");
irq_domain_remove(parent);
return -ENOMEM;
}
return 0;
}
static void xilinx_free_msi_domains(struct xilinx_pcie_port *pcie)
{
struct irq_domain *parent = pcie->msi_domain->parent;
irq_domain_remove(pcie->msi_domain);
irq_domain_remove(parent);
}
/* INTx Functions */
/**
@ -420,6 +385,8 @@ static irqreturn_t xilinx_pcie_intr_handler(int irq, void *data)
}
if (status & (XILINX_PCIE_INTR_INTX | XILINX_PCIE_INTR_MSI)) {
unsigned int irq;
val = pcie_read(port, XILINX_PCIE_REG_RPIFR1);
/* Check whether interrupt valid */
@ -432,20 +399,19 @@ static irqreturn_t xilinx_pcie_intr_handler(int irq, void *data)
if (val & XILINX_PCIE_RPIFR1_MSI_INTR) {
val = pcie_read(port, XILINX_PCIE_REG_RPIFR2) &
XILINX_PCIE_RPIFR2_MSG_DATA;
irq = irq_find_mapping(port->msi_domain->parent, val);
} else {
val = (val & XILINX_PCIE_RPIFR1_INTR_MASK) >>
XILINX_PCIE_RPIFR1_INTR_SHIFT;
val = irq_find_mapping(port->leg_domain, val);
irq = irq_find_mapping(port->leg_domain, val);
}
/* Clear interrupt FIFO register 1 */
pcie_write(port, XILINX_PCIE_RPIFR1_ALL_MASK,
XILINX_PCIE_REG_RPIFR1);
/* Handle the interrupt */
if (IS_ENABLED(CONFIG_PCI_MSI) ||
!(val & XILINX_PCIE_RPIFR1_MSI_INTR))
generic_handle_irq(val);
if (irq)
generic_handle_irq(irq);
}
if (status & XILINX_PCIE_INTR_SLV_UNSUPP)
@ -491,12 +457,11 @@ static irqreturn_t xilinx_pcie_intr_handler(int irq, void *data)
static int xilinx_pcie_init_irq_domain(struct xilinx_pcie_port *port)
{
struct device *dev = port->dev;
struct device_node *node = dev->of_node;
struct device_node *pcie_intc_node;
int ret;
/* Setup INTx */
pcie_intc_node = of_get_next_child(node, NULL);
pcie_intc_node = of_get_next_child(dev->of_node, NULL);
if (!pcie_intc_node) {
dev_err(dev, "No PCIe Intc node found\n");
return -ENODEV;
@ -513,18 +478,14 @@ static int xilinx_pcie_init_irq_domain(struct xilinx_pcie_port *port)
/* Setup MSI */
if (IS_ENABLED(CONFIG_PCI_MSI)) {
port->msi_domain = irq_domain_add_linear(node,
XILINX_NUM_MSI_IRQS,
&msi_domain_ops,
&xilinx_pcie_msi_chip);
if (!port->msi_domain) {
dev_err(dev, "Failed to get a MSI IRQ domain\n");
return -ENODEV;
}
phys_addr_t pa = ALIGN_DOWN(virt_to_phys(port), SZ_4K);
ret = xilinx_pcie_enable_msi(port);
ret = xilinx_allocate_msi_domains(port);
if (ret)
return ret;
pcie_write(port, upper_32_bits(pa), XILINX_PCIE_REG_MSIBASE1);
pcie_write(port, lower_32_bits(pa), XILINX_PCIE_REG_MSIBASE2);
}
return 0;
@ -572,6 +533,7 @@ static int xilinx_pcie_parse_dt(struct xilinx_pcie_port *port)
struct device *dev = port->dev;
struct device_node *node = dev->of_node;
struct resource regs;
unsigned int irq;
int err;
err = of_address_to_resource(node, 0, &regs);
@ -584,12 +546,12 @@ static int xilinx_pcie_parse_dt(struct xilinx_pcie_port *port)
if (IS_ERR(port->reg_base))
return PTR_ERR(port->reg_base);
port->irq = irq_of_parse_and_map(node, 0);
err = devm_request_irq(dev, port->irq, xilinx_pcie_intr_handler,
irq = irq_of_parse_and_map(node, 0);
err = devm_request_irq(dev, irq, xilinx_pcie_intr_handler,
IRQF_SHARED | IRQF_NO_THREAD,
"xilinx-pcie", port);
if (err) {
dev_err(dev, "unable to request irq %d\n", port->irq);
dev_err(dev, "unable to request irq %d\n", irq);
return err;
}
@ -617,7 +579,7 @@ static int xilinx_pcie_probe(struct platform_device *pdev)
return -ENODEV;
port = pci_host_bridge_priv(bridge);
mutex_init(&port->map_lock);
port->dev = dev;
err = xilinx_pcie_parse_dt(port);
@ -637,11 +599,11 @@ static int xilinx_pcie_probe(struct platform_device *pdev)
bridge->sysdata = port;
bridge->ops = &xilinx_pcie_ops;
#ifdef CONFIG_PCI_MSI
xilinx_pcie_msi_chip.dev = dev;
bridge->msi = &xilinx_pcie_msi_chip;
#endif
return pci_host_probe(bridge);
err = pci_host_probe(bridge);
if (err)
xilinx_free_msi_domains(port);
return err;
}
static const struct of_device_id xilinx_pcie_of_match[] = {


@ -28,6 +28,7 @@
#define BUS_RESTRICT_CAP(vmcap) (vmcap & 0x1)
#define PCI_REG_VMCONFIG 0x44
#define BUS_RESTRICT_CFG(vmcfg) ((vmcfg >> 8) & 0x3)
#define VMCONFIG_MSI_REMAP 0x2
#define PCI_REG_VMLOCK 0x70
#define MB2_SHADOW_EN(vmlock) (vmlock & 0x2)
@ -59,6 +60,13 @@ enum vmd_features {
* be used for MSI remapping
*/
VMD_FEAT_OFFSET_FIRST_VECTOR = (1 << 3),
/*
* Device can bypass remapping MSI-X transactions into its MSI-X table,
* avoiding the requirement of a VMD MSI domain for child device
* interrupt handling.
*/
VMD_FEAT_CAN_BYPASS_MSI_REMAP = (1 << 4),
};
/*
@ -306,6 +314,16 @@ static struct msi_domain_info vmd_msi_domain_info = {
.chip = &vmd_msi_controller,
};
static void vmd_set_msi_remapping(struct vmd_dev *vmd, bool enable)
{
u16 reg;
pci_read_config_word(vmd->dev, PCI_REG_VMCONFIG, &reg);
reg = enable ? (reg & ~VMCONFIG_MSI_REMAP) :
(reg | VMCONFIG_MSI_REMAP);
pci_write_config_word(vmd->dev, PCI_REG_VMCONFIG, reg);
}
static int vmd_create_irq_domain(struct vmd_dev *vmd)
{
struct fwnode_handle *fn;
@ -325,6 +343,13 @@ static int vmd_create_irq_domain(struct vmd_dev *vmd)
static void vmd_remove_irq_domain(struct vmd_dev *vmd)
{
/*
* Some production BIOS won't enable remapping between soft reboots.
* Ensure remapping is restored before unloading the driver.
*/
if (!vmd->msix_count)
vmd_set_msi_remapping(vmd, true);
if (vmd->irq_domain) {
struct fwnode_handle *fn = vmd->irq_domain->fwnode;
@ -679,15 +704,32 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
sd->node = pcibus_to_node(vmd->dev->bus);
ret = vmd_create_irq_domain(vmd);
if (ret)
return ret;
/*
* Override the irq domain bus token so the domain can be distinguished
* from a regular PCI/MSI domain.
* Currently MSI remapping must be enabled in guest passthrough mode
* due to some missing interrupt remapping plumbing. This is probably
* acceptable because the guest is usually CPU-limited and MSI
* remapping doesn't become a performance bottleneck.
*/
irq_domain_update_bus_token(vmd->irq_domain, DOMAIN_BUS_VMD_MSI);
if (!(features & VMD_FEAT_CAN_BYPASS_MSI_REMAP) ||
offset[0] || offset[1]) {
ret = vmd_alloc_irqs(vmd);
if (ret)
return ret;
vmd_set_msi_remapping(vmd, true);
ret = vmd_create_irq_domain(vmd);
if (ret)
return ret;
/*
* Override the IRQ domain bus token so the domain can be
* distinguished from a regular PCI/MSI domain.
*/
irq_domain_update_bus_token(vmd->irq_domain, DOMAIN_BUS_VMD_MSI);
} else {
vmd_set_msi_remapping(vmd, false);
}
pci_add_resource(&resources, &vmd->resources[0]);
pci_add_resource_offset(&resources, &vmd->resources[1], offset[0]);
@ -753,10 +795,6 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
if (features & VMD_FEAT_OFFSET_FIRST_VECTOR)
vmd->first_vec = 1;
err = vmd_alloc_irqs(vmd);
if (err)
return err;
spin_lock_init(&vmd->cfg_lock);
pci_set_drvdata(dev, vmd);
err = vmd_enable_domain(vmd, features);
@ -825,7 +863,8 @@ static const struct pci_device_id vmd_ids[] = {
.driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP,},
{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_28C0),
.driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW |
VMD_FEAT_HAS_BUS_RESTRICTIONS,},
VMD_FEAT_HAS_BUS_RESTRICTIONS |
VMD_FEAT_CAN_BYPASS_MSI_REMAP,},
{PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x467f),
.driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP |
VMD_FEAT_HAS_BUS_RESTRICTIONS |


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
/**
/*
* Endpoint Function Driver to implement Non-Transparent Bridge functionality
*
* Copyright (C) 2020 Texas Instruments
@ -696,7 +696,8 @@ static void epf_ntb_cmd_handler(struct work_struct *work)
/**
* epf_ntb_peer_spad_bar_clear() - Clear Peer Scratchpad BAR
* @ntb: NTB device that facilitates communication between HOST1 and HOST2
* @ntb_epc: EPC associated with one of the HOST which holds peer's outbound
* address.
*
*+-----------------+------->+------------------+ +-----------------+
*| BAR0 | | CONFIG REGION | | BAR0 |
@ -740,6 +741,7 @@ static void epf_ntb_peer_spad_bar_clear(struct epf_ntb_epc *ntb_epc)
/**
* epf_ntb_peer_spad_bar_set() - Set peer scratchpad BAR
* @ntb: NTB device that facilitates communication between HOST1 and HOST2
* @type: PRIMARY interface or SECONDARY interface
*
*+-----------------+------->+------------------+ +-----------------+
*| BAR0 | | CONFIG REGION | | BAR0 |
@ -808,7 +810,8 @@ static int epf_ntb_peer_spad_bar_set(struct epf_ntb *ntb,
/**
* epf_ntb_config_sspad_bar_clear() - Clear Config + Self scratchpad BAR
* @ntb: NTB device that facilitates communication between HOST1 and HOST2
* @ntb_epc: EPC associated with one of the HOST which holds peer's outbound
* address.
*
* +-----------------+------->+------------------+ +-----------------+
* | BAR0 | | CONFIG REGION | | BAR0 |
@ -851,7 +854,8 @@ static void epf_ntb_config_sspad_bar_clear(struct epf_ntb_epc *ntb_epc)
/**
* epf_ntb_config_sspad_bar_set() - Set Config + Self scratchpad BAR
* @ntb: NTB device that facilitates communication between HOST1 and HOST2
* @ntb_epc: EPC associated with one of the HOST which holds peer's outbound
* address.
*
* +-----------------+------->+------------------+ +-----------------+
* | BAR0 | | CONFIG REGION | | BAR0 |
@ -1312,6 +1316,7 @@ static int epf_ntb_configure_interrupt(struct epf_ntb *ntb,
/**
* epf_ntb_alloc_peer_mem() - Allocate memory in peer's outbound address space
* @dev: The PCI device.
* @ntb_epc: EPC associated with one of the HOST whose BAR holds peer's outbound
* address
* @bar: BAR of @ntb_epc in for which memory has to be allocated (could be
@ -1660,7 +1665,6 @@ static int epf_ntb_init_epc_bar_interface(struct epf_ntb *ntb,
* epf_ntb_init_epc_bar() - Identify BARs to be used for each of the NTB
* constructs (scratchpad region, doorbell, memorywindow)
* @ntb: NTB device that facilitates communication between HOST1 and HOST2
* @type: PRIMARY interface or SECONDARY interface
*
* Wrapper to epf_ntb_init_epc_bar_interface() to identify the free BARs
* to be used for each of BAR_CONFIG, BAR_PEER_SPAD, BAR_DB_MW1, BAR_MW2,
@ -2037,6 +2041,8 @@ static const struct config_item_type ntb_group_type = {
/**
* epf_ntb_add_cfs() - Add configfs directory specific to NTB
* @epf: NTB endpoint function device
* @group: A pointer to the config_group structure referencing a group of
* config_items of a specific type that belong to a specific sub-system.
*
* Add configfs directory specific to NTB. This directory will hold
* NTB specific properties like db_count, spad_count, num_mws etc.,


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
/**
/*
* Test driver to test endpoint functionality
*
* Copyright (C) 2017 Texas Instruments
@ -833,15 +833,18 @@ static int pci_epf_test_bind(struct pci_epf *epf)
return -EINVAL;
epc_features = pci_epc_get_features(epc, epf->func_no);
if (epc_features) {
linkup_notifier = epc_features->linkup_notifier;
core_init_notifier = epc_features->core_init_notifier;
test_reg_bar = pci_epc_get_first_free_bar(epc_features);
if (test_reg_bar < 0)
return -EINVAL;
pci_epf_configure_bar(epf, epc_features);
if (!epc_features) {
dev_err(&epf->dev, "epc_features not implemented\n");
return -EOPNOTSUPP;
}
linkup_notifier = epc_features->linkup_notifier;
core_init_notifier = epc_features->core_init_notifier;
test_reg_bar = pci_epc_get_first_free_bar(epc_features);
if (test_reg_bar < 0)
return -EINVAL;
pci_epf_configure_bar(epf, epc_features);
epf_test->test_reg_bar = test_reg_bar;
epf_test->epc_features = epc_features;
@ -922,6 +925,7 @@ static int __init pci_epf_test_init(void)
ret = pci_epf_register_driver(&test_driver);
if (ret) {
destroy_workqueue(kpcitest_workqueue);
pr_err("Failed to register pci epf test driver --> %d\n", ret);
return ret;
}
@ -932,6 +936,8 @@ module_init(pci_epf_test_init);
static void __exit pci_epf_test_exit(void)
{
if (kpcitest_workqueue)
destroy_workqueue(kpcitest_workqueue);
pci_epf_unregister_driver(&test_driver);
}
module_exit(pci_epf_test_exit);


@ -594,6 +594,8 @@ EXPORT_SYMBOL_GPL(pci_epc_add_epf);
* pci_epc_remove_epf() - remove PCI endpoint function from endpoint controller
* @epc: the EPC device from which the endpoint function should be removed
* @epf: the endpoint function to be removed
* @type: identifies if the EPC is connected to the primary or secondary
* interface of EPF
*
* Invoke to remove PCI endpoint function from the endpoint controller.
*/


@ -113,7 +113,7 @@ EXPORT_SYMBOL_GPL(pci_epf_bind);
void pci_epf_free_space(struct pci_epf *epf, void *addr, enum pci_barno bar,
enum pci_epc_interface_type type)
{
struct device *dev = epf->epc->dev.parent;
struct device *dev;
struct pci_epf_bar *epf_bar;
struct pci_epc *epc;


@ -157,7 +157,7 @@ static int pcihp_is_ejectable(acpi_handle handle)
}
/**
* acpi_pcihp_check_ejectable - check if handle is ejectable ACPI PCI slot
* acpi_pci_check_ejectable - check if handle is ejectable ACPI PCI slot
* @pbus: the PCI bus of the PCI slot corresponding to 'handle'
* @handle: ACPI handle to check
*


@ -148,8 +148,7 @@ static inline struct acpiphp_root_context *to_acpiphp_root_context(struct acpi_h
* ACPI has no generic method of setting/getting attention status
* this allows for device specific driver registration
*/
struct acpiphp_attention_info
{
struct acpiphp_attention_info {
int (*set_attn)(struct hotplug_slot *slot, u8 status);
int (*get_attn)(struct hotplug_slot *slot, u8 *status);
struct module *owner;


@ -533,6 +533,7 @@ static void enable_slot(struct acpiphp_slot *slot, bool bridge)
slot->flags &= ~SLOT_ENABLED;
continue;
}
pci_dev_put(dev);
}
}


@ -80,7 +80,7 @@ static u8 evbuffer[1024];
static void __iomem *compaq_int15_entry_point;
/* lock for ordering int15_bios_call() */
static spinlock_t int15_lock;
static DEFINE_SPINLOCK(int15_lock);
/* This is a series of functions that deal with
@ -415,9 +415,6 @@ void compaq_nvram_init(void __iomem *rom_start)
compaq_int15_entry_point = (rom_start + ROM_INT15_PHY_ADDR - ROM_PHY_ADDR);
dbg("int15 entry = %p\n", compaq_int15_entry_point);
/* initialize our int15 lock */
spin_lock_init(&int15_lock);
}


@ -174,11 +174,6 @@ static inline u8 shpc_readb(struct controller *ctrl, int reg)
return readb(ctrl->creg + reg);
}
static inline void shpc_writeb(struct controller *ctrl, int reg, u8 val)
{
writeb(val, ctrl->creg + reg);
}
static inline u16 shpc_readw(struct controller *ctrl, int reg)
{
return readw(ctrl->creg + reg);


@ -64,39 +64,18 @@ static void pci_msi_teardown_msi_irqs(struct pci_dev *dev)
/* Arch hooks */
int __weak arch_setup_msi_irq(struct pci_dev *dev, struct msi_desc *desc)
{
struct msi_controller *chip = dev->bus->msi;
int err;
if (!chip || !chip->setup_irq)
return -EINVAL;
err = chip->setup_irq(chip, dev, desc);
if (err < 0)
return err;
irq_set_chip_data(desc->irq, chip);
return 0;
return -EINVAL;
}
void __weak arch_teardown_msi_irq(unsigned int irq)
{
struct msi_controller *chip = irq_get_chip_data(irq);
if (!chip || !chip->teardown_irq)
return;
chip->teardown_irq(chip, irq);
}
int __weak arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
{
struct msi_controller *chip = dev->bus->msi;
struct msi_desc *entry;
int ret;
if (chip && chip->setup_irqs)
return chip->setup_irqs(chip, dev, nvec, type);
/*
* If an architecture wants to support multiple MSI, it needs to
* override arch_setup_msi_irqs()
@ -115,11 +94,7 @@ int __weak arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
return 0;
}
/*
* We have a default implementation available as a separate non-weak
* function, as it is used by the Xen x86 PCI code
*/
void default_teardown_msi_irqs(struct pci_dev *dev)
void __weak arch_teardown_msi_irqs(struct pci_dev *dev)
{
int i;
struct msi_desc *entry;
@ -129,11 +104,6 @@ void default_teardown_msi_irqs(struct pci_dev *dev)
for (i = 0; i < entry->nvec_used; i++)
arch_teardown_msi_irq(entry->irq + i);
}
void __weak arch_teardown_msi_irqs(struct pci_dev *dev)
{
return default_teardown_msi_irqs(dev);
}
#endif /* CONFIG_PCI_MSI_ARCH_FALLBACKS */
static void default_restore_msi_irq(struct pci_dev *dev, int irq)
@ -901,8 +871,15 @@ static int pci_msi_supported(struct pci_dev *dev, int nvec)
* Any bridge which does NOT route MSI transactions from its
* secondary bus to its primary bus must set NO_MSI flag on
* the secondary pci_bus.
* We expect only arch-specific PCI host bus controller driver
* or quirks for specific PCI bridges to be setting NO_MSI.
*
* The NO_MSI flag can either be set directly by:
* - arch-specific PCI host bus controller drivers (deprecated)
* - quirks for specific PCI bridges
*
* or indirectly by platform-specific PCI host bridge drivers by
* advertising the 'msi_domain' property, which results in
* the NO_MSI flag when no MSI domain is found for this bridge
* at probe time.
*/
for (bus = dev->bus; bus; bus = bus->parent)
if (bus->bus_flags & PCI_BUS_FLAGS_NO_MSI)
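The "indirectly" case described in the new comment is wired up later in this series: pci_register_host_bridge() sets PCI_BUS_FLAGS_NO_MSI on the root bus when a bridge that declared its reliance on MSI domains ends up without an MSI irqdomain. A minimal sketch of a host bridge driver opting in (driver and function names are illustrative, not from this series; bridge setup and error handling are elided):

#include <linux/pci.h>
#include <linux/platform_device.h>

static int example_host_probe(struct platform_device *pdev)
{
	struct pci_host_bridge *bridge;

	bridge = devm_pci_alloc_host_bridge(&pdev->dev, 0);
	if (!bridge)
		return -ENOMEM;

	/* Declare reliance on MSI domains (the new msi_domain bit). */
	bridge->msi_domain = true;

	/*
	 * If no MSI irqdomain is attached to the root bus at probe time,
	 * the core now flags the bus PCI_BUS_FLAGS_NO_MSI instead of
	 * letting MSI allocation silently misbehave.
	 */
	return pci_host_probe(bridge);
}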


@ -190,10 +190,18 @@ int of_pci_parse_bus_range(struct device_node *node, struct resource *res)
EXPORT_SYMBOL_GPL(of_pci_parse_bus_range);
/**
* This function will try to obtain the host bridge domain number by
* finding a property called "linux,pci-domain" of the given device node.
* of_get_pci_domain_nr - Find the host bridge domain number
* of the given device node.
* @node: Device tree node with the domain information.
*
* @node: device tree node with the domain information
* This function will try to obtain the host bridge domain number by finding
* a property called "linux,pci-domain" of the given device node.
*
* Return:
* * > 0 - On success, an associated domain number.
* * -EINVAL - The property "linux,pci-domain" does not exist.
* * -ENODATA - The "linux,pci-domain" property does not have a value.
* * -EOVERFLOW - Invalid "linux,pci-domain" property value.
*
* Returns the associated domain number from DT in the range [0-0xffff], or
* a negative value if the required property is not found.
@ -585,10 +593,16 @@ int devm_of_pci_bridge_init(struct device *dev, struct pci_host_bridge *bridge)
#endif /* CONFIG_PCI */
/**
* of_pci_get_max_link_speed - Find the maximum link speed of the given device node.
* @node: Device tree node with the maximum link speed information.
*
* This function will try to find the limitation of link speed by finding
* a property called "max-link-speed" of the given device node.
*
* @node: device tree node with the max link speed information
* Return:
* * > 0 - On success, a maximum link speed.
* * -EINVAL - Invalid "max-link-speed" property value, or failure to access
* the property of the device tree node.
*
* Returns the associated max link speed from DT, or a negative value if the
* required property is not found or is invalid.
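For reference, a hedged sketch of a caller that consumes the return conventions documented above for both helpers (names are illustrative; of_pci_get_max_link_speed() is declared in drivers/pci/pci.h, so the relative include assumes a driver under drivers/pci/controller/):

#include <linux/device.h>
#include <linux/of.h>
#include <linux/of_pci.h>	/* of_get_pci_domain_nr() */
#include "../pci.h"		/* of_pci_get_max_link_speed() */

static void example_parse_dt(struct device *dev, struct device_node *node)
{
	int domain = of_get_pci_domain_nr(node);
	int speed = of_pci_get_max_link_speed(node);

	if (domain < 0)		/* -EINVAL, -ENODATA or -EOVERFLOW */
		dev_dbg(dev, "no usable \"linux,pci-domain\" property\n");

	if (speed < 0)		/* -EINVAL */
		dev_dbg(dev, "no usable \"max-link-speed\" property, not limiting link\n");
	else
		dev_dbg(dev, "link limited to Gen%d\n", speed);
}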


@ -1021,7 +1021,7 @@ static int acpi_pci_set_power_state(struct pci_dev *dev, pci_power_t state)
if (!error)
pci_dbg(dev, "power state changed by ACPI to %s\n",
acpi_power_state_string(state_conv[state]));
acpi_power_state_string(adev->power.state));
return error;
}


@ -33,6 +33,21 @@
#include <linux/pci-acpi.h>
#include "pci.h"
static bool device_has_acpi_name(struct device *dev)
{
#ifdef CONFIG_ACPI
acpi_handle handle = ACPI_HANDLE(dev);
if (!handle)
return false;
return acpi_check_dsm(handle, &pci_acpi_dsm_guid, 0x2,
1 << DSM_PCI_DEVICE_NAME);
#else
return false;
#endif
}
#ifdef CONFIG_DMI
enum smbios_attr_enum {
SMBIOS_ATTR_NONE = 0,
@ -45,13 +60,9 @@ static size_t find_smbios_instance_string(struct pci_dev *pdev, char *buf,
{
const struct dmi_device *dmi;
struct dmi_dev_onboard *donboard;
int domain_nr;
int bus;
int devfn;
domain_nr = pci_domain_nr(pdev->bus);
bus = pdev->bus->number;
devfn = pdev->devfn;
int domain_nr = pci_domain_nr(pdev->bus);
int bus = pdev->bus->number;
int devfn = pdev->devfn;
dmi = NULL;
while ((dmi = dmi_find_device(DMI_DEV_TYPE_DEV_ONBOARD,
@ -62,13 +73,11 @@ static size_t find_smbios_instance_string(struct pci_dev *pdev, char *buf,
donboard->devfn == devfn) {
if (buf) {
if (attribute == SMBIOS_ATTR_INSTANCE_SHOW)
return scnprintf(buf, PAGE_SIZE,
"%d\n",
donboard->instance);
return sysfs_emit(buf, "%d\n",
donboard->instance);
else if (attribute == SMBIOS_ATTR_LABEL_SHOW)
return scnprintf(buf, PAGE_SIZE,
"%s\n",
dmi->name);
return sysfs_emit(buf, "%s\n",
dmi->name);
}
return strlen(dmi->name);
}
@ -76,78 +85,52 @@ static size_t find_smbios_instance_string(struct pci_dev *pdev, char *buf,
return 0;
}
static umode_t smbios_instance_string_exist(struct kobject *kobj,
struct attribute *attr, int n)
static ssize_t smbios_label_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct device *dev;
struct pci_dev *pdev;
dev = kobj_to_dev(kobj);
pdev = to_pci_dev(dev);
return find_smbios_instance_string(pdev, NULL, SMBIOS_ATTR_NONE) ?
S_IRUGO : 0;
}
static ssize_t smbioslabel_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct pci_dev *pdev;
pdev = to_pci_dev(dev);
struct pci_dev *pdev = to_pci_dev(dev);
return find_smbios_instance_string(pdev, buf,
SMBIOS_ATTR_LABEL_SHOW);
}
static struct device_attribute dev_attr_smbios_label = __ATTR(label, 0444,
smbios_label_show, NULL);
static ssize_t smbiosinstance_show(struct device *dev,
struct device_attribute *attr, char *buf)
static ssize_t index_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct pci_dev *pdev;
pdev = to_pci_dev(dev);
struct pci_dev *pdev = to_pci_dev(dev);
return find_smbios_instance_string(pdev, buf,
SMBIOS_ATTR_INSTANCE_SHOW);
}
static DEVICE_ATTR_RO(index);
static struct device_attribute smbios_attr_label = {
.attr = {.name = "label", .mode = 0444},
.show = smbioslabel_show,
};
static struct device_attribute smbios_attr_instance = {
.attr = {.name = "index", .mode = 0444},
.show = smbiosinstance_show,
};
static struct attribute *smbios_attributes[] = {
&smbios_attr_label.attr,
&smbios_attr_instance.attr,
static struct attribute *smbios_attrs[] = {
&dev_attr_smbios_label.attr,
&dev_attr_index.attr,
NULL,
};
static const struct attribute_group smbios_attr_group = {
.attrs = smbios_attributes,
.is_visible = smbios_instance_string_exist,
static umode_t smbios_attr_is_visible(struct kobject *kobj, struct attribute *a,
int n)
{
struct device *dev = kobj_to_dev(kobj);
struct pci_dev *pdev = to_pci_dev(dev);
if (device_has_acpi_name(dev))
return 0;
if (!find_smbios_instance_string(pdev, NULL, SMBIOS_ATTR_NONE))
return 0;
return a->mode;
}
const struct attribute_group pci_dev_smbios_attr_group = {
.attrs = smbios_attrs,
.is_visible = smbios_attr_is_visible,
};
static int pci_create_smbiosname_file(struct pci_dev *pdev)
{
return sysfs_create_group(&pdev->dev.kobj, &smbios_attr_group);
}
static void pci_remove_smbiosname_file(struct pci_dev *pdev)
{
sysfs_remove_group(&pdev->dev.kobj, &smbios_attr_group);
}
#else
static inline int pci_create_smbiosname_file(struct pci_dev *pdev)
{
return -1;
}
static inline void pci_remove_smbiosname_file(struct pci_dev *pdev)
{
}
#endif
#ifdef CONFIG_ACPI
@ -169,11 +152,10 @@ static void dsm_label_utf16s_to_utf8s(union acpi_object *obj, char *buf)
static int dsm_get_label(struct device *dev, char *buf,
enum acpi_attr_enum attr)
{
acpi_handle handle;
acpi_handle handle = ACPI_HANDLE(dev);
union acpi_object *obj, *tmp;
int len = -1;
handle = ACPI_HANDLE(dev);
if (!handle)
return -1;
@ -209,103 +191,39 @@ static int dsm_get_label(struct device *dev, char *buf,
return len;
}
static bool device_has_dsm(struct device *dev)
{
acpi_handle handle;
handle = ACPI_HANDLE(dev);
if (!handle)
return false;
return !!acpi_check_dsm(handle, &pci_acpi_dsm_guid, 0x2,
1 << DSM_PCI_DEVICE_NAME);
}
static umode_t acpi_index_string_exist(struct kobject *kobj,
struct attribute *attr, int n)
{
struct device *dev;
dev = kobj_to_dev(kobj);
if (device_has_dsm(dev))
return S_IRUGO;
return 0;
}
static ssize_t acpilabel_show(struct device *dev,
struct device_attribute *attr, char *buf)
static ssize_t label_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
return dsm_get_label(dev, buf, ACPI_ATTR_LABEL_SHOW);
}
static DEVICE_ATTR_RO(label);
static ssize_t acpiindex_show(struct device *dev,
static ssize_t acpi_index_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
return dsm_get_label(dev, buf, ACPI_ATTR_INDEX_SHOW);
}
static DEVICE_ATTR_RO(acpi_index);
static struct device_attribute acpi_attr_label = {
.attr = {.name = "label", .mode = 0444},
.show = acpilabel_show,
};
static struct device_attribute acpi_attr_index = {
.attr = {.name = "acpi_index", .mode = 0444},
.show = acpiindex_show,
};
static struct attribute *acpi_attributes[] = {
&acpi_attr_label.attr,
&acpi_attr_index.attr,
static struct attribute *acpi_attrs[] = {
&dev_attr_label.attr,
&dev_attr_acpi_index.attr,
NULL,
};
static const struct attribute_group acpi_attr_group = {
.attrs = acpi_attributes,
.is_visible = acpi_index_string_exist,
static umode_t acpi_attr_is_visible(struct kobject *kobj, struct attribute *a,
int n)
{
struct device *dev = kobj_to_dev(kobj);
if (!device_has_acpi_name(dev))
return 0;
return a->mode;
}
const struct attribute_group pci_dev_acpi_attr_group = {
.attrs = acpi_attrs,
.is_visible = acpi_attr_is_visible,
};
static int pci_create_acpi_index_label_files(struct pci_dev *pdev)
{
return sysfs_create_group(&pdev->dev.kobj, &acpi_attr_group);
}
static int pci_remove_acpi_index_label_files(struct pci_dev *pdev)
{
sysfs_remove_group(&pdev->dev.kobj, &acpi_attr_group);
return 0;
}
#else
static inline int pci_create_acpi_index_label_files(struct pci_dev *pdev)
{
return -1;
}
static inline int pci_remove_acpi_index_label_files(struct pci_dev *pdev)
{
return -1;
}
static inline bool device_has_dsm(struct device *dev)
{
return false;
}
#endif
void pci_create_firmware_label_files(struct pci_dev *pdev)
{
if (device_has_dsm(&pdev->dev))
pci_create_acpi_index_label_files(pdev);
else
pci_create_smbiosname_file(pdev);
}
void pci_remove_firmware_label_files(struct pci_dev *pdev)
{
if (device_has_dsm(&pdev->dev))
pci_remove_acpi_index_label_files(pdev);
else
pci_remove_smbiosname_file(pdev);
}


@ -39,7 +39,7 @@ field##_show(struct device *dev, struct device_attribute *attr, char *buf) \
struct pci_dev *pdev; \
\
pdev = to_pci_dev(dev); \
return sprintf(buf, format_string, pdev->field); \
return sysfs_emit(buf, format_string, pdev->field); \
} \
static DEVICE_ATTR_RO(field)
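Every sprintf()-to-sysfs_emit() conversion in this file has the same shape; as a standalone illustration (attribute name and format are chosen for the example, not added by this series):

#include <linux/device.h>
#include <linux/pci.h>
#include <linux/sysfs.h>

static ssize_t revision_show(struct device *dev, struct device_attribute *attr,
			     char *buf)
{
	struct pci_dev *pdev = to_pci_dev(dev);

	/*
	 * sysfs_emit() knows it is writing into a PAGE_SIZE sysfs buffer,
	 * warns on misuse and can never overrun it, which is why it is
	 * preferred over bare sprintf() in ->show() callbacks.
	 */
	return sysfs_emit(buf, "0x%02x\n", pdev->revision);
}
static DEVICE_ATTR_RO(revision);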
@ -56,7 +56,7 @@ static ssize_t broken_parity_status_show(struct device *dev,
char *buf)
{
struct pci_dev *pdev = to_pci_dev(dev);
return sprintf(buf, "%u\n", pdev->broken_parity_status);
return sysfs_emit(buf, "%u\n", pdev->broken_parity_status);
}
static ssize_t broken_parity_status_store(struct device *dev,
@ -129,7 +129,7 @@ static ssize_t power_state_show(struct device *dev,
{
struct pci_dev *pdev = to_pci_dev(dev);
return sprintf(buf, "%s\n", pci_power_name(pdev->current_state));
return sysfs_emit(buf, "%s\n", pci_power_name(pdev->current_state));
}
static DEVICE_ATTR_RO(power_state);
@ -138,10 +138,10 @@ static ssize_t resource_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct pci_dev *pci_dev = to_pci_dev(dev);
char *str = buf;
int i;
int max;
resource_size_t start, end;
size_t len = 0;
if (pci_dev->subordinate)
max = DEVICE_COUNT_RESOURCE;
@ -151,12 +151,12 @@ static ssize_t resource_show(struct device *dev, struct device_attribute *attr,
for (i = 0; i < max; i++) {
struct resource *res = &pci_dev->resource[i];
pci_resource_to_user(pci_dev, i, res, &start, &end);
str += sprintf(str, "0x%016llx 0x%016llx 0x%016llx\n",
(unsigned long long)start,
(unsigned long long)end,
(unsigned long long)res->flags);
len += sysfs_emit_at(buf, len, "0x%016llx 0x%016llx 0x%016llx\n",
(unsigned long long)start,
(unsigned long long)end,
(unsigned long long)res->flags);
}
return (str - buf);
return len;
}
static DEVICE_ATTR_RO(resource);
@ -165,8 +165,8 @@ static ssize_t max_link_speed_show(struct device *dev,
{
struct pci_dev *pdev = to_pci_dev(dev);
return sprintf(buf, "%s\n",
pci_speed_string(pcie_get_speed_cap(pdev)));
return sysfs_emit(buf, "%s\n",
pci_speed_string(pcie_get_speed_cap(pdev)));
}
static DEVICE_ATTR_RO(max_link_speed);
@ -175,7 +175,7 @@ static ssize_t max_link_width_show(struct device *dev,
{
struct pci_dev *pdev = to_pci_dev(dev);
return sprintf(buf, "%u\n", pcie_get_width_cap(pdev));
return sysfs_emit(buf, "%u\n", pcie_get_width_cap(pdev));
}
static DEVICE_ATTR_RO(max_link_width);
@ -193,7 +193,7 @@ static ssize_t current_link_speed_show(struct device *dev,
speed = pcie_link_speed[linkstat & PCI_EXP_LNKSTA_CLS];
return sprintf(buf, "%s\n", pci_speed_string(speed));
return sysfs_emit(buf, "%s\n", pci_speed_string(speed));
}
static DEVICE_ATTR_RO(current_link_speed);
@ -208,7 +208,7 @@ static ssize_t current_link_width_show(struct device *dev,
if (err)
return -EINVAL;
return sprintf(buf, "%u\n",
return sysfs_emit(buf, "%u\n",
(linkstat & PCI_EXP_LNKSTA_NLW) >> PCI_EXP_LNKSTA_NLW_SHIFT);
}
static DEVICE_ATTR_RO(current_link_width);
@ -225,7 +225,7 @@ static ssize_t secondary_bus_number_show(struct device *dev,
if (err)
return -EINVAL;
return sprintf(buf, "%u\n", sec_bus);
return sysfs_emit(buf, "%u\n", sec_bus);
}
static DEVICE_ATTR_RO(secondary_bus_number);
@ -241,7 +241,7 @@ static ssize_t subordinate_bus_number_show(struct device *dev,
if (err)
return -EINVAL;
return sprintf(buf, "%u\n", sub_bus);
return sysfs_emit(buf, "%u\n", sub_bus);
}
static DEVICE_ATTR_RO(subordinate_bus_number);
@ -251,7 +251,7 @@ static ssize_t ari_enabled_show(struct device *dev,
{
struct pci_dev *pci_dev = to_pci_dev(dev);
return sprintf(buf, "%u\n", pci_ari_enabled(pci_dev->bus));
return sysfs_emit(buf, "%u\n", pci_ari_enabled(pci_dev->bus));
}
static DEVICE_ATTR_RO(ari_enabled);
@ -260,11 +260,11 @@ static ssize_t modalias_show(struct device *dev, struct device_attribute *attr,
{
struct pci_dev *pci_dev = to_pci_dev(dev);
return sprintf(buf, "pci:v%08Xd%08Xsv%08Xsd%08Xbc%02Xsc%02Xi%02X\n",
pci_dev->vendor, pci_dev->device,
pci_dev->subsystem_vendor, pci_dev->subsystem_device,
(u8)(pci_dev->class >> 16), (u8)(pci_dev->class >> 8),
(u8)(pci_dev->class));
return sysfs_emit(buf, "pci:v%08Xd%08Xsv%08Xsd%08Xbc%02Xsc%02Xi%02X\n",
pci_dev->vendor, pci_dev->device,
pci_dev->subsystem_vendor, pci_dev->subsystem_device,
(u8)(pci_dev->class >> 16), (u8)(pci_dev->class >> 8),
(u8)(pci_dev->class));
}
static DEVICE_ATTR_RO(modalias);
@ -302,7 +302,7 @@ static ssize_t enable_show(struct device *dev, struct device_attribute *attr,
struct pci_dev *pdev;
pdev = to_pci_dev(dev);
return sprintf(buf, "%u\n", atomic_read(&pdev->enable_cnt));
return sysfs_emit(buf, "%u\n", atomic_read(&pdev->enable_cnt));
}
static DEVICE_ATTR_RW(enable);
@ -338,7 +338,7 @@ static ssize_t numa_node_store(struct device *dev,
static ssize_t numa_node_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
return sprintf(buf, "%d\n", dev->numa_node);
return sysfs_emit(buf, "%d\n", dev->numa_node);
}
static DEVICE_ATTR_RW(numa_node);
#endif
@ -348,7 +348,7 @@ static ssize_t dma_mask_bits_show(struct device *dev,
{
struct pci_dev *pdev = to_pci_dev(dev);
return sprintf(buf, "%d\n", fls64(pdev->dma_mask));
return sysfs_emit(buf, "%d\n", fls64(pdev->dma_mask));
}
static DEVICE_ATTR_RO(dma_mask_bits);
@ -356,7 +356,7 @@ static ssize_t consistent_dma_mask_bits_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
return sprintf(buf, "%d\n", fls64(dev->coherent_dma_mask));
return sysfs_emit(buf, "%d\n", fls64(dev->coherent_dma_mask));
}
static DEVICE_ATTR_RO(consistent_dma_mask_bits);
@ -366,9 +366,9 @@ static ssize_t msi_bus_show(struct device *dev, struct device_attribute *attr,
struct pci_dev *pdev = to_pci_dev(dev);
struct pci_bus *subordinate = pdev->subordinate;
return sprintf(buf, "%u\n", subordinate ?
!(subordinate->bus_flags & PCI_BUS_FLAGS_NO_MSI)
: !pdev->no_msi);
return sysfs_emit(buf, "%u\n", subordinate ?
!(subordinate->bus_flags & PCI_BUS_FLAGS_NO_MSI)
: !pdev->no_msi);
}
static ssize_t msi_bus_store(struct device *dev, struct device_attribute *attr,
@ -523,7 +523,7 @@ static ssize_t d3cold_allowed_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct pci_dev *pdev = to_pci_dev(dev);
return sprintf(buf, "%u\n", pdev->d3cold_allowed);
return sysfs_emit(buf, "%u\n", pdev->d3cold_allowed);
}
static DEVICE_ATTR_RW(d3cold_allowed);
#endif
@ -537,7 +537,7 @@ static ssize_t devspec_show(struct device *dev,
if (np == NULL)
return 0;
return sprintf(buf, "%pOF", np);
return sysfs_emit(buf, "%pOF", np);
}
static DEVICE_ATTR_RO(devspec);
#endif
@ -583,7 +583,7 @@ static ssize_t driver_override_show(struct device *dev,
ssize_t len;
device_lock(dev);
len = scnprintf(buf, PAGE_SIZE, "%s\n", pdev->driver_override);
len = sysfs_emit(buf, "%s\n", pdev->driver_override);
device_unlock(dev);
return len;
}
@ -658,11 +658,11 @@ static ssize_t boot_vga_show(struct device *dev, struct device_attribute *attr,
struct pci_dev *vga_dev = vga_default_device();
if (vga_dev)
return sprintf(buf, "%u\n", (pdev == vga_dev));
return sysfs_emit(buf, "%u\n", (pdev == vga_dev));
return sprintf(buf, "%u\n",
!!(pdev->resource[PCI_ROM_RESOURCE].flags &
IORESOURCE_ROM_SHADOW));
return sysfs_emit(buf, "%u\n",
!!(pdev->resource[PCI_ROM_RESOURCE].flags &
IORESOURCE_ROM_SHADOW));
}
static DEVICE_ATTR_RO(boot_vga);
@ -808,6 +808,29 @@ static ssize_t pci_write_config(struct file *filp, struct kobject *kobj,
return count;
}
static BIN_ATTR(config, 0644, pci_read_config, pci_write_config, 0);
static struct bin_attribute *pci_dev_config_attrs[] = {
&bin_attr_config,
NULL,
};
static umode_t pci_dev_config_attr_is_visible(struct kobject *kobj,
struct bin_attribute *a, int n)
{
struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj));
a->size = PCI_CFG_SPACE_SIZE;
if (pdev->cfg_size > PCI_CFG_SPACE_SIZE)
a->size = PCI_CFG_SPACE_EXP_SIZE;
return a->attr.mode;
}
static const struct attribute_group pci_dev_config_attr_group = {
.bin_attrs = pci_dev_config_attrs,
.is_bin_visible = pci_dev_config_attr_is_visible,
};
#ifdef HAVE_PCI_LEGACY
/**
@ -1283,25 +1306,32 @@ static ssize_t pci_read_rom(struct file *filp, struct kobject *kobj,
return count;
}
static BIN_ATTR(rom, 0600, pci_read_rom, pci_write_rom, 0);
static const struct bin_attribute pci_config_attr = {
.attr = {
.name = "config",
.mode = 0644,
},
.size = PCI_CFG_SPACE_SIZE,
.read = pci_read_config,
.write = pci_write_config,
static struct bin_attribute *pci_dev_rom_attrs[] = {
&bin_attr_rom,
NULL,
};
static const struct bin_attribute pcie_config_attr = {
.attr = {
.name = "config",
.mode = 0644,
},
.size = PCI_CFG_SPACE_EXP_SIZE,
.read = pci_read_config,
.write = pci_write_config,
static umode_t pci_dev_rom_attr_is_visible(struct kobject *kobj,
struct bin_attribute *a, int n)
{
struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj));
size_t rom_size;
/* If the device has a ROM, try to expose it in sysfs. */
rom_size = pci_resource_len(pdev, PCI_ROM_RESOURCE);
if (!rom_size)
return 0;
a->size = rom_size;
return a->attr.mode;
}
static const struct attribute_group pci_dev_rom_attr_group = {
.bin_attrs = pci_dev_rom_attrs,
.is_bin_visible = pci_dev_rom_attr_is_visible,
};
static ssize_t reset_store(struct device *dev, struct device_attribute *attr,
@ -1325,102 +1355,35 @@ static ssize_t reset_store(struct device *dev, struct device_attribute *attr,
return count;
}
static DEVICE_ATTR_WO(reset);
static DEVICE_ATTR(reset, 0200, NULL, reset_store);
static struct attribute *pci_dev_reset_attrs[] = {
&dev_attr_reset.attr,
NULL,
};
static int pci_create_capabilities_sysfs(struct pci_dev *dev)
static umode_t pci_dev_reset_attr_is_visible(struct kobject *kobj,
struct attribute *a, int n)
{
int retval;
struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj));
pcie_vpd_create_sysfs_dev_files(dev);
if (!pdev->reset_fn)
return 0;
if (dev->reset_fn) {
retval = device_create_file(&dev->dev, &dev_attr_reset);
if (retval)
goto error;
}
return 0;
error:
pcie_vpd_remove_sysfs_dev_files(dev);
return retval;
return a->mode;
}
static const struct attribute_group pci_dev_reset_attr_group = {
.attrs = pci_dev_reset_attrs,
.is_visible = pci_dev_reset_attr_is_visible,
};
int __must_check pci_create_sysfs_dev_files(struct pci_dev *pdev)
{
int retval;
int rom_size;
struct bin_attribute *attr;
if (!sysfs_initialized)
return -EACCES;
if (pdev->cfg_size > PCI_CFG_SPACE_SIZE)
retval = sysfs_create_bin_file(&pdev->dev.kobj, &pcie_config_attr);
else
retval = sysfs_create_bin_file(&pdev->dev.kobj, &pci_config_attr);
if (retval)
goto err;
retval = pci_create_resource_files(pdev);
if (retval)
goto err_config_file;
/* If the device has a ROM, try to expose it in sysfs. */
rom_size = pci_resource_len(pdev, PCI_ROM_RESOURCE);
if (rom_size) {
attr = kzalloc(sizeof(*attr), GFP_ATOMIC);
if (!attr) {
retval = -ENOMEM;
goto err_resource_files;
}
sysfs_bin_attr_init(attr);
attr->size = rom_size;
attr->attr.name = "rom";
attr->attr.mode = 0600;
attr->read = pci_read_rom;
attr->write = pci_write_rom;
retval = sysfs_create_bin_file(&pdev->dev.kobj, attr);
if (retval) {
kfree(attr);
goto err_resource_files;
}
pdev->rom_attr = attr;
}
/* add sysfs entries for various capabilities */
retval = pci_create_capabilities_sysfs(pdev);
if (retval)
goto err_rom_file;
pci_create_firmware_label_files(pdev);
return 0;
err_rom_file:
if (pdev->rom_attr) {
sysfs_remove_bin_file(&pdev->dev.kobj, pdev->rom_attr);
kfree(pdev->rom_attr);
pdev->rom_attr = NULL;
}
err_resource_files:
pci_remove_resource_files(pdev);
err_config_file:
if (pdev->cfg_size > PCI_CFG_SPACE_SIZE)
sysfs_remove_bin_file(&pdev->dev.kobj, &pcie_config_attr);
else
sysfs_remove_bin_file(&pdev->dev.kobj, &pci_config_attr);
err:
return retval;
}
static void pci_remove_capabilities_sysfs(struct pci_dev *dev)
{
pcie_vpd_remove_sysfs_dev_files(dev);
if (dev->reset_fn) {
device_remove_file(&dev->dev, &dev_attr_reset);
dev->reset_fn = 0;
}
return pci_create_resource_files(pdev);
}
/**
@ -1434,22 +1397,7 @@ void pci_remove_sysfs_dev_files(struct pci_dev *pdev)
if (!sysfs_initialized)
return;
pci_remove_capabilities_sysfs(pdev);
if (pdev->cfg_size > PCI_CFG_SPACE_SIZE)
sysfs_remove_bin_file(&pdev->dev.kobj, &pcie_config_attr);
else
sysfs_remove_bin_file(&pdev->dev.kobj, &pci_config_attr);
pci_remove_resource_files(pdev);
if (pdev->rom_attr) {
sysfs_remove_bin_file(&pdev->dev.kobj, pdev->rom_attr);
kfree(pdev->rom_attr);
pdev->rom_attr = NULL;
}
pci_remove_firmware_label_files(pdev);
}
static int __init pci_sysfs_init(void)
@ -1540,6 +1488,16 @@ static const struct attribute_group pci_dev_group = {
const struct attribute_group *pci_dev_groups[] = {
&pci_dev_group,
&pci_dev_config_attr_group,
&pci_dev_rom_attr_group,
&pci_dev_reset_attr_group,
&pci_dev_vpd_attr_group,
#ifdef CONFIG_DMI
&pci_dev_smbios_attr_group,
#endif
#ifdef CONFIG_ACPI
&pci_dev_acpi_attr_group,
#endif
NULL,
};
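These static groups take effect because pci_dev_groups is already used as the PCI bus type's ->dev_groups, so the driver core creates each attribute (honoring its is_visible()/is_bin_visible() callback) during device_add(), before the KOBJ_ADD uevent is sent; that is what closes the enumeration races left open by the late sysfs_create_bin_file() calls removed above. Abridged sketch of that pre-existing wiring (paraphrasing drivers/pci/pci-driver.c, not part of this diff):

struct bus_type pci_bus_type = {
	.name		= "pci",
	/* ... match/probe/pm callbacks omitted ... */
	.dev_groups	= pci_dev_groups,	/* the array above */
};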


@ -4072,6 +4072,7 @@ phys_addr_t pci_pio_to_address(unsigned long pio)
return address;
}
EXPORT_SYMBOL_GPL(pci_pio_to_address);
unsigned long __weak pci_address_to_pio(phys_addr_t address)
{
@ -4473,6 +4474,23 @@ void pci_clear_mwi(struct pci_dev *dev)
}
EXPORT_SYMBOL(pci_clear_mwi);
/**
* pci_disable_parity - disable parity checking for device
* @dev: the PCI device to operate on
*
* Disable parity checking for device @dev
*/
void pci_disable_parity(struct pci_dev *dev)
{
u16 cmd;
pci_read_config_word(dev, PCI_COMMAND, &cmd);
if (cmd & PCI_COMMAND_PARITY) {
cmd &= ~PCI_COMMAND_PARITY;
pci_write_config_word(dev, PCI_COMMAND, cmd);
}
}
/**
* pci_intx - enables/disables PCI INTx for device dev
* @pdev: the PCI device to operate on
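pci_disable_parity() takes a struct pci_dev * and returns void, so it already matches the fixup-hook signature and can be registered directly; the Mellanox Tavor quirk later in this series does exactly that. A sketch with placeholder IDs (not a real quirk):

/* Placeholder IDs for illustration only. */
#define EXAMPLE_VENDOR_ID	0x1234
#define EXAMPLE_DEVICE_ID	0x5678

DECLARE_PCI_FIXUP_FINAL(EXAMPLE_VENDOR_ID, EXAMPLE_DEVICE_ID,
			pci_disable_parity);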


@ -21,16 +21,10 @@ bool pcie_cap_has_rtctl(const struct pci_dev *dev);
int pci_create_sysfs_dev_files(struct pci_dev *pdev);
void pci_remove_sysfs_dev_files(struct pci_dev *pdev);
#if !defined(CONFIG_DMI) && !defined(CONFIG_ACPI)
static inline void pci_create_firmware_label_files(struct pci_dev *pdev)
{ return; }
static inline void pci_remove_firmware_label_files(struct pci_dev *pdev)
{ return; }
#else
void pci_create_firmware_label_files(struct pci_dev *pdev);
void pci_remove_firmware_label_files(struct pci_dev *pdev);
#endif
void pci_cleanup_rom(struct pci_dev *dev);
#ifdef CONFIG_DMI
extern const struct attribute_group pci_dev_smbios_attr_group;
#endif
enum pci_mmap_api {
PCI_MMAP_SYSFS, /* mmap on /sys/bus/pci/devices/<BDF>/resource<N> */
@ -141,10 +135,9 @@ static inline bool pcie_downstream_port(const struct pci_dev *dev)
type == PCI_EXP_TYPE_PCIE_BRIDGE;
}
int pci_vpd_init(struct pci_dev *dev);
void pci_vpd_init(struct pci_dev *dev);
void pci_vpd_release(struct pci_dev *dev);
void pcie_vpd_create_sysfs_dev_files(struct pci_dev *dev);
void pcie_vpd_remove_sysfs_dev_files(struct pci_dev *dev);
extern const struct attribute_group pci_dev_vpd_attr_group;
/* PCI Virtual Channel */
int pci_save_vc_state(struct pci_dev *dev);
@ -625,6 +618,12 @@ static inline int pci_dev_specific_reset(struct pci_dev *dev, int probe)
#if defined(CONFIG_PCI_QUIRKS) && defined(CONFIG_ARM64)
int acpi_get_rc_resources(struct device *dev, const char *hid, u16 segment,
struct resource *res);
#else
static inline int acpi_get_rc_resources(struct device *dev, const char *hid,
u16 segment, struct resource *res)
{
return -ENODEV;
}
#endif
int pci_rebar_get_current_size(struct pci_dev *pdev, int bar);
@ -697,6 +696,7 @@ static inline int pci_aer_raw_clear_status(struct pci_dev *dev) { return -EINVAL
#ifdef CONFIG_ACPI
int pci_acpi_program_hp_params(struct pci_dev *dev);
extern const struct attribute_group pci_dev_acpi_attr_group;
#else
static inline int pci_acpi_program_hp_params(struct pci_dev *dev)
{


@ -129,7 +129,7 @@ static const char * const ecrc_policy_str[] = {
};
/**
* enable_ercr_checking - enable PCIe ECRC checking for a device
* enable_ecrc_checking - enable PCIe ECRC checking for a device
* @dev: the PCI device
*
* Returns 0 on success, or negative on failure.
@ -153,7 +153,7 @@ static int enable_ecrc_checking(struct pci_dev *dev)
}
/**
* disable_ercr_checking - disables PCIe ECRC checking for a device
* disable_ecrc_checking - disables PCIe ECRC checking for a device
* @dev: the PCI device
*
* Returns 0 on success, or negative on failure.
@ -1442,7 +1442,7 @@ static struct pcie_port_service_driver aerdriver = {
};
/**
* aer_service_init - register AER root service driver
* pcie_aer_init - register AER root service driver
*
* Invoked when AER root service driver is loaded.
*/


@ -463,7 +463,7 @@ static struct pcie_port_service_driver pcie_pme_driver = {
};
/**
* pcie_pme_service_init - Register the PCIe PME service driver.
* pcie_pme_init - Register the PCIe PME service driver.
*/
int __init pcie_pme_init(void)
{


@ -32,7 +32,7 @@ static bool rcec_assoc_rciep(struct pci_dev *rcec, struct pci_dev *rciep)
/* Same bus, so check bitmap */
for_each_set_bit(devn, &bitmap, 32)
if (devn == rciep->devfn)
if (devn == PCI_SLOT(rciep->devfn))
return true;
return false;


@ -895,7 +895,6 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
/* Temporarily move resources off the list */
list_splice_init(&bridge->windows, &resources);
bus->sysdata = bridge->sysdata;
bus->msi = bridge->msi;
bus->ops = bridge->ops;
bus->number = bus->busn_res.start = bridge->busnr;
#ifdef CONFIG_PCI_DOMAINS_GENERIC
@ -926,6 +925,8 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
device_enable_async_suspend(bus->bridge);
pci_set_bus_of_node(bus);
pci_set_bus_msi_domain(bus);
if (bridge->msi_domain && !dev_get_msi_domain(&bus->dev))
bus->bus_flags |= PCI_BUS_FLAGS_NO_MSI;
if (!parent)
set_dev_node(bus->bridge, pcibus_to_node(bus));
@ -1053,7 +1054,6 @@ static struct pci_bus *pci_alloc_child_bus(struct pci_bus *parent,
return NULL;
child->parent = parent;
child->msi = parent->msi;
child->sysdata = parent->sysdata;
child->bus_flags = parent->bus_flags;
@ -2353,6 +2353,7 @@ static struct pci_dev *pci_scan_device(struct pci_bus *bus, int devfn)
pci_set_of_node(dev);
if (pci_setup_device(dev)) {
pci_release_of_node(dev);
pci_bus_put(dev->bus);
kfree(dev);
return NULL;


@ -206,16 +206,11 @@ DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_ANY_ID, PCI_ANY_ID,
PCI_CLASS_BRIDGE_HOST, 8, quirk_mmio_always_on);
/*
* The Mellanox Tavor device gives false positive parity errors. Mark this
* device with a broken_parity_status to allow PCI scanning code to "skip"
* this now blacklisted device.
* The Mellanox Tavor device gives false positive parity errors. Disable
* parity error reporting.
*/
static void quirk_mellanox_tavor(struct pci_dev *dev)
{
dev->broken_parity_status = 1; /* This device gives false positives */
}
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MELLANOX, PCI_DEVICE_ID_MELLANOX_TAVOR, quirk_mellanox_tavor);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MELLANOX, PCI_DEVICE_ID_MELLANOX_TAVOR_BRIDGE, quirk_mellanox_tavor);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MELLANOX, PCI_DEVICE_ID_MELLANOX_TAVOR, pci_disable_parity);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MELLANOX, PCI_DEVICE_ID_MELLANOX_TAVOR_BRIDGE, pci_disable_parity);
/*
* Deal with broken BIOSes that neglect to enable passive release,
@ -2585,10 +2580,8 @@ static int msi_ht_cap_enabled(struct pci_dev *dev)
/* Check the HyperTransport MSI mapping to know whether MSI is enabled or not */
static void quirk_msi_ht_cap(struct pci_dev *dev)
{
if (dev->subordinate && !msi_ht_cap_enabled(dev)) {
pci_warn(dev, "MSI quirk detected; subordinate MSI disabled\n");
dev->subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MSI;
}
if (!msi_ht_cap_enabled(dev))
quirk_disable_msi(dev);
}
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SERVERWORKS, PCI_DEVICE_ID_SERVERWORKS_HT2000_PCIE,
quirk_msi_ht_cap);
@ -2601,9 +2594,6 @@ static void quirk_nvidia_ck804_msi_ht_cap(struct pci_dev *dev)
{
struct pci_dev *pdev;
if (!dev->subordinate)
return;
/*
* Check HT MSI cap on this chipset and the root one. A single one
* having MSI is enough to be sure that MSI is supported.
@ -2611,10 +2601,8 @@ static void quirk_nvidia_ck804_msi_ht_cap(struct pci_dev *dev)
pdev = pci_get_slot(dev->bus, 0);
if (!pdev)
return;
if (!msi_ht_cap_enabled(dev) && !msi_ht_cap_enabled(pdev)) {
pci_warn(dev, "MSI quirk detected; subordinate MSI disabled\n");
dev->subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MSI;
}
if (!msi_ht_cap_enabled(pdev))
quirk_msi_ht_cap(dev);
pci_dev_put(pdev);
}
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_CK804_PCIE,
@ -3922,6 +3910,7 @@ static const struct pci_dev_reset_methods pci_dev_reset_methods[] = {
reset_ivb_igd },
{ PCI_VENDOR_ID_SAMSUNG, 0xa804, nvme_disable_and_flr },
{ PCI_VENDOR_ID_INTEL, 0x0953, delay_250ms_after_flr },
{ PCI_VENDOR_ID_INTEL, 0x0a54, delay_250ms_after_flr },
{ PCI_VENDOR_ID_CHELSIO, PCI_ANY_ID,
reset_chelsio_generic_dev },
{ 0 }


@ -19,6 +19,8 @@ static void pci_stop_dev(struct pci_dev *dev)
pci_pme_active(dev, false);
if (pci_dev_is_added(dev)) {
dev->reset_fn = 0;
device_release_driver(&dev->dev);
pci_proc_detach_device(dev);
pci_remove_sysfs_dev_files(dev);


@ -16,12 +16,10 @@
struct pci_vpd_ops {
ssize_t (*read)(struct pci_dev *dev, loff_t pos, size_t count, void *buf);
ssize_t (*write)(struct pci_dev *dev, loff_t pos, size_t count, const void *buf);
int (*set_size)(struct pci_dev *dev, size_t len);
};
struct pci_vpd {
const struct pci_vpd_ops *ops;
struct bin_attribute *attr; /* Descriptor for sysfs VPD entry */
struct mutex lock;
unsigned int len;
u16 flag;
@ -30,6 +28,11 @@ struct pci_vpd {
unsigned int valid:1;
};
static struct pci_dev *pci_get_func0_dev(struct pci_dev *dev)
{
return pci_get_slot(dev->bus, PCI_DEVFN(PCI_SLOT(dev->devfn), 0));
}
/**
* pci_read_vpd - Read one entry from Vital Product Data
* @dev: pci device struct
@ -60,19 +63,6 @@ ssize_t pci_write_vpd(struct pci_dev *dev, loff_t pos, size_t count, const void
}
EXPORT_SYMBOL(pci_write_vpd);
/**
* pci_set_vpd_size - Set size of Vital Product Data space
* @dev: pci device struct
* @len: size of vpd space
*/
int pci_set_vpd_size(struct pci_dev *dev, size_t len)
{
if (!dev->vpd || !dev->vpd->ops)
return -ENODEV;
return dev->vpd->ops->set_size(dev, len);
}
EXPORT_SYMBOL(pci_set_vpd_size);
#define PCI_VPD_MAX_SIZE (PCI_VPD_ADDR_MASK + 1)
/**
@ -85,10 +75,14 @@ static size_t pci_vpd_size(struct pci_dev *dev, size_t old_size)
size_t off = 0;
unsigned char header[1+2]; /* 1 byte tag, 2 bytes length */
while (off < old_size &&
pci_read_vpd(dev, off, 1, header) == 1) {
while (off < old_size && pci_read_vpd(dev, off, 1, header) == 1) {
unsigned char tag;
if (!header[0] && !off) {
pci_info(dev, "Invalid VPD tag 00, assume missing optional VPD EPROM\n");
return 0;
}
if (header[0] & PCI_VPD_LRDT) {
/* Large Resource Data Type Tag */
tag = pci_vpd_lrdt_tag(header);
@ -297,30 +291,15 @@ static ssize_t pci_vpd_write(struct pci_dev *dev, loff_t pos, size_t count,
return ret ? ret : count;
}
static int pci_vpd_set_size(struct pci_dev *dev, size_t len)
{
struct pci_vpd *vpd = dev->vpd;
if (len == 0 || len > PCI_VPD_MAX_SIZE)
return -EIO;
vpd->valid = 1;
vpd->len = len;
return 0;
}
static const struct pci_vpd_ops pci_vpd_ops = {
.read = pci_vpd_read,
.write = pci_vpd_write,
.set_size = pci_vpd_set_size,
};
static ssize_t pci_vpd_f0_read(struct pci_dev *dev, loff_t pos, size_t count,
void *arg)
{
struct pci_dev *tdev = pci_get_slot(dev->bus,
PCI_DEVFN(PCI_SLOT(dev->devfn), 0));
struct pci_dev *tdev = pci_get_func0_dev(dev);
ssize_t ret;
if (!tdev)
@ -334,8 +313,7 @@ static ssize_t pci_vpd_f0_read(struct pci_dev *dev, loff_t pos, size_t count,
static ssize_t pci_vpd_f0_write(struct pci_dev *dev, loff_t pos, size_t count,
const void *arg)
{
struct pci_dev *tdev = pci_get_slot(dev->bus,
PCI_DEVFN(PCI_SLOT(dev->devfn), 0));
struct pci_dev *tdev = pci_get_func0_dev(dev);
ssize_t ret;
if (!tdev)
@ -346,38 +324,23 @@ static ssize_t pci_vpd_f0_write(struct pci_dev *dev, loff_t pos, size_t count,
return ret;
}
static int pci_vpd_f0_set_size(struct pci_dev *dev, size_t len)
{
struct pci_dev *tdev = pci_get_slot(dev->bus,
PCI_DEVFN(PCI_SLOT(dev->devfn), 0));
int ret;
if (!tdev)
return -ENODEV;
ret = pci_set_vpd_size(tdev, len);
pci_dev_put(tdev);
return ret;
}
static const struct pci_vpd_ops pci_vpd_f0_ops = {
.read = pci_vpd_f0_read,
.write = pci_vpd_f0_write,
.set_size = pci_vpd_f0_set_size,
};
int pci_vpd_init(struct pci_dev *dev)
void pci_vpd_init(struct pci_dev *dev)
{
struct pci_vpd *vpd;
u8 cap;
cap = pci_find_capability(dev, PCI_CAP_ID_VPD);
if (!cap)
return -ENODEV;
return;
vpd = kzalloc(sizeof(*vpd), GFP_ATOMIC);
if (!vpd)
return -ENOMEM;
return;
vpd->len = PCI_VPD_MAX_SIZE;
if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0)
@ -389,7 +352,6 @@ int pci_vpd_init(struct pci_dev *dev)
vpd->busy = 0;
vpd->valid = 0;
dev->vpd = vpd;
return 0;
}
void pci_vpd_release(struct pci_dev *dev)
@ -397,102 +359,56 @@ void pci_vpd_release(struct pci_dev *dev)
kfree(dev->vpd);
}
static ssize_t read_vpd_attr(struct file *filp, struct kobject *kobj,
struct bin_attribute *bin_attr, char *buf,
loff_t off, size_t count)
static ssize_t vpd_read(struct file *filp, struct kobject *kobj,
struct bin_attribute *bin_attr, char *buf, loff_t off,
size_t count)
{
struct pci_dev *dev = to_pci_dev(kobj_to_dev(kobj));
if (bin_attr->size > 0) {
if (off > bin_attr->size)
count = 0;
else if (count > bin_attr->size - off)
count = bin_attr->size - off;
}
return pci_read_vpd(dev, off, count, buf);
}
static ssize_t write_vpd_attr(struct file *filp, struct kobject *kobj,
struct bin_attribute *bin_attr, char *buf,
loff_t off, size_t count)
static ssize_t vpd_write(struct file *filp, struct kobject *kobj,
struct bin_attribute *bin_attr, char *buf, loff_t off,
size_t count)
{
struct pci_dev *dev = to_pci_dev(kobj_to_dev(kobj));
if (bin_attr->size > 0) {
if (off > bin_attr->size)
count = 0;
else if (count > bin_attr->size - off)
count = bin_attr->size - off;
}
return pci_write_vpd(dev, off, count, buf);
}
static BIN_ATTR(vpd, 0600, vpd_read, vpd_write, 0);
void pcie_vpd_create_sysfs_dev_files(struct pci_dev *dev)
static struct bin_attribute *vpd_attrs[] = {
&bin_attr_vpd,
NULL,
};
static umode_t vpd_attr_is_visible(struct kobject *kobj,
struct bin_attribute *a, int n)
{
int retval;
struct bin_attribute *attr;
struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj));
if (!dev->vpd)
return;
if (!pdev->vpd)
return 0;
attr = kzalloc(sizeof(*attr), GFP_ATOMIC);
if (!attr)
return;
sysfs_bin_attr_init(attr);
attr->size = 0;
attr->attr.name = "vpd";
attr->attr.mode = S_IRUSR | S_IWUSR;
attr->read = read_vpd_attr;
attr->write = write_vpd_attr;
retval = sysfs_create_bin_file(&dev->dev.kobj, attr);
if (retval) {
kfree(attr);
return;
}
dev->vpd->attr = attr;
return a->attr.mode;
}
void pcie_vpd_remove_sysfs_dev_files(struct pci_dev *dev)
const struct attribute_group pci_dev_vpd_attr_group = {
.bin_attrs = vpd_attrs,
.is_bin_visible = vpd_attr_is_visible,
};
int pci_vpd_find_tag(const u8 *buf, unsigned int len, u8 rdt)
{
if (dev->vpd && dev->vpd->attr) {
sysfs_remove_bin_file(&dev->dev.kobj, dev->vpd->attr);
kfree(dev->vpd->attr);
}
}
int i = 0;
int pci_vpd_find_tag(const u8 *buf, unsigned int off, unsigned int len, u8 rdt)
{
int i;
/* look for LRDT tags only, end tag is the only SRDT tag */
while (i + PCI_VPD_LRDT_TAG_SIZE <= len && buf[i] & PCI_VPD_LRDT) {
if (buf[i] == rdt)
return i;
for (i = off; i < len; ) {
u8 val = buf[i];
if (val & PCI_VPD_LRDT) {
/* Don't return success of the tag isn't complete */
if (i + PCI_VPD_LRDT_TAG_SIZE > len)
break;
if (val == rdt)
return i;
i += PCI_VPD_LRDT_TAG_SIZE +
pci_vpd_lrdt_size(&buf[i]);
} else {
u8 tag = val & ~PCI_VPD_SRDT_LEN_MASK;
if (tag == rdt)
return i;
if (tag == PCI_VPD_SRDT_END)
break;
i += PCI_VPD_SRDT_TAG_SIZE +
pci_vpd_srdt_size(&buf[i]);
}
i += PCI_VPD_LRDT_TAG_SIZE + pci_vpd_lrdt_size(buf + i);
}
return -ENOENT;
@ -530,7 +446,7 @@ static void quirk_f0_vpd_link(struct pci_dev *dev)
if (!PCI_FUNC(dev->devfn))
return;
f0 = pci_get_slot(dev->bus, PCI_DEVFN(PCI_SLOT(dev->devfn), 0));
f0 = pci_get_func0_dev(dev);
if (!f0)
return;
@ -570,7 +486,6 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x005d, quirk_blacklist_vpd);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x005f, quirk_blacklist_vpd);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATTANSIC, PCI_ANY_ID,
quirk_blacklist_vpd);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_QLOGIC, 0x2261, quirk_blacklist_vpd);
/*
* The Amazon Annapurna Labs 0x0031 device id is reused for other non Root Port
* device types, so the quirk is registered for the PCI_CLASS_BRIDGE_PCI class.
@ -578,51 +493,16 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_QLOGIC, 0x2261, quirk_blacklist_vpd);
DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_AMAZON_ANNAPURNA_LABS, 0x0031,
PCI_CLASS_BRIDGE_PCI, 8, quirk_blacklist_vpd);
/*
* For Broadcom 5706, 5708, 5709 rev. A nics, any read beyond the
* VPD end tag will hang the device. This problem was initially
* observed when a vpd entry was created in sysfs
* ('/sys/bus/pci/devices/<id>/vpd'). A read to this sysfs entry
* will dump 32k of data. Reading a full 32k will cause an access
* beyond the VPD end tag causing the device to hang. Once the device
* is hung, the bnx2 driver will not be able to reset the device.
* We believe that it is legal to read beyond the end tag and
* therefore the solution is to limit the read/write length.
*/
static void quirk_brcm_570x_limit_vpd(struct pci_dev *dev)
static void pci_vpd_set_size(struct pci_dev *dev, size_t len)
{
/*
* Only disable the VPD capability for 5706, 5706S, 5708,
* 5708S and 5709 rev. A
*/
if ((dev->device == PCI_DEVICE_ID_NX2_5706) ||
(dev->device == PCI_DEVICE_ID_NX2_5706S) ||
(dev->device == PCI_DEVICE_ID_NX2_5708) ||
(dev->device == PCI_DEVICE_ID_NX2_5708S) ||
((dev->device == PCI_DEVICE_ID_NX2_5709) &&
(dev->revision & 0xf0) == 0x0)) {
if (dev->vpd)
dev->vpd->len = 0x80;
}
struct pci_vpd *vpd = dev->vpd;
if (!vpd || len == 0 || len > PCI_VPD_MAX_SIZE)
return;
vpd->valid = 1;
vpd->len = len;
}
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_BROADCOM,
PCI_DEVICE_ID_NX2_5706,
quirk_brcm_570x_limit_vpd);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_BROADCOM,
PCI_DEVICE_ID_NX2_5706S,
quirk_brcm_570x_limit_vpd);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_BROADCOM,
PCI_DEVICE_ID_NX2_5708,
quirk_brcm_570x_limit_vpd);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_BROADCOM,
PCI_DEVICE_ID_NX2_5708S,
quirk_brcm_570x_limit_vpd);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_BROADCOM,
PCI_DEVICE_ID_NX2_5709,
quirk_brcm_570x_limit_vpd);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_BROADCOM,
PCI_DEVICE_ID_NX2_5709S,
quirk_brcm_570x_limit_vpd);
static void quirk_chelsio_extend_vpd(struct pci_dev *dev)
{
@ -642,9 +522,9 @@ static void quirk_chelsio_extend_vpd(struct pci_dev *dev)
* limits.
*/
if (chip == 0x0 && prod >= 0x20)
pci_set_vpd_size(dev, 8192);
pci_vpd_set_size(dev, 8192);
else if (chip >= 0x4 && func < 0x8)
pci_set_vpd_size(dev, 2048);
pci_vpd_set_size(dev, 2048);
}
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, PCI_ANY_ID,


@ -197,6 +197,7 @@ config RESET_SIMPLE
- RCC reset controller in STM32 MCUs
- Allwinner SoCs
- ZTE's zx2967 family
- SiFive FU740 SoCs
config RESET_STM32MP157
bool "STM32MP157 Reset Driver" if COMPILE_TEST


@ -1649,8 +1649,7 @@ static int read_vpd(struct cxlflash_cfg *cfg, u64 wwpn[])
}
/* Get the read only section offset */
ro_start = pci_vpd_find_tag(vpd_data, 0, vpd_size,
PCI_VPD_LRDT_RO_DATA);
ro_start = pci_vpd_find_tag(vpd_data, vpd_size, PCI_VPD_LRDT_RO_DATA);
if (unlikely(ro_start < 0)) {
dev_err(dev, "%s: VPD Read-only data not found\n", __func__);
rc = -ENODEV;


@ -19,5 +19,6 @@
#define PRCI_CLK_CLTXPLL 5
#define PRCI_CLK_TLCLK 6
#define PRCI_CLK_PCLK 7
#define PRCI_CLK_PCIE_AUX 8
#endif /* __DT_BINDINGS_CLOCK_SIFIVE_FU740_PRCI_H */


@ -240,8 +240,7 @@ void pci_msi_unmask_irq(struct irq_data *data);
/*
* The arch hooks to set up msi irqs. Default functions are implemented
* as weak symbols so that they /can/ be overridden by architecture-specific
* code if needed. These hooks must be enabled by the architecture or by
* drivers which depend on them via msi_controller based MSI handling.
* code if needed. These hooks can only be enabled by the architecture.
*
* If CONFIG_PCI_MSI_ARCH_FALLBACKS is not selected they are replaced by
* stubs with warnings.
@ -251,7 +250,6 @@ int arch_setup_msi_irq(struct pci_dev *dev, struct msi_desc *desc);
void arch_teardown_msi_irq(unsigned int irq);
int arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type);
void arch_teardown_msi_irqs(struct pci_dev *dev);
void default_teardown_msi_irqs(struct pci_dev *dev);
#else
static inline int arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
{
@ -272,19 +270,6 @@ static inline void arch_teardown_msi_irqs(struct pci_dev *dev)
void arch_restore_msi_irqs(struct pci_dev *dev);
void default_restore_msi_irqs(struct pci_dev *dev);
struct msi_controller {
struct module *owner;
struct device *dev;
struct device_node *of_node;
struct list_head list;
int (*setup_irq)(struct msi_controller *chip, struct pci_dev *dev,
struct msi_desc *desc);
int (*setup_irqs)(struct msi_controller *chip, struct pci_dev *dev,
int nvec, int type);
void (*teardown_irq)(struct msi_controller *chip, unsigned int irq);
};
#ifdef CONFIG_GENERIC_MSI_IRQ_DOMAIN
#include <linux/irqhandler.h>


@ -85,6 +85,7 @@ extern const struct pci_ecam_ops pci_thunder_ecam_ops; /* Cavium ThunderX 1.x */
extern const struct pci_ecam_ops xgene_v1_pcie_ecam_ops; /* APM X-Gene PCIe v1 */
extern const struct pci_ecam_ops xgene_v2_pcie_ecam_ops; /* APM X-Gene PCIe v2.x */
extern const struct pci_ecam_ops al_pcie_ops; /* Amazon Annapurna Labs PCIe */
extern const struct pci_ecam_ops tegra194_pcie_ops; /* Tegra194 PCIe */
#endif
#if IS_ENABLED(CONFIG_PCI_HOST_COMMON)


@ -458,7 +458,6 @@ struct pci_dev {
u32 saved_config_space[16]; /* Config space saved at suspend time */
struct hlist_head saved_cap_space;
struct bin_attribute *rom_attr; /* Attribute descriptor for sysfs ROM entry */
int rom_attr_enabled; /* Display of ROM attribute enabled? */
struct bin_attribute *res_attr[DEVICE_COUNT_RESOURCE]; /* sysfs file for resources */
struct bin_attribute *res_attr_wc[DEVICE_COUNT_RESOURCE]; /* sysfs file for WC mapping of resources */
@ -540,7 +539,6 @@ struct pci_host_bridge {
int (*map_irq)(const struct pci_dev *, u8, u8);
void (*release_fn)(struct pci_host_bridge *);
void *release_data;
struct msi_controller *msi;
unsigned int ignore_reset_delay:1; /* For entire hierarchy */
unsigned int no_ext_tags:1; /* No Extended Tags */
unsigned int native_aer:1; /* OS may use PCIe AER */
@ -551,6 +549,7 @@ struct pci_host_bridge {
unsigned int native_dpc:1; /* OS may use PCIe DPC */
unsigned int preserve_config:1; /* Preserve FW resource setup */
unsigned int size_windows:1; /* Enable root bus sizing */
unsigned int msi_domain:1; /* Bridge wants MSI domain */
/* Resource alignment requirements */
resource_size_t (*align_resource)(struct pci_dev *dev,
@ -621,7 +620,6 @@ struct pci_bus {
struct resource busn_res; /* Bus numbers routed to this bus */
struct pci_ops *ops; /* Configuration access functions */
struct msi_controller *msi; /* MSI controller */
void *sysdata; /* Hook for sys-specific extension */
struct proc_dir_entry *procdir; /* Directory entry in /proc/bus/pci */
@ -1210,6 +1208,7 @@ int __must_check pci_set_mwi(struct pci_dev *dev);
int __must_check pcim_set_mwi(struct pci_dev *dev);
int pci_try_set_mwi(struct pci_dev *dev);
void pci_clear_mwi(struct pci_dev *dev);
void pci_disable_parity(struct pci_dev *dev);
void pci_intx(struct pci_dev *dev, int enable);
bool pci_check_and_mask_intx(struct pci_dev *dev);
bool pci_check_and_unmask_intx(struct pci_dev *dev);
@ -1311,7 +1310,6 @@ void pci_unlock_rescan_remove(void);
/* Vital Product Data routines */
ssize_t pci_read_vpd(struct pci_dev *dev, loff_t pos, size_t count, void *buf);
ssize_t pci_write_vpd(struct pci_dev *dev, loff_t pos, size_t count, const void *buf);
int pci_set_vpd_size(struct pci_dev *dev, size_t len);
/* Helper functions for low-level code (drivers/pci/setup-[bus,res].c) */
resource_size_t pcibios_retrieve_fw_addr(struct pci_dev *dev, int idx);
@ -2320,14 +2318,13 @@ static inline u8 pci_vpd_info_field_size(const u8 *info_field)
/**
* pci_vpd_find_tag - Locates the Resource Data Type tag provided
* @buf: Pointer to buffered vpd data
* @off: The offset into the buffer at which to begin the search
* @len: The length of the vpd buffer
* @rdt: The Resource Data Type to search for
*
* Returns the index where the Resource Data Type was found or
* -ENOENT otherwise.
*/
int pci_vpd_find_tag(const u8 *buf, unsigned int off, unsigned int len, u8 rdt);
int pci_vpd_find_tag(const u8 *buf, unsigned int len, u8 rdt);
/**
* pci_vpd_find_info_keyword - Locates an information field keyword in the VPD


@ -76,6 +76,11 @@ static inline int reset_control_reset(struct reset_control *rstc)
return 0;
}
static inline int reset_control_rearm(struct reset_control *rstc)
{
return 0;
}
static inline int reset_control_assert(struct reset_control *rstc)
{
return 0;