Merge tag 'mhi-for-v5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/mani/mhi into char-work-next

Manivannan writes:

MHI changes for v5.19

MHI Host
--------

Support for new modems:

 - Cinterion MV32-WA/MV32-WB based on SDX62/SDX65
 - Telit FN980 v1 based on SDX55
 - Telit FN990 based on SDX65
 - Foxconn T99W373/T99W368 based on SDX62/SDX65

Core changes:

 - During the recycling of event ring elements, compute the ctxt_wp based on
   the locally cached value instead of reading it back from shared memory. This
   prevents possible corruption of the ctxt_wp, as some endpoint devices could
   modify the value in shared memory.

 - Add sysfs support for resetting the endpoint device as allowed by the MHI
   spec. The spec permits the host to hard reset the device when it hits an
   unrecoverable error and all other reset mechanisms have failed.

 - During MHI shutdown, wait for the endpoint device to enter the ready state
   post reset before proceeding. This avoids a possible race where the host
   removes the interrupt handler while the device sends the ready state
   interrupt, resulting in an IOMMU fault.

 - Bail out of updating an MHI register if the read fails during the
   read/modify/write sequence (a minimal sketch of this pattern follows this
   list).

 - Use mhi_write_reg() instead of mhi_write_reg_field() for writing whole
   registers in mhi_init_mmio().
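
As an aside, here is a minimal, self-contained sketch of the read/modify/write
bail-out pattern mentioned above. It is illustrative only: the demo_* helper
names, the failing-read simulation and the register/mask values are
placeholders rather than the actual MHI driver API; the real changes to
mhi_write_reg_field() and its callers appear in the diff further below.

  /*
   * Sketch (not kernel code): refuse to write back a register field when the
   * initial read fails, and propagate the error to the caller instead.
   */
  #include <stdint.h>
  #include <stdio.h>

  /* Hypothetical MMIO read that may fail (e.g. when the link is down). */
  static int demo_read_reg(const uint32_t *reg, uint32_t *out)
  {
      if (!reg)
          return -1;          /* simulate a failed read */
      *out = *reg;
      return 0;
  }

  /* Read/modify/write with an early bail-out on read failure. */
  static int demo_write_reg_field(uint32_t *reg, uint32_t mask, uint32_t val)
  {
      uint32_t tmp;
      int ret = demo_read_reg(reg, &tmp);

      if (ret)
          return ret;         /* bail out: never write back a garbage value */

      tmp &= ~mask;
      tmp |= val << __builtin_ctz(mask);   /* shift val to the field offset */
      *reg = tmp;
      return 0;
  }

  int main(void)
  {
      uint32_t reg = 0;

      /* e.g. program a "number of event rings"-style field */
      if (demo_write_reg_field(&reg, 0x000000ff, 5))
          fprintf(stderr, "register write skipped\n");
      printf("reg = 0x%08x\n", reg);
      return 0;
  }

Callers that previously ignored the result now simply check the returned
value and abort the operation, which is what the init, boot and power
management paths in the diff below do.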

MAINTAINERS change:

 - Since Qualcomm has moved the email domain for its employees from codeaurora
   to quicinc, update the same for Hemant.

* tag 'mhi-for-v5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/mani/mhi: (29 commits)
  bus: mhi: host: Add support for Foxconn T99W373 and T99W368
  bus: mhi: host: pci_generic: add Telit FN990
  bus: mhi: host: pci_generic: add Telit FN980 v1 hardware revision
  bus: mhi: host: Add support for Cinterion MV32-WA/MV32-WB
  bus: mhi: host: Optimize and update MMIO register write method
  bus: mhi: host: Bail on writing register fields if read fails
  bus: mhi: host: Wait for ready state after reset
  bus: mhi: host: Add soc_reset sysfs
  bus: mhi: host: pci_generic: Sort mhi_pci_id_table based on the PID
  bus: mhi: host: Use cached values for calculating the shared write pointer
  MAINTAINERS: Update Hemant's email id
  bus: mhi: ep: Add uevent support for module autoloading
  bus: mhi: ep: Add support for suspending and resuming channels
  bus: mhi: ep: Add support for queueing SKBs to the host
  bus: mhi: ep: Add support for processing channel rings
  bus: mhi: ep: Add support for reading from the host
  bus: mhi: ep: Add support for processing command rings
  bus: mhi: ep: Add support for handling SYS_ERR condition
  bus: mhi: ep: Add support for handling MHI_RESET
  bus: mhi: ep: Add support for powering down the MHI endpoint stack
  ...
Commit 46ee6bcac9 by Greg Kroah-Hartman, 2022-05-19 16:55:13 +02:00
8 changed files with 234 additions and 64 deletions

Documentation/ABI/stable/sysfs-bus-mhi

@@ -19,3 +19,13 @@ Description: The file holds the OEM PK Hash value of the endpoint device
read without having the device power on at least once, the file
will read all 0's.
Users: Any userspace application or clients interested in device info.
What: /sys/bus/mhi/devices/.../soc_reset
Date: April 2022
KernelVersion: 5.19
Contact: mhi@lists.linux.dev
Description: Initiates a SoC reset on the MHI controller. A SoC reset is
a reset of last resort, and will require a complete re-init.
This can be useful as a method of recovery if the device is
non-responsive, or as a means of loading new firmware as a
system administration task.

MAINTAINERS

@@ -12809,7 +12809,7 @@ F: arch/arm64/boot/dts/marvell/armada-3720-uDPU.dts
MHI BUS
M: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
R: Hemant Kumar <hemantk@codeaurora.org>
R: Hemant Kumar <quic_hemantk@quicinc.com>
L: mhi@lists.linux.dev
L: linux-arm-msm@vger.kernel.org
S: Maintained

drivers/bus/mhi/host/boot.c

@@ -19,8 +19,8 @@
#include "internal.h"
/* Setup RDDM vector table for RDDM transfer and program RXVEC */
void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
struct image_info *img_info)
int mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
struct image_info *img_info)
{
struct mhi_buf *mhi_buf = img_info->mhi_buf;
struct bhi_vec_entry *bhi_vec = img_info->bhi_vec;
@@ -28,6 +28,7 @@ void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
struct device *dev = &mhi_cntrl->mhi_dev->dev;
u32 sequence_id;
unsigned int i;
int ret;
for (i = 0; i < img_info->entries - 1; i++, mhi_buf++, bhi_vec++) {
bhi_vec->dma_addr = mhi_buf->dma_addr;
@@ -45,11 +46,17 @@ void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
mhi_write_reg(mhi_cntrl, base, BHIE_RXVECSIZE_OFFS, mhi_buf->len);
sequence_id = MHI_RANDOM_U32_NONZERO(BHIE_RXVECSTATUS_SEQNUM_BMSK);
mhi_write_reg_field(mhi_cntrl, base, BHIE_RXVECDB_OFFS,
BHIE_RXVECDB_SEQNUM_BMSK, sequence_id);
ret = mhi_write_reg_field(mhi_cntrl, base, BHIE_RXVECDB_OFFS,
BHIE_RXVECDB_SEQNUM_BMSK, sequence_id);
if (ret) {
dev_err(dev, "Failed to write sequence ID for BHIE_RXVECDB\n");
return ret;
}
dev_dbg(dev, "Address: %p and len: 0x%zx sequence: %u\n",
&mhi_buf->dma_addr, mhi_buf->len, sequence_id);
return 0;
}
/* Collect RDDM buffer during kernel panic */
@@ -198,10 +205,13 @@ static int mhi_fw_load_bhie(struct mhi_controller *mhi_cntrl,
mhi_write_reg(mhi_cntrl, base, BHIE_TXVECSIZE_OFFS, mhi_buf->len);
mhi_write_reg_field(mhi_cntrl, base, BHIE_TXVECDB_OFFS,
BHIE_TXVECDB_SEQNUM_BMSK, sequence_id);
ret = mhi_write_reg_field(mhi_cntrl, base, BHIE_TXVECDB_OFFS,
BHIE_TXVECDB_SEQNUM_BMSK, sequence_id);
read_unlock_bh(pm_lock);
if (ret)
return ret;
/* Wait for the image download to complete */
ret = wait_event_timeout(mhi_cntrl->state_event,
MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||

drivers/bus/mhi/host/init.c

@@ -107,9 +107,23 @@ static ssize_t oem_pk_hash_show(struct device *dev,
}
static DEVICE_ATTR_RO(oem_pk_hash);
static ssize_t soc_reset_store(struct device *dev,
struct device_attribute *attr,
const char *buf,
size_t count)
{
struct mhi_device *mhi_dev = to_mhi_device(dev);
struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
mhi_soc_reset(mhi_cntrl);
return count;
}
static DEVICE_ATTR_WO(soc_reset);
static struct attribute *mhi_dev_attrs[] = {
&dev_attr_serial_number.attr,
&dev_attr_oem_pk_hash.attr,
&dev_attr_soc_reset.attr,
NULL,
};
ATTRIBUTE_GROUPS(mhi_dev);
@@ -424,74 +438,65 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
struct device *dev = &mhi_cntrl->mhi_dev->dev;
struct {
u32 offset;
u32 mask;
u32 val;
} reg_info[] = {
{
CCABAP_HIGHER, U32_MAX,
CCABAP_HIGHER,
upper_32_bits(mhi_cntrl->mhi_ctxt->chan_ctxt_addr),
},
{
CCABAP_LOWER, U32_MAX,
CCABAP_LOWER,
lower_32_bits(mhi_cntrl->mhi_ctxt->chan_ctxt_addr),
},
{
ECABAP_HIGHER, U32_MAX,
ECABAP_HIGHER,
upper_32_bits(mhi_cntrl->mhi_ctxt->er_ctxt_addr),
},
{
ECABAP_LOWER, U32_MAX,
ECABAP_LOWER,
lower_32_bits(mhi_cntrl->mhi_ctxt->er_ctxt_addr),
},
{
CRCBAP_HIGHER, U32_MAX,
CRCBAP_HIGHER,
upper_32_bits(mhi_cntrl->mhi_ctxt->cmd_ctxt_addr),
},
{
CRCBAP_LOWER, U32_MAX,
CRCBAP_LOWER,
lower_32_bits(mhi_cntrl->mhi_ctxt->cmd_ctxt_addr),
},
{
MHICFG, MHICFG_NER_MASK,
mhi_cntrl->total_ev_rings,
},
{
MHICFG, MHICFG_NHWER_MASK,
mhi_cntrl->hw_ev_rings,
},
{
MHICTRLBASE_HIGHER, U32_MAX,
MHICTRLBASE_HIGHER,
upper_32_bits(mhi_cntrl->iova_start),
},
{
MHICTRLBASE_LOWER, U32_MAX,
MHICTRLBASE_LOWER,
lower_32_bits(mhi_cntrl->iova_start),
},
{
MHIDATABASE_HIGHER, U32_MAX,
MHIDATABASE_HIGHER,
upper_32_bits(mhi_cntrl->iova_start),
},
{
MHIDATABASE_LOWER, U32_MAX,
MHIDATABASE_LOWER,
lower_32_bits(mhi_cntrl->iova_start),
},
{
MHICTRLLIMIT_HIGHER, U32_MAX,
MHICTRLLIMIT_HIGHER,
upper_32_bits(mhi_cntrl->iova_stop),
},
{
MHICTRLLIMIT_LOWER, U32_MAX,
MHICTRLLIMIT_LOWER,
lower_32_bits(mhi_cntrl->iova_stop),
},
{
MHIDATALIMIT_HIGHER, U32_MAX,
MHIDATALIMIT_HIGHER,
upper_32_bits(mhi_cntrl->iova_stop),
},
{
MHIDATALIMIT_LOWER, U32_MAX,
MHIDATALIMIT_LOWER,
lower_32_bits(mhi_cntrl->iova_stop),
},
{ 0, 0, 0 }
{0, 0}
};
dev_dbg(dev, "Initializing MHI registers\n");
@@ -533,8 +538,22 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
/* Write to MMIO registers */
for (i = 0; reg_info[i].offset; i++)
mhi_write_reg_field(mhi_cntrl, base, reg_info[i].offset,
reg_info[i].mask, reg_info[i].val);
mhi_write_reg(mhi_cntrl, base, reg_info[i].offset,
reg_info[i].val);
ret = mhi_write_reg_field(mhi_cntrl, base, MHICFG, MHICFG_NER_MASK,
mhi_cntrl->total_ev_rings);
if (ret) {
dev_err(dev, "Unable to write MHICFG register\n");
return ret;
}
ret = mhi_write_reg_field(mhi_cntrl, base, MHICFG, MHICFG_NHWER_MASK,
mhi_cntrl->hw_ev_rings);
if (ret) {
dev_err(dev, "Unable to write MHICFG register\n");
return ret;
}
return 0;
}
@@ -1102,8 +1121,15 @@ int mhi_prepare_for_power_up(struct mhi_controller *mhi_cntrl)
*/
mhi_alloc_bhie_table(mhi_cntrl, &mhi_cntrl->rddm_image,
mhi_cntrl->rddm_size);
if (mhi_cntrl->rddm_image)
mhi_rddm_prepare(mhi_cntrl, mhi_cntrl->rddm_image);
if (mhi_cntrl->rddm_image) {
ret = mhi_rddm_prepare(mhi_cntrl,
mhi_cntrl->rddm_image);
if (ret) {
mhi_free_bhie_table(mhi_cntrl,
mhi_cntrl->rddm_image);
goto error_reg_offset;
}
}
}
mutex_unlock(&mhi_cntrl->pm_mutex);

drivers/bus/mhi/host/internal.h

@@ -324,8 +324,9 @@ int __must_check mhi_poll_reg_field(struct mhi_controller *mhi_cntrl,
u32 val, u32 delayus);
void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
u32 offset, u32 val);
void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
u32 offset, u32 mask, u32 val);
int __must_check mhi_write_reg_field(struct mhi_controller *mhi_cntrl,
void __iomem *base, u32 offset, u32 mask,
u32 val);
void mhi_ring_er_db(struct mhi_event *mhi_event);
void mhi_write_db(struct mhi_controller *mhi_cntrl, void __iomem *db_addr,
dma_addr_t db_val);
@@ -339,7 +340,7 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl);
void mhi_deinit_dev_ctxt(struct mhi_controller *mhi_cntrl);
int mhi_init_irq_setup(struct mhi_controller *mhi_cntrl);
void mhi_deinit_free_irq(struct mhi_controller *mhi_cntrl);
void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
int mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
struct image_info *img_info);
void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl);

drivers/bus/mhi/host/main.c

@@ -65,19 +65,22 @@ void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
mhi_cntrl->write_reg(mhi_cntrl, base + offset, val);
}
void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
u32 offset, u32 mask, u32 val)
int __must_check mhi_write_reg_field(struct mhi_controller *mhi_cntrl,
void __iomem *base, u32 offset, u32 mask,
u32 val)
{
int ret;
u32 tmp;
ret = mhi_read_reg(mhi_cntrl, base, offset, &tmp);
if (ret)
return;
return ret;
tmp &= ~mask;
tmp |= (val << __ffs(mask));
mhi_write_reg(mhi_cntrl, base, offset, tmp);
return 0;
}
void mhi_write_db(struct mhi_controller *mhi_cntrl, void __iomem *db_addr,
@@ -531,18 +534,13 @@ irqreturn_t mhi_intvec_handler(int irq_number, void *dev)
static void mhi_recycle_ev_ring_element(struct mhi_controller *mhi_cntrl,
struct mhi_ring *ring)
{
dma_addr_t ctxt_wp;
/* Update the WP */
ring->wp += ring->el_size;
ctxt_wp = le64_to_cpu(*ring->ctxt_wp) + ring->el_size;
if (ring->wp >= (ring->base + ring->len)) {
if (ring->wp >= (ring->base + ring->len))
ring->wp = ring->base;
ctxt_wp = ring->iommu_base;
}
*ring->ctxt_wp = cpu_to_le64(ctxt_wp);
*ring->ctxt_wp = cpu_to_le64(ring->iommu_base + (ring->wp - ring->base));
/* Update the RP */
ring->rp += ring->el_size;

drivers/bus/mhi/host/pci_generic.c

@@ -371,7 +371,16 @@ static const struct mhi_pci_dev_info mhi_foxconn_sdx55_info = {
.sideband_wake = false,
};
static const struct mhi_channel_config mhi_mv31_channels[] = {
static const struct mhi_pci_dev_info mhi_foxconn_sdx65_info = {
.name = "foxconn-sdx65",
.config = &modem_foxconn_sdx55_config,
.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
.dma_data_width = 32,
.mru_default = 32768,
.sideband_wake = false,
};
static const struct mhi_channel_config mhi_mv3x_channels[] = {
MHI_CHANNEL_CONFIG_UL(0, "LOOPBACK", 64, 0),
MHI_CHANNEL_CONFIG_DL(1, "LOOPBACK", 64, 0),
/* MBIM Control Channel */
@@ -382,25 +391,33 @@ static const struct mhi_channel_config mhi_mv31_channels[] = {
MHI_CHANNEL_CONFIG_HW_DL(101, "IP_HW0_MBIM", 512, 3),
};
static struct mhi_event_config mhi_mv31_events[] = {
static struct mhi_event_config mhi_mv3x_events[] = {
MHI_EVENT_CONFIG_CTRL(0, 256),
MHI_EVENT_CONFIG_DATA(1, 256),
MHI_EVENT_CONFIG_HW_DATA(2, 1024, 100),
MHI_EVENT_CONFIG_HW_DATA(3, 1024, 101),
};
static const struct mhi_controller_config modem_mv31_config = {
static const struct mhi_controller_config modem_mv3x_config = {
.max_channels = 128,
.timeout_ms = 20000,
.num_channels = ARRAY_SIZE(mhi_mv31_channels),
.ch_cfg = mhi_mv31_channels,
.num_events = ARRAY_SIZE(mhi_mv31_events),
.event_cfg = mhi_mv31_events,
.num_channels = ARRAY_SIZE(mhi_mv3x_channels),
.ch_cfg = mhi_mv3x_channels,
.num_events = ARRAY_SIZE(mhi_mv3x_events),
.event_cfg = mhi_mv3x_events,
};
static const struct mhi_pci_dev_info mhi_mv31_info = {
.name = "cinterion-mv31",
.config = &modem_mv31_config,
.config = &modem_mv3x_config,
.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
.dma_data_width = 32,
.mru_default = 32768,
};
static const struct mhi_pci_dev_info mhi_mv32_info = {
.name = "cinterion-mv32",
.config = &modem_mv3x_config,
.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
.dma_data_width = 32,
.mru_default = 32768,
@@ -446,20 +463,100 @@ static const struct mhi_pci_dev_info mhi_sierra_em919x_info = {
.sideband_wake = false,
};
static const struct mhi_channel_config mhi_telit_fn980_hw_v1_channels[] = {
MHI_CHANNEL_CONFIG_UL(14, "QMI", 32, 0),
MHI_CHANNEL_CONFIG_DL(15, "QMI", 32, 0),
MHI_CHANNEL_CONFIG_UL(20, "IPCR", 16, 0),
MHI_CHANNEL_CONFIG_DL_AUTOQUEUE(21, "IPCR", 16, 0),
MHI_CHANNEL_CONFIG_HW_UL(100, "IP_HW0", 128, 1),
MHI_CHANNEL_CONFIG_HW_DL(101, "IP_HW0", 128, 2),
};
static struct mhi_event_config mhi_telit_fn980_hw_v1_events[] = {
MHI_EVENT_CONFIG_CTRL(0, 128),
MHI_EVENT_CONFIG_HW_DATA(1, 1024, 100),
MHI_EVENT_CONFIG_HW_DATA(2, 2048, 101)
};
static struct mhi_controller_config modem_telit_fn980_hw_v1_config = {
.max_channels = 128,
.timeout_ms = 20000,
.num_channels = ARRAY_SIZE(mhi_telit_fn980_hw_v1_channels),
.ch_cfg = mhi_telit_fn980_hw_v1_channels,
.num_events = ARRAY_SIZE(mhi_telit_fn980_hw_v1_events),
.event_cfg = mhi_telit_fn980_hw_v1_events,
};
static const struct mhi_pci_dev_info mhi_telit_fn980_hw_v1_info = {
.name = "telit-fn980-hwv1",
.fw = "qcom/sdx55m/sbl1.mbn",
.edl = "qcom/sdx55m/edl.mbn",
.config = &modem_telit_fn980_hw_v1_config,
.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
.dma_data_width = 32,
.mru_default = 32768,
.sideband_wake = false,
};
static const struct mhi_channel_config mhi_telit_fn990_channels[] = {
MHI_CHANNEL_CONFIG_UL_SBL(2, "SAHARA", 32, 0),
MHI_CHANNEL_CONFIG_DL_SBL(3, "SAHARA", 32, 0),
MHI_CHANNEL_CONFIG_UL(4, "DIAG", 64, 1),
MHI_CHANNEL_CONFIG_DL(5, "DIAG", 64, 1),
MHI_CHANNEL_CONFIG_UL(12, "MBIM", 32, 0),
MHI_CHANNEL_CONFIG_DL(13, "MBIM", 32, 0),
MHI_CHANNEL_CONFIG_UL(32, "DUN", 32, 0),
MHI_CHANNEL_CONFIG_DL(33, "DUN", 32, 0),
MHI_CHANNEL_CONFIG_HW_UL(100, "IP_HW0_MBIM", 128, 2),
MHI_CHANNEL_CONFIG_HW_DL(101, "IP_HW0_MBIM", 128, 3),
};
static struct mhi_event_config mhi_telit_fn990_events[] = {
MHI_EVENT_CONFIG_CTRL(0, 128),
MHI_EVENT_CONFIG_DATA(1, 128),
MHI_EVENT_CONFIG_HW_DATA(2, 1024, 100),
MHI_EVENT_CONFIG_HW_DATA(3, 2048, 101)
};
static const struct mhi_controller_config modem_telit_fn990_config = {
.max_channels = 128,
.timeout_ms = 20000,
.num_channels = ARRAY_SIZE(mhi_telit_fn990_channels),
.ch_cfg = mhi_telit_fn990_channels,
.num_events = ARRAY_SIZE(mhi_telit_fn990_events),
.event_cfg = mhi_telit_fn990_events,
};
static const struct mhi_pci_dev_info mhi_telit_fn990_info = {
.name = "telit-fn990",
.config = &modem_telit_fn990_config,
.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
.dma_data_width = 32,
.sideband_wake = false,
.mru_default = 32768,
};
/* Keep the list sorted based on the PID. New VID should be added as the last entry */
static const struct pci_device_id mhi_pci_id_table[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0304),
.driver_data = (kernel_ulong_t) &mhi_qcom_sdx24_info },
/* EM919x (sdx55), use the same vid:pid as qcom-sdx55m */
{ PCI_DEVICE_SUB(PCI_VENDOR_ID_QCOM, 0x0306, 0x18d7, 0x0200),
.driver_data = (kernel_ulong_t) &mhi_sierra_em919x_info },
/* Telit FN980 hardware revision v1 */
{ PCI_DEVICE_SUB(PCI_VENDOR_ID_QCOM, 0x0306, 0x1C5D, 0x2000),
.driver_data = (kernel_ulong_t) &mhi_telit_fn980_hw_v1_info },
{ PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0306),
.driver_data = (kernel_ulong_t) &mhi_qcom_sdx55_info },
{ PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0304),
.driver_data = (kernel_ulong_t) &mhi_qcom_sdx24_info },
/* Telit FN990 */
{ PCI_DEVICE_SUB(PCI_VENDOR_ID_QCOM, 0x0308, 0x1c5d, 0x2010),
.driver_data = (kernel_ulong_t) &mhi_telit_fn990_info },
{ PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0308),
.driver_data = (kernel_ulong_t) &mhi_qcom_sdx65_info },
{ PCI_DEVICE(0x1eac, 0x1001), /* EM120R-GL (sdx24) */
.driver_data = (kernel_ulong_t) &mhi_quectel_em1xx_info },
{ PCI_DEVICE(0x1eac, 0x1002), /* EM160R-GL (sdx24) */
.driver_data = (kernel_ulong_t) &mhi_quectel_em1xx_info },
{ PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0308),
.driver_data = (kernel_ulong_t) &mhi_qcom_sdx65_info },
/* T99W175 (sdx55), Both for eSIM and Non-eSIM */
{ PCI_DEVICE(PCI_VENDOR_ID_FOXCONN, 0xe0ab),
.driver_data = (kernel_ulong_t) &mhi_foxconn_sdx55_info },
@@ -472,9 +569,21 @@ static const struct pci_device_id mhi_pci_id_table[] = {
/* T99W175 (sdx55), Based on Qualcomm new baseline */
{ PCI_DEVICE(PCI_VENDOR_ID_FOXCONN, 0xe0bf),
.driver_data = (kernel_ulong_t) &mhi_foxconn_sdx55_info },
/* T99W368 (sdx65) */
{ PCI_DEVICE(PCI_VENDOR_ID_FOXCONN, 0xe0d8),
.driver_data = (kernel_ulong_t) &mhi_foxconn_sdx65_info },
/* T99W373 (sdx62) */
{ PCI_DEVICE(PCI_VENDOR_ID_FOXCONN, 0xe0d9),
.driver_data = (kernel_ulong_t) &mhi_foxconn_sdx65_info },
/* MV31-W (Cinterion) */
{ PCI_DEVICE(0x1269, 0x00b3),
.driver_data = (kernel_ulong_t) &mhi_mv31_info },
/* MV32-WA (Cinterion) */
{ PCI_DEVICE(0x1269, 0x00ba),
.driver_data = (kernel_ulong_t) &mhi_mv32_info },
/* MV32-WB (Cinterion) */
{ PCI_DEVICE(0x1269, 0x00bb),
.driver_data = (kernel_ulong_t) &mhi_mv32_info },
{ }
};
MODULE_DEVICE_TABLE(pci, mhi_pci_id_table);

drivers/bus/mhi/host/pm.c

@@ -129,13 +129,20 @@ enum mhi_pm_state __must_check mhi_tryset_pm_state(struct mhi_controller *mhi_cn
void mhi_set_mhi_state(struct mhi_controller *mhi_cntrl, enum mhi_state state)
{
struct device *dev = &mhi_cntrl->mhi_dev->dev;
int ret;
if (state == MHI_STATE_RESET) {
mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
MHICTRL_RESET_MASK, 1);
ret = mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
MHICTRL_RESET_MASK, 1);
} else {
mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
MHICTRL_MHISTATE_MASK, state);
ret = mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
MHICTRL_MHISTATE_MASK, state);
}
if (ret)
dev_err(dev, "Failed to set MHI state to: %s\n",
mhi_state_str(state));
}
/* NOP for backward compatibility, host allowed to ring DB in M2 state */
@@ -476,6 +483,15 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl)
* hence re-program it
*/
mhi_write_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_INTVEC, 0);
if (!MHI_IN_PBL(mhi_get_exec_env(mhi_cntrl))) {
/* wait for ready to be set */
ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs,
MHISTATUS,
MHISTATUS_READY_MASK, 1, 25000);
if (ret)
dev_err(dev, "Device failed to enter READY state\n");
}
}
dev_dbg(dev,