Merge tag 'drm-msm-next-2021-12-26' of ssh://gitlab.freedesktop.org/drm/msm into drm-next

* dpu plane state cleanup in prep for multirect
* dpu debugfs cleanup (and moving things to atomic_print_state) in prep
  for multirect
* dp support for sc7280
* struct_mutex removal
* include more GMU state in gpu devcore dumps
* add support for a506
* remove old eDP sub-driver (it was never used by any upstream-supported
  devices, and modern things with eDP will use the DP sub-driver instead)
* debugfs to disable hw gpu hang detect (for igt tests)
* debugfs for dumping display hw state
* and the usual assortment of cleanup and bug fixes

There still seems to be a timing issue with dpu, showing up on sc7180
devices, after the bridge probe-order change. I.e. things work great if
the loglevel is high enough (or enough debug options are enabled, etc.).
We'll continue to debug this in the new year.

Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Rob Clark <robdclark@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/CAF6AEGs+vwr0nkwgYzuYAsCoHtypWpWav+yVvLZGsEJy8tJ56A@mail.gmail.com
commit 2b534e90a1 by Dave Airlie, 2021-12-29 14:02:10 +10:00
82 changed files with 1434 additions and 3906 deletions


@ -10,10 +10,12 @@
# Please keep this list dictionary sorted.
#
Aaron Durbin <adurbin@google.com>
Abhinav Kumar <quic_abhinavk@quicinc.com> <abhinavk@codeaurora.org>
Adam Oldham <oldhamca@gmail.com>
Adam Radford <aradford@gmail.com>
Adriana Reus <adi.reus@gmail.com> <adriana.reus@intel.com>
Adrian Bunk <bunk@stusta.de>
Akhil P Oommen <quic_akhilpo@quicinc.com> <akhilpo@codeaurora.org>
Alan Cox <alan@lxorguk.ukuu.org.uk>
Alan Cox <root@hraefn.swansea.linux.org.uk>
Aleksandar Markovic <aleksandar.markovic@mips.com> <aleksandar.markovic@imgtec.com>
@ -172,6 +174,7 @@ Jeff Layton <jlayton@kernel.org> <jlayton@redhat.com>
Jens Axboe <axboe@suse.de>
Jens Osterkamp <Jens.Osterkamp@de.ibm.com>
Jernej Skrabec <jernej.skrabec@gmail.com> <jernej.skrabec@siol.net>
Jessica Zhang <quic_jesszhan@quicinc.com> <jesszhan@codeaurora.org>
Jiri Slaby <jirislaby@kernel.org> <jirislaby@gmail.com>
Jiri Slaby <jirislaby@kernel.org> <jslaby@novell.com>
Jiri Slaby <jirislaby@kernel.org> <jslaby@suse.com>
@ -191,6 +194,7 @@ Juha Yrjola <at solidboot.com>
Juha Yrjola <juha.yrjola@nokia.com>
Juha Yrjola <juha.yrjola@solidboot.com>
Julien Thierry <julien.thierry.kdev@gmail.com> <julien.thierry@arm.com>
Kalyan Thota <quic_kalyant@quicinc.com> <kalyan_t@codeaurora.org>
Kay Sievers <kay.sievers@vrfy.org>
Kees Cook <keescook@chromium.org> <kees.cook@canonical.com>
Kees Cook <keescook@chromium.org> <keescook@google.com>
@ -202,9 +206,11 @@ Kenneth W Chen <kenneth.w.chen@intel.com>
Konstantin Khlebnikov <koct9i@gmail.com> <khlebnikov@yandex-team.ru>
Konstantin Khlebnikov <koct9i@gmail.com> <k.khlebnikov@samsung.com>
Koushik <raghavendra.koushik@neterion.com>
Krishna Manikandan <quic_mkrishn@quicinc.com> <mkrishn@codeaurora.org>
Krzysztof Kozlowski <krzk@kernel.org> <k.kozlowski.k@gmail.com>
Krzysztof Kozlowski <krzk@kernel.org> <k.kozlowski@samsung.com>
Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Kuogee Hsieh <quic_khsieh@quicinc.com> <khsieh@codeaurora.org>
Leonardo Bras <leobras.c@gmail.com> <leonardo@linux.ibm.com>
Leonid I Ananiev <leonid.i.ananiev@intel.com>
Leon Romanovsky <leon@kernel.org> <leon@leon.nu>
@ -311,6 +317,7 @@ Qais Yousef <qsyousef@gmail.com> <qais.yousef@imgtec.com>
Quentin Monnet <quentin@isovalent.com> <quentin.monnet@netronome.com>
Quentin Perret <qperret@qperret.net> <quentin.perret@arm.com>
Rafael J. Wysocki <rjw@rjwysocki.net> <rjw@sisk.pl>
Rajeev Nandan <quic_rajeevny@quicinc.com> <rajeevny@codeaurora.org>
Rajesh Shah <rajesh.shah@intel.com>
Ralf Baechle <ralf@linux-mips.org>
Ralf Wildenhues <Ralf.Wildenhues@gmx.de>
@ -325,6 +332,7 @@ Rui Saraiva <rmps@joel.ist.utl.pt>
Sachin P Sant <ssant@in.ibm.com>
Sakari Ailus <sakari.ailus@linux.intel.com> <sakari.ailus@iki.fi>
Sam Ravnborg <sam@mars.ravnborg.org>
Sankeerth Billakanti <quic_sbillaka@quicinc.com> <sbillaka@codeaurora.org>
Santosh Shilimkar <santosh.shilimkar@oracle.org>
Santosh Shilimkar <ssantosh@kernel.org>
Sarangdhar Joshi <spjoshi@codeaurora.org>


@ -17,6 +17,8 @@ properties:
compatible:
enum:
- qcom,sc7180-dp
- qcom,sc7280-dp
- qcom,sc7280-edp
- qcom,sc8180x-dp
- qcom,sc8180x-edp


@ -1,56 +0,0 @@
Qualcomm Technologies Inc. adreno/snapdragon eDP output
Required properties:
- compatible:
* "qcom,mdss-edp"
- reg: Physical base address and length of the registers of controller and PLL
- reg-names: The names of register regions. The following regions are required:
* "edp"
* "pll_base"
- interrupts: The interrupt signal from the eDP block.
- power-domains: Should be <&mmcc MDSS_GDSC>.
- clocks: device clocks
See Documentation/devicetree/bindings/clock/clock-bindings.txt for details.
- clock-names: the following clocks are required:
* "core"
* "iface"
* "mdp_core"
* "pixel"
* "link"
- #clock-cells: The value should be 1.
- vdda-supply: phandle to vdda regulator device node
- lvl-vdd-supply: phandle to regulator device node which is used to supply power
to HPD receiving chip
- panel-en-gpios: GPIO pin to supply power to panel.
- panel-hpd-gpios: GPIO pin used for eDP hpd.
Example:
mdss_edp: qcom,mdss_edp@fd923400 {
compatible = "qcom,mdss-edp";
reg-names =
"edp",
"pll_base";
reg = <0xfd923400 0x700>,
<0xfd923a00 0xd4>;
interrupt-parent = <&mdss_mdp>;
interrupts = <12 0>;
power-domains = <&mmcc MDSS_GDSC>;
clock-names =
"core",
"pixel",
"iface",
"link",
"mdp_core";
clocks =
<&mmcc MDSS_EDPAUX_CLK>,
<&mmcc MDSS_EDPPIXEL_CLK>,
<&mmcc MDSS_AHB_CLK>,
<&mmcc MDSS_EDPLINK_CLK>,
<&mmcc MDSS_MDP_CLK>;
#clock-cells = <1>;
vdda-supply = <&pma8084_l12>;
lvl-vdd-supply = <&lvl_vreg>;
panel-en-gpios = <&tlmm 137 0>;
panel-hpd-gpios = <&tlmm 103 0>;
};


@ -6050,6 +6050,7 @@ F: drivers/gpu/drm/tiny/mi0283qt.c
DRM DRIVER FOR MSM ADRENO GPU
M: Rob Clark <robdclark@gmail.com>
M: Sean Paul <sean@poorly.run>
R: Abhinav Kumar <quic_abhinavk@quicinc.com>
L: linux-arm-msm@vger.kernel.org
L: dri-devel@lists.freedesktop.org
L: freedreno@lists.freedesktop.org


@ -65,6 +65,7 @@ config DRM_MSM_HDMI_HDCP
config DRM_MSM_DP
bool "Enable DisplayPort support in MSM DRM driver"
depends on DRM_MSM
select RATIONAL
default y
help
Compile in support for DP driver in MSM DRM driver. DP external

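The new "select RATIONAL" pulls in lib/rational.c, whose single helper computes best rational approximations under a size bound; presumably the DP driver uses it to derive M/N divider pairs for clock programming. A hedged illustration, with a hypothetical caller and made-up 16-bit field limits:

#include <linux/rational.h>

/* Hypothetical caller: approximate pixel_clk/link_clk as m/n, with both
 * values constrained to (made-up) 16-bit hardware fields. */
static void example_mn(unsigned long pixel_clk, unsigned long link_clk,
		       unsigned long *m, unsigned long *n)
{
	rational_best_approximation(pixel_clk, link_clk,
				    (1UL << 16) - 1, (1UL << 16) - 1,
				    m, n);
}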

@ -19,7 +19,7 @@ msm-y := \
hdmi/hdmi.o \
hdmi/hdmi_audio.o \
hdmi/hdmi_bridge.o \
hdmi/hdmi_connector.o \
hdmi/hdmi_hpd.o \
hdmi/hdmi_i2c.o \
hdmi/hdmi_phy.o \
hdmi/hdmi_phy_8960.o \
@ -27,12 +27,6 @@ msm-y := \
hdmi/hdmi_phy_8x60.o \
hdmi/hdmi_phy_8x74.o \
hdmi/hdmi_pll_8960.o \
edp/edp.o \
edp/edp_aux.o \
edp/edp_bridge.o \
edp/edp_connector.o \
edp/edp_ctrl.o \
edp/edp_phy.o \
disp/mdp_format.o \
disp/mdp_kms.o \
disp/mdp4/mdp4_crtc.o \


@ -12,7 +12,6 @@ static bool a2xx_idle(struct msm_gpu *gpu);
static void a2xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
{
struct msm_drm_private *priv = gpu->dev->dev_private;
struct msm_ringbuffer *ring = submit->ring;
unsigned int i;
@ -23,7 +22,7 @@ static void a2xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
break;
case MSM_SUBMIT_CMD_CTX_RESTORE_BUF:
/* ignore if there has not been a ctx switch: */
if (priv->lastctx == submit->queue->ctx)
if (gpu->cur_ctx_seqno == submit->queue->ctx->seqno)
break;
fallthrough;
case MSM_SUBMIT_CMD_BUF:

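This is the first of several hunks (repeated below for a3xx, a4xx, a5xx and a6xx) replacing the priv->lastctx pointer comparison with gpu->cur_ctx_seqno. A minimal user-space sketch of the idea, with made-up names: comparing by seqno avoids a false match when a freed context's address gets reused for a new context.

#include <stdbool.h>

struct ctx {
	unsigned int seqno;	/* uniquely assigned, never reused, never 0 */
};

static unsigned int cur_ctx_seqno;	/* 0 means "no context current" */

static bool ctx_is_current(const struct ctx *ctx)
{
	/* A pointer compare could match a recycled allocation at the
	 * same address; a seqno compare cannot. */
	return ctx->seqno == cur_ctx_seqno;
}

static void ctx_switch(const struct ctx *ctx)
{
	/* ... reprogram pagetables for the new context ... */
	cur_ctx_seqno = ctx->seqno;
}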

@ -30,7 +30,6 @@ static bool a3xx_idle(struct msm_gpu *gpu);
static void a3xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
{
struct msm_drm_private *priv = gpu->dev->dev_private;
struct msm_ringbuffer *ring = submit->ring;
unsigned int i;
@ -41,7 +40,7 @@ static void a3xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
break;
case MSM_SUBMIT_CMD_CTX_RESTORE_BUF:
/* ignore if there has not been a ctx switch: */
if (priv->lastctx == submit->queue->ctx)
if (gpu->cur_ctx_seqno == submit->queue->ctx->seqno)
break;
fallthrough;
case MSM_SUBMIT_CMD_BUF:


@ -24,7 +24,6 @@ static bool a4xx_idle(struct msm_gpu *gpu);
static void a4xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
{
struct msm_drm_private *priv = gpu->dev->dev_private;
struct msm_ringbuffer *ring = submit->ring;
unsigned int i;
@ -35,7 +34,7 @@ static void a4xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
break;
case MSM_SUBMIT_CMD_CTX_RESTORE_BUF:
/* ignore if there has not been a ctx switch: */
if (priv->lastctx == submit->queue->ctx)
if (gpu->cur_ctx_seqno == submit->queue->ctx->seqno)
break;
fallthrough;
case MSM_SUBMIT_CMD_BUF:


@ -107,7 +107,7 @@ reset_set(void *data, u64 val)
* try to reset an active GPU.
*/
mutex_lock(&dev->struct_mutex);
mutex_lock(&gpu->lock);
release_firmware(adreno_gpu->fw[ADRENO_FW_PM4]);
adreno_gpu->fw[ADRENO_FW_PM4] = NULL;
@ -133,7 +133,7 @@ reset_set(void *data, u64 val)
gpu->funcs->recover(gpu);
pm_runtime_put_sync(&gpu->pdev->dev);
mutex_unlock(&dev->struct_mutex);
mutex_unlock(&gpu->lock);
return 0;
}


@ -65,7 +65,6 @@ void a5xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
static void a5xx_submit_in_rb(struct msm_gpu *gpu, struct msm_gem_submit *submit)
{
struct msm_drm_private *priv = gpu->dev->dev_private;
struct msm_ringbuffer *ring = submit->ring;
struct msm_gem_object *obj;
uint32_t *ptr, dwords;
@ -76,7 +75,7 @@ static void a5xx_submit_in_rb(struct msm_gpu *gpu, struct msm_gem_submit *submit
case MSM_SUBMIT_CMD_IB_TARGET_BUF:
break;
case MSM_SUBMIT_CMD_CTX_RESTORE_BUF:
if (priv->lastctx == submit->queue->ctx)
if (gpu->cur_ctx_seqno == submit->queue->ctx->seqno)
break;
fallthrough;
case MSM_SUBMIT_CMD_BUF:
@ -126,12 +125,11 @@ static void a5xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
struct a5xx_gpu *a5xx_gpu = to_a5xx_gpu(adreno_gpu);
struct msm_drm_private *priv = gpu->dev->dev_private;
struct msm_ringbuffer *ring = submit->ring;
unsigned int i, ibs = 0;
if (IS_ENABLED(CONFIG_DRM_MSM_GPU_SUDO) && submit->in_rb) {
priv->lastctx = NULL;
gpu->cur_ctx_seqno = 0;
a5xx_submit_in_rb(gpu, submit);
return;
}
@ -166,7 +164,7 @@ static void a5xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
case MSM_SUBMIT_CMD_IB_TARGET_BUF:
break;
case MSM_SUBMIT_CMD_CTX_RESTORE_BUF:
if (priv->lastctx == submit->queue->ctx)
if (gpu->cur_ctx_seqno == submit->queue->ctx->seqno)
break;
fallthrough;
case MSM_SUBMIT_CMD_BUF:
@ -441,7 +439,7 @@ void a5xx_set_hwcg(struct msm_gpu *gpu, bool state)
const struct adreno_five_hwcg_regs *regs;
unsigned int i, sz;
if (adreno_is_a508(adreno_gpu)) {
if (adreno_is_a506(adreno_gpu) || adreno_is_a508(adreno_gpu)) {
regs = a50x_hwcg;
sz = ARRAY_SIZE(a50x_hwcg);
} else if (adreno_is_a509(adreno_gpu) || adreno_is_a512(adreno_gpu)) {
@ -485,7 +483,7 @@ static int a5xx_me_init(struct msm_gpu *gpu)
OUT_RING(ring, 0x00000000);
/* Specify workarounds for various microcode issues */
if (adreno_is_a530(adreno_gpu)) {
if (adreno_is_a506(adreno_gpu) || adreno_is_a530(adreno_gpu)) {
/* Workaround for token end syncs
* Force a WFI after every direct-render 3D mode draw and every
* 2D mode 3 draw
@ -620,8 +618,16 @@ static int a5xx_ucode_init(struct msm_gpu *gpu)
static int a5xx_zap_shader_resume(struct msm_gpu *gpu)
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
int ret;
/*
* Adreno 506 has the CPZ retention feature and doesn't require
* the zap shader to be resumed
*/
if (adreno_is_a506(adreno_gpu))
return 0;
ret = qcom_scm_set_remote_state(SCM_GPU_ZAP_SHADER_RESUME, GPU_PAS_ID);
if (ret)
DRM_ERROR("%s: zap-shader resume failed: %d\n",
@ -733,9 +739,10 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
0x00100000 + adreno_gpu->gmem - 1);
gpu_write(gpu, REG_A5XX_UCHE_GMEM_RANGE_MAX_HI, 0x00000000);
if (adreno_is_a508(adreno_gpu) || adreno_is_a510(adreno_gpu)) {
if (adreno_is_a506(adreno_gpu) || adreno_is_a508(adreno_gpu) ||
adreno_is_a510(adreno_gpu)) {
gpu_write(gpu, REG_A5XX_CP_MEQ_THRESHOLDS, 0x20);
if (adreno_is_a508(adreno_gpu))
if (adreno_is_a506(adreno_gpu) || adreno_is_a508(adreno_gpu))
gpu_write(gpu, REG_A5XX_CP_MERCIU_SIZE, 0x400);
else
gpu_write(gpu, REG_A5XX_CP_MERCIU_SIZE, 0x20);
@ -751,7 +758,7 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
gpu_write(gpu, REG_A5XX_CP_ROQ_THRESHOLDS_1, 0x40201B16);
}
if (adreno_is_a508(adreno_gpu))
if (adreno_is_a506(adreno_gpu) || adreno_is_a508(adreno_gpu))
gpu_write(gpu, REG_A5XX_PC_DBG_ECO_CNTL,
(0x100 << 11 | 0x100 << 22));
else if (adreno_is_a509(adreno_gpu) || adreno_is_a510(adreno_gpu) ||
@ -769,8 +776,8 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
* Disable the RB sampler datapath DP2 clock gating optimization
* for 1-SP GPUs, as it is enabled by default.
*/
if (adreno_is_a508(adreno_gpu) || adreno_is_a509(adreno_gpu) ||
adreno_is_a512(adreno_gpu))
if (adreno_is_a506(adreno_gpu) || adreno_is_a508(adreno_gpu) ||
adreno_is_a509(adreno_gpu) || adreno_is_a512(adreno_gpu))
gpu_rmw(gpu, REG_A5XX_RB_DBG_ECO_CNTL, 0, (1 << 9));
/* Disable UCHE global filter as SP can invalidate/flush independently */
@ -851,10 +858,8 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
/* UCHE */
gpu_write(gpu, REG_A5XX_CP_PROTECT(16), ADRENO_PROTECT_RW(0xE80, 16));
if (adreno_is_a508(adreno_gpu) || adreno_is_a509(adreno_gpu) ||
adreno_is_a510(adreno_gpu) || adreno_is_a512(adreno_gpu) ||
adreno_is_a530(adreno_gpu))
gpu_write(gpu, REG_A5XX_CP_PROTECT(17),
/* SMMU */
gpu_write(gpu, REG_A5XX_CP_PROTECT(17),
ADRENO_PROTECT_RW(0x10000, 0x8000));
gpu_write(gpu, REG_A5XX_RBBM_SECVID_TSB_CNTL, 0);
@ -895,8 +900,7 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
if (ret)
return ret;
if (!(adreno_is_a508(adreno_gpu) || adreno_is_a509(adreno_gpu) ||
adreno_is_a510(adreno_gpu) || adreno_is_a512(adreno_gpu)))
if (adreno_is_a530(adreno_gpu) || adreno_is_a540(adreno_gpu))
a5xx_gpmu_ucode_init(gpu);
ret = a5xx_ucode_init(gpu);
@ -927,6 +931,8 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
if (IS_ERR(a5xx_gpu->shadow))
return PTR_ERR(a5xx_gpu->shadow);
msm_gem_object_set_name(a5xx_gpu->shadow_bo, "shadow");
}
gpu_write64(gpu, REG_A5XX_CP_RB_RPTR_ADDR,
@ -1254,6 +1260,7 @@ static void a5xx_fault_detect_irq(struct msm_gpu *gpu)
static irqreturn_t a5xx_irq(struct msm_gpu *gpu)
{
struct msm_drm_private *priv = gpu->dev->dev_private;
u32 status = gpu_read(gpu, REG_A5XX_RBBM_INT_0_STATUS);
/*
@ -1263,6 +1270,11 @@ static irqreturn_t a5xx_irq(struct msm_gpu *gpu)
gpu_write(gpu, REG_A5XX_RBBM_INT_CLEAR_CMD,
status & ~A5XX_RBBM_INT_0_MASK_RBBM_AHB_ERROR);
if (priv->disable_err_irq) {
status &= A5XX_RBBM_INT_0_MASK_CP_CACHE_FLUSH_TS |
A5XX_RBBM_INT_0_MASK_CP_SW;
}
/* Pass status to a5xx_rbbm_err_irq because we've already cleared it */
if (status & RBBM_ERROR_MASK)
a5xx_rbbm_err_irq(gpu, status);
@ -1338,7 +1350,7 @@ static int a5xx_pm_resume(struct msm_gpu *gpu)
if (ret)
return ret;
/* Adreno 508, 509, 510, 512 needs manual RBBM sus/res control */
/* Adreno 506, 508, 509, 510, 512 need manual RBBM sus/res control */
if (!(adreno_is_a530(adreno_gpu) || adreno_is_a540(adreno_gpu))) {
/* Halt the sp_input_clk at HM level */
gpu_write(gpu, REG_A5XX_RBBM_CLOCK_CNTL, 0x00000055);
@ -1381,8 +1393,9 @@ static int a5xx_pm_suspend(struct msm_gpu *gpu)
u32 mask = 0xf;
int i, ret;
/* A508, A510 have 3 XIN ports in VBIF */
if (adreno_is_a508(adreno_gpu) || adreno_is_a510(adreno_gpu))
/* A506, A508, A510 have 3 XIN ports in VBIF */
if (adreno_is_a506(adreno_gpu) || adreno_is_a508(adreno_gpu) ||
adreno_is_a510(adreno_gpu))
mask = 0x7;
/* Clear the VBIF pipe before shutting down */

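The disable_err_irq path above (and its a6xx counterpart below) implements the "debugfs to disable hw gpu hang detect (for igt tests)" item from the commit message: when set, the IRQ status word is masked down to only the bits needed for forward progress, so hang/error interrupts are simply never serviced. A standalone sketch of the masking, with made-up bit positions:

#include <stdint.h>

#define IRQ_CP_CACHE_FLUSH_TS	(1u << 20)	/* fence signalled (made-up bit) */
#define IRQ_CP_SW		(1u << 19)	/* software interrupt (made-up bit) */

static uint32_t irq_filter(uint32_t status, int disable_err_irq)
{
	if (disable_err_irq)
		status &= IRQ_CP_CACHE_FLUSH_TS | IRQ_CP_SW;
	return status;	/* error and hang-detect bits are dropped, not handled */
}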

@ -1146,7 +1146,7 @@ static void a6xx_gmu_memory_free(struct a6xx_gmu *gmu)
}
static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo,
size_t size, u64 iova)
size_t size, u64 iova, const char *name)
{
struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu);
struct drm_device *dev = a6xx_gpu->base.base.dev;
@ -1181,6 +1181,8 @@ static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo,
bo->virt = msm_gem_get_vaddr(bo->obj);
bo->size = size;
msm_gem_object_set_name(bo->obj, name);
return 0;
}
@ -1515,7 +1517,8 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
*/
gmu->dummy.size = SZ_4K;
if (adreno_is_a660_family(adreno_gpu)) {
ret = a6xx_gmu_memory_alloc(gmu, &gmu->debug, SZ_4K * 7, 0x60400000);
ret = a6xx_gmu_memory_alloc(gmu, &gmu->debug, SZ_4K * 7,
0x60400000, "debug");
if (ret)
goto err_memory;
@ -1523,42 +1526,46 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
}
/* Allocate memory for the GMU dummy page */
ret = a6xx_gmu_memory_alloc(gmu, &gmu->dummy, gmu->dummy.size, 0x60000000);
ret = a6xx_gmu_memory_alloc(gmu, &gmu->dummy, gmu->dummy.size,
0x60000000, "dummy");
if (ret)
goto err_memory;
/* Note that a650 family also includes a660 family: */
if (adreno_is_a650_family(adreno_gpu)) {
ret = a6xx_gmu_memory_alloc(gmu, &gmu->icache,
SZ_16M - SZ_16K, 0x04000);
SZ_16M - SZ_16K, 0x04000, "icache");
if (ret)
goto err_memory;
} else if (adreno_is_a640_family(adreno_gpu)) {
ret = a6xx_gmu_memory_alloc(gmu, &gmu->icache,
SZ_256K - SZ_16K, 0x04000);
SZ_256K - SZ_16K, 0x04000, "icache");
if (ret)
goto err_memory;
ret = a6xx_gmu_memory_alloc(gmu, &gmu->dcache,
SZ_256K - SZ_16K, 0x44000);
SZ_256K - SZ_16K, 0x44000, "dcache");
if (ret)
goto err_memory;
} else {
BUG_ON(adreno_is_a660_family(adreno_gpu));
/* HFI v1, has sptprac */
gmu->legacy = true;
/* Allocate memory for the GMU debug region */
ret = a6xx_gmu_memory_alloc(gmu, &gmu->debug, SZ_16K, 0);
ret = a6xx_gmu_memory_alloc(gmu, &gmu->debug, SZ_16K, 0, "debug");
if (ret)
goto err_memory;
}
/* Allocate memory for the HFI queues */
ret = a6xx_gmu_memory_alloc(gmu, &gmu->hfi, SZ_16K, 0);
ret = a6xx_gmu_memory_alloc(gmu, &gmu->hfi, SZ_16K, 0, "hfi");
if (ret)
goto err_memory;
/* Allocate memory for the GMU log region */
ret = a6xx_gmu_memory_alloc(gmu, &gmu->log, SZ_4K, 0);
ret = a6xx_gmu_memory_alloc(gmu, &gmu->log, SZ_4K, 0, "log");
if (ret)
goto err_memory;


@ -106,7 +106,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
u32 asid;
u64 memptr = rbmemptr(ring, ttbr0);
if (ctx->seqno == a6xx_gpu->cur_ctx_seqno)
if (ctx->seqno == a6xx_gpu->base.base.cur_ctx_seqno)
return;
if (msm_iommu_pagetable_params(ctx->aspace->mmu, &ttbr, &asid))
@ -138,14 +138,11 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
OUT_PKT7(ring, CP_EVENT_WRITE, 1);
OUT_RING(ring, 0x31);
a6xx_gpu->cur_ctx_seqno = ctx->seqno;
}
static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
{
unsigned int index = submit->seqno % MSM_GPU_SUBMIT_STATS_COUNT;
struct msm_drm_private *priv = gpu->dev->dev_private;
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
struct msm_ringbuffer *ring = submit->ring;
@ -177,7 +174,7 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
case MSM_SUBMIT_CMD_IB_TARGET_BUF:
break;
case MSM_SUBMIT_CMD_CTX_RESTORE_BUF:
if (priv->lastctx == submit->queue->ctx)
if (gpu->cur_ctx_seqno == submit->queue->ctx->seqno)
break;
fallthrough;
case MSM_SUBMIT_CMD_BUF:
@ -1071,6 +1068,8 @@ static int hw_init(struct msm_gpu *gpu)
if (IS_ERR(a6xx_gpu->shadow))
return PTR_ERR(a6xx_gpu->shadow);
msm_gem_object_set_name(a6xx_gpu->shadow_bo, "shadow");
}
gpu_write64(gpu, REG_A6XX_CP_RB_RPTR_ADDR_LO,
@ -1081,7 +1080,7 @@ static int hw_init(struct msm_gpu *gpu)
/* Always come up on rb 0 */
a6xx_gpu->cur_ring = gpu->rb[0];
a6xx_gpu->cur_ctx_seqno = 0;
gpu->cur_ctx_seqno = 0;
/* Enable the SQE to start the CP engine */
gpu_write(gpu, REG_A6XX_CP_SQE_CNTL, 1);
@ -1376,10 +1375,14 @@ static void a6xx_fault_detect_irq(struct msm_gpu *gpu)
static irqreturn_t a6xx_irq(struct msm_gpu *gpu)
{
struct msm_drm_private *priv = gpu->dev->dev_private;
u32 status = gpu_read(gpu, REG_A6XX_RBBM_INT_0_STATUS);
gpu_write(gpu, REG_A6XX_RBBM_INT_CLEAR_CMD, status);
if (priv->disable_err_irq)
status &= A6XX_RBBM_INT_0_MASK_CP_CACHE_FLUSH_TS;
if (status & A6XX_RBBM_INT_0_MASK_RBBM_HANG_DETECT)
a6xx_fault_detect_irq(gpu);


@ -20,16 +20,6 @@ struct a6xx_gpu {
struct msm_ringbuffer *cur_ring;
/**
* cur_ctx_seqno:
*
* The ctx->seqno value of the context with current pgtables
* installed. Tracked by seqno rather than pointer value to
* avoid dangling pointers, and cases where a ctx can be freed
* and a new one created with the same address.
*/
int cur_ctx_seqno;
struct a6xx_gmu gmu;
struct drm_gem_object *shadow_bo;


@ -42,7 +42,15 @@ struct a6xx_gpu_state {
struct a6xx_gpu_state_obj *cx_debugbus;
int nr_cx_debugbus;
struct msm_gpu_state_bo *gmu_log;
struct msm_gpu_state_bo *gmu_hfi;
struct msm_gpu_state_bo *gmu_debug;
s32 hfi_queue_history[2][HFI_HISTORY_SZ];
struct list_head objs;
bool gpu_initialized;
};
static inline int CRASHDUMP_WRITE(u64 *in, u32 reg, u32 val)
@ -800,6 +808,45 @@ static void a6xx_get_gmu_registers(struct msm_gpu *gpu,
&a6xx_state->gmu_registers[2], false);
}
static struct msm_gpu_state_bo *a6xx_snapshot_gmu_bo(
struct a6xx_gpu_state *a6xx_state, struct a6xx_gmu_bo *bo)
{
struct msm_gpu_state_bo *snapshot;
snapshot = state_kcalloc(a6xx_state, 1, sizeof(*snapshot));
if (!snapshot)
return NULL;
snapshot->iova = bo->iova;
snapshot->size = bo->size;
snapshot->data = kvzalloc(snapshot->size, GFP_KERNEL);
if (!snapshot->data)
return NULL;
memcpy(snapshot->data, bo->virt, bo->size);
return snapshot;
}
static void a6xx_snapshot_gmu_hfi_history(struct msm_gpu *gpu,
struct a6xx_gpu_state *a6xx_state)
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
struct a6xx_gmu *gmu = &a6xx_gpu->gmu;
unsigned i, j;
BUILD_BUG_ON(ARRAY_SIZE(gmu->queues) != ARRAY_SIZE(a6xx_state->hfi_queue_history));
for (i = 0; i < ARRAY_SIZE(gmu->queues); i++) {
struct a6xx_hfi_queue *queue = &gmu->queues[i];
for (j = 0; j < HFI_HISTORY_SZ; j++) {
unsigned idx = (j + queue->history_idx) % HFI_HISTORY_SZ;
a6xx_state->hfi_queue_history[i][j] = queue->history[idx];
}
}
}
#define A6XX_GBIF_REGLIST_SIZE 1
static void a6xx_get_registers(struct msm_gpu *gpu,
struct a6xx_gpu_state *a6xx_state,
@ -937,6 +984,12 @@ struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu)
a6xx_get_gmu_registers(gpu, a6xx_state);
a6xx_state->gmu_log = a6xx_snapshot_gmu_bo(a6xx_state, &a6xx_gpu->gmu.log);
a6xx_state->gmu_hfi = a6xx_snapshot_gmu_bo(a6xx_state, &a6xx_gpu->gmu.hfi);
a6xx_state->gmu_debug = a6xx_snapshot_gmu_bo(a6xx_state, &a6xx_gpu->gmu.debug);
a6xx_snapshot_gmu_hfi_history(gpu, a6xx_state);
/* If GX isn't on the rest of the data isn't going to be accessible */
if (!a6xx_gmu_gx_is_on(&a6xx_gpu->gmu))
return &a6xx_state->base;
@ -950,7 +1003,8 @@ struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu)
* write out GPU state, so we need to skip this when the SMMU is
* stalled in response to an iova fault
*/
if (!stalled && !a6xx_crashdumper_init(gpu, &_dumper)) {
if (!stalled && !gpu->needs_hw_init &&
!a6xx_crashdumper_init(gpu, &_dumper)) {
dumper = &_dumper;
}
@ -967,6 +1021,8 @@ struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu)
if (snapshot_debugbus)
a6xx_get_debugbus(gpu, a6xx_state);
a6xx_state->gpu_initialized = !gpu->needs_hw_init;
return &a6xx_state->base;
}
@ -978,6 +1034,12 @@ static void a6xx_gpu_state_destroy(struct kref *kref)
struct a6xx_gpu_state *a6xx_state = container_of(state,
struct a6xx_gpu_state, base);
if (a6xx_state->gmu_log)
kvfree(a6xx_state->gmu_log->data);
if (a6xx_state->gmu_hfi)
kvfree(a6xx_state->gmu_hfi->data);
list_for_each_entry_safe(obj, tmp, &a6xx_state->objs, node)
kfree(obj);
@ -1189,8 +1251,48 @@ void a6xx_show(struct msm_gpu *gpu, struct msm_gpu_state *state,
if (IS_ERR_OR_NULL(state))
return;
drm_printf(p, "gpu-initialized: %d\n", a6xx_state->gpu_initialized);
adreno_show(gpu, state, p);
drm_puts(p, "gmu-log:\n");
if (a6xx_state->gmu_log) {
struct msm_gpu_state_bo *gmu_log = a6xx_state->gmu_log;
drm_printf(p, " iova: 0x%016llx\n", gmu_log->iova);
drm_printf(p, " size: %zu\n", gmu_log->size);
adreno_show_object(p, &gmu_log->data, gmu_log->size,
&gmu_log->encoded);
}
drm_puts(p, "gmu-hfi:\n");
if (a6xx_state->gmu_hfi) {
struct msm_gpu_state_bo *gmu_hfi = a6xx_state->gmu_hfi;
unsigned i, j;
drm_printf(p, " iova: 0x%016llx\n", gmu_hfi->iova);
drm_printf(p, " size: %zu\n", gmu_hfi->size);
for (i = 0; i < ARRAY_SIZE(a6xx_state->hfi_queue_history); i++) {
drm_printf(p, " queue-history[%u]:", i);
for (j = 0; j < HFI_HISTORY_SZ; j++) {
drm_printf(p, " %d", a6xx_state->hfi_queue_history[i][j]);
}
drm_printf(p, "\n");
}
adreno_show_object(p, &gmu_hfi->data, gmu_hfi->size,
&gmu_hfi->encoded);
}
drm_puts(p, "gmu-debug:\n");
if (a6xx_state->gmu_debug) {
struct msm_gpu_state_bo *gmu_debug = a6xx_state->gmu_debug;
drm_printf(p, " iova: 0x%016llx\n", gmu_debug->iova);
drm_printf(p, " size: %zu\n", gmu_debug->size);
adreno_show_object(p, &gmu_debug->data, gmu_debug->size,
&gmu_debug->encoded);
}
drm_puts(p, "registers:\n");
for (i = 0; i < a6xx_state->nr_registers; i++) {
struct a6xx_gpu_state_obj *obj = &a6xx_state->registers[i];


@ -36,6 +36,8 @@ static int a6xx_hfi_queue_read(struct a6xx_gmu *gmu,
hdr = queue->data[index];
queue->history[(queue->history_idx++) % HFI_HISTORY_SZ] = index;
/*
* If we are to assume that the GMU firmware is in fact a rational actor
* and is programmed to not send us a larger response than we expect
@ -75,6 +77,8 @@ static int a6xx_hfi_queue_write(struct a6xx_gmu *gmu,
return -ENOSPC;
}
queue->history[(queue->history_idx++) % HFI_HISTORY_SZ] = index;
for (i = 0; i < dwords; i++) {
queue->data[index] = data[i];
index = (index + 1) % header->size;
@ -600,6 +604,9 @@ void a6xx_hfi_stop(struct a6xx_gmu *gmu)
queue->header->read_index = 0;
queue->header->write_index = 0;
memset(&queue->history, 0xff, sizeof(queue->history));
queue->history_idx = 0;
}
}
@ -612,6 +619,9 @@ static void a6xx_hfi_queue_init(struct a6xx_hfi_queue *queue,
queue->data = virt;
atomic_set(&queue->seqnum, 0);
memset(&queue->history, 0xff, sizeof(queue->history));
queue->history_idx = 0;
/* Set up the shared memory header */
header->iova = iova;
header->type = 10 << 8 | id;


@ -33,6 +33,17 @@ struct a6xx_hfi_queue {
spinlock_t lock;
u32 *data;
atomic_t seqnum;
/*
* Tracking for the start index of the last N messages in the
* queue, for the benefit of devcore dump / crashdec (since
* parsing in the reverse direction to decode the last N
* messages is difficult to do and would rely on heuristics
* which are not guaranteed to be correct)
*/
#define HFI_HISTORY_SZ 8
s32 history[HFI_HISTORY_SZ];
u8 history_idx;
};
/* This is the outgoing queue to the GMU */

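A user-space sketch of the history ring declared above; the modulo indexing mirrors the read/write sites in the a6xx_hfi.c hunk. Since 256 is a multiple of HFI_HISTORY_SZ, the u8 index wraps around without disturbing the ordering.

#include <stdint.h>
#include <stdio.h>

#define HFI_HISTORY_SZ 8

struct hist {
	int32_t history[HFI_HISTORY_SZ];	/* memset to 0xff: -1 means unused */
	uint8_t history_idx;
};

/* Record a message's start index on every queue read or write. */
static void hist_record(struct hist *h, uint32_t msg_start_index)
{
	h->history[(h->history_idx++) % HFI_HISTORY_SZ] = (int32_t)msg_start_index;
}

/* Walk oldest-first, the same order the devcore dump prints. */
static void hist_dump(const struct hist *h)
{
	for (unsigned int j = 0; j < HFI_HISTORY_SZ; j++)
		printf(" %d", h->history[(j + h->history_idx) % HFI_HISTORY_SZ]);
	printf("\n");
}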

@ -131,6 +131,24 @@ static const struct adreno_info gpulist[] = {
.gmem = (SZ_1M + SZ_512K),
.inactive_period = DRM_MSM_INACTIVE_PERIOD,
.init = a4xx_gpu_init,
}, {
.rev = ADRENO_REV(5, 0, 6, ANY_ID),
.revn = 506,
.name = "A506",
.fw = {
[ADRENO_FW_PM4] = "a530_pm4.fw",
[ADRENO_FW_PFP] = "a530_pfp.fw",
},
.gmem = (SZ_128K + SZ_8K),
/*
* Increase inactive period to 250 to avoid bouncing
* the GDSC which appears to make it grumpy
*/
.inactive_period = 250,
.quirks = ADRENO_QUIRK_TWO_PASS_USE_WFI |
ADRENO_QUIRK_LMLOADKILL_DISABLE,
.init = a5xx_gpu_init,
.zapfw = "a506_zap.mdt",
}, {
.rev = ADRENO_REV(5, 0, 8, ANY_ID),
.revn = 508,
@ -408,9 +426,9 @@ struct msm_gpu *adreno_load_gpu(struct drm_device *dev)
return NULL;
}
mutex_lock(&dev->struct_mutex);
mutex_lock(&gpu->lock);
ret = msm_gpu_hw_init(gpu);
mutex_unlock(&dev->struct_mutex);
mutex_unlock(&gpu->lock);
pm_runtime_put_autosuspend(&pdev->dev);
if (ret) {
DRM_DEV_ERROR(dev->dev, "gpu hw init failed: %d\n", ret);
@ -427,13 +445,6 @@ struct msm_gpu *adreno_load_gpu(struct drm_device *dev)
return gpu;
}
static void set_gpu_pdev(struct drm_device *dev,
struct platform_device *pdev)
{
struct msm_drm_private *priv = dev->dev_private;
priv->gpu_pdev = pdev;
}
static int find_chipid(struct device *dev, struct adreno_rev *rev)
{
struct device_node *node = dev->of_node;
@ -482,8 +493,8 @@ static int adreno_bind(struct device *dev, struct device *master, void *data)
{
static struct adreno_platform_config config = {};
const struct adreno_info *info;
struct drm_device *drm = dev_get_drvdata(master);
struct msm_drm_private *priv = drm->dev_private;
struct msm_drm_private *priv = dev_get_drvdata(master);
struct drm_device *drm = priv->dev;
struct msm_gpu *gpu;
int ret;
@ -492,7 +503,7 @@ static int adreno_bind(struct device *dev, struct device *master, void *data)
return ret;
dev->platform_data = &config;
set_gpu_pdev(drm, to_platform_device(dev));
priv->gpu_pdev = to_platform_device(dev);
info = adreno_info(config.rev);
@ -521,12 +532,13 @@ static int adreno_bind(struct device *dev, struct device *master, void *data)
static void adreno_unbind(struct device *dev, struct device *master,
void *data)
{
struct msm_drm_private *priv = dev_get_drvdata(master);
struct msm_gpu *gpu = dev_to_gpu(dev);
pm_runtime_force_suspend(dev);
gpu->funcs->destroy(gpu);
set_gpu_pdev(dev_get_drvdata(master), NULL);
priv->gpu_pdev = NULL;
}
static const struct component_ops a3xx_ops = {


@ -504,6 +504,8 @@ int adreno_gpu_state_get(struct msm_gpu *gpu, struct msm_gpu_state *state)
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
int i, count = 0;
WARN_ON(!mutex_is_locked(&gpu->lock));
kref_init(&state->ref);
ktime_get_real_ts64(&state->time);
@ -630,7 +632,7 @@ static char *adreno_gpu_ascii85_encode(u32 *src, size_t len)
}
/* len is expected to be in bytes */
static void adreno_show_object(struct drm_printer *p, void **ptr, int len,
void adreno_show_object(struct drm_printer *p, void **ptr, int len,
bool *encoded)
{
if (!*ptr || !len)


@ -201,6 +201,11 @@ static inline int adreno_is_a430(struct adreno_gpu *gpu)
return gpu->revn == 430;
}
static inline int adreno_is_a506(struct adreno_gpu *gpu)
{
return gpu->revn == 506;
}
static inline int adreno_is_a508(struct adreno_gpu *gpu)
{
return gpu->revn == 508;
@ -306,6 +311,8 @@ void adreno_gpu_state_destroy(struct msm_gpu_state *state);
int adreno_gpu_state_get(struct msm_gpu *gpu, struct msm_gpu_state *state);
int adreno_gpu_state_put(struct msm_gpu_state *state);
void adreno_show_object(struct drm_printer *p, void **ptr, int len,
bool *encoded);
/*
* Common helper function to initialize the default address space for arm-smmu


@ -337,7 +337,8 @@ static void _dpu_crtc_program_lm_output_roi(struct drm_crtc *crtc)
}
static void _dpu_crtc_blend_setup_mixer(struct drm_crtc *crtc,
struct dpu_crtc *dpu_crtc, struct dpu_crtc_mixer *mixer)
struct dpu_crtc *dpu_crtc, struct dpu_crtc_mixer *mixer,
struct dpu_hw_stage_cfg *stage_cfg)
{
struct drm_plane *plane;
struct drm_framebuffer *fb;
@ -346,7 +347,6 @@ static void _dpu_crtc_blend_setup_mixer(struct drm_crtc *crtc,
struct dpu_plane_state *pstate = NULL;
struct dpu_format *format;
struct dpu_hw_ctl *ctl = mixer->lm_ctl;
struct dpu_hw_stage_cfg *stage_cfg = &dpu_crtc->stage_cfg;
u32 flush_mask;
uint32_t stage_idx, lm_idx;
@ -422,6 +422,7 @@ static void _dpu_crtc_blend_setup(struct drm_crtc *crtc)
struct dpu_crtc_mixer *mixer = cstate->mixers;
struct dpu_hw_ctl *ctl;
struct dpu_hw_mixer *lm;
struct dpu_hw_stage_cfg stage_cfg;
int i;
DRM_DEBUG_ATOMIC("%s\n", dpu_crtc->name);
@ -435,9 +436,9 @@ static void _dpu_crtc_blend_setup(struct drm_crtc *crtc)
}
/* initialize stage cfg */
memset(&dpu_crtc->stage_cfg, 0, sizeof(struct dpu_hw_stage_cfg));
memset(&stage_cfg, 0, sizeof(struct dpu_hw_stage_cfg));
_dpu_crtc_blend_setup_mixer(crtc, dpu_crtc, mixer);
_dpu_crtc_blend_setup_mixer(crtc, dpu_crtc, mixer, &stage_cfg);
for (i = 0; i < cstate->num_mixers; i++) {
ctl = mixer[i].lm_ctl;
@ -458,7 +459,7 @@ static void _dpu_crtc_blend_setup(struct drm_crtc *crtc)
mixer[i].flush_mask);
ctl->ops.setup_blendstage(ctl, mixer[i].hw_lm->idx,
&dpu_crtc->stage_cfg);
&stage_cfg);
}
}
@ -923,6 +924,20 @@ static struct drm_crtc_state *dpu_crtc_duplicate_state(struct drm_crtc *crtc)
return &cstate->base;
}
static void dpu_crtc_atomic_print_state(struct drm_printer *p,
const struct drm_crtc_state *state)
{
const struct dpu_crtc_state *cstate = to_dpu_crtc_state(state);
int i;
for (i = 0; i < cstate->num_mixers; i++) {
drm_printf(p, "\tlm[%d]=%d\n", i, cstate->mixers[i].hw_lm->idx - LM_0);
drm_printf(p, "\tctl[%d]=%d\n", i, cstate->mixers[i].lm_ctl->idx - CTL_0);
if (cstate->mixers[i].hw_dspp)
drm_printf(p, "\tdspp[%d]=%d\n", i, cstate->mixers[i].hw_dspp->idx - DSPP_0);
}
}
static void dpu_crtc_disable(struct drm_crtc *crtc,
struct drm_atomic_state *state)
{
@ -1423,15 +1438,16 @@ DEFINE_SHOW_ATTRIBUTE(dpu_crtc_debugfs_state);
static int _dpu_crtc_init_debugfs(struct drm_crtc *crtc)
{
struct dpu_crtc *dpu_crtc = to_dpu_crtc(crtc);
struct dentry *debugfs_root;
dpu_crtc->debugfs_root = debugfs_create_dir(dpu_crtc->name,
debugfs_root = debugfs_create_dir(dpu_crtc->name,
crtc->dev->primary->debugfs_root);
debugfs_create_file("status", 0400,
dpu_crtc->debugfs_root,
debugfs_root,
dpu_crtc, &_dpu_debugfs_status_fops);
debugfs_create_file("state", 0600,
dpu_crtc->debugfs_root,
debugfs_root,
&dpu_crtc->base,
&dpu_crtc_debugfs_state_fops);
@ -1449,13 +1465,6 @@ static int dpu_crtc_late_register(struct drm_crtc *crtc)
return _dpu_crtc_init_debugfs(crtc);
}
static void dpu_crtc_early_unregister(struct drm_crtc *crtc)
{
struct dpu_crtc *dpu_crtc = to_dpu_crtc(crtc);
debugfs_remove_recursive(dpu_crtc->debugfs_root);
}
static const struct drm_crtc_funcs dpu_crtc_funcs = {
.set_config = drm_atomic_helper_set_config,
.destroy = dpu_crtc_destroy,
@ -1463,8 +1472,8 @@ static const struct drm_crtc_funcs dpu_crtc_funcs = {
.reset = dpu_crtc_reset,
.atomic_duplicate_state = dpu_crtc_duplicate_state,
.atomic_destroy_state = dpu_crtc_destroy_state,
.atomic_print_state = dpu_crtc_atomic_print_state,
.late_register = dpu_crtc_late_register,
.early_unregister = dpu_crtc_early_unregister,
.verify_crc_source = dpu_crtc_verify_crc_source,
.set_crc_source = dpu_crtc_set_crc_source,
.enable_vblank = msm_crtc_enable_vblank,


@ -129,8 +129,6 @@ struct dpu_crtc_frame_event {
* @drm_requested_vblank : Whether vblanks have been enabled in the encoder
* @property_info : Opaque structure for generic property support
* @property_defaults : Array of default values for generic property support
* @stage_cfg : H/w mixer stage configuration
* @debugfs_root : Parent of debugfs node
* @vblank_cb_count : count of vblank callback since last reset
* @play_count : frame count between crtc enable and disable
* @vblank_cb_time : ktime at vblank count reset
@ -161,9 +159,6 @@ struct dpu_crtc {
struct drm_pending_vblank_event *event;
u32 vsync_count;
struct dpu_hw_stage_cfg stage_cfg;
struct dentry *debugfs_root;
u32 vblank_cb_count;
u64 play_count;
ktime_t vblank_cb_time;


@ -995,9 +995,6 @@ static void dpu_encoder_virt_mode_set(struct drm_encoder *drm_enc,
trace_dpu_enc_mode_set(DRMID(drm_enc));
if (drm_enc->encoder_type == DRM_MODE_ENCODER_TMDS)
msm_dp_display_mode_set(dpu_enc->dp, drm_enc, mode, adj_mode);
list_for_each_entry(conn_iter, connector_list, head)
if (conn_iter->encoder == drm_enc)
conn = conn_iter;
@ -1148,10 +1145,6 @@ static void dpu_encoder_virt_enable(struct drm_encoder *drm_enc)
struct msm_drm_private *priv;
struct drm_display_mode *cur_mode = NULL;
if (!drm_enc) {
DPU_ERROR("invalid encoder\n");
return;
}
dpu_enc = to_dpu_encoder_virt(drm_enc);
mutex_lock(&dpu_enc->enc_lock);
@ -1177,14 +1170,6 @@ static void dpu_encoder_virt_enable(struct drm_encoder *drm_enc)
_dpu_encoder_virt_enable_helper(drm_enc);
if (drm_enc->encoder_type == DRM_MODE_ENCODER_TMDS) {
ret = msm_dp_display_enable(dpu_enc->dp, drm_enc);
if (ret) {
DPU_ERROR_ENC(dpu_enc, "dp display enable failed: %d\n",
ret);
goto out;
}
}
dpu_enc->enabled = true;
out:
@ -1197,14 +1182,6 @@ static void dpu_encoder_virt_disable(struct drm_encoder *drm_enc)
struct msm_drm_private *priv;
int i = 0;
if (!drm_enc) {
DPU_ERROR("invalid encoder\n");
return;
} else if (!drm_enc->dev) {
DPU_ERROR("invalid dev\n");
return;
}
dpu_enc = to_dpu_encoder_virt(drm_enc);
DPU_DEBUG_ENC(dpu_enc, "\n");
@ -1218,11 +1195,6 @@ static void dpu_encoder_virt_disable(struct drm_encoder *drm_enc)
/* wait for idle */
dpu_encoder_wait_for_event(drm_enc, MSM_ENC_TX_COMPLETE);
if (drm_enc->encoder_type == DRM_MODE_ENCODER_TMDS) {
if (msm_dp_display_pre_disable(dpu_enc->dp, drm_enc))
DPU_ERROR_ENC(dpu_enc, "dp display push idle failed\n");
}
dpu_encoder_resource_control(drm_enc, DPU_ENC_RC_EVENT_PRE_STOP);
for (i = 0; i < dpu_enc->num_phys_encs; i++) {
@ -1247,11 +1219,6 @@ static void dpu_encoder_virt_disable(struct drm_encoder *drm_enc)
DPU_DEBUG_ENC(dpu_enc, "encoder disabled\n");
if (drm_enc->encoder_type == DRM_MODE_ENCODER_TMDS) {
if (msm_dp_display_disable(dpu_enc->dp, drm_enc))
DPU_ERROR_ENC(dpu_enc, "dp display disable failed\n");
}
mutex_unlock(&dpu_enc->enc_lock);
}
@ -2128,11 +2095,8 @@ static void dpu_encoder_frame_done_timeout(struct timer_list *t)
static const struct drm_encoder_helper_funcs dpu_encoder_helper_funcs = {
.mode_set = dpu_encoder_virt_mode_set,
.disable = dpu_encoder_virt_disable,
.enable = dpu_kms_encoder_enable,
.enable = dpu_encoder_virt_enable,
.atomic_check = dpu_encoder_virt_atomic_check,
/* This is called by dpu_kms_encoder_enable */
.commit = dpu_encoder_virt_enable,
};
static const struct drm_encoder_funcs dpu_encoder_funcs = {


@ -698,17 +698,17 @@ struct dpu_encoder_phys *dpu_encoder_phys_vid_init(
{
struct dpu_encoder_phys *phys_enc = NULL;
struct dpu_encoder_irq *irq;
int i, ret = 0;
int i;
if (!p) {
ret = -EINVAL;
goto fail;
DPU_ERROR("failed to create encoder due to invalid parameter\n");
return ERR_PTR(-EINVAL);
}
phys_enc = kzalloc(sizeof(*phys_enc), GFP_KERNEL);
if (!phys_enc) {
ret = -ENOMEM;
goto fail;
DPU_ERROR("failed to create encoder due to memory allocation error\n");
return ERR_PTR(-ENOMEM);
}
phys_enc->hw_mdptop = p->dpu_kms->hw_mdp;
@ -748,11 +748,4 @@ struct dpu_encoder_phys *dpu_encoder_phys_vid_init(
DPU_DEBUG_VIDENC(phys_enc, "created intf idx:%d\n", p->intf_idx);
return phys_enc;
fail:
DPU_ERROR("failed to create encoder\n");
if (phys_enc)
dpu_encoder_phys_vid_destroy(phys_enc);
return ERR_PTR(ret);
}


@ -45,7 +45,7 @@
(PINGPONG_SDM845_MASK | BIT(DPU_PINGPONG_TE2))
#define CTL_SC7280_MASK \
(BIT(DPU_CTL_ACTIVE_CFG) | BIT(DPU_CTL_FETCH_ACTIVE))
(BIT(DPU_CTL_ACTIVE_CFG) | BIT(DPU_CTL_FETCH_ACTIVE) | BIT(DPU_CTL_VM_CFG))
#define MERGE_3D_SM8150_MASK (0)
@ -856,9 +856,9 @@ static const struct dpu_intf_cfg sm8150_intf[] = {
};
static const struct dpu_intf_cfg sc7280_intf[] = {
INTF_BLK("intf_0", INTF_0, 0x34000, INTF_DP, 0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 24, 25),
INTF_BLK("intf_0", INTF_0, 0x34000, INTF_DP, MSM_DP_CONTROLLER_0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 24, 25),
INTF_BLK("intf_1", INTF_1, 0x35000, INTF_DSI, 0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 26, 27),
INTF_BLK("intf_5", INTF_5, 0x39000, INTF_EDP, 0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 22, 23),
INTF_BLK("intf_5", INTF_5, 0x39000, INTF_DP, MSM_DP_CONTROLLER_1, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 22, 23),
};
/*************************************************************


@ -179,13 +179,16 @@ enum {
/**
* CTL sub-blocks
* @DPU_CTL_SPLIT_DISPLAY CTL supports video mode split display
* @DPU_CTL_SPLIT_DISPLAY: CTL supports video mode split display
* @DPU_CTL_FETCH_ACTIVE: Active CTL for fetch HW (SSPPs)
* @DPU_CTL_VM_CFG: CTL config to support multiple VMs
* @DPU_CTL_MAX
*/
enum {
DPU_CTL_SPLIT_DISPLAY = 0x1,
DPU_CTL_ACTIVE_CFG,
DPU_CTL_FETCH_ACTIVE,
DPU_CTL_VM_CFG,
DPU_CTL_MAX
};


@ -36,6 +36,7 @@
#define MERGE_3D_IDX 23
#define INTF_IDX 31
#define CTL_INVALID_BIT 0xffff
#define CTL_DEFAULT_GROUP_ID 0xf
static const u32 fetch_tbl[SSPP_MAX] = {CTL_INVALID_BIT, 16, 17, 18, 19,
CTL_INVALID_BIT, CTL_INVALID_BIT, CTL_INVALID_BIT, CTL_INVALID_BIT, 0,
@ -498,6 +499,13 @@ static void dpu_hw_ctl_intf_cfg_v1(struct dpu_hw_ctl *ctx,
u32 intf_active = 0;
u32 mode_sel = 0;
/* CTL_TOP[31:28] carries group_id to collate CTL paths
* per VM. Explicitly disable it until VM support is
* added in SW; the power-on reset value does not disable it.
*/
if ((test_bit(DPU_CTL_VM_CFG, &ctx->caps->features)))
mode_sel = CTL_DEFAULT_GROUP_ID << 28;
if (cfg->intf_mode_sel == DPU_CTL_MODE_SEL_CMD)
mode_sel |= BIT(17);


@ -30,6 +30,9 @@
#define MDP_AD4_INTR_STATUS_OFF 0x420
#define MDP_INTF_0_OFF_REV_7xxx 0x34000
#define MDP_INTF_1_OFF_REV_7xxx 0x35000
#define MDP_INTF_2_OFF_REV_7xxx 0x36000
#define MDP_INTF_3_OFF_REV_7xxx 0x37000
#define MDP_INTF_4_OFF_REV_7xxx 0x38000
#define MDP_INTF_5_OFF_REV_7xxx 0x39000
/**
@ -110,6 +113,21 @@ static const struct dpu_intr_reg dpu_intr_set[] = {
MDP_INTF_1_OFF_REV_7xxx+INTF_INTR_EN,
MDP_INTF_1_OFF_REV_7xxx+INTF_INTR_STATUS
},
{
MDP_INTF_2_OFF_REV_7xxx+INTF_INTR_CLEAR,
MDP_INTF_2_OFF_REV_7xxx+INTF_INTR_EN,
MDP_INTF_2_OFF_REV_7xxx+INTF_INTR_STATUS
},
{
MDP_INTF_3_OFF_REV_7xxx+INTF_INTR_CLEAR,
MDP_INTF_3_OFF_REV_7xxx+INTF_INTR_EN,
MDP_INTF_3_OFF_REV_7xxx+INTF_INTR_STATUS
},
{
MDP_INTF_4_OFF_REV_7xxx+INTF_INTR_CLEAR,
MDP_INTF_4_OFF_REV_7xxx+INTF_INTR_EN,
MDP_INTF_4_OFF_REV_7xxx+INTF_INTR_STATUS
},
{
MDP_INTF_5_OFF_REV_7xxx+INTF_INTR_CLEAR,
MDP_INTF_5_OFF_REV_7xxx+INTF_INTR_EN,


@ -26,6 +26,9 @@ enum dpu_hw_intr_reg {
MDP_AD4_1_INTR,
MDP_INTF0_7xxx_INTR,
MDP_INTF1_7xxx_INTR,
MDP_INTF2_7xxx_INTR,
MDP_INTF3_7xxx_INTR,
MDP_INTF4_7xxx_INTR,
MDP_INTF5_7xxx_INTR,
MDP_INTR_MAX,
};


@ -8,6 +8,8 @@
#include "dpu_hw_sspp.h"
#include "dpu_kms.h"
#include <drm/drm_file.h>
#define DPU_FETCH_CONFIG_RESET_VALUE 0x00000087
/* DPU_SSPP_SRC */
@ -75,6 +77,7 @@
#define SSPP_TRAFFIC_SHAPER 0x130
#define SSPP_CDP_CNTL 0x134
#define SSPP_UBWC_ERROR_STATUS 0x138
#define SSPP_CDP_CNTL_REC1 0x13c
#define SSPP_TRAFFIC_SHAPER_PREFILL 0x150
#define SSPP_TRAFFIC_SHAPER_REC1_PREFILL 0x154
#define SSPP_TRAFFIC_SHAPER_REC1 0x158
@ -413,13 +416,11 @@ static void dpu_hw_sspp_setup_pe_config(struct dpu_hw_pipe *ctx,
static void _dpu_hw_sspp_setup_scaler3(struct dpu_hw_pipe *ctx,
struct dpu_hw_pipe_cfg *sspp,
struct dpu_hw_pixel_ext *pe,
void *scaler_cfg)
{
u32 idx;
struct dpu_hw_scaler3_cfg *scaler3_cfg = scaler_cfg;
(void)pe;
if (_sspp_subblk_offset(ctx, DPU_SSPP_SCALER_QSEED3, &idx) || !sspp
|| !scaler3_cfg)
return;
@ -539,7 +540,7 @@ static void dpu_hw_sspp_setup_sourceaddress(struct dpu_hw_pipe *ctx,
}
static void dpu_hw_sspp_setup_csc(struct dpu_hw_pipe *ctx,
struct dpu_csc_cfg *data)
const struct dpu_csc_cfg *data)
{
u32 idx;
bool csc10 = false;
@ -571,19 +572,20 @@ static void dpu_hw_sspp_setup_solidfill(struct dpu_hw_pipe *ctx, u32 color, enum
}
static void dpu_hw_sspp_setup_danger_safe_lut(struct dpu_hw_pipe *ctx,
struct dpu_hw_pipe_qos_cfg *cfg)
u32 danger_lut,
u32 safe_lut)
{
u32 idx;
if (_sspp_subblk_offset(ctx, DPU_SSPP_SRC, &idx))
return;
DPU_REG_WRITE(&ctx->hw, SSPP_DANGER_LUT + idx, cfg->danger_lut);
DPU_REG_WRITE(&ctx->hw, SSPP_SAFE_LUT + idx, cfg->safe_lut);
DPU_REG_WRITE(&ctx->hw, SSPP_DANGER_LUT + idx, danger_lut);
DPU_REG_WRITE(&ctx->hw, SSPP_SAFE_LUT + idx, safe_lut);
}
static void dpu_hw_sspp_setup_creq_lut(struct dpu_hw_pipe *ctx,
struct dpu_hw_pipe_qos_cfg *cfg)
u64 creq_lut)
{
u32 idx;
@ -591,11 +593,11 @@ static void dpu_hw_sspp_setup_creq_lut(struct dpu_hw_pipe *ctx,
return;
if (ctx->cap && test_bit(DPU_SSPP_QOS_8LVL, &ctx->cap->features)) {
DPU_REG_WRITE(&ctx->hw, SSPP_CREQ_LUT_0 + idx, cfg->creq_lut);
DPU_REG_WRITE(&ctx->hw, SSPP_CREQ_LUT_0 + idx, creq_lut);
DPU_REG_WRITE(&ctx->hw, SSPP_CREQ_LUT_1 + idx,
cfg->creq_lut >> 32);
creq_lut >> 32);
} else {
DPU_REG_WRITE(&ctx->hw, SSPP_CREQ_LUT + idx, cfg->creq_lut);
DPU_REG_WRITE(&ctx->hw, SSPP_CREQ_LUT + idx, creq_lut);
}
}
@ -625,10 +627,12 @@ static void dpu_hw_sspp_setup_qos_ctrl(struct dpu_hw_pipe *ctx,
}
static void dpu_hw_sspp_setup_cdp(struct dpu_hw_pipe *ctx,
struct dpu_hw_pipe_cdp_cfg *cfg)
struct dpu_hw_pipe_cdp_cfg *cfg,
enum dpu_sspp_multirect_index index)
{
u32 idx;
u32 cdp_cntl = 0;
u32 cdp_cntl_offset = 0;
if (!ctx || !cfg)
return;
@ -636,6 +640,11 @@ static void dpu_hw_sspp_setup_cdp(struct dpu_hw_pipe *ctx,
if (_sspp_subblk_offset(ctx, DPU_SSPP_SRC, &idx))
return;
if (index == DPU_SSPP_RECT_SOLO || index == DPU_SSPP_RECT_0)
cdp_cntl_offset = SSPP_CDP_CNTL;
else
cdp_cntl_offset = SSPP_CDP_CNTL_REC1;
if (cfg->enable)
cdp_cntl |= BIT(0);
if (cfg->ubwc_meta_enable)
@ -645,7 +654,7 @@ static void dpu_hw_sspp_setup_cdp(struct dpu_hw_pipe *ctx,
if (cfg->preload_ahead == DPU_SSPP_CDP_PRELOAD_AHEAD_64)
cdp_cntl |= BIT(3);
DPU_REG_WRITE(&ctx->hw, SSPP_CDP_CNTL, cdp_cntl);
DPU_REG_WRITE(&ctx->hw, cdp_cntl_offset, cdp_cntl);
}
static void _setup_layer_ops(struct dpu_hw_pipe *c,
@ -685,6 +694,71 @@ static void _setup_layer_ops(struct dpu_hw_pipe *c,
c->ops.setup_cdp = dpu_hw_sspp_setup_cdp;
}
#ifdef CONFIG_DEBUG_FS
int _dpu_hw_sspp_init_debugfs(struct dpu_hw_pipe *hw_pipe, struct dpu_kms *kms, struct dentry *entry)
{
const struct dpu_sspp_cfg *cfg = hw_pipe->cap;
const struct dpu_sspp_sub_blks *sblk = cfg->sblk;
struct dentry *debugfs_root;
char sspp_name[32];
snprintf(sspp_name, sizeof(sspp_name), "%d", hw_pipe->idx);
/* create overall sub-directory for the pipe */
debugfs_root =
debugfs_create_dir(sspp_name, entry);
/* don't error check these */
debugfs_create_xul("features", 0600,
debugfs_root, (unsigned long *)&hw_pipe->cap->features);
/* add register dump support */
dpu_debugfs_create_regset32("src_blk", 0400,
debugfs_root,
sblk->src_blk.base + cfg->base,
sblk->src_blk.len,
kms);
if (cfg->features & BIT(DPU_SSPP_SCALER_QSEED3) ||
cfg->features & BIT(DPU_SSPP_SCALER_QSEED3LITE) ||
cfg->features & BIT(DPU_SSPP_SCALER_QSEED2) ||
cfg->features & BIT(DPU_SSPP_SCALER_QSEED4))
dpu_debugfs_create_regset32("scaler_blk", 0400,
debugfs_root,
sblk->scaler_blk.base + cfg->base,
sblk->scaler_blk.len,
kms);
if (cfg->features & BIT(DPU_SSPP_CSC) ||
cfg->features & BIT(DPU_SSPP_CSC_10BIT))
dpu_debugfs_create_regset32("csc_blk", 0400,
debugfs_root,
sblk->csc_blk.base + cfg->base,
sblk->csc_blk.len,
kms);
debugfs_create_u32("xin_id",
0400,
debugfs_root,
(u32 *) &cfg->xin_id);
debugfs_create_u32("clk_ctrl",
0400,
debugfs_root,
(u32 *) &cfg->clk_ctrl);
debugfs_create_x32("creq_vblank",
0600,
debugfs_root,
(u32 *) &sblk->creq_vblank);
debugfs_create_x32("danger_vblank",
0600,
debugfs_root,
(u32 *) &sblk->danger_vblank);
return 0;
}
#endif
static const struct dpu_sspp_cfg *_sspp_offset(enum dpu_sspp sspp,
void __iomem *addr,
struct dpu_mdss_cfg *catalog,


@ -25,11 +25,17 @@ struct dpu_hw_pipe;
/**
* Define all scaler feature bits in catalog
*/
#define DPU_SSPP_SCALER ((1UL << DPU_SSPP_SCALER_RGB) | \
(1UL << DPU_SSPP_SCALER_QSEED2) | \
(1UL << DPU_SSPP_SCALER_QSEED3) | \
(1UL << DPU_SSPP_SCALER_QSEED3LITE) | \
(1UL << DPU_SSPP_SCALER_QSEED4))
#define DPU_SSPP_SCALER (BIT(DPU_SSPP_SCALER_RGB) | \
BIT(DPU_SSPP_SCALER_QSEED2) | \
BIT(DPU_SSPP_SCALER_QSEED3) | \
BIT(DPU_SSPP_SCALER_QSEED3LITE) | \
BIT(DPU_SSPP_SCALER_QSEED4))
/*
* Define all CSC feature bits in catalog
*/
#define DPU_SSPP_CSC_ANY (BIT(DPU_SSPP_CSC) | \
BIT(DPU_SSPP_CSC_10BIT))
/**
* Component indices
@ -166,18 +172,12 @@ struct dpu_hw_pipe_cfg {
/**
* struct dpu_hw_pipe_qos_cfg : Source pipe QoS configuration
* @danger_lut: LUT for generate danger level based on fill level
* @safe_lut: LUT for generate safe level based on fill level
* @creq_lut: LUT for generate creq level based on fill level
* @creq_vblank: creq value generated to vbif during vertical blanking
* @danger_vblank: danger value generated during vertical blanking
* @vblank_en: enable creq_vblank and danger_vblank during vblank
* @danger_safe_en: enable danger safe generation
*/
struct dpu_hw_pipe_qos_cfg {
u32 danger_lut;
u32 safe_lut;
u64 creq_lut;
u32 creq_vblank;
u32 danger_vblank;
bool vblank_en;
@ -268,7 +268,7 @@ struct dpu_hw_sspp_ops {
* @ctx: Pointer to pipe context
* @data: Pointer to config structure
*/
void (*setup_csc)(struct dpu_hw_pipe *ctx, struct dpu_csc_cfg *data);
void (*setup_csc)(struct dpu_hw_pipe *ctx, const struct dpu_csc_cfg *data);
/**
* setup_solidfill - enable/disable colorfill
@ -302,20 +302,22 @@ struct dpu_hw_sspp_ops {
/**
* setup_danger_safe_lut - setup danger safe LUTs
* @ctx: Pointer to pipe context
* @cfg: Pointer to pipe QoS configuration
* @danger_lut: LUT for generating the danger level based on fill level
* @safe_lut: LUT for generating the safe level based on fill level
*
*/
void (*setup_danger_safe_lut)(struct dpu_hw_pipe *ctx,
struct dpu_hw_pipe_qos_cfg *cfg);
u32 danger_lut,
u32 safe_lut);
/**
* setup_creq_lut - setup CREQ LUT
* @ctx: Pointer to pipe context
* @cfg: Pointer to pipe QoS configuration
* @creq_lut: LUT for generating the creq level based on fill level
*
*/
void (*setup_creq_lut)(struct dpu_hw_pipe *ctx,
struct dpu_hw_pipe_qos_cfg *cfg);
u64 creq_lut);
/**
* setup_qos_ctrl - setup QoS control
@ -338,12 +340,10 @@ struct dpu_hw_sspp_ops {
* setup_scaler - setup scaler
* @ctx: Pointer to pipe context
* @pipe_cfg: Pointer to pipe configuration
* @pe_cfg: Pointer to pixel extension configuration
* @scaler_cfg: Pointer to scaler configuration
*/
void (*setup_scaler)(struct dpu_hw_pipe *ctx,
struct dpu_hw_pipe_cfg *pipe_cfg,
struct dpu_hw_pixel_ext *pe_cfg,
void *scaler_cfg);
/**
@ -356,9 +356,11 @@ struct dpu_hw_sspp_ops {
* setup_cdp - setup client driven prefetch
* @ctx: Pointer to pipe context
* @cfg: Pointer to cdp configuration
* @index: rectangle index in multirect
*/
void (*setup_cdp)(struct dpu_hw_pipe *ctx,
struct dpu_hw_pipe_cdp_cfg *cfg);
struct dpu_hw_pipe_cdp_cfg *cfg,
enum dpu_sspp_multirect_index index);
};
/**
@ -385,6 +387,7 @@ struct dpu_hw_pipe {
struct dpu_hw_sspp_ops ops;
};
struct dpu_kms;
/**
* dpu_hw_sspp_init - initializes the sspp hw driver object.
* Should be called once before accessing every pipe.
@ -404,5 +407,8 @@ struct dpu_hw_pipe *dpu_hw_sspp_init(enum dpu_sspp idx,
*/
void dpu_hw_sspp_destroy(struct dpu_hw_pipe *ctx);
void dpu_debugfs_sspp_init(struct dpu_kms *dpu_kms, struct dentry *debugfs_root);
int _dpu_hw_sspp_init_debugfs(struct dpu_hw_pipe *hw_pipe, struct dpu_kms *kms, struct dentry *entry);
#endif /*_DPU_HW_SSPP_H */


@ -374,7 +374,7 @@ u32 dpu_hw_get_scaler3_ver(struct dpu_hw_blk_reg_map *c,
void dpu_hw_csc_setup(struct dpu_hw_blk_reg_map *c,
u32 csc_reg_off,
struct dpu_csc_cfg *data, bool csc10)
const struct dpu_csc_cfg *data, bool csc10)
{
static const u32 matrix_shift = 7;
u32 clamp_shift = csc10 ? 16 : 8;


@ -322,6 +322,6 @@ u32 dpu_hw_get_scaler3_ver(struct dpu_hw_blk_reg_map *c,
void dpu_hw_csc_setup(struct dpu_hw_blk_reg_map *c,
u32 csc_reg_off,
struct dpu_csc_cfg *data, bool csc10);
const struct dpu_csc_cfg *data, bool csc10);
#endif /* _DPU_HW_UTIL_H */


@ -21,14 +21,14 @@
#include "msm_gem.h"
#include "disp/msm_disp_snapshot.h"
#include "dpu_kms.h"
#include "dpu_core_irq.h"
#include "dpu_crtc.h"
#include "dpu_encoder.h"
#include "dpu_formats.h"
#include "dpu_hw_vbif.h"
#include "dpu_vbif.h"
#include "dpu_encoder.h"
#include "dpu_kms.h"
#include "dpu_plane.h"
#include "dpu_crtc.h"
#include "dpu_vbif.h"
#define CREATE_TRACE_POINTS
#include "dpu_trace.h"
@ -73,8 +73,8 @@ static int _dpu_danger_signal_status(struct seq_file *s,
&status);
} else {
seq_puts(s, "\nSafe signal status:\n");
if (kms->hw_mdp->ops.get_danger_status)
kms->hw_mdp->ops.get_danger_status(kms->hw_mdp,
if (kms->hw_mdp->ops.get_safe_status)
kms->hw_mdp->ops.get_safe_status(kms->hw_mdp,
&status);
}
pm_runtime_put_sync(&kms->pdev->dev);
@ -82,7 +82,7 @@ static int _dpu_danger_signal_status(struct seq_file *s,
seq_printf(s, "MDP : 0x%x\n", status.mdp);
for (i = SSPP_VIG0; i < SSPP_MAX; i++)
seq_printf(s, "SSPP%d : 0x%x \t", i - SSPP_VIG0,
seq_printf(s, "SSPP%d : 0x%x \n", i - SSPP_VIG0,
status.sspp[i]);
seq_puts(s, "\n");
@ -101,6 +101,73 @@ static int dpu_debugfs_safe_stats_show(struct seq_file *s, void *v)
}
DEFINE_SHOW_ATTRIBUTE(dpu_debugfs_safe_stats);
static ssize_t _dpu_plane_danger_read(struct file *file,
char __user *buff, size_t count, loff_t *ppos)
{
struct dpu_kms *kms = file->private_data;
int len;
char buf[40];
len = scnprintf(buf, sizeof(buf), "%d\n", !kms->has_danger_ctrl);
return simple_read_from_buffer(buff, count, ppos, buf, len);
}
static void _dpu_plane_set_danger_state(struct dpu_kms *kms, bool enable)
{
struct drm_plane *plane;
drm_for_each_plane(plane, kms->dev) {
if (plane->fb && plane->state) {
dpu_plane_danger_signal_ctrl(plane, enable);
DPU_DEBUG("plane:%d img:%dx%d ",
plane->base.id, plane->fb->width,
plane->fb->height);
DPU_DEBUG("src[%d,%d,%d,%d] dst[%d,%d,%d,%d]\n",
plane->state->src_x >> 16,
plane->state->src_y >> 16,
plane->state->src_w >> 16,
plane->state->src_h >> 16,
plane->state->crtc_x, plane->state->crtc_y,
plane->state->crtc_w, plane->state->crtc_h);
} else {
DPU_DEBUG("Inactive plane:%d\n", plane->base.id);
}
}
}
static ssize_t _dpu_plane_danger_write(struct file *file,
const char __user *user_buf, size_t count, loff_t *ppos)
{
struct dpu_kms *kms = file->private_data;
int disable_panic;
int ret;
ret = kstrtouint_from_user(user_buf, count, 0, &disable_panic);
if (ret)
return ret;
if (disable_panic) {
/* Disable panic signal for all active pipes */
DPU_DEBUG("Disabling danger:\n");
_dpu_plane_set_danger_state(kms, false);
kms->has_danger_ctrl = false;
} else {
/* Enable panic signal for all active pipes */
DPU_DEBUG("Enabling danger:\n");
kms->has_danger_ctrl = true;
_dpu_plane_set_danger_state(kms, true);
}
return count;
}
static const struct file_operations dpu_plane_danger_enable = {
.open = simple_open,
.read = _dpu_plane_danger_read,
.write = _dpu_plane_danger_write,
};
static void dpu_debugfs_danger_init(struct dpu_kms *dpu_kms,
struct dentry *parent)
{
@ -110,8 +177,20 @@ static void dpu_debugfs_danger_init(struct dpu_kms *dpu_kms,
dpu_kms, &dpu_debugfs_danger_stats_fops);
debugfs_create_file("safe_status", 0600, entry,
dpu_kms, &dpu_debugfs_safe_stats_fops);
debugfs_create_file("disable_danger", 0600, entry,
dpu_kms, &dpu_plane_danger_enable);
}
/*
* Companion structure for dpu_debugfs_create_regset32.
*/
struct dpu_debugfs_regset32 {
uint32_t offset;
uint32_t blk_len;
struct dpu_kms *dpu_kms;
};
static int _dpu_debugfs_show_regset32(struct seq_file *s, void *data)
{
struct dpu_debugfs_regset32 *regset = s->private;
@ -159,24 +238,23 @@ static const struct file_operations dpu_fops_regset32 = {
.release = single_release,
};
void dpu_debugfs_setup_regset32(struct dpu_debugfs_regset32 *regset,
void dpu_debugfs_create_regset32(const char *name, umode_t mode,
void *parent,
uint32_t offset, uint32_t length, struct dpu_kms *dpu_kms)
{
if (regset) {
regset->offset = offset;
regset->blk_len = length;
regset->dpu_kms = dpu_kms;
}
}
struct dpu_debugfs_regset32 *regset;
void dpu_debugfs_create_regset32(const char *name, umode_t mode,
void *parent, struct dpu_debugfs_regset32 *regset)
{
if (!name || !regset || !regset->dpu_kms || !regset->blk_len)
if (WARN_ON(!name || !dpu_kms || !length))
return;
regset = devm_kzalloc(&dpu_kms->pdev->dev, sizeof(*regset), GFP_KERNEL);
if (!regset)
return;
/* make sure offset is a multiple of 4 */
regset->offset = round_down(regset->offset, 4);
regset->offset = round_down(offset, 4);
regset->blk_len = length;
regset->dpu_kms = dpu_kms;
debugfs_create_file(name, mode, parent, regset, &dpu_fops_regset32);
}
@ -203,6 +281,7 @@ static int dpu_kms_debugfs_init(struct msm_kms *kms, struct drm_minor *minor)
dpu_debugfs_danger_init(dpu_kms, entry);
dpu_debugfs_vbif_init(dpu_kms, entry);
dpu_debugfs_core_irq_init(dpu_kms, entry);
dpu_debugfs_sspp_init(dpu_kms, entry);
for (i = 0; i < ARRAY_SIZE(priv->dp); i++) {
if (priv->dp[i])
@ -384,28 +463,6 @@ static void dpu_kms_flush_commit(struct msm_kms *kms, unsigned crtc_mask)
}
}
/*
* Override the encoder enable since we need to setup the inline rotator and do
* some crtc magic before enabling any bridge that might be present.
*/
void dpu_kms_encoder_enable(struct drm_encoder *encoder)
{
const struct drm_encoder_helper_funcs *funcs = encoder->helper_private;
struct drm_device *dev = encoder->dev;
struct drm_crtc *crtc;
/* Forward this enable call to the commit hook */
if (funcs && funcs->commit)
funcs->commit(encoder);
drm_for_each_crtc(crtc, dev) {
if (!(crtc->state->encoder_mask & drm_encoder_mask(encoder)))
continue;
trace_dpu_kms_enc_enable(DRMID(crtc));
}
}
static void dpu_kms_complete_commit(struct msm_kms *kms, unsigned crtc_mask)
{
struct dpu_kms *dpu_kms = to_dpu_kms(kms);
@ -863,6 +920,11 @@ static void dpu_kms_mdp_snapshot(struct msm_disp_state *disp_state, struct msm_k
msm_disp_snapshot_add_block(disp_state, cat->sspp[i].len,
dpu_kms->mmio + cat->sspp[i].base, "sspp_%d", i);
/* dump LM sub-blocks HW regs info */
for (i = 0; i < cat->mixer_count; i++)
msm_disp_snapshot_add_block(disp_state, cat->mixer[i].len,
dpu_kms->mmio + cat->mixer[i].base, "lm_%d", i);
msm_disp_snapshot_add_block(disp_state, top->hw.length,
dpu_kms->mmio + top->hw.blk_off, "top");
@ -1153,9 +1215,9 @@ struct msm_kms *dpu_kms_init(struct drm_device *dev)
static int dpu_bind(struct device *dev, struct device *master, void *data)
{
struct drm_device *ddev = dev_get_drvdata(master);
struct msm_drm_private *priv = dev_get_drvdata(master);
struct platform_device *pdev = to_platform_device(dev);
struct msm_drm_private *priv = ddev->dev_private;
struct drm_device *ddev = priv->dev;
struct dpu_kms *dpu_kms;
struct dss_module_power *mp;
int ret = 0;
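The bind-callback churn here (and in the mdp5, dsi and dp hunks below) follows one pattern: the component master's drvdata now carries struct msm_drm_private directly, and the drm_device is reached through priv->dev, instead of the drm_device carrying the private data. A minimal sketch of the new accessor chain (example_bind is illustrative, not a real symbol):

static int example_bind(struct device *dev, struct device *master, void *data)
{
	/* master drvdata is now msm_drm_private, not drm_device */
	struct msm_drm_private *priv = dev_get_drvdata(master);
	struct drm_device *ddev = priv->dev;

	/* ... driver-specific setup using priv and ddev ... */
	return 0;
}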
@ -1285,7 +1347,7 @@ static const struct dev_pm_ops dpu_pm_ops = {
pm_runtime_force_resume)
};
static const struct of_device_id dpu_dt_match[] = {
const struct of_device_id dpu_dt_match[] = {
{ .compatible = "qcom,sdm845-dpu", },
{ .compatible = "qcom,sc7180-dpu", },
{ .compatible = "qcom,sc7280-dpu", },

View file

@ -160,33 +160,9 @@ struct dpu_global_state
*
* Documentation/filesystems/debugfs.rst
*
* @dpu_debugfs_setup_regset32: Initialize data for dpu_debugfs_create_regset32
* @dpu_debugfs_create_regset32: Create 32-bit register dump file
* @dpu_debugfs_get_root: Get root dentry for DPU_KMS's debugfs node
*/
/**
* Companion structure for dpu_debugfs_create_regset32. Do not initialize the
* members of this structure explicitly; use dpu_debugfs_setup_regset32 instead.
*/
struct dpu_debugfs_regset32 {
uint32_t offset;
uint32_t blk_len;
struct dpu_kms *dpu_kms;
};
/**
* dpu_debugfs_setup_regset32 - Initialize register block definition for debugfs
* This function is meant to initialize dpu_debugfs_regset32 structures for use
* with dpu_debugfs_create_regset32.
* @regset: opaque register definition structure
* @offset: sub-block offset
* @length: sub-block length, in bytes
* @dpu_kms: pointer to dpu kms structure
*/
void dpu_debugfs_setup_regset32(struct dpu_debugfs_regset32 *regset,
uint32_t offset, uint32_t length, struct dpu_kms *dpu_kms);
/**
* dpu_debugfs_create_regset32 - Create register read back file for debugfs
*
@ -195,20 +171,16 @@ void dpu_debugfs_setup_regset32(struct dpu_debugfs_regset32 *regset,
* names/offsets do not need to be provided. The 'read' function simply outputs
* sequential register values over a specified range.
*
* Similar to the related debugfs_create_regset32 API, the structure pointed to
* by regset needs to persist for the lifetime of the created file. The calling
* code is responsible for initialization/management of this structure.
*
* The structure pointed to by regset is meant to be opaque. Please use
* dpu_debugfs_setup_regset32 to initialize it.
*
* @name: File name within debugfs
* @mode: File mode within debugfs
* @parent: Parent directory entry within debugfs, can be NULL
* @regset: Pointer to persistent register block definition
* @offset: sub-block offset
* @length: sub-block length, in bytes
* @dpu_kms: pointer to dpu kms structure
*/
void dpu_debugfs_create_regset32(const char *name, umode_t mode,
void *parent, struct dpu_debugfs_regset32 *regset);
void *parent,
uint32_t offset, uint32_t length, struct dpu_kms *dpu_kms);
/**
* dpu_debugfs_get_root - Return root directory entry for KMS's debugfs
@ -235,8 +207,6 @@ void *dpu_debugfs_get_root(struct dpu_kms *dpu_kms);
int dpu_enable_vblank(struct msm_kms *kms, struct drm_crtc *crtc);
void dpu_disable_vblank(struct msm_kms *kms, struct drm_crtc *crtc);
void dpu_kms_encoder_enable(struct drm_encoder *encoder);
/**
* dpu_kms_get_clk_rate() - get the clock rate
* @dpu_kms: pointer to dpu_kms structure

View file

@ -111,7 +111,7 @@ static int _dpu_mdss_irq_domain_add(struct dpu_mdss *dpu_mdss)
struct device *dev;
struct irq_domain *domain;
dev = dpu_mdss->base.dev->dev;
dev = dpu_mdss->base.dev;
domain = irq_domain_add_linear(dev->of_node, 32,
&dpu_mdss_irqdomain_ops, dpu_mdss);
@ -184,16 +184,15 @@ static int dpu_mdss_disable(struct msm_mdss *mdss)
return ret;
}
static void dpu_mdss_destroy(struct drm_device *dev)
static void dpu_mdss_destroy(struct msm_mdss *mdss)
{
struct platform_device *pdev = to_platform_device(dev->dev);
struct msm_drm_private *priv = dev->dev_private;
struct dpu_mdss *dpu_mdss = to_dpu_mdss(priv->mdss);
struct platform_device *pdev = to_platform_device(mdss->dev);
struct dpu_mdss *dpu_mdss = to_dpu_mdss(mdss);
struct dss_module_power *mp = &dpu_mdss->mp;
int irq;
pm_runtime_suspend(dev->dev);
pm_runtime_disable(dev->dev);
pm_runtime_suspend(mdss->dev);
pm_runtime_disable(mdss->dev);
_dpu_mdss_irq_domain_fini(dpu_mdss);
irq = platform_get_irq(pdev, 0);
irq_set_chained_handler_and_data(irq, NULL, NULL);
@ -203,7 +202,6 @@ static void dpu_mdss_destroy(struct drm_device *dev)
if (dpu_mdss->mmio)
devm_iounmap(&pdev->dev, dpu_mdss->mmio);
dpu_mdss->mmio = NULL;
priv->mdss = NULL;
}
static const struct msm_mdss_funcs mdss_funcs = {
@ -212,16 +210,15 @@ static const struct msm_mdss_funcs mdss_funcs = {
.destroy = dpu_mdss_destroy,
};
int dpu_mdss_init(struct drm_device *dev)
int dpu_mdss_init(struct platform_device *pdev)
{
struct platform_device *pdev = to_platform_device(dev->dev);
struct msm_drm_private *priv = dev->dev_private;
struct msm_drm_private *priv = platform_get_drvdata(pdev);
struct dpu_mdss *dpu_mdss;
struct dss_module_power *mp;
int ret;
int irq;
dpu_mdss = devm_kzalloc(dev->dev, sizeof(*dpu_mdss), GFP_KERNEL);
dpu_mdss = devm_kzalloc(&pdev->dev, sizeof(*dpu_mdss), GFP_KERNEL);
if (!dpu_mdss)
return -ENOMEM;
@ -238,7 +235,7 @@ int dpu_mdss_init(struct drm_device *dev)
goto clk_parse_err;
}
dpu_mdss->base.dev = dev;
dpu_mdss->base.dev = &pdev->dev;
dpu_mdss->base.funcs = &mdss_funcs;
ret = _dpu_mdss_irq_domain_add(dpu_mdss);
@ -256,7 +253,7 @@ int dpu_mdss_init(struct drm_device *dev)
priv->mdss = &dpu_mdss->base;
pm_runtime_enable(dev->dev);
pm_runtime_enable(&pdev->dev);
return 0;

View file

@ -13,7 +13,6 @@
#include <drm/drm_atomic.h>
#include <drm/drm_atomic_uapi.h>
#include <drm/drm_damage_helper.h>
#include <drm/drm_file.h>
#include <drm/drm_gem_atomic_helper.h>
#include "msm_drv.h"
@ -90,7 +89,6 @@ enum dpu_plane_qos {
/*
* struct dpu_plane - local dpu plane structure
* @aspace: address space pointer
* @csc_ptr: Points to dpu_csc_cfg structure to use for current
* @mplane_list: List of multirect planes of the same pipe
* @catalog: Points to dpu catalog structure
* @revalidate: force revalidation of all the plane properties
@ -101,29 +99,14 @@ struct dpu_plane {
struct mutex lock;
enum dpu_sspp pipe;
uint32_t features; /* capabilities from catalog */
struct dpu_hw_pipe *pipe_hw;
struct dpu_hw_pipe_cfg pipe_cfg;
struct dpu_hw_pipe_qos_cfg pipe_qos_cfg;
uint32_t color_fill;
bool is_error;
bool is_rt_pipe;
bool is_virtual;
struct list_head mplane_list;
struct dpu_mdss_cfg *catalog;
struct dpu_csc_cfg *csc_ptr;
const struct dpu_sspp_sub_blks *pipe_sblk;
char pipe_name[DPU_NAME_SIZE];
/* debugfs related stuff */
struct dentry *debugfs_root;
struct dpu_debugfs_regset32 debugfs_src;
struct dpu_debugfs_regset32 debugfs_scaler;
struct dpu_debugfs_regset32 debugfs_csc;
bool debugfs_default_scale;
};
static const uint64_t supported_format_modifiers[] = {
@ -145,14 +128,15 @@ static struct dpu_kms *_dpu_plane_get_kms(struct drm_plane *plane)
* _dpu_plane_calc_bw - calculate bandwidth required for a plane
* @plane: Pointer to drm plane.
* @fb: Pointer to framebuffer associated with the given plane
* @pipe_cfg: Pointer to pipe configuration
* Result: Updates calculated bandwidth in the plane state.
* BW Equation: src_w * src_h * bpp * fps * (v_total / v_dest)
* Prefill BW Equation: line src bytes * line_time
*/
static void _dpu_plane_calc_bw(struct drm_plane *plane,
struct drm_framebuffer *fb)
struct drm_framebuffer *fb,
struct dpu_hw_pipe_cfg *pipe_cfg)
{
struct dpu_plane *pdpu = to_dpu_plane(plane);
struct dpu_plane_state *pstate;
struct drm_display_mode *mode;
const struct dpu_format *fmt = NULL;
@ -169,9 +153,9 @@ static void _dpu_plane_calc_bw(struct drm_plane *plane,
fmt = dpu_get_dpu_format_ext(fb->format->format, fb->modifier);
src_width = drm_rect_width(&pdpu->pipe_cfg.src_rect);
src_height = drm_rect_height(&pdpu->pipe_cfg.src_rect);
dst_height = drm_rect_height(&pdpu->pipe_cfg.dst_rect);
src_width = drm_rect_width(&pipe_cfg->src_rect);
src_height = drm_rect_height(&pipe_cfg->src_rect);
dst_height = drm_rect_height(&pipe_cfg->dst_rect);
fps = drm_mode_vrefresh(mode);
vbp = mode->vtotal - mode->vsync_end;
vpw = mode->vsync_end - mode->vsync_start;
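Read as a worked example, the BW equation in the kerneldoc above multiplies the source rectangle's pixel rate by bytes-per-pixel and refresh, then roughly scales by v_total/dst_h: a destination covering fewer lines of the frame leaves proportionally less scan time to fetch the same source. A minimal sketch with illustrative names, using the same mult_frac() helper the driver uses elsewhere (the real code additionally derives the prefill bandwidth, omitted here):

/* bw = src_w * src_h * bpp * fps * (v_total / v_dest) */
static u64 plane_bw_sketch(u32 src_w, u32 src_h, u32 bpp,
			   u32 fps, u32 v_total, u32 dst_h)
{
	u64 bw = (u64)src_w * src_h * bpp * fps;

	/* v_dest read as the destination height */
	return mult_frac(bw, v_total, dst_h);
}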
@ -202,12 +186,12 @@ static void _dpu_plane_calc_bw(struct drm_plane *plane,
/**
* _dpu_plane_calc_clk - calculate clock required for a plane
* @plane: Pointer to drm plane.
* @pipe_cfg: Pointer to pipe configuration
* Result: Updates calculated clock in the plane state.
* Clock equation: dst_w * v_total * fps * (src_h / dst_h)
*/
static void _dpu_plane_calc_clk(struct drm_plane *plane)
static void _dpu_plane_calc_clk(struct drm_plane *plane, struct dpu_hw_pipe_cfg *pipe_cfg)
{
struct dpu_plane *pdpu = to_dpu_plane(plane);
struct dpu_plane_state *pstate;
struct drm_display_mode *mode;
int dst_width, src_height, dst_height, fps;
@ -215,9 +199,9 @@ static void _dpu_plane_calc_clk(struct drm_plane *plane)
pstate = to_dpu_plane_state(plane->state);
mode = &plane->state->crtc->mode;
src_height = drm_rect_height(&pdpu->pipe_cfg.src_rect);
dst_width = drm_rect_width(&pdpu->pipe_cfg.dst_rect);
dst_height = drm_rect_height(&pdpu->pipe_cfg.dst_rect);
src_height = drm_rect_height(&pipe_cfg->src_rect);
dst_width = drm_rect_width(&pipe_cfg->dst_rect);
dst_height = drm_rect_height(&pipe_cfg->dst_rect);
fps = drm_mode_vrefresh(mode);
pstate->plane_clk =
@ -254,14 +238,17 @@ static int _dpu_plane_calc_fill_level(struct drm_plane *plane,
fixed_buff_size = pdpu->catalog->caps->pixel_ram_size;
list_for_each_entry(tmp, &pdpu->mplane_list, mplane_list) {
u32 tmp_width;
if (!tmp->base.state->visible)
continue;
tmp_width = drm_rect_width(&tmp->base.state->src) >> 16;
DPU_DEBUG("plane%d/%d src_width:%d/%d\n",
pdpu->base.base.id, tmp->base.base.id,
src_width,
drm_rect_width(&tmp->pipe_cfg.src_rect));
tmp_width);
src_width = max_t(u32, src_width,
drm_rect_width(&tmp->pipe_cfg.src_rect));
tmp_width);
}
if (fmt->fetch_planes == DPU_PLANE_PSEUDO_PLANAR) {
@ -321,9 +308,10 @@ static u64 _dpu_plane_get_qos_lut(const struct dpu_qos_lut_tbl *tbl,
* _dpu_plane_set_qos_lut - set QoS LUT of the given plane
* @plane: Pointer to drm plane
* @fb: Pointer to framebuffer associated with the given plane
* @pipe_cfg: Pointer to pipe configuration
*/
static void _dpu_plane_set_qos_lut(struct drm_plane *plane,
struct drm_framebuffer *fb)
struct drm_framebuffer *fb, struct dpu_hw_pipe_cfg *pipe_cfg)
{
struct dpu_plane *pdpu = to_dpu_plane(plane);
const struct dpu_format *fmt = NULL;
@ -337,7 +325,7 @@ static void _dpu_plane_set_qos_lut(struct drm_plane *plane,
fb->format->format,
fb->modifier);
total_fl = _dpu_plane_calc_fill_level(plane, fmt,
drm_rect_width(&pdpu->pipe_cfg.src_rect));
drm_rect_width(&pipe_cfg->src_rect));
if (fmt && DPU_FORMAT_IS_LINEAR(fmt))
lut_usage = DPU_QOS_LUT_USAGE_LINEAR;
@ -348,8 +336,6 @@ static void _dpu_plane_set_qos_lut(struct drm_plane *plane,
qos_lut = _dpu_plane_get_qos_lut(
&pdpu->catalog->perf.qos_lut_tbl[lut_usage], total_fl);
pdpu->pipe_qos_cfg.creq_lut = qos_lut;
trace_dpu_perf_set_qos_luts(pdpu->pipe - SSPP_VIG0,
(fmt) ? fmt->base.pixel_format : 0,
pdpu->is_rt_pipe, total_fl, qos_lut, lut_usage);
@ -359,7 +345,7 @@ static void _dpu_plane_set_qos_lut(struct drm_plane *plane,
fmt ? (char *)&fmt->base.pixel_format : NULL,
pdpu->is_rt_pipe, total_fl, qos_lut);
pdpu->pipe_hw->ops.setup_creq_lut(pdpu->pipe_hw, &pdpu->pipe_qos_cfg);
pdpu->pipe_hw->ops.setup_creq_lut(pdpu->pipe_hw, qos_lut);
}
/**
@ -397,24 +383,21 @@ static void _dpu_plane_set_danger_lut(struct drm_plane *plane,
}
}
pdpu->pipe_qos_cfg.danger_lut = danger_lut;
pdpu->pipe_qos_cfg.safe_lut = safe_lut;
trace_dpu_perf_set_danger_luts(pdpu->pipe - SSPP_VIG0,
(fmt) ? fmt->base.pixel_format : 0,
(fmt) ? fmt->fetch_mode : 0,
pdpu->pipe_qos_cfg.danger_lut,
pdpu->pipe_qos_cfg.safe_lut);
danger_lut,
safe_lut);
DPU_DEBUG_PLANE(pdpu, "pnum:%d fmt: %4.4s mode:%d luts[0x%x, 0x%x]\n",
pdpu->pipe - SSPP_VIG0,
fmt ? (char *)&fmt->base.pixel_format : NULL,
fmt ? fmt->fetch_mode : -1,
pdpu->pipe_qos_cfg.danger_lut,
pdpu->pipe_qos_cfg.safe_lut);
danger_lut,
safe_lut);
pdpu->pipe_hw->ops.setup_danger_safe_lut(pdpu->pipe_hw,
&pdpu->pipe_qos_cfg);
danger_lut, safe_lut);
}
/**
@ -427,47 +410,51 @@ static void _dpu_plane_set_qos_ctrl(struct drm_plane *plane,
bool enable, u32 flags)
{
struct dpu_plane *pdpu = to_dpu_plane(plane);
struct dpu_hw_pipe_qos_cfg pipe_qos_cfg;
memset(&pipe_qos_cfg, 0, sizeof(pipe_qos_cfg));
if (flags & DPU_PLANE_QOS_VBLANK_CTRL) {
pdpu->pipe_qos_cfg.creq_vblank = pdpu->pipe_sblk->creq_vblank;
pdpu->pipe_qos_cfg.danger_vblank =
pdpu->pipe_sblk->danger_vblank;
pdpu->pipe_qos_cfg.vblank_en = enable;
pipe_qos_cfg.creq_vblank = pdpu->pipe_hw->cap->sblk->creq_vblank;
pipe_qos_cfg.danger_vblank =
pdpu->pipe_hw->cap->sblk->danger_vblank;
pipe_qos_cfg.vblank_en = enable;
}
if (flags & DPU_PLANE_QOS_VBLANK_AMORTIZE) {
/* this feature overrules previous VBLANK_CTRL */
pdpu->pipe_qos_cfg.vblank_en = false;
pdpu->pipe_qos_cfg.creq_vblank = 0; /* clear vblank bits */
pipe_qos_cfg.vblank_en = false;
pipe_qos_cfg.creq_vblank = 0; /* clear vblank bits */
}
if (flags & DPU_PLANE_QOS_PANIC_CTRL)
pdpu->pipe_qos_cfg.danger_safe_en = enable;
pipe_qos_cfg.danger_safe_en = enable;
if (!pdpu->is_rt_pipe) {
pdpu->pipe_qos_cfg.vblank_en = false;
pdpu->pipe_qos_cfg.danger_safe_en = false;
pipe_qos_cfg.vblank_en = false;
pipe_qos_cfg.danger_safe_en = false;
}
DPU_DEBUG_PLANE(pdpu, "pnum:%d ds:%d vb:%d pri[0x%x, 0x%x] is_rt:%d\n",
pdpu->pipe - SSPP_VIG0,
pdpu->pipe_qos_cfg.danger_safe_en,
pdpu->pipe_qos_cfg.vblank_en,
pdpu->pipe_qos_cfg.creq_vblank,
pdpu->pipe_qos_cfg.danger_vblank,
pipe_qos_cfg.danger_safe_en,
pipe_qos_cfg.vblank_en,
pipe_qos_cfg.creq_vblank,
pipe_qos_cfg.danger_vblank,
pdpu->is_rt_pipe);
pdpu->pipe_hw->ops.setup_qos_ctrl(pdpu->pipe_hw,
&pdpu->pipe_qos_cfg);
&pipe_qos_cfg);
}
/**
* _dpu_plane_set_ot_limit - set OT limit for the given plane
* @plane: Pointer to drm plane
* @crtc: Pointer to drm crtc
* @pipe_cfg: Pointer to pipe configuration
*/
static void _dpu_plane_set_ot_limit(struct drm_plane *plane,
struct drm_crtc *crtc)
struct drm_crtc *crtc, struct dpu_hw_pipe_cfg *pipe_cfg)
{
struct dpu_plane *pdpu = to_dpu_plane(plane);
struct dpu_vbif_set_ot_params ot_params;
@ -476,8 +463,8 @@ static void _dpu_plane_set_ot_limit(struct drm_plane *plane,
memset(&ot_params, 0, sizeof(ot_params));
ot_params.xin_id = pdpu->pipe_hw->cap->xin_id;
ot_params.num = pdpu->pipe_hw->idx - SSPP_NONE;
ot_params.width = drm_rect_width(&pdpu->pipe_cfg.src_rect);
ot_params.height = drm_rect_height(&pdpu->pipe_cfg.src_rect);
ot_params.width = drm_rect_width(&pipe_cfg->src_rect);
ot_params.height = drm_rect_height(&pipe_cfg->src_rect);
ot_params.is_wfd = !pdpu->is_rt_pipe;
ot_params.frame_rate = drm_mode_vrefresh(&crtc->mode);
ot_params.vbif_idx = VBIF_RT;
@ -541,14 +528,12 @@ static void _dpu_plane_setup_scaler3(struct dpu_plane *pdpu,
struct dpu_plane_state *pstate,
uint32_t src_w, uint32_t src_h, uint32_t dst_w, uint32_t dst_h,
struct dpu_hw_scaler3_cfg *scale_cfg,
struct dpu_hw_pixel_ext *pixel_ext,
const struct dpu_format *fmt,
uint32_t chroma_subsmpl_h, uint32_t chroma_subsmpl_v)
{
uint32_t i;
memset(scale_cfg, 0, sizeof(*scale_cfg));
memset(&pstate->pixel_ext, 0, sizeof(struct dpu_hw_pixel_ext));
scale_cfg->phase_step_x[DPU_SSPP_COMP_0] =
mult_frac((1 << PHASE_STEP_SHIFT), src_w, dst_w);
scale_cfg->phase_step_y[DPU_SSPP_COMP_0] =
@ -587,9 +572,9 @@ static void _dpu_plane_setup_scaler3(struct dpu_plane *pdpu,
scale_cfg->preload_y[i] = DPU_QSEED3_DEFAULT_PRELOAD_V;
}
pstate->pixel_ext.num_ext_pxls_top[i] =
pixel_ext->num_ext_pxls_top[i] =
scale_cfg->src_height[i];
pstate->pixel_ext.num_ext_pxls_left[i] =
pixel_ext->num_ext_pxls_left[i] =
scale_cfg->src_width[i];
}
if (!(DPU_FORMAT_IS_YUV(fmt)) && (src_h == dst_h)
@ -606,68 +591,97 @@ static void _dpu_plane_setup_scaler3(struct dpu_plane *pdpu,
scale_cfg->enable = 1;
}
static void _dpu_plane_setup_csc(struct dpu_plane *pdpu)
{
static const struct dpu_csc_cfg dpu_csc_YUV2RGB_601L = {
{
/* S15.16 format */
0x00012A00, 0x00000000, 0x00019880,
0x00012A00, 0xFFFF9B80, 0xFFFF3000,
0x00012A00, 0x00020480, 0x00000000,
static const struct dpu_csc_cfg dpu_csc_YUV2RGB_601L = {
{
/* S15.16 format */
0x00012A00, 0x00000000, 0x00019880,
0x00012A00, 0xFFFF9B80, 0xFFFF3000,
0x00012A00, 0x00020480, 0x00000000,
},
/* signed bias */
{ 0xfff0, 0xff80, 0xff80,},
{ 0x0, 0x0, 0x0,},
/* unsigned clamp */
{ 0x10, 0xeb, 0x10, 0xf0, 0x10, 0xf0,},
{ 0x00, 0xff, 0x00, 0xff, 0x00, 0xff,},
};
static const struct dpu_csc_cfg dpu_csc10_YUV2RGB_601L = {
{
/* S15.16 format */
0x00012A00, 0x00000000, 0x00019880,
0x00012A00, 0xFFFF9B80, 0xFFFF3000,
0x00012A00, 0x00020480, 0x00000000,
},
/* signed bias */
{ 0xfff0, 0xff80, 0xff80,},
{ 0x0, 0x0, 0x0,},
/* unsigned clamp */
{ 0x10, 0xeb, 0x10, 0xf0, 0x10, 0xf0,},
{ 0x00, 0xff, 0x00, 0xff, 0x00, 0xff,},
};
static const struct dpu_csc_cfg dpu_csc10_YUV2RGB_601L = {
{
/* S15.16 format */
0x00012A00, 0x00000000, 0x00019880,
0x00012A00, 0xFFFF9B80, 0xFFFF3000,
0x00012A00, 0x00020480, 0x00000000,
},
/* signed bias */
{ 0xffc0, 0xfe00, 0xfe00,},
{ 0x0, 0x0, 0x0,},
/* unsigned clamp */
{ 0x40, 0x3ac, 0x40, 0x3c0, 0x40, 0x3c0,},
{ 0x00, 0x3ff, 0x00, 0x3ff, 0x00, 0x3ff,},
};
/* signed bias */
{ 0xffc0, 0xfe00, 0xfe00,},
{ 0x0, 0x0, 0x0,},
/* unsigned clamp */
{ 0x40, 0x3ac, 0x40, 0x3c0, 0x40, 0x3c0,},
{ 0x00, 0x3ff, 0x00, 0x3ff, 0x00, 0x3ff,},
};
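The /* S15.16 format */ rows above are signed fixed point with 16 fractional bits: 0x00012A00 is 76288/65536 ≈ 1.164 and 0xFFFF9B80 is -25728/65536 ≈ -0.393, i.e. the usual BT.601 limited-range YUV-to-RGB matrix. A small standalone sketch of the decoding:

#include <stdint.h>
#include <stdio.h>

/* decode an S15.16 fixed-point CSC coefficient */
static double s15_16(uint32_t raw)
{
	return (double)(int32_t)raw / 65536.0;
}

int main(void)
{
	printf("%.4f\n", s15_16(0x00012A00));	/* 1.1641, Y gain  */
	printf("%.4f\n", s15_16(0xFFFF9B80));	/* -0.3926, Cb -> G */
	printf("%.4f\n", s15_16(0xFFFF3000));	/* -0.8125, Cr -> G */
	return 0;
}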
static const struct dpu_csc_cfg *_dpu_plane_get_csc(struct dpu_plane *pdpu, const struct dpu_format *fmt)
{
const struct dpu_csc_cfg *csc_ptr;
if (!pdpu) {
DPU_ERROR("invalid plane\n");
return;
return NULL;
}
if (BIT(DPU_SSPP_CSC_10BIT) & pdpu->features)
pdpu->csc_ptr = (struct dpu_csc_cfg *)&dpu_csc10_YUV2RGB_601L;
if (!DPU_FORMAT_IS_YUV(fmt))
return NULL;
if (BIT(DPU_SSPP_CSC_10BIT) & pdpu->pipe_hw->cap->features)
csc_ptr = &dpu_csc10_YUV2RGB_601L;
else
pdpu->csc_ptr = (struct dpu_csc_cfg *)&dpu_csc_YUV2RGB_601L;
csc_ptr = &dpu_csc_YUV2RGB_601L;
DPU_DEBUG_PLANE(pdpu, "using 0x%X 0x%X 0x%X...\n",
pdpu->csc_ptr->csc_mv[0],
pdpu->csc_ptr->csc_mv[1],
pdpu->csc_ptr->csc_mv[2]);
csc_ptr->csc_mv[0],
csc_ptr->csc_mv[1],
csc_ptr->csc_mv[2]);
return csc_ptr;
}
static void _dpu_plane_setup_scaler(struct dpu_plane *pdpu,
struct dpu_plane_state *pstate,
const struct dpu_format *fmt, bool color_fill)
const struct dpu_format *fmt, bool color_fill,
struct dpu_hw_pipe_cfg *pipe_cfg)
{
const struct drm_format_info *info = drm_format_info(fmt->base.pixel_format);
struct dpu_hw_scaler3_cfg scaler3_cfg;
struct dpu_hw_pixel_ext pixel_ext;
memset(&scaler3_cfg, 0, sizeof(scaler3_cfg));
memset(&pixel_ext, 0, sizeof(pixel_ext));
/* don't chroma subsample if decimating */
/* update scaler. calculate default config for QSEED3 */
_dpu_plane_setup_scaler3(pdpu, pstate,
drm_rect_width(&pdpu->pipe_cfg.src_rect),
drm_rect_height(&pdpu->pipe_cfg.src_rect),
drm_rect_width(&pdpu->pipe_cfg.dst_rect),
drm_rect_height(&pdpu->pipe_cfg.dst_rect),
&pstate->scaler3_cfg, fmt,
drm_rect_width(&pipe_cfg->src_rect),
drm_rect_height(&pipe_cfg->src_rect),
drm_rect_width(&pipe_cfg->dst_rect),
drm_rect_height(&pipe_cfg->dst_rect),
&scaler3_cfg, &pixel_ext, fmt,
info->hsub, info->vsub);
if (pdpu->pipe_hw->ops.setup_pe)
pdpu->pipe_hw->ops.setup_pe(pdpu->pipe_hw,
&pixel_ext);
/**
* when programmed in multirect mode, the scaler block is
* bypassed. We still need to update alpha and bitwidth,
* but ONLY for RECT0
*/
if (pdpu->pipe_hw->ops.setup_scaler &&
pstate->multirect_index != DPU_SSPP_RECT_1)
pdpu->pipe_hw->ops.setup_scaler(pdpu->pipe_hw,
pipe_cfg,
&scaler3_cfg);
}
/**
@ -683,6 +697,7 @@ static int _dpu_plane_color_fill(struct dpu_plane *pdpu,
const struct dpu_format *fmt;
const struct drm_plane *plane = &pdpu->base;
struct dpu_plane_state *pstate = to_dpu_plane_state(plane->state);
struct dpu_hw_pipe_cfg pipe_cfg;
DPU_DEBUG_PLANE(pdpu, "\n");
@ -699,13 +714,14 @@ static int _dpu_plane_color_fill(struct dpu_plane *pdpu,
pstate->multirect_index);
/* override scaler/decimation if solid fill */
pdpu->pipe_cfg.src_rect.x1 = 0;
pdpu->pipe_cfg.src_rect.y1 = 0;
pdpu->pipe_cfg.src_rect.x2 =
drm_rect_width(&pdpu->pipe_cfg.dst_rect);
pdpu->pipe_cfg.src_rect.y2 =
drm_rect_height(&pdpu->pipe_cfg.dst_rect);
_dpu_plane_setup_scaler(pdpu, pstate, fmt, true);
pipe_cfg.dst_rect = pstate->base.dst;
pipe_cfg.src_rect.x1 = 0;
pipe_cfg.src_rect.y1 = 0;
pipe_cfg.src_rect.x2 =
drm_rect_width(&pipe_cfg.dst_rect);
pipe_cfg.src_rect.y2 =
drm_rect_height(&pipe_cfg.dst_rect);
if (pdpu->pipe_hw->ops.setup_format)
pdpu->pipe_hw->ops.setup_format(pdpu->pipe_hw,
@ -714,18 +730,10 @@ static int _dpu_plane_color_fill(struct dpu_plane *pdpu,
if (pdpu->pipe_hw->ops.setup_rects)
pdpu->pipe_hw->ops.setup_rects(pdpu->pipe_hw,
&pdpu->pipe_cfg,
&pipe_cfg,
pstate->multirect_index);
if (pdpu->pipe_hw->ops.setup_pe)
pdpu->pipe_hw->ops.setup_pe(pdpu->pipe_hw,
&pstate->pixel_ext);
if (pdpu->pipe_hw->ops.setup_scaler &&
pstate->multirect_index != DPU_SSPP_RECT_1)
pdpu->pipe_hw->ops.setup_scaler(pdpu->pipe_hw,
&pdpu->pipe_cfg, &pstate->pixel_ext,
&pstate->scaler3_cfg);
_dpu_plane_setup_scaler(pdpu, pstate, fmt, true, &pipe_cfg);
}
return 0;
@ -964,10 +972,10 @@ static int dpu_plane_atomic_check(struct drm_plane *plane,
crtc_state = drm_atomic_get_new_crtc_state(state,
new_plane_state->crtc);
min_scale = FRAC_16_16(1, pdpu->pipe_sblk->maxupscale);
min_scale = FRAC_16_16(1, pdpu->pipe_hw->cap->sblk->maxupscale);
ret = drm_atomic_helper_check_plane_state(new_plane_state, crtc_state,
min_scale,
pdpu->pipe_sblk->maxdwnscale << 16,
pdpu->pipe_hw->cap->sblk->maxdwnscale << 16,
true, true);
if (ret) {
DPU_DEBUG_PLANE(pdpu, "Check plane state failed (%d)\n", ret);
@ -993,9 +1001,8 @@ static int dpu_plane_atomic_check(struct drm_plane *plane,
min_src_size = DPU_FORMAT_IS_YUV(fmt) ? 2 : 1;
if (DPU_FORMAT_IS_YUV(fmt) &&
(!(pdpu->features & DPU_SSPP_SCALER) ||
!(pdpu->features & (BIT(DPU_SSPP_CSC)
| BIT(DPU_SSPP_CSC_10BIT))))) {
(!(pdpu->pipe_hw->cap->features & DPU_SSPP_SCALER) ||
!(pdpu->pipe_hw->cap->features & DPU_SSPP_CSC_ANY))) {
DPU_DEBUG_PLANE(pdpu,
"plane doesn't have scaler/csc for yuv\n");
return -EINVAL;
@ -1056,8 +1063,13 @@ void dpu_plane_flush(struct drm_plane *plane)
else if (pdpu->color_fill & DPU_PLANE_COLOR_FILL_FLAG)
/* force 100% alpha */
_dpu_plane_color_fill(pdpu, pdpu->color_fill, 0xFF);
else if (pdpu->pipe_hw && pdpu->csc_ptr && pdpu->pipe_hw->ops.setup_csc)
pdpu->pipe_hw->ops.setup_csc(pdpu->pipe_hw, pdpu->csc_ptr);
else if (pdpu->pipe_hw && pdpu->pipe_hw->ops.setup_csc) {
const struct dpu_format *fmt = to_dpu_format(msm_framebuffer_format(plane->state->fb));
const struct dpu_csc_cfg *csc_ptr = _dpu_plane_get_csc(pdpu, fmt);
if (csc_ptr)
pdpu->pipe_hw->ops.setup_csc(pdpu->pipe_hw, csc_ptr);
}
/* flag h/w flush complete */
if (plane->state)
@ -1091,10 +1103,11 @@ static void dpu_plane_sspp_atomic_update(struct drm_plane *plane)
bool is_rt_pipe, update_qos_remap;
const struct dpu_format *fmt =
to_dpu_format(msm_framebuffer_format(fb));
struct dpu_hw_pipe_cfg pipe_cfg;
memset(&(pdpu->pipe_cfg), 0, sizeof(struct dpu_hw_pipe_cfg));
memset(&pipe_cfg, 0, sizeof(struct dpu_hw_pipe_cfg));
_dpu_plane_set_scanout(plane, pstate, &pdpu->pipe_cfg, fb);
_dpu_plane_set_scanout(plane, pstate, &pipe_cfg, fb);
pstate->pending = true;
@ -1106,17 +1119,15 @@ static void dpu_plane_sspp_atomic_update(struct drm_plane *plane)
crtc->base.id, DRM_RECT_ARG(&state->dst),
(char *)&fmt->base.pixel_format, DPU_FORMAT_IS_UBWC(fmt));
pdpu->pipe_cfg.src_rect = state->src;
pipe_cfg.src_rect = state->src;
/* state->src is 16.16, src_rect is not */
pdpu->pipe_cfg.src_rect.x1 >>= 16;
pdpu->pipe_cfg.src_rect.x2 >>= 16;
pdpu->pipe_cfg.src_rect.y1 >>= 16;
pdpu->pipe_cfg.src_rect.y2 >>= 16;
pipe_cfg.src_rect.x1 >>= 16;
pipe_cfg.src_rect.x2 >>= 16;
pipe_cfg.src_rect.y1 >>= 16;
pipe_cfg.src_rect.y2 >>= 16;
pdpu->pipe_cfg.dst_rect = state->dst;
_dpu_plane_setup_scaler(pdpu, pstate, fmt, false);
pipe_cfg.dst_rect = state->dst;
/* override for color fill */
if (pdpu->color_fill & DPU_PLANE_COLOR_FILL_FLAG) {
@ -1126,25 +1137,11 @@ static void dpu_plane_sspp_atomic_update(struct drm_plane *plane)
if (pdpu->pipe_hw->ops.setup_rects) {
pdpu->pipe_hw->ops.setup_rects(pdpu->pipe_hw,
&pdpu->pipe_cfg,
&pipe_cfg,
pstate->multirect_index);
}
if (pdpu->pipe_hw->ops.setup_pe &&
(pstate->multirect_index != DPU_SSPP_RECT_1))
pdpu->pipe_hw->ops.setup_pe(pdpu->pipe_hw,
&pstate->pixel_ext);
/**
* when programmed in multirect mode, the scaler block is
* bypassed. We still need to update alpha and bitwidth,
* but ONLY for RECT0
*/
if (pdpu->pipe_hw->ops.setup_scaler &&
pstate->multirect_index != DPU_SSPP_RECT_1)
pdpu->pipe_hw->ops.setup_scaler(pdpu->pipe_hw,
&pdpu->pipe_cfg, &pstate->pixel_ext,
&pstate->scaler3_cfg);
_dpu_plane_setup_scaler(pdpu, pstate, fmt, false, &pipe_cfg);
if (pdpu->pipe_hw->ops.setup_multirect)
pdpu->pipe_hw->ops.setup_multirect(
@ -1173,35 +1170,29 @@ static void dpu_plane_sspp_atomic_update(struct drm_plane *plane)
pstate->multirect_index);
if (pdpu->pipe_hw->ops.setup_cdp) {
struct dpu_hw_pipe_cdp_cfg *cdp_cfg = &pstate->cdp_cfg;
struct dpu_hw_pipe_cdp_cfg cdp_cfg;
memset(cdp_cfg, 0, sizeof(struct dpu_hw_pipe_cdp_cfg));
memset(&cdp_cfg, 0, sizeof(struct dpu_hw_pipe_cdp_cfg));
cdp_cfg->enable = pdpu->catalog->perf.cdp_cfg
cdp_cfg.enable = pdpu->catalog->perf.cdp_cfg
[DPU_PERF_CDP_USAGE_RT].rd_enable;
cdp_cfg->ubwc_meta_enable =
cdp_cfg.ubwc_meta_enable =
DPU_FORMAT_IS_UBWC(fmt);
cdp_cfg->tile_amortize_enable =
cdp_cfg.tile_amortize_enable =
DPU_FORMAT_IS_UBWC(fmt) ||
DPU_FORMAT_IS_TILE(fmt);
cdp_cfg->preload_ahead = DPU_SSPP_CDP_PRELOAD_AHEAD_64;
cdp_cfg.preload_ahead = DPU_SSPP_CDP_PRELOAD_AHEAD_64;
pdpu->pipe_hw->ops.setup_cdp(pdpu->pipe_hw, cdp_cfg);
pdpu->pipe_hw->ops.setup_cdp(pdpu->pipe_hw, &cdp_cfg, pstate->multirect_index);
}
/* update csc */
if (DPU_FORMAT_IS_YUV(fmt))
_dpu_plane_setup_csc(pdpu);
else
pdpu->csc_ptr = NULL;
}
_dpu_plane_set_qos_lut(plane, fb);
_dpu_plane_set_qos_lut(plane, fb, &pipe_cfg);
_dpu_plane_set_danger_lut(plane, fb);
if (plane->type != DRM_PLANE_TYPE_CURSOR) {
_dpu_plane_set_qos_ctrl(plane, true, DPU_PLANE_QOS_PANIC_CTRL);
_dpu_plane_set_ot_limit(plane, crtc);
_dpu_plane_set_ot_limit(plane, crtc, &pipe_cfg);
}
update_qos_remap = (is_rt_pipe != pdpu->is_rt_pipe) ||
@ -1215,9 +1206,9 @@ static void dpu_plane_sspp_atomic_update(struct drm_plane *plane)
_dpu_plane_set_qos_remap(plane);
}
_dpu_plane_calc_bw(plane, fb);
_dpu_plane_calc_bw(plane, fb, &pipe_cfg);
_dpu_plane_calc_clk(plane);
_dpu_plane_calc_clk(plane, &pipe_cfg);
}
static void _dpu_plane_atomic_disable(struct drm_plane *plane)
@ -1314,6 +1305,46 @@ dpu_plane_duplicate_state(struct drm_plane *plane)
return &pstate->base;
}
static const char * const multirect_mode_name[] = {
[DPU_SSPP_MULTIRECT_NONE] = "none",
[DPU_SSPP_MULTIRECT_PARALLEL] = "parallel",
[DPU_SSPP_MULTIRECT_TIME_MX] = "time_mx",
};
static const char * const multirect_index_name[] = {
[DPU_SSPP_RECT_SOLO] = "solo",
[DPU_SSPP_RECT_0] = "rect_0",
[DPU_SSPP_RECT_1] = "rect_1",
};
static const char *dpu_get_multirect_mode(enum dpu_sspp_multirect_mode mode)
{
if (WARN_ON(mode >= ARRAY_SIZE(multirect_mode_name)))
return "unknown";
return multirect_mode_name[mode];
}
static const char *dpu_get_multirect_index(enum dpu_sspp_multirect_index index)
{
if (WARN_ON(index >= ARRAY_SIZE(multirect_index_name)))
return "unknown";
return multirect_index_name[index];
}
static void dpu_plane_atomic_print_state(struct drm_printer *p,
const struct drm_plane_state *state)
{
const struct dpu_plane_state *pstate = to_dpu_plane_state(state);
const struct dpu_plane *pdpu = to_dpu_plane(state->plane);
drm_printf(p, "\tstage=%d\n", pstate->stage);
drm_printf(p, "\tsspp=%s\n", pdpu->pipe_hw->cap->name);
drm_printf(p, "\tmultirect_mode=%s\n", dpu_get_multirect_mode(pstate->multirect_mode));
drm_printf(p, "\tmultirect_index=%s\n", dpu_get_multirect_index(pstate->multirect_index));
}
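Once wired into drm_plane_funcs below, these lines are appended to each plane's entry in DRM's atomic "state" debugfs file (e.g. /sys/kernel/debug/dri/0/state). With illustrative values, the four drm_printf() calls above produce:

	stage=2
	sspp=sspp_0
	multirect_mode=none
	multirect_index=solo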
static void dpu_plane_reset(struct drm_plane *plane)
{
struct dpu_plane *pdpu;
@ -1343,7 +1374,7 @@ static void dpu_plane_reset(struct drm_plane *plane)
}
#ifdef CONFIG_DEBUG_FS
static void dpu_plane_danger_signal_ctrl(struct drm_plane *plane, bool enable)
void dpu_plane_danger_signal_ctrl(struct drm_plane *plane, bool enable)
{
struct dpu_plane *pdpu = to_dpu_plane(plane);
struct dpu_kms *dpu_kms = _dpu_plane_get_kms(plane);
@ -1356,167 +1387,23 @@ static void dpu_plane_danger_signal_ctrl(struct drm_plane *plane, bool enable)
pm_runtime_put_sync(&dpu_kms->pdev->dev);
}
static ssize_t _dpu_plane_danger_read(struct file *file,
char __user *buff, size_t count, loff_t *ppos)
{
struct dpu_kms *kms = file->private_data;
int len;
char buf[40];
len = scnprintf(buf, sizeof(buf), "%d\n", !kms->has_danger_ctrl);
return simple_read_from_buffer(buff, count, ppos, buf, len);
}
static void _dpu_plane_set_danger_state(struct dpu_kms *kms, bool enable)
/* SSPPs live inside dpu_plane private data only. Enumerate them here. */
void dpu_debugfs_sspp_init(struct dpu_kms *dpu_kms, struct dentry *debugfs_root)
{
struct drm_plane *plane;
struct dentry *entry = debugfs_create_dir("sspp", debugfs_root);
drm_for_each_plane(plane, kms->dev) {
if (plane->fb && plane->state) {
dpu_plane_danger_signal_ctrl(plane, enable);
DPU_DEBUG("plane:%d img:%dx%d ",
plane->base.id, plane->fb->width,
plane->fb->height);
DPU_DEBUG("src[%d,%d,%d,%d] dst[%d,%d,%d,%d]\n",
plane->state->src_x >> 16,
plane->state->src_y >> 16,
plane->state->src_w >> 16,
plane->state->src_h >> 16,
plane->state->crtc_x, plane->state->crtc_y,
plane->state->crtc_w, plane->state->crtc_h);
} else {
DPU_DEBUG("Inactive plane:%d\n", plane->base.id);
}
if (IS_ERR(entry))
return;
drm_for_each_plane(plane, dpu_kms->dev) {
struct dpu_plane *pdpu = to_dpu_plane(plane);
_dpu_hw_sspp_init_debugfs(pdpu->pipe_hw, dpu_kms, entry);
}
}
static ssize_t _dpu_plane_danger_write(struct file *file,
const char __user *user_buf, size_t count, loff_t *ppos)
{
struct dpu_kms *kms = file->private_data;
int disable_panic;
int ret;
ret = kstrtouint_from_user(user_buf, count, 0, &disable_panic);
if (ret)
return ret;
if (disable_panic) {
/* Disable panic signal for all active pipes */
DPU_DEBUG("Disabling danger:\n");
_dpu_plane_set_danger_state(kms, false);
kms->has_danger_ctrl = false;
} else {
/* Enable panic signal for all active pipes */
DPU_DEBUG("Enabling danger:\n");
kms->has_danger_ctrl = true;
_dpu_plane_set_danger_state(kms, true);
}
return count;
}
static const struct file_operations dpu_plane_danger_enable = {
.open = simple_open,
.read = _dpu_plane_danger_read,
.write = _dpu_plane_danger_write,
};
static int _dpu_plane_init_debugfs(struct drm_plane *plane)
{
struct dpu_plane *pdpu = to_dpu_plane(plane);
struct dpu_kms *kms = _dpu_plane_get_kms(plane);
const struct dpu_sspp_cfg *cfg = pdpu->pipe_hw->cap;
const struct dpu_sspp_sub_blks *sblk = cfg->sblk;
/* create overall sub-directory for the pipe */
pdpu->debugfs_root =
debugfs_create_dir(pdpu->pipe_name,
plane->dev->primary->debugfs_root);
/* don't error check these */
debugfs_create_x32("features", 0600,
pdpu->debugfs_root, &pdpu->features);
/* add register dump support */
dpu_debugfs_setup_regset32(&pdpu->debugfs_src,
sblk->src_blk.base + cfg->base,
sblk->src_blk.len,
kms);
dpu_debugfs_create_regset32("src_blk", 0400,
pdpu->debugfs_root, &pdpu->debugfs_src);
if (cfg->features & BIT(DPU_SSPP_SCALER_QSEED3) ||
cfg->features & BIT(DPU_SSPP_SCALER_QSEED3LITE) ||
cfg->features & BIT(DPU_SSPP_SCALER_QSEED2) ||
cfg->features & BIT(DPU_SSPP_SCALER_QSEED4)) {
dpu_debugfs_setup_regset32(&pdpu->debugfs_scaler,
sblk->scaler_blk.base + cfg->base,
sblk->scaler_blk.len,
kms);
dpu_debugfs_create_regset32("scaler_blk", 0400,
pdpu->debugfs_root,
&pdpu->debugfs_scaler);
debugfs_create_bool("default_scaling",
0600,
pdpu->debugfs_root,
&pdpu->debugfs_default_scale);
}
if (cfg->features & BIT(DPU_SSPP_CSC) ||
cfg->features & BIT(DPU_SSPP_CSC_10BIT)) {
dpu_debugfs_setup_regset32(&pdpu->debugfs_csc,
sblk->csc_blk.base + cfg->base,
sblk->csc_blk.len,
kms);
dpu_debugfs_create_regset32("csc_blk", 0400,
pdpu->debugfs_root, &pdpu->debugfs_csc);
}
debugfs_create_u32("xin_id",
0400,
pdpu->debugfs_root,
(u32 *) &cfg->xin_id);
debugfs_create_u32("clk_ctrl",
0400,
pdpu->debugfs_root,
(u32 *) &cfg->clk_ctrl);
debugfs_create_x32("creq_vblank",
0600,
pdpu->debugfs_root,
(u32 *) &sblk->creq_vblank);
debugfs_create_x32("danger_vblank",
0600,
pdpu->debugfs_root,
(u32 *) &sblk->danger_vblank);
debugfs_create_file("disable_danger",
0600,
pdpu->debugfs_root,
kms, &dpu_plane_danger_enable);
return 0;
}
#else
static int _dpu_plane_init_debugfs(struct drm_plane *plane)
{
return 0;
}
#endif
static int dpu_plane_late_register(struct drm_plane *plane)
{
return _dpu_plane_init_debugfs(plane);
}
static void dpu_plane_early_unregister(struct drm_plane *plane)
{
struct dpu_plane *pdpu = to_dpu_plane(plane);
debugfs_remove_recursive(pdpu->debugfs_root);
}
static bool dpu_plane_format_mod_supported(struct drm_plane *plane,
uint32_t format, uint64_t modifier)
{
@ -1541,8 +1428,7 @@ static const struct drm_plane_funcs dpu_plane_funcs = {
.reset = dpu_plane_reset,
.atomic_duplicate_state = dpu_plane_duplicate_state,
.atomic_destroy_state = dpu_plane_destroy_state,
.late_register = dpu_plane_late_register,
.early_unregister = dpu_plane_early_unregister,
.atomic_print_state = dpu_plane_atomic_print_state,
.format_mod_supported = dpu_plane_format_mod_supported,
};
@ -1609,21 +1495,13 @@ struct drm_plane *dpu_plane_init(struct drm_device *dev,
goto clean_sspp;
}
/* cache features mask for later */
pdpu->features = pdpu->pipe_hw->cap->features;
pdpu->pipe_sblk = pdpu->pipe_hw->cap->sblk;
if (!pdpu->pipe_sblk) {
DPU_ERROR("[%u]invalid sblk\n", pipe);
goto clean_sspp;
}
if (pdpu->is_virtual) {
format_list = pdpu->pipe_sblk->virt_format_list;
num_formats = pdpu->pipe_sblk->virt_num_formats;
format_list = pdpu->pipe_hw->cap->sblk->virt_format_list;
num_formats = pdpu->pipe_hw->cap->sblk->virt_num_formats;
}
else {
format_list = pdpu->pipe_sblk->format_list;
num_formats = pdpu->pipe_sblk->num_formats;
format_list = pdpu->pipe_hw->cap->sblk->format_list;
num_formats = pdpu->pipe_hw->cap->sblk->num_formats;
}
ret = drm_universal_plane_init(dev, plane, 0xff, &dpu_plane_funcs,
@ -1663,12 +1541,9 @@ struct drm_plane *dpu_plane_init(struct drm_device *dev,
/* success! finalize initialization */
drm_plane_helper_add(plane, &dpu_plane_helper_funcs);
/* save user friendly pipe name for later */
snprintf(pdpu->pipe_name, DPU_NAME_SIZE, "plane%u", plane->base.id);
mutex_init(&pdpu->lock);
DPU_DEBUG("%s created for pipe:%u id:%u virtual:%u\n", pdpu->pipe_name,
DPU_DEBUG("%s created for pipe:%u id:%u virtual:%u\n", plane->name,
pipe, plane->base.id, master_plane_id);
return plane;
@ -1676,6 +1551,7 @@ struct drm_plane *dpu_plane_init(struct drm_device *dev,
if (pdpu && pdpu->pipe_hw)
dpu_hw_sspp_destroy(pdpu->pipe_hw);
clean_plane:
list_del(&pdpu->mplane_list);
kfree(pdpu);
return ERR_PTR(ret);
}

View file

@ -23,9 +23,6 @@
* @multirect_index: index of the rectangle of SSPP
* @multirect_mode: parallel or time multiplex multirect mode
* @pending: whether the current update is still pending
* @scaler3_cfg: configuration data for scaler3
* @pixel_ext: configuration data for pixel extensions
* @cdp_cfg: CDP configuration
* @plane_fetch_bw: calculated BW per plane
* @plane_clk: calculated clk per plane
*/
@ -38,11 +35,6 @@ struct dpu_plane_state {
uint32_t multirect_mode;
bool pending;
/* scaler configuration */
struct dpu_hw_scaler3_cfg scaler3_cfg;
struct dpu_hw_pixel_ext pixel_ext;
struct dpu_hw_pipe_cdp_cfg cdp_cfg;
u64 plane_fetch_bw;
u64 plane_clk;
};
@ -134,4 +126,10 @@ void dpu_plane_clear_multirect(const struct drm_plane_state *drm_state);
int dpu_plane_color_fill(struct drm_plane *plane,
uint32_t color, uint32_t alpha);
#ifdef CONFIG_DEBUG_FS
void dpu_plane_danger_signal_ctrl(struct drm_plane *plane, bool enable);
#else
static inline void dpu_plane_danger_signal_ctrl(struct drm_plane *plane, bool enable) {}
#endif
#endif /* _DPU_PLANE_H_ */

View file

@ -266,10 +266,6 @@ DEFINE_EVENT(dpu_drm_obj_template, dpu_crtc_complete_commit,
TP_PROTO(uint32_t drm_id),
TP_ARGS(drm_id)
);
DEFINE_EVENT(dpu_drm_obj_template, dpu_kms_enc_enable,
TP_PROTO(uint32_t drm_id),
TP_ARGS(drm_id)
);
DEFINE_EVENT(dpu_drm_obj_template, dpu_kms_commit,
TP_PROTO(uint32_t drm_id),
TP_ARGS(drm_id)

View file

@ -370,22 +370,7 @@ static int modeset_init_intf(struct mdp5_kms *mdp5_kms,
switch (intf->type) {
case INTF_eDP:
if (!priv->edp)
break;
ctl = mdp5_ctlm_request(ctlm, intf->num);
if (!ctl) {
ret = -EINVAL;
break;
}
encoder = construct_encoder(mdp5_kms, intf, ctl);
if (IS_ERR(encoder)) {
ret = PTR_ERR(encoder);
break;
}
ret = msm_edp_modeset_init(priv->edp, dev, encoder);
DRM_DEV_INFO(dev->dev, "Skipping eDP interface %d\n", intf->num);
break;
case INTF_HDMI:
if (!priv->hdmi)
@ -936,7 +921,8 @@ static int mdp5_init(struct platform_device *pdev, struct drm_device *dev)
static int mdp5_bind(struct device *dev, struct device *master, void *data)
{
struct drm_device *ddev = dev_get_drvdata(master);
struct msm_drm_private *priv = dev_get_drvdata(master);
struct drm_device *ddev = priv->dev;
struct platform_device *pdev = to_platform_device(dev);
DBG("");
@ -1031,7 +1017,7 @@ static const struct dev_pm_ops mdp5_pm_ops = {
SET_RUNTIME_PM_OPS(mdp5_runtime_suspend, mdp5_runtime_resume, NULL)
};
static const struct of_device_id mdp5_dt_match[] = {
const struct of_device_id mdp5_dt_match[] = {
{ .compatible = "qcom,mdp5", },
/* to support downstream DT files */
{ .compatible = "qcom,mdss_mdp", },

View file

@ -16,8 +16,6 @@ struct mdp5_mdss {
void __iomem *mmio, *vbif;
struct regulator *vdd;
struct clk *ahb_clk;
struct clk *axi_clk;
struct clk *vsync_clk;
@ -114,7 +112,7 @@ static const struct irq_domain_ops mdss_hw_irqdomain_ops = {
static int mdss_irq_domain_init(struct mdp5_mdss *mdp5_mdss)
{
struct device *dev = mdp5_mdss->base.dev->dev;
struct device *dev = mdp5_mdss->base.dev;
struct irq_domain *d;
d = irq_domain_add_linear(dev->of_node, 32, &mdss_hw_irqdomain_ops,
@ -157,7 +155,7 @@ static int mdp5_mdss_disable(struct msm_mdss *mdss)
static int msm_mdss_get_clocks(struct mdp5_mdss *mdp5_mdss)
{
struct platform_device *pdev =
to_platform_device(mdp5_mdss->base.dev->dev);
to_platform_device(mdp5_mdss->base.dev);
mdp5_mdss->ahb_clk = msm_clk_get(pdev, "iface");
if (IS_ERR(mdp5_mdss->ahb_clk))
@ -174,10 +172,9 @@ static int msm_mdss_get_clocks(struct mdp5_mdss *mdp5_mdss)
return 0;
}
static void mdp5_mdss_destroy(struct drm_device *dev)
static void mdp5_mdss_destroy(struct msm_mdss *mdss)
{
struct msm_drm_private *priv = dev->dev_private;
struct mdp5_mdss *mdp5_mdss = to_mdp5_mdss(priv->mdss);
struct mdp5_mdss *mdp5_mdss = to_mdp5_mdss(mdss);
if (!mdp5_mdss)
return;
@ -185,9 +182,7 @@ static void mdp5_mdss_destroy(struct drm_device *dev)
irq_domain_remove(mdp5_mdss->irqcontroller.domain);
mdp5_mdss->irqcontroller.domain = NULL;
regulator_disable(mdp5_mdss->vdd);
pm_runtime_disable(dev->dev);
pm_runtime_disable(mdss->dev);
}
static const struct msm_mdss_funcs mdss_funcs = {
@ -196,25 +191,24 @@ static const struct msm_mdss_funcs mdss_funcs = {
.destroy = mdp5_mdss_destroy,
};
int mdp5_mdss_init(struct drm_device *dev)
int mdp5_mdss_init(struct platform_device *pdev)
{
struct platform_device *pdev = to_platform_device(dev->dev);
struct msm_drm_private *priv = dev->dev_private;
struct msm_drm_private *priv = platform_get_drvdata(pdev);
struct mdp5_mdss *mdp5_mdss;
int ret;
DBG("");
if (!of_device_is_compatible(dev->dev->of_node, "qcom,mdss"))
if (!of_device_is_compatible(pdev->dev.of_node, "qcom,mdss"))
return 0;
mdp5_mdss = devm_kzalloc(dev->dev, sizeof(*mdp5_mdss), GFP_KERNEL);
mdp5_mdss = devm_kzalloc(&pdev->dev, sizeof(*mdp5_mdss), GFP_KERNEL);
if (!mdp5_mdss) {
ret = -ENOMEM;
goto fail;
}
mdp5_mdss->base.dev = dev;
mdp5_mdss->base.dev = &pdev->dev;
mdp5_mdss->mmio = msm_ioremap(pdev, "mdss_phys", "MDSS");
if (IS_ERR(mdp5_mdss->mmio)) {
@ -230,45 +224,29 @@ int mdp5_mdss_init(struct drm_device *dev)
ret = msm_mdss_get_clocks(mdp5_mdss);
if (ret) {
DRM_DEV_ERROR(dev->dev, "failed to get clocks: %d\n", ret);
DRM_DEV_ERROR(&pdev->dev, "failed to get clocks: %d\n", ret);
goto fail;
}
/* Regulator to enable GDSCs in downstream kernels */
mdp5_mdss->vdd = devm_regulator_get(dev->dev, "vdd");
if (IS_ERR(mdp5_mdss->vdd)) {
ret = PTR_ERR(mdp5_mdss->vdd);
goto fail;
}
ret = regulator_enable(mdp5_mdss->vdd);
if (ret) {
DRM_DEV_ERROR(dev->dev, "failed to enable regulator vdd: %d\n",
ret);
goto fail;
}
ret = devm_request_irq(dev->dev, platform_get_irq(pdev, 0),
ret = devm_request_irq(&pdev->dev, platform_get_irq(pdev, 0),
mdss_irq, 0, "mdss_isr", mdp5_mdss);
if (ret) {
DRM_DEV_ERROR(dev->dev, "failed to init irq: %d\n", ret);
goto fail_irq;
DRM_DEV_ERROR(&pdev->dev, "failed to init irq: %d\n", ret);
goto fail;
}
ret = mdss_irq_domain_init(mdp5_mdss);
if (ret) {
DRM_DEV_ERROR(dev->dev, "failed to init sub-block irqs: %d\n", ret);
goto fail_irq;
DRM_DEV_ERROR(&pdev->dev, "failed to init sub-block irqs: %d\n", ret);
goto fail;
}
mdp5_mdss->base.funcs = &mdss_funcs;
priv->mdss = &mdp5_mdss->base;
pm_runtime_enable(dev->dev);
pm_runtime_enable(&pdev->dev);
return 0;
fail_irq:
regulator_disable(mdp5_mdss->vdd);
fail:
return ret;
}

View file

@ -28,29 +28,42 @@ static ssize_t __maybe_unused disp_devcoredump_read(char *buffer, loff_t offset,
return count - iter.remain;
}
static void _msm_disp_snapshot_work(struct kthread_work *work)
struct msm_disp_state *
msm_disp_snapshot_state_sync(struct msm_kms *kms)
{
struct msm_kms *kms = container_of(work, struct msm_kms, dump_work);
struct drm_device *drm_dev = kms->dev;
struct msm_disp_state *disp_state;
struct drm_printer p;
WARN_ON(!mutex_is_locked(&kms->dump_mutex));
disp_state = kzalloc(sizeof(struct msm_disp_state), GFP_KERNEL);
if (!disp_state)
return;
return ERR_PTR(-ENOMEM);
disp_state->dev = drm_dev->dev;
disp_state->drm_dev = drm_dev;
INIT_LIST_HEAD(&disp_state->blocks);
/* Serialize dumping here */
mutex_lock(&kms->dump_mutex);
msm_disp_snapshot_capture_state(disp_state);
return disp_state;
}
static void _msm_disp_snapshot_work(struct kthread_work *work)
{
struct msm_kms *kms = container_of(work, struct msm_kms, dump_work);
struct msm_disp_state *disp_state;
struct drm_printer p;
/* Serialize dumping here */
mutex_lock(&kms->dump_mutex);
disp_state = msm_disp_snapshot_state_sync(kms);
mutex_unlock(&kms->dump_mutex);
if (IS_ERR(disp_state))
return;
if (MSM_DISP_SNAPSHOT_DUMP_IN_CONSOLE) {
p = drm_info_printer(disp_state->drm_dev->dev);
msm_disp_state_print(disp_state, &p);

View file

@ -39,7 +39,7 @@
* @dev: device pointer
* @drm_dev: drm device pointer
* @atomic_state: atomic state duplicated at the time of the error
* @timestamp: timestamp at which the coredump was captured
* @time: timestamp at which the coredump was captured
*/
struct msm_disp_state {
struct device *dev;
@ -49,7 +49,7 @@ struct msm_disp_state {
struct drm_atomic_state *atomic_state;
ktime_t timestamp;
struct timespec64 time;
};
/**
@ -84,6 +84,16 @@ int msm_disp_snapshot_init(struct drm_device *drm_dev);
*/
void msm_disp_snapshot_destroy(struct drm_device *drm_dev);
/**
* msm_disp_snapshot_state_sync - synchronously snapshot display state
* @kms: the kms object
*
* Returns state or error
*
* Must be called with &kms->dump_mutex held
*/
struct msm_disp_state *msm_disp_snapshot_state_sync(struct msm_kms *kms);
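A minimal caller sketch for the synchronous variant, assuming a struct msm_kms *kms in scope; it follows the locking rule stated above and the ERR_PTR convention the function returns, mirroring the devcoredump worker shown earlier:

struct msm_disp_state *state;

mutex_lock(&kms->dump_mutex);
state = msm_disp_snapshot_state_sync(kms);
mutex_unlock(&kms->dump_mutex);

if (IS_ERR(state))
	return PTR_ERR(state);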
/**
* msm_disp_snapshot_state - trigger to dump the display snapshot
* @drm_dev: handle to drm device

View file

@ -5,6 +5,8 @@
#define pr_fmt(fmt) "[drm:%s:%d] " fmt, __func__, __LINE__
#include <generated/utsrelease.h>
#include "msm_disp_snapshot.h"
static void msm_disp_state_dump_regs(u32 **reg, u32 aligned_len, void __iomem *base_addr)
@ -79,10 +81,11 @@ void msm_disp_state_print(struct msm_disp_state *state, struct drm_printer *p)
}
drm_printf(p, "---\n");
drm_printf(p, "kernel: " UTS_RELEASE "\n");
drm_printf(p, "module: " KBUILD_MODNAME "\n");
drm_printf(p, "dpu devcoredump\n");
drm_printf(p, "timestamp %lld\n", ktime_to_ns(state->timestamp));
drm_printf(p, "time: %lld.%09ld\n",
state->time.tv_sec, state->time.tv_nsec);
list_for_each_entry_safe(block, tmp, &state->blocks, node) {
drm_printf(p, "====================%s================\n", block->name);
@ -100,7 +103,7 @@ static void msm_disp_capture_atomic_state(struct msm_disp_state *disp_state)
struct drm_device *ddev;
struct drm_modeset_acquire_ctx ctx;
disp_state->timestamp = ktime_get();
ktime_get_real_ts64(&disp_state->time);
ddev = disp_state->drm_dev;

View file

@ -119,13 +119,13 @@ void dp_ctrl_push_idle(struct dp_ctrl *dp_ctrl)
static void dp_ctrl_config_ctrl(struct dp_ctrl_private *ctrl)
{
u32 config = 0, tbd;
u8 *dpcd = ctrl->panel->dpcd;
const u8 *dpcd = ctrl->panel->dpcd;
/* Default-> LSCLK DIV: 1/4 LCLK */
config |= (2 << DP_CONFIGURATION_CTRL_LSCLK_DIV_SHIFT);
/* Scrambler reset enable */
if (dpcd[DP_EDP_CONFIGURATION_CAP] & DP_ALTERNATE_SCRAMBLER_RESET_CAP)
if (drm_dp_alternate_scrambler_reset_cap(dpcd))
config |= DP_CONFIGURATION_CTRL_ASSR;
tbd = dp_link_get_test_bits_depth(ctrl->link,
@ -1228,7 +1228,10 @@ static int dp_ctrl_link_train(struct dp_ctrl_private *ctrl,
int *training_step)
{
int ret = 0;
const u8 *dpcd = ctrl->panel->dpcd;
u8 encoding = DP_SET_ANSI_8B10B;
u8 ssc;
u8 assr;
struct dp_link_info link_info = {0};
dp_ctrl_config_ctrl(ctrl);
@ -1238,9 +1241,21 @@ static int dp_ctrl_link_train(struct dp_ctrl_private *ctrl,
link_info.capabilities = DP_LINK_CAP_ENHANCED_FRAMING;
dp_aux_link_configure(ctrl->aux, &link_info);
if (drm_dp_max_downspread(dpcd)) {
ssc = DP_SPREAD_AMP_0_5;
drm_dp_dpcd_write(ctrl->aux, DP_DOWNSPREAD_CTRL, &ssc, 1);
}
drm_dp_dpcd_write(ctrl->aux, DP_MAIN_LINK_CHANNEL_CODING_SET,
&encoding, 1);
if (drm_dp_alternate_scrambler_reset_cap(dpcd)) {
assr = DP_ALTERNATE_SCRAMBLER_RESET_ENABLE;
drm_dp_dpcd_write(ctrl->aux, DP_EDP_CONFIGURATION_SET,
&assr, 1);
}
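	/*
	 * Together, the DPCD writes above prime the sink before training:
	 * 0.5% downspread when the sink supports spread-spectrum clocking,
	 * ANSI 8b/10b channel coding, and eDP alternate scrambler reset
	 * (ASSR) where the sink advertises it.
	 */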
ret = dp_ctrl_link_train_1(ctrl, training_step);
if (ret) {
DRM_ERROR("link training #1 failed. ret=%d\n", ret);
@ -1312,9 +1327,11 @@ static int dp_ctrl_enable_mainlink_clocks(struct dp_ctrl_private *ctrl)
struct dp_io *dp_io = &ctrl->parser->io;
struct phy *phy = dp_io->phy;
struct phy_configure_opts_dp *opts_dp = &dp_io->phy_opts.dp;
const u8 *dpcd = ctrl->panel->dpcd;
opts_dp->lanes = ctrl->link->link_params.num_lanes;
opts_dp->link_rate = ctrl->link->link_params.rate / 100;
opts_dp->ssc = drm_dp_max_downspread(dpcd);
dp_ctrl_set_clock_rate(ctrl, DP_CTRL_PM, "ctrl_link",
ctrl->link->link_params.rate * 1000);
@ -1406,7 +1423,7 @@ void dp_ctrl_host_deinit(struct dp_ctrl *dp_ctrl)
static bool dp_ctrl_use_fixed_nvid(struct dp_ctrl_private *ctrl)
{
u8 *dpcd = ctrl->panel->dpcd;
const u8 *dpcd = ctrl->panel->dpcd;
/*
* For better interop experience, used a fixed NVID=0x8000

View file

@ -135,8 +135,18 @@ static const struct msm_dp_config sc7180_dp_cfg = {
.num_descs = 1,
};
static const struct msm_dp_config sc7280_dp_cfg = {
.descs = (const struct msm_dp_desc[]) {
[MSM_DP_CONTROLLER_0] = { .io_start = 0x0ae90000, .connector_type = DRM_MODE_CONNECTOR_DisplayPort },
[MSM_DP_CONTROLLER_1] = { .io_start = 0x0aea0000, .connector_type = DRM_MODE_CONNECTOR_eDP },
},
.num_descs = 2,
};
static const struct of_device_id dp_dt_match[] = {
{ .compatible = "qcom,sc7180-dp", .data = &sc7180_dp_cfg },
{ .compatible = "qcom,sc7280-dp", .data = &sc7280_dp_cfg },
{ .compatible = "qcom,sc7280-edp", .data = &sc7280_dp_cfg },
{}
};
@ -224,13 +234,10 @@ static int dp_display_bind(struct device *dev, struct device *master,
{
int rc = 0;
struct dp_display_private *dp = dev_get_dp_display_private(dev);
struct msm_drm_private *priv;
struct drm_device *drm;
drm = dev_get_drvdata(master);
struct msm_drm_private *priv = dev_get_drvdata(master);
struct drm_device *drm = priv->dev;
dp->dp_display.drm_dev = drm;
priv = drm->dev_private;
priv->dp[dp->id] = &dp->dp_display;
rc = dp->parser->parse(dp->parser, dp->dp_display.connector_type);
@ -266,8 +273,7 @@ static void dp_display_unbind(struct device *dev, struct device *master,
void *data)
{
struct dp_display_private *dp = dev_get_dp_display_private(dev);
struct drm_device *drm = dev_get_drvdata(master);
struct msm_drm_private *priv = drm->dev_private;
struct msm_drm_private *priv = dev_get_drvdata(master);
dp_power_client_deinit(dp->power);
dp_aux_unregister(dp->aux);
@ -410,12 +416,11 @@ static int dp_display_usbpd_configure_cb(struct device *dev)
static int dp_display_usbpd_disconnect_cb(struct device *dev)
{
int rc = 0;
struct dp_display_private *dp = dev_get_dp_display_private(dev);
dp_add_event(dp, EV_USER_NOTIFICATION, false, 0);
return rc;
return 0;
}
static void dp_display_handle_video_request(struct dp_display_private *dp)
@ -522,11 +527,8 @@ static int dp_hpd_plug_handle(struct dp_display_private *dp, u32 data)
dp->hpd_state = ST_CONNECT_PENDING;
hpd->hpd_high = 1;
ret = dp_display_usbpd_configure_cb(&dp->pdev->dev);
if (ret) { /* link train failed */
hpd->hpd_high = 0;
dp->hpd_state = ST_DISCONNECTED;
if (ret == -ECONNRESET) { /* cable unplugged */
@ -603,7 +605,6 @@ static int dp_hpd_unplug_handle(struct dp_display_private *dp, u32 data)
/* triggered by irq_hdp with sink_count = 0 */
if (dp->link->sink_count == 0) {
dp_ctrl_off_phy(dp->ctrl);
hpd->hpd_high = 0;
dp->core_initialized = false;
}
mutex_unlock(&dp->event_mutex);
@ -627,8 +628,6 @@ static int dp_hpd_unplug_handle(struct dp_display_private *dp, u32 data)
/* disable HPD plug interrupts */
dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_PLUG_INT_MASK, false);
hpd->hpd_high = 0;
/*
* We don't need separate work for disconnect as
* connect/attention interrupts are disabled
@ -693,9 +692,15 @@ static int dp_irq_hpd_handle(struct dp_display_private *dp, u32 data)
return 0;
}
ret = dp_display_usbpd_attention_cb(&dp->pdev->dev);
if (ret == -ECONNRESET) { /* cable unplugged */
dp->core_initialized = false;
/*
* dp core (ahb/aux clks) must be initialized before
* the irq_hpd can be handled
*/
if (dp->core_initialized) {
ret = dp_display_usbpd_attention_cb(&dp->pdev->dev);
if (ret == -ECONNRESET) { /* cable unplugged */
dp->core_initialized = false;
}
}
DRM_DEBUG_DP("hpd_state=%d\n", state);
@ -707,9 +712,9 @@ static int dp_irq_hpd_handle(struct dp_display_private *dp, u32 data)
static void dp_display_deinit_sub_modules(struct dp_display_private *dp)
{
dp_debug_put(dp->debug);
dp_audio_put(dp->audio);
dp_panel_put(dp->panel);
dp_aux_put(dp->aux);
dp_audio_put(dp->audio);
}
static int dp_init_sub_modules(struct dp_display_private *dp)
@ -1481,6 +1486,18 @@ int msm_dp_modeset_init(struct msm_dp *dp_display, struct drm_device *dev,
}
priv->connectors[priv->num_connectors++] = dp_display->connector;
dp_display->bridge = msm_dp_bridge_init(dp_display, dev, encoder);
if (IS_ERR(dp_display->bridge)) {
ret = PTR_ERR(dp_display->bridge);
DRM_DEV_ERROR(dev->dev,
"failed to create dp bridge: %d\n", ret);
dp_display->bridge = NULL;
return ret;
}
priv->bridges[priv->num_bridges++] = dp_display->bridge;
return 0;
}
@ -1584,8 +1601,8 @@ int msm_dp_display_disable(struct msm_dp *dp, struct drm_encoder *encoder)
}
void msm_dp_display_mode_set(struct msm_dp *dp, struct drm_encoder *encoder,
struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode)
const struct drm_display_mode *mode,
const struct drm_display_mode *adjusted_mode)
{
struct dp_display_private *dp_display;

View file

@ -13,6 +13,7 @@
struct msm_dp {
struct drm_device *drm_dev;
struct device *codec_dev;
struct drm_bridge *bridge;
struct drm_connector *connector;
struct drm_encoder *encoder;
struct drm_bridge *panel_bridge;

View file

@ -12,6 +12,14 @@
#include "msm_kms.h"
#include "dp_drm.h"
struct msm_dp_bridge {
struct drm_bridge bridge;
struct msm_dp *dp_display;
};
#define to_dp_display(x) container_of((x), struct msm_dp_bridge, bridge)
struct dp_connector {
struct drm_connector base;
struct msm_dp *dp_display;
@ -173,3 +181,70 @@ struct drm_connector *dp_drm_connector_init(struct msm_dp *dp_display)
return connector;
}
static void dp_bridge_mode_set(struct drm_bridge *drm_bridge,
const struct drm_display_mode *mode,
const struct drm_display_mode *adjusted_mode)
{
struct msm_dp_bridge *dp_bridge = to_dp_display(drm_bridge);
struct msm_dp *dp_display = dp_bridge->dp_display;
msm_dp_display_mode_set(dp_display, drm_bridge->encoder, mode, adjusted_mode);
}
static void dp_bridge_enable(struct drm_bridge *drm_bridge)
{
struct msm_dp_bridge *dp_bridge = to_dp_display(drm_bridge);
struct msm_dp *dp_display = dp_bridge->dp_display;
msm_dp_display_enable(dp_display, drm_bridge->encoder);
}
static void dp_bridge_disable(struct drm_bridge *drm_bridge)
{
struct msm_dp_bridge *dp_bridge = to_dp_display(drm_bridge);
struct msm_dp *dp_display = dp_bridge->dp_display;
msm_dp_display_pre_disable(dp_display, drm_bridge->encoder);
}
static void dp_bridge_post_disable(struct drm_bridge *drm_bridge)
{
struct msm_dp_bridge *dp_bridge = to_dp_display(drm_bridge);
struct msm_dp *dp_display = dp_bridge->dp_display;
msm_dp_display_disable(dp_display, drm_bridge->encoder);
}
static const struct drm_bridge_funcs dp_bridge_ops = {
.enable = dp_bridge_enable,
.disable = dp_bridge_disable,
.post_disable = dp_bridge_post_disable,
.mode_set = dp_bridge_mode_set,
};
struct drm_bridge *msm_dp_bridge_init(struct msm_dp *dp_display, struct drm_device *dev,
struct drm_encoder *encoder)
{
int rc;
struct msm_dp_bridge *dp_bridge;
struct drm_bridge *bridge;
dp_bridge = devm_kzalloc(dev->dev, sizeof(*dp_bridge), GFP_KERNEL);
if (!dp_bridge)
return ERR_PTR(-ENOMEM);
dp_bridge->dp_display = dp_display;
bridge = &dp_bridge->bridge;
bridge->funcs = &dp_bridge_ops;
bridge->encoder = encoder;
rc = drm_bridge_attach(encoder, bridge, NULL, DRM_BRIDGE_ATTACH_NO_CONNECTOR);
if (rc) {
DRM_ERROR("failed to attach bridge, rc=%d\n", rc);
return ERR_PTR(rc);
}
return bridge;
}

View file

@ -32,8 +32,6 @@ int dp_hpd_connect(struct dp_usbpd *dp_usbpd, bool hpd)
hpd_priv = container_of(dp_usbpd, struct dp_hpd_private,
dp_usbpd);
dp_usbpd->hpd_high = hpd;
if (!hpd_priv->dp_cb || !hpd_priv->dp_cb->configure
|| !hpd_priv->dp_cb->disconnect) {
pr_err("hpd dp_cb not initialized\n");

View file

@ -26,7 +26,6 @@ enum plug_orientation {
* @multi_func: multi-function preferred
* @usb_config_req: request to switch to usb
* @exit_dp_mode: request exit from displayport mode
* @hpd_high: Hot Plug Detect signal is high.
* @hpd_irq: Change in the status since last message
* @alt_mode_cfg_done: bool to specify alt mode status
* @debug_en: bool to specify debug mode
@ -39,7 +38,6 @@ struct dp_usbpd {
bool multi_func;
bool usb_config_req;
bool exit_dp_mode;
bool hpd_high;
bool hpd_irq;
bool alt_mode_cfg_done;
bool debug_en;

View file

@ -737,18 +737,25 @@ static int dp_link_parse_sink_count(struct dp_link *dp_link)
return 0;
}
static void dp_link_parse_sink_status_field(struct dp_link_private *link)
static int dp_link_parse_sink_status_field(struct dp_link_private *link)
{
int len = 0;
link->prev_sink_count = link->dp_link.sink_count;
dp_link_parse_sink_count(&link->dp_link);
len = dp_link_parse_sink_count(&link->dp_link);
if (len < 0) {
DRM_ERROR("DP parse sink count failed\n");
return len;
}
len = drm_dp_dpcd_read_link_status(link->aux,
link->link_status);
if (len < DP_LINK_STATUS_SIZE)
if (len < DP_LINK_STATUS_SIZE) {
DRM_ERROR("DP link status read failed\n");
dp_link_parse_request(link);
return len;
}
return dp_link_parse_request(link);
}
/**
@ -1023,7 +1030,9 @@ int dp_link_process_request(struct dp_link *dp_link)
dp_link_reset_data(link);
dp_link_parse_sink_status_field(link);
ret = dp_link_parse_sink_status_field(link);
if (ret)
return ret;
if (link->request.test_requested == DP_TEST_LINK_EDID_READ) {
dp_link->sink_request |= DP_TEST_LINK_EDID_READ;

View file

@ -110,8 +110,7 @@ static struct msm_dsi *dsi_init(struct platform_device *pdev)
static int dsi_bind(struct device *dev, struct device *master, void *data)
{
struct drm_device *drm = dev_get_drvdata(master);
struct msm_drm_private *priv = drm->dev_private;
struct msm_drm_private *priv = dev_get_drvdata(master);
struct msm_dsi *msm_dsi = dev_get_drvdata(dev);
priv->dsi[msm_dsi->id] = msm_dsi;
@ -122,8 +121,7 @@ static int dsi_bind(struct device *dev, struct device *master, void *data)
static void dsi_unbind(struct device *dev, struct device *master,
void *data)
{
struct drm_device *drm = dev_get_drvdata(master);
struct msm_drm_private *priv = drm->dev_private;
struct msm_drm_private *priv = dev_get_drvdata(master);
struct msm_dsi *msm_dsi = dev_get_drvdata(dev);
priv->dsi[msm_dsi->id] = NULL;
@ -225,9 +223,13 @@ int msm_dsi_modeset_init(struct msm_dsi *msm_dsi, struct drm_device *dev,
goto fail;
}
if (!msm_dsi_manager_validate_current_config(msm_dsi->id)) {
ret = -EINVAL;
goto fail;
if (msm_dsi_is_bonded_dsi(msm_dsi) &&
!msm_dsi_is_master_dsi(msm_dsi)) {
/*
* Do not return an error here;
* just skip creating the encoder/connector for the slave-DSI.
*/
return 0;
}
msm_dsi->encoder = encoder;

View file

@ -82,7 +82,6 @@ int msm_dsi_manager_cmd_xfer(int id, const struct mipi_dsi_msg *msg);
bool msm_dsi_manager_cmd_xfer_trigger(int id, u32 dma_base, u32 len);
int msm_dsi_manager_register(struct msm_dsi *msm_dsi);
void msm_dsi_manager_unregister(struct msm_dsi *msm_dsi);
bool msm_dsi_manager_validate_current_config(u8 id);
void msm_dsi_manager_tpg_enable(void);
/* msm dsi */
@ -120,6 +119,8 @@ unsigned long msm_dsi_host_get_mode_flags(struct mipi_dsi_host *host);
struct drm_bridge *msm_dsi_host_get_bridge(struct mipi_dsi_host *host);
int msm_dsi_host_register(struct mipi_dsi_host *host);
void msm_dsi_host_unregister(struct mipi_dsi_host *host);
void msm_dsi_host_set_phy_mode(struct mipi_dsi_host *host,
struct msm_dsi_phy *src_phy);
int msm_dsi_host_set_src_pll(struct mipi_dsi_host *host,
struct msm_dsi_phy *src_phy);
void msm_dsi_host_reset_phy(struct mipi_dsi_host *host);
@ -173,8 +174,6 @@ int msm_dsi_phy_enable(struct msm_dsi_phy *phy,
void msm_dsi_phy_disable(struct msm_dsi_phy *phy);
void msm_dsi_phy_set_usecase(struct msm_dsi_phy *phy,
enum msm_dsi_phy_usecase uc);
int msm_dsi_phy_get_clk_provider(struct msm_dsi_phy *phy,
struct clk **byte_clk_provider, struct clk **pixel_clk_provider);
void msm_dsi_phy_pll_save_state(struct msm_dsi_phy *phy);
int msm_dsi_phy_pll_restore_state(struct msm_dsi_phy *phy);
void msm_dsi_phy_snapshot(struct msm_disp_state *disp_state, struct msm_dsi_phy *phy);

View file

@ -2020,7 +2020,7 @@ void msm_dsi_host_xfer_restore(struct mipi_dsi_host *host,
/* TODO: unvote for bus bandwidth */
cfg_hnd->ops->link_clk_disable(msm_host);
pm_runtime_put_autosuspend(&msm_host->pdev->dev);
pm_runtime_put(&msm_host->pdev->dev);
}
int msm_dsi_host_cmd_tx(struct mipi_dsi_host *host,
@ -2179,57 +2179,12 @@ void msm_dsi_host_cmd_xfer_commit(struct mipi_dsi_host *host, u32 dma_base,
wmb();
}
int msm_dsi_host_set_src_pll(struct mipi_dsi_host *host,
void msm_dsi_host_set_phy_mode(struct mipi_dsi_host *host,
struct msm_dsi_phy *src_phy)
{
struct msm_dsi_host *msm_host = to_msm_dsi_host(host);
struct clk *byte_clk_provider, *pixel_clk_provider;
int ret;
msm_host->cphy_mode = src_phy->cphy_mode;
ret = msm_dsi_phy_get_clk_provider(src_phy,
&byte_clk_provider, &pixel_clk_provider);
if (ret) {
pr_info("%s: can't get provider from pll, don't set parent\n",
__func__);
return 0;
}
ret = clk_set_parent(msm_host->byte_clk_src, byte_clk_provider);
if (ret) {
pr_err("%s: can't set parent to byte_clk_src. ret=%d\n",
__func__, ret);
goto exit;
}
ret = clk_set_parent(msm_host->pixel_clk_src, pixel_clk_provider);
if (ret) {
pr_err("%s: can't set parent to pixel_clk_src. ret=%d\n",
__func__, ret);
goto exit;
}
if (msm_host->dsi_clk_src) {
ret = clk_set_parent(msm_host->dsi_clk_src, pixel_clk_provider);
if (ret) {
pr_err("%s: can't set parent to dsi_clk_src. ret=%d\n",
__func__, ret);
goto exit;
}
}
if (msm_host->esc_clk_src) {
ret = clk_set_parent(msm_host->esc_clk_src, byte_clk_provider);
if (ret) {
pr_err("%s: can't set parent to esc_clk_src. ret=%d\n",
__func__, ret);
goto exit;
}
}
exit:
return ret;
}
void msm_dsi_host_reset_phy(struct mipi_dsi_host *host)
@ -2297,7 +2252,7 @@ int msm_dsi_host_enable(struct mipi_dsi_host *host)
*/
/* if (msm_panel->mode == MSM_DSI_CMD_MODE) {
* dsi_link_clk_disable(msm_host);
* pm_runtime_put_autosuspend(&msm_host->pdev->dev);
* pm_runtime_put(&msm_host->pdev->dev);
* }
*/
msm_host->enabled = true;
@ -2389,7 +2344,7 @@ int msm_dsi_host_power_on(struct mipi_dsi_host *host,
fail_disable_clk:
cfg_hnd->ops->link_clk_disable(msm_host);
pm_runtime_put_autosuspend(&msm_host->pdev->dev);
pm_runtime_put(&msm_host->pdev->dev);
fail_disable_reg:
dsi_host_regulator_disable(msm_host);
unlock_ret:
@ -2416,7 +2371,7 @@ int msm_dsi_host_power_off(struct mipi_dsi_host *host)
pinctrl_pm_select_sleep_state(&msm_host->pdev->dev);
cfg_hnd->ops->link_clk_disable(msm_host);
pm_runtime_put_autosuspend(&msm_host->pdev->dev);
pm_runtime_put(&msm_host->pdev->dev);
dsi_host_regulator_disable(msm_host);
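
The pm_runtime_put_autosuspend() calls throughout this file become plain pm_runtime_put(), which is the right pairing if autosuspend was never enabled for these devices (an assumption; the commit itself does not say). The balanced get/put idiom in use, as a hedged generic sketch:

#include <linux/pm_runtime.h>

static int example_touch_hw(struct device *dev)
{
	int ret;

	ret = pm_runtime_get_sync(dev);
	if (ret < 0) {
		/* get_sync bumps the usage count even on failure */
		pm_runtime_put_noidle(dev);
		return ret;
	}

	/* ... access the hardware ... */

	pm_runtime_put(dev);
	return 0;
}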

View file

@ -79,10 +79,8 @@ static int dsi_mgr_setup_components(int id)
return ret;
msm_dsi_phy_set_usecase(msm_dsi->phy, MSM_DSI_PHY_STANDALONE);
ret = msm_dsi_host_set_src_pll(msm_dsi->host, msm_dsi->phy);
} else if (!other_dsi) {
ret = 0;
} else {
msm_dsi_host_set_phy_mode(msm_dsi->host, msm_dsi->phy);
} else if (other_dsi) {
struct msm_dsi *master_link_dsi = IS_MASTER_DSI_LINK(id) ?
msm_dsi : other_dsi;
struct msm_dsi *slave_link_dsi = IS_MASTER_DSI_LINK(id) ?
@ -106,13 +104,11 @@ static int dsi_mgr_setup_components(int id)
MSM_DSI_PHY_MASTER);
msm_dsi_phy_set_usecase(clk_slave_dsi->phy,
MSM_DSI_PHY_SLAVE);
ret = msm_dsi_host_set_src_pll(msm_dsi->host, clk_master_dsi->phy);
if (ret)
return ret;
ret = msm_dsi_host_set_src_pll(other_dsi->host, clk_master_dsi->phy);
msm_dsi_host_set_phy_mode(msm_dsi->host, msm_dsi->phy);
msm_dsi_host_set_phy_mode(other_dsi->host, other_dsi->phy);
}
return ret;
return 0;
}
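
Net effect of this hunk: bring-up no longer reparents clocks by hand; it only assigns PHY usecases and records the PHY (and its C-PHY/D-PHY mode) on each host. Condensed into a hedged sketch for the bonded case (illustrative helper name):

static void example_setup_bonded_pair(struct msm_dsi *master,
				      struct msm_dsi *slave)
{
	msm_dsi_phy_set_usecase(master->phy, MSM_DSI_PHY_MASTER);
	msm_dsi_phy_set_usecase(slave->phy, MSM_DSI_PHY_SLAVE);

	/* each host only records its PHY mode now; no clk_set_parent() */
	msm_dsi_host_set_phy_mode(master->host, master->phy);
	msm_dsi_host_set_phy_mode(slave->host, slave->phy);
}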
static int enable_phy(struct msm_dsi *msm_dsi,
@ -649,23 +645,6 @@ struct drm_connector *msm_dsi_manager_connector_init(u8 id)
return ERR_PTR(ret);
}
bool msm_dsi_manager_validate_current_config(u8 id)
{
bool is_bonded_dsi = IS_BONDED_DSI();
/*
* For bonded DSI, we only have one drm panel. For this
* use case, we register only one bridge/connector.
* Skip bridge/connector initialisation if it is
* slave-DSI for bonded DSI configuration.
*/
if (is_bonded_dsi && !IS_MASTER_DSI_LINK(id)) {
DBG("Skip bridge registration for slave DSI->id: %d\n", id);
return false;
}
return true;
}
/* initialize bridge */
struct drm_bridge *msm_dsi_manager_bridge_init(u8 id)
{

View file

@ -602,7 +602,7 @@ static int dsi_phy_enable_resource(struct msm_dsi_phy *phy)
static void dsi_phy_disable_resource(struct msm_dsi_phy *phy)
{
clk_disable_unprepare(phy->ahb_clk);
pm_runtime_put_autosuspend(&phy->pdev->dev);
pm_runtime_put(&phy->pdev->dev);
}
static const struct of_device_id dsi_phy_dt_match[] = {
@ -892,17 +892,6 @@ bool msm_dsi_phy_set_continuous_clock(struct msm_dsi_phy *phy, bool enable)
return phy->cfg->ops.set_continuous_clock(phy, enable);
}
int msm_dsi_phy_get_clk_provider(struct msm_dsi_phy *phy,
struct clk **byte_clk_provider, struct clk **pixel_clk_provider)
{
if (byte_clk_provider)
*byte_clk_provider = phy->provided_clocks->hws[DSI_BYTE_PLL_CLK]->clk;
if (pixel_clk_provider)
*pixel_clk_provider = phy->provided_clocks->hws[DSI_PIXEL_PLL_CLK]->clk;
return 0;
}
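
msm_dsi_phy_get_clk_provider() can go because the PHY already exposes its byte/pixel PLL outputs through the common clock framework; consumers reparent via DT (assigned-clock-parents) rather than explicit clk_set_parent() calls. A hedged sketch of the provider-side registration this relies on (illustrative; the driver's actual probe path does the equivalent):

#include <linux/clk-provider.h>

static int example_register_pll_outputs(struct device *dev,
					struct clk_hw_onecell_data *hw_data)
{
	/* expose DSI_BYTE_PLL_CLK / DSI_PIXEL_PLL_CLK to DT consumers */
	return devm_of_clk_add_hw_provider(dev, of_clk_hw_onecell_get,
					   hw_data);
}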
void msm_dsi_phy_pll_save_state(struct msm_dsi_phy *phy)
{
if (phy->cfg->ops.save_pll_state) {

View file

@ -1,198 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2014-2015, The Linux Foundation. All rights reserved.
*/
#include <linux/of_irq.h>
#include "edp.h"
static irqreturn_t edp_irq(int irq, void *dev_id)
{
struct msm_edp *edp = dev_id;
/* Process eDP irq */
return msm_edp_ctrl_irq(edp->ctrl);
}
static void edp_destroy(struct platform_device *pdev)
{
struct msm_edp *edp = platform_get_drvdata(pdev);
if (!edp)
return;
if (edp->ctrl) {
msm_edp_ctrl_destroy(edp->ctrl);
edp->ctrl = NULL;
}
platform_set_drvdata(pdev, NULL);
}
/* construct eDP at bind/probe time, grab all the resources. */
static struct msm_edp *edp_init(struct platform_device *pdev)
{
struct msm_edp *edp = NULL;
int ret;
if (!pdev) {
pr_err("no eDP device\n");
ret = -ENXIO;
goto fail;
}
edp = devm_kzalloc(&pdev->dev, sizeof(*edp), GFP_KERNEL);
if (!edp) {
ret = -ENOMEM;
goto fail;
}
DBG("eDP probed=%p", edp);
edp->pdev = pdev;
platform_set_drvdata(pdev, edp);
ret = msm_edp_ctrl_init(edp);
if (ret)
goto fail;
return edp;
fail:
if (edp)
edp_destroy(pdev);
return ERR_PTR(ret);
}
static int edp_bind(struct device *dev, struct device *master, void *data)
{
struct drm_device *drm = dev_get_drvdata(master);
struct msm_drm_private *priv = drm->dev_private;
struct msm_edp *edp;
DBG("");
edp = edp_init(to_platform_device(dev));
if (IS_ERR(edp))
return PTR_ERR(edp);
priv->edp = edp;
return 0;
}
static void edp_unbind(struct device *dev, struct device *master, void *data)
{
struct drm_device *drm = dev_get_drvdata(master);
struct msm_drm_private *priv = drm->dev_private;
DBG("");
if (priv->edp) {
edp_destroy(to_platform_device(dev));
priv->edp = NULL;
}
}
static const struct component_ops edp_ops = {
.bind = edp_bind,
.unbind = edp_unbind,
};
static int edp_dev_probe(struct platform_device *pdev)
{
DBG("");
return component_add(&pdev->dev, &edp_ops);
}
static int edp_dev_remove(struct platform_device *pdev)
{
DBG("");
component_del(&pdev->dev, &edp_ops);
return 0;
}
static const struct of_device_id dt_match[] = {
{ .compatible = "qcom,mdss-edp" },
{}
};
static struct platform_driver edp_driver = {
.probe = edp_dev_probe,
.remove = edp_dev_remove,
.driver = {
.name = "msm_edp",
.of_match_table = dt_match,
},
};
void __init msm_edp_register(void)
{
DBG("");
platform_driver_register(&edp_driver);
}
void __exit msm_edp_unregister(void)
{
DBG("");
platform_driver_unregister(&edp_driver);
}
/* Second part of initialization, the drm/kms level modeset_init */
int msm_edp_modeset_init(struct msm_edp *edp, struct drm_device *dev,
struct drm_encoder *encoder)
{
struct platform_device *pdev = edp->pdev;
struct msm_drm_private *priv = dev->dev_private;
int ret;
edp->encoder = encoder;
edp->dev = dev;
edp->bridge = msm_edp_bridge_init(edp);
if (IS_ERR(edp->bridge)) {
ret = PTR_ERR(edp->bridge);
DRM_DEV_ERROR(dev->dev, "failed to create eDP bridge: %d\n", ret);
edp->bridge = NULL;
goto fail;
}
edp->connector = msm_edp_connector_init(edp);
if (IS_ERR(edp->connector)) {
ret = PTR_ERR(edp->connector);
DRM_DEV_ERROR(dev->dev, "failed to create eDP connector: %d\n", ret);
edp->connector = NULL;
goto fail;
}
edp->irq = irq_of_parse_and_map(pdev->dev.of_node, 0);
if (edp->irq < 0) {
ret = edp->irq;
DRM_DEV_ERROR(dev->dev, "failed to get IRQ: %d\n", ret);
goto fail;
}
ret = devm_request_irq(&pdev->dev, edp->irq,
edp_irq, IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
"edp_isr", edp);
if (ret < 0) {
DRM_DEV_ERROR(dev->dev, "failed to request IRQ%u: %d\n",
edp->irq, ret);
goto fail;
}
priv->bridges[priv->num_bridges++] = edp->bridge;
priv->connectors[priv->num_connectors++] = edp->connector;
return 0;
fail:
/* bridge/connector are normally destroyed by drm */
if (edp->bridge) {
edp_bridge_destroy(edp->bridge);
edp->bridge = NULL;
}
if (edp->connector) {
edp->connector->funcs->destroy(edp->connector);
edp->connector = NULL;
}
return ret;
}
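
The whole eDP sub-driver is removed here (never used by any upstream-supported device; modern eDP panels go through the DP sub-driver). For reference, the shape it followed was the standard component pattern; a hedged skeleton, not the removed code itself:

#include <linux/component.h>
#include <linux/platform_device.h>

static int example_bind(struct device *dev, struct device *master,
			void *data)
{
	/* allocate sub-device state and publish it to the master */
	return 0;
}

static void example_unbind(struct device *dev, struct device *master,
			   void *data)
{
	/* tear the sub-device state back down */
}

static const struct component_ops example_comp_ops = {
	.bind = example_bind,
	.unbind = example_unbind,
};

static int example_probe(struct platform_device *pdev)
{
	/* defer real init to bind, when the master aggregates us */
	return component_add(&pdev->dev, &example_comp_ops);
}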

View file

@ -1,77 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (c) 2014-2015, The Linux Foundation. All rights reserved.
*/
#ifndef __EDP_CONNECTOR_H__
#define __EDP_CONNECTOR_H__
#include <linux/i2c.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/platform_device.h>
#include <drm/drm_bridge.h>
#include <drm/drm_crtc.h>
#include <drm/drm_dp_helper.h>
#include "msm_drv.h"
#define edp_read(offset) msm_readl((offset))
#define edp_write(offset, data) msm_writel((data), (offset))
struct edp_ctrl;
struct edp_aux;
struct edp_phy;
struct msm_edp {
struct drm_device *dev;
struct platform_device *pdev;
struct drm_connector *connector;
struct drm_bridge *bridge;
/* the encoder we are hooked to (outside of eDP block) */
struct drm_encoder *encoder;
struct edp_ctrl *ctrl;
int irq;
};
/* eDP bridge */
struct drm_bridge *msm_edp_bridge_init(struct msm_edp *edp);
void edp_bridge_destroy(struct drm_bridge *bridge);
/* eDP connector */
struct drm_connector *msm_edp_connector_init(struct msm_edp *edp);
/* AUX */
void *msm_edp_aux_init(struct msm_edp *edp, void __iomem *regbase, struct drm_dp_aux **drm_aux);
void msm_edp_aux_destroy(struct device *dev, struct edp_aux *aux);
irqreturn_t msm_edp_aux_irq(struct edp_aux *aux, u32 isr);
void msm_edp_aux_ctrl(struct edp_aux *aux, int enable);
/* Phy */
bool msm_edp_phy_ready(struct edp_phy *phy);
void msm_edp_phy_ctrl(struct edp_phy *phy, int enable);
void msm_edp_phy_vm_pe_init(struct edp_phy *phy);
void msm_edp_phy_vm_pe_cfg(struct edp_phy *phy, u32 v0, u32 v1);
void msm_edp_phy_lane_power_ctrl(struct edp_phy *phy, bool up, u32 max_lane);
void *msm_edp_phy_init(struct device *dev, void __iomem *regbase);
/* Ctrl */
irqreturn_t msm_edp_ctrl_irq(struct edp_ctrl *ctrl);
void msm_edp_ctrl_power(struct edp_ctrl *ctrl, bool on);
int msm_edp_ctrl_init(struct msm_edp *edp);
void msm_edp_ctrl_destroy(struct edp_ctrl *ctrl);
bool msm_edp_ctrl_panel_connected(struct edp_ctrl *ctrl);
int msm_edp_ctrl_get_panel_info(struct edp_ctrl *ctrl,
struct drm_connector *connector, struct edid **edid);
int msm_edp_ctrl_timing_cfg(struct edp_ctrl *ctrl,
const struct drm_display_mode *mode,
const struct drm_display_info *info);
/* @pixel_rate is in kHz */
bool msm_edp_ctrl_pixel_clock_valid(struct edp_ctrl *ctrl,
u32 pixel_rate, u32 *pm, u32 *pn);
#endif /* __EDP_CONNECTOR_H__ */

View file

@ -1,388 +0,0 @@
#ifndef EDP_XML
#define EDP_XML
/* Autogenerated file, DO NOT EDIT manually!
This file was generated by the rules-ng-ng headergen tool in this git repository:
http://github.com/freedreno/envytools/
git clone https://github.com/freedreno/envytools.git
The rules-ng-ng source files this header was generated from are:
- /home/robclark/src/mesa/mesa/src/freedreno/registers/msm.xml ( 981 bytes, from 2021-06-05 21:37:42)
- /home/robclark/src/mesa/mesa/src/freedreno/registers/freedreno_copyright.xml ( 1572 bytes, from 2021-02-18 16:45:44)
- /home/robclark/src/mesa/mesa/src/freedreno/registers/mdp/mdp4.xml ( 20912 bytes, from 2021-02-18 16:45:44)
- /home/robclark/src/mesa/mesa/src/freedreno/registers/mdp/mdp_common.xml ( 2849 bytes, from 2021-02-18 16:45:44)
- /home/robclark/src/mesa/mesa/src/freedreno/registers/mdp/mdp5.xml ( 37461 bytes, from 2021-02-18 16:45:44)
- /home/robclark/src/mesa/mesa/src/freedreno/registers/dsi/dsi.xml ( 15291 bytes, from 2021-06-15 22:36:13)
- /home/robclark/src/mesa/mesa/src/freedreno/registers/dsi/dsi_phy_v2.xml ( 3236 bytes, from 2021-06-05 21:37:42)
- /home/robclark/src/mesa/mesa/src/freedreno/registers/dsi/dsi_phy_28nm_8960.xml ( 4935 bytes, from 2021-05-21 19:18:08)
- /home/robclark/src/mesa/mesa/src/freedreno/registers/dsi/dsi_phy_28nm.xml ( 7004 bytes, from 2021-05-21 19:18:08)
- /home/robclark/src/mesa/mesa/src/freedreno/registers/dsi/dsi_phy_20nm.xml ( 3712 bytes, from 2021-05-21 19:18:08)
- /home/robclark/src/mesa/mesa/src/freedreno/registers/dsi/dsi_phy_14nm.xml ( 5381 bytes, from 2021-05-21 19:18:08)
- /home/robclark/src/mesa/mesa/src/freedreno/registers/dsi/dsi_phy_10nm.xml ( 4499 bytes, from 2021-05-21 19:18:08)
- /home/robclark/src/mesa/mesa/src/freedreno/registers/dsi/dsi_phy_7nm.xml ( 10953 bytes, from 2021-05-21 19:18:08)
- /home/robclark/src/mesa/mesa/src/freedreno/registers/dsi/dsi_phy_5nm.xml ( 10900 bytes, from 2021-05-21 19:18:08)
- /home/robclark/src/mesa/mesa/src/freedreno/registers/dsi/sfpb.xml ( 602 bytes, from 2021-02-18 16:45:44)
- /home/robclark/src/mesa/mesa/src/freedreno/registers/dsi/mmss_cc.xml ( 1686 bytes, from 2021-02-18 16:45:44)
- /home/robclark/src/mesa/mesa/src/freedreno/registers/hdmi/qfprom.xml ( 600 bytes, from 2021-02-18 16:45:44)
- /home/robclark/src/mesa/mesa/src/freedreno/registers/hdmi/hdmi.xml ( 41874 bytes, from 2021-02-18 16:45:44)
- /home/robclark/src/mesa/mesa/src/freedreno/registers/edp/edp.xml ( 10416 bytes, from 2021-02-18 16:45:44)
Copyright (C) 2013-2021 by the following authors:
- Rob Clark <robdclark@gmail.com> (robclark)
- Ilia Mirkin <imirkin@alum.mit.edu> (imirkin)
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice (including the
next paragraph) shall be included in all copies or substantial
portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE COPYRIGHT OWNER(S) AND/OR ITS SUPPLIERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
enum edp_color_depth {
EDP_6BIT = 0,
EDP_8BIT = 1,
EDP_10BIT = 2,
EDP_12BIT = 3,
EDP_16BIT = 4,
};
enum edp_component_format {
EDP_RGB = 0,
EDP_YUV422 = 1,
EDP_YUV444 = 2,
};
#define REG_EDP_MAINLINK_CTRL 0x00000004
#define EDP_MAINLINK_CTRL_ENABLE 0x00000001
#define EDP_MAINLINK_CTRL_RESET 0x00000002
#define REG_EDP_STATE_CTRL 0x00000008
#define EDP_STATE_CTRL_TRAIN_PATTERN_1 0x00000001
#define EDP_STATE_CTRL_TRAIN_PATTERN_2 0x00000002
#define EDP_STATE_CTRL_TRAIN_PATTERN_3 0x00000004
#define EDP_STATE_CTRL_SYMBOL_ERR_RATE_MEAS 0x00000008
#define EDP_STATE_CTRL_PRBS7 0x00000010
#define EDP_STATE_CTRL_CUSTOM_80_BIT_PATTERN 0x00000020
#define EDP_STATE_CTRL_SEND_VIDEO 0x00000040
#define EDP_STATE_CTRL_PUSH_IDLE 0x00000080
#define REG_EDP_CONFIGURATION_CTRL 0x0000000c
#define EDP_CONFIGURATION_CTRL_SYNC_CLK 0x00000001
#define EDP_CONFIGURATION_CTRL_STATIC_MVID 0x00000002
#define EDP_CONFIGURATION_CTRL_PROGRESSIVE 0x00000004
#define EDP_CONFIGURATION_CTRL_LANES__MASK 0x00000030
#define EDP_CONFIGURATION_CTRL_LANES__SHIFT 4
static inline uint32_t EDP_CONFIGURATION_CTRL_LANES(uint32_t val)
{
return ((val) << EDP_CONFIGURATION_CTRL_LANES__SHIFT) & EDP_CONFIGURATION_CTRL_LANES__MASK;
}
#define EDP_CONFIGURATION_CTRL_ENHANCED_FRAMING 0x00000040
#define EDP_CONFIGURATION_CTRL_COLOR__MASK 0x00000100
#define EDP_CONFIGURATION_CTRL_COLOR__SHIFT 8
static inline uint32_t EDP_CONFIGURATION_CTRL_COLOR(enum edp_color_depth val)
{
return ((val) << EDP_CONFIGURATION_CTRL_COLOR__SHIFT) & EDP_CONFIGURATION_CTRL_COLOR__MASK;
}
#define REG_EDP_SOFTWARE_MVID 0x00000014
#define REG_EDP_SOFTWARE_NVID 0x00000018
#define REG_EDP_TOTAL_HOR_VER 0x0000001c
#define EDP_TOTAL_HOR_VER_HORIZ__MASK 0x0000ffff
#define EDP_TOTAL_HOR_VER_HORIZ__SHIFT 0
static inline uint32_t EDP_TOTAL_HOR_VER_HORIZ(uint32_t val)
{
return ((val) << EDP_TOTAL_HOR_VER_HORIZ__SHIFT) & EDP_TOTAL_HOR_VER_HORIZ__MASK;
}
#define EDP_TOTAL_HOR_VER_VERT__MASK 0xffff0000
#define EDP_TOTAL_HOR_VER_VERT__SHIFT 16
static inline uint32_t EDP_TOTAL_HOR_VER_VERT(uint32_t val)
{
return ((val) << EDP_TOTAL_HOR_VER_VERT__SHIFT) & EDP_TOTAL_HOR_VER_VERT__MASK;
}
#define REG_EDP_START_HOR_VER_FROM_SYNC 0x00000020
#define EDP_START_HOR_VER_FROM_SYNC_HORIZ__MASK 0x0000ffff
#define EDP_START_HOR_VER_FROM_SYNC_HORIZ__SHIFT 0
static inline uint32_t EDP_START_HOR_VER_FROM_SYNC_HORIZ(uint32_t val)
{
return ((val) << EDP_START_HOR_VER_FROM_SYNC_HORIZ__SHIFT) & EDP_START_HOR_VER_FROM_SYNC_HORIZ__MASK;
}
#define EDP_START_HOR_VER_FROM_SYNC_VERT__MASK 0xffff0000
#define EDP_START_HOR_VER_FROM_SYNC_VERT__SHIFT 16
static inline uint32_t EDP_START_HOR_VER_FROM_SYNC_VERT(uint32_t val)
{
return ((val) << EDP_START_HOR_VER_FROM_SYNC_VERT__SHIFT) & EDP_START_HOR_VER_FROM_SYNC_VERT__MASK;
}
#define REG_EDP_HSYNC_VSYNC_WIDTH_POLARITY 0x00000024
#define EDP_HSYNC_VSYNC_WIDTH_POLARITY_HORIZ__MASK 0x00007fff
#define EDP_HSYNC_VSYNC_WIDTH_POLARITY_HORIZ__SHIFT 0
static inline uint32_t EDP_HSYNC_VSYNC_WIDTH_POLARITY_HORIZ(uint32_t val)
{
return ((val) << EDP_HSYNC_VSYNC_WIDTH_POLARITY_HORIZ__SHIFT) & EDP_HSYNC_VSYNC_WIDTH_POLARITY_HORIZ__MASK;
}
#define EDP_HSYNC_VSYNC_WIDTH_POLARITY_NHSYNC 0x00008000
#define EDP_HSYNC_VSYNC_WIDTH_POLARITY_VERT__MASK 0x7fff0000
#define EDP_HSYNC_VSYNC_WIDTH_POLARITY_VERT__SHIFT 16
static inline uint32_t EDP_HSYNC_VSYNC_WIDTH_POLARITY_VERT(uint32_t val)
{
return ((val) << EDP_HSYNC_VSYNC_WIDTH_POLARITY_VERT__SHIFT) & EDP_HSYNC_VSYNC_WIDTH_POLARITY_VERT__MASK;
}
#define EDP_HSYNC_VSYNC_WIDTH_POLARITY_NVSYNC 0x80000000
#define REG_EDP_ACTIVE_HOR_VER 0x00000028
#define EDP_ACTIVE_HOR_VER_HORIZ__MASK 0x0000ffff
#define EDP_ACTIVE_HOR_VER_HORIZ__SHIFT 0
static inline uint32_t EDP_ACTIVE_HOR_VER_HORIZ(uint32_t val)
{
return ((val) << EDP_ACTIVE_HOR_VER_HORIZ__SHIFT) & EDP_ACTIVE_HOR_VER_HORIZ__MASK;
}
#define EDP_ACTIVE_HOR_VER_VERT__MASK 0xffff0000
#define EDP_ACTIVE_HOR_VER_VERT__SHIFT 16
static inline uint32_t EDP_ACTIVE_HOR_VER_VERT(uint32_t val)
{
return ((val) << EDP_ACTIVE_HOR_VER_VERT__SHIFT) & EDP_ACTIVE_HOR_VER_VERT__MASK;
}
#define REG_EDP_MISC1_MISC0 0x0000002c
#define EDP_MISC1_MISC0_MISC0__MASK 0x000000ff
#define EDP_MISC1_MISC0_MISC0__SHIFT 0
static inline uint32_t EDP_MISC1_MISC0_MISC0(uint32_t val)
{
return ((val) << EDP_MISC1_MISC0_MISC0__SHIFT) & EDP_MISC1_MISC0_MISC0__MASK;
}
#define EDP_MISC1_MISC0_SYNC 0x00000001
#define EDP_MISC1_MISC0_COMPONENT_FORMAT__MASK 0x00000006
#define EDP_MISC1_MISC0_COMPONENT_FORMAT__SHIFT 1
static inline uint32_t EDP_MISC1_MISC0_COMPONENT_FORMAT(enum edp_component_format val)
{
return ((val) << EDP_MISC1_MISC0_COMPONENT_FORMAT__SHIFT) & EDP_MISC1_MISC0_COMPONENT_FORMAT__MASK;
}
#define EDP_MISC1_MISC0_CEA 0x00000008
#define EDP_MISC1_MISC0_BT709_5 0x00000010
#define EDP_MISC1_MISC0_COLOR__MASK 0x000000e0
#define EDP_MISC1_MISC0_COLOR__SHIFT 5
static inline uint32_t EDP_MISC1_MISC0_COLOR(enum edp_color_depth val)
{
return ((val) << EDP_MISC1_MISC0_COLOR__SHIFT) & EDP_MISC1_MISC0_COLOR__MASK;
}
#define EDP_MISC1_MISC0_MISC1__MASK 0x0000ff00
#define EDP_MISC1_MISC0_MISC1__SHIFT 8
static inline uint32_t EDP_MISC1_MISC0_MISC1(uint32_t val)
{
return ((val) << EDP_MISC1_MISC0_MISC1__SHIFT) & EDP_MISC1_MISC0_MISC1__MASK;
}
#define EDP_MISC1_MISC0_INTERLACED_ODD 0x00000100
#define EDP_MISC1_MISC0_STEREO__MASK 0x00000600
#define EDP_MISC1_MISC0_STEREO__SHIFT 9
static inline uint32_t EDP_MISC1_MISC0_STEREO(uint32_t val)
{
return ((val) << EDP_MISC1_MISC0_STEREO__SHIFT) & EDP_MISC1_MISC0_STEREO__MASK;
}
#define REG_EDP_PHY_CTRL 0x00000074
#define EDP_PHY_CTRL_SW_RESET_PLL 0x00000001
#define EDP_PHY_CTRL_SW_RESET 0x00000004
#define REG_EDP_MAINLINK_READY 0x00000084
#define EDP_MAINLINK_READY_TRAIN_PATTERN_1_READY 0x00000008
#define EDP_MAINLINK_READY_TRAIN_PATTERN_2_READY 0x00000010
#define EDP_MAINLINK_READY_TRAIN_PATTERN_3_READY 0x00000020
#define REG_EDP_AUX_CTRL 0x00000300
#define EDP_AUX_CTRL_ENABLE 0x00000001
#define EDP_AUX_CTRL_RESET 0x00000002
#define REG_EDP_INTERRUPT_REG_1 0x00000308
#define EDP_INTERRUPT_REG_1_HPD 0x00000001
#define EDP_INTERRUPT_REG_1_HPD_ACK 0x00000002
#define EDP_INTERRUPT_REG_1_HPD_EN 0x00000004
#define EDP_INTERRUPT_REG_1_AUX_I2C_DONE 0x00000008
#define EDP_INTERRUPT_REG_1_AUX_I2C_DONE_ACK 0x00000010
#define EDP_INTERRUPT_REG_1_AUX_I2C_DONE_EN 0x00000020
#define EDP_INTERRUPT_REG_1_WRONG_ADDR 0x00000040
#define EDP_INTERRUPT_REG_1_WRONG_ADDR_ACK 0x00000080
#define EDP_INTERRUPT_REG_1_WRONG_ADDR_EN 0x00000100
#define EDP_INTERRUPT_REG_1_TIMEOUT 0x00000200
#define EDP_INTERRUPT_REG_1_TIMEOUT_ACK 0x00000400
#define EDP_INTERRUPT_REG_1_TIMEOUT_EN 0x00000800
#define EDP_INTERRUPT_REG_1_NACK_DEFER 0x00001000
#define EDP_INTERRUPT_REG_1_NACK_DEFER_ACK 0x00002000
#define EDP_INTERRUPT_REG_1_NACK_DEFER_EN 0x00004000
#define EDP_INTERRUPT_REG_1_WRONG_DATA_CNT 0x00008000
#define EDP_INTERRUPT_REG_1_WRONG_DATA_CNT_ACK 0x00010000
#define EDP_INTERRUPT_REG_1_WRONG_DATA_CNT_EN 0x00020000
#define EDP_INTERRUPT_REG_1_I2C_NACK 0x00040000
#define EDP_INTERRUPT_REG_1_I2C_NACK_ACK 0x00080000
#define EDP_INTERRUPT_REG_1_I2C_NACK_EN 0x00100000
#define EDP_INTERRUPT_REG_1_I2C_DEFER 0x00200000
#define EDP_INTERRUPT_REG_1_I2C_DEFER_ACK 0x00400000
#define EDP_INTERRUPT_REG_1_I2C_DEFER_EN 0x00800000
#define EDP_INTERRUPT_REG_1_PLL_UNLOCK 0x01000000
#define EDP_INTERRUPT_REG_1_PLL_UNLOCK_ACK 0x02000000
#define EDP_INTERRUPT_REG_1_PLL_UNLOCK_EN 0x04000000
#define EDP_INTERRUPT_REG_1_AUX_ERROR 0x08000000
#define EDP_INTERRUPT_REG_1_AUX_ERROR_ACK 0x10000000
#define EDP_INTERRUPT_REG_1_AUX_ERROR_EN 0x20000000
#define REG_EDP_INTERRUPT_REG_2 0x0000030c
#define EDP_INTERRUPT_REG_2_READY_FOR_VIDEO 0x00000001
#define EDP_INTERRUPT_REG_2_READY_FOR_VIDEO_ACK 0x00000002
#define EDP_INTERRUPT_REG_2_READY_FOR_VIDEO_EN 0x00000004
#define EDP_INTERRUPT_REG_2_IDLE_PATTERNs_SENT 0x00000008
#define EDP_INTERRUPT_REG_2_IDLE_PATTERNs_SENT_ACK 0x00000010
#define EDP_INTERRUPT_REG_2_IDLE_PATTERNs_SENT_EN 0x00000020
#define EDP_INTERRUPT_REG_2_FRAME_END 0x00000040
#define EDP_INTERRUPT_REG_2_FRAME_END_ACK 0x00000080
#define EDP_INTERRUPT_REG_2_FRAME_END_EN 0x00000100
#define EDP_INTERRUPT_REG_2_CRC_UPDATED 0x00000200
#define EDP_INTERRUPT_REG_2_CRC_UPDATED_ACK 0x00000400
#define EDP_INTERRUPT_REG_2_CRC_UPDATED_EN 0x00000800
#define REG_EDP_INTERRUPT_TRANS_NUM 0x00000310
#define REG_EDP_AUX_DATA 0x00000314
#define EDP_AUX_DATA_READ 0x00000001
#define EDP_AUX_DATA_DATA__MASK 0x0000ff00
#define EDP_AUX_DATA_DATA__SHIFT 8
static inline uint32_t EDP_AUX_DATA_DATA(uint32_t val)
{
return ((val) << EDP_AUX_DATA_DATA__SHIFT) & EDP_AUX_DATA_DATA__MASK;
}
#define EDP_AUX_DATA_INDEX__MASK 0x00ff0000
#define EDP_AUX_DATA_INDEX__SHIFT 16
static inline uint32_t EDP_AUX_DATA_INDEX(uint32_t val)
{
return ((val) << EDP_AUX_DATA_INDEX__SHIFT) & EDP_AUX_DATA_INDEX__MASK;
}
#define EDP_AUX_DATA_INDEX_WRITE 0x80000000
#define REG_EDP_AUX_TRANS_CTRL 0x00000318
#define EDP_AUX_TRANS_CTRL_I2C 0x00000100
#define EDP_AUX_TRANS_CTRL_GO 0x00000200
#define REG_EDP_AUX_STATUS 0x00000324
static inline uint32_t REG_EDP_PHY_LN(uint32_t i0) { return 0x00000400 + 0x40*i0; }
static inline uint32_t REG_EDP_PHY_LN_PD_CTL(uint32_t i0) { return 0x00000404 + 0x40*i0; }
#define REG_EDP_PHY_GLB_VM_CFG0 0x00000510
#define REG_EDP_PHY_GLB_VM_CFG1 0x00000514
#define REG_EDP_PHY_GLB_MISC9 0x00000518
#define REG_EDP_PHY_GLB_CFG 0x00000528
#define REG_EDP_PHY_GLB_PD_CTL 0x0000052c
#define REG_EDP_PHY_GLB_PHY_STATUS 0x00000598
#define REG_EDP_28nm_PHY_PLL_REFCLK_CFG 0x00000000
#define REG_EDP_28nm_PHY_PLL_POSTDIV1_CFG 0x00000004
#define REG_EDP_28nm_PHY_PLL_CHGPUMP_CFG 0x00000008
#define REG_EDP_28nm_PHY_PLL_VCOLPF_CFG 0x0000000c
#define REG_EDP_28nm_PHY_PLL_VREG_CFG 0x00000010
#define REG_EDP_28nm_PHY_PLL_PWRGEN_CFG 0x00000014
#define REG_EDP_28nm_PHY_PLL_DMUX_CFG 0x00000018
#define REG_EDP_28nm_PHY_PLL_AMUX_CFG 0x0000001c
#define REG_EDP_28nm_PHY_PLL_GLB_CFG 0x00000020
#define EDP_28nm_PHY_PLL_GLB_CFG_PLL_PWRDN_B 0x00000001
#define EDP_28nm_PHY_PLL_GLB_CFG_PLL_LDO_PWRDN_B 0x00000002
#define EDP_28nm_PHY_PLL_GLB_CFG_PLL_PWRGEN_PWRDN_B 0x00000004
#define EDP_28nm_PHY_PLL_GLB_CFG_PLL_ENABLE 0x00000008
#define REG_EDP_28nm_PHY_PLL_POSTDIV2_CFG 0x00000024
#define REG_EDP_28nm_PHY_PLL_POSTDIV3_CFG 0x00000028
#define REG_EDP_28nm_PHY_PLL_LPFR_CFG 0x0000002c
#define REG_EDP_28nm_PHY_PLL_LPFC1_CFG 0x00000030
#define REG_EDP_28nm_PHY_PLL_LPFC2_CFG 0x00000034
#define REG_EDP_28nm_PHY_PLL_SDM_CFG0 0x00000038
#define REG_EDP_28nm_PHY_PLL_SDM_CFG1 0x0000003c
#define REG_EDP_28nm_PHY_PLL_SDM_CFG2 0x00000040
#define REG_EDP_28nm_PHY_PLL_SDM_CFG3 0x00000044
#define REG_EDP_28nm_PHY_PLL_SDM_CFG4 0x00000048
#define REG_EDP_28nm_PHY_PLL_SSC_CFG0 0x0000004c
#define REG_EDP_28nm_PHY_PLL_SSC_CFG1 0x00000050
#define REG_EDP_28nm_PHY_PLL_SSC_CFG2 0x00000054
#define REG_EDP_28nm_PHY_PLL_SSC_CFG3 0x00000058
#define REG_EDP_28nm_PHY_PLL_LKDET_CFG0 0x0000005c
#define REG_EDP_28nm_PHY_PLL_LKDET_CFG1 0x00000060
#define REG_EDP_28nm_PHY_PLL_LKDET_CFG2 0x00000064
#define REG_EDP_28nm_PHY_PLL_TEST_CFG 0x00000068
#define EDP_28nm_PHY_PLL_TEST_CFG_PLL_SW_RESET 0x00000001
#define REG_EDP_28nm_PHY_PLL_CAL_CFG0 0x0000006c
#define REG_EDP_28nm_PHY_PLL_CAL_CFG1 0x00000070
#define REG_EDP_28nm_PHY_PLL_CAL_CFG2 0x00000074
#define REG_EDP_28nm_PHY_PLL_CAL_CFG3 0x00000078
#define REG_EDP_28nm_PHY_PLL_CAL_CFG4 0x0000007c
#define REG_EDP_28nm_PHY_PLL_CAL_CFG5 0x00000080
#define REG_EDP_28nm_PHY_PLL_CAL_CFG6 0x00000084
#define REG_EDP_28nm_PHY_PLL_CAL_CFG7 0x00000088
#define REG_EDP_28nm_PHY_PLL_CAL_CFG8 0x0000008c
#define REG_EDP_28nm_PHY_PLL_CAL_CFG9 0x00000090
#define REG_EDP_28nm_PHY_PLL_CAL_CFG10 0x00000094
#define REG_EDP_28nm_PHY_PLL_CAL_CFG11 0x00000098
#define REG_EDP_28nm_PHY_PLL_EFUSE_CFG 0x0000009c
#define REG_EDP_28nm_PHY_PLL_DEBUG_BUS_SEL 0x000000a0
#endif /* EDP_XML */
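
The generated header packs every multi-bit field behind a MASK/SHIFT pair plus an inline helper, so register values are built by OR-ing helpers together. A hedged usage sketch (the lane encoding as count-minus-one is an assumption based on the old driver, not something this header states):

static inline uint32_t example_edp_config_ctrl(uint32_t lane_cnt)
{
	return EDP_CONFIGURATION_CTRL_SYNC_CLK |
	       EDP_CONFIGURATION_CTRL_PROGRESSIVE |
	       EDP_CONFIGURATION_CTRL_LANES(lane_cnt - 1) | /* assumed encoding */
	       EDP_CONFIGURATION_CTRL_COLOR(EDP_8BIT);
}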

View file

@ -1,265 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2014-2015, The Linux Foundation. All rights reserved.
*/
#include "edp.h"
#include "edp.xml.h"
#define AUX_CMD_FIFO_LEN 144
#define AUX_CMD_NATIVE_MAX 16
#define AUX_CMD_I2C_MAX 128
#define EDP_INTR_AUX_I2C_ERR \
(EDP_INTERRUPT_REG_1_WRONG_ADDR | EDP_INTERRUPT_REG_1_TIMEOUT | \
EDP_INTERRUPT_REG_1_NACK_DEFER | EDP_INTERRUPT_REG_1_WRONG_DATA_CNT | \
EDP_INTERRUPT_REG_1_I2C_NACK | EDP_INTERRUPT_REG_1_I2C_DEFER)
#define EDP_INTR_TRANS_STATUS \
(EDP_INTERRUPT_REG_1_AUX_I2C_DONE | EDP_INTR_AUX_I2C_ERR)
struct edp_aux {
void __iomem *base;
bool msg_err;
struct completion msg_comp;
/* Prevent reentry of the message transaction routine. */
struct mutex msg_mutex;
struct drm_dp_aux drm_aux;
};
#define to_edp_aux(x) container_of(x, struct edp_aux, drm_aux)
static int edp_msg_fifo_tx(struct edp_aux *aux, struct drm_dp_aux_msg *msg)
{
u32 data[4];
u32 reg, len;
bool native = msg->request & (DP_AUX_NATIVE_WRITE & DP_AUX_NATIVE_READ);
bool read = msg->request & (DP_AUX_I2C_READ & DP_AUX_NATIVE_READ);
u8 *msgdata = msg->buffer;
int i;
if (read)
len = 4;
else
len = msg->size + 4;
/*
* The command FIFO is only 144 bytes deep
*/
if (len > AUX_CMD_FIFO_LEN)
return -EINVAL;
/* Pack cmd and write to HW */
data[0] = (msg->address >> 16) & 0xf; /* addr[19:16] */
if (read)
data[0] |= BIT(4); /* R/W */
data[1] = (msg->address >> 8) & 0xff; /* addr[15:8] */
data[2] = msg->address & 0xff; /* addr[7:0] */
data[3] = (msg->size - 1) & 0xff; /* len[7:0] */
for (i = 0; i < len; i++) {
reg = (i < 4) ? data[i] : msgdata[i - 4];
reg = EDP_AUX_DATA_DATA(reg); /* index = 0, write */
if (i == 0)
reg |= EDP_AUX_DATA_INDEX_WRITE;
edp_write(aux->base + REG_EDP_AUX_DATA, reg);
}
reg = 0; /* Transaction number is always 1 */
if (!native) /* i2c */
reg |= EDP_AUX_TRANS_CTRL_I2C;
reg |= EDP_AUX_TRANS_CTRL_GO;
edp_write(aux->base + REG_EDP_AUX_TRANS_CTRL, reg);
return 0;
}
static int edp_msg_fifo_rx(struct edp_aux *aux, struct drm_dp_aux_msg *msg)
{
u32 data;
u8 *dp;
int i;
u32 len = msg->size;
edp_write(aux->base + REG_EDP_AUX_DATA,
EDP_AUX_DATA_INDEX_WRITE | EDP_AUX_DATA_READ); /* index = 0 */
dp = msg->buffer;
/* discard first byte */
data = edp_read(aux->base + REG_EDP_AUX_DATA);
for (i = 0; i < len; i++) {
data = edp_read(aux->base + REG_EDP_AUX_DATA);
dp[i] = (u8)((data >> 8) & 0xff);
}
return 0;
}
/*
* This function does the real work of processing an AUX transaction.
* It calls msm_edp_aux_ctrl() to reset the AUX channel if the wait
* times out.
* The caller triggering the transaction must keep msm_edp_aux_ctrl()
* from running concurrently in other threads, i.e. start a
* transaction only when the AUX channel is fully enabled.
*/
static ssize_t edp_aux_transfer(struct drm_dp_aux *drm_aux,
struct drm_dp_aux_msg *msg)
{
struct edp_aux *aux = to_edp_aux(drm_aux);
ssize_t ret;
unsigned long time_left;
bool native = msg->request & (DP_AUX_NATIVE_WRITE & DP_AUX_NATIVE_READ);
bool read = msg->request & (DP_AUX_I2C_READ & DP_AUX_NATIVE_READ);
/* Ignore address-only messages */
if ((msg->size == 0) || (msg->buffer == NULL)) {
msg->reply = native ?
DP_AUX_NATIVE_REPLY_ACK : DP_AUX_I2C_REPLY_ACK;
return msg->size;
}
/* msg sanity check */
if ((native && (msg->size > AUX_CMD_NATIVE_MAX)) ||
(msg->size > AUX_CMD_I2C_MAX)) {
pr_err("%s: invalid msg: size(%zu), request(%x)\n",
__func__, msg->size, msg->request);
return -EINVAL;
}
mutex_lock(&aux->msg_mutex);
aux->msg_err = false;
reinit_completion(&aux->msg_comp);
ret = edp_msg_fifo_tx(aux, msg);
if (ret < 0)
goto unlock_exit;
DBG("wait_for_completion");
time_left = wait_for_completion_timeout(&aux->msg_comp,
msecs_to_jiffies(300));
if (!time_left) {
/*
* Clear GO and reset AUX channel
* to cancel the current transaction.
*/
edp_write(aux->base + REG_EDP_AUX_TRANS_CTRL, 0);
msm_edp_aux_ctrl(aux, 1);
pr_err("%s: aux timeout,\n", __func__);
ret = -ETIMEDOUT;
goto unlock_exit;
}
DBG("completion");
if (!aux->msg_err) {
if (read) {
ret = edp_msg_fifo_rx(aux, msg);
if (ret < 0)
goto unlock_exit;
}
msg->reply = native ?
DP_AUX_NATIVE_REPLY_ACK : DP_AUX_I2C_REPLY_ACK;
} else {
/* Reply defer to retry */
msg->reply = native ?
DP_AUX_NATIVE_REPLY_DEFER : DP_AUX_I2C_REPLY_DEFER;
/*
* The caller's sleep time is not long enough to make sure our
* hardware completes the transaction. Add more defer time here.
*/
msleep(100);
}
/* Return requested size for success or retry */
ret = msg->size;
unlock_exit:
mutex_unlock(&aux->msg_mutex);
return ret;
}
void *msm_edp_aux_init(struct msm_edp *edp, void __iomem *regbase, struct drm_dp_aux **drm_aux)
{
struct device *dev = &edp->pdev->dev;
struct edp_aux *aux = NULL;
int ret;
DBG("");
aux = devm_kzalloc(dev, sizeof(*aux), GFP_KERNEL);
if (!aux)
return NULL;
aux->base = regbase;
mutex_init(&aux->msg_mutex);
init_completion(&aux->msg_comp);
aux->drm_aux.name = "msm_edp_aux";
aux->drm_aux.dev = dev;
aux->drm_aux.drm_dev = edp->dev;
aux->drm_aux.transfer = edp_aux_transfer;
ret = drm_dp_aux_register(&aux->drm_aux);
if (ret) {
pr_err("%s: failed to register drm aux: %d\n", __func__, ret);
mutex_destroy(&aux->msg_mutex);
}
if (drm_aux && aux)
*drm_aux = &aux->drm_aux;
return aux;
}
void msm_edp_aux_destroy(struct device *dev, struct edp_aux *aux)
{
if (aux) {
drm_dp_aux_unregister(&aux->drm_aux);
mutex_destroy(&aux->msg_mutex);
}
}
irqreturn_t msm_edp_aux_irq(struct edp_aux *aux, u32 isr)
{
if (isr & EDP_INTR_TRANS_STATUS) {
DBG("isr=%x", isr);
edp_write(aux->base + REG_EDP_AUX_TRANS_CTRL, 0);
if (isr & EDP_INTR_AUX_I2C_ERR)
aux->msg_err = true;
else
aux->msg_err = false;
complete(&aux->msg_comp);
}
return IRQ_HANDLED;
}
void msm_edp_aux_ctrl(struct edp_aux *aux, int enable)
{
u32 data;
DBG("enable=%d", enable);
data = edp_read(aux->base + REG_EDP_AUX_CTRL);
if (enable) {
data |= EDP_AUX_CTRL_RESET;
edp_write(aux->base + REG_EDP_AUX_CTRL, data);
/* Make sure full reset */
wmb();
usleep_range(500, 1000);
data &= ~EDP_AUX_CTRL_RESET;
data |= EDP_AUX_CTRL_ENABLE;
edp_write(aux->base + REG_EDP_AUX_CTRL, data);
} else {
data &= ~EDP_AUX_CTRL_ENABLE;
edp_write(aux->base + REG_EDP_AUX_CTRL, data);
}
}
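
The transfer path above is a classic IRQ/completion handshake: the transfer thread arms a completion, kicks the FIFO, and waits with a timeout while the ISR completes it. Reduced to a hedged generic sketch:

#include <linux/completion.h>
#include <linux/jiffies.h>

static int example_wait_for_aux_done(struct completion *done)
{
	/* 300 ms matches the timeout the removed code used */
	if (!wait_for_completion_timeout(done, msecs_to_jiffies(300)))
		return -ETIMEDOUT;

	return 0;
}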

View file

@ -1,111 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2014-2015, The Linux Foundation. All rights reserved.
*/
#include "edp.h"
struct edp_bridge {
struct drm_bridge base;
struct msm_edp *edp;
};
#define to_edp_bridge(x) container_of(x, struct edp_bridge, base)
void edp_bridge_destroy(struct drm_bridge *bridge)
{
}
static void edp_bridge_pre_enable(struct drm_bridge *bridge)
{
struct edp_bridge *edp_bridge = to_edp_bridge(bridge);
struct msm_edp *edp = edp_bridge->edp;
DBG("");
msm_edp_ctrl_power(edp->ctrl, true);
}
static void edp_bridge_enable(struct drm_bridge *bridge)
{
DBG("");
}
static void edp_bridge_disable(struct drm_bridge *bridge)
{
DBG("");
}
static void edp_bridge_post_disable(struct drm_bridge *bridge)
{
struct edp_bridge *edp_bridge = to_edp_bridge(bridge);
struct msm_edp *edp = edp_bridge->edp;
DBG("");
msm_edp_ctrl_power(edp->ctrl, false);
}
static void edp_bridge_mode_set(struct drm_bridge *bridge,
const struct drm_display_mode *mode,
const struct drm_display_mode *adjusted_mode)
{
struct drm_device *dev = bridge->dev;
struct drm_connector *connector;
struct edp_bridge *edp_bridge = to_edp_bridge(bridge);
struct msm_edp *edp = edp_bridge->edp;
DBG("set mode: " DRM_MODE_FMT, DRM_MODE_ARG(mode));
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
struct drm_encoder *encoder = connector->encoder;
struct drm_bridge *first_bridge;
if (!connector->encoder)
continue;
first_bridge = drm_bridge_chain_get_first_bridge(encoder);
if (bridge == first_bridge) {
msm_edp_ctrl_timing_cfg(edp->ctrl,
adjusted_mode, &connector->display_info);
break;
}
}
}
static const struct drm_bridge_funcs edp_bridge_funcs = {
.pre_enable = edp_bridge_pre_enable,
.enable = edp_bridge_enable,
.disable = edp_bridge_disable,
.post_disable = edp_bridge_post_disable,
.mode_set = edp_bridge_mode_set,
};
/* initialize bridge */
struct drm_bridge *msm_edp_bridge_init(struct msm_edp *edp)
{
struct drm_bridge *bridge = NULL;
struct edp_bridge *edp_bridge;
int ret;
edp_bridge = devm_kzalloc(edp->dev->dev,
sizeof(*edp_bridge), GFP_KERNEL);
if (!edp_bridge) {
ret = -ENOMEM;
goto fail;
}
edp_bridge->edp = edp;
bridge = &edp_bridge->base;
bridge->funcs = &edp_bridge_funcs;
ret = drm_bridge_attach(edp->encoder, bridge, NULL, 0);
if (ret)
goto fail;
return bridge;
fail:
if (bridge)
edp_bridge_destroy(bridge);
return ERR_PTR(ret);
}

View file

@ -1,132 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2014-2015, The Linux Foundation. All rights reserved.
*/
#include "drm/drm_edid.h"
#include "msm_kms.h"
#include "edp.h"
struct edp_connector {
struct drm_connector base;
struct msm_edp *edp;
};
#define to_edp_connector(x) container_of(x, struct edp_connector, base)
static enum drm_connector_status edp_connector_detect(
struct drm_connector *connector, bool force)
{
struct edp_connector *edp_connector = to_edp_connector(connector);
struct msm_edp *edp = edp_connector->edp;
DBG("");
return msm_edp_ctrl_panel_connected(edp->ctrl) ?
connector_status_connected : connector_status_disconnected;
}
static void edp_connector_destroy(struct drm_connector *connector)
{
struct edp_connector *edp_connector = to_edp_connector(connector);
DBG("");
drm_connector_cleanup(connector);
kfree(edp_connector);
}
static int edp_connector_get_modes(struct drm_connector *connector)
{
struct edp_connector *edp_connector = to_edp_connector(connector);
struct msm_edp *edp = edp_connector->edp;
struct edid *drm_edid = NULL;
int ret = 0;
DBG("");
ret = msm_edp_ctrl_get_panel_info(edp->ctrl, connector, &drm_edid);
if (ret)
return ret;
drm_connector_update_edid_property(connector, drm_edid);
if (drm_edid)
ret = drm_add_edid_modes(connector, drm_edid);
return ret;
}
static int edp_connector_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
{
struct edp_connector *edp_connector = to_edp_connector(connector);
struct msm_edp *edp = edp_connector->edp;
struct msm_drm_private *priv = connector->dev->dev_private;
struct msm_kms *kms = priv->kms;
long actual, requested;
requested = 1000 * mode->clock;
actual = kms->funcs->round_pixclk(kms,
requested, edp_connector->edp->encoder);
DBG("requested=%ld, actual=%ld", requested, actual);
if (actual != requested)
return MODE_CLOCK_RANGE;
if (!msm_edp_ctrl_pixel_clock_valid(
edp->ctrl, mode->clock, NULL, NULL))
return MODE_CLOCK_RANGE;
/* Invalidate all modes if color format is not supported */
if (connector->display_info.bpc > 8)
return MODE_BAD;
return MODE_OK;
}
static const struct drm_connector_funcs edp_connector_funcs = {
.detect = edp_connector_detect,
.fill_modes = drm_helper_probe_single_connector_modes,
.destroy = edp_connector_destroy,
.reset = drm_atomic_helper_connector_reset,
.atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
};
static const struct drm_connector_helper_funcs edp_connector_helper_funcs = {
.get_modes = edp_connector_get_modes,
.mode_valid = edp_connector_mode_valid,
};
/* initialize connector */
struct drm_connector *msm_edp_connector_init(struct msm_edp *edp)
{
struct drm_connector *connector = NULL;
struct edp_connector *edp_connector;
int ret;
edp_connector = kzalloc(sizeof(*edp_connector), GFP_KERNEL);
if (!edp_connector)
return ERR_PTR(-ENOMEM);
edp_connector->edp = edp;
connector = &edp_connector->base;
ret = drm_connector_init(edp->dev, connector, &edp_connector_funcs,
DRM_MODE_CONNECTOR_eDP);
if (ret)
return ERR_PTR(ret);
drm_connector_helper_add(connector, &edp_connector_helper_funcs);
/* We don't support HPD, so only poll status until connected. */
connector->polled = DRM_CONNECTOR_POLL_CONNECT;
/* Display driver doesn't support interlace now. */
connector->interlace_allowed = false;
connector->doublescan_allowed = false;
drm_connector_attach_encoder(connector, edp->encoder);
return connector;
}

File diff suppressed because it is too large

View file

@ -1,98 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2014-2015, The Linux Foundation. All rights reserved.
*/
#include "edp.h"
#include "edp.xml.h"
#define EDP_MAX_LANE 4
struct edp_phy {
void __iomem *base;
};
bool msm_edp_phy_ready(struct edp_phy *phy)
{
u32 status;
int cnt = 100;
while (--cnt) {
status = edp_read(phy->base +
REG_EDP_PHY_GLB_PHY_STATUS);
if (status & 0x01)
break;
usleep_range(500, 1000);
}
if (cnt == 0) {
pr_err("%s: PHY NOT ready\n", __func__);
return false;
} else {
return true;
}
}
void msm_edp_phy_ctrl(struct edp_phy *phy, int enable)
{
DBG("enable=%d", enable);
if (enable) {
/* Reset */
edp_write(phy->base + REG_EDP_PHY_CTRL,
EDP_PHY_CTRL_SW_RESET | EDP_PHY_CTRL_SW_RESET_PLL);
/* Make sure fully reset */
wmb();
usleep_range(500, 1000);
edp_write(phy->base + REG_EDP_PHY_CTRL, 0x000);
edp_write(phy->base + REG_EDP_PHY_GLB_PD_CTL, 0x3f);
edp_write(phy->base + REG_EDP_PHY_GLB_CFG, 0x1);
} else {
edp_write(phy->base + REG_EDP_PHY_GLB_PD_CTL, 0xc0);
}
}
/* voltage mode and pre emphasis cfg */
void msm_edp_phy_vm_pe_init(struct edp_phy *phy)
{
edp_write(phy->base + REG_EDP_PHY_GLB_VM_CFG0, 0x3);
edp_write(phy->base + REG_EDP_PHY_GLB_VM_CFG1, 0x64);
edp_write(phy->base + REG_EDP_PHY_GLB_MISC9, 0x6c);
}
void msm_edp_phy_vm_pe_cfg(struct edp_phy *phy, u32 v0, u32 v1)
{
edp_write(phy->base + REG_EDP_PHY_GLB_VM_CFG0, v0);
edp_write(phy->base + REG_EDP_PHY_GLB_VM_CFG1, v1);
}
void msm_edp_phy_lane_power_ctrl(struct edp_phy *phy, bool up, u32 max_lane)
{
u32 i;
u32 data;
if (up)
data = 0; /* power up */
else
data = 0x7; /* power down */
for (i = 0; i < max_lane; i++)
edp_write(phy->base + REG_EDP_PHY_LN_PD_CTL(i), data);
/* power down unused lane */
data = 0x7; /* power down */
for (i = max_lane; i < EDP_MAX_LANE; i++)
edp_write(phy->base + REG_EDP_PHY_LN_PD_CTL(i), data);
}
void *msm_edp_phy_init(struct device *dev, void __iomem *regbase)
{
struct edp_phy *phy = NULL;
phy = devm_kzalloc(dev, sizeof(*phy), GFP_KERNEL);
if (!phy)
return NULL;
phy->base = regbase;
return phy;
}
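
msm_edp_phy_ready() above open-codes a bounded poll of the PHY status bit. Today the same thing is usually written with the iopoll helpers; a hedged equivalent (illustrative register handle):

#include <linux/bits.h>
#include <linux/iopoll.h>

static int example_wait_phy_ready(void __iomem *status_reg)
{
	u32 status;

	/* poll bit 0 every ~500 us, giving up after 100 ms */
	return readl_poll_timeout(status_reg, status, status & BIT(0),
				  500, 100 * 1000);
}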

View file

@ -8,6 +8,8 @@
#include <linux/of_irq.h>
#include <linux/of_gpio.h>
#include <drm/drm_bridge_connector.h>
#include <sound/hdmi-codec.h>
#include "hdmi.h"
@ -41,7 +43,7 @@ static irqreturn_t msm_hdmi_irq(int irq, void *dev_id)
struct hdmi *hdmi = dev_id;
/* Process HPD: */
msm_hdmi_connector_irq(hdmi->connector);
msm_hdmi_hpd_irq(hdmi->bridge);
/* Process DDC: */
msm_hdmi_i2c_irq(hdmi->i2c);
@ -281,7 +283,7 @@ int msm_hdmi_modeset_init(struct hdmi *hdmi,
goto fail;
}
hdmi->connector = msm_hdmi_connector_init(hdmi);
hdmi->connector = drm_bridge_connector_init(hdmi->dev, encoder);
if (IS_ERR(hdmi->connector)) {
ret = PTR_ERR(hdmi->connector);
DRM_DEV_ERROR(dev->dev, "failed to create HDMI connector: %d\n", ret);
@ -289,6 +291,8 @@ int msm_hdmi_modeset_init(struct hdmi *hdmi,
goto fail;
}
drm_connector_attach_encoder(hdmi->connector, hdmi->encoder);
hdmi->irq = irq_of_parse_and_map(pdev->dev.of_node, 0);
if (hdmi->irq < 0) {
ret = hdmi->irq;
@ -305,7 +309,9 @@ int msm_hdmi_modeset_init(struct hdmi *hdmi,
goto fail;
}
ret = msm_hdmi_hpd_enable(hdmi->connector);
drm_bridge_connector_enable_hpd(hdmi->connector);
ret = msm_hdmi_hpd_enable(hdmi->bridge);
if (ret < 0) {
DRM_DEV_ERROR(&hdmi->pdev->dev, "failed to enable HPD: %d\n", ret);
goto fail;
@ -514,8 +520,7 @@ static int msm_hdmi_register_audio_driver(struct hdmi *hdmi, struct device *dev)
static int msm_hdmi_bind(struct device *dev, struct device *master, void *data)
{
struct drm_device *drm = dev_get_drvdata(master);
struct msm_drm_private *priv = drm->dev_private;
struct msm_drm_private *priv = dev_get_drvdata(master);
struct hdmi_platform_config *hdmi_cfg;
struct hdmi *hdmi;
struct device_node *of_node = dev->of_node;
@ -586,8 +591,8 @@ static int msm_hdmi_bind(struct device *dev, struct device *master, void *data)
static void msm_hdmi_unbind(struct device *dev, struct device *master,
void *data)
{
struct drm_device *drm = dev_get_drvdata(master);
struct msm_drm_private *priv = drm->dev_private;
struct msm_drm_private *priv = dev_get_drvdata(master);
if (priv->hdmi) {
if (priv->hdmi->audio_pdev)
platform_device_unregister(priv->hdmi->audio_pdev);

View file

@ -114,6 +114,13 @@ struct hdmi_platform_config {
struct hdmi_gpio_data gpios[HDMI_MAX_NUM_GPIO];
};
struct hdmi_bridge {
struct drm_bridge base;
struct hdmi *hdmi;
struct work_struct hpd_work;
};
#define to_hdmi_bridge(x) container_of(x, struct hdmi_bridge, base)
void msm_hdmi_set_mode(struct hdmi *hdmi, bool power_on);
static inline void hdmi_write(struct hdmi *hdmi, u32 reg, u32 data)
@ -230,13 +237,11 @@ void msm_hdmi_audio_set_sample_rate(struct hdmi *hdmi, int rate);
struct drm_bridge *msm_hdmi_bridge_init(struct hdmi *hdmi);
void msm_hdmi_bridge_destroy(struct drm_bridge *bridge);
/*
* hdmi connector:
*/
void msm_hdmi_connector_irq(struct drm_connector *connector);
struct drm_connector *msm_hdmi_connector_init(struct hdmi *hdmi);
int msm_hdmi_hpd_enable(struct drm_connector *connector);
void msm_hdmi_hpd_irq(struct drm_bridge *bridge);
enum drm_connector_status msm_hdmi_bridge_detect(
struct drm_bridge *bridge);
int msm_hdmi_hpd_enable(struct drm_bridge *bridge);
void msm_hdmi_hpd_disable(struct hdmi_bridge *hdmi_bridge);
/*
* i2c adapter for ddc:

View file

@ -5,17 +5,16 @@
*/
#include <linux/delay.h>
#include <drm/drm_bridge_connector.h>
#include "msm_kms.h"
#include "hdmi.h"
struct hdmi_bridge {
struct drm_bridge base;
struct hdmi *hdmi;
};
#define to_hdmi_bridge(x) container_of(x, struct hdmi_bridge, base)
void msm_hdmi_bridge_destroy(struct drm_bridge *bridge)
{
struct hdmi_bridge *hdmi_bridge = to_hdmi_bridge(bridge);
msm_hdmi_hpd_disable(hdmi_bridge);
}
static void msm_hdmi_power_on(struct drm_bridge *bridge)
@ -70,7 +69,7 @@ static void power_off(struct drm_bridge *bridge)
if (ret)
DRM_DEV_ERROR(dev->dev, "failed to disable pwr regulator: %d\n", ret);
pm_runtime_put_autosuspend(&hdmi->pdev->dev);
pm_runtime_put(&hdmi->pdev->dev);
}
#define AVI_IFRAME_LINE_NUMBER 1
@ -251,14 +250,76 @@ static void msm_hdmi_bridge_mode_set(struct drm_bridge *bridge,
msm_hdmi_audio_update(hdmi);
}
static struct edid *msm_hdmi_bridge_get_edid(struct drm_bridge *bridge,
struct drm_connector *connector)
{
struct hdmi_bridge *hdmi_bridge = to_hdmi_bridge(bridge);
struct hdmi *hdmi = hdmi_bridge->hdmi;
struct edid *edid;
uint32_t hdmi_ctrl;
hdmi_ctrl = hdmi_read(hdmi, REG_HDMI_CTRL);
hdmi_write(hdmi, REG_HDMI_CTRL, hdmi_ctrl | HDMI_CTRL_ENABLE);
edid = drm_get_edid(connector, hdmi->i2c);
hdmi_write(hdmi, REG_HDMI_CTRL, hdmi_ctrl);
hdmi->hdmi_mode = drm_detect_hdmi_monitor(edid);
return edid;
}
static enum drm_mode_status msm_hdmi_bridge_mode_valid(struct drm_bridge *bridge,
const struct drm_display_info *info,
const struct drm_display_mode *mode)
{
struct hdmi_bridge *hdmi_bridge = to_hdmi_bridge(bridge);
struct hdmi *hdmi = hdmi_bridge->hdmi;
const struct hdmi_platform_config *config = hdmi->config;
struct msm_drm_private *priv = bridge->dev->dev_private;
struct msm_kms *kms = priv->kms;
long actual, requested;
requested = 1000 * mode->clock;
actual = kms->funcs->round_pixclk(kms,
requested, hdmi_bridge->hdmi->encoder);
/* for mdp5/apq8074, we manage our own pixel clk (as opposed to
* mdp4/dtv stuff where pixel clk is assigned to mdp/encoder
* instead):
*/
if (config->pwr_clk_cnt > 0)
actual = clk_round_rate(hdmi->pwr_clks[0], actual);
DBG("requested=%ld, actual=%ld", requested, actual);
if (actual != requested)
return MODE_CLOCK_RANGE;
return 0;
}
static const struct drm_bridge_funcs msm_hdmi_bridge_funcs = {
.pre_enable = msm_hdmi_bridge_pre_enable,
.enable = msm_hdmi_bridge_enable,
.disable = msm_hdmi_bridge_disable,
.post_disable = msm_hdmi_bridge_post_disable,
.mode_set = msm_hdmi_bridge_mode_set,
.mode_valid = msm_hdmi_bridge_mode_valid,
.get_edid = msm_hdmi_bridge_get_edid,
.detect = msm_hdmi_bridge_detect,
};
static void
msm_hdmi_hotplug_work(struct work_struct *work)
{
struct hdmi_bridge *hdmi_bridge =
container_of(work, struct hdmi_bridge, hpd_work);
struct drm_bridge *bridge = &hdmi_bridge->base;
drm_bridge_hpd_notify(bridge, drm_bridge_detect(bridge));
}
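
The HPD flow is now two-stage: the hard IRQ, which must not sleep, acks the hardware and queues hpd_work, and the worker above reports the result via drm_bridge_hpd_notify(). A hedged sketch of the queuing side (the real driver uses its own hdmi->workq; system_wq here is illustrative):

#include <linux/interrupt.h>
#include <linux/workqueue.h>

static irqreturn_t example_hpd_isr(int irq, void *arg)
{
	struct hdmi_bridge *hdmi_bridge = arg;

	/* ack/clear the HPD interrupt status in hardware here */

	queue_work(system_wq, &hdmi_bridge->hpd_work);

	return IRQ_HANDLED;
}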
/* initialize bridge */
struct drm_bridge *msm_hdmi_bridge_init(struct hdmi *hdmi)
@ -275,11 +336,17 @@ struct drm_bridge *msm_hdmi_bridge_init(struct hdmi *hdmi)
}
hdmi_bridge->hdmi = hdmi;
INIT_WORK(&hdmi_bridge->hpd_work, msm_hdmi_hotplug_work);
bridge = &hdmi_bridge->base;
bridge->funcs = &msm_hdmi_bridge_funcs;
bridge->ddc = hdmi->i2c;
bridge->type = DRM_MODE_CONNECTOR_HDMIA;
bridge->ops = DRM_BRIDGE_OP_HPD |
DRM_BRIDGE_OP_DETECT |
DRM_BRIDGE_OP_EDID;
ret = drm_bridge_attach(hdmi->encoder, bridge, NULL, 0);
ret = drm_bridge_attach(hdmi->encoder, bridge, NULL, DRM_BRIDGE_ATTACH_NO_CONNECTOR);
if (ret)
goto fail;

View file

@ -11,13 +11,6 @@
#include "msm_kms.h"
#include "hdmi.h"
struct hdmi_connector {
struct drm_connector base;
struct hdmi *hdmi;
struct work_struct hpd_work;
};
#define to_hdmi_connector(x) container_of(x, struct hdmi_connector, base)
static void msm_hdmi_phy_reset(struct hdmi *hdmi)
{
unsigned int val;
@ -139,10 +132,10 @@ static void enable_hpd_clocks(struct hdmi *hdmi, bool enable)
}
}
int msm_hdmi_hpd_enable(struct drm_connector *connector)
int msm_hdmi_hpd_enable(struct drm_bridge *bridge)
{
struct hdmi_connector *hdmi_connector = to_hdmi_connector(connector);
struct hdmi *hdmi = hdmi_connector->hdmi;
struct hdmi_bridge *hdmi_bridge = to_hdmi_bridge(bridge);
struct hdmi *hdmi = hdmi_bridge->hdmi;
const struct hdmi_platform_config *config = hdmi->config;
struct device *dev = &hdmi->pdev->dev;
uint32_t hpd_ctrl;
@ -199,9 +192,9 @@ int msm_hdmi_hpd_enable(struct drm_connector *connector)
return ret;
}
static void hdp_disable(struct hdmi_connector *hdmi_connector)
void msm_hdmi_hpd_disable(struct hdmi_bridge *hdmi_bridge)
{
struct hdmi *hdmi = hdmi_connector->hdmi;
struct hdmi *hdmi = hdmi_bridge->hdmi;
const struct hdmi_platform_config *config = hdmi->config;
struct device *dev = &hdmi->pdev->dev;
int ret;
@ -212,7 +205,7 @@ static void hdp_disable(struct hdmi_connector *hdmi_connector)
msm_hdmi_set_mode(hdmi, false);
enable_hpd_clocks(hdmi, false);
pm_runtime_put_autosuspend(dev);
pm_runtime_put(dev);
ret = gpio_config(hdmi, false);
if (ret)
@ -227,19 +220,10 @@ static void hdp_disable(struct hdmi_connector *hdmi_connector)
dev_warn(dev, "failed to disable hpd regulator: %d\n", ret);
}
static void
msm_hdmi_hotplug_work(struct work_struct *work)
void msm_hdmi_hpd_irq(struct drm_bridge *bridge)
{
struct hdmi_connector *hdmi_connector =
container_of(work, struct hdmi_connector, hpd_work);
struct drm_connector *connector = &hdmi_connector->base;
drm_helper_hpd_irq_event(connector->dev);
}
void msm_hdmi_connector_irq(struct drm_connector *connector)
{
struct hdmi_connector *hdmi_connector = to_hdmi_connector(connector);
struct hdmi *hdmi = hdmi_connector->hdmi;
struct hdmi_bridge *hdmi_bridge = to_hdmi_bridge(bridge);
struct hdmi *hdmi = hdmi_bridge->hdmi;
uint32_t hpd_int_status, hpd_int_ctrl;
/* Process HPD: */
@ -262,7 +246,7 @@ void msm_hdmi_connector_irq(struct drm_connector *connector)
hpd_int_ctrl |= HDMI_HPD_INT_CTRL_INT_CONNECT;
hdmi_write(hdmi, REG_HDMI_HPD_INT_CTRL, hpd_int_ctrl);
queue_work(hdmi->workq, &hdmi_connector->hpd_work);
queue_work(hdmi->workq, &hdmi_bridge->hpd_work);
}
}
@ -276,7 +260,7 @@ static enum drm_connector_status detect_reg(struct hdmi *hdmi)
hpd_int_status = hdmi_read(hdmi, REG_HDMI_HPD_INT_STATUS);
enable_hpd_clocks(hdmi, false);
pm_runtime_put_autosuspend(&hdmi->pdev->dev);
pm_runtime_put(&hdmi->pdev->dev);
return (hpd_int_status & HDMI_HPD_INT_STATUS_CABLE_DETECTED) ?
connector_status_connected : connector_status_disconnected;
@ -293,11 +277,11 @@ static enum drm_connector_status detect_gpio(struct hdmi *hdmi)
connector_status_disconnected;
}
static enum drm_connector_status hdmi_connector_detect(
struct drm_connector *connector, bool force)
enum drm_connector_status msm_hdmi_bridge_detect(
struct drm_bridge *bridge)
{
struct hdmi_connector *hdmi_connector = to_hdmi_connector(connector);
struct hdmi *hdmi = hdmi_connector->hdmi;
struct hdmi_bridge *hdmi_bridge = to_hdmi_bridge(bridge);
struct hdmi *hdmi = hdmi_bridge->hdmi;
const struct hdmi_platform_config *config = hdmi->config;
struct hdmi_gpio_data hpd_gpio = config->gpios[HPD_GPIO_INDEX];
enum drm_connector_status stat_gpio, stat_reg;
@ -331,115 +315,3 @@ static enum drm_connector_status hdmi_connector_detect(
return stat_gpio;
}
static void hdmi_connector_destroy(struct drm_connector *connector)
{
struct hdmi_connector *hdmi_connector = to_hdmi_connector(connector);
hdp_disable(hdmi_connector);
drm_connector_cleanup(connector);
kfree(hdmi_connector);
}
static int msm_hdmi_connector_get_modes(struct drm_connector *connector)
{
struct hdmi_connector *hdmi_connector = to_hdmi_connector(connector);
struct hdmi *hdmi = hdmi_connector->hdmi;
struct edid *edid;
uint32_t hdmi_ctrl;
int ret = 0;
hdmi_ctrl = hdmi_read(hdmi, REG_HDMI_CTRL);
hdmi_write(hdmi, REG_HDMI_CTRL, hdmi_ctrl | HDMI_CTRL_ENABLE);
edid = drm_get_edid(connector, hdmi->i2c);
hdmi_write(hdmi, REG_HDMI_CTRL, hdmi_ctrl);
hdmi->hdmi_mode = drm_detect_hdmi_monitor(edid);
drm_connector_update_edid_property(connector, edid);
if (edid) {
ret = drm_add_edid_modes(connector, edid);
kfree(edid);
}
return ret;
}
static int msm_hdmi_connector_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
{
struct hdmi_connector *hdmi_connector = to_hdmi_connector(connector);
struct hdmi *hdmi = hdmi_connector->hdmi;
const struct hdmi_platform_config *config = hdmi->config;
struct msm_drm_private *priv = connector->dev->dev_private;
struct msm_kms *kms = priv->kms;
long actual, requested;
requested = 1000 * mode->clock;
actual = kms->funcs->round_pixclk(kms,
requested, hdmi_connector->hdmi->encoder);
/* for mdp5/apq8074, we manage our own pixel clk (as opposed to
* mdp4/dtv stuff where pixel clk is assigned to mdp/encoder
* instead):
*/
if (config->pwr_clk_cnt > 0)
actual = clk_round_rate(hdmi->pwr_clks[0], actual);
DBG("requested=%ld, actual=%ld", requested, actual);
if (actual != requested)
return MODE_CLOCK_RANGE;
return 0;
}
static const struct drm_connector_funcs hdmi_connector_funcs = {
.detect = hdmi_connector_detect,
.fill_modes = drm_helper_probe_single_connector_modes,
.destroy = hdmi_connector_destroy,
.reset = drm_atomic_helper_connector_reset,
.atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
};
static const struct drm_connector_helper_funcs msm_hdmi_connector_helper_funcs = {
.get_modes = msm_hdmi_connector_get_modes,
.mode_valid = msm_hdmi_connector_mode_valid,
};
/* initialize connector */
struct drm_connector *msm_hdmi_connector_init(struct hdmi *hdmi)
{
struct drm_connector *connector = NULL;
struct hdmi_connector *hdmi_connector;
hdmi_connector = kzalloc(sizeof(*hdmi_connector), GFP_KERNEL);
if (!hdmi_connector)
return ERR_PTR(-ENOMEM);
hdmi_connector->hdmi = hdmi;
INIT_WORK(&hdmi_connector->hpd_work, msm_hdmi_hotplug_work);
connector = &hdmi_connector->base;
drm_connector_init_with_ddc(hdmi->dev, connector,
&hdmi_connector_funcs,
DRM_MODE_CONNECTOR_HDMIA,
hdmi->i2c);
drm_connector_helper_add(connector, &msm_hdmi_connector_helper_funcs);
connector->polled = DRM_CONNECTOR_POLL_CONNECT |
DRM_CONNECTOR_POLL_DISCONNECT;
connector->interlace_allowed = 0;
connector->doublescan_allowed = 0;
drm_connector_attach_encoder(connector, hdmi->encoder);
return connector;
}

View file

@ -15,6 +15,11 @@
#include "msm_gpu.h"
#include "msm_kms.h"
#include "msm_debugfs.h"
#include "disp/msm_disp_snapshot.h"
/*
* GPU Snapshot:
*/
struct msm_gpu_show_priv {
struct msm_gpu_state *state;
@ -29,14 +34,14 @@ static int msm_gpu_show(struct seq_file *m, void *arg)
struct msm_gpu *gpu = priv->gpu;
int ret;
ret = mutex_lock_interruptible(&show_priv->dev->struct_mutex);
ret = mutex_lock_interruptible(&gpu->lock);
if (ret)
return ret;
drm_printf(&p, "%s Status:\n", gpu->name);
gpu->funcs->show(gpu, show_priv->state, &p);
mutex_unlock(&show_priv->dev->struct_mutex);
mutex_unlock(&gpu->lock);
return 0;
}
@ -48,9 +53,9 @@ static int msm_gpu_release(struct inode *inode, struct file *file)
struct msm_drm_private *priv = show_priv->dev->dev_private;
struct msm_gpu *gpu = priv->gpu;
mutex_lock(&show_priv->dev->struct_mutex);
mutex_lock(&gpu->lock);
gpu->funcs->gpu_state_put(show_priv->state);
mutex_unlock(&show_priv->dev->struct_mutex);
mutex_unlock(&gpu->lock);
kfree(show_priv);
@ -72,7 +77,7 @@ static int msm_gpu_open(struct inode *inode, struct file *file)
if (!show_priv)
return -ENOMEM;
ret = mutex_lock_interruptible(&dev->struct_mutex);
ret = mutex_lock_interruptible(&gpu->lock);
if (ret)
goto free_priv;
@ -81,7 +86,7 @@ static int msm_gpu_open(struct inode *inode, struct file *file)
show_priv->state = gpu->funcs->gpu_state_get(gpu);
pm_runtime_put_sync(&gpu->pdev->dev);
mutex_unlock(&dev->struct_mutex);
mutex_unlock(&gpu->lock);
if (IS_ERR(show_priv->state)) {
ret = PTR_ERR(show_priv->state);
@ -109,6 +114,73 @@ static const struct file_operations msm_gpu_fops = {
.release = msm_gpu_release,
};
/*
* Display Snapshot:
*/
static int msm_kms_show(struct seq_file *m, void *arg)
{
struct drm_printer p = drm_seq_file_printer(m);
struct msm_disp_state *state = m->private;
msm_disp_state_print(state, &p);
return 0;
}
static int msm_kms_release(struct inode *inode, struct file *file)
{
struct seq_file *m = file->private_data;
struct msm_disp_state *state = m->private;
msm_disp_state_free(state);
return single_release(inode, file);
}
static int msm_kms_open(struct inode *inode, struct file *file)
{
struct drm_device *dev = inode->i_private;
struct msm_drm_private *priv = dev->dev_private;
struct msm_disp_state *state;
int ret;
if (!priv->kms)
return -ENODEV;
ret = mutex_lock_interruptible(&priv->kms->dump_mutex);
if (ret)
return ret;
state = msm_disp_snapshot_state_sync(priv->kms);
mutex_unlock(&priv->kms->dump_mutex);
if (IS_ERR(state)) {
return PTR_ERR(state);
}
ret = single_open(file, msm_kms_show, state);
if (ret) {
msm_disp_state_free(state);
return ret;
}
return 0;
}
static const struct file_operations msm_kms_fops = {
.owner = THIS_MODULE,
.open = msm_kms_open,
.read = seq_read,
.llseek = seq_lseek,
.release = msm_kms_release,
};
/*
* Other debugfs:
*/
static unsigned long last_shrink_freed;
static int
@ -134,8 +206,10 @@ DEFINE_SIMPLE_ATTRIBUTE(shrink_fops,
"0x%08llx\n");
static int msm_gem_show(struct drm_device *dev, struct seq_file *m)
static int msm_gem_show(struct seq_file *m, void *arg)
{
struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_device *dev = node->minor->dev;
struct msm_drm_private *priv = dev->dev_private;
int ret;
@ -150,8 +224,10 @@ static int msm_gem_show(struct drm_device *dev, struct seq_file *m)
return 0;
}
static int msm_mm_show(struct drm_device *dev, struct seq_file *m)
static int msm_mm_show(struct seq_file *m, void *arg)
{
struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_device *dev = node->minor->dev;
struct drm_printer p = drm_seq_file_printer(m);
drm_mm_print(&dev->vma_offset_manager->vm_addr_space_mm, &p);
@ -159,8 +235,10 @@ static int msm_mm_show(struct drm_device *dev, struct seq_file *m)
return 0;
}
static int msm_fb_show(struct drm_device *dev, struct seq_file *m)
static int msm_fb_show(struct seq_file *m, void *arg)
{
struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_device *dev = node->minor->dev;
struct msm_drm_private *priv = dev->dev_private;
struct drm_framebuffer *fb, *fbdev_fb = NULL;
@ -183,29 +261,10 @@ static int msm_fb_show(struct drm_device *dev, struct seq_file *m)
return 0;
}
static int show_locked(struct seq_file *m, void *arg)
{
struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_device *dev = node->minor->dev;
int (*show)(struct drm_device *dev, struct seq_file *m) =
node->info_ent->data;
int ret;
ret = mutex_lock_interruptible(&dev->struct_mutex);
if (ret)
return ret;
ret = show(dev, m);
mutex_unlock(&dev->struct_mutex);
return ret;
}
static struct drm_info_list msm_debugfs_list[] = {
{"gem", show_locked, 0, msm_gem_show},
{ "mm", show_locked, 0, msm_mm_show },
{ "fb", show_locked, 0, msm_fb_show },
{"gem", msm_gem_show},
{ "mm", msm_mm_show },
{ "fb", msm_fb_show },
};
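
With show_locked gone, the drm debugfs core calls each entry directly. For
orientation, a sketch of the registration side, which is not part of this
hunk; in msm it lives in msm_debugfs_init():

	drm_debugfs_create_files(msm_debugfs_list,
				 ARRAY_SIZE(msm_debugfs_list),
				 minor->debugfs_root, minor);
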
static int late_init_minor(struct drm_minor *minor)
@ -252,9 +311,15 @@ void msm_debugfs_init(struct drm_minor *minor)
debugfs_create_file("gpu", S_IRUSR, minor->debugfs_root,
dev, &msm_gpu_fops);
debugfs_create_file("kms", S_IRUSR, minor->debugfs_root,
dev, &msm_kms_fops);
debugfs_create_u32("hangcheck_period_ms", 0600, minor->debugfs_root,
&priv->hangcheck_period);
debugfs_create_bool("disable_err_irq", 0600, minor->debugfs_root,
&priv->disable_err_irq);
debugfs_create_file("shrink", S_IRWXU, minor->debugfs_root,
dev, &shrink_fops);

View file

@ -339,10 +339,9 @@ static int vblank_ctrl_queue_work(struct msm_drm_private *priv,
static int msm_drm_uninit(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
struct drm_device *ddev = platform_get_drvdata(pdev);
struct msm_drm_private *priv = ddev->dev_private;
struct msm_drm_private *priv = platform_get_drvdata(pdev);
struct drm_device *ddev = priv->dev;
struct msm_kms *kms = priv->kms;
struct msm_mdss *mdss = priv->mdss;
int i;
/*
@ -402,14 +401,10 @@ static int msm_drm_uninit(struct device *dev)
component_unbind_all(dev, ddev);
if (mdss && mdss->funcs)
mdss->funcs->destroy(ddev);
ddev->dev_private = NULL;
drm_dev_put(ddev);
destroy_workqueue(priv->wq);
kfree(priv);
return 0;
}
@ -512,8 +507,8 @@ static int msm_init_vram(struct drm_device *dev)
static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
{
struct platform_device *pdev = to_platform_device(dev);
struct msm_drm_private *priv = dev_get_drvdata(dev);
struct drm_device *ddev;
struct msm_drm_private *priv;
struct msm_kms *kms;
struct msm_mdss *mdss;
int ret, i;
@ -523,32 +518,9 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
DRM_DEV_ERROR(dev, "failed to allocate drm_device\n");
return PTR_ERR(ddev);
}
platform_set_drvdata(pdev, ddev);
priv = kzalloc(sizeof(*priv), GFP_KERNEL);
if (!priv) {
ret = -ENOMEM;
goto err_put_drm_dev;
}
ddev->dev_private = priv;
priv->dev = ddev;
switch (get_mdp_ver(pdev)) {
case KMS_MDP5:
ret = mdp5_mdss_init(ddev);
break;
case KMS_DPU:
ret = dpu_mdss_init(ddev);
break;
default:
ret = 0;
break;
}
if (ret)
goto err_free_priv;
mdss = priv->mdss;
priv->wq = alloc_ordered_workqueue("msm", 0);
@ -571,12 +543,12 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
ret = msm_init_vram(ddev);
if (ret)
goto err_destroy_mdss;
return ret;
/* Bind all our sub-components: */
ret = component_bind_all(dev, ddev);
if (ret)
goto err_destroy_mdss;
return ret;
dma_set_max_seg_size(dev, UINT_MAX);
@ -682,15 +654,6 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
err_msm_uninit:
msm_drm_uninit(dev);
return ret;
err_destroy_mdss:
if (mdss && mdss->funcs)
mdss->funcs->destroy(ddev);
err_free_priv:
kfree(priv);
err_put_drm_dev:
drm_dev_put(ddev);
platform_set_drvdata(pdev, NULL);
return ret;
}
/*
@ -752,14 +715,8 @@ static void context_close(struct msm_file_private *ctx)
static void msm_postclose(struct drm_device *dev, struct drm_file *file)
{
struct msm_drm_private *priv = dev->dev_private;
struct msm_file_private *ctx = file->driver_priv;
mutex_lock(&dev->struct_mutex);
if (ctx == priv->lastctx)
priv->lastctx = NULL;
mutex_unlock(&dev->struct_mutex);
context_close(ctx);
}
@ -973,7 +930,7 @@ static int wait_fence(struct msm_gpu_submitqueue *queue, uint32_t fence_id,
struct dma_fence *fence;
int ret;
if (fence_id > queue->last_fence) {
if (fence_after(fence_id, queue->last_fence)) {
DRM_ERROR_RATELIMITED("waiting on invalid fence: %u (of %u)\n",
fence_id, queue->last_fence);
return -EINVAL;
@ -1142,8 +1099,7 @@ static const struct drm_driver msm_driver = {
static int __maybe_unused msm_runtime_suspend(struct device *dev)
{
struct drm_device *ddev = dev_get_drvdata(dev);
struct msm_drm_private *priv = ddev->dev_private;
struct msm_drm_private *priv = dev_get_drvdata(dev);
struct msm_mdss *mdss = priv->mdss;
DBG("");
@ -1156,8 +1112,7 @@ static int __maybe_unused msm_runtime_suspend(struct device *dev)
static int __maybe_unused msm_runtime_resume(struct device *dev)
{
struct drm_device *ddev = dev_get_drvdata(dev);
struct msm_drm_private *priv = ddev->dev_private;
struct msm_drm_private *priv = dev_get_drvdata(dev);
struct msm_mdss *mdss = priv->mdss;
DBG("");
@ -1187,8 +1142,8 @@ static int __maybe_unused msm_pm_resume(struct device *dev)
static int __maybe_unused msm_pm_prepare(struct device *dev)
{
struct drm_device *ddev = dev_get_drvdata(dev);
struct msm_drm_private *priv = ddev ? ddev->dev_private : NULL;
struct msm_drm_private *priv = dev_get_drvdata(dev);
struct drm_device *ddev = priv ? priv->dev : NULL;
if (!priv || !priv->kms)
return 0;
@ -1198,8 +1153,8 @@ static int __maybe_unused msm_pm_prepare(struct device *dev)
static void __maybe_unused msm_pm_complete(struct device *dev)
{
struct drm_device *ddev = dev_get_drvdata(dev);
struct msm_drm_private *priv = ddev ? ddev->dev_private : NULL;
struct msm_drm_private *priv = dev_get_drvdata(dev);
struct drm_device *ddev = priv ? priv->dev : NULL;
if (!priv || !priv->kms)
return;
@ -1292,9 +1247,10 @@ static int add_components_mdp(struct device *mdp_dev,
return 0;
}
static int compare_name_mdp(struct device *dev, void *data)
static int find_mdp_node(struct device *dev, void *data)
{
return (strstr(dev_name(dev), "mdp") != NULL);
return of_match_node(dpu_dt_match, dev->of_node) ||
of_match_node(mdp5_dt_match, dev->of_node);
}
static int add_display_components(struct platform_device *pdev,
@ -1319,7 +1275,7 @@ static int add_display_components(struct platform_device *pdev,
return ret;
}
mdp_dev = device_find_child(dev, NULL, compare_name_mdp);
mdp_dev = device_find_child(dev, NULL, find_mdp_node);
if (!mdp_dev) {
DRM_DEV_ERROR(dev, "failed to find MDSS MDP node\n");
of_platform_depopulate(dev);
@ -1397,12 +1353,35 @@ static const struct component_master_ops msm_drm_ops = {
static int msm_pdev_probe(struct platform_device *pdev)
{
struct component_match *match = NULL;
struct msm_drm_private *priv;
int ret;
priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
if (!priv)
return -ENOMEM;
platform_set_drvdata(pdev, priv);
switch (get_mdp_ver(pdev)) {
case KMS_MDP5:
ret = mdp5_mdss_init(pdev);
break;
case KMS_DPU:
ret = dpu_mdss_init(pdev);
break;
default:
ret = 0;
break;
}
if (ret) {
platform_set_drvdata(pdev, NULL);
return ret;
}
if (get_mdp_ver(pdev)) {
ret = add_display_components(pdev, &match);
if (ret)
return ret;
goto fail;
}
ret = add_gpu_components(&pdev->dev, &match);
@ -1424,21 +1403,31 @@ static int msm_pdev_probe(struct platform_device *pdev)
fail:
of_platform_depopulate(&pdev->dev);
if (priv->mdss && priv->mdss->funcs)
priv->mdss->funcs->destroy(priv->mdss);
return ret;
}
static int msm_pdev_remove(struct platform_device *pdev)
{
struct msm_drm_private *priv = platform_get_drvdata(pdev);
struct msm_mdss *mdss = priv->mdss;
component_master_del(&pdev->dev, &msm_drm_ops);
of_platform_depopulate(&pdev->dev);
if (mdss && mdss->funcs)
mdss->funcs->destroy(mdss);
return 0;
}
static void msm_pdev_shutdown(struct platform_device *pdev)
{
struct drm_device *drm = platform_get_drvdata(pdev);
struct msm_drm_private *priv = drm ? drm->dev_private : NULL;
struct msm_drm_private *priv = platform_get_drvdata(pdev);
struct drm_device *drm = priv ? priv->dev : NULL;
if (!priv || !priv->kms)
return;
@ -1478,7 +1467,6 @@ static int __init msm_drm_register(void)
msm_mdp_register();
msm_dpu_register();
msm_dsi_register();
msm_edp_register();
msm_hdmi_register();
msm_dp_register();
adreno_register();
@ -1492,7 +1480,6 @@ static void __exit msm_drm_unregister(void)
msm_dp_unregister();
msm_hdmi_unregister();
adreno_unregister();
msm_edp_unregister();
msm_dsi_unregister();
msm_mdp_unregister();
msm_dpu_unregister();

View file

@ -151,12 +151,6 @@ struct msm_drm_private {
*/
struct hdmi *hdmi;
/* eDP is for mdp5 only, but kms has not been created
* when edp_bind() and edp_init() are called. Here is the only
* place to keep the edp instance.
*/
struct msm_edp *edp;
/* DSI is shared by mdp4 and mdp5 */
struct msm_dsi *dsi[2];
@ -164,7 +158,7 @@ struct msm_drm_private {
/* when we have more than one 'msm_gpu' these need to be an array: */
struct msm_gpu *gpu;
struct msm_file_private *lastctx;
/* gpu is only set on open(), but we need this info earlier */
bool is_a2xx;
bool has_cached_coherent;
@ -246,6 +240,15 @@ struct msm_drm_private {
/* For hang detection, in ms */
unsigned int hangcheck_period;
/**
* disable_err_irq:
*
* Disable handling of GPU hw error interrupts, to force fallback to
* sw hangcheck timer. Written (via debugfs) by igt tests to test
* the sw hangcheck mechanism.
*/
bool disable_err_irq;
};
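
A hedged sketch of how a GPU interrupt handler might honor this flag; this is
not the actual a5xx/a6xx code, and ERR_IRQ_BITS is a hypothetical mask of the
hw hang/fault interrupt bits:

	static u32 filter_err_irqs(struct msm_gpu *gpu, u32 status)
	{
		struct msm_drm_private *priv = gpu->dev->dev_private;

		/* Leave only the retire/flush irqs, so recovery is driven
		 * purely by the sw hangcheck timer:
		 */
		if (priv->disable_err_irq)
			status &= ~ERR_IRQ_BITS;

		return status;
	}
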
struct msm_format {
@ -335,12 +338,6 @@ int msm_hdmi_modeset_init(struct hdmi *hdmi, struct drm_device *dev,
void __init msm_hdmi_register(void);
void __exit msm_hdmi_unregister(void);
struct msm_edp;
void __init msm_edp_register(void);
void __exit msm_edp_unregister(void);
int msm_edp_modeset_init(struct msm_edp *edp, struct drm_device *dev,
struct drm_encoder *encoder);
struct msm_dsi;
#ifdef CONFIG_DRM_MSM_DSI
int dsi_dev_attach(struct platform_device *pdev);
@ -392,8 +389,12 @@ int msm_dp_display_enable(struct msm_dp *dp, struct drm_encoder *encoder);
int msm_dp_display_disable(struct msm_dp *dp, struct drm_encoder *encoder);
int msm_dp_display_pre_disable(struct msm_dp *dp, struct drm_encoder *encoder);
void msm_dp_display_mode_set(struct msm_dp *dp, struct drm_encoder *encoder,
struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode);
const struct drm_display_mode *mode,
const struct drm_display_mode *adjusted_mode);
struct drm_bridge *msm_dp_bridge_init(struct msm_dp *dp_display,
struct drm_device *dev,
struct drm_encoder *encoder);
void msm_dp_irq_postinstall(struct msm_dp *dp_display);
void msm_dp_snapshot(struct msm_disp_state *disp_state, struct msm_dp *dp_display);
@ -430,8 +431,8 @@ static inline int msm_dp_display_pre_disable(struct msm_dp *dp,
}
static inline void msm_dp_display_mode_set(struct msm_dp *dp,
struct drm_encoder *encoder,
struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode)
const struct drm_display_mode *mode,
const struct drm_display_mode *adjusted_mode)
{
}

View file

@ -81,8 +81,6 @@ static int msm_fbdev_create(struct drm_fb_helper *helper,
bo = msm_framebuffer_bo(fb, 0);
mutex_lock(&dev->struct_mutex);
/*
* NOTE: if we can be guaranteed to be able to map buffer
* in panic (ie. lock-safe, etc) we could avoid pinning the
@ -91,14 +89,14 @@ static int msm_fbdev_create(struct drm_fb_helper *helper,
ret = msm_gem_get_and_pin_iova(bo, priv->kms->aspace, &paddr);
if (ret) {
DRM_DEV_ERROR(dev->dev, "failed to get buffer obj iova: %d\n", ret);
goto fail_unlock;
goto fail;
}
fbi = drm_fb_helper_alloc_fbi(helper);
if (IS_ERR(fbi)) {
DRM_DEV_ERROR(dev->dev, "failed to allocate fb info\n");
ret = PTR_ERR(fbi);
goto fail_unlock;
goto fail;
}
DBG("fbi=%p, dev=%p", fbi, dev);
@ -115,7 +113,7 @@ static int msm_fbdev_create(struct drm_fb_helper *helper,
fbi->screen_base = msm_gem_get_vaddr(bo);
if (IS_ERR(fbi->screen_base)) {
ret = PTR_ERR(fbi->screen_base);
goto fail_unlock;
goto fail;
}
fbi->screen_size = bo->size;
fbi->fix.smem_start = paddr;
@ -124,12 +122,9 @@ static int msm_fbdev_create(struct drm_fb_helper *helper,
DBG("par=%p, %dx%d", fbi->par, fbi->var.xres, fbi->var.yres);
DBG("allocated %dx%d fb", fbdev->fb->width, fbdev->fb->height);
mutex_unlock(&dev->struct_mutex);
return 0;
fail_unlock:
mutex_unlock(&dev->struct_mutex);
fail:
drm_framebuffer_remove(fb);
return ret;
}

View file

@ -60,4 +60,16 @@ void msm_update_fence(struct msm_fence_context *fctx, uint32_t fence);
struct dma_fence * msm_fence_alloc(struct msm_fence_context *fctx);
static inline bool
fence_before(uint32_t a, uint32_t b)
{
return (int32_t)(a - b) < 0;
}
static inline bool
fence_after(uint32_t a, uint32_t b)
{
return (int32_t)(a - b) > 0;
}
#endif
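
These helpers compare fence seqnos modulo 2^32, so ordering stays correct
across rollover, which a naive a > b comparison gets wrong. A standalone
userspace sanity check, not part of the patch:

	#include <assert.h>
	#include <stdint.h>

	static inline _Bool fence_after(uint32_t a, uint32_t b)
	{
		return (int32_t)(a - b) > 0;
	}

	int main(void)
	{
		assert(fence_after(2, 1));            /* ordinary case */
		assert(fence_after(1, 0xffffffffu));  /* 1 is "after" across rollover */
		assert(!fence_after(0xffffffffu, 1)); /* naive a > b would say yes */
		return 0;
	}
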

View file

@ -881,7 +881,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
* to the underlying fence.
*/
submit->fence_id = idr_alloc_cyclic(&queue->fence_idr,
submit->user_fence, 0, INT_MAX, GFP_KERNEL);
submit->user_fence, 1, INT_MAX, GFP_KERNEL);
if (submit->fence_id < 0) {
ret = submit->fence_id = 0;
submit->fence_id = 0;

View file

@ -150,7 +150,7 @@ int msm_gpu_hw_init(struct msm_gpu *gpu)
{
int ret;
WARN_ON(!mutex_is_locked(&gpu->dev->struct_mutex));
WARN_ON(!mutex_is_locked(&gpu->lock));
if (!gpu->needs_hw_init)
return 0;
@ -172,7 +172,7 @@ static void update_fences(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
spin_lock_irqsave(&ring->submit_lock, flags);
list_for_each_entry(submit, &ring->submits, node) {
if (submit->seqno > fence)
if (fence_after(submit->seqno, fence))
break;
msm_update_fence(submit->ring->fctx,
@ -361,7 +361,7 @@ static void recover_worker(struct kthread_work *work)
char *comm = NULL, *cmd = NULL;
int i;
mutex_lock(&dev->struct_mutex);
mutex_lock(&gpu->lock);
DRM_DEV_ERROR(dev->dev, "%s: hangcheck recover!\n", gpu->name);
@ -442,7 +442,7 @@ static void recover_worker(struct kthread_work *work)
}
}
mutex_unlock(&dev->struct_mutex);
mutex_unlock(&gpu->lock);
msm_gpu_retire(gpu);
}
@ -450,12 +450,11 @@ static void recover_worker(struct kthread_work *work)
static void fault_worker(struct kthread_work *work)
{
struct msm_gpu *gpu = container_of(work, struct msm_gpu, fault_work);
struct drm_device *dev = gpu->dev;
struct msm_gem_submit *submit;
struct msm_ringbuffer *cur_ring = gpu->funcs->active_ring(gpu);
char *comm = NULL, *cmd = NULL;
mutex_lock(&dev->struct_mutex);
mutex_lock(&gpu->lock);
submit = find_submit(cur_ring, cur_ring->memptrs->fence + 1);
if (submit && submit->fault_dumped)
@ -490,7 +489,7 @@ static void fault_worker(struct kthread_work *work)
memset(&gpu->fault_info, 0, sizeof(gpu->fault_info));
gpu->aspace->mmu->funcs->resume_translation(gpu->aspace->mmu);
mutex_unlock(&dev->struct_mutex);
mutex_unlock(&gpu->lock);
}
static void hangcheck_timer_reset(struct msm_gpu *gpu)
@ -510,7 +509,7 @@ static void hangcheck_handler(struct timer_list *t)
if (fence != ring->hangcheck_fence) {
/* some progress has been made.. ya! */
ring->hangcheck_fence = fence;
} else if (fence < ring->seqno) {
} else if (fence_before(fence, ring->seqno)) {
/* no progress and not done.. hung! */
ring->hangcheck_fence = fence;
DRM_DEV_ERROR(dev->dev, "%s: hangcheck detected gpu lockup rb %d!\n",
@ -524,7 +523,7 @@ static void hangcheck_handler(struct timer_list *t)
}
/* if still more pending work, reset the hangcheck timer: */
if (ring->seqno > ring->hangcheck_fence)
if (fence_after(ring->seqno, ring->hangcheck_fence))
hangcheck_timer_reset(gpu);
/* workaround for missing irq: */
@ -733,7 +732,7 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
struct msm_ringbuffer *ring = submit->ring;
unsigned long flags;
WARN_ON(!mutex_is_locked(&dev->struct_mutex));
WARN_ON(!mutex_is_locked(&gpu->lock));
pm_runtime_get_sync(&gpu->pdev->dev);
@ -763,7 +762,7 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
mutex_unlock(&gpu->active_lock);
gpu->funcs->submit(gpu, submit);
priv->lastctx = submit->queue->ctx;
gpu->cur_ctx_seqno = submit->queue->ctx->seqno;
hangcheck_timer_reset(gpu);
}
@ -848,6 +847,7 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
INIT_LIST_HEAD(&gpu->active_list);
mutex_init(&gpu->active_lock);
mutex_init(&gpu->lock);
kthread_init_work(&gpu->retire_work, retire_worker);
kthread_init_work(&gpu->recover_work, recover_worker);
kthread_init_work(&gpu->fault_work, fault_worker);

View file

@ -87,6 +87,21 @@ struct msm_gpu_devfreq {
/** devfreq: devfreq instance */
struct devfreq *devfreq;
/**
* idle_freq:
*
* A PM QoS constraint to limit max freq while the GPU is idle.
*/
struct dev_pm_qos_request idle_freq;
/**
* boost_freq:
*
* A PM QoS constraint to boost min freq for a period of time
* until the boost expires.
*/
struct dev_pm_qos_request boost_freq;
/**
* busy_cycles:
*
@ -102,23 +117,20 @@ struct msm_gpu_devfreq {
/** idle_time: Time of last transition to idle: */
ktime_t idle_time;
/**
* idle_freq:
*
* Shadow frequency used while the GPU is idle. From the PoV of
* the devfreq governor, we are continuing to sample busyness and
* adjust frequency while the GPU is idle, but we use this shadow
* value as the GPU is actually clamped to minimum frequency while
* it is inactive.
*/
unsigned long idle_freq;
/**
* idle_work:
*
* Used to delay clamping to idle freq on active->idle transition.
*/
struct msm_hrtimer_work idle_work;
/**
* boost_work:
*
* Used to reset the boost_constraint after the boost period has
* elapsed
*/
struct msm_hrtimer_work boost_work;
};
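
Taken together, the two requests bracket whatever frequency the devfreq
governor picks: PM QoS aggregates MIN_FREQUENCY requests with max() and
MAX_FREQUENCY requests with min(). A conceptual sketch of the resulting
policy, in kHz, using the kernel's clamp() macro; not driver code:

	static unsigned long effective_khz(unsigned long governor_target,
					   unsigned long boost_min, /* boost_freq */
					   unsigned long idle_max)  /* idle_freq */
	{
		/* idle clamps by setting idle_max to 0 (lowest OPP);
		 * boost raises boost_min until boost_work resets it.
		 */
		return clamp(governor_target, boost_min, idle_max);
	}
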
struct msm_gpu {
@ -144,19 +156,40 @@ struct msm_gpu {
struct msm_ringbuffer *rb[MSM_GPU_MAX_RINGS];
int nr_rings;
/**
* cur_ctx_seqno:
*
* The ctx->seqno value of the last context to submit rendering,
* and the one with current pgtables installed (for generations
* that support per-context pgtables). Tracked by seqno rather
* than pointer value to avoid dangling pointers, and cases where
* a ctx can be freed and a new one created with the same address.
*/
int cur_ctx_seqno;
/*
* List of GEM active objects on this gpu. Protected by
* msm_drm_private::mm_lock
*/
struct list_head active_list;
/**
* lock:
*
* General lock for serializing all the gpu things.
*
* TODO move to per-ring locking where feasible (ie. submit/retire
* path, etc)
*/
struct mutex lock;
/**
* active_submits:
*
* The number of submitted but not yet retired submits, used to
* determine transitions between active and idle.
*
* Protected by lock
* Protected by active_lock
*/
int active_submits;
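
The submit path shown earlier in msm_gpu.c records the seqno after each
submission; generations with per-context pgtables can then compare seqnos to
decide whether a pgtable switch must be emitted. A hedged sketch, with
emit_pgtable_switch() standing in for the hw-specific helper:

	/* in the hw submit path, before queueing commands: */
	if (gpu->cur_ctx_seqno != submit->queue->ctx->seqno)
		emit_pgtable_switch(gpu, submit->queue->ctx);
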
@ -241,7 +274,7 @@ static inline bool msm_gpu_active(struct msm_gpu *gpu)
for (i = 0; i < gpu->nr_rings; i++) {
struct msm_ringbuffer *ring = gpu->rb[i];
if (ring->seqno > ring->memptrs->fence)
if (fence_after(ring->seqno, ring->memptrs->fence))
return true;
}
@ -501,6 +534,7 @@ void msm_devfreq_init(struct msm_gpu *gpu);
void msm_devfreq_cleanup(struct msm_gpu *gpu);
void msm_devfreq_resume(struct msm_gpu *gpu);
void msm_devfreq_suspend(struct msm_gpu *gpu);
void msm_devfreq_boost(struct msm_gpu *gpu, unsigned factor);
void msm_devfreq_active(struct msm_gpu *gpu);
void msm_devfreq_idle(struct msm_gpu *gpu);
@ -537,28 +571,28 @@ static inline struct msm_gpu_state *msm_gpu_crashstate_get(struct msm_gpu *gpu)
{
struct msm_gpu_state *state = NULL;
mutex_lock(&gpu->dev->struct_mutex);
mutex_lock(&gpu->lock);
if (gpu->crashstate) {
kref_get(&gpu->crashstate->ref);
state = gpu->crashstate;
}
mutex_unlock(&gpu->dev->struct_mutex);
mutex_unlock(&gpu->lock);
return state;
}
static inline void msm_gpu_crashstate_put(struct msm_gpu *gpu)
{
mutex_lock(&gpu->dev->struct_mutex);
mutex_lock(&gpu->lock);
if (gpu->crashstate) {
if (gpu->funcs->gpu_state_put(gpu->crashstate))
gpu->crashstate = NULL;
}
mutex_unlock(&gpu->dev->struct_mutex);
mutex_unlock(&gpu->lock);
}
/*

View file

@ -9,6 +9,7 @@
#include <linux/devfreq.h>
#include <linux/devfreq_cooling.h>
#include <linux/units.h>
/*
* Power Management:
@ -25,17 +26,6 @@ static int msm_devfreq_target(struct device *dev, unsigned long *freq,
* to something that actually is in the opp table:
*/
opp = devfreq_recommended_opp(dev, freq, flags);
/*
* If the GPU is idle, devfreq is not aware, so just ignore
* its requests
*/
if (gpu->devfreq.idle_freq) {
gpu->devfreq.idle_freq = *freq;
dev_pm_opp_put(opp);
return 0;
}
if (IS_ERR(opp))
return PTR_ERR(opp);
@ -53,9 +43,6 @@ static int msm_devfreq_target(struct device *dev, unsigned long *freq,
static unsigned long get_freq(struct msm_gpu *gpu)
{
if (gpu->devfreq.idle_freq)
return gpu->devfreq.idle_freq;
if (gpu->funcs->gpu_get_freq)
return gpu->funcs->gpu_get_freq(gpu);
@ -93,6 +80,7 @@ static struct devfreq_dev_profile msm_devfreq_profile = {
.get_cur_freq = msm_devfreq_get_cur_freq,
};
static void msm_devfreq_boost_work(struct kthread_work *work);
static void msm_devfreq_idle_work(struct kthread_work *work);
void msm_devfreq_init(struct msm_gpu *gpu)
@ -103,6 +91,12 @@ void msm_devfreq_init(struct msm_gpu *gpu)
if (!gpu->funcs->gpu_busy)
return;
dev_pm_qos_add_request(&gpu->pdev->dev, &df->idle_freq,
DEV_PM_QOS_MAX_FREQUENCY,
PM_QOS_MAX_FREQUENCY_DEFAULT_VALUE);
dev_pm_qos_add_request(&gpu->pdev->dev, &df->boost_freq,
DEV_PM_QOS_MIN_FREQUENCY, 0);
msm_devfreq_profile.initial_freq = gpu->fast_rate;
/*
@ -133,13 +127,19 @@ void msm_devfreq_init(struct msm_gpu *gpu)
gpu->cooling = NULL;
}
msm_hrtimer_work_init(&df->boost_work, gpu->worker, msm_devfreq_boost_work,
CLOCK_MONOTONIC, HRTIMER_MODE_REL);
msm_hrtimer_work_init(&df->idle_work, gpu->worker, msm_devfreq_idle_work,
CLOCK_MONOTONIC, HRTIMER_MODE_REL);
}
void msm_devfreq_cleanup(struct msm_gpu *gpu)
{
struct msm_gpu_devfreq *df = &gpu->devfreq;
devfreq_cooling_unregister(gpu->cooling);
dev_pm_qos_remove_request(&df->boost_freq);
dev_pm_qos_remove_request(&df->idle_freq);
}
void msm_devfreq_resume(struct msm_gpu *gpu)
@ -155,12 +155,40 @@ void msm_devfreq_suspend(struct msm_gpu *gpu)
devfreq_suspend_device(gpu->devfreq.devfreq);
}
static void msm_devfreq_boost_work(struct kthread_work *work)
{
struct msm_gpu_devfreq *df = container_of(work,
struct msm_gpu_devfreq, boost_work.work);
dev_pm_qos_update_request(&df->boost_freq, 0);
}
void msm_devfreq_boost(struct msm_gpu *gpu, unsigned factor)
{
struct msm_gpu_devfreq *df = &gpu->devfreq;
uint64_t freq;
freq = get_freq(gpu);
freq *= factor;
/*
* A nice little trap is that PM QoS operates in terms of KHz,
* while devfreq operates in terms of Hz:
*/
do_div(freq, HZ_PER_KHZ);
dev_pm_qos_update_request(&df->boost_freq, freq);
msm_hrtimer_queue_work(&df->boost_work,
ms_to_ktime(msm_devfreq_profile.polling_ms),
HRTIMER_MODE_REL);
}
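
A worked example of the unit conversion above, with illustrative numbers: at
a current rate of 300 MHz and factor = 2,

	u64 freq = 300000000ULL * 2;  /* get_freq() = 300 MHz, factor = 2 */
	do_div(freq, HZ_PER_KHZ);     /* freq is now 600000, i.e. 600000 kHz */

so the PM QoS request is expressed in kHz while devfreq itself keeps working
in Hz; boost_work then drops the request back to 0 after one polling
interval.
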
void msm_devfreq_active(struct msm_gpu *gpu)
{
struct msm_gpu_devfreq *df = &gpu->devfreq;
struct devfreq_dev_status status;
unsigned int idle_time;
unsigned long target_freq = df->idle_freq;
if (!df->devfreq)
return;
@ -170,12 +198,6 @@ void msm_devfreq_active(struct msm_gpu *gpu)
*/
hrtimer_cancel(&df->idle_work.timer);
/*
* Hold devfreq lock to synchronize with get_dev_status()/
* target() callbacks
*/
mutex_lock(&df->devfreq->lock);
idle_time = ktime_to_ms(ktime_sub(ktime_get(), df->idle_time));
/*
@ -183,21 +205,18 @@ void msm_devfreq_active(struct msm_gpu *gpu)
* interval, then we won't meet the threshold of busyness for
* the governor to ramp up the freq.. so give some boost
*/
if (idle_time > msm_devfreq_profile.polling_ms/2) {
target_freq *= 2;
if (idle_time > msm_devfreq_profile.polling_ms) {
msm_devfreq_boost(gpu, 2);
}
df->idle_freq = 0;
msm_devfreq_target(&gpu->pdev->dev, &target_freq, 0);
dev_pm_qos_update_request(&df->idle_freq,
PM_QOS_MAX_FREQUENCY_DEFAULT_VALUE);
/*
* Reset the polling interval so we aren't inconsistent
* about freq vs busy/total cycles
*/
msm_devfreq_get_dev_status(&gpu->pdev->dev, &status);
mutex_unlock(&df->devfreq->lock);
}
@ -206,23 +225,11 @@ static void msm_devfreq_idle_work(struct kthread_work *work)
struct msm_gpu_devfreq *df = container_of(work,
struct msm_gpu_devfreq, idle_work.work);
struct msm_gpu *gpu = container_of(df, struct msm_gpu, devfreq);
unsigned long idle_freq, target_freq = 0;
/*
* Hold devfreq lock to synchronize with get_dev_status()/
* target() callbacks
*/
mutex_lock(&df->devfreq->lock);
idle_freq = get_freq(gpu);
if (gpu->clamp_to_idle)
msm_devfreq_target(&gpu->pdev->dev, &target_freq, 0);
df->idle_time = ktime_get();
df->idle_freq = idle_freq;
mutex_unlock(&df->devfreq->lock);
if (gpu->clamp_to_idle)
dev_pm_qos_update_request(&df->idle_freq, 0);
}
void msm_devfreq_idle(struct msm_gpu *gpu)

View file

@ -198,19 +198,22 @@ struct msm_kms *mdp4_kms_init(struct drm_device *dev);
struct msm_kms *mdp5_kms_init(struct drm_device *dev);
struct msm_kms *dpu_kms_init(struct drm_device *dev);
extern const struct of_device_id dpu_dt_match[];
extern const struct of_device_id mdp5_dt_match[];
struct msm_mdss_funcs {
int (*enable)(struct msm_mdss *mdss);
int (*disable)(struct msm_mdss *mdss);
void (*destroy)(struct drm_device *dev);
void (*destroy)(struct msm_mdss *mdss);
};
struct msm_mdss {
struct drm_device *dev;
struct device *dev;
const struct msm_mdss_funcs *funcs;
};
int mdp5_mdss_init(struct drm_device *dev);
int dpu_mdss_init(struct drm_device *dev);
int mdp5_mdss_init(struct platform_device *dev);
int dpu_mdss_init(struct platform_device *dev);
#define for_each_crtc_mask(dev, crtc, crtc_mask) \
drm_for_each_crtc(crtc, dev) \

View file

@ -155,9 +155,12 @@ static int perf_open(struct inode *inode, struct file *file)
struct msm_gpu *gpu = priv->gpu;
int ret = 0;
mutex_lock(&dev->struct_mutex);
if (!gpu)
return -ENODEV;
if (perf->open || !gpu) {
mutex_lock(&gpu->lock);
if (perf->open) {
ret = -EBUSY;
goto out;
}
@ -171,7 +174,7 @@ static int perf_open(struct inode *inode, struct file *file)
perf->next_jiffies = jiffies + SAMPLE_TIME;
out:
mutex_unlock(&dev->struct_mutex);
mutex_unlock(&gpu->lock);
return ret;
}

View file

@ -86,7 +86,7 @@ struct msm_rd_state {
struct msm_gem_submit *submit;
/* fifo access is synchronized on the producer side by
* struct_mutex held by submit code (otherwise we could
* gpu->lock held by submit code (otherwise we could
* end up w/ cmds logged in different order than they
* were executed). And read_lock synchronizes the reads
*/
@ -181,9 +181,12 @@ static int rd_open(struct inode *inode, struct file *file)
uint32_t gpu_id;
int ret = 0;
mutex_lock(&dev->struct_mutex);
if (!gpu)
return -ENODEV;
if (rd->open || !gpu) {
mutex_lock(&gpu->lock);
if (rd->open) {
ret = -EBUSY;
goto out;
}
@ -200,7 +203,7 @@ static int rd_open(struct inode *inode, struct file *file)
rd_write_section(rd, RD_GPU_ID, &gpu_id, sizeof(gpu_id));
out:
mutex_unlock(&dev->struct_mutex);
mutex_unlock(&gpu->lock);
return ret;
}
@ -340,11 +343,10 @@ static void snapshot_buf(struct msm_rd_state *rd,
msm_gem_unlock(&obj->base);
}
/* called under struct_mutex */
/* called under gpu->lock */
void msm_rd_dump_submit(struct msm_rd_state *rd, struct msm_gem_submit *submit,
const char *fmt, ...)
{
struct drm_device *dev = submit->dev;
struct task_struct *task;
char msg[256];
int i, n;
@ -355,7 +357,7 @@ void msm_rd_dump_submit(struct msm_rd_state *rd, struct msm_gem_submit *submit,
/* writing into fifo is serialized by caller, and
* rd->read_lock is used to serialize the reads
*/
WARN_ON(!mutex_is_locked(&dev->struct_mutex));
WARN_ON(!mutex_is_locked(&submit->gpu->lock));
if (fmt) {
va_list args;

View file

@ -21,11 +21,11 @@ static struct dma_fence *msm_job_run(struct drm_sched_job *job)
pm_runtime_get_sync(&gpu->pdev->dev);
/* TODO move submit path over to using a per-ring lock.. */
mutex_lock(&gpu->dev->struct_mutex);
mutex_lock(&gpu->lock);
msm_gpu_submit(gpu, submit);
mutex_unlock(&gpu->dev->struct_mutex);
mutex_unlock(&gpu->lock);
pm_runtime_put(&gpu->pdev->dev);

View file

@ -1783,6 +1783,13 @@ drm_dp_tps3_supported(const u8 dpcd[DP_RECEIVER_CAP_SIZE])
dpcd[DP_MAX_LANE_COUNT] & DP_TPS3_SUPPORTED;
}
static inline bool
drm_dp_max_downspread(const u8 dpcd[DP_RECEIVER_CAP_SIZE])
{
return dpcd[DP_DPCD_REV] >= 0x11 ||
dpcd[DP_MAX_DOWNSPREAD] & DP_MAX_DOWNSPREAD_0_5;
}
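
A sketch of how a DP driver might consume this during link training.
DP_DOWNSPREAD_CTRL and DP_SPREAD_AMP_0_5 are existing DPCD definitions; "aux"
and the surrounding code are illustrative:

	/* Enable 0.5% downspread only when the sink advertises support: */
	u8 spread = drm_dp_max_downspread(dpcd) ? DP_SPREAD_AMP_0_5 : 0;
	drm_dp_dpcd_writeb(aux, DP_DOWNSPREAD_CTRL, spread);
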
static inline bool
drm_dp_tps4_supported(const u8 dpcd[DP_RECEIVER_CAP_SIZE])
{