Merge tag 'nand/for-4.18' of git://git.infradead.org/linux-mtd into mtd/next

Core changes:
- Add Miquel as a NAND maintainer
- Add access mode to the nand_page_io_req struct
- Fix kernel-doc in rawnand.h
- Support bit-wise majority to recover from corrupted ONFI parameter
  pages (see the sketch after this list)
- Stop checking FAIL bit after a SET_FEATURES, as documented in the
  ONFI spec
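
The bit-wise majority recovery mentioned above rebuilds a parameter page from the three redundant copies the chip stores: for each bit position, the value present in the majority of the copies wins. A minimal standalone sketch of the idea in plain C (it mirrors the nand_bit_wise_majority() helper added to nand_base.c further down; everything outside that helper, including the function name here, is illustrative):

#include <stdint.h>
#include <stddef.h>

/*
 * Rebuild one buffer from several corrupted copies: a bit is set in the
 * output if it is set in more than half of the source copies.
 */
static void bitwise_majority(const uint8_t **srcbufs, unsigned int nsrcbufs,
                             uint8_t *dstbuf, size_t bufsize)
{
        size_t i;
        unsigned int j, k;

        for (i = 0; i < bufsize; i++) {
                uint8_t val = 0;

                for (j = 0; j < 8; j++) {
                        unsigned int cnt = 0;

                        for (k = 0; k < nsrcbufs; k++)
                                if (srcbufs[k][i] & (1u << j))
                                        cnt++;

                        if (cnt > nsrcbufs / 2)
                                val |= (uint8_t)(1u << j);
                }

                dstbuf[i] = val;
        }
}

After recovery, the core re-checks the ONFI CRC over the rebuilt page and still aborts if the majority result fails the check (see the nand_base.c hunk below).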

Raw NAND Driver changes:
- Fix and cleanup the error path of many NAND controller drivers
- GPMI:
  * Cleanup/simplification of a few aspects in the driver
  * Take ECC setup specified in the DT into account
- sunxi: remove support for GPIO-based R/B polling
- MTK:
  * Use of_device_get_match_data() instead of of_match_device() (see the
    sketch after this list)
  * Add an entry in MAINTAINERS for this driver
  * Fix nand-ecc-step-size and nand-ecc-strength description in the DT
    bindings doc
- fsl_ifc: fix ->cmdfunc() to read more than one ONFI parameter page
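
For the MTK of_device_get_match_data() item above, a hedged sketch of what the conversion looks like in a probe routine. The real hunks are in mtk_ecc.c and mtk_nand.c further down; the compatible string, caps structure, and function names here are made up for illustration:

#include <linux/errno.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>

/* Hypothetical per-compatible capabilities, for illustration only. */
struct my_caps {
        unsigned int max_ecc_strength;
};

static const struct my_caps example_caps = { .max_ecc_strength = 60 };

static const struct of_device_id my_dt_match[] = {
        { .compatible = "vendor,example-nfc", .data = &example_caps },
        { /* sentinel */ },
};

static int my_probe(struct platform_device *pdev)
{
        const struct my_caps *caps;

        /*
         * Before: of_match_device(my_dt_match, &pdev->dev), a NULL check
         * on the returned of_device_id, then a dereference of its ->data.
         *
         * After: one call that hands back ->data directly, or NULL when
         * the device was not matched against the table.
         */
        caps = of_device_get_match_data(&pdev->dev);
        if (!caps)
                return -ENODEV;

        return 0;
}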

OneNAND driver changes:
- samsung: use dev_get_drvdata() instead of platform_get_drvdata() (see
  the sketch below)
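
For the samsung OneNAND item, a small sketch of the simplification (the real change is in the s3c PM callback hunk below). Dev PM callbacks already receive the struct device, and platform_get_drvdata(pdev) is just dev_get_drvdata(&pdev->dev), so the detour through to_platform_device() adds nothing; the callback name here is illustrative:

#include <linux/device.h>
#include <linux/errno.h>
#include <linux/mtd/mtd.h>

static int my_pm_suspend(struct device *dev)
{
        /*
         * Old: struct platform_device *pdev = to_platform_device(dev);
         *      struct mtd_info *mtd = platform_get_drvdata(pdev);
         */
        struct mtd_info *mtd = dev_get_drvdata(dev);

        /* ... suspend work on mtd ... */
        return mtd ? 0 : -ENODEV;
}
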
Boris Brezillon 2018-06-07 22:52:56 +02:00
commit 6e89b84e28
22 changed files with 329 additions and 417 deletions

View file

@@ -47,6 +47,11 @@ Optional properties:
                        partitions written from Linux with this feature
                        turned on may not be accessible by the BootROM
                        code.
+- nand-ecc-strength: integer representing the number of bits to correct
+                     per ECC step. Needs to be a multiple of 2.
+- nand-ecc-step-size: integer representing the number of data bytes
+                      that are covered by a single ECC step. The driver
+                      supports 512 and 1024.
 The device tree may optionally contain sub-nodes describing partitions of the
 address space. See partition.txt for more detail.

View file

@@ -50,14 +50,19 @@ Optional:
 - nand-on-flash-bbt:   Store BBT on NAND Flash.
 - nand-ecc-mode:       the NAND ecc mode (check driver for supported modes)
 - nand-ecc-step-size:  Number of data bytes covered by a single ECC step.
-                       valid values: 512 and 1024.
+                       valid values:
+                       512 and 1024 on mt2701 and mt2712.
+                       512 only on mt7622.
                        1024 is recommended for large page NANDs.
 - nand-ecc-strength:   Number of bits to correct per ECC step.
-                       The valid values that the controller supports are: 4, 6,
-                       8, 10, 12, 14, 16, 18, 20, 22, 24, 28, 32, 36, 40, 44,
-                       48, 52, 56, 60.
+                       The valid values that each controller supports:
+                       mt2701: 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 28,
+                               32, 36, 40, 44, 48, 52, 56, 60.
+                       mt2712: 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 28,
+                               32, 36, 40, 44, 48, 52, 56, 60, 68, 72, 80.
+                       mt7622: 4, 6, 8, 10, 12, 14, 16.
                        The strength should be calculated as follows:
-                       E = (S - F) * 8 / 14
+                       E = (S - F) * 8 / B
                        S = O / (P / Q)
                        E : nand-ecc-strength.
                        S : spare size per sector.
@@ -66,6 +71,15 @@ Optional:
                        O : oob size.
                        P : page size.
                        Q : nand-ecc-step-size.
+                       B : number of parity bits needed to correct
+                           1 bitflip.
+                           According to MTK NAND controller design,
+                           this number depends on max ecc step size
+                           that MTK NAND controller supports.
+                           If max ecc step size supported is 1024,
+                           then it should be always 14. And if max
+                           ecc step size is 512, then it should be
+                           always 13.
                        If the result does not match any one of the listed
                        choices above, please select the smaller valid value from
                        the list.
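
A hedged worked example of the strength formula above, in C, for a hypothetical geometry: 4096-byte pages, 256-byte OOB, 1024-byte ECC steps, F = 8 reserved (FDM) spare bytes per sector, and B = 14 for a controller whose maximum ECC step size is 1024. All numbers are illustrative and not taken from the binding; the definition line for F is not shown in the excerpt above.

#include <stdio.h>

int main(void)
{
        unsigned int P = 4096;  /* page size */
        unsigned int O = 256;   /* oob size */
        unsigned int Q = 1024;  /* nand-ecc-step-size */
        unsigned int F = 8;     /* assumed FDM (reserved OOB) bytes per sector */
        unsigned int B = 14;    /* parity bits per bitflip, 1024-byte max step */

        unsigned int S = O / (P / Q);           /* 256 / 4 = 64 spare bytes per sector */
        unsigned int E = (S - F) * 8 / B;       /* (64 - 8) * 8 / 14 = 32 */

        printf("nand-ecc-strength = %u\n", E);  /* 32: valid on mt2701 and mt2712 */
        return 0;
}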

View file

@@ -22,8 +22,6 @@ Optional properties:
 - reset : phandle + reset specifier pair
 - reset-names : must contain "ahb"
 - allwinner,rb : shall contain the native Ready/Busy ids.
-                 or
-- rb-gpios : shall contain the gpios used as R/B pins.
 - nand-ecc-mode : one of the supported ECC modes ("hw", "soft", "soft_bch" or
                   "none")

View file

@@ -8941,6 +8941,13 @@ L:	linux-wireless@vger.kernel.org
 S:	Maintained
 F:	drivers/net/wireless/mediatek/mt7601u/
 
+MEDIATEK NAND CONTROLLER DRIVER
+M:	Xiaolei Li <xiaolei.li@mediatek.com>
+L:	linux-mtd@lists.infradead.org
+S:	Maintained
+F:	drivers/mtd/nand/raw/mtk_*
+F:	Documentation/devicetree/bindings/mtd/mtk-nand.txt
+
 MEDIATEK RANDOM NUMBER GENERATOR SUPPORT
 M:	Sean Wang <sean.wang@mediatek.com>
 S:	Maintained
@@ -9574,6 +9581,7 @@ F:	drivers/net/ethernet/myricom/myri10ge/
 
 NAND FLASH SUBSYSTEM
 M:	Boris Brezillon <boris.brezillon@bootlin.com>
+M:	Miquel Raynal <miquel.raynal@bootlin.com>
 R:	Richard Weinberger <richard@nod.at>
 L:	linux-mtd@lists.infradead.org
 W:	http://www.linux-mtd.infradead.org/

View file

@@ -958,8 +958,7 @@ static int s3c_onenand_remove(struct platform_device *pdev)
 static int s3c_pm_ops_suspend(struct device *dev)
 {
-        struct platform_device *pdev = to_platform_device(dev);
-        struct mtd_info *mtd = platform_get_drvdata(pdev);
+        struct mtd_info *mtd = dev_get_drvdata(dev);
         struct onenand_chip *this = mtd->priv;
 
         this->wait(mtd, FL_PM_SUSPENDED);
@@ -968,8 +967,7 @@ static int s3c_pm_ops_suspend(struct device *dev)
 static int s3c_pm_ops_resume(struct device *dev)
 {
-        struct platform_device *pdev = to_platform_device(dev);
-        struct mtd_info *mtd = platform_get_drvdata(pdev);
+        struct mtd_info *mtd = dev_get_drvdata(dev);
         struct onenand_chip *this = mtd->priv;
 
         this->unlock_all(mtd);

View file

@ -27,7 +27,6 @@
#include <linux/module.h> #include <linux/module.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
#include <linux/err.h> #include <linux/err.h>
#include <linux/clk.h>
#include <linux/io.h> #include <linux/io.h>
#include <linux/mtd/rawnand.h> #include <linux/mtd/rawnand.h>
#include <linux/mtd/partitions.h> #include <linux/mtd/partitions.h>
@ -55,7 +54,6 @@ struct davinci_nand_info {
struct nand_chip chip; struct nand_chip chip;
struct device *dev; struct device *dev;
struct clk *clk;
bool is_readmode; bool is_readmode;
@ -703,22 +701,6 @@ static int nand_davinci_probe(struct platform_device *pdev)
/* Use board-specific ECC config */ /* Use board-specific ECC config */
info->chip.ecc.mode = pdata->ecc_mode; info->chip.ecc.mode = pdata->ecc_mode;
ret = -EINVAL;
info->clk = devm_clk_get(&pdev->dev, "aemif");
if (IS_ERR(info->clk)) {
ret = PTR_ERR(info->clk);
dev_dbg(&pdev->dev, "unable to get AEMIF clock, err %d\n", ret);
return ret;
}
ret = clk_prepare_enable(info->clk);
if (ret < 0) {
dev_dbg(&pdev->dev, "unable to enable AEMIF clock, err %d\n",
ret);
goto err_clk_enable;
}
spin_lock_irq(&davinci_nand_lock); spin_lock_irq(&davinci_nand_lock);
/* put CSxNAND into NAND mode */ /* put CSxNAND into NAND mode */
@ -732,7 +714,7 @@ static int nand_davinci_probe(struct platform_device *pdev)
ret = nand_scan_ident(mtd, pdata->mask_chipsel ? 2 : 1, NULL); ret = nand_scan_ident(mtd, pdata->mask_chipsel ? 2 : 1, NULL);
if (ret < 0) { if (ret < 0) {
dev_dbg(&pdev->dev, "no NAND chip(s) found\n"); dev_dbg(&pdev->dev, "no NAND chip(s) found\n");
goto err; return ret;
} }
switch (info->chip.ecc.mode) { switch (info->chip.ecc.mode) {
@ -838,9 +820,6 @@ static int nand_davinci_probe(struct platform_device *pdev)
nand_cleanup(&info->chip); nand_cleanup(&info->chip);
err: err:
clk_disable_unprepare(info->clk);
err_clk_enable:
spin_lock_irq(&davinci_nand_lock); spin_lock_irq(&davinci_nand_lock);
if (info->chip.ecc.mode == NAND_ECC_HW_SYNDROME) if (info->chip.ecc.mode == NAND_ECC_HW_SYNDROME)
ecc4_busy = false; ecc4_busy = false;
@ -859,8 +838,6 @@ static int nand_davinci_remove(struct platform_device *pdev)
nand_release(nand_to_mtd(&info->chip)); nand_release(nand_to_mtd(&info->chip));
clk_disable_unprepare(info->clk);
return 0; return 0;
} }

View file

@@ -1481,12 +1481,12 @@ static int __init doc_probe(unsigned long physadr)
                 WriteDOC(tmp, virtadr, Mplus_DOCControl);
                 WriteDOC(~tmp, virtadr, Mplus_CtrlConfirm);
-                mdelay(1);
+                usleep_range(1000, 2000);
 
                 /* Enable the Millennium Plus ASIC */
                 tmp = DOC_MODE_NORMAL | DOC_MODE_MDWREN | DOC_MODE_RST_LAT | DOC_MODE_BDECT;
                 WriteDOC(tmp, virtadr, Mplus_DOCControl);
                 WriteDOC(~tmp, virtadr, Mplus_CtrlConfirm);
-                mdelay(1);
+                usleep_range(1000, 2000);
 
                 ChipID = ReadDOC(virtadr, ChipID);

View file

@ -813,8 +813,6 @@ static int fsl_elbc_chip_remove(struct fsl_elbc_mtd *priv)
struct fsl_elbc_fcm_ctrl *elbc_fcm_ctrl = priv->ctrl->nand; struct fsl_elbc_fcm_ctrl *elbc_fcm_ctrl = priv->ctrl->nand;
struct mtd_info *mtd = nand_to_mtd(&priv->chip); struct mtd_info *mtd = nand_to_mtd(&priv->chip);
nand_release(mtd);
kfree(mtd->name); kfree(mtd->name);
if (priv->vbase) if (priv->vbase)
@ -926,15 +924,20 @@ static int fsl_elbc_nand_probe(struct platform_device *pdev)
/* First look for RedBoot table or partitions on the command /* First look for RedBoot table or partitions on the command
* line, these take precedence over device tree information */ * line, these take precedence over device tree information */
mtd_device_parse_register(mtd, part_probe_types, NULL, ret = mtd_device_parse_register(mtd, part_probe_types, NULL, NULL, 0);
NULL, 0); if (ret)
goto cleanup_nand;
pr_info("eLBC NAND device at 0x%llx, bank %d\n", pr_info("eLBC NAND device at 0x%llx, bank %d\n",
(unsigned long long)res.start, priv->bank); (unsigned long long)res.start, priv->bank);
return 0; return 0;
cleanup_nand:
nand_cleanup(&priv->chip);
err: err:
fsl_elbc_chip_remove(priv); fsl_elbc_chip_remove(priv);
return ret; return ret;
} }
@ -942,7 +945,9 @@ static int fsl_elbc_nand_remove(struct platform_device *pdev)
{ {
struct fsl_elbc_fcm_ctrl *elbc_fcm_ctrl = fsl_lbc_ctrl_dev->nand; struct fsl_elbc_fcm_ctrl *elbc_fcm_ctrl = fsl_lbc_ctrl_dev->nand;
struct fsl_elbc_mtd *priv = dev_get_drvdata(&pdev->dev); struct fsl_elbc_mtd *priv = dev_get_drvdata(&pdev->dev);
struct mtd_info *mtd = nand_to_mtd(&priv->chip);
nand_release(mtd);
fsl_elbc_chip_remove(priv); fsl_elbc_chip_remove(priv);
mutex_lock(&fsl_elbc_nand_mutex); mutex_lock(&fsl_elbc_nand_mutex);

View file

@ -342,9 +342,16 @@ static void fsl_ifc_cmdfunc(struct mtd_info *mtd, unsigned int command,
case NAND_CMD_READID: case NAND_CMD_READID:
case NAND_CMD_PARAM: { case NAND_CMD_PARAM: {
/*
* For READID, read 8 bytes that are currently used.
* For PARAM, read all 3 copies of 256-bytes pages.
*/
int len = 8;
int timing = IFC_FIR_OP_RB; int timing = IFC_FIR_OP_RB;
if (command == NAND_CMD_PARAM) if (command == NAND_CMD_PARAM) {
timing = IFC_FIR_OP_RBCD; timing = IFC_FIR_OP_RBCD;
len = 256 * 3;
}
ifc_out32((IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) | ifc_out32((IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
(IFC_FIR_OP_UA << IFC_NAND_FIR0_OP1_SHIFT) | (IFC_FIR_OP_UA << IFC_NAND_FIR0_OP1_SHIFT) |
@ -354,12 +361,8 @@ static void fsl_ifc_cmdfunc(struct mtd_info *mtd, unsigned int command,
&ifc->ifc_nand.nand_fcr0); &ifc->ifc_nand.nand_fcr0);
ifc_out32(column, &ifc->ifc_nand.row3); ifc_out32(column, &ifc->ifc_nand.row3);
/* ifc_out32(len, &ifc->ifc_nand.nand_fbcr);
* although currently it's 8 bytes for READID, we always read ifc_nand_ctrl->read_bytes = len;
* the maximum 256 bytes(for PARAM)
*/
ifc_out32(256, &ifc->ifc_nand.nand_fbcr);
ifc_nand_ctrl->read_bytes = 256;
set_addr(mtd, 0, 0, 0); set_addr(mtd, 0, 0, 0);
fsl_ifc_run_command(mtd); fsl_ifc_run_command(mtd);
@ -924,8 +927,6 @@ static int fsl_ifc_chip_remove(struct fsl_ifc_mtd *priv)
{ {
struct mtd_info *mtd = nand_to_mtd(&priv->chip); struct mtd_info *mtd = nand_to_mtd(&priv->chip);
nand_release(mtd);
kfree(mtd->name); kfree(mtd->name);
if (priv->vbase) if (priv->vbase)
@ -1059,21 +1060,29 @@ static int fsl_ifc_nand_probe(struct platform_device *dev)
/* First look for RedBoot table or partitions on the command /* First look for RedBoot table or partitions on the command
* line, these take precedence over device tree information */ * line, these take precedence over device tree information */
mtd_device_parse_register(mtd, part_probe_types, NULL, NULL, 0); ret = mtd_device_parse_register(mtd, part_probe_types, NULL, NULL, 0);
if (ret)
goto cleanup_nand;
dev_info(priv->dev, "IFC NAND device at 0x%llx, bank %d\n", dev_info(priv->dev, "IFC NAND device at 0x%llx, bank %d\n",
(unsigned long long)res.start, priv->bank); (unsigned long long)res.start, priv->bank);
return 0; return 0;
cleanup_nand:
nand_cleanup(&priv->chip);
err: err:
fsl_ifc_chip_remove(priv); fsl_ifc_chip_remove(priv);
return ret; return ret;
} }
static int fsl_ifc_nand_remove(struct platform_device *dev) static int fsl_ifc_nand_remove(struct platform_device *dev)
{ {
struct fsl_ifc_mtd *priv = dev_get_drvdata(&dev->dev); struct fsl_ifc_mtd *priv = dev_get_drvdata(&dev->dev);
struct mtd_info *mtd = nand_to_mtd(&priv->chip);
nand_release(mtd);
fsl_ifc_chip_remove(priv); fsl_ifc_chip_remove(priv);
mutex_lock(&fsl_ifc_nand_mutex); mutex_lock(&fsl_ifc_nand_mutex);

View file

@ -1022,12 +1022,12 @@ static int __init fsmc_nand_probe(struct platform_device *pdev)
host->read_dma_chan = dma_request_channel(mask, filter, NULL); host->read_dma_chan = dma_request_channel(mask, filter, NULL);
if (!host->read_dma_chan) { if (!host->read_dma_chan) {
dev_err(&pdev->dev, "Unable to get read dma channel\n"); dev_err(&pdev->dev, "Unable to get read dma channel\n");
goto err_req_read_chnl; goto disable_clk;
} }
host->write_dma_chan = dma_request_channel(mask, filter, NULL); host->write_dma_chan = dma_request_channel(mask, filter, NULL);
if (!host->write_dma_chan) { if (!host->write_dma_chan) {
dev_err(&pdev->dev, "Unable to get write dma channel\n"); dev_err(&pdev->dev, "Unable to get write dma channel\n");
goto err_req_write_chnl; goto release_dma_read_chan;
} }
} }
@ -1050,7 +1050,7 @@ static int __init fsmc_nand_probe(struct platform_device *pdev)
ret = nand_scan_ident(mtd, 1, NULL); ret = nand_scan_ident(mtd, 1, NULL);
if (ret) { if (ret) {
dev_err(&pdev->dev, "No NAND Device found!\n"); dev_err(&pdev->dev, "No NAND Device found!\n");
goto err_scan_ident; goto release_dma_write_chan;
} }
if (AMBA_REV_BITS(host->pid) >= 8) { if (AMBA_REV_BITS(host->pid) >= 8) {
@ -1065,7 +1065,7 @@ static int __init fsmc_nand_probe(struct platform_device *pdev)
dev_warn(&pdev->dev, "No oob scheme defined for oobsize %d\n", dev_warn(&pdev->dev, "No oob scheme defined for oobsize %d\n",
mtd->oobsize); mtd->oobsize);
ret = -EINVAL; ret = -EINVAL;
goto err_probe; goto release_dma_write_chan;
} }
mtd_set_ooblayout(mtd, &fsmc_ecc4_ooblayout_ops); mtd_set_ooblayout(mtd, &fsmc_ecc4_ooblayout_ops);
@ -1090,7 +1090,7 @@ static int __init fsmc_nand_probe(struct platform_device *pdev)
default: default:
dev_err(&pdev->dev, "Unsupported ECC mode!\n"); dev_err(&pdev->dev, "Unsupported ECC mode!\n");
goto err_probe; goto release_dma_write_chan;
} }
/* /*
@ -1110,7 +1110,7 @@ static int __init fsmc_nand_probe(struct platform_device *pdev)
"No oob scheme defined for oobsize %d\n", "No oob scheme defined for oobsize %d\n",
mtd->oobsize); mtd->oobsize);
ret = -EINVAL; ret = -EINVAL;
goto err_probe; goto release_dma_write_chan;
} }
} }
} }
@ -1118,26 +1118,29 @@ static int __init fsmc_nand_probe(struct platform_device *pdev)
/* Second stage of scan to fill MTD data-structures */ /* Second stage of scan to fill MTD data-structures */
ret = nand_scan_tail(mtd); ret = nand_scan_tail(mtd);
if (ret) if (ret)
goto err_probe; goto release_dma_write_chan;
mtd->name = "nand"; mtd->name = "nand";
ret = mtd_device_register(mtd, NULL, 0); ret = mtd_device_register(mtd, NULL, 0);
if (ret) if (ret)
goto err_probe; goto cleanup_nand;
platform_set_drvdata(pdev, host); platform_set_drvdata(pdev, host);
dev_info(&pdev->dev, "FSMC NAND driver registration successful\n"); dev_info(&pdev->dev, "FSMC NAND driver registration successful\n");
return 0; return 0;
err_probe: cleanup_nand:
err_scan_ident: nand_cleanup(nand);
release_dma_write_chan:
if (host->mode == USE_DMA_ACCESS) if (host->mode == USE_DMA_ACCESS)
dma_release_channel(host->write_dma_chan); dma_release_channel(host->write_dma_chan);
err_req_write_chnl: release_dma_read_chan:
if (host->mode == USE_DMA_ACCESS) if (host->mode == USE_DMA_ACCESS)
dma_release_channel(host->read_dma_chan); dma_release_channel(host->read_dma_chan);
err_req_read_chnl: disable_clk:
clk_disable_unprepare(host->clk); clk_disable_unprepare(host->clk);
return ret; return ret;
} }

View file

@ -258,8 +258,9 @@ int bch_set_geometry(struct gpmi_nand_data *this)
unsigned int gf_len; unsigned int gf_len;
int ret; int ret;
if (common_nfc_set_geometry(this)) ret = common_nfc_set_geometry(this);
return !0; if (ret)
return ret;
block_count = bch_geo->ecc_chunk_count - 1; block_count = bch_geo->ecc_chunk_count - 1;
block_size = bch_geo->ecc_chunk_size; block_size = bch_geo->ecc_chunk_size;
@ -544,19 +545,13 @@ int gpmi_is_ready(struct gpmi_nand_data *this, unsigned chip)
return reg & mask; return reg & mask;
} }
static inline void set_dma_type(struct gpmi_nand_data *this,
enum dma_ops_type type)
{
this->last_dma_type = this->dma_type;
this->dma_type = type;
}
int gpmi_send_command(struct gpmi_nand_data *this) int gpmi_send_command(struct gpmi_nand_data *this)
{ {
struct dma_chan *channel = get_dma_chan(this); struct dma_chan *channel = get_dma_chan(this);
struct dma_async_tx_descriptor *desc; struct dma_async_tx_descriptor *desc;
struct scatterlist *sgl; struct scatterlist *sgl;
int chip = this->current_chip; int chip = this->current_chip;
int ret;
u32 pio[3]; u32 pio[3];
/* [1] send out the PIO words */ /* [1] send out the PIO words */
@ -586,15 +581,19 @@ int gpmi_send_command(struct gpmi_nand_data *this)
return -EINVAL; return -EINVAL;
/* [3] submit the DMA */ /* [3] submit the DMA */
set_dma_type(this, DMA_FOR_COMMAND); ret = start_dma_without_bch_irq(this, desc);
return start_dma_without_bch_irq(this, desc);
dma_unmap_sg(this->dev, sgl, 1, DMA_TO_DEVICE);
return ret;
} }
int gpmi_send_data(struct gpmi_nand_data *this) int gpmi_send_data(struct gpmi_nand_data *this, const void *buf, int len)
{ {
struct dma_async_tx_descriptor *desc; struct dma_async_tx_descriptor *desc;
struct dma_chan *channel = get_dma_chan(this); struct dma_chan *channel = get_dma_chan(this);
int chip = this->current_chip; int chip = this->current_chip;
int ret;
uint32_t command_mode; uint32_t command_mode;
uint32_t address; uint32_t address;
u32 pio[2]; u32 pio[2];
@ -608,7 +607,7 @@ int gpmi_send_data(struct gpmi_nand_data *this)
| BF_GPMI_CTRL0_CS(chip, this) | BF_GPMI_CTRL0_CS(chip, this)
| BF_GPMI_CTRL0_LOCK_CS(LOCK_CS_ENABLE, this) | BF_GPMI_CTRL0_LOCK_CS(LOCK_CS_ENABLE, this)
| BF_GPMI_CTRL0_ADDRESS(address) | BF_GPMI_CTRL0_ADDRESS(address)
| BF_GPMI_CTRL0_XFER_COUNT(this->upper_len); | BF_GPMI_CTRL0_XFER_COUNT(len);
pio[1] = 0; pio[1] = 0;
desc = dmaengine_prep_slave_sg(channel, (struct scatterlist *)pio, desc = dmaengine_prep_slave_sg(channel, (struct scatterlist *)pio,
ARRAY_SIZE(pio), DMA_TRANS_NONE, 0); ARRAY_SIZE(pio), DMA_TRANS_NONE, 0);
@ -616,7 +615,7 @@ int gpmi_send_data(struct gpmi_nand_data *this)
return -EINVAL; return -EINVAL;
/* [2] send DMA request */ /* [2] send DMA request */
prepare_data_dma(this, DMA_TO_DEVICE); prepare_data_dma(this, buf, len, DMA_TO_DEVICE);
desc = dmaengine_prep_slave_sg(channel, &this->data_sgl, desc = dmaengine_prep_slave_sg(channel, &this->data_sgl,
1, DMA_MEM_TO_DEV, 1, DMA_MEM_TO_DEV,
DMA_PREP_INTERRUPT | DMA_CTRL_ACK); DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
@ -624,16 +623,21 @@ int gpmi_send_data(struct gpmi_nand_data *this)
return -EINVAL; return -EINVAL;
/* [3] submit the DMA */ /* [3] submit the DMA */
set_dma_type(this, DMA_FOR_WRITE_DATA); ret = start_dma_without_bch_irq(this, desc);
return start_dma_without_bch_irq(this, desc);
dma_unmap_sg(this->dev, &this->data_sgl, 1, DMA_TO_DEVICE);
return ret;
} }
int gpmi_read_data(struct gpmi_nand_data *this) int gpmi_read_data(struct gpmi_nand_data *this, void *buf, int len)
{ {
struct dma_async_tx_descriptor *desc; struct dma_async_tx_descriptor *desc;
struct dma_chan *channel = get_dma_chan(this); struct dma_chan *channel = get_dma_chan(this);
int chip = this->current_chip; int chip = this->current_chip;
int ret;
u32 pio[2]; u32 pio[2];
bool direct;
/* [1] : send PIO */ /* [1] : send PIO */
pio[0] = BF_GPMI_CTRL0_COMMAND_MODE(BV_GPMI_CTRL0_COMMAND_MODE__READ) pio[0] = BF_GPMI_CTRL0_COMMAND_MODE(BV_GPMI_CTRL0_COMMAND_MODE__READ)
@ -641,7 +645,7 @@ int gpmi_read_data(struct gpmi_nand_data *this)
| BF_GPMI_CTRL0_CS(chip, this) | BF_GPMI_CTRL0_CS(chip, this)
| BF_GPMI_CTRL0_LOCK_CS(LOCK_CS_ENABLE, this) | BF_GPMI_CTRL0_LOCK_CS(LOCK_CS_ENABLE, this)
| BF_GPMI_CTRL0_ADDRESS(BV_GPMI_CTRL0_ADDRESS__NAND_DATA) | BF_GPMI_CTRL0_ADDRESS(BV_GPMI_CTRL0_ADDRESS__NAND_DATA)
| BF_GPMI_CTRL0_XFER_COUNT(this->upper_len); | BF_GPMI_CTRL0_XFER_COUNT(len);
pio[1] = 0; pio[1] = 0;
desc = dmaengine_prep_slave_sg(channel, desc = dmaengine_prep_slave_sg(channel,
(struct scatterlist *)pio, (struct scatterlist *)pio,
@ -650,7 +654,7 @@ int gpmi_read_data(struct gpmi_nand_data *this)
return -EINVAL; return -EINVAL;
/* [2] : send DMA request */ /* [2] : send DMA request */
prepare_data_dma(this, DMA_FROM_DEVICE); direct = prepare_data_dma(this, buf, len, DMA_FROM_DEVICE);
desc = dmaengine_prep_slave_sg(channel, &this->data_sgl, desc = dmaengine_prep_slave_sg(channel, &this->data_sgl,
1, DMA_DEV_TO_MEM, 1, DMA_DEV_TO_MEM,
DMA_PREP_INTERRUPT | DMA_CTRL_ACK); DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
@ -658,8 +662,14 @@ int gpmi_read_data(struct gpmi_nand_data *this)
return -EINVAL; return -EINVAL;
/* [3] : submit the DMA */ /* [3] : submit the DMA */
set_dma_type(this, DMA_FOR_READ_DATA);
return start_dma_without_bch_irq(this, desc); ret = start_dma_without_bch_irq(this, desc);
dma_unmap_sg(this->dev, &this->data_sgl, 1, DMA_FROM_DEVICE);
if (!direct)
memcpy(buf, this->data_buffer_dma, len);
return ret;
} }
int gpmi_send_page(struct gpmi_nand_data *this, int gpmi_send_page(struct gpmi_nand_data *this,
@ -703,7 +713,6 @@ int gpmi_send_page(struct gpmi_nand_data *this,
if (!desc) if (!desc)
return -EINVAL; return -EINVAL;
set_dma_type(this, DMA_FOR_WRITE_ECC_PAGE);
return start_dma_with_bch_irq(this, desc); return start_dma_with_bch_irq(this, desc);
} }
@ -785,7 +794,6 @@ int gpmi_read_page(struct gpmi_nand_data *this,
return -EINVAL; return -EINVAL;
/* [4] submit the DMA */ /* [4] submit the DMA */
set_dma_type(this, DMA_FOR_READ_ECC_PAGE);
return start_dma_with_bch_irq(this, desc); return start_dma_with_bch_irq(this, desc);
} }

View file

@ -198,17 +198,16 @@ static inline bool gpmi_check_ecc(struct gpmi_nand_data *this)
* *
* We may have available oob space in this case. * We may have available oob space in this case.
*/ */
static int set_geometry_by_ecc_info(struct gpmi_nand_data *this) static int set_geometry_by_ecc_info(struct gpmi_nand_data *this,
unsigned int ecc_strength,
unsigned int ecc_step)
{ {
struct bch_geometry *geo = &this->bch_geometry; struct bch_geometry *geo = &this->bch_geometry;
struct nand_chip *chip = &this->nand; struct nand_chip *chip = &this->nand;
struct mtd_info *mtd = nand_to_mtd(chip); struct mtd_info *mtd = nand_to_mtd(chip);
unsigned int block_mark_bit_offset; unsigned int block_mark_bit_offset;
if (!(chip->ecc_strength_ds > 0 && chip->ecc_step_ds > 0)) switch (ecc_step) {
return -EINVAL;
switch (chip->ecc_step_ds) {
case SZ_512: case SZ_512:
geo->gf_len = 13; geo->gf_len = 13;
break; break;
@ -221,8 +220,8 @@ static int set_geometry_by_ecc_info(struct gpmi_nand_data *this)
chip->ecc_strength_ds, chip->ecc_step_ds); chip->ecc_strength_ds, chip->ecc_step_ds);
return -EINVAL; return -EINVAL;
} }
geo->ecc_chunk_size = chip->ecc_step_ds; geo->ecc_chunk_size = ecc_step;
geo->ecc_strength = round_up(chip->ecc_strength_ds, 2); geo->ecc_strength = round_up(ecc_strength, 2);
if (!gpmi_check_ecc(this)) if (!gpmi_check_ecc(this))
return -EINVAL; return -EINVAL;
@ -230,7 +229,7 @@ static int set_geometry_by_ecc_info(struct gpmi_nand_data *this)
if (geo->ecc_chunk_size < mtd->oobsize) { if (geo->ecc_chunk_size < mtd->oobsize) {
dev_err(this->dev, dev_err(this->dev,
"unsupported nand chip. ecc size: %d, oob size : %d\n", "unsupported nand chip. ecc size: %d, oob size : %d\n",
chip->ecc_step_ds, mtd->oobsize); ecc_step, mtd->oobsize);
return -EINVAL; return -EINVAL;
} }
@ -423,9 +422,20 @@ static int legacy_set_geometry(struct gpmi_nand_data *this)
int common_nfc_set_geometry(struct gpmi_nand_data *this) int common_nfc_set_geometry(struct gpmi_nand_data *this)
{ {
struct nand_chip *chip = &this->nand;
if (chip->ecc.strength > 0 && chip->ecc.size > 0)
return set_geometry_by_ecc_info(this, chip->ecc.strength,
chip->ecc.size);
if ((of_property_read_bool(this->dev->of_node, "fsl,use-minimum-ecc")) if ((of_property_read_bool(this->dev->of_node, "fsl,use-minimum-ecc"))
|| legacy_set_geometry(this)) || legacy_set_geometry(this)) {
return set_geometry_by_ecc_info(this); if (!(chip->ecc_strength_ds > 0 && chip->ecc_step_ds > 0))
return -EINVAL;
return set_geometry_by_ecc_info(this, chip->ecc_strength_ds,
chip->ecc_step_ds);
}
return 0; return 0;
} }
@ -437,33 +447,32 @@ struct dma_chan *get_dma_chan(struct gpmi_nand_data *this)
} }
/* Can we use the upper's buffer directly for DMA? */ /* Can we use the upper's buffer directly for DMA? */
void prepare_data_dma(struct gpmi_nand_data *this, enum dma_data_direction dr) bool prepare_data_dma(struct gpmi_nand_data *this, const void *buf, int len,
enum dma_data_direction dr)
{ {
struct scatterlist *sgl = &this->data_sgl; struct scatterlist *sgl = &this->data_sgl;
int ret; int ret;
/* first try to map the upper buffer directly */ /* first try to map the upper buffer directly */
if (virt_addr_valid(this->upper_buf) && if (virt_addr_valid(buf) && !object_is_on_stack(buf)) {
!object_is_on_stack(this->upper_buf)) { sg_init_one(sgl, buf, len);
sg_init_one(sgl, this->upper_buf, this->upper_len);
ret = dma_map_sg(this->dev, sgl, 1, dr); ret = dma_map_sg(this->dev, sgl, 1, dr);
if (ret == 0) if (ret == 0)
goto map_fail; goto map_fail;
this->direct_dma_map_ok = true; return true;
return;
} }
map_fail: map_fail:
/* We have to use our own DMA buffer. */ /* We have to use our own DMA buffer. */
sg_init_one(sgl, this->data_buffer_dma, this->upper_len); sg_init_one(sgl, this->data_buffer_dma, len);
if (dr == DMA_TO_DEVICE) if (dr == DMA_TO_DEVICE)
memcpy(this->data_buffer_dma, this->upper_buf, this->upper_len); memcpy(this->data_buffer_dma, buf, len);
dma_map_sg(this->dev, sgl, 1, dr); dma_map_sg(this->dev, sgl, 1, dr);
this->direct_dma_map_ok = false; return false;
} }
/* This will be called after the DMA operation is finished. */ /* This will be called after the DMA operation is finished. */
@ -472,31 +481,6 @@ static void dma_irq_callback(void *param)
struct gpmi_nand_data *this = param; struct gpmi_nand_data *this = param;
struct completion *dma_c = &this->dma_done; struct completion *dma_c = &this->dma_done;
switch (this->dma_type) {
case DMA_FOR_COMMAND:
dma_unmap_sg(this->dev, &this->cmd_sgl, 1, DMA_TO_DEVICE);
break;
case DMA_FOR_READ_DATA:
dma_unmap_sg(this->dev, &this->data_sgl, 1, DMA_FROM_DEVICE);
if (this->direct_dma_map_ok == false)
memcpy(this->upper_buf, this->data_buffer_dma,
this->upper_len);
break;
case DMA_FOR_WRITE_DATA:
dma_unmap_sg(this->dev, &this->data_sgl, 1, DMA_TO_DEVICE);
break;
case DMA_FOR_READ_ECC_PAGE:
case DMA_FOR_WRITE_ECC_PAGE:
/* We have to wait the BCH interrupt to finish. */
break;
default:
dev_err(this->dev, "in wrong DMA operation.\n");
}
complete(dma_c); complete(dma_c);
} }
@ -516,8 +500,7 @@ int start_dma_without_bch_irq(struct gpmi_nand_data *this,
/* Wait for the interrupt from the DMA block. */ /* Wait for the interrupt from the DMA block. */
timeout = wait_for_completion_timeout(dma_c, msecs_to_jiffies(1000)); timeout = wait_for_completion_timeout(dma_c, msecs_to_jiffies(1000));
if (!timeout) { if (!timeout) {
dev_err(this->dev, "DMA timeout, last DMA :%d\n", dev_err(this->dev, "DMA timeout, last DMA\n");
this->last_dma_type);
gpmi_dump_info(this); gpmi_dump_info(this);
return -ETIMEDOUT; return -ETIMEDOUT;
} }
@ -546,8 +529,7 @@ int start_dma_with_bch_irq(struct gpmi_nand_data *this,
/* Wait for the interrupt from the BCH block. */ /* Wait for the interrupt from the BCH block. */
timeout = wait_for_completion_timeout(bch_c, msecs_to_jiffies(1000)); timeout = wait_for_completion_timeout(bch_c, msecs_to_jiffies(1000));
if (!timeout) { if (!timeout) {
dev_err(this->dev, "BCH timeout, last DMA :%d\n", dev_err(this->dev, "BCH timeout\n");
this->last_dma_type);
gpmi_dump_info(this); gpmi_dump_info(this);
return -ETIMEDOUT; return -ETIMEDOUT;
} }
@ -695,56 +677,6 @@ static void release_resources(struct gpmi_nand_data *this)
release_dma_channels(this); release_dma_channels(this);
} }
static int read_page_prepare(struct gpmi_nand_data *this,
void *destination, unsigned length,
void *alt_virt, dma_addr_t alt_phys, unsigned alt_size,
void **use_virt, dma_addr_t *use_phys)
{
struct device *dev = this->dev;
if (virt_addr_valid(destination)) {
dma_addr_t dest_phys;
dest_phys = dma_map_single(dev, destination,
length, DMA_FROM_DEVICE);
if (dma_mapping_error(dev, dest_phys)) {
if (alt_size < length) {
dev_err(dev, "Alternate buffer is too small\n");
return -ENOMEM;
}
goto map_failed;
}
*use_virt = destination;
*use_phys = dest_phys;
this->direct_dma_map_ok = true;
return 0;
}
map_failed:
*use_virt = alt_virt;
*use_phys = alt_phys;
this->direct_dma_map_ok = false;
return 0;
}
static inline void read_page_end(struct gpmi_nand_data *this,
void *destination, unsigned length,
void *alt_virt, dma_addr_t alt_phys, unsigned alt_size,
void *used_virt, dma_addr_t used_phys)
{
if (this->direct_dma_map_ok)
dma_unmap_single(this->dev, used_phys, length, DMA_FROM_DEVICE);
}
static inline void read_page_swap_end(struct gpmi_nand_data *this,
void *destination, unsigned length,
void *alt_virt, dma_addr_t alt_phys, unsigned alt_size,
void *used_virt, dma_addr_t used_phys)
{
if (!this->direct_dma_map_ok)
memcpy(destination, alt_virt, length);
}
static int send_page_prepare(struct gpmi_nand_data *this, static int send_page_prepare(struct gpmi_nand_data *this,
const void *source, unsigned length, const void *source, unsigned length,
void *alt_virt, dma_addr_t alt_phys, unsigned alt_size, void *alt_virt, dma_addr_t alt_phys, unsigned alt_size,
@ -946,10 +878,8 @@ static void gpmi_read_buf(struct mtd_info *mtd, uint8_t *buf, int len)
struct gpmi_nand_data *this = nand_get_controller_data(chip); struct gpmi_nand_data *this = nand_get_controller_data(chip);
dev_dbg(this->dev, "len is %d\n", len); dev_dbg(this->dev, "len is %d\n", len);
this->upper_buf = buf;
this->upper_len = len;
gpmi_read_data(this); gpmi_read_data(this, buf, len);
} }
static void gpmi_write_buf(struct mtd_info *mtd, const uint8_t *buf, int len) static void gpmi_write_buf(struct mtd_info *mtd, const uint8_t *buf, int len)
@ -958,10 +888,8 @@ static void gpmi_write_buf(struct mtd_info *mtd, const uint8_t *buf, int len)
struct gpmi_nand_data *this = nand_get_controller_data(chip); struct gpmi_nand_data *this = nand_get_controller_data(chip);
dev_dbg(this->dev, "len is %d\n", len); dev_dbg(this->dev, "len is %d\n", len);
this->upper_buf = (uint8_t *)buf;
this->upper_len = len;
gpmi_send_data(this); gpmi_send_data(this, buf, len);
} }
static uint8_t gpmi_read_byte(struct mtd_info *mtd) static uint8_t gpmi_read_byte(struct mtd_info *mtd)
@ -1031,44 +959,46 @@ static int gpmi_ecc_read_page_data(struct nand_chip *chip,
struct mtd_info *mtd = nand_to_mtd(chip); struct mtd_info *mtd = nand_to_mtd(chip);
void *payload_virt; void *payload_virt;
dma_addr_t payload_phys; dma_addr_t payload_phys;
void *auxiliary_virt;
dma_addr_t auxiliary_phys;
unsigned int i; unsigned int i;
unsigned char *status; unsigned char *status;
unsigned int max_bitflips = 0; unsigned int max_bitflips = 0;
int ret; int ret;
bool direct = false;
dev_dbg(this->dev, "page number is : %d\n", page); dev_dbg(this->dev, "page number is : %d\n", page);
ret = read_page_prepare(this, buf, nfc_geo->payload_size,
this->payload_virt, this->payload_phys, payload_virt = this->payload_virt;
nfc_geo->payload_size, payload_phys = this->payload_phys;
&payload_virt, &payload_phys);
if (ret) { if (virt_addr_valid(buf)) {
dev_err(this->dev, "Inadequate DMA buffer\n"); dma_addr_t dest_phys;
ret = -ENOMEM;
return ret; dest_phys = dma_map_single(this->dev, buf, nfc_geo->payload_size,
DMA_FROM_DEVICE);
if (!dma_mapping_error(this->dev, dest_phys)) {
payload_virt = buf;
payload_phys = dest_phys;
direct = true;
}
} }
auxiliary_virt = this->auxiliary_virt;
auxiliary_phys = this->auxiliary_phys;
/* go! */ /* go! */
ret = gpmi_read_page(this, payload_phys, auxiliary_phys); ret = gpmi_read_page(this, payload_phys, this->auxiliary_phys);
read_page_end(this, buf, nfc_geo->payload_size,
this->payload_virt, this->payload_phys, if (direct)
nfc_geo->payload_size, dma_unmap_single(this->dev, payload_phys, nfc_geo->payload_size,
payload_virt, payload_phys); DMA_FROM_DEVICE);
if (ret) { if (ret) {
dev_err(this->dev, "Error in ECC-based read: %d\n", ret); dev_err(this->dev, "Error in ECC-based read: %d\n", ret);
return ret; return ret;
} }
/* Loop over status bytes, accumulating ECC status. */ /* Loop over status bytes, accumulating ECC status. */
status = auxiliary_virt + nfc_geo->auxiliary_status_offset; status = this->auxiliary_virt + nfc_geo->auxiliary_status_offset;
read_page_swap_end(this, buf, nfc_geo->payload_size, if (!direct)
this->payload_virt, this->payload_phys, memcpy(buf, this->payload_virt, nfc_geo->payload_size);
nfc_geo->payload_size,
payload_virt, payload_phys);
for (i = 0; i < nfc_geo->ecc_chunk_count; i++, status++) { for (i = 0; i < nfc_geo->ecc_chunk_count; i++, status++) {
if ((*status == STATUS_GOOD) || (*status == STATUS_ERASED)) if ((*status == STATUS_GOOD) || (*status == STATUS_ERASED))
@ -1123,7 +1053,7 @@ static int gpmi_ecc_read_page_data(struct nand_chip *chip,
buf + i * nfc_geo->ecc_chunk_size, buf + i * nfc_geo->ecc_chunk_size,
nfc_geo->ecc_chunk_size, nfc_geo->ecc_chunk_size,
eccbuf, eccbytes, eccbuf, eccbytes,
auxiliary_virt, this->auxiliary_virt,
nfc_geo->metadata_size, nfc_geo->metadata_size,
nfc_geo->ecc_strength); nfc_geo->ecc_strength);
} else { } else {
@ -1151,7 +1081,7 @@ static int gpmi_ecc_read_page_data(struct nand_chip *chip,
} }
/* handle the block mark swapping */ /* handle the block mark swapping */
block_mark_swapping(this, buf, auxiliary_virt); block_mark_swapping(this, buf, this->auxiliary_virt);
if (oob_required) { if (oob_required) {
/* /*
@ -1165,7 +1095,7 @@ static int gpmi_ecc_read_page_data(struct nand_chip *chip,
* the block mark. * the block mark.
*/ */
memset(chip->oob_poi, ~0, mtd->oobsize); memset(chip->oob_poi, ~0, mtd->oobsize);
chip->oob_poi[0] = ((uint8_t *) auxiliary_virt)[0]; chip->oob_poi[0] = ((uint8_t *)this->auxiliary_virt)[0];
} }
return max_bitflips; return max_bitflips;

View file

@ -77,15 +77,6 @@ struct boot_rom_geometry {
unsigned int search_area_stride_exponent; unsigned int search_area_stride_exponent;
}; };
/* DMA operations types */
enum dma_ops_type {
DMA_FOR_COMMAND = 1,
DMA_FOR_READ_DATA,
DMA_FOR_WRITE_DATA,
DMA_FOR_READ_ECC_PAGE,
DMA_FOR_WRITE_ECC_PAGE
};
enum gpmi_type { enum gpmi_type {
IS_MX23, IS_MX23,
IS_MX28, IS_MX28,
@ -150,13 +141,6 @@ struct gpmi_nand_data {
int current_chip; int current_chip;
unsigned int command_length; unsigned int command_length;
/* passed from upper layer */
uint8_t *upper_buf;
int upper_len;
/* for DMA operations */
bool direct_dma_map_ok;
struct scatterlist cmd_sgl; struct scatterlist cmd_sgl;
char *cmd_buffer; char *cmd_buffer;
@ -178,8 +162,6 @@ struct gpmi_nand_data {
/* DMA channels */ /* DMA channels */
#define DMA_CHANS 8 #define DMA_CHANS 8
struct dma_chan *dma_chans[DMA_CHANS]; struct dma_chan *dma_chans[DMA_CHANS];
enum dma_ops_type last_dma_type;
enum dma_ops_type dma_type;
struct completion dma_done; struct completion dma_done;
/* private */ /* private */
@ -189,7 +171,7 @@ struct gpmi_nand_data {
/* Common Services */ /* Common Services */
int common_nfc_set_geometry(struct gpmi_nand_data *); int common_nfc_set_geometry(struct gpmi_nand_data *);
struct dma_chan *get_dma_chan(struct gpmi_nand_data *); struct dma_chan *get_dma_chan(struct gpmi_nand_data *);
void prepare_data_dma(struct gpmi_nand_data *, bool prepare_data_dma(struct gpmi_nand_data *, const void *buf, int len,
enum dma_data_direction dr); enum dma_data_direction dr);
int start_dma_without_bch_irq(struct gpmi_nand_data *, int start_dma_without_bch_irq(struct gpmi_nand_data *,
struct dma_async_tx_descriptor *); struct dma_async_tx_descriptor *);
@ -208,8 +190,9 @@ int gpmi_disable_clk(struct gpmi_nand_data *this);
int gpmi_setup_data_interface(struct mtd_info *mtd, int chipnr, int gpmi_setup_data_interface(struct mtd_info *mtd, int chipnr,
const struct nand_data_interface *conf); const struct nand_data_interface *conf);
void gpmi_nfc_apply_timings(struct gpmi_nand_data *this); void gpmi_nfc_apply_timings(struct gpmi_nand_data *this);
int gpmi_read_data(struct gpmi_nand_data *); int gpmi_read_data(struct gpmi_nand_data *, void *buf, int len);
int gpmi_send_data(struct gpmi_nand_data *); int gpmi_send_data(struct gpmi_nand_data *, const void *buf, int len);
int gpmi_send_page(struct gpmi_nand_data *, int gpmi_send_page(struct gpmi_nand_data *,
dma_addr_t payload, dma_addr_t auxiliary); dma_addr_t payload, dma_addr_t auxiliary);
int gpmi_read_page(struct gpmi_nand_data *, int gpmi_read_page(struct gpmi_nand_data *,

View file

@ -731,23 +731,19 @@ static int hisi_nfc_probe(struct platform_device *pdev)
irq = platform_get_irq(pdev, 0); irq = platform_get_irq(pdev, 0);
if (irq < 0) { if (irq < 0) {
dev_err(dev, "no IRQ resource defined\n"); dev_err(dev, "no IRQ resource defined\n");
ret = -ENXIO; return -ENXIO;
goto err_res;
} }
res = platform_get_resource(pdev, IORESOURCE_MEM, 0); res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
host->iobase = devm_ioremap_resource(dev, res); host->iobase = devm_ioremap_resource(dev, res);
if (IS_ERR(host->iobase)) { if (IS_ERR(host->iobase))
ret = PTR_ERR(host->iobase); return PTR_ERR(host->iobase);
goto err_res;
}
res = platform_get_resource(pdev, IORESOURCE_MEM, 1); res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
host->mmio = devm_ioremap_resource(dev, res); host->mmio = devm_ioremap_resource(dev, res);
if (IS_ERR(host->mmio)) { if (IS_ERR(host->mmio)) {
ret = PTR_ERR(host->mmio);
dev_err(dev, "devm_ioremap_resource[1] fail\n"); dev_err(dev, "devm_ioremap_resource[1] fail\n");
goto err_res; return PTR_ERR(host->mmio);
} }
mtd->name = "hisi_nand"; mtd->name = "hisi_nand";
@ -770,19 +766,17 @@ static int hisi_nfc_probe(struct platform_device *pdev)
ret = devm_request_irq(dev, irq, hinfc_irq_handle, 0x0, "nandc", host); ret = devm_request_irq(dev, irq, hinfc_irq_handle, 0x0, "nandc", host);
if (ret) { if (ret) {
dev_err(dev, "failed to request IRQ\n"); dev_err(dev, "failed to request IRQ\n");
goto err_res; return ret;
} }
ret = nand_scan_ident(mtd, max_chips, NULL); ret = nand_scan_ident(mtd, max_chips, NULL);
if (ret) if (ret)
goto err_res; return ret;
host->buffer = dmam_alloc_coherent(dev, mtd->writesize + mtd->oobsize, host->buffer = dmam_alloc_coherent(dev, mtd->writesize + mtd->oobsize,
&host->dma_buffer, GFP_KERNEL); &host->dma_buffer, GFP_KERNEL);
if (!host->buffer) { if (!host->buffer)
ret = -ENOMEM; return -ENOMEM;
goto err_res;
}
host->dma_oob = host->dma_buffer + mtd->writesize; host->dma_oob = host->dma_buffer + mtd->writesize;
memset(host->buffer, 0xff, mtd->writesize + mtd->oobsize); memset(host->buffer, 0xff, mtd->writesize + mtd->oobsize);
@ -798,8 +792,7 @@ static int hisi_nfc_probe(struct platform_device *pdev)
*/ */
default: default:
dev_err(dev, "NON-2KB page size nand flash\n"); dev_err(dev, "NON-2KB page size nand flash\n");
ret = -EINVAL; return -EINVAL;
goto err_res;
} }
hinfc_write(host, flag, HINFC504_CON); hinfc_write(host, flag, HINFC504_CON);
@ -809,21 +802,17 @@ static int hisi_nfc_probe(struct platform_device *pdev)
ret = nand_scan_tail(mtd); ret = nand_scan_tail(mtd);
if (ret) { if (ret) {
dev_err(dev, "nand_scan_tail failed: %d\n", ret); dev_err(dev, "nand_scan_tail failed: %d\n", ret);
goto err_res; return ret;
} }
ret = mtd_device_register(mtd, NULL, 0); ret = mtd_device_register(mtd, NULL, 0);
if (ret) { if (ret) {
dev_err(dev, "Err MTD partition=%d\n", ret); dev_err(dev, "Err MTD partition=%d\n", ret);
goto err_mtd; nand_cleanup(chip);
return ret;
} }
return 0; return 0;
err_mtd:
nand_release(mtd);
err_res:
return ret;
} }
static int hisi_nfc_remove(struct platform_device *pdev) static int hisi_nfc_remove(struct platform_device *pdev)

View file

@ -673,7 +673,7 @@ static int lpc32xx_nand_probe(struct platform_device *pdev)
host->io_base = devm_ioremap_resource(&pdev->dev, rc); host->io_base = devm_ioremap_resource(&pdev->dev, rc);
if (IS_ERR(host->io_base)) if (IS_ERR(host->io_base))
return PTR_ERR(host->io_base); return PTR_ERR(host->io_base);
host->io_base_phy = rc->start; host->io_base_phy = rc->start;
nand_chip = &host->nand_chip; nand_chip = &host->nand_chip;
@ -706,11 +706,11 @@ static int lpc32xx_nand_probe(struct platform_device *pdev)
if (IS_ERR(host->clk)) { if (IS_ERR(host->clk)) {
dev_err(&pdev->dev, "Clock initialization failure\n"); dev_err(&pdev->dev, "Clock initialization failure\n");
res = -ENOENT; res = -ENOENT;
goto err_exit1; goto free_gpio;
} }
res = clk_prepare_enable(host->clk); res = clk_prepare_enable(host->clk);
if (res) if (res)
goto err_put_clk; goto put_clk;
nand_chip->cmd_ctrl = lpc32xx_nand_cmd_ctrl; nand_chip->cmd_ctrl = lpc32xx_nand_cmd_ctrl;
nand_chip->dev_ready = lpc32xx_nand_device_ready; nand_chip->dev_ready = lpc32xx_nand_device_ready;
@ -744,7 +744,7 @@ static int lpc32xx_nand_probe(struct platform_device *pdev)
res = lpc32xx_dma_setup(host); res = lpc32xx_dma_setup(host);
if (res) { if (res) {
res = -EIO; res = -EIO;
goto err_exit2; goto unprepare_clk;
} }
} }
@ -754,18 +754,18 @@ static int lpc32xx_nand_probe(struct platform_device *pdev)
*/ */
res = nand_scan_ident(mtd, 1, NULL); res = nand_scan_ident(mtd, 1, NULL);
if (res) if (res)
goto err_exit3; goto release_dma_chan;
host->dma_buf = devm_kzalloc(&pdev->dev, mtd->writesize, GFP_KERNEL); host->dma_buf = devm_kzalloc(&pdev->dev, mtd->writesize, GFP_KERNEL);
if (!host->dma_buf) { if (!host->dma_buf) {
res = -ENOMEM; res = -ENOMEM;
goto err_exit3; goto release_dma_chan;
} }
host->dummy_buf = devm_kzalloc(&pdev->dev, mtd->writesize, GFP_KERNEL); host->dummy_buf = devm_kzalloc(&pdev->dev, mtd->writesize, GFP_KERNEL);
if (!host->dummy_buf) { if (!host->dummy_buf) {
res = -ENOMEM; res = -ENOMEM;
goto err_exit3; goto release_dma_chan;
} }
nand_chip->ecc.mode = NAND_ECC_HW; nand_chip->ecc.mode = NAND_ECC_HW;
@ -783,14 +783,14 @@ static int lpc32xx_nand_probe(struct platform_device *pdev)
if (host->irq < 0) { if (host->irq < 0) {
dev_err(&pdev->dev, "failed to get platform irq\n"); dev_err(&pdev->dev, "failed to get platform irq\n");
res = -EINVAL; res = -EINVAL;
goto err_exit3; goto release_dma_chan;
} }
if (request_irq(host->irq, (irq_handler_t)&lpc3xxx_nand_irq, if (request_irq(host->irq, (irq_handler_t)&lpc3xxx_nand_irq,
IRQF_TRIGGER_HIGH, DRV_NAME, host)) { IRQF_TRIGGER_HIGH, DRV_NAME, host)) {
dev_err(&pdev->dev, "Error requesting NAND IRQ\n"); dev_err(&pdev->dev, "Error requesting NAND IRQ\n");
res = -ENXIO; res = -ENXIO;
goto err_exit3; goto release_dma_chan;
} }
/* /*
@ -799,27 +799,29 @@ static int lpc32xx_nand_probe(struct platform_device *pdev)
*/ */
res = nand_scan_tail(mtd); res = nand_scan_tail(mtd);
if (res) if (res)
goto err_exit4; goto free_irq;
mtd->name = DRV_NAME; mtd->name = DRV_NAME;
res = mtd_device_register(mtd, host->ncfg->parts, res = mtd_device_register(mtd, host->ncfg->parts,
host->ncfg->num_parts); host->ncfg->num_parts);
if (!res) if (res)
return res; goto cleanup_nand;
nand_release(mtd); return 0;
err_exit4: cleanup_nand:
nand_cleanup(nand_chip);
free_irq:
free_irq(host->irq, host); free_irq(host->irq, host);
err_exit3: release_dma_chan:
if (use_dma) if (use_dma)
dma_release_channel(host->dma_chan); dma_release_channel(host->dma_chan);
err_exit2: unprepare_clk:
clk_disable_unprepare(host->clk); clk_disable_unprepare(host->clk);
err_put_clk: put_clk:
clk_put(host->clk); clk_put(host->clk);
err_exit1: free_gpio:
lpc32xx_wp_enable(host); lpc32xx_wp_enable(host);
gpio_free(host->ncfg->wp_gpio); gpio_free(host->ncfg->wp_gpio);

View file

@ -831,11 +831,11 @@ static int lpc32xx_nand_probe(struct platform_device *pdev)
if (IS_ERR(host->clk)) { if (IS_ERR(host->clk)) {
dev_err(&pdev->dev, "Clock failure\n"); dev_err(&pdev->dev, "Clock failure\n");
res = -ENOENT; res = -ENOENT;
goto err_exit1; goto enable_wp;
} }
res = clk_prepare_enable(host->clk); res = clk_prepare_enable(host->clk);
if (res) if (res)
goto err_exit1; goto enable_wp;
/* Set NAND IO addresses and command/ready functions */ /* Set NAND IO addresses and command/ready functions */
chip->IO_ADDR_R = SLC_DATA(host->io_base); chip->IO_ADDR_R = SLC_DATA(host->io_base);
@ -874,19 +874,19 @@ static int lpc32xx_nand_probe(struct platform_device *pdev)
GFP_KERNEL); GFP_KERNEL);
if (host->data_buf == NULL) { if (host->data_buf == NULL) {
res = -ENOMEM; res = -ENOMEM;
goto err_exit2; goto unprepare_clk;
} }
res = lpc32xx_nand_dma_setup(host); res = lpc32xx_nand_dma_setup(host);
if (res) { if (res) {
res = -EIO; res = -EIO;
goto err_exit2; goto unprepare_clk;
} }
/* Find NAND device */ /* Find NAND device */
res = nand_scan_ident(mtd, 1, NULL); res = nand_scan_ident(mtd, 1, NULL);
if (res) if (res)
goto err_exit3; goto release_dma;
/* OOB and ECC CPU and DMA work areas */ /* OOB and ECC CPU and DMA work areas */
host->ecc_buf = (uint32_t *)(host->data_buf + LPC32XX_DMA_DATA_SIZE); host->ecc_buf = (uint32_t *)(host->data_buf + LPC32XX_DMA_DATA_SIZE);
@ -920,21 +920,23 @@ static int lpc32xx_nand_probe(struct platform_device *pdev)
*/ */
res = nand_scan_tail(mtd); res = nand_scan_tail(mtd);
if (res) if (res)
goto err_exit3; goto release_dma;
mtd->name = "nxp_lpc3220_slc"; mtd->name = "nxp_lpc3220_slc";
res = mtd_device_register(mtd, host->ncfg->parts, res = mtd_device_register(mtd, host->ncfg->parts,
host->ncfg->num_parts); host->ncfg->num_parts);
if (!res) if (res)
return res; goto cleanup_nand;
nand_release(mtd); return 0;
err_exit3: cleanup_nand:
nand_cleanup(chip);
release_dma:
dma_release_channel(host->dma_chan); dma_release_channel(host->dma_chan);
err_exit2: unprepare_clk:
clk_disable_unprepare(host->clk); clk_disable_unprepare(host->clk);
err_exit1: enable_wp:
lpc32xx_wp_enable(host); lpc32xx_wp_enable(host);
return res; return res;

View file

@@ -500,7 +500,6 @@ static int mtk_ecc_probe(struct platform_device *pdev)
         struct device *dev = &pdev->dev;
         struct mtk_ecc *ecc;
         struct resource *res;
-        const struct of_device_id *of_ecc_id = NULL;
         u32 max_eccdata_size;
         int irq, ret;
@@ -508,11 +507,7 @@ static int mtk_ecc_probe(struct platform_device *pdev)
         if (!ecc)
                 return -ENOMEM;
 
-        of_ecc_id = of_match_device(mtk_ecc_dt_match, &pdev->dev);
-        if (!of_ecc_id)
-                return -ENODEV;
-
-        ecc->caps = of_ecc_id->data;
+        ecc->caps = of_device_get_match_data(dev);
 
         max_eccdata_size = ecc->caps->num_ecc_strength - 1;
         max_eccdata_size = ecc->caps->ecc_strength[max_eccdata_size];

View file

@@ -1434,7 +1434,6 @@ static int mtk_nfc_probe(struct platform_device *pdev)
         struct device_node *np = dev->of_node;
         struct mtk_nfc *nfc;
         struct resource *res;
-        const struct of_device_id *of_nfc_id = NULL;
         int ret, irq;
 
         nfc = devm_kzalloc(dev, sizeof(*nfc), GFP_KERNEL);
@@ -1452,6 +1451,7 @@ static int mtk_nfc_probe(struct platform_device *pdev)
         else if (!nfc->ecc)
                 return -ENODEV;
 
+        nfc->caps = of_device_get_match_data(dev);
         nfc->dev = dev;
 
         res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
@@ -1498,14 +1498,6 @@ static int mtk_nfc_probe(struct platform_device *pdev)
                 goto clk_disable;
         }
 
-        of_nfc_id = of_match_device(mtk_nfc_id_table, &pdev->dev);
-        if (!of_nfc_id) {
-                ret = -ENODEV;
-                goto clk_disable;
-        }
-
-        nfc->caps = of_nfc_id->data;
-
         platform_set_drvdata(pdev, nfc);
 
         ret = mtk_nfc_nand_chips_init(dev, nfc);

View file

@@ -2169,7 +2169,6 @@ static int nand_set_features_op(struct nand_chip *chip, u8 feature,
         struct mtd_info *mtd = nand_to_mtd(chip);
         const u8 *params = data;
         int i, ret;
-        u8 status;
 
         if (chip->exec_op) {
                 const struct nand_sdr_timings *sdr =
@@ -2183,26 +2182,18 @@ static int nand_set_features_op(struct nand_chip *chip, u8 feature,
                 };
                 struct nand_operation op = NAND_OPERATION(instrs);
 
-                ret = nand_exec_op(chip, &op);
-                if (ret)
-                        return ret;
-
-                ret = nand_status_op(chip, &status);
-                if (ret)
-                        return ret;
-        } else {
-                chip->cmdfunc(mtd, NAND_CMD_SET_FEATURES, feature, -1);
-                for (i = 0; i < ONFI_SUBFEATURE_PARAM_LEN; ++i)
-                        chip->write_byte(mtd, params[i]);
-
-                ret = chip->waitfunc(mtd, chip);
-                if (ret < 0)
-                        return ret;
-
-                status = ret;
+                return nand_exec_op(chip, &op);
         }
 
-        if (status & NAND_STATUS_FAIL)
+        chip->cmdfunc(mtd, NAND_CMD_SET_FEATURES, feature, -1);
+        for (i = 0; i < ONFI_SUBFEATURE_PARAM_LEN; ++i)
+                chip->write_byte(mtd, params[i]);
+
+        ret = chip->waitfunc(mtd, chip);
+        if (ret < 0)
+                return ret;
+
+        if (ret & NAND_STATUS_FAIL)
                 return -EIO;
 
         return 0;
@@ -5086,6 +5077,37 @@ static int nand_flash_detect_ext_param_page(struct nand_chip *chip,
         return ret;
 }
 
+/*
+ * Recover data with bit-wise majority
+ */
+static void nand_bit_wise_majority(const void **srcbufs,
+                                   unsigned int nsrcbufs,
+                                   void *dstbuf,
+                                   unsigned int bufsize)
+{
+        int i, j, k;
+
+        for (i = 0; i < bufsize; i++) {
+                u8 val = 0;
+
+                for (j = 0; j < 8; j++) {
+                        unsigned int cnt = 0;
+
+                        for (k = 0; k < nsrcbufs; k++) {
+                                const u8 *srcbuf = srcbufs[k];
+
+                                if (srcbuf[i] & BIT(j))
+                                        cnt++;
+                        }
+
+                        if (cnt > nsrcbufs / 2)
+                                val |= BIT(j);
+                }
+
+                ((u8 *)dstbuf)[i] = val;
+        }
+}
+
 /*
  * Check if the NAND chip is ONFI compliant, returns 1 if it is, 0 otherwise.
  */
@@ -5102,7 +5124,7 @@ static int nand_flash_detect_onfi(struct nand_chip *chip)
                 return 0;
 
         /* ONFI chip: allocate a buffer to hold its parameter page */
-        p = kzalloc(sizeof(*p), GFP_KERNEL);
+        p = kzalloc((sizeof(*p) * 3), GFP_KERNEL);
         if (!p)
                 return -ENOMEM;
@@ -5113,21 +5135,32 @@ static int nand_flash_detect_onfi(struct nand_chip *chip)
         }
 
         for (i = 0; i < 3; i++) {
-                ret = nand_read_data_op(chip, p, sizeof(*p), true);
+                ret = nand_read_data_op(chip, &p[i], sizeof(*p), true);
                 if (ret) {
                         ret = 0;
                         goto free_onfi_param_page;
                 }
 
-                if (onfi_crc16(ONFI_CRC_BASE, (uint8_t *)p, 254) ==
+                if (onfi_crc16(ONFI_CRC_BASE, (u8 *)&p[i], 254) ==
                                 le16_to_cpu(p->crc)) {
+                        if (i)
+                                memcpy(p, &p[i], sizeof(*p));
                         break;
                 }
         }
 
         if (i == 3) {
-                pr_err("Could not find valid ONFI parameter page; aborting\n");
-                goto free_onfi_param_page;
+                const void *srcbufs[3] = {p, p + 1, p + 2};
+
+                pr_warn("Could not find a valid ONFI parameter page, trying bit-wise majority to recover it\n");
+                nand_bit_wise_majority(srcbufs, ARRAY_SIZE(srcbufs), p,
+                                       sizeof(*p));
+
+                if (onfi_crc16(ONFI_CRC_BASE, (u8 *)p, 254) !=
+                                le16_to_cpu(p->crc)) {
+                        pr_err("ONFI parameter recovery failed, aborting\n");
+                        goto free_onfi_param_page;
+                }
         }
 
         /* Check version */
@@ -6630,24 +6663,26 @@ EXPORT_SYMBOL(nand_scan_tail);
 #endif
 
 /**
- * nand_scan - [NAND Interface] Scan for the NAND device
+ * nand_scan_with_ids - [NAND Interface] Scan for the NAND device
  * @mtd: MTD device structure
  * @maxchips: number of chips to scan for
+ * @ids: optional flash IDs table
  *
  * This fills out all the uninitialized function pointers with the defaults.
 * The flash ID is read and the mtd/chip structures are filled with the
 * appropriate values.
 */
-int nand_scan(struct mtd_info *mtd, int maxchips)
+int nand_scan_with_ids(struct mtd_info *mtd, int maxchips,
+                       struct nand_flash_dev *ids)
 {
         int ret;
 
-        ret = nand_scan_ident(mtd, maxchips, NULL);
+        ret = nand_scan_ident(mtd, maxchips, ids);
         if (!ret)
                 ret = nand_scan_tail(mtd);
         return ret;
 }
-EXPORT_SYMBOL(nand_scan);
+EXPORT_SYMBOL(nand_scan_with_ids);
 
 /**
  * nand_cleanup - [NAND Interface] Free resources held by the NAND device
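
Callers of plain nand_scan() are presumably unaffected by the rename above (a nand_scan() wrapper passing a NULL ID table is implied but not shown in this excerpt). A hedged sketch of how a controller driver could use the new nand_scan_with_ids() entry point to feed in a private flash ID table; the table contents and function names are made up:

#include <linux/mtd/mtd.h>
#include <linux/mtd/rawnand.h>

/* Hypothetical extra ID table for a chip the generic tables do not know. */
static struct nand_flash_dev my_nand_ids[] = {
        { .name = "EXAMPLE-NAND-64MiB", {{ .dev_id = 0xa2 }},
          .pagesize = 512, .chipsize = 64, .erasesize = 0x4000 },
        { /* sentinel */ },
};

/* Called from a controller driver's probe path with a prepared mtd/chip. */
static int my_nfc_scan(struct mtd_info *mtd)
{
        int ret;

        /* Behaves like nand_scan(mtd, 1), but also consults my_nand_ids. */
        ret = nand_scan_with_ids(mtd, 1, my_nand_ids);
        if (ret)
                return ret;

        return mtd_device_register(mtd, NULL, 0);
}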

View file

@@ -165,49 +165,16 @@
 #define NFC_MAX_CS          7

-/*
- * Ready/Busy detection type: describes the Ready/Busy detection modes
- *
- * @RB_NONE:   no external detection available, rely on STATUS command
- *             and software timeouts
- * @RB_NATIVE: use sunxi NAND controller Ready/Busy support. The Ready/Busy
- *             pin of the NAND flash chip must be connected to one of the
- *             native NAND R/B pins (those which can be muxed to the NAND
- *             Controller)
- * @RB_GPIO:   use a simple GPIO to handle Ready/Busy status. The Ready/Busy
- *             pin of the NAND flash chip must be connected to a GPIO capable
- *             pin.
- */
-enum sunxi_nand_rb_type {
-    RB_NONE,
-    RB_NATIVE,
-    RB_GPIO,
-};
-
-/*
- * Ready/Busy structure: stores information related to Ready/Busy detection
- *
- * @type: the Ready/Busy detection mode
- * @info: information related to the R/B detection mode. Either a gpio
- *        id or a native R/B id (those supported by the NAND controller).
- */
-struct sunxi_nand_rb {
-    enum sunxi_nand_rb_type type;
-    union {
-        int gpio;
-        int nativeid;
-    } info;
-};
-
 /*
  * Chip Select structure: stores information related to NAND Chip Select
  *
  * @cs: the NAND CS id used to communicate with a NAND Chip
- * @rb: the Ready/Busy description
+ * @rb: the Ready/Busy pin ID. -1 means no R/B pin connected to the
+ *      NFC
  */
 struct sunxi_nand_chip_sel {
     u8 cs;
-    struct sunxi_nand_rb rb;
+    s8 rb;
 };

 /*
@@ -440,30 +407,19 @@ static int sunxi_nfc_dev_ready(struct mtd_info *mtd)
     struct nand_chip *nand = mtd_to_nand(mtd);
     struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand);
     struct sunxi_nfc *nfc = to_sunxi_nfc(sunxi_nand->nand.controller);
-    struct sunxi_nand_rb *rb;
-    int ret;
+    u32 mask;

     if (sunxi_nand->selected < 0)
         return 0;

-    rb = &sunxi_nand->sels[sunxi_nand->selected].rb;
-
-    switch (rb->type) {
-    case RB_NATIVE:
-        ret = !!(readl(nfc->regs + NFC_REG_ST) &
-                 NFC_RB_STATE(rb->info.nativeid));
-        break;
-    case RB_GPIO:
-        ret = gpio_get_value(rb->info.gpio);
-        break;
-    case RB_NONE:
-    default:
-        ret = 0;
+    if (sunxi_nand->sels[sunxi_nand->selected].rb < 0) {
         dev_err(nfc->dev, "cannot check R/B NAND status!\n");
-        break;
+        return 0;
     }

-    return ret;
+    mask = NFC_RB_STATE(sunxi_nand->sels[sunxi_nand->selected].rb);
+
+    return !!(readl(nfc->regs + NFC_REG_ST) & mask);
 }

 static void sunxi_nfc_select_chip(struct mtd_info *mtd, int chip)
@@ -488,12 +444,11 @@ static void sunxi_nfc_select_chip(struct mtd_info *mtd, int chip)
         ctl |= NFC_CE_SEL(sel->cs) | NFC_EN |
                NFC_PAGE_SHIFT(nand->page_shift);
-        if (sel->rb.type == RB_NONE) {
+        if (sel->rb < 0) {
             nand->dev_ready = NULL;
         } else {
             nand->dev_ready = sunxi_nfc_dev_ready;
-            if (sel->rb.type == RB_NATIVE)
-                ctl |= NFC_RB_SEL(sel->rb.info.nativeid);
+            ctl |= NFC_RB_SEL(sel->rb);
         }

         writel(mtd->writesize, nfc->regs + NFC_REG_SPARE_AREA);
@@ -1946,26 +1901,10 @@ static int sunxi_nand_chip_init(struct device *dev, struct sunxi_nfc *nfc,
         chip->sels[i].cs = tmp;

         if (!of_property_read_u32_index(np, "allwinner,rb", i, &tmp) &&
-            tmp < 2) {
-            chip->sels[i].rb.type = RB_NATIVE;
-            chip->sels[i].rb.info.nativeid = tmp;
-        } else {
-            ret = of_get_named_gpio(np, "rb-gpios", i);
-            if (ret >= 0) {
-                tmp = ret;
-                chip->sels[i].rb.type = RB_GPIO;
-                chip->sels[i].rb.info.gpio = tmp;
-                ret = devm_gpio_request(dev, tmp, "nand-rb");
-                if (ret)
-                    return ret;
-
-                ret = gpio_direction_input(tmp);
-                if (ret)
-                    return ret;
-            } else {
-                chip->sels[i].rb.type = RB_NONE;
-            }
-        }
+            tmp < 2)
+            chip->sels[i].rb = tmp;
+        else
+            chip->sels[i].rb = -1;
     }

     nand = &chip->nand;


@@ -86,6 +86,7 @@ struct nand_pos {
  * @ooboffs: the OOB offset within the page
  * @ooblen: the number of OOB bytes to read from/write to this page
  * @oobbuf: buffer to store OOB data in or get OOB data from
+ * @mode: one of the %MTD_OPS_XXX mode
  *
  * This object is used to pass per-page I/O requests to NAND sub-layers. This
  * way all useful information are already formatted in a useful way and
@@ -106,6 +107,7 @@ struct nand_page_io_req {
         const void *out;
         void *in;
     } oobbuf;
+    int mode;
 };

 /**
@@ -599,6 +601,7 @@ static inline void nanddev_io_iter_init(struct nand_device *nand,
 {
     struct mtd_info *mtd = nanddev_to_mtd(nand);

+    iter->req.mode = req->mode;
     iter->req.dataoffs = nanddev_offs_to_pos(nand, offs, &iter->req.pos);
     iter->req.ooboffs = req->ooboffs;
     iter->oobbytes_per_page = mtd_oobavail(mtd, req);
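
With @mode now copied into each per-page request, a NAND sub-layer iterating with nanddev_io_for_each_page() can honour raw accesses. A hedged sketch of that idea; read_page_raw() and read_page_hwecc() are hypothetical driver helpers, and nand/from/ops come from the surrounding mtd read path:

    struct nand_io_iter iter;
    int ret = 0;

    nanddev_io_for_each_page(nand, from, ops, &iter) {
        if (iter.req.mode == MTD_OPS_RAW)
            ret = read_page_raw(nand, &iter.req);    /* hypothetical, ECC bypassed */
        else
            ret = read_page_hwecc(nand, &iter.req);  /* hypothetical, ECC enabled */
        if (ret)
            break;
    }
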


@@ -28,7 +28,14 @@ struct nand_flash_dev;
 struct device_node;

 /* Scan and identify a NAND device */
-int nand_scan(struct mtd_info *mtd, int max_chips);
+int nand_scan_with_ids(struct mtd_info *mtd, int max_chips,
+                       struct nand_flash_dev *ids);
+
+static inline int nand_scan(struct mtd_info *mtd, int max_chips)
+{
+    return nand_scan_with_ids(mtd, max_chips, NULL);
+}
+
 /*
  * Separate phases of nand_scan(), allowing board driver to intervene
  * and override command or ECC setup according to flash type.
@@ -740,8 +747,9 @@ enum nand_data_interface_type {
 /**
  * struct nand_data_interface - NAND interface timing
  * @type: type of the timing
  * @timings: The timing, type according to @type
+ * @timings.sdr: Use it when @type is %NAND_SDR_IFACE.
  */
 struct nand_data_interface {
     enum nand_data_interface_type type;
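
The new @timings.sdr line documents the only member currently held by the union; controller drivers usually reach it through the nand_get_sdr_timings() accessor from this header. A sketch of a ->setup_data_interface() implementation for a made-up "foo" controller, with the register programming left as a placeholder:

    static int foo_setup_data_interface(struct mtd_info *mtd, int csline,
                                        const struct nand_data_interface *conf)
    {
        const struct nand_sdr_timings *sdr = nand_get_sdr_timings(conf);

        if (IS_ERR(sdr))
            return PTR_ERR(sdr);

        /* translate sdr->tWB_max, sdr->tWHR_min, ... (picoseconds) into
         * register values of the made-up "foo" controller here */
        return 0;
    }
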
@@ -798,8 +806,9 @@ struct nand_op_addr_instr {
 /**
  * struct nand_op_data_instr - Definition of a data instruction
  * @len: number of data bytes to move
- * @in: buffer to fill when reading from the NAND chip
- * @out: buffer to read from when writing to the NAND chip
+ * @buf: buffer to fill
+ * @buf.in: buffer to fill when reading from the NAND chip
+ * @buf.out: buffer to read from when writing to the NAND chip
  * @force_8bit: force 8-bit access
  *
  * Please note that "in" and "out" are inverted from the ONFI specification
@@ -842,9 +851,13 @@ enum nand_op_instr_type {
 /**
  * struct nand_op_instr - Instruction object
  * @type: the instruction type
- * @cmd/@addr/@data/@waitrdy: extra data associated to the instruction.
- *                            You'll have to use the appropriate element
- *                            depending on @type
+ * @ctx: extra data associated to the instruction. You'll have to use the
+ *       appropriate element depending on @type
+ * @ctx.cmd: use it if @type is %NAND_OP_CMD_INSTR
+ * @ctx.addr: use it if @type is %NAND_OP_ADDR_INSTR
+ * @ctx.data: use it if @type is %NAND_OP_DATA_IN_INSTR
+ *            or %NAND_OP_DATA_OUT_INSTR
+ * @ctx.waitrdy: use it if @type is %NAND_OP_WAITRDY_INSTR
  * @delay_ns: delay the controller should apply after the instruction has been
  *            issued on the bus. Most modern controllers have internal timings
  *            control logic, and in this case, the controller driver can ignore
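
To make the @ctx documentation above concrete, here is a sketch of how the union ends up being filled through the NAND_OP_* helper macros from this header, building a PARAMETER PAGE read similar to what the core issues during ONFI detection. Timing and timeout values are illustrative, chip is the struct nand_chip being driven, and nand_exec_op() is assumed to be the usual dispatch helper:

    u8 param_page[256];
    const u8 addr = 0;
    int ret;

    struct nand_op_instr instrs[] = {
        NAND_OP_CMD(NAND_CMD_PARAM, 0),                      /* fills ctx.cmd.opcode */
        NAND_OP_ADDR(1, &addr, 25),                          /* fills ctx.addr, 1 cycle */
        NAND_OP_WAITRDY(100, 0),                             /* fills ctx.waitrdy, 100 ms */
        NAND_OP_DATA_IN(sizeof(param_page), param_page, 0),  /* fills ctx.data */
    };
    struct nand_operation op = NAND_OPERATION(instrs);

    ret = nand_exec_op(chip, &op);
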
@@ -997,7 +1010,9 @@ struct nand_op_parser_data_constraints {
  * struct nand_op_parser_pattern_elem - One element of a pattern
  * @type: the instructuction type
  * @optional: whether this element of the pattern is optional or mandatory
- * @addr/@data: address or data constraint (number of cycles or data length)
+ * @ctx: address or data constraint
+ * @ctx.addr: address constraint (number of cycles)
+ * @ctx.data: data constraint (data length)
  */
 struct nand_op_parser_pattern_elem {
     enum nand_op_instr_type type;
@@ -1224,6 +1239,8 @@ int nand_op_parser_exec_op(struct nand_chip *chip,
  *            devices.
  * @priv: [OPTIONAL] pointer to private chip data
  * @manufacturer: [INTERN] Contains manufacturer information
+ * @manufacturer.desc: [INTERN] Contains manufacturer's description
+ * @manufacturer.priv: [INTERN] Contains manufacturer private information
  */
 struct nand_chip {