Commit graph

969158 commits

Author SHA1 Message Date
Yonglong Liu
d78e5b6a67 net: hns3: keep MAC pause mode when multiple TCs are enabled
Below HNAE3_DEVICE_VERSION_V3, MAC pause mode supports only one
TC; when multiple TCs are enabled, PFC mode is force-enabled.

HNAE3_DEVICE_VERSION_V3 can support MAC pause mode on multiple
TCs, so when multiple TCs are enabled, just keep MAC pause mode
and enable PFC mode only according to the user settings.

Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-12-01 15:16:31 -08:00
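
The decision above boils down to a small version/TC check. A minimal illustrative sketch follows; the enum, helper name and numeric version values are hypothetical stand-ins, not the hns3 driver's actual code.

  /* Illustrative sketch of the flow-control decision described above.
   * All names and version numbers are hypothetical stand-ins.
   */
  #include <stdbool.h>
  #include <stdio.h>

  enum fc_mode { FC_MAC_PAUSE, FC_PFC };

  /* Below device version 3, MAC pause only works with a single TC. */
  static enum fc_mode pick_fc_mode(unsigned int dev_version, unsigned int num_tc,
                                   bool user_wants_pfc)
  {
      if (dev_version < 3 && num_tc > 1)
          return FC_PFC;          /* forced, regardless of the user setting */
      if (user_wants_pfc)
          return FC_PFC;          /* honour the user's choice */
      return FC_MAC_PAUSE;        /* V3+: keep MAC pause even with multiple TCs */
  }

  int main(void)
  {
      /* On a V3 device with 4 TCs and PFC not requested, MAC pause is kept. */
      printf("%d\n", pick_fc_mode(3, 4, false));
      return 0;
  }
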
Huazhong Tan
ade36ccef1 net: hns3: add a check for device's version in hns3_tunnel_csum_bug()
For devices whose version is V3 or above, the hardware can do
checksum offload for non-tunnel UDP packets whose destination port
is the IANA-assigned one. So add a check for the device's version
in hns3_tunnel_csum_bug().

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-12-01 15:16:31 -08:00
Huazhong Tan
b1533ada74 net: hns3: add more info to hns3_dbg_bd_info()
Since TX hardware checksum and RX completion checksum are now
supported, add the related information to hns3_dbg_bd_info().

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-12-01 15:16:31 -08:00
Huazhong Tan
3e2816219d net: hns3: add udp tunnel checksum segmentation support
For devices that have the capability to handle UDP tunnel
checksum segmentation, add support for it.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-12-01 15:16:31 -08:00
Huazhong Tan
57e72c121c net: hns3: remove unsupported NETIF_F_GSO_UDP_TUNNEL_CSUM
Currently, devices V1 and V2 do not support segmentation
offload for UDP-based tunnel packets that need outer UDP
checksum offload, so there is a workaround in the driver
that sets the outer UDP checksum to zero. This is not what
the user wants, so remove this feature for devices V1 and V2
and add support for it later (when the device has the
ability to do so).

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-12-01 15:16:31 -08:00
Huazhong Tan
66d52f3bf3 net: hns3: add support for TX hardware checksum offload
For the device that supports TX hardware checksum, the hardware
can calculate the checksum from the start position and fill the
checksum in at the offset position, which saves the driver the
work of calculating the L3/L4 type and header length. So add this
feature to the HNS3 ethernet driver.

The previous simple BD description is unsuitable, so rename it
to HW TX CSUM.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-12-01 15:16:31 -08:00
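
To clarify the "calculate from the start and fill at the offset position" behaviour, here is a toy, self-contained model of what such hardware does with a start/offset pair; it is not hns3 driver or descriptor code.

  /* Toy model of the "HW TX CSUM" behaviour described above: compute a 16-bit
   * ones'-complement checksum from a start position and write the result back
   * at start + offset.  This is an illustrative sketch, not hns3 driver code.
   */
  #include <stdint.h>
  #include <stddef.h>
  #include <string.h>

  static uint16_t csum16(const uint8_t *data, size_t len)
  {
      uint32_t sum = 0;

      for (size_t i = 0; i < len; i++)
          sum += (i & 1) ? data[i] : (uint32_t)data[i] << 8;
      while (sum >> 16)
          sum = (sum & 0xffff) + (sum >> 16);
      return (uint16_t)~sum;
  }

  /* "Hardware" fills the checksum: csum_start is where checksumming begins
   * (e.g. the L4 header), csum_offset is where, relative to csum_start, the
   * 16-bit result is stored.  The caller guarantees the offsets are in bounds.
   */
  void hw_fill_csum(uint8_t *pkt, size_t pkt_len,
                    size_t csum_start, size_t csum_offset)
  {
      uint16_t sum;

      memset(pkt + csum_start + csum_offset, 0, 2); /* zero the field first */
      sum = csum16(pkt + csum_start, pkt_len - csum_start);
      pkt[csum_start + csum_offset] = sum >> 8;
      pkt[csum_start + csum_offset + 1] = sum & 0xff;
  }
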
Huazhong Tan
4b2fe769aa net: hns3: add support for RX completion checksum
In some cases (for example, IP fragments), the hardware will
calculate the checksum of the whole packet on RX and set
the HNS3_RXD_L2_CSUM_B flag in the descriptor, so add
support for utilizing this checksum.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-12-01 15:16:31 -08:00
Lukas Bulwahn
9e39394fae net/ipv6: propagate user pointer annotation
For IPV6_2292PKTOPTIONS, do_ipv6_getsockopt() stores the user pointer
optval in the msg_control field of the msghdr.

Hence, sparse rightfully warns at ./net/ipv6/ipv6_sockglue.c:1151:33:

  warning: incorrect type in assignment (different address spaces)
      expected void *msg_control
      got char [noderef] __user *optval

Since commit 1f466e1f15 ("net: cleanly handle kernel vs user buffers for
->msg_control"), user pointers shall be stored in the msg_control_user
field, and kernel pointers in the msg_control field. This allows
__user annotations to be propagated nicely through this struct.

Store optval in msg_control_user to properly record and propagate the
memory space annotation of this pointer.

Note that msg_control_is_user is set to true, so the key invariant, i.e.,
use msg_control_user if and only if msg_control_is_user is true, holds.

The msghdr is further used in the six alternative put_cmsg() calls.
With msg_control_is_user being true, put_cmsg() picks msg_control_user,
preserving the __user annotation, and passes it properly to
copy_to_user().

No functional change. No change in object code.

Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20201127093421.21673-1-lukas.bulwahn@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-12-01 11:42:33 -08:00
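
For reference, the invariant described above can be modelled with a struct shaped like the kernel's msghdr union; the sketch below is a standalone model, not the kernel definition (the __user keyword is a sparse-only annotation, so it is defined away here).

  /* Minimal model of the msg_control / msg_control_user invariant described
   * above.  It mirrors the shape of struct msghdr since commit 1f466e1f15 but
   * is a standalone sketch.
   */
  #include <stdbool.h>
  #include <stddef.h>

  #define __user /* sparse address-space annotation; a no-op outside the kernel */

  struct msghdr_model {
      bool msg_control_is_user;
      union {
          void *msg_control;                /* kernel pointer */
          void __user *msg_control_user;    /* user pointer */
      };
  };

  /* Store a user pointer: set the flag and assign the __user member, never
   * msg_control, so the annotation propagates (as the patch does for the
   * IPV6_2292PKTOPTIONS optval).
   */
  void store_user_control(struct msghdr_model *msg, void __user *optval)
  {
      msg->msg_control_is_user = true;
      msg->msg_control_user = optval;
  }
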
Marco Elver
fa69ee5aa4 net: switch to storing KCOV handle directly in sk_buff
It turns out that usage of skb extensions can cause memory leaks. Ido
Schimmel reported: "[...] there are instances that blindly overwrite
'skb->extensions' by invoking skb_copy_header() after __alloc_skb()."

Therefore, give up on using skb extensions for the KCOV handle, and instead
store kcov_handle directly in sk_buff.

Fixes: 6370cc3bbd ("net: add kcov handle to skb extensions")
Fixes: 85ce50d337 ("net: kcov: don't select SKB_EXTENSIONS when there is no NET")
Fixes: 97f53a08cb ("net: linux/skbuff.h: combine SKB_EXTENSIONS + KCOV handling")
Link: https://lore.kernel.org/linux-wireless/20201121160941.GA485907@shredder.lan/
Reported-by: Ido Schimmel <idosch@idosch.org>
Signed-off-by: Marco Elver <elver@google.com>
Link: https://lore.kernel.org/r/20201125224840.2014773-1-elver@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-12-01 11:26:19 -08:00
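
The "store it directly" approach amounts to a plain field plus trivial helpers that compile away when KCOV is off; a hedged standalone sketch of that idea follows (it is not the kernel's sk_buff definition or its helpers).

  /* Standalone sketch of storing the KCOV handle directly in the skb, as the
   * commit above describes.
   */
  #include <stdint.h>

  #define CONFIG_KCOV 1  /* pretend KCOV is enabled for this sketch */

  struct sk_buff_model {
      /* ... other fields ... */
  #ifdef CONFIG_KCOV
      uint64_t kcov_handle;  /* plain member: no extension allocation to leak */
  #endif
  };

  static inline void skb_model_set_kcov_handle(struct sk_buff_model *skb,
                                               uint64_t handle)
  {
  #ifdef CONFIG_KCOV
      skb->kcov_handle = handle;
  #else
      (void)skb;
      (void)handle;
  #endif
  }

  static inline uint64_t skb_model_get_kcov_handle(const struct sk_buff_model *skb)
  {
  #ifdef CONFIG_KCOV
      return skb->kcov_handle;
  #else
      (void)skb;
      return 0;
  #endif
  }
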
Vlad Buslov
0fca55ed98 net: sched: remove redundant 'rtnl_held' argument
Functions tfilter_notify_chain() and tcf_get_next_proto() are always called
with the rtnl lock held in the current implementation. Moreover, attempting to call
them without rtnl lock would cause a warning down the call chain in
function __tcf_get_next_proto() that requires the lock to be held by
callers. Remove the 'rtnl_held' argument in order to simplify the code and
make rtnl lock requirement explicit.

Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Link: https://lore.kernel.org/r/20201127151205.23492-1-vladbu@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-12-01 11:24:56 -08:00
Jakub Kicinski
cb7fb043e6 linux-can-next-for-5.11-20201130
-----BEGIN PGP SIGNATURE-----
 
 iQFHBAABCgAxFiEEK3kIWJt9yTYMP3ehqclaivrt76kFAl/E/NkTHG1rbEBwZW5n
 dXRyb25peC5kZQAKCRCpyVqK+u3vqREXB/93EL5RGMeI48Yni3WEk0HZadw8oelq
 LWooZQjkNbpzboxwy9w7PspiMfrswkADH3bpPnGnWk9StUeHKgk7T+HxhWsQn/C+
 6xkeOjTr4/qxrKMC+xmgLpML+29gOXdyXFw2FOsHYi6F+bHZWWx483JiLUBc/+SP
 78TiyGSC8Y6cKkU113iPTEg/8FRqmajj/W0hbGgYdksXUPkCxQ3LK0jjt8jybCj0
 ojn7wsqiB0l7nZrgL09L1v9+jgsWwbC1I1kJSOxFVXK3gEcAbV08dbIufpWd649G
 sx8BOa+mkLHiOLdf9KZoLbvHXa/uVS1rZcmTBThdAkUw1cTwq2GmO/zI
 =3YGc
 -----END PGP SIGNATURE-----

Merge tag 'linux-can-next-for-5.11-20201130' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next

Marc Kleine-Budde says:

====================
pull-request: can-next 2020-11-30

Gustavo A. R. Silva's patch for the pcan_usb driver fixes fall-through warnings
for Clang.

The next 5 patches target the mcp251xfd driver and are by Ursula Maplehurst and
me. They optimize the TEF and RX paths by reducing the number of SPI core
requests needed to set the UINC bit.

The remaining 8 patches target the m_can driver. The first 4 are various
cleanups for the SPI binding driver (tcan4x5x) by Sean Nyekjaer, Dan Murphy and
me, followed by 4 cleanup patches by me for the m_can and m_can_platform
drivers.

* tag 'linux-can-next-for-5.11-20201130' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next:
  can: m_can: m_can_class_unregister(): move right after m_can_class_register()
  can: m_can: m_can_plat_remove(): remove unneeded platform_set_drvdata()
  can: m_can: remove not used variable struct m_can_classdev::freq
  can: m_can: Kconfig: convert into a menu
  can: tcan4x5x: tcan4x5x_can_probe(): remove probe failed error message
  can: tcan4x5x: remove mram_start and reg_offset from struct tcan4x5x_priv
  can: tcan4x5x: rename parse_config() function
  can: tcan4x5x: tcan4x5x_clear_interrupts(): remove redundant return statement
  can: mcp251xfd: tef-path: reduce number of SPI core requests to set UINC bit
  can: mcp251xfd: move struct mcp251xfd_tef_ring definition
  can: mcp251xfd: struct mcp251xfd_priv::tef to array of length 1
  can: mcp25xxfd: rx-path: reduce number of SPI core requests to set UINC bit
  can: mcp251xfd: mcp25xxfd_ring_alloc(): add define instead open coding the maximum number of RX objects
  can: pcan_usb_core: fix fall-through warnings for Clang
====================

Link: https://lore.kernel.org/r/20201130141432.278219-1-mkl@pengutronix.de
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-30 19:03:32 -08:00
Tom Rix
76810ed840 net: wan: remove trailing semicolon in macro definition
The macro use will already have a semicolon.

Signed-off-by: Tom Rix <trix@redhat.com>
Link: https://lore.kernel.org/r/20201127165734.2694693-1-trix@redhat.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-30 18:58:31 -08:00
Jakub Kicinski
5f3e915c36 Merge branch 'mptcp-avoid-workqueue-usage-for-data'
Paolo Abeni says:

====================
mptcp: avoid workqueue usage for data

The current locking schema used to protect the MPTCP data-path
requires the use of the MPTCP workqueue to process the incoming
data, depending on the trylock result.

The above poses scalability limits and introduces random delays
in MPTCP-level acks.

With this series we use a single spinlock to protect the MPTCP
data-path, removing the need for workqueue and delayed ack usage.

This additionally reduces the number of atomic operations required
per packet and considerably cleans up the poll/wake-up code.
====================

Link: https://lore.kernel.org/r/cover.1606413118.git.pabeni@redhat.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-30 17:55:26 -08:00
Paolo Abeni
6e628cd3a8 mptcp: use mptcp release_cb for delayed tasks
We have some tasks triggered by the subflow receive path
which require access to the msk socket status, specifically
mptcp_clean_una() and mptcp_push_pending().

We have almost everything in place to defer such tasks to the
msk release_cb when the msk sock is owned.

Since the worker is no longer used to clean the acked data,
fallback sockets need to be flushed explicitly.

As an added bonus we can move the wake-up code into __mptcp_clean_una(),
simplify mptcp_poll() a lot and move the timer update under
the data lock.

The worker is now used only to process and send DATA_FIN
packets and to do the MPTCP-level retransmissions.

Acked-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-30 17:55:23 -08:00
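
The deferral scheme described above follows the common "if the socket is owned, record a flag and let release_cb run the work" pattern; the sketch below illustrates that pattern with hypothetical names and without the real locking details — it is not the MPTCP code.

  /* Illustrative model of deferring work to a release callback when the
   * socket is owned.  Real code sets/tests the flags atomically under the
   * socket spinlock.
   */
  #include <stdbool.h>

  struct msk_model {
      bool owned_by_user;            /* someone currently holds the socket lock */
      unsigned long deferred_flags;  /* work to run at unlock time */
  };

  #define MSK_CLEAN_UNA    (1UL << 0)
  #define MSK_PUSH_PENDING (1UL << 1)

  static void clean_una(struct msk_model *msk)    { (void)msk; /* ack cleanup */ }
  static void push_pending(struct msk_model *msk) { (void)msk; /* tx push */ }

  /* Called from the subflow receive path. */
  void schedule_or_run(struct msk_model *msk, unsigned long flags)
  {
      if (msk->owned_by_user) {
          msk->deferred_flags |= flags;  /* defer to release_cb */
          return;
      }
      if (flags & MSK_CLEAN_UNA)
          clean_una(msk);
      if (flags & MSK_PUSH_PENDING)
          push_pending(msk);
  }

  /* Runs when the lock owner releases the socket. */
  void release_cb(struct msk_model *msk)
  {
      if (msk->deferred_flags & MSK_CLEAN_UNA)
          clean_una(msk);
      if (msk->deferred_flags & MSK_PUSH_PENDING)
          push_pending(msk);
      msk->deferred_flags = 0;
  }
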
Paolo Abeni
7439d687b7 mptcp: avoid a few atomic ops in the rx path
By extending the data_lock scope in mptcp_incoming_options(),
we can use it to protect both snd_una and wnd_end.
In the typical case, we will have a single atomic op instead of two.

Acked-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-30 17:55:23 -08:00
Paolo Abeni
724cfd2ee8 mptcp: allocate TX skbs in msk context
Move the TX skb allocation into mptcp_sendmsg() scope,
and tentatively pre-allocate a number of skbs proportional
to the sendmsg() length.

Use the ssk tx skb cache to prevent the subflow allocation.

This allows removing the msk skb extension cache and will
make the later patches possible.

Acked-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-30 17:55:23 -08:00
Paolo Abeni
879526030c mptcp: protect the rx path with the msk socket spinlock
This spinlock is currently used only to protect the 'owned'
flag inside the socket lock itself. With this patch, we extend
its scope to protect the whole msk receive path and
sk_forward_memory.

Given the above, we can always move data into the msk receive
queue (and OoO queue) from the subflow.

We leverage the previous commit, so that we need to acquire the
spinlock in the tx path only when moving fwd memory.

recvmsg() must now explicitly acquire the socket spinlock
when moving skbs out of sk_receive_queue. To reduce the number of
lock operations required, we use a second rx queue and splice the
first into the latter in mptcp_lock_sock(). Additionally,
rmem-allocated memory is bulk-freed via release_cb().

Acked-by: Florian Westphal <fw@strlen.de>
Co-developed-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-30 17:55:23 -08:00
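
The "second rx queue spliced under the lock" idea can be sketched with two lists: the data path appends to a lock-protected list, and lock time moves the whole list over in one operation so the reader can drain it without further locking. The sketch below uses a pthread mutex as a stand-in for the msk data spinlock; none of these names are the MPTCP implementation.

  /* Standalone sketch of the two-queue scheme described above.  A mutex
   * stands in for the msk data spinlock; types and names are illustrative.
   */
  #include <pthread.h>
  #include <stddef.h>

  struct pkt { struct pkt *next; };

  struct rx_model {
      pthread_mutex_t data_lock;  /* stand-in for the msk data spinlock */
      struct pkt *shared_q;       /* appended to by the subflow/data path */
      struct pkt *private_q;      /* drained by recvmsg() with no lock held */
  };

  /* Data path: append under the lock only. */
  void rx_enqueue(struct rx_model *rx, struct pkt *p)
  {
      pthread_mutex_lock(&rx->data_lock);
      p->next = rx->shared_q;
      rx->shared_q = p;
      pthread_mutex_unlock(&rx->data_lock);
  }

  /* "lock_sock" time: move everything over with a single locked operation,
   * so subsequent per-packet dequeues need no further locking.
   */
  void rx_splice(struct rx_model *rx)
  {
      struct pkt *moved;

      pthread_mutex_lock(&rx->data_lock);
      moved = rx->shared_q;
      rx->shared_q = NULL;
      pthread_mutex_unlock(&rx->data_lock);

      while (moved) {  /* ordering is simplified for brevity */
          struct pkt *next = moved->next;

          moved->next = rx->private_q;
          rx->private_q = moved;
          moved = next;
      }
  }
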
Paolo Abeni
e93da92896 mptcp: implement wmem reservation
This leverages the previous commit to reserve the wmem
required for the sendmsg() operation when the msk socket
lock is first acquired.
Some heuristics are used to get a reasonable [over]estimation of
the whole memory required. If we can't forward-alloc that amount,
fall back to a reasonably small chunk, otherwise enter the
wait-for-memory path.

When sendmsg() needs more memory, it looks at wmem_reserved
first, and if that is exhausted, moves more space from
sk_forward_alloc.

The reserved memory is not persistent and is released at the
next socket unlock via the release_cb().

Overall this will simplify the next patch.

Acked-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-30 17:55:23 -08:00
Paolo Abeni
ad80b0fc6e mptcp: open code mptcp variant for lock_sock
This allows invoking an additional callback under the
socket spinlock.

It will be used by the next patches to avoid additional
spinlock contention.

Acked-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-30 17:55:23 -08:00
Jakub Kicinski
be5724240b Merge branch 'dpaa_eth-add-xdp-support'
Camelia Groza says:

====================
dpaa_eth: add XDP support

Enable XDP support for the QorIQ DPAA1 platforms.

Implement all the current actions (DROP, ABORTED, PASS, TX, REDIRECT). No
Tx batching is added at this time.

Additional XDP_PACKET_HEADROOM bytes are reserved in each frame's headroom.

After transmit, a reference to the xdp_frame is saved in the buffer for
clean-up on confirmation in a newly created structure for software
annotations. DPAA_TX_PRIV_DATA_SIZE bytes are reserved in the buffer for
storing this structure and the XDP program is restricted from accessing
them.

The driver shares the egress frame queues used for XDP with the network
stack. The DPAA driver is an LLTX driver, so no explicit locking is required
on transmission.
====================

Link: https://lore.kernel.org/r/cover.1606322126.git.camelia.groza@nxp.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-30 17:33:24 -08:00
Camelia Groza
ae680bcbd0 dpaa_eth: implement the A050385 erratum workaround for XDP
For XDP TX, even though we start out with correctly aligned buffers, the
XDP program might change the data's alignment. For REDIRECT, we have no
control over the alignment either.

Create a new workaround for xdp_frame structures to verify the erratum
conditions and move the data to a fresh buffer if necessary. Create a new
xdp_frame for managing the new buffer and free the old one using the XDP
API.

Due to alignment constraints, all frames have a 256 byte headroom that
is offered fully to XDP under the erratum. If the XDP program uses all
of it, the data needs to be moved to make room for the xdpf backpointer.

Disable the metadata support since the information can be lost.

Acked-by: Madalin Bucur <madalin.bucur@oss.nxp.com>
Signed-off-by: Camelia Groza <camelia.groza@nxp.com>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-30 17:33:23 -08:00
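
The shape of the workaround, i.e. "if the erratum's alignment conditions are violated, copy the data into a fresh, properly aligned buffer and release the old one", is sketched below. The alignment value and all names are illustrative stand-ins, not the dpaa_eth erratum constants or the XDP API.

  /* Sketch of the "move the data to a fresh buffer" workaround described
   * above.  free() stands in for releasing the old frame via the XDP API.
   */
  #include <stdint.h>
  #include <stdlib.h>
  #include <string.h>

  #define DATA_ALIGN 64  /* hypothetical erratum alignment requirement */

  struct frame {
      void *buf;    /* backing buffer (aligned allocation) */
      void *data;   /* start of packet data inside the buffer */
      size_t len;
  };

  static int erratum_violated(const struct frame *f)
  {
      return ((uintptr_t)f->data % DATA_ALIGN) != 0;
  }

  /* If the XDP program left the data misaligned, copy it into a new aligned
   * buffer and release the old one; otherwise leave the frame untouched.
   */
  int maybe_realign(struct frame *f)
  {
      void *nbuf;

      if (!erratum_violated(f))
          return 0;

      if (posix_memalign(&nbuf, DATA_ALIGN, f->len ? f->len : 1))
          return -1;
      memcpy(nbuf, f->data, f->len);

      free(f->buf);  /* stands in for freeing the old frame via the XDP API */
      f->buf = nbuf;
      f->data = nbuf;
      return 0;
  }
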
Camelia Groza
d7af04486d dpaa_eth: rename current skb A050385 erratum workaround
Explicitly point that the current workaround addresses skbs. This change is
in preparation for adding a workaround for XDP scenarios.

Acked-by: Madalin Bucur <madalin.bucur@oss.nxp.com>
Signed-off-by: Camelia Groza <camelia.groza@nxp.com>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-30 17:33:23 -08:00
Camelia Groza
a1e031ffb4 dpaa_eth: add XDP_REDIRECT support
After transmission, the frame is returned on confirmation queues for
cleanup. For this, store a backpointer to the xdp_frame in the private
reserved area at the start of the TX buffer.

No TX batching support is implemented at this time.

Acked-by: Madalin Bucur <madalin.bucur@oss.nxp.com>
Signed-off-by: Camelia Groza <camelia.groza@nxp.com>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-30 17:33:21 -08:00
Camelia Groza
d57e57d0cd dpaa_eth: add XDP_TX support
Use an xdp_frame structure for managing the frame. Store a backpointer to
the structure at the start of the buffer before enqueueing for cleanup
on TX confirmation. Reserve DPAA_TX_PRIV_DATA_SIZE bytes from the frame
size shared with the XDP program for this purpose. Use the XDP
API for freeing the buffer when it returns to the driver on the TX
confirmation path.

The frame queues are shared with the netstack. The DPAA driver is an LLTX
driver, so no explicit locking is required on transmission.

This approach will be reused for XDP REDIRECT.

Acked-by: Madalin Bucur <madalin.bucur@oss.nxp.com>
Signed-off-by: Camelia Groza <camelia.groza@nxp.com>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-30 17:33:21 -08:00
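
The backpointer scheme, i.e. a small software-annotation area at the start of the TX buffer holding a pointer to the frame descriptor so it can be freed on confirmation, is sketched here with hypothetical names; it is not the dpaa_eth code.

  /* Sketch of the TX-confirmation backpointer described above.  All names
   * are illustrative; free() stands in for returning the frame via the XDP
   * API on the confirmation path.
   */
  #include <stdlib.h>

  struct xdp_frame_model { void *data; };

  /* Private software-annotation area reserved at the start of every TX buffer. */
  struct swa_model {
      struct xdp_frame_model *xdpf;
  };

  #define TX_PRIV_DATA_SIZE sizeof(struct swa_model)

  /* On transmit: stash the backpointer, then hand the region starting at
   * buf + TX_PRIV_DATA_SIZE to the hardware (the XDP program never sees the
   * reserved bytes).
   */
  void tx_store_backpointer(void *buf, struct xdp_frame_model *xdpf)
  {
      ((struct swa_model *)buf)->xdpf = xdpf;
  }

  /* On TX confirmation: recover the frame from the buffer and free it. */
  void tx_confirm(void *buf)
  {
      struct xdp_frame_model *xdpf = ((struct swa_model *)buf)->xdpf;

      free(xdpf);
  }
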
Camelia Groza
828eadbacc dpaa_eth: limit the possible MTU range when XDP is enabled
Implement the ndo_change_mtu callback to prevent users from setting an
MTU that would permit processing of S/G frames. The maximum MTU size
is dependent on the buffer size.

Acked-by: Madalin Bucur <madalin.bucur@oss.nxp.com>
Signed-off-by: Camelia Groza <camelia.groza@nxp.com>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-30 17:33:18 -08:00
Camelia Groza
86c0c196cb dpaa_eth: add basic XDP support
Implement the XDP_DROP and XDP_PASS actions.

Avoid draining and reconfiguring the buffer pool at each XDP
setup/teardown by increasing the frame headroom and reserving
XDP_PACKET_HEADROOM bytes from the start. Since we always reserve an
entire page per buffer, this change only impacts jumbo frame scenarios,
where the maximum linear frame size is reduced by 256 bytes. Multi-buffer
Scatter/Gather frames are now used instead in these scenarios.

Allow XDP programs to access the entire buffer.

The data in the received frame's headroom can be overwritten by the XDP
program. Extract the relevant fields from the headroom while they are
still available, before running the XDP program.

Since the headroom might be resized before the frame is passed up to the
stack, remove the check for a fixed headroom value when building an skb.

Allow the metadata to be updated and pass the information up the stack.

Scatter/Gather frames are dropped when XDP is enabled.

Acked-by: Madalin Bucur <madalin.bucur@oss.nxp.com>
Signed-off-by: Camelia Groza <camelia.groza@nxp.com>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-30 17:31:36 -08:00
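
For orientation, the generic shape of a DROP/PASS RX hook with reserved headroom is sketched below; the verdict values mirror the kernel's enum xdp_action, but the function itself is an illustrative stand-in, not the dpaa_eth RX path.

  /* Generic sketch of the DROP/PASS handling described above.  This is the
   * common shape of an XDP RX hook, not the dpaa_eth implementation.
   */
  #include <stdbool.h>
  #include <stddef.h>

  #define XDP_PACKET_HEADROOM 256  /* headroom reserved in front of the data */

  enum xdp_action_model { XDP_ABORTED_M, XDP_DROP_M, XDP_PASS_M };

  typedef enum xdp_action_model (*xdp_prog_t)(void *data, size_t len);

  /* Returns true if the frame should continue up the stack as an skb. */
  bool rx_run_xdp(xdp_prog_t prog, void *buf, size_t len)
  {
      void *data = (char *)buf + XDP_PACKET_HEADROOM;

      if (!prog)
          return true;  /* no program attached: normal RX */

      switch (prog(data, len)) {
      case XDP_PASS_M:
          return true;  /* build an skb and pass it on */
      case XDP_DROP_M:
      case XDP_ABORTED_M:
      default:
          return false; /* recycle the buffer */
      }
  }
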
Camelia Groza
fb9afd961c dpaa_eth: add struct for software backpointers
We maintain an skb backpointer in the software annotations area of Tx
frames. Introduce a structure for explicit handling.

Acked-by: Madalin Bucur <madalin.bucur@oss.nxp.com>
Signed-off-by: Camelia Groza <camelia.groza@nxp.com>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-30 17:31:28 -08:00
Marc Kleine-Budde
6d9986b46f can: m_can: m_can_class_unregister(): move right after m_can_class_register()
This patch moves the function m_can_class_unregister() directly after the
m_can_class_register() function.

Link: https://lore.kernel.org/r/20201130133713.269256-7-mkl@pengutronix.de
Reviewed-by: Dan Murphy <dmurphy@ti.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2020-11-30 14:55:41 +01:00
Marc Kleine-Budde
ba844cb96f can: m_can: m_can_plat_remove(): remove unneeded platform_set_drvdata()
There's no need to unset the drvdata on remove, so remove the unneeded call to
platform_set_drvdata() in m_can_plat_remove().

Link: https://lore.kernel.org/r/20201130133713.269256-6-mkl@pengutronix.de
Reviewed-by: Dan Murphy <dmurphy@ti.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2020-11-30 14:55:38 +01:00
Marc Kleine-Budde
3fb5a7cef9 can: m_can: remove not used variable struct m_can_classdev::freq
This patch removes the unused variable freq from the struct m_can_classdev.

Link: https://lore.kernel.org/r/20201130133713.269256-5-mkl@pengutronix.de
Reviewed-by: Dan Murphy <dmurphy@ti.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2020-11-30 14:55:35 +01:00
Marc Kleine-Budde
f566373fc5 can: m_can: Kconfig: convert into a menu
Since there is more than one base driver for the m_can core, let's
convert this into a menu.

Link: https://lore.kernel.org/r/20201130133713.269256-4-mkl@pengutronix.de
Reviewed-by: Dan Murphy <dmurphy@ti.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2020-11-30 14:55:32 +01:00
Marc Kleine-Budde
ca3ad869da can: tcan4x5x: tcan4x5x_can_probe(): remove probe failed error message
The driver core already emits a probe failed error message, so remove this one
from the driver.

Link: https://lore.kernel.org/r/20201130133713.269256-3-mkl@pengutronix.de
Reviewed-by: Dan Murphy <dmurphy@ti.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2020-11-30 14:55:29 +01:00
Marc Kleine-Budde
225dfc2552 can: tcan4x5x: remove mram_start and reg_offset from struct tcan4x5x_priv
Both struct tcan4x5x_priv::mram_start and struct tcan4x5x_priv::reg_offset are
only assigned once with a constant and then always used read-only. This patch
changes the driver to use the constant directly instead.

Link: https://lore.kernel.org/r/20201130133713.269256-2-mkl@pengutronix.de
Reviewed-by: Dan Murphy <dmurphy@ti.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2020-11-30 14:55:25 +01:00
Dan Murphy
018a0c5845 can: tcan4x5x: rename parse_config() function
Rename the tcan4x5x_parse_config() function to tcan4x5x_get_gpios() since the
function retrieves the gpio configurations from the firmware.

Signed-off-by: Dan Murphy <dmurphy@ti.com>
Link: http://lore.kernel.org/r/20200226140358.30017-1-dmurphy@ti.com
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2020-11-30 14:37:35 +01:00
Sean Nyekjaer
d1390d7d55 can: tcan4x5x: tcan4x5x_clear_interrupts(): remove redundant return statement
This patch removes a redundant return at the end of
tcan4x5x_clear_interrupts().

Signed-off-by: Sean Nyekjaer <sean@geanix.com>
Link: http://lore.kernel.org/r/20191211141635.322577-1-sean@geanix.com
Reported-by: Daniels Umanovskis <daniels@umanovskis.se>
Acked-by: Dan Murphy <dmurphy@ti.com>
Fixes: 5443c226ba ("can: tcan4x5x: Add tcan4x5x driver to the kernel")
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2020-11-29 21:19:12 +01:00
Marc Kleine-Budde
68c0c1c7f9 can: mcp251xfd: tef-path: reduce number of SPI core requests to set UINC bit
Reduce the number of separate SPI core requests when setting the UINC bit in
the TEF FIFO, and instead batch them up into a single SPI core request.

Link: https://lore.kernel.org/r/20201126132144.351154-6-mkl@pengutronix.de
Tested-by: Thomas Kopp <thomas.kopp@microchip.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2020-11-29 21:19:12 +01:00
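
The batching idea generalizes to "queue one register write per processed element and submit them all as one bus request"; the sketch below shows that shape with a hypothetical bus helper and register layout — it is not the mcp251xfd driver or the kernel SPI API.

  /* Sketch of the batching described above: instead of issuing one bus
   * request per processed FIFO element to set the UINC bit, queue one write
   * per element and submit them all as a single request.
   */
  #include <stddef.h>
  #include <stdint.h>

  #define FIFOCON_REG  0x05c       /* illustrative register address */
  #define FIFOCON_UINC (1u << 8)   /* illustrative UINC bit position */

  struct reg_write { uint16_t reg; uint32_t val; };

  /* Pretend bus helper: submits all queued writes in one transaction. */
  static int bus_submit_batch(const struct reg_write *writes, size_t n)
  {
      (void)writes;
      (void)n;
      return 0;  /* one underlying transaction for all 'n' writes */
  }

  /* Acknowledge 'n' processed FIFO elements with one batched request instead
   * of 'n' separate ones.
   */
  int fifo_ack_elements(size_t n)
  {
      struct reg_write writes[32];
      size_t i;

      if (n > 32)
          n = 32;
      for (i = 0; i < n; i++) {
          writes[i].reg = FIFOCON_REG;  /* same register, once per element */
          writes[i].val = FIFOCON_UINC;
      }
      return bus_submit_batch(writes, n);
  }
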
Marc Kleine-Budde
63e70488b4 can: mcp251xfd: move struct mcp251xfd_tef_ring definition
This patch moves the struct mcp251xfd_tef_ring upwards, so that the union
mcp251xfd_write_reg_buf and struct spi_transfer can be made members of it.

Link: https://lore.kernel.org/r/20201126132144.351154-5-mkl@pengutronix.de
Tested-by: Thomas Kopp <thomas.kopp@microchip.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2020-11-29 21:19:12 +01:00
Marc Kleine-Budde
dada6a6c7d can: mcp251xfd: struct mcp251xfd_priv::tef to array of length 1
This patch converts the struct mcp251xfd_tef_ring member within the struct
mcp251xfd_priv into an array of length one. This way all rings (tef, tx and rx)
can be accessed in the same way.

Link: https://lore.kernel.org/r/20201126132144.351154-4-mkl@pengutronix.de
Tested-by: Thomas Kopp <thomas.kopp@microchip.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2020-11-29 21:19:12 +01:00
Ursula Maplehurst
1f652bb6ba can: mcp25xxfd: rx-path: reduce number of SPI core requests to set UINC bit
Reduce the number of separate SPI core requests when setting the UINC bit in
the RX FIFO, and instead batch them up into a single SPI core request.

Link: https://github.com/marckleinebudde/linux/issues/4
Link: https://lore.kernel.org/r/20201126132144.351154-3-mkl@pengutronix.de
Tested-by: Thomas Kopp <thomas.kopp@microchip.com>
Signed-off-by: Ursula Maplehurst <ursula@kangatronix.co.uk>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2020-11-29 21:19:12 +01:00
Marc Kleine-Budde
4843ad9b61 can: mcp251xfd: mcp25xxfd_ring_alloc(): add define instead of open coding the maximum number of RX objects
This patch adds a define for the maximum number of RX objects instead of open
coding it.

Link: https://lore.kernel.org/r/20201126132144.351154-2-mkl@pengutronix.de
Tested-by: Thomas Kopp <thomas.kopp@microchip.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2020-11-29 21:19:12 +01:00
Gustavo A. R. Silva
368444dd7a can: pcan_usb_core: fix fall-through warnings for Clang
In preparation to enable -Wimplicit-fallthrough for Clang, fix a warning by
moving the "default" to the end of the "switch" statement and explicitly adding
a break statement instead of letting the code fall through to the next case.

Link: https://github.com/KSPP/linux/issues/115
Reported-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Link: http://lore.kernel.org/r/aab7cf16bf43cc7c3e9c9930d2dae850c1d07a3c.1605896059.git.gustavoars@kernel.org
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
[mkl: move default to end]
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2020-11-29 21:19:11 +01:00
Jakub Kicinski
e71d2b957e Merge branch 'net-ipa-start-adding-ipa-v4-5-support'
Alex Elder says:

====================
net: ipa: start adding IPA v4.5 support

This series starts updating the IPA code to support IPA hardware
version 4.5.

The first patch fixes a problem found while preparing these updates.
Testing shows the code works with or without the change, and with
the fix the code matches "downstream" Qualcomm code.

The second patch updates the definitions for IPA register offsets
and field masks to reflect the changes that come with IPA v4.5.  A
few register updates have been deferred until later, because making
use of them involves some nontrivial code updates.

One type of change that IPA v4.5 brings is expanding the range of
certain configuration values.  High-order bits are added in a few
cases, and the third patch implements the code changes necessary to
use those newly available bits.

The fourth patch implements several fairly minor changes to the code
required for IPA v4.5 support.

The last two patches implement changes to the GSI registers used for
IPA.  Almost none of the registers change, but the range of memory
in which most of the GSI registers is located is shifted by a fixed
amount.  The fifth patch updates the GSI register definitions, and
the last patch implements the memory shift for IPA v4.5.
====================

Link: https://lore.kernel.org/r/20201125204522.5884-1-elder@linaro.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-28 12:14:02 -08:00
Alex Elder
cdeee49f3e net: ipa: adjust GSI register addresses
Almost all of the GSI registers we use have different offsets
starting at IPA version 4.5.  Only two registers remain
in their original location.

In a way though, the new register locations are not *that*
different.  The entire group of affected registers has simply
been shifted down in memory by a fixed amount (0xd000).  So for
example, the channel context 0 register that has a base offset of
0x0001c000 for "older" hardware now has a base offset of 0x0000f000.

This patch aims to add support for IPA v4.5 registers at their new
offsets in a way that minimizes the amount of code that needs to
change.  It is not ideal, but it avoids the need to maintain
a nearly complete set of additional register offset definitions.

The approach takes advantage of the fact that when accessing GSI
registers we do not access any of memory at lower end of the "gsi"
memory range (with two exceptions already noted).  In particular,
we do not access anything within the bottom 0xd000 bytes of the
GSI memory range.

For IPA version 4.5, after we map the GSI memory, we adjust the
virtual memory pointer downward by the fixed amount (0xd000).
That way, register accesses using the offsets defined by the
existing GSI_REG_*() macros will resolve to the proper locations
for IPA version 4.5.

The two registers *not* affected by this offset are accessed only
in gsi_irq_setup().  There, for IPA version 4.5, we undo the general
register adjustment by adding the fixed amount back to the virtual
address to access these registers.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-28 12:13:55 -08:00
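
The pointer-bias trick described above can be shown in a few lines: bias the mapped base by the 0xd000 shift so the old offset macros resolve to the new locations, and undo the bias for the two unmoved registers. The struct and helpers below are an illustrative model, not the ipa/gsi code; only the 0xd000 constant comes from the commit text.

  /* Model of the base-pointer bias described above.  Only the 0xd000 shift is
   * taken from the commit text; everything else is an illustrative stand-in.
   */
  #include <stdint.h>

  #define GSI_SHIFT 0xd000  /* amount the register block moved down in memory */

  struct gsi_model {
      uint8_t *virt_raw;  /* pointer returned by mapping the GSI region */
      uint8_t *virt;      /* biased pointer used with the existing offset macros */
      int new_layout;     /* true for IPA v4.5+ */
  };

  void gsi_map(struct gsi_model *gsi, uint8_t *mapped, int new_layout)
  {
      gsi->virt_raw = mapped;
      gsi->new_layout = new_layout;
      /* Biased base + old offset == new register location; the biased pointer
       * is never dereferenced on its own, since all accessed offsets on the
       * new layout are >= GSI_SHIFT.
       */
      gsi->virt = new_layout ? mapped - GSI_SHIFT : mapped;
  }

  /* The two registers that did not move are accessed via the raw base. */
  volatile uint32_t *gsi_unmoved_reg(struct gsi_model *gsi, uint32_t offset)
  {
      return (volatile uint32_t *)(gsi->virt_raw + offset);
  }

  /* Everything else uses the (possibly biased) base with the old offsets. */
  volatile uint32_t *gsi_reg(struct gsi_model *gsi, uint32_t offset)
  {
      return (volatile uint32_t *)(gsi->virt + offset);
  }
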
Alex Elder
b0b6f0ddce net: ipa: update gsi registers for IPA v4.5
Very few GSI register definitions change for IPA v4.5; however,
as a group their position in memory shifts by a constant amount
(handled by the next commit).

Add definitions and update comments to the set of GSI registers to
support changes that come with IPA v4.5.

Update the logic in gsi_channel_program() to accommodate the new
(expanded) PREFETCH_MODE field in the CH_C_QOS register.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-28 12:13:55 -08:00
Alex Elder
8bfc4e21d5 net: ipa: add support to code for IPA v4.5
Update the IPA code to make use of the updated IPA v4.5 register
definitions.  Generally what this patch does is, if IPA v4.5
hardware is in use:
  - Ensure new registers or fields in IPA v4.5 are updated where
    required
  - Ensure registers or fields not supported in IPA v4.5 are not
    examined when read, or are set to 0 when written
It does this while preserving the existing functionality for IPA
versions lower than v4.5.

The values to program for QSB_MAX_READS and QSB_MAX_WRITES and the
source and destination resource counts are updated to be correct for
all versions through v4.5 as well.

Note that IPA_RESOURCE_GROUP_SRC_MAX and IPA_RESOURCE_GROUP_DST_MAX
already reflect that 5 is an acceptable number of resources (which
IPA v4.5 implements).

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-28 12:13:55 -08:00
Alex Elder
1af15c2a78 net: ipa: add new most-significant bits to registers
IPA v4.5 adds a few fields to the endpoint header and extended
header configuration registers that represent new high-order bits
for certain offsets and sizes.  Add code to incorporate these upper
bits into the registers for IPA v4.5.

This includes creating ipa_header_size_encoded(), which handles
encoding the metadata offset field for use in the ENDP_INIT_HDR
register in a way appropriate for the hardware version.  This and
ipa_metadata_offset_encoded() ensure the mask argument passed to
u32_encode_bits() is constant.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-28 12:13:54 -08:00
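
The "new most-significant bits" pattern means a value is now split across the original low-order field and a new high-order field; a minimal sketch of such an encoder follows, with made-up masks and a simplified u32_encode_bits() — it is not the IPA register layout or helpers.

  /* Sketch of encoding a value across an original low field plus a new MSB
   * field, as described above.  Masks are made up; encode_bits() is a
   * simplified, runtime version of the kernel's u32_encode_bits().
   */
  #include <stdint.h>

  #define HDR_LEN_LOW_MASK 0x0000003f  /* original 6-bit field */
  #define HDR_LEN_MSB_MASK 0x00030000  /* two new high-order bits elsewhere */

  static uint32_t encode_bits(uint32_t val, uint32_t mask)
  {
      return (val << __builtin_ctz(mask)) & mask;  /* gcc/clang builtin */
  }

  /* Encode a header length that may exceed the original 6-bit field. */
  uint32_t header_size_encoded(int new_hw, uint32_t header_size)
  {
      uint32_t val = encode_bits(header_size & 0x3f, HDR_LEN_LOW_MASK);

      if (new_hw)  /* the upper bits only exist on the newer hardware */
          val |= encode_bits(header_size >> 6, HDR_LEN_MSB_MASK);
      return val;
  }
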
Alex Elder
5b6cd69e89 net: ipa: update IPA registers for IPA v4.5
Update "ipa_reg.h" so that register definitions support IPA hardware
version 4.5, in addition to versions 3.5.1 through v4.2.  Most of
the register definitions are the same, but in some cases fields are
added, changed, or eliminated.

Updates for a few IPA v4.5 registers are more complex, and adding
those definitions will be deferred to separate patches.  This patch
only updates the register offset and field definitions, and adds
informational comments.

The only code change avoids accessing the backward compatibility
register for IPA version 4.5 in ipa_hardware_config().  Other IPA
v4.5-specific code changes will come later.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-28 12:13:54 -08:00
Alex Elder
9f84819860 net: ipa: reverse logic on escape buffer use
Starting with IPA v4.2 there is a GSI channel option to use an
"escape buffer" instead of prefetch buffers.  This should be used
for all channels *except* the AP command TX channel.  The logic
that implements this has it backwards; fix this bug.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-28 12:13:54 -08:00
Marcelo Ricardo Leitner
3567e23379 net/sched: act_ct: enable stats for HW offloaded entries
By setting NF_FLOWTABLE_COUNTER. Otherwise, the updates added by
commit ef803b3cf9 ("netfilter: flowtable: add counter support in HW
offload") are not effective when using act_ct.

While at it, now that we have the flag set, protect the call to
nf_ct_acct_update() added by commit beb97d3a31 ("net/sched: act_ct: update
nf_conn_acct for act_ct SW offload in flowtable") with the check on
NF_FLOWTABLE_COUNTER, as is also done in other places.

Note that this shouldn't impact performance as these stats are only
enabled when net.netfilter.nf_conntrack_acct is enabled.

Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Acked-by: wenxu <wenxu@ucloud.cn>
Acked-by: Pablo Neira Ayuso <pablo@netfilter.org>
Link: https://lore.kernel.org/r/481a65741261fd81b0a0813e698af163477467ec.1606415787.git.marcelo.leitner@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-28 11:43:40 -08:00
Jakub Kicinski
5c39f26e67 Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Trivial conflict in CAN, keep the net-next + the byteswap wrapper.

Conflicts:
	drivers/net/can/usb/gs_usb.c

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-27 18:25:27 -08:00