Replace ip_tunnel_get_stats64() with the new identical core function
dev_get_tstats64().
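For a driver that already uses dev->tstats the conversion is just a swap
of the ndo_get_stats64 callback in its net_device_ops; a minimal sketch
(the foo_* names are illustrative, not taken from this patch):

  static const struct net_device_ops foo_netdev_ops = {
          .ndo_open        = foo_open,
          .ndo_stop        = foo_stop,
          .ndo_start_xmit  = foo_xmit,
          .ndo_get_stats64 = dev_get_tstats64, /* was ip_tunnel_get_stats64 */
  };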
Reviewed-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Replace ip_tunnel_get_stats64() with the new identical core function
dev_get_tstats64().
Acked-by: Harald Welte <laforge@gnumonks.org>
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Replace ip_tunnel_get_stats64() with the new identical core function
dev_get_tstats64().
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Switch ip6_tunnel to the standard statistics pattern:
- use dev->stats for the less frequently accessed counters
- use dev->tstats for the frequently accessed counters
An additional benefit is that we now have 64bit statistics also on
32bit systems.
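For the frequently accessed counters the hot path then follows the usual
per-cpu pattern; a minimal sketch of a receive-side update (not the exact
ip6_tunnel code):

  struct pcpu_sw_netstats *tstats = this_cpu_ptr(dev->tstats);

  u64_stats_update_begin(&tstats->syncp);
  tstats->rx_packets++;
  tstats->rx_bytes += skb->len;
  u64_stats_update_end(&tstats->syncp);

The rarely hit error counters remain plain increments such as
dev->stats.rx_errors++.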
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Switch tun to the standard statistics pattern:
- use netdev->stats for the less frequently accessed counters
- use netdev->tstats for the frequently accessed per-cpu counters
v3:
- add atomic_long_t member rx_frame_errors for making counter updates
atomic
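A sketch of how such an atomic counter is handled (placement of the field
in tun_struct and the surrounding code are illustrative):

  /* drop path, lockless: */
  atomic_long_inc(&tun->rx_frame_errors);

  /* ndo_get_stats64: */
  stats->rx_frame_errors = atomic_long_read(&tun->rx_frame_errors);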
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Use netdev->tstats instead of a member of dsa_slave_priv for storing
a pointer to the per-cpu counters. This allows us to use core
functionality for statistics handling.
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Vladimir Oltean <olteanv@gmail.com>
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
It's a frequent pattern to use netdev->stats for the less frequently
accessed counters and per-cpu counters for the frequently accessed
counters (rx/tx bytes/packets). Add a default ndo_get_stats64()
implementation for this use case.
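Such a default boils down to combining the two existing helpers; a sketch
of the new function (close to, though not necessarily byte-for-byte, the
final code):

  void dev_get_tstats64(struct net_device *dev, struct rtnl_link_stats64 *s)
  {
          netdev_stats_to_stats64(s, &dev->stats);
          dev_fetch_sw_netstats(s, dev->tstats);
  }
  EXPORT_SYMBOL_GPL(dev_get_tstats64);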
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Export the raw VTU data and related registers in a devlink region so
that it can be inspected from userspace and compared to the current
bridge configuration.
Signed-off-by: Tobias Waldekranz <tobias@waldekranz.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://lore.kernel.org/r/20201109082927.8684-1-tobias@waldekranz.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
The .config_aneg callback in microchip_t1 is genphy_config_aneg, so it's
not needed, because the phy core will call genphy_config_aneg() if
.config_aneg is NULL.
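For reference, a rough sketch of the dispatch in the phy core (simplified;
the real phy_config_aneg() also has a Clause 45 special case):

  if (phydev->drv->config_aneg)
          return phydev->drv->config_aneg(phydev);

  return genphy_config_aneg(phydev);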
Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://lore.kernel.org/r/20201109091605.3951c969@xhacker.debian
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Replace list_head with hlist_head for MRP list under the bridge.
There is no need for a circular list when a linear list will work.
This will also decrease the size of 'struct net_bridge'.
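A sketch of the resulting layout and walk (member names follow the MRP
code but should be treated as illustrative):

  /* in struct net_bridge: */
  struct hlist_head mrp_list;     /* one pointer instead of list_head's two */

  /* iteration stays just as simple: */
  struct br_mrp *mrp;

  hlist_for_each_entry(mrp, &br->mrp_list, list)
          br_mrp_process(mrp);    /* hypothetical callee */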
Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Link: https://lore.kernel.org/r/20201106215049.1448185-1-horatiu.vultur@microchip.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Tanner Love says:
====================
net/packet: make packet_fanout.arr size configurable up to 64K
First patch makes the change; second patch adds unit tests.
====================
Link: https://lore.kernel.org/r/20201106180741.2839668-1-tannerlove.kernel@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Add an additional control test that verifies:
-specifying two different max_num_members values fails
-specifying max_num_members > PACKET_FANOUT_MAX fails
In datapath tests, set max_num_members to PACKET_FANOUT_MAX.
Signed-off-by: Tanner Love <tannerlove@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
One use case of PACKET_FANOUT is lockless reception with one socket
per CPU. 256 is a practical limit on increasingly many machines.
Increase PACKET_FANOUT_MAX to 64K. Expand setsockopt PACKET_FANOUT to
take an extra argument max_num_members. Also explicitly define a
fanout_args struct, instead of implicitly casting to an integer. This
documents the API and simplifies the control flow.
If max_num_members is not specified or is set to 0, then 256 is used,
same as before.
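A minimal userspace sketch of the extended setsockopt (group id and size
are arbitrary choices):

  #include <linux/if_packet.h>
  #include <sys/socket.h>

  struct fanout_args args = {
          .id              = 7,                  /* fanout group id */
          .type_flags      = PACKET_FANOUT_CPU,
          .max_num_members = 4096,               /* 0 keeps the old 256 */
  };

  setsockopt(fd, SOL_PACKET, PACKET_FANOUT, &args, sizeof(args));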
Signed-off-by: Tanner Love <tannerlove@google.com>
Signed-off-by: Willem de Bruijn <willemb@google.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
When udp_memory_allocated is at the limit, __udp_enqueue_schedule_skb
will return -ENOBUFS, and the skb will be dropped in __udp_queue_rcv_skb
without any counter being incremented. It's hard to find out what
happened once this occurs.
So introduce a UDP_MIB_MEMERRORS counter to account for these drops. The
change is friendly to existing users, such as netstat:
$ netstat -u -s
Udp:
    0 packets received
    639 packets to unknown port received.
    158689 packet receive errors
    180022 packets sent
    RcvbufErrors: 20930
    MemErrors: 137759
UdpLite:
IpExt:
    InOctets: 257426235
    OutOctets: 257460598
    InNoECTPkts: 181177
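On the kernel side the new accounting amounts to roughly the following on
the drop path (a sketch; the exact placement in __udp_queue_rcv_skb() may
differ):

  if (rc == -ENOBUFS)
          UDP_INC_STATS(sock_net(sk), UDP_MIB_MEMERRORS, is_udplite);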
v2:
- Fix some alignment problems
Signed-off-by: Menglong Dong <dong.menglong@zte.com.cn>
Link: https://lore.kernel.org/r/1604627354-43207-1-git-send-email-dong.menglong@zte.com.cn
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Set all EHL/TGL phy_addr entries to -1 so that the driver automatically
detects the PHY at run-time by probing all 32 possible addresses.
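In the platform data this amounts to (a sketch of the EHL/TGL-specific
setup):

  plat->phy_addr = -1;    /* probe all 32 MDIO addresses at run-time */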
Signed-off-by: Voon Weifeng <weifeng.voon@intel.com>
Signed-off-by: Wong Vee Khee <vee.khee.wong@intel.com>
Link: https://lore.kernel.org/r/20201106094341.4241-1-vee.khee.wong@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Alex Elder says:
====================
net: ipa: constrain GSI interrupts
The goal of this series is to more tightly control when GSI
interrupts are enabled. This is a long-ish series, so I'll
describe it in parts.
The first patch is actually unrelated... I forgot to include
it in my previous series (which exposed the GSI layer to the
IPA version). It is a trivial comments-only update patch.
The second patch defers registering the GSI interrupt handler
until *after* all of the resources that handler touches have
been initialized. In practice, we don't see this interrupt
that early, but this precludes an obvious problem.
The next two patches are simple changes. The first just
trivially renames a field. The second switches from using
constant mask values to using an enumerated type of bit
positions to represent each GSI interrupt type.
The rest implement the "real work." First, all interrupts
are disabled at initialization time. Next, we keep track of
a bitmask of enabled GSI interrupt types, updating it each
time we enable or disable one of them. From there we have
a set of patches that one-by-one enable each interrupt type
only during the period it is required. This includes allowing
a channel to generate IEOB interrupts only when it has been
enabled. And finally, the last patch simplifies some code
now that all GSI interrupt types are handled uniformly.
====================
Link: https://lore.kernel.org/r/20201105181407.8006-1-elder@linaro.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Now that all of the GSI interrupts are handled uniformly,
change gsi_irq_type_update() so it takes a value. Have the
function assign that value to the cached mask of enabled GSI
IRQ types before writing it to hardware.
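The end result is roughly the following (a sketch; the register offset
name follows the driver):

  static void gsi_irq_type_update(struct gsi *gsi, u32 val)
  {
          gsi->type_enabled_bitmap = val;
          iowrite32(val, gsi->virt + GSI_CNTXT_TYPE_IRQ_MSK_OFFSET);
  }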
Note that gsi_irq_teardown() will only be called after
gsi_irq_disable(), so it's not necessary for the former
to disable all IRQ types. Get rid of that.
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Most GSI general errors are unrecoverable without a full reset.
Despite that, we want to receive these errors so we can at least
report what happened before whatever undefined behavior ensues.
Explicitly disable all such interrupts in gsi_irq_setup(), then
enable those we want in gsi_irq_enable(). List the interrupt types
we are interested in (everything but breakpoint) explicitly rather
than using GSI_CNTXT_GSI_IRQ_ALL, and remove that symbol's
definition.
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
It is possible for other execution environments (EEs, like the modem)
to request changes to local (AP) channel or event ring state. We do
not support this feature.
In gsi_irq_setup(), explicitly zero the mask that defines which
channels are permitted to generate inter-EE channel state change
interrupts. Do the same for the event ring mask.
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
A GSI channel must be started before it can be used to perform a
data transfer (or command) transaction. And the only time we'll see
an IEOB interrupt is if we send a transaction to a started channel.
Therefore we do not need to have the IEOB interrupt type enabled
until at least one channel has been started. And once the last
started channel has been stopped, we can disable the IEOB interrupt
type again.
We already enable the IEOB interrupt for a particular channel only
when it is started. Extend that by having the IEOB interrupt *type*
be enabled only when at least one channel is in STARTED state.
Disallow all channels from triggering the IEOB interrupt in
gsi_irq_setup().  We only enable a channel's interrupt when
needed, so there is no longer any need to zero the channel mask
in gsi_irq_disable().
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
The completion of a generic EE GSI command is signaled by a global
interrupt of type GP_INT1. The only other used type for a global
interrupt is a hardware error report.
First, disallow all global interrupt types in gsi_irq_setup(). We
want to know about hardware errors, so re-enable the interrupt type
in gsi_irq_enable(), to allow hardware errors to be reported.
Disable that interrupt type again in gsi_irq_disable().
We only issue generic EE commands one at a time, and there's no
reason to keep the completion interrupt enabled when no generic
EE command is pending. We furthermore have no need to enable the
GP_INT2 or GP_INT3 interrupt types (which aren't used).
The change in gsi_irq_enable() makes GSI_CNTXT_GLOB_IRQ_ALL unused,
so get rid of it. Have gsi_generic_command() enable the GP_INT1
interrupt type (in addition to the ERROR_INT type) only while a
generic command is pending.
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
A GSI event ring causes an event control interrupt to fire whenever
its state changes (between NOT_ALLOCATED and ALLOCATED). No event
ring should ever change state except when we request it to.
Currently, we permit *all* event rings to generate event control
interrupts--even those that are never used. And we enable event
control interrupts essentially at all times, from setup to teardown.
Instead, only enable the event control interrupt type for the
duration of an event ring command, and when doing so, only allow
the event ring being operated upon to cause the interrupt to fire.
Disallow all event rings from issuing the event control interrupt
in gsi_irq_setup().
Because an event ring's interrupt is only enabled when needed,
there is no longer any need to zero the event channel mask in
gsi_irq_disable().
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
A GSI channel causes a channel control interrupt to fire whenever
its state changes (between NOT_ALLOCATED, ALLOCATED, STARTED, etc.).
We do not support inter-EE channel commands (initiated by other EEs),
so no channel should ever change state except when we request it to.
Currently, we permit *all* channels to generate channel control
interrupts--even those that are never used. And we enable channel
control interrupts essentially at all times, from setup to teardown.
Instead, disable all channel control interrupts initially in
gsi_irq_setup(), and only enable the channel control interrupt
type for the duration of a channel command. When doing so, only
allow the channel being operated upon to cause the interrupt to
fire.
Because a channel's interrupt is now enabled only when needed (one
channel at a time), there is no longer any need to zero the channel
mask in gsi_irq_disable().
Add new gsi_irq_type_enable() and gsi_irq_type_disable() as helper
functions to control whether a given GSI interrupt type is enabled.
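A sketch of the helpers, layered on the cached mask (shown in their final
form, with gsi_irq_type_update() taking a value as described earlier in
this log):

  static void gsi_irq_type_enable(struct gsi *gsi, enum gsi_irq_type_id type_id)
  {
          gsi_irq_type_update(gsi, gsi->type_enabled_bitmap | BIT(type_id));
  }

  static void gsi_irq_type_disable(struct gsi *gsi, enum gsi_irq_type_id type_id)
  {
          gsi_irq_type_update(gsi, gsi->type_enabled_bitmap & ~BIT(type_id));
  }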
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Keep track of the set of GSI interrupt types that are currently
enabled by recording the mask value to write (or last written) to
the TYPE_IRQ_MSK register.
Create a new helper function gsi_irq_type_update() to handle
actually writing the register.
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Introduce gsi_irq_setup() and gsi_irq_teardown() to disable all
GSI interrupts when first setting up GSI hardware, and to clean
things up when we're done.
Re-enable all GSI interrupt types in gsi_irq_enable(), but do
so only after each of the type-specific interrupt masks has
been configured. Similarly, disable all interrupt types in
gsi_irq_disable()--first--before zeroing out the type-specific
masks.
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Define the GSI interrupt types with an enumerated type whose values
are the bit positions representing each interrupt type. Include a
short comment describing how each interrupt type is used.
Build up the enabled interrupt mask explicitly in gsi_irq_enable(),
and get rid of the definition of GSI_CNTXT_TYPE_IRQ_MSK_ALL.
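For orientation, a sketch of such an enumerated type (the names follow the
driver's gsi_reg.h but should be treated as illustrative):

  enum gsi_irq_type_id {
          GSI_CH_CTRL             = 0x0,  /* channel allocation, etc. */
          GSI_EV_CTRL             = 0x1,  /* event ring allocation, etc. */
          GSI_GLOB_EE             = 0x2,  /* global/general events */
          GSI_IEOB                = 0x3,  /* TRE completion */
          GSI_INTER_EE_CH_CTRL    = 0x4,  /* remote channel control (unused) */
          GSI_INTER_EE_EV_CTRL    = 0x5,  /* remote event control (unused) */
          GSI_GENERAL             = 0x6,  /* general-purpose notifications */
  };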
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Rename the "event_enable_bitmap" field of the GSI structure to be
"ieob_enabled_bitmap". An upcoming patch will cache the last value
stored for another interrupt mask and this is a more direct naming
convention to follow.
Add a few comments to explain the bitmap fields in the GSI structure.
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Introduce gsi_irq_init() and gsi_irq_exit(), to encapsulate looking
up the GSI IRQ and registering its handler. Call gsi_irq_init() a
little later in gsi_init(), and initialize the completion earlier.
The IRQ handler accesses both the GSI virtual memory pointer and the
completion, and this way these things will have been initialized
before gsi_irq() can ever be called.
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
The GSI code is now exposed to IPA version numbers, and we handle
version-specific behavior based on the IPA version.
Modify some comments that talk about GSI versions so they reference
IPA versions instead. Correct version number errors in a couple of
these comments.
The (comment) mapping between IPA and GSI versions in the definition
of the ipa_version enumerated type remains.
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
This driver transports LAPB (X.25 link layer) frames over TTY links.
I can safely say that this driver has no actual user because it was
not working at all until:
commit 8fdcabeac3 ("drivers/net/wan/x25_asy: Fix to make it work")
The code in its current state still has problems:
1.
The uses of "struct x25_asy" in x25_asy_unesc (when receiving) and in
x25_asy_write_wakeup (when sending) are not protected by locks against
x25_asy_change_mtu's changing of the transmitting/receiving buffers.
Also, all "netif_running" checks in this driver are not protected by
locks against the ndo_stop function.
2.
The driver stops all TTY read/write when the netif is down.
I think this is not right because this may cause the last outgoing frame
before the netif goes down to be incompletely transmitted, and the first
incoming frame after the netif goes up to be incompletely received.
And there may also be other problems.
I was planning to fix these problems but after recent discussions about
deleting other old networking code, I think we may just delete this
driver, too.
Signed-off-by: Xie He <xie.he.0141@gmail.com>
Acked-by: Martin Schiller <ms@dev.tdt.de>
Link: https://lore.kernel.org/r/20201105073434.429307-1-xie.he.0141@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Refactor idt82p33_xfer and use i2c_master_send for the write operation,
because some I2C controllers only work with single-burst write
transactions.
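The write path then becomes a single burst; a rough sketch (buffer sizing
and names are illustrative, not the exact driver code):

  u8 msg[IDT_MAX_WRITE + 1];    /* hypothetical bound */

  msg[0] = regaddr;
  memcpy(&msg[1], buf, count);

  /* register address and payload in one transfer */
  err = i2c_master_send(client, msg, count + 1);
  if (err < 0)
          return err;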
Signed-off-by: Min Li <min.li.xe@renesas.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Link: https://lore.kernel.org/r/1604634729-24960-2-git-send-email-min.li.xe@renesas.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
The initialization of err with 0 is useless, as it is soon overwritten
with -ENOMEM, so we can remove it.
Changes since v1:
-Keep -ENOMEM still.
Signed-off-by: Menglong Dong <dong.menglong@zte.com.cn>
Link: https://lore.kernel.org/r/1604541244-3241-1-git-send-email-dong.menglong@zte.com.cn
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Fix the gcc warning:
drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c:2673:9: warning: this 'for' clause does not guard... [-Wmisleading-indentation]
2673 | for (i = 0; i < n; ++i) \
Reported-by: Tosk Robot <tencent_os_robot@tencent.com>
Signed-off-by: Kaixu Xia <kaixuxia@tencent.com>
Link: https://lore.kernel.org/r/1604467444-23043-1-git-send-email-kaixuxia@tencent.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Radhey Shyam Pandey says:
====================
net: axienet: Dynamically enable MDIO interface
This patchset dynamically enables the MDIO interface. The background for
this change comes from the Cadence GEM controller (macb), in which MDC is
active only during MDIO read or write operations, i.e. while the PHY
registers are being read or written; there it is implemented as an IP
feature.
For axiethernet, dynamic MDC enable/disable is not supported in hardware,
so we are implementing it in software. This change doesn't affect any
existing functionality.
====================
Link: https://lore.kernel.org/r/1604402770-78045-1-git-send-email-radhey.shyam.pandey@xilinx.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
The MDIO spec does not require MDC to run at all times, only while MDIO
transactions are occurring. This patch allows the xilinx_axienet
driver to disable MDC when it is not in use, and re-enable it when
needed. It also simplifies the driver by removing the MDC disable
and enable in the device reset sequence.
Signed-off-by: Clayton Rayment <clayton.rayment@xilinx.com>
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Introduce helper functions to enable/disable the MDIO interface clock.
This change serves as a preparatory patch for the upcoming feature to
dynamically control the management bus clock.
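A sketch of what such helpers can look like (the register, mask and field
names follow xilinx_axienet.h but should be treated as assumptions):

  void axienet_mdio_mdc_enable(struct axienet_local *lp)
  {
          axienet_iow(lp, XAE_MDIO_MC_OFFSET,
                      (u32)lp->mii_clk_div | XAE_MDIO_MC_MDIOEN_MASK);
  }

  void axienet_mdio_mdc_disable(struct axienet_local *lp)
  {
          u32 mc_reg;

          mc_reg = axienet_ior(lp, XAE_MDIO_MC_OFFSET);
          axienet_iow(lp, XAE_MDIO_MC_OFFSET,
                      mc_reg & ~XAE_MDIO_MC_MDIOEN_MASK);
  }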
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Allen Pais says:
====================
net: convert tasklets to use new tasklet_setup API
Commit 12cc923f1c ("tasklet: Introduce new initialization API")
introduced a new tasklet initialization API. This series converts
all the net/* drivers to use the new tasklet_setup() API.
The following series is based on net-next (9faebeb2d).
v3:
introduce qdisc_from_priv, suggested by Eric Dumazet.
v2:
get rid of QDISC_ALIGN()
v1:
fix kerneldoc
====================
Link: https://lore.kernel.org/r/20201103091823.586717-1-allen.lkml@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
In preparation for unconditionally passing the
struct tasklet_struct pointer to all tasklet
callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.
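The shape of the conversion (foo_* names are illustrative, not from this
patch):

  /* before: */
  tasklet_init(&priv->task, foo_tasklet_fn, (unsigned long)priv);

  static void foo_tasklet_fn(unsigned long data)
  {
          struct foo_priv *priv = (struct foo_priv *)data;
          /* ... use priv ... */
  }

  /* after: */
  tasklet_setup(&priv->task, foo_tasklet_fn);

  static void foo_tasklet_fn(struct tasklet_struct *t)
  {
          struct foo_priv *priv = from_tasklet(priv, t, task);
          /* ... use priv ... */
  }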
Signed-off-by: Romain Perier <romain.perier@gmail.com>
Signed-off-by: Allen Pais <apais@linux.microsoft.com>
Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
In preparation for unconditionally passing the
struct tasklet_struct pointer to all tasklet
callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.
Signed-off-by: Romain Perier <romain.perier@gmail.com>
Signed-off-by: Allen Pais <apais@linux.microsoft.com>
Acked-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
In preparation for unconditionally passing the
struct tasklet_struct pointer to all tasklet
callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.
Signed-off-by: Romain Perier <romain.perier@gmail.com>
Signed-off-by: Allen Pais <apais@linux.microsoft.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
In preparation for unconditionally passing the
struct tasklet_struct pointer to all tasklet
callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.
Acked-by: Stefan Schmidt <stefan@datenfreihafen.org>
Signed-off-by: Romain Perier <romain.perier@gmail.com>
Signed-off-by: Allen Pais <apais@linux.microsoft.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
In preparation for unconditionally passing the
struct tasklet_struct pointer to all tasklet
callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.
Reviewed-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Romain Perier <romain.perier@gmail.com>
Signed-off-by: Allen Pais <apais@linux.microsoft.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>