Commit graph

Eric Dumazet
a00e74442b tcp/dccp: constify send_synack and send_reset socket argument
None of these functions need to change the socket, so make it
const.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-29 16:53:07 -07:00
David S. Miller
4c7e622ddf Merge branch 'ipv4-routing-cleanups'
Alexander Duyck says:

====================
Minor IPv4 routing cleanups

These patches just contain some minor cleanups to address a few minor
issues.  The first and the third mostly just improve readability.  The
second patch should improve the performance for multicast destination
addresses that do not have a localhost source IP address by avoiding some
unnecessary dereferences.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-29 16:27:47 -07:00
David Ahern
0d7539603b net: Remove martian_source_keep_err goto label
err is initialized to -EINVAL when it is declared. It is not reset until
fib_lookup, which is well after the 3 users of the martian_source jump. So
resetting err to -EINVAL at the martian_source label is not needed.

Removing that line obviates the need for the martian_source_keep_err label
so delete it.
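
The shape of the cleanup, on a simplified skeleton (not the exact hunk;
handle_martian_source() stands in for the real error handling):

    int err = -EINVAL;              /* set once at declaration */

    if (bad_source)
            goto martian_source;    /* err is still -EINVAL here */

    err = fib_lookup(...);          /* first place err is overwritten */
    ...
    martian_source:
            /* err = -EINVAL; is redundant here, which also makes the
             * separate martian_source_keep_err entry point unnecessary */
            handle_martian_source(skb);
            goto out;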

Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-29 16:27:47 -07:00
Alexander Duyck
75fea73dce net: Swap ordering of tests in ip_route_input_mc
This patch just swaps the ordering of one of the conditional tests in
ip_route_input_mc.  Specifically it swaps the testing for the source
address to see if it is loopback, and the test to see if we allow a
loopback source address.

The reason for swapping these two tests is that it is much faster to
test whether an address is loopback than it is to dereference several
pointers to get at the net structure to see if the use of loopback is
allowed.
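
In other words, let the cheap classification short-circuit the && (sketch;
allow_loopback_src() stands in for the real per-netns sysctl check):

    /* before: dereferences several pointers even when saddr is not loopback */
    if (!allow_loopback_src(net) && ipv4_is_loopback(saddr))
            goto e_inval;

    /* after: the inline address test usually short-circuits the rest */
    if (ipv4_is_loopback(saddr) && !allow_loopback_src(net))
            goto e_inval;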

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-29 16:27:47 -07:00
Alexander Duyck
2094acbb71 net/ipv4: Pass proto as u8 instead of u16 in ip_check_mc_rcu
This patch updates ip_check_mc_rcu so that protocol is passed as a u8
instead of a u16.

The motivation is just to avoid unneeded type conversions, since some
systems will require an instruction to zero-extend a u8 field to a u16.
It also makes it a bit more obvious that protocol is a u8, so there are
no byte-ordering changes needed to pass it.
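
The prototype change is essentially (other arguments unchanged):

    /* before */
    int ip_check_mc_rcu(struct in_device *in_dev, __be32 mc_addr,
                        __be32 src_addr, u16 proto);

    /* after: protocol is a single octet, so no zero extension is needed */
    int ip_check_mc_rcu(struct in_device *in_dev, __be32 mc_addr,
                        __be32 src_addr, u8 proto);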

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-29 16:27:47 -07:00
Liviu Dudau
0f50c10d26 RESEND: [PATCH v3 net-next] sky2: use random address if EEPROM is bad
On some embedded systems the EEPROM does not contain a valid MAC address.
In that case it is better to fall back to a generated MAC address and
let init scripts fix the value later.

Reported-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
[Changed handcoded setup to use eth_hw_addr_random() and to save new address into HW]
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-29 15:00:04 -07:00
Alexander Duyck
822d54b9c2 netpoll: Drop budget parameter from NAPI polling call hierarchy
For some reason we were carrying the budget value around between the
various calls to napi->poll.  If, for example, one of the drivers called
had a bug in which it returned a non-zero value for work, this could
result in the budget value becoming negative.

Rather than carry around a value of budget that is 0 or less we can instead
just loop through and pass 0 to each napi->poll call.  If any driver
returns a value for work done that is non-zero then we can report that
driver and continue rather than allowing a bad actor to make the budget
value negative and pass that negative value to napi->poll.

Note, the only actual change here is that instead of letting budget become
negative we are keeping it at 0 regardless of the value returned for work
since it should not be possible for the polling routine to do any actual
work with a budget of 0.  So if the polling routine returns a non-0 value
we are just reporting it and continuing with a budget of 0 rather than
letting that work value be subtracted from the budget of 0.
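
Sketch of the resulting behaviour (simplified, not the exact netpoll code):

    static void poll_one(struct napi_struct *napi)
    {
            int work = napi->poll(napi, 0); /* always poll with budget 0 */

            if (work)       /* a buggy driver claimed to have done work */
                    pr_warn_once("%pf exceeded budget in poll\n",
                                 napi->poll);
            /* budget stays at 0 instead of going negative */
    }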

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-29 14:57:16 -07:00
Nikolay Aleksandrov
2594e9064a bridge: vlan: add per-vlan struct and move to rhashtables
This patch changes the bridge vlan implementation to use rhashtables
instead of bitmaps. The main motivation behind this change is that we
need extensible per-vlan structures (both per-port and global) so more
advanced features can be introduced and the vlan support can be
extended. I've tried to break this up, but the moment net_port_vlans is
changed the whole API goes away; thus this is a larger patch.
A few short goals of this patch are:
- Extensible per-vlan structs stored in rhashtables and a sorted list
- Keep user-visible behaviour (compressed vlans etc)
- Keep fastpath ingress/egress logic the same (optimizations to come
  later)

Here's a brief list of some of the new features we'd like to introduce:
- per-vlan counters
- vlan ingress/egress mapping
- per-vlan igmp configuration
- vlan priorities
- avoid fdb entries replication (e.g. local fdb scaling issues)

The structure is kept single for both global and per-port entries so as to
avoid code duplication where possible and also because we'll soon introduce
"port0 / aka bridge as port" which should simplify things further
(thanks to Vlad for the suggestion!).

Now we have a per-vlan global rhashtable (bridge-wide) and a per-vlan port
rhashtable; if an entry is added to a port it'll get a pointer to its
global context so it can be quickly accessed later. There's also a
sorted vlan list which is used for stable walks and some user-visible
behaviour such as the vlan ranges, also for error paths.
VLANs are stored in a "vlan group" which currently contains the
rhashtable, sorted vlan list and the number of "real" vlan entries.
A good side-effect of this change is that it resembles how hw keeps
per-vlan data.
One important note after this change is that if a VLAN is being looked up
in the bridge's rhashtable for filtering purposes (or to check if it's an
existing usable entry, not just a global context) then the new helper
br_vlan_should_use() needs to be used if the vlan is found. In case the
lookup is done only with a port's vlan group, then this check can be
skipped.
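
Sketch of that lookup rule (br_vlan_should_use() is the new helper; the
other names are illustrative and bodies are elided):

    /* bridge-wide lookup: the entry may exist only as a global context
     * for some port, so it must be validated before being used */
    v = br_vlan_find(br_vlan_group(br), vid);
    if (!v || !br_vlan_should_use(v))
            goto drop;

    /* per-port lookup: entries here are always usable, no extra check */
    v = br_vlan_find(nbp_vlan_group(p), vid);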

Things tested so far:
- basic vlan ingress/egress
- pvids
- untagged vlans
- undef CONFIG_BRIDGE_VLAN_FILTERING
- adding/deleting vlans in different scenarios (with/without global ctx,
  while transmitting traffic, in ranges etc)
- loading/removing the module while having/adding/deleting vlans
- extracting bridge vlan information (user ABI), compressed requests
- adding/deleting fdbs on vlans
- bridge mac change, promisc mode
- default pvid change
- kmemleak ON during the whole time

Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-29 13:36:06 -07:00
David S. Miller
191988e0bd Merge branch 'mvneta_percpu_irq'
Gregory CLEMENT says:

====================
net: mvneta: Switch to per-CPU irq and make rxq_def useful

As stated in the first version: "this patchset reworks the Marvell
neta driver in order to really support its per-CPU interrupts, instead
of faking them as SPI, and allow the use of any RX queue instead of
the hardcoded RX queue 0 that we have currently."

Following the review which has been done, Maxime started adding the
CPU hotplug support. I continued his work a few weeks ago and here is
the result.

Since the 1st version the main change is this CPU hotplug support; in
order to validate it I powered the CPUs up and down while performing
iperf. I ran the tests for hours: the kernel didn't crash and the
network interfaces were still usable. Of course it impacted the
performance, but continuously powering the CPUs down and up is not
something we usually do.

I also reorganized the series: the first 3 patches should go through
the irq subsystem, whereas the other 4 should go through the network
subsystem.

However, there is a runtime dependency between the two parts: patch 5
depends on patch 3 to be able to use the per-CPU irq.

Thanks,

Gregory

PS: Thanks to Willy who gave me some pointers on how to deal with the
NAPI.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-29 11:51:41 -07:00
Maxime Ripard
f864288544 net: mvneta: Statically assign queues to CPUs
Since the switch to per-CPU interrupts, we lost the ability to set which
CPU was going to receive our RX interrupt, which was now only the CPU on
which the mvneta_open function was run.

We can now assign our queues to their respective CPUs, and make sure only
this CPU is going to handle our traffic.

This also paves the way to be able to change that at runtime, and later on
to support RSS.

[gregory.clement@free-electrons.com]: hardened the CPU hotplug support.

Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-29 11:51:41 -07:00
Maxime Ripard
d893665728 net: mvneta: Allow different queues
The mvneta driver allows changing the default RX queue through the rxq_def
kernel parameter.

However, the current code doesn't allow any value but 0. This is
actively checked for in the driver's probe, because the driver makes a
number of assumptions and takes a number of shortcuts in order to just use
that RX queue.

Remove these limitations in order to be able to specify any available
queue.

Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-29 11:51:40 -07:00
Maxime Ripard
12bb03b436 net: mvneta: Handle per-cpu interrupts
Now that our interrupt controller is allowing us to use per-CPU interrupts,
actually use it in the mvneta driver.

This obviously involves reworking the driver to have a CPU-local NAPI
structure and to report incoming packets using that structure.

Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-29 11:51:40 -07:00
Maxime Ripard
2502d0ef27 net: mvneta: Fix CPU_MAP registers initialisation
The CPU_MAP register is duplicated for each CPU, each instance being at
a different address.

However, the code so far was using CONFIG_NR_CPUS to initialise one
CPU_MAP register per possible CPU, while the SoCs embed at most 4 CPUs.

This is especially an issue with multi_v7_defconfig, where CONFIG_NR_CPUS
is currently set to 16, resulting in writes to registers that are not
CPU_MAP.
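
The fix boils down to iterating over the CPUs that actually exist instead
of the compile-time maximum, along these lines (queue_access_mask stands
for the unchanged value being written):

    /* before: walks 0..CONFIG_NR_CPUS-1 and writes past the last real
     * CPU_MAP instance on SoCs with fewer CPUs */
    for (cpu = 0; cpu < CONFIG_NR_CPUS; cpu++)
            mvreg_write(pp, MVNETA_CPU_MAP(cpu), queue_access_mask);

    /* after: only present CPUs get their CPU_MAP instance programmed */
    for_each_present_cpu(cpu)
            mvreg_write(pp, MVNETA_CPU_MAP(cpu), queue_access_mask);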

Fixes: c5aff18204 ("net: mvneta: driver for Marvell Armada 370/XP network unit")
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Cc: <stable@vger.kernel.org> # v3.8+
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-29 11:51:40 -07:00
Maxime Ripard
080481f97f irqchip: armada-370-xp: Rework per-cpu interrupts handling
The MPIC driver currently has a list of interrupts to handle as per-cpu.

Since the timer, fabric and neta interrupts were the only per-cpu
interrupts in the system, we can now remove the switch and just check
the hardware irq number to determine whether a given interrupt is per-cpu
or not.
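
i.e. roughly (the bound name is illustrative):

    /* per-cpu interrupts occupy the low hwirq range on this controller,
     * so a simple compare replaces the old switch statement */
    static bool is_percpu_irq(irq_hw_number_t hwirq)
    {
            return hwirq <= ARMADA_MAX_PER_CPU_HWIRQ;
    }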

Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Acked-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-29 11:51:40 -07:00
Maxime Ripard
aec2e2ad17 irq: Export per-cpu irq allocation and de-allocation functions
Some drivers might use per-cpu interrupts and still be built as a
module. Export request_percpu_irq and free_percpu_irq to these users,
which also makes it consistent with enable/disable_percpu_irq, which were
already exported.
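
This presumably amounts to exporting the two symbols next to their
definitions in kernel/irq/manage.c:

    EXPORT_SYMBOL_GPL(free_percpu_irq);
    ...
    EXPORT_SYMBOL_GPL(request_percpu_irq);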

Reported-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-29 11:51:40 -07:00
Maxime Ripard
a1b7febd72 genirq: Fix the documentation of request_percpu_irq
The documentation of request_percpu_irq is confusing and suggests that
the interrupt is not enabled at all, while it is actually enabled on the
local CPU.

Clarify that.

Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-29 11:51:40 -07:00
Jesper Dangaard Brouer
8a4683a5e0 net: help compiler generate better code in eth_get_headlen
Noticed that the compiler (gcc version 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC))
generated suboptimal assembler code in eth_get_headlen().

This early-return coding style is usually not an issue on superscalar
CPUs, but the compiler chose to put the return statement after this very
unlikely branch, thus creating a larger jump down to the likely code path.

Performance-wise, I could measure slightly fewer L1-icache-load-misses
and fewer branch-misses, and an improvement of 1 nanosec in an IP-forwarding
use case with 257-byte packets with ixgbe (CPU i7-4790K @ 4.00GHz).

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-28 22:51:15 -07:00
Bendik Rønning Opstad
d2e1339f40 tcp: Fix CWV being too strict on thin streams
Application limited streams such as thin streams, that transmit small
amounts of payload in relatively few packets per RTT, can be prevented
from growing the CWND when in congestion avoidance. This leads to
increased sojourn times for data segments in streams that often transmit
time-dependent data.

Currently, a connection is considered CWND limited only after having
successfully transmitted at least one packet with new data, while at the
same time failing to transmit some unsent data from the output queue
because the CWND is full. Applications that produce small amounts of
data may be left in a state where it is never considered to be CWND
limited, because all unsent data is successfully transmitted each time
an incoming ACK opens up for more data to be transmitted in the send
window.

Fix by always testing whether the CWND is fully used after successful
packet transmissions, such that a connection is considered CWND limited
whenever the CWND has been filled. This is the correct behavior as
specified in RFC2861 (section 3.1).
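
Conceptually the check becomes (sketch, not the exact tcp_write_xmit() hunk):

    /* after each transmission round, note whether the congestion window
     * is now fully used, even if nothing was left unsent */
    if (tcp_packets_in_flight(tp) >= tp->snd_cwnd)
            is_cwnd_limited = true;
    ...
    /* later, when validating the window: */
    tp->is_cwnd_limited |= is_cwnd_limited;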

Cc: Andreas Petlund <apetlund@simula.no>
Cc: Carsten Griwodz <griff@simula.no>
Cc: Jonas Markussen <jonassm@ifi.uio.no>
Cc: Kenneth Klette Jonassen <kennetkl@ifi.uio.no>
Cc: Mads Johannessen <madsjoh@ifi.uio.no>
Signed-off-by: Bendik Rønning Opstad <bro.devel+kernel@gmail.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Tested-by: Eric Dumazet <edumazet@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Tested-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-28 22:36:30 -07:00
Hariprasad Shenai
5e2a5ebc3f cxgb4: Add HW timestamp support for RX
Adds support for the ethtool get time stamp ioctl, which is used by
tcpdump to get the supported time stamp types.

eg: tcpdump -i eth5 -J
Time stamp types for eth5 (use option -j to set):
  host (Host)
  adapter_unsynced (Adapter, not synced with system time)

Adds support for adapter unsynced mode by adding SIOCSHWTSTAMP support
in the driver.

Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-28 22:35:29 -07:00
huangdaode
e4600d69ff net: Fix Hisilicon Network Subsystem Support Compilation
This patch fixes the compilation error with arm allmodconfig. The error
is generated due to the unavailability of readq() on 32-bit platforms and
was found during the net-next daily compilation. At the same time, fix
all the hns driver compilation warnings.

Signed-off-by: huangdaode <huangdaode@hisilicon.com>
Signed-off-by: zhaungyuzeng <Yisen.zhuang@huawei.com>
Signed-off-by: kenneth Lee <liguozhu@hisilicon.com>
Signed-off-by: yankejian <yankejian@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-28 22:34:23 -07:00
Robert Jarzmik
1273bc573a net: irda: pxaficp_ir: dmaengine conversion
Convert pxaficp_ir to dmaengine. As the pxa architecture is shifting from
raw DMA register access to the pxa_dma dmaengine driver, convert this
driver accordingly.

Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Tested-by: Petr Cvek <petr.cvek@tul.cz>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-28 22:32:48 -07:00
Robert Jarzmik
89fa57244a net: irda: pxaficp_ir: convert to readl and writel
Convert the pxa IRDA driver to the readl and writel primitives, and remove
another set of direct register accesses. This leaves only the DMA
register accesses, which will be dealt with by the dmaengine conversion.

Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Tested-by: Petr Cvek <petr.cvek@tul.cz>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-28 22:32:48 -07:00
Robert Jarzmik
be01891e46 net: irda: pxaficp_ir: use sched_clock() for time management
Instead of using the OS timer directly through register access, use the
standard sched_clock(), which will end up reading OSCR anyway.

This is a first step for direct access register removal and machine
specific code removal from this driver.

This commit changes the behavior: previously the minimum turnaround
time was counted in 76ns steps, while with this patch it is counted in
microsecond steps. The strictly equivalent formula would have been:
	    while ((sched_clock() - si->last_clk) * 76 < mtt)

Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-28 22:32:48 -07:00
Fabio Estevam
5b40f709a1 net: fec: Remove unneeded FEATURES_NEED_QUIESCE definition
There is no need to have FEATURES_NEED_QUIESCE defined as we
can simply use NETIF_F_RXCSUM instead as done in other parts
of the driver.

Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-28 22:31:12 -07:00
David Ahern
17fb0b2b90 net: Remove redundant oif checks in rt6_device_match
The oif has already been checked that it is non-zero; the 2 additional
checks on oif within that if (oif) {...} block are redundant.
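
The simplification has this shape (sketch):

    if (oif) {
            ...
            /* before: re-tests what the outer condition already guarantees */
            if (oif && dev->ifindex == oif)
                    ...
            /* after */
            if (dev->ifindex == oif)
                    ...
    }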

CC: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-28 22:30:24 -07:00
Woojung.Huh@microchip.com
49d28b5642 lan78xx: Return 0 when lan78xx_suspend() has no error.
lan78xx_suspend() may return a non-zero value from lan78xx_write_reg() in
some scenarios. Fix it to return 0 when lan78xx_suspend() has no error.

Signed-off-by: Woojung Huh <woojung.huh@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-28 22:28:53 -07:00
David S. Miller
366f02d873 Merge branch 'mlx5-next'
Or Gerlitz says:

====================
Mellanox mlx5 driver update

A bunch of changes from the team, warming up the engines for the
upcoming SRIOV support.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-28 22:19:56 -07:00
Eli Cohen
171bb2c560 net/mlx5_core: Update health syndromes
Update the newly monitored health syndromes and their descriptions.

Signed-off-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-28 22:19:50 -07:00
Eli Cohen
78ccb25861 net/mlx5_core: Fix wrong name in struct
The name refers to a syndrome, so use ext_synd instead of ext_sync.

Signed-off-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-28 22:19:50 -07:00
Majd Dibbiny
a31208b1e1 net/mlx5_core: New init and exit flow for mlx5_core
In the new flow, we separate the pci initialization and teardown from the
initialization and teardown of the other resources.

init_one calls mlx5_pci_init that handles the pci resources initialization.
It then calls mlx5_load_one to initialize the remainder of the resources.

When removing a device, remove_one is invoked. However, now remove_one
calls mlx5_unload_one to free all the resources except the pci resources.
When mlx5_unload_one returns, mlx5_pci_close is called to free the pci
resources.

The above separation will allow us to implement the pci error handlers and
suspend and resume callbacks.
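
Sketch of the resulting call structure (arguments and error handling elided):

    static int init_one(struct pci_dev *pdev, const struct pci_device_id *id)
    {
            int err = mlx5_pci_init(...);   /* pci resources only */

            if (err)
                    return err;
            return mlx5_load_one(...);      /* the remaining resources */
    }

    static void remove_one(struct pci_dev *pdev)
    {
            mlx5_unload_one(...);           /* free non-pci resources */
            mlx5_pci_close(...);            /* then release pci resources */
    }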

Signed-off-by: Majd Dibbiny <majd@mellanox.com>
Signed-off-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-28 22:19:50 -07:00
Eli Cohen
a8ffe63e60 net/mlx5_core: Fix notification of page supplement error
Some errors did not result in notifying firmware that the page request
could not be fulfilled. Fix this and move the notification logic into a
separate function.

Signed-off-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-28 22:19:49 -07:00
Eli Cohen
be87544de8 net/mlx5_core: Fix async commands return code
In case of async command completion, the error code returned should take
into account the command completion status.

Signed-off-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-28 22:19:49 -07:00
Achiad Shochat
6c3dbd2d72 net/mlx5_core: Remove redundant "err" variable usage
Cosmetic change: do not use an err variable just to assign and return it.
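
i.e. the generic pattern being removed:

    /* before */
    err = some_call();
    return err;

    /* after */
    return some_call();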

Signed-off-by: Achiad Shochat <achiad@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-28 22:19:49 -07:00
Saeed Mahameed
97909302f9 net/mlx5_core: Fix struct type in the DESTROY_TIR/TIS device commands
The code used the output mailbox format for the input mailbox; use the
proper input mailbox struct instead.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-28 22:19:49 -07:00
Achiad Shochat
343b29f308 net/mlx5e: Priv state flag not rolled-back upon netdev open error
The private mlx5 state flag that indicates that the netdev is
opened is set at the beginning of the netdev open flow.
In case an error occurred later in the mlx5 netdev open flow, this
flag was not cleared, remaining set although the netdev is actually
closed.

Signed-off-by: Achiad Shochat <achiad@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-28 22:19:49 -07:00
Andrzej Hajda
4de61ba234 tools: bpf_jit_disasm: make get_last_jit_image return unsigned
The function always returns non-negative values.

The problem has been detected using proposed semantic patch
scripts/coccinelle/tests/assign_signed_to_unsigned.cocci [1].

[1]: http://permalink.gmane.org/gmane.linux.kernel/2046107

Signed-off-by: Andrzej Hajda <a.hajda@samsung.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-28 22:18:01 -07:00
Eric Dumazet
7c85af8810 tcp: avoid reorders for TFO passive connections
We found that a TCP Fast Open passive connection was vulnerable
to reorders, as the exchange might look like

[1] C -> S S <FO ...> <request>
[2] S -> C S. ack request <options>
[3] S -> C . <answer>

packets [2] and [3] can be generated at almost the same time.

If C receives the 3rd packet before the 2nd, it will drop it as
the socket is in SYN_SENT state and expects a SYNACK.

S will have to retransmit the answer.

Current OOO avoidance in linux is defeated because SYNACK
packets are attached to the LISTEN socket, while DATA packets
are attached to the children. They might be sent by different cpus,
and different TX queues might be selected.

It turns out that for TFO, we created a child, which is a
full blown socket in TCP_SYN_RECV state, and we can simply attach
the SYNACK packet to this socket.

This means that at the time tcp_sendmsg() pushes DATA packet,
skb->ooo_okay will be set iff the SYNACK packet had been sent
and TX completed.

This removes the reorder source at the host level.

We also removed the export of tcp_try_fastopen(), as it is no
longer called from IPv6.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-28 22:11:19 -07:00
David S. Miller
eae93fe4ff Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue
Jeff Kirsher says:

====================
Intel Wired LAN Driver Updates 2015-09-28

This series contains updates to i40e, i40evf and igb to resolve issues
seen and reported by Red Hat.

Kiran moves i40e_get_head() in preparation for the refactor of the Tx
timeout logic, so that it can be used in other areas of the driver.
He then refactors the driver timeout logic to issue a writeback request
via a software interrupt to the hardware the first time the driver detects
a hang; previously the driver was too aggressive in resetting a hung
queue.

Shannon adds the GRE protocol to the transmit checksum encoding.

Anjali fixes an issue of forcing writeback too often, which caused us to
not benefit from NAPI.  We now disable force writeback in the clean
routine for X710 and XL710 adapters.  The X722 adapters do not enable an
interrupt to force a writeback and benefit from WB_ON_ITR, so force
WB is left enabled for those adapters.  She also fixed a possible deadlock
where sync_vsi_filters() can be called directly under RTNL or through
the timer subtask without RTNL, by updating the flow to see if we are
already under RTNL before trying to grab it.

Stefan Assmann provides a fix for igb where SR-IOV was not getting
enabled properly and we ran into a NULL pointer if the max_vfs module
parameter is specified.  This is prevented by setting the
IGB_FLAG_HAS_MSIX bit before calling igb_probe_vfs().

v2: added "i40e: Fix for recursive RTNL lock during PROMISC change" patch
    to the series, as it resolves another issues seen and reported by
    Red Hat.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-28 20:56:02 -07:00
Stefan Assmann
cbfe360a15 igb: assume MSI-X interrupts during initialization
In igb_sw_init() the sequence of calls was changed from
igb_init_queue_configuration()
igb_init_interrupt_scheme()
igb_probe_vfs()
to
igb_probe_vfs()
igb_init_queue_configuration()
igb_init_interrupt_scheme()

This results in adapter->flags not having the IGB_FLAG_HAS_MSIX bit set
during igb_probe_vfs()->igb_enable_sriov(). Therefore SR-IOV does not
get enabled properly and we run into a NULL pointer if the max_vfs
module parameter is specified (adapter->vf_data does not get allocated,
crash on accessing the structure).

[    7.419348] BUG: unable to handle kernel NULL pointer dereference at 0000000000000048
[    7.419367] IP: [<ffffffffa02161c6>] igb_reset+0xe6/0x5d0 [igb]
[    7.419370] PGD 0
[    7.419373] Oops: 0002 [#1] SMP
[    7.419381] Modules linked in: ahci(+) libahci igb(+) i40e(+) vxlan ip6_udp_tunnel udp_tunnel megaraid_sas(+) ixgbe(+) mdio
[    7.419385] CPU: 0 PID: 4 Comm: kworker/0:0 Not tainted 4.2.0+ #153
[    7.419387] Hardware name: Dell Inc. PowerEdge R720/0C4Y3R, BIOS 1.6.0 03/07/2013
[...]
[    7.419431] Call Trace:
[    7.419442]  [<ffffffffa0217236>] igb_probe+0x8b6/0x1340 [igb]
[    7.419447]  [<ffffffff814c7f15>] local_pci_probe+0x45/0xa0

Prevent this by setting the IGB_FLAG_HAS_MSIX bit before calling
igb_probe_vfs(). The real interrupt capabilities will be checked during
igb_init_interrupt_scheme() so this is safe to do.
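
i.e. in igb_sw_init() (sketch):

    /* assume MSI-X up front so the SR-IOV path sees the capability ... */
    adapter->flags |= IGB_FLAG_HAS_MSIX;

    igb_probe_vfs(adapter);

    /* ... igb_init_interrupt_scheme() will correct the flag later if
     * MSI-X turns out to be unavailable */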

Signed-off-by: Stefan Assmann <sassmann@kpanic.de>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2015-09-28 17:48:34 -07:00
Anjali Singhai
30e2561b95 i40e: Fix for recursive RTNL lock during PROMISC change
The sync_vsi_filters function can be called directly under RTNL
or through the timer subtask without one. This was causing a deadlock.

If sync_vsi_filters is called from a thread which held the lock, and
in another thread the PROMISC setting got changed, we would end up
executing the PROMISC change in the thread which already held the lock,
alongside the other filter update. The PROMISC change requires a reset
if we are on a VEB, which requires it to be called under RTNL.

Earlier the driver would call reset for a PROMISC change without
checking if we were already under RTNL and would try to grab it,
causing a deadlock. This patch changes the flow to see if we are
already under RTNL before trying to grab it.
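
A hypothetical sketch of the flow change (names are illustrative, not the
driver's actual helpers):

    /* callers say whether they already hold RTNL, so the reset path
     * only takes the lock when it is not already held */
    static void promisc_change_reset(struct my_pf *pf, bool have_rtnl)
    {
            if (!have_rtnl)
                    rtnl_lock();
            /* ... perform the VEB reset / filter sync ... */
            if (!have_rtnl)
                    rtnl_unlock();
    }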

Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Signed-off-by: Kiran Patil <kiran.patil@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2015-09-28 17:43:23 -07:00
Anjali Singhai
5804474311 i40e: Fix RS bit update in Tx path and disable force WB workaround
This patch fixes the issue of forcing WB too often, which caused us to
not benefit from NAPI.

Without this patch we were forcing WB/arming the interrupt too often,
taking away the benefits of NAPI and causing a performance impact.

With this patch we disable force WB in the clean routine for X710
and XL710 adapters. X722 adapters do not enable an interrupt to force
a WB and benefit from WB_ON_ITR, hence force WB is left enabled
for those adapters.
For XL710 and X710 adapters, if we have fewer than 4 packets pending,
a software interrupt triggered from the service task will force a WB.

This patch also changes the conditions for setting the RS bit, as described
in the code comments. This optimizes when the HW does a tail bump and when
it does a WB. It also optimizes when we do a wmb.

Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2015-09-28 17:38:28 -07:00
Shannon Nelson
c1d1791dc8 i40e: add GRE tunnel type to csum encoding
Make sure the Tx checksum encoder knows about GRE protocol and sets the
descriptor flag appropriately.

Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2015-09-28 17:38:27 -07:00
Kiran Patil
b03a8c1f4c i40e/i40evf: refactor tx timeout logic
This patch modifies the driver timeout logic by issuing a writeback
request via a software interrupt to the hardware the first time the
driver detects a hang. The driver was too aggressive in resetting a hung
queue, so back that off by removing logic to down the netdevice after
too many hangs, and move the function to the service task.

Change-ID: Ife100b9d124cd08cbdb81ab659008c1b9abbedea
Signed-off-by: Kiran Patil <kiran.patil@intel.com>
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2015-09-28 17:38:27 -07:00
Kiran Patil
1e6d6f8c1b i40e: Move i40e_get_head into header file
i40e_get_head needs to be called in multiple files in a further patch,
prepare by moving the function into a header file.

Signed-off-by: Kiran Patil <kiran.patil@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2015-09-28 17:38:27 -07:00
Ian Wilson
34c2d9fb04 bridge: Allow forward delay to be cfgd when STP enabled
Allow bridge forward delay to be configured when Spanning Tree is enabled.

Signed-off-by: Ian Wilson <iwilson@brocade.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-27 19:09:38 -07:00
David S. Miller
8f35043729 Merge branch 'vxlan-ipv4-ipv6'
Jiri Benc says:

====================
vxlan: support both IPv4 and IPv6 sockets

Note: this needs net merged into net-next in order to apply.

It's currently not easy enough to work with metadata based vxlan tunnels. In
particular, it's necessary to create separate network interfaces for IPv4
and IPv6 tunneling. Assigning an IPv6 address to an IPv4 interface is
allowed yet won't do what's expected. With route based tunneling, one has to
pay attention to use the vxlan interface opened with the correct family.
Other users of this (openvswitch) would need to always create two vxlan
interfaces.

Furthermore, there's no sane API for creating an IPv6 vxlan metadata based
interface.

This patchset simplifies this by opening both IPv4 and IPv6 socket if the
vxlan interface has the metadata flag (IFLA_VXLAN_COLLECT_METADATA) set.
Assignment of addresses etc. works as expected after this.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-26 22:40:56 -07:00
Jiri Benc
b1be00a6c3 vxlan: support both IPv4 and IPv6 sockets in a single vxlan device
For metadata based vxlan interface, open both IPv4 and IPv6 socket. This is
much more user friendly: it's not necessary to create two vxlan interfaces
and pay attention to using the right one in routing rules.

Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-26 22:40:55 -07:00
Jiri Benc
205f356d16 vxlan: make vxlan_sock_add and vxlan_sock_release complementary
Make vxlan_sock_add both alloc the socket and attach it to vxlan_dev. Let
vxlan_sock_release accept vxlan_dev as its parameter instead of vxlan_sock.

This makes vxlan_sock_add and vxlan_sock_release complementary. It reduces
code duplication in the next patch.

Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-26 22:40:55 -07:00
David Woodhouse
8b7a704822 8139cp: Fix GSO MSS handling
When fixing the TSO support, I noticed that we just mask ->gso_size with
the MSSMask value and don't care about the consequences.

Provide a .ndo_features_check() method which drops the NETIF_F_TSO
feature for any skb which would exceed the maximum, and thus forces it
to be segmented by software.

Then we can stop the masking in cp_start_xmit(), and just WARN if the
maximum is exceeded, which should now never happen.

Finally, Francois Romieu noticed that we didn't even have the right
value for MSSMask anyway; it should be 0x7ff (11 bits) not 0xfff.
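
A sketch of such a hook (the limit macro name is illustrative; the real
bound is the 11-bit MSS field, i.e. 0x7ff):

    static netdev_features_t cp_features_check(struct sk_buff *skb,
                                               struct net_device *dev,
                                               netdev_features_t features)
    {
            /* an MSS the 11-bit hardware field cannot encode has to be
             * segmented in software, so drop TSO for this skb */
            if (skb_is_gso(skb) && skb_shinfo(skb)->gso_size > CP_MAX_MSS)
                    features &= ~NETIF_F_TSO;

            return features;
    }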

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-26 22:38:34 -07:00
David Woodhouse
5a58f22779 8139cp: Enable offload features by default
I fixed TSO. Hardware checksum and scatter/gather also appear to be
working correctly both on real hardware and in QEMU's emulation.

Let's enable them by default and see if anyone screams...

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-09-26 22:37:12 -07:00