Commit graph

521521 commits

Author SHA1 Message Date
Sergey Popovich
edda079174 netfilter: ipset: Use SET_WITH_*() helpers to test set extensions
Signed-off-by: Sergey Popovich <popovich_sergei@mail.ua>
Signed-off-by: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
2015-06-14 10:40:12 +02:00
Jozsef Kadlecsik
aaeb6e24f5 netfilter: ipset: Use MSEC_PER_SEC consistently
Signed-off-by: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
2015-06-14 10:40:12 +02:00
Florian Westphal
482cfc3185 netfilter: xtables: avoid percpu ruleset duplication
We store the rule blob per (possible) cpu.  Unfortunately this means we can
waste a lot of memory on big SMP machines. The ipt_entry structure ('rule head')
is 112 bytes, so e.g. with maxcpu=64 a single rule eats
close to 8k of RAM.

Since the previous patch made the counters percpu, it appears there is nothing
left in the rule blob that needs to be percpu.

On my test system (144 possible cpus, 400k dummy rules) this
change saves close to 9 gigabytes of RAM.
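
As a rough illustration of the numbers involved (rule head only; the
per-entry match/target data comes on top of that):

        112 bytes/rule * 400,000 rules  ~=  44.8 MB per copy
        44.8 MB        * 144 copies     ~=   6.4 GB

so once the match/target data attached to each entry is counted as well,
the per-cpu copies add up to roughly the 9 GB figure quoted above.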

Reported-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2015-06-12 14:27:10 +02:00
Florian Westphal
71ae0dff02 netfilter: xtables: use percpu rule counters
The binary arp/ip/ip6tables ruleset is stored per cpu.

The only reason left as to why we need percpu duplication is the rule
counters embedded into ipt_entry et al -- since each cpu has its own copy
of the rules, all counters can be lockless.

The downside is that the more cpus are supported, the more memory is
required.  Rules are not just duplicated per online cpu but for each
possible cpu, i.e. if maxcpu is 144, then the ruleset is duplicated 144
times, not just for the e.g. 64 cores actually present.

To save some memory and also improve utilization of shared caches it
would be preferable to only store the rule blob once.

So we first need to separate counters and the rule blob.

Instead of using entry->counters, allocate the counters percpu and store
the percpu address in entry->counters.pcnt on CONFIG_SMP.

This change makes no sense as-is; it is merely an intermediate step to
remove the percpu duplication of the rule set in a follow-up patch.
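
A minimal sketch of the percpu counter pattern described above; the field
names follow struct xt_counters (pcnt/bcnt), but the surrounding code is
simplified and is not the actual ip_tables implementation:

        struct xt_counters __percpu *pcpu;
        struct xt_counters *cnt;
        u64 packets = 0, bytes = 0;
        int cpu;

        /* setup: one counter pair per possible cpu */
        pcpu = alloc_percpu(struct xt_counters);
        if (!pcpu)
                return -ENOMEM;
        e->counters.pcnt = (unsigned long)pcpu; /* stash percpu address */

        /* hot path: lockless increment on the local cpu */
        cnt = this_cpu_ptr(pcpu);
        cnt->pcnt += 1;                         /* packets */
        cnt->bcnt += skb->len;                  /* bytes */

        /* read side: fold the counters of all possible cpus */
        for_each_possible_cpu(cpu) {
                const struct xt_counters *c = per_cpu_ptr(pcpu, cpu);

                packets += c->pcnt;
                bytes   += c->bcnt;
        }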

Suggested-by: Eric Dumazet <edumazet@google.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Reported-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2015-06-12 14:27:09 +02:00
Florian Westphal
d7b5974215 netfilter: bridge: restore vlan tag when refragmenting
If bridge netfilter is used with both
bridge-nf-call-iptables and bridge-nf-filter-vlan-tagged enabled
then ip fragments in VLAN frames are sent without the vlan header.

This has never worked reliably.  Turns out this relied on pre-3.5
behaviour where skb frag_list was used to store ip fragments;
ip_fragment() then re-used these skbs.

But since commit 3cc4949269
("ipv4: use skb coalescing in defragmentation") this is no longer
the case.  ip_do_fragment now needs to allocate new skbs, but these
don't contain the vlan tag information anymore.

Fix it by storing the vlan information of the reassembled skb in the
br_netfilter percpu frag area, and restoring it for each of the
fragments.
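
The save/restore idea, as a minimal sketch (frag_ctx stands in for the
percpu frag area; the real br_netfilter field names differ):

        /* before defragmentation: remember the tag of the original skb */
        if (skb_vlan_tag_present(skb)) {
                frag_ctx->vlan_proto = skb->vlan_proto;
                frag_ctx->vlan_tci   = skb_vlan_tag_get(skb);
        }

        /* when sending each newly allocated fragment */
        if (frag_ctx->vlan_proto)
                __vlan_hwaccel_put_tag(frag_skb, frag_ctx->vlan_proto,
                                       frag_ctx->vlan_tci);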

Fixes: 3cc4949269 ("ipv4: use skb coalescing in defragmentation")
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2015-06-12 14:16:55 +02:00
Florian Westphal
33b1f31392 net: ip_fragment: remove BRIDGE_NETFILTER mtu special handling
Since commit d6b915e29f
("ip_fragment: don't forward defragmented DF packet") the largest
fragment size is available in the IPCB.

Therefore we no longer need to care about the 'encapsulation'
overhead of stripped PPPoE/VLAN headers, since ip_do_fragment
doesn't use the device mtu in such cases.

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2015-06-12 14:16:46 +02:00
Bernhard Thaler
efb6de9b4b netfilter: bridge: forward IPv6 fragmented packets
IPv6 fragmented packets are not forwarded on an ethernet bridge
with netfilter ip6_tables loaded, e.g. steps to reproduce:

1) create a simple bridge like this

        modprobe br_netfilter
        brctl addbr br0
        brctl addif br0 eth0
        brctl addif br0 eth2
        ifconfig eth0 up
        ifconfig eth2 up
        ifconfig br0 up

2) place a host with an IPv6 address on each side of the bridge

        set IPv6 address on host A:
        ip -6 addr add fd01:2345:6789:1::1/64 dev eth0

        set IPv6 address on host B:
        ip -6 addr add fd01:2345:6789:1::2/64 dev eth0

3) run a simple ping command on host A with packets > MTU

        ping6 -s 4000 fd01:2345:6789:1::2

4) wait some time and run e.g. "ip6tables -t nat -nvL" on the bridge

IPv6 fragmented packets traverse the bridge cleanly until somebody runs
"ip6tables -t nat -nvL". As soon as it is run (and the netfilter modules are
loaded), IPv6 fragmented packets do not traverse the bridge any more (you
see no more responses in ping's output).

After applying this patch IPv6 fragmented packets traverse the bridge
cleanly in above scenario.

Signed-off-by: Bernhard Thaler <bernhard.thaler@wvnet.at>
[pablo@netfilter.org: small changes to br_nf_dev_queue_xmit]
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2015-06-12 14:10:12 +02:00
Bernhard Thaler
a4611d3b74 netfilter: bridge: re-order check_hbh_len()
Prepare check_hbh_len() to be called from the newly introduced
br_validate_ipv6() in the next commit.

Signed-off-by: Bernhard Thaler <bernhard.thaler@wvnet.at>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2015-06-12 14:09:46 +02:00
Bernhard Thaler
77d574e728 netfilter: bridge: rename br_parse_ip_options
br_parse_ip_options() does not parse any IP options; it validates IP
packets as a whole, and the function name is misleading.

Rename br_parse_ip_options() to br_validate_ipv4() and remove unneeded
comments.

Signed-off-by: Bernhard Thaler <bernhard.thaler@wvnet.at>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2015-06-12 14:09:17 +02:00
Bernhard Thaler
411ffb4fde netfilter: bridge: refactor frag_max_size
Currently frag_max_size is a member of br_input_skb_cb and is copied back
and forth using IPCB(skb) and BR_INPUT_SKB_CB(skb) each time it is changed
or used.

Attach frag_max_size to nf_bridge_info and set its value in the pre_routing
and forward functions. Use its value in the forward and xmit functions.
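
Conceptually this amounts to the following (a sketch, not the exact
br_netfilter code paths):

        struct nf_bridge_info *nf_bridge = skb->nf_bridge;

        /* pre_routing / forward: remember the largest fragment size */
        nf_bridge->frag_max_size = IPCB(skb)->frag_max_size;

        /* xmit: hand it back so refragmentation honours the original size */
        IPCB(skb)->frag_max_size = nf_bridge->frag_max_size;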

Signed-off-by: Bernhard Thaler <bernhard.thaler@wvnet.at>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2015-06-12 14:08:51 +02:00
Bernhard Thaler
72b31f7271 netfilter: bridge: detect NAT66 correctly and change MAC address
IPv4 iptables allows REDIRECT/DNAT/SNAT of any traffic over a bridge.

e.g. REDIRECT
$ sysctl -w net.bridge.bridge-nf-call-iptables=1
$ iptables -t nat -A PREROUTING -p tcp -m tcp --dport 8080 \
  -j REDIRECT --to-ports 81

This does not work with ip6tables on a bridge in a NAT66 scenario
because the REDIRECT/DNAT/SNAT is not correctly detected.

The bridge pre-routing (finish) netfilter hook has to check for a possible
redirect and then fix the destination MAC address. This allows the
ip6tables rules for local REDIRECT/DNAT/SNAT to be used just like the IPv4
iptables version.

e.g. REDIRECT
$ sysctl -w net.bridge.bridge-nf-call-ip6tables=1
$ ip6tables -t nat -A PREROUTING -p tcp -m tcp --dport 8080 \
  -j REDIRECT --to-ports 81

This patch makes it possible to use IPv6 NAT66 on a bridge. It was tested
on a bridge with two interfaces using SNAT/DNAT NAT66 rules.
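
The detection boils down to comparing the IPv6 destination address before
and after the PREROUTING hook; a sketch, where ipv6_daddr is assumed to be
a copy of the original destination saved in nf_bridge_info:

        static bool daddr_was_changed(const struct sk_buff *skb,
                                      const struct nf_bridge_info *nf_bridge)
        {
                return memcmp(&ipv6_hdr(skb)->daddr, &nf_bridge->ipv6_daddr,
                              sizeof(struct in6_addr)) != 0;
        }

        /* if it changed, re-route the skb and rewrite the destination MAC
         * via neighbour resolution instead of plain bridging */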

Reported-by: Artie Hamilton <artiemhamilton@yahoo.com>
Signed-off-by: Sven Eckelmann <sven@open-mesh.com>
[bernhard.thaler@wvnet.at: rebased, add indirect call to ip6_route_input()]
[bernhard.thaler@wvnet.at: rebased, split into separate patches]
Signed-off-by: Bernhard Thaler <bernhard.thaler@wvnet.at>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2015-06-12 14:08:07 +02:00
Bernhard Thaler
8cae308d2b netfilter: bridge: re-order br_nf_pre_routing_finish_ipv6()
Put br_nf_pre_routing_finish_ipv6() after daddr_was_changed() and
br_nf_pre_routing_finish_bridge() to prepare for calling these functions
from there.

Signed-off-by: Bernhard Thaler <bernhard.thaler@wvnet.at>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2015-06-12 14:07:56 +02:00
Bernhard Thaler
d39a33ed9b netfilter: bridge: refactor clearing BRNF_NF_BRIDGE_PREROUTING
Use a bitwise AND with the complement of BRNF_NF_BRIDGE_PREROUTING to
clear the bit in nf_bridge->mask.
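
I.e. the usual bit-clearing idiom:

        nf_bridge->mask &= ~BRNF_NF_BRIDGE_PREROUTING;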

Signed-off-by: Bernhard Thaler <bernhard.thaler@wvnet.at>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2015-06-12 14:07:53 +02:00
Marcelo Ricardo Leitner
779668450a netfilter: conntrack: warn the user if there is a better helper to use
After db29a9508a ("netfilter: conntrack: disable generic tracking for
known protocols"), if the specific helper is built but not loaded
(standard for most distributions), systems with a restrictive firewall
but a weak configuration regarding which netfilter modules to load will
silently stop working.

This patch adds a warning message so the sysadmin knows where to start
looking. It's a pr_warn_once regardless of the protocol itself, but it
should be enough to give a hint on where to look.
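
For illustration, the kind of one-shot warning meant here (the message
text and the protonum variable are illustrative, not the exact ones added):

        pr_warn_once("nf_conntrack: generic helper cannot handle protocol %u, please load the specific helper module\n",
                     protonum);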

Cc: Florian Westphal <fw@strlen.de>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2015-06-12 14:06:24 +02:00
David S. Miller
c63264def3 Merge branch 'tcp-gso-settings-defer'
Eric Dumazet says:

====================
tcp: defer shinfo->gso_size|type settings

We put shinfo->gso_segs in TCP_SKB_CB(skb) a while back for performance
reasons.

This was in commit cd7d8498c9 ("tcp: change tcp_skb_pcount() location")

This patch series completes the job for gso_size and gso_type, so that
we do not bring 2 extra cache lines into the tcp write xmit fast path,
and makes tcp_init_tso_segs() simpler and faster.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 16:33:11 -07:00
Eric Dumazet
b5e2c45783 tcp: remove obsolete check in tcp_set_skb_tso_segs()
We had various issues in the past when the TCP stack was modifying
gso_size/gso_segs while clones were in flight.

Commit c52e2421f7 ("tcp: must unclone packets before mangling them")
fixed these bugs and added a WARN_ON_ONCE(skb_cloned(skb)); in
tcp_set_skb_tso_segs().

These bugs are now fixed, and because the TCP stack now only sets
shinfo->gso_size|segs on the clone itself, the check can be removed.

As a result of this change, the compiler inlines tcp_set_skb_tso_segs()
into tcp_init_tso_segs().

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 16:33:11 -07:00
Eric Dumazet
f69ad292cf tcp: fill shinfo->gso_size at last moment
In commit cd7d8498c9 ("tcp: change tcp_skb_pcount() location") we stored
gso_segs in a temporary cache hot location.

This patch does the same for gso_size.

This allows us to save 2 cache line misses in the tcp xmit path for
the last packet that is considered but not sent because of
various conditions (cwnd, tso defer, receiver window, TSQ...).

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 16:33:11 -07:00
Eric Dumazet
5bbb432c89 tcp: tcp_set_skb_tso_segs() no longer need struct sock parameter
tcp_set_skb_tso_segs() & tcp_init_tso_segs() no longer
use the sock pointer.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 16:33:11 -07:00
Eric Dumazet
51466a7545 tcp: fill shinfo->gso_type at last moment
Our goal is to touch skb_shinfo(skb) only when absolutely needed,
to avoid two cache line misses in the TCP output path for the last skb
that is considered but not sent because of various conditions
(cwnd, tso defer, receiver window, TSQ...).

A packet is GSO only when skb_shinfo(skb)->gso_size is not zero.

We can therefore set skb_shinfo(skb)->gso_type to sk->sk_gso_type even
for non-GSO packets.
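
Deferring the shinfo writes to the very last step then looks roughly like
this (a sketch, not the exact tcp_output.c code; mss_now is whatever mss
value was cached earlier):

        /* just before handing the skb to the device */
        skb_shinfo(skb)->gso_size = tcp_skb_pcount(skb) > 1 ? mss_now : 0;
        skb_shinfo(skb)->gso_type = sk->sk_gso_type;   /* harmless if non-GSO */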

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 16:33:11 -07:00
Eric Dumazet
a7eea416cb tcp: reserve tcp_skb_mss() to tcp stack
tcp_gso_segment() and tcp_gro_receive() are not strictly
part of the TCP stack. They should not assume tcp_skb_mss(skb)
is in fact skb_shinfo(skb)->gso_size.

This will allow us to change tcp_skb_mss() in following patches.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 16:33:10 -07:00
Scott Feldman
57225e7720 switchdev: fix BUG when port driver doesn't support set attr op
Fix a BUG_ON() where CONFIG_NET_SWITCHDEV is set but the driver for a
bridged port does not support the switchdev_port_attr_set op.  Don't
BUG_ON() if -EOPNOTSUPP is returned.

Also change the BUG_ON() to netdev_err, since this is a normal error path
and does not warrant the use of BUG_ON(), which is reserved for
unrecoverable errors.
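
The resulting error handling is essentially (a sketch; attr setup and the
surrounding bridge code are omitted):

        err = switchdev_port_attr_set(dev, &attr);
        if (err && err != -EOPNOTSUPP)
                netdev_err(dev, "failed to set attr (err %d)\n", err);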

Signed-off-by: Scott Feldman <sfeldma@gmail.com>
Reported-by: Brenden Blanco <bblanco@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 16:27:09 -07:00
David S. Miller
d0504f4d8f Merge branch 'bna-next'
Ivan Vecera says:

====================
bna: clean-up

The patches clean up the bna driver.

v2: changes & comments requested by Joe
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 15:57:18 -07:00
Ivan Vecera
ecc467896d bna: use netdev_* and dev_* instead of printk and pr_*
...and remove some of them. It is not necessary to log when .probe() and
.remove() are called or when a TxQ is started or stopped. Also, the log
level of some messages was changed to a more appropriate one (link
up/down, firmware loading failure).

Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 15:57:18 -07:00
Ivan Vecera
ad24d6f04d bna: fix timeout API argument type
Timeout functions are defined with a 'void *' pointer argument. They should
be defined directly with the 'struct bfa_ioc *' type to avoid type conversions.
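
I.e. instead of taking 'void *' and casting, the handler takes the real
type; a sketch with an illustrative function name:

        /* before */
        static void ioc_timeout(void *cbarg)
        {
                struct bfa_ioc *ioc = cbarg;

                /* handle the timeout for ioc */
        }

        /* after */
        static void ioc_timeout(struct bfa_ioc *ioc)
        {
                /* handle the timeout for ioc */
        }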

Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 15:57:17 -07:00
Ivan Vecera
16712c5311 bna: use list_for_each_entry where appropriate
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 15:57:17 -07:00
Ivan Vecera
2b26fb9567 bna: get rid of private macros for manipulation with lists
Remove the private macros for manipulating struct list_head and replace
them with the standard ones.

Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 15:57:17 -07:00
Ivan Vecera
b45da3fcd7 bna: remove useless pointer assignment
The pointer cmpl used to iterate through completion entries is updated at
the beginning of the while loop as well as at the end. The update at the
end of the loop is useless.

Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 15:57:17 -07:00
Ivan Vecera
d0e6a8064c bna: use memdup_user to copy userspace buffers
The patch converts the kzalloc + copy_from_user sequence to memdup_user.
There is also one useless assignment of NULL to bnad->regdata, as it is
immediately followed by the assignment of the kzalloc output.
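
The conversion pattern, roughly (buf, user_ptr and len are placeholders):

        /* before */
        buf = kzalloc(len, GFP_KERNEL);
        if (!buf)
                return -ENOMEM;
        if (copy_from_user(buf, user_ptr, len)) {
                kfree(buf);
                return -EFAULT;
        }

        /* after */
        buf = memdup_user(user_ptr, len);
        if (IS_ERR(buf))
                return PTR_ERR(buf);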

Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 15:57:17 -07:00
Ivan Vecera
93719d266a bna: correct comparisons/assignments to bool
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 15:57:17 -07:00
Ivan Vecera
2a2d75c0e4 bna: remove TX_E_PRIO_CHANGE event and BNA_TX_F_PRIO_CHANGED flag
The TX_E_PRIO_CHANGE event is never sent for bna_tx, so it doesn't need to
be handled. After this change bna_tx->flags cannot contain the
BNA_TX_F_PRIO_CHANGED flag, so it can also be eliminated.

Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 15:57:16 -07:00
Ivan Vecera
d7548e6725 bna: remove paused from bna_rx_config and flags from bna_rxf
The bna_rx_config struct member paused can be removed as it is never
written. Since it cannot have a non-zero value, the bna_rxf struct member
flags also cannot contain the BNA_RXF_F_PAUSED value and is always zero.
So the flags member can be removed, as well as the bna_rxf_flags enum and
the code paths that require a non-zero bna_rxf->flags.
This clean-up makes the bna_rxf_sm_paused state unused, so it can also be
removed.

Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 15:57:16 -07:00
Ivan Vecera
82362d73b0 bna: remove RXF_E_PAUSE and RXF_E_RESUME events
The RXF_E_PAUSE & RXF_E_RESUME events are never sent for the bna_rxf
object, so they need not be handled. The bna_rxf state
bna_rxf_sm_fltr_clr_wait and the function bna_rxf_fltr_clear are unused
after this, so remove them as well.

Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 15:57:16 -07:00
Ivan Vecera
6f544de636 bna: remove prio_change_cbfn oper_state_cbfn from struct bna_tx
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 15:57:16 -07:00
Ivan Vecera
f39857e1e2 bna: remove oper_state_cbfn from struct bna_rxf
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 15:57:16 -07:00
Ivan Vecera
ef5b67c133 bna: remove pause_cbfn from struct bna_enet
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 15:57:15 -07:00
Ivan Vecera
1f9883e032 bna: remove unused cbfn parameter
removed:
bna_rx_ucast_add
bna_rx_ucast_del

simplified:
bna_enet_pause_config
bna_rx_mcast_delall
bna_rx_mcast_listset
bna_rx_mode_set
bna_rx_ucast_listset
bna_rx_ucast_set

Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 15:57:15 -07:00
Ivan Vecera
1a50691a95 bna: use BIT(x) instead of (1 << x)
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 15:57:15 -07:00
Ivan Vecera
a1ac490d0d bna: get rid of duplicate and unused macros
replaced macros:
BNA_MAC_IS_EQUAL -> ether_addr_equal
BNA_POWER_OF_2 -> is_power_of_2
BNA_TO_POWER_OF_2_HIGH -> roundup_pow_of_two

removed unused macros:
bfa_fsm_get_state
bfa_ioc_clr_stats
bfa_ioc_fetch_stats
bfa_ioc_get_alt_ioc_fwstate
bfa_ioc_isr_mode_set
bfa_ioc_maxfrsize
bfa_ioc_mbox_cmd_pending
bfa_ioc_ownership_reset
bfa_ioc_rx_bbcredit
bfa_ioc_state_disabled
bfa_sm_cmp_state
bfa_sm_get_state
bfa_sm_send_event
bfa_sm_set_state
bfa_sm_state_decl
BFA_STRING_32
BFI_ADAPTER_IS_{PROTO,TTV,UNSUPP}
BFI_IOC_ENDIAN_SIG
BNA_{C,RX,TX}Q_PAGE_INDEX_MAX
BNA_{C,RX,TX}Q_PAGE_INDEX_MAX_SHIFT
BNA_{C,RX,TX}Q_QPGE_PTR_GET
BNA_IOC_TIMER_FREQ
BNA_MESSAGE_SIZE
BNA_QE_INDX_2_PTR
BNA_QE_INDX_RANGE
BNA_Q_GET_{C,P}I
BNA_Q_{C,P}I_ADD
BNA_Q_FREE_COUNT
BNA_Q_IN_USE_COUNT
BNA_TO_POWER_OF_2
containing_rec
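
The entries on the 'replaced macros' list above map onto standard kernel
helpers, e.g. (mac_a, mac_b and depth are placeholder variables):

        bool same    = ether_addr_equal(mac_a, mac_b); /* was BNA_MAC_IS_EQUAL */
        bool pow2    = is_power_of_2(depth);           /* was BNA_POWER_OF_2 */
        u32  rounded = roundup_pow_of_two(depth);      /* was BNA_TO_POWER_OF_2_HIGH */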

Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 15:57:15 -07:00
Ivan Vecera
e423c85603 bna: replace pragma(pack) with attribute __packed
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 15:57:15 -07:00
Ivan Vecera
d6b3059850 bna: get rid of mac_t
The patch converts the mac_t type to the widely used 'u8 [ETH_ALEN]'.

Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 15:57:14 -07:00
Ivan Vecera
e2f9ecfcc6 bna: use ether_addr_copy instead of memcpy
The parameters of all ether_addr_copy instances were checked for proper
alignment. The alignment of bnad_bcast_addr is forced to 2 as the implicit
alignment is 1.
I have also renamed the address parameter of bnad_set_mac_address() to addr;
the name mac_addr was a little bit confusing as the real parameter is a
struct sockaddr *.

v2: added __aligned directive to bnad_bcast_addr, renamed parameter of
    bnad_set_mac_address() (thx joe@perches.com)
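
ether_addr_copy() requires both the source and the destination to be at
least 2-byte aligned, hence the forced alignment; roughly:

        static const u8 bnad_bcast_addr[ETH_ALEN] __aligned(2) =
                { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff };

        u8 mac[ETH_ALEN] __aligned(2);

        ether_addr_copy(mac, bnad_bcast_addr);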

Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 15:57:14 -07:00
David S. Miller
d205ce5c18 Merge branch 'mlx5-next'
Or Gerlitz says:

====================
mlx5 Ethernet driver update - Jun 11 2015

This series from Saeed, Achiad and Gal contains a few fixes
to the recently introduced mlx5 Ethernet functionality.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 15:55:26 -07:00
Achiad Shochat
3191e05fea net/mlx5e: Add transport domain to the ethernet TIRs/TISs
Allocate and use a transport domain in the Ethernet driver code.

Signed-off-by: Achiad Shochat <achiad@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 15:55:26 -07:00
Achiad Shochat
56508b5013 net/mlx5_core: Add transport domain alloc/dealloc support
Each transport object, namely TIR and TIS, must have a transport domain
number (TDN) identifier.

The driver wrongly assumed that it is OK to use TDN=0 without explicit
TDN allocation from the device.

The TDN will also be used for isolating different processes once user
mode Ethernet is supported.

Signed-off-by: Achiad Shochat <achiad@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 15:55:26 -07:00
Saeed Mahameed
12be4b2190 net/mlx5e: Support NETIF_F_SG
When NETIF_F_SG is set, each send WQE may have a different size since
each skb can have a different number of fragments, an LSO header, etc.

This implies that a given WQE may wrap around the send queue, i.e. begin
at its end and continue at its start. While this is legal by the device
spec, we preferred a solution that avoids it: when building of the current
WQE is done, if the next WQE may wrap around the send queue, fill the send
queue with NOP WQEs until its end, so that the next WQE will begin at the
send queue start.

A NOP WQE by itself cannot wrap around the send queue since it is of
minimal size - 64 bytes - and all send WQEs are a multiple of that size.
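
The wrap-avoidance logic amounts to something like this (all names here
are illustrative, not the actual mlx5e ones; sizes are in 64-byte basic
blocks):

        static void maybe_pad_to_queue_end(struct tx_queue *sq,
                                           u16 next_wqe_bbs)
        {
                u16 pi   = sq->pc & (sq->size - 1); /* producer slot */
                u16 room = sq->size - pi;           /* slots until wrap */

                if (next_wqe_bbs > room)            /* next WQE would wrap */
                        while (room--)
                                post_nop_wqe(sq);   /* each NOP is one 64B slot */
        }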

Signed-off-by: Achiad Shochat <achiad@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 15:55:25 -07:00
Gal Pressman
796a27ec2d net/mlx5e: Enforce max flow-tables level >= 3
The Ethernet driver requires at least 3 flow table levels to
operate; enforce that.

Signed-off-by: Gal Pressman <galp@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 15:55:25 -07:00
Saeed Mahameed
cd58c714ac net/mlx5e: Disable client vlan TX acceleration
We need to resolve a HW configuration issue to enable HW CVLAN
insertion. Meanwhile, there is no need to implement VLAN insertion in
the driver; rather, use the generic kernel VLAN insertion method.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 15:55:25 -07:00
Saeed Mahameed
fc11fbf9a7 net/mlx5e: Add HW cacheline start padding
Enable HW cacheline start padding and align the RX WQE size to the
cacheline while taking the HW start padding into account. Also, fix the
dma_unmap call to use the correct SKB data buffer size.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 15:55:25 -07:00
Saeed Mahameed
facc9699f0 net/mlx5e: Fix HW MTU settings
Previously we configured the HW MTU to be netdev->mtu; actually we need
to configure netdev->mtu + (ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN).

Also, querying the MTU cannot fail, hence make the relevant helper a
void function, and add mlx5e_set_dev_port_mtu, a helper function to
handle MTU setting.
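
So the software-to-hardware MTU translation is simply (the constants are
the standard kernel ones; the macro name here is illustrative):

        #define SW2HW_MTU(mtu) ((mtu) + ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN)

        /* e.g. for the default netdev->mtu of 1500:
         * 1500 + 14 (Ethernet header) + 4 (VLAN tag) + 4 (FCS) = 1522 */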

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 15:55:25 -07:00
Dan Carpenter
7ec0bb227a net/mlx5_core: fix an error code
We return success if mlx5e_alloc_sq_db() fails, but we should return an
error code.
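
I.e. the usual error-propagation pattern (the mlx5e_alloc_sq_db arguments
and the unwind label are assumed, not copied from the driver):

        err = mlx5e_alloc_sq_db(sq, numa);
        if (err)
                goto err_unwind;   /* was: error dropped, 0 returned */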

Fixes: f62b8bb8f2 ('net/mlx5: Extend mlx5_core to support ConnectX-4 Ethernet functionality')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-11 15:44:41 -07:00