net: flow_dissector: small optimizations in IPv4 dissect

By moving code around, we avoid:

1) A reload of iph->ihl (bit field, so needs a mask)

2) A conditional test (replaced by a conditional mov on x86).
   The fast path loads iph->protocol anyway; see the sketch below.
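
A minimal user-space sketch of the same idea (struct hdr, is_fragment() and the dissect_*() helpers are hypothetical stand-ins, not the kernel code): consuming ihl right after the validity check lets the compiler reuse the already-extracted bit field, and loading protocol before the fragment test turns the branch into a plain overwrite that GCC or Clang can usually emit as a conditional move.

  /* sketch.c -- "load first, conditionally overwrite" pattern. */
  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Hypothetical, simplified stand-in for struct iphdr. */
  struct hdr {
  	uint8_t  ihl:4, version:4;	/* bit field: each read needs shift+mask */
  	uint8_t  protocol;
  	uint16_t frag_off;
  };

  static bool is_fragment(const struct hdr *h)
  {
  	return h->frag_off != 0;	/* stand-in for ip_is_fragment() */
  }

  /* Old shape: the branch decides which value to load, and ihl is
   * read again long after the validity check.
   */
  static int dissect_old(const struct hdr *h, int *nhoff)
  {
  	int proto;

  	if (h->ihl < 5)
  		return -1;
  	if (is_fragment(h))
  		proto = 0;
  	else
  		proto = h->protocol;
  	*nhoff += h->ihl * 4;		/* ihl likely reloaded and masked here */
  	return proto;
  }

  /* New shape: ihl is consumed while still live from the check,
   * protocol is loaded unconditionally, and the rare fragment case
   * just overwrites it (a cmov candidate on x86).
   */
  static int dissect_new(const struct hdr *h, int *nhoff)
  {
  	int proto;

  	if (h->ihl < 5)
  		return -1;
  	*nhoff += h->ihl * 4;
  	proto = h->protocol;
  	if (is_fragment(h))
  		proto = 0;
  	return proto;
  }

  int main(void)
  {
  	struct hdr h = { .ihl = 5, .version = 4, .protocol = 6, .frag_off = 0 };
  	int nh_old = 0, nh_new = 0;

  	printf("old: proto=%d nhoff=%d\n", dissect_old(&h, &nh_old), nh_old);
  	printf("new: proto=%d nhoff=%d\n", dissect_new(&h, &nh_new), nh_new);
  	return 0;
  }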

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Author: Eric Dumazet <edumazet@google.com>
Date:   2013-11-07 08:37:28 -08:00
Committer: David S. Miller <davem@davemloft.net>
parent cdc4ead09d
commit 3797d3e846

@@ -68,13 +68,13 @@ bool skb_flow_dissect(const struct sk_buff *skb, struct flow_keys *flow)
 		iph = skb_header_pointer(skb, nhoff, sizeof(_iph), &_iph);
 		if (!iph || iph->ihl < 5)
 			return false;
+		nhoff += iph->ihl * 4;
 
+		ip_proto = iph->protocol;
 		if (ip_is_fragment(iph))
 			ip_proto = 0;
-		else
-			ip_proto = iph->protocol;
+
 		iph_to_flow_copy_addrs(flow, iph);
-		nhoff += iph->ihl * 4;
 		break;
 	}
 	case __constant_htons(ETH_P_IPV6): {
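
For readability, the IPv4 branch after this change, assembled from the hunk above (kernel context such as skb_header_pointer(), ip_is_fragment() and iph_to_flow_copy_addrs() as in net/core/flow_dissector.c):

  	iph = skb_header_pointer(skb, nhoff, sizeof(_iph), &_iph);
  	if (!iph || iph->ihl < 5)
  		return false;
  	nhoff += iph->ihl * 4;		/* ihl still live from the check above */

  	ip_proto = iph->protocol;	/* loaded on every path */
  	if (ip_is_fragment(iph))
  		ip_proto = 0;		/* conditional mov candidate on x86 */

  	iph_to_flow_copy_addrs(flow, iph);
  	break;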