Merge branch 'bpf-direct-pkt-access'
Alexei Starovoitov says:

====================
bpf: introduce direct packet access

This set of patches introduces 'direct packet access' from cls_bpf and
act_bpf programs (which are root only).

Current bpf programs use the LD_ABS, LD_IND instructions, which have to do
'if (off < skb_headlen)' for every packet access. That is acceptable for
socket filters, but too slow for XDP, since a single LD_ABS insn consumes
3% of cpu. Therefore we have to amortize the cost of the length check over
multiple packet accesses via direct access to the skb->data and data_end
pointers.

An existing packet parser typically looks like:

	if (load_half(skb, offsetof(struct ethhdr, h_proto)) != ETH_P_IP)
		return 0;
	if (load_byte(skb, ETH_HLEN + offsetof(struct iphdr, protocol)) != IPPROTO_UDP ||
	    load_byte(skb, ETH_HLEN) != 0x45)
		return 0;
	...

With 'direct packet access' the bpf program becomes:

	void *data = (void *)(long)skb->data;
	void *data_end = (void *)(long)skb->data_end;
	struct eth_hdr *eth = data;
	struct iphdr *iph = data + sizeof(*eth);
	struct udphdr *udp = data + sizeof(*eth) + sizeof(*iph);

	if (data + sizeof(*eth) + sizeof(*iph) + sizeof(*udp) > data_end)
		return 0;
	if (eth->h_proto != htons(ETH_P_IP))
		return 0;
	if (iph->protocol != IPPROTO_UDP || iph->ihl != 5)
		return 0;
	...

which is more natural to write and significantly faster. See patch 6 for
performance tests: 21Mpps (old) vs 24Mpps (new) with just 5 loads. For more
complex parsers the performance gain is higher.

The other approach, implemented in [1], added two new instructions to the
interpreter and JITs and was too hard to use from the llvm side. The
approach presented here doesn't need any instruction changes, but the
verifier has to work harder to check the safety of the packet accesses.

Patch 1 prepares the code and patch 2 adds the new checks for direct packet
access, all of which are gated by 'env->allow_ptr_leaks', which is true for
root only. Patch 3 improves search pruning for large programs. Patch 4
wires the verifier changes into the net/core/filter side. Patch 5 updates
the docs. Patches 6 and 7 add tests.

[1] https://git.kernel.org/cgit/linux/kernel/git/ast/bpf.git/?h=ld_abs_dw
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
This commit is contained in: commit 4b307a8edb
14 changed files with 1004 additions and 82 deletions
Documentation/networking/filter.txt:

@@ -1095,6 +1095,87 @@ all use cases.

See details of eBPF verifier in kernel/bpf/verifier.c

Direct packet access
--------------------
In cls_bpf and act_bpf programs the verifier allows direct access to the packet
data via skb->data and skb->data_end pointers.
Ex:
1:  r4 = *(u32 *)(r1 +80)  /* load skb->data_end */
2:  r3 = *(u32 *)(r1 +76)  /* load skb->data */
3:  r5 = r3
4:  r5 += 14
5:  if r5 > r4 goto pc+16
R1=ctx R3=pkt(id=0,off=0,r=14) R4=pkt_end R5=pkt(id=0,off=14,r=14) R10=fp
6:  r0 = *(u16 *)(r3 +12) /* access 12 and 13 bytes of the packet */

This 2-byte load from the packet is safe to do, since the program author
did check 'if (skb->data + 14 > skb->data_end) goto err' at insn #5 which
means that in the fall-through case the register R3 (which points to
skb->data) has at least 14 directly accessible bytes. The verifier marks it
as R3=pkt(id=0,off=0,r=14).
id=0 means that no additional variables were added to the register.
off=0 means that no additional constants were added.
r=14 is the range of safe access which means that bytes [R3, R3 + 14) are ok.
Note that R5 is marked as R5=pkt(id=0,off=14,r=14). It also points
to the packet data, but constant 14 was added to the register, so
it now points to 'skb->data + 14' and the accessible range is
[R5, R5 + 14 - 14), which is zero bytes.

More complex packet access may look like:
R0=imm1 R1=ctx R3=pkt(id=0,off=0,r=14) R4=pkt_end R5=pkt(id=0,off=14,r=14) R10=fp
6:  r0 = *(u8 *)(r3 +7)   /* load 7th byte from the packet */
7:  r4 = *(u8 *)(r3 +12)
8:  r4 *= 14
9:  r3 = *(u32 *)(r1 +76) /* load skb->data */
10: r3 += r4
11: r2 = r1
12: r2 <<= 48
13: r2 >>= 48
14: r3 += r2
15: r2 = r3
16: r2 += 8
17: r1 = *(u32 *)(r1 +80) /* load skb->data_end */
18: if r2 > r1 goto pc+2
R0=inv56 R1=pkt_end R2=pkt(id=2,off=8,r=8) R3=pkt(id=2,off=0,r=8) R4=inv52 R5=pkt(id=0,off=14,r=14) R10=fp
19: r1 = *(u8 *)(r3 +4)

The state of the register R3 is R3=pkt(id=2,off=0,r=8).
id=2 means that two 'r3 += rX' instructions were seen, so r3 points to some
offset within the packet, and since the program author did
'if (r3 + 8 > r1) goto err' at insn #18, the safe range is [R3, R3 + 8).

The verifier only allows the 'add' operation on packet registers. Any other
operation will set the register state to 'unknown_value' and it won't be
available for direct packet access.

The operation 'r3 += rX' may overflow and become smaller than the original
skb->data, therefore the verifier has to prevent that. It tracks the number
of upper zero bits in all 'unknown_value' registers, so when it sees an
'r3 += rX' instruction where rX is wider than a 16-bit value, it will error as:
"cannot add integer value with N upper zero bits to ptr_to_packet"

Ex. after insn 'r4 = *(u8 *)(r3 +12)' (insn #7 above) the state of r4 is
R4=inv56 which means that the upper 56 bits of the register are guaranteed
to be zero. After insn 'r4 *= 14' the state becomes R4=inv52, since
multiplying an 8-bit value by the constant 14 will keep the upper 52 bits
zero. Similarly 'r2 >>= 48' will make R2=inv48, since the shift is not sign
extending. This logic is implemented in the evaluate_reg_alu() function.
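To make the interplay of these rules concrete, here is a minimal C sketch
(an illustration, not code from this patch set) of a variable-offset read
that the rules above accept: the offset comes from a 1-byte load (56 known
zero upper bits), so adding it to a packet pointer is allowed, and a fresh
bounds check then establishes the new range:

	void *data = (void *)(long)skb->data;
	void *data_end = (void *)(long)skb->data_end;
	__u8 *p;

	if (data + 15 > data_end)	/* prove bytes [data, data + 15) */
		return 0;
	/* 1-byte load: upper 56 bits are zero, ok to add to a pkt pointer */
	p = data + *(__u8 *)(data + 14);
	if ((void *)(p + 1) > data_end)	/* re-check after the variable add */
		return 0;
	return p[0];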
The end result is that the bpf program author can access the packet
directly using normal C code:

	void *data = (void *)(long)skb->data;
	void *data_end = (void *)(long)skb->data_end;
	struct eth_hdr *eth = data;
	struct iphdr *iph = data + sizeof(*eth);
	struct udphdr *udp = data + sizeof(*eth) + sizeof(*iph);

	if (data + sizeof(*eth) + sizeof(*iph) + sizeof(*udp) > data_end)
		return 0;
	if (eth->h_proto != htons(ETH_P_IP))
		return 0;
	if (iph->protocol != IPPROTO_UDP || iph->ihl != 5)
		return 0;
	if (udp->dest == 53 || udp->source == 9)
		...;

which makes such programs easier to write compared to the LD_ABS insns,
and significantly faster.

eBPF maps
---------
'maps' is a generic storage of different types for sharing data between kernel

@@ -1293,5 +1374,5 @@ to give potential BPF hackers or security auditors a better overview of
 the underlying architecture.

 Jay Schulist <jschlst@samba.org>
-Daniel Borkmann <dborkman@redhat.com>
-Alexei Starovoitov <ast@plumgrid.com>
+Daniel Borkmann <daniel@iogearbox.net>
+Alexei Starovoitov <ast@kernel.org>
include/linux/filter.h:

@@ -352,6 +352,22 @@ struct sk_filter {

#define BPF_SKB_CB_LEN QDISC_CB_PRIV_LEN

struct bpf_skb_data_end {
	struct qdisc_skb_cb qdisc_cb;
	void *data_end;
};

/* compute the linear packet data range [data, data_end) which
 * will be accessed by cls_bpf and act_bpf programs
 */
static inline void bpf_compute_data_end(struct sk_buff *skb)
{
	struct bpf_skb_data_end *cb = (struct bpf_skb_data_end *)skb->cb;

	BUILD_BUG_ON(sizeof(*cb) > FIELD_SIZEOF(struct sk_buff, cb));
	cb->data_end = skb->data + skb_headlen(skb);
}
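The layout trick is worth spelling out: data_end is cached in the skb
control block, right after the qdisc cb that tc already maintains, so no
new sk_buff field is needed. A sketch of the byte offset that the rewritten
ctx access uses (assuming it mirrors the bpf_net_convert_ctx_access() hunk
later in this diff):

	/* where a program's read of __sk_buff->data_end actually lands */
	size_t off = offsetof(struct sk_buff, cb) +
		     offsetof(struct bpf_skb_data_end, data_end);
	/* the verifier rewrites the ctx load into a pointer-sized
	 * load at (char *)skb + off
	 */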
static inline u8 *bpf_skb_cb(struct sk_buff *skb)
{
	/* eBPF programs may read/write skb->cb[] area to transfer meta
include/uapi/linux/bpf.h:

@@ -370,6 +370,8 @@ struct __sk_buff {
	__u32 cb[5];
	__u32 hash;
	__u32 tc_classid;
+	__u32 data;
+	__u32 data_end;
};

struct bpf_tunnel_key {
kernel/bpf/core.c:

@@ -794,6 +794,11 @@ void __weak bpf_int_jit_compile(struct bpf_prog *prog)
{
}

+bool __weak bpf_helper_changes_skb_data(void *func)
+{
+	return false;
+}
+
/* To execute LD_ABS/LD_IND instructions __bpf_prog_run() may call
 * skb_copy_bits(), so provide a weak definition of it for NET-less config.
 */
kernel/bpf/verifier.c:

@@ -1,4 +1,5 @@
 /* Copyright (c) 2011-2014 PLUMgrid, http://plumgrid.com
+ * Copyright (c) 2016 Facebook
  *
  * This program is free software; you can redistribute it and/or
  * modify it under the terms of version 2 of the GNU General Public
@@ -136,13 +137,32 @@ enum bpf_reg_type {
	FRAME_PTR,		 /* reg == frame_pointer */
	PTR_TO_STACK,		 /* reg == frame_pointer + imm */
	CONST_IMM,		 /* constant integer value */

+	/* PTR_TO_PACKET represents:
+	 * skb->data
+	 * skb->data + imm
+	 * skb->data + (u16) var
+	 * skb->data + (u16) var + imm
+	 * if (range > 0) then [ptr, ptr + range - off) is safe to access
+	 * if (id > 0) means that some 'var' was added
+	 * if (off > 0) means that 'imm' was added
+	 */
+	PTR_TO_PACKET,
+	PTR_TO_PACKET_END,	 /* skb->data + headlen */
};

struct reg_state {
	enum bpf_reg_type type;
	union {
-		/* valid when type == CONST_IMM | PTR_TO_STACK */
-		long imm;
+		/* valid when type == CONST_IMM | PTR_TO_STACK | UNKNOWN_VALUE */
+		s64 imm;
+
+		/* valid when type == PTR_TO_PACKET* */
+		struct {
+			u32 id;
+			u16 off;
+			u16 range;
+		};

		/* valid when type == CONST_PTR_TO_MAP | PTR_TO_MAP_VALUE |
		 * PTR_TO_MAP_VALUE_OR_NULL
@@ -247,30 +267,39 @@ static const char * const reg_type_str[] = {
	[FRAME_PTR]		= "fp",
	[PTR_TO_STACK]		= "fp",
	[CONST_IMM]		= "imm",
+	[PTR_TO_PACKET]		= "pkt",
+	[PTR_TO_PACKET_END]	= "pkt_end",
};

-static void print_verifier_state(struct verifier_env *env)
+static void print_verifier_state(struct verifier_state *state)
{
+	struct reg_state *reg;
	enum bpf_reg_type t;
	int i;

	for (i = 0; i < MAX_BPF_REG; i++) {
-		t = env->cur_state.regs[i].type;
+		reg = &state->regs[i];
+		t = reg->type;
		if (t == NOT_INIT)
			continue;
		verbose(" R%d=%s", i, reg_type_str[t]);
		if (t == CONST_IMM || t == PTR_TO_STACK)
-			verbose("%ld", env->cur_state.regs[i].imm);
+			verbose("%lld", reg->imm);
+		else if (t == PTR_TO_PACKET)
+			verbose("(id=%d,off=%d,r=%d)",
+				reg->id, reg->off, reg->range);
+		else if (t == UNKNOWN_VALUE && reg->imm)
+			verbose("%lld", reg->imm);
		else if (t == CONST_PTR_TO_MAP || t == PTR_TO_MAP_VALUE ||
			 t == PTR_TO_MAP_VALUE_OR_NULL)
			verbose("(ks=%d,vs=%d)",
-				env->cur_state.regs[i].map_ptr->key_size,
-				env->cur_state.regs[i].map_ptr->value_size);
+				reg->map_ptr->key_size,
+				reg->map_ptr->value_size);
	}
	for (i = 0; i < MAX_BPF_STACK; i += BPF_REG_SIZE) {
-		if (env->cur_state.stack_slot_type[i] == STACK_SPILL)
+		if (state->stack_slot_type[i] == STACK_SPILL)
			verbose(" fp%d=%s", -MAX_BPF_STACK + i,
-				reg_type_str[env->cur_state.spilled_regs[i / BPF_REG_SIZE].type]);
+				reg_type_str[state->spilled_regs[i / BPF_REG_SIZE].type]);
	}
	verbose("\n");
}
@@ -546,6 +575,8 @@ static bool is_spillable_regtype(enum bpf_reg_type type)
	case PTR_TO_MAP_VALUE_OR_NULL:
	case PTR_TO_STACK:
	case PTR_TO_CTX:
+	case PTR_TO_PACKET:
+	case PTR_TO_PACKET_END:
	case FRAME_PTR:
	case CONST_PTR_TO_MAP:
		return true;
@@ -645,6 +676,27 @@ static int check_map_access(struct verifier_env *env, u32 regno, int off,
	return 0;
}

#define MAX_PACKET_OFF 0xffff

static int check_packet_access(struct verifier_env *env, u32 regno, int off,
			       int size)
{
	struct reg_state *regs = env->cur_state.regs;
	struct reg_state *reg = &regs[regno];
	int linear_size = (int) reg->range - (int) reg->off;

	if (linear_size < 0 || linear_size >= MAX_PACKET_OFF) {
		verbose("verifier bug\n");
		return -EFAULT;
	}
	if (off < 0 || off + size > linear_size) {
		verbose("invalid access to packet, off=%d size=%d, allowed=%d\n",
			off, size, linear_size);
		return -EACCES;
	}
	return 0;
}
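As an example of what this rejects (a hypothetical snippet in the style of
the test cases added at the end of this series): after proving only 8
bytes, a 1-byte load at offset 8 has off + size = 9 > linear_size = 8 and
fails with the message above:

	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
		    offsetof(struct __sk_buff, data)),
	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
		    offsetof(struct __sk_buff, data_end)),
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
	/* only [r2, r2 + 8) is proven safe; off=8 size=1 exceeds it */
	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 8),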
/* check access to 'struct bpf_context' fields */
static int check_ctx_access(struct verifier_env *env, int off, int size,
			    enum bpf_access_type t)
@@ -675,6 +727,45 @@ static bool is_pointer_value(struct verifier_env *env, int regno)
	}
}

static int check_ptr_alignment(struct verifier_env *env, struct reg_state *reg,
			       int off, int size)
{
	if (reg->type != PTR_TO_PACKET) {
		if (off % size != 0) {
			verbose("misaligned access off %d size %d\n", off, size);
			return -EACCES;
		} else {
			return 0;
		}
	}

	switch (env->prog->type) {
	case BPF_PROG_TYPE_SCHED_CLS:
	case BPF_PROG_TYPE_SCHED_ACT:
		break;
	default:
		verbose("verifier is misconfigured\n");
		return -EACCES;
	}

	if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
		/* misaligned access to packet is ok on x86,arm,arm64 */
		return 0;

	if (reg->id && size != 1) {
		verbose("Unknown packet alignment. Only byte-sized access allowed\n");
		return -EACCES;
	}

	/* skb->data is NET_IP_ALIGN-ed */
	if ((NET_IP_ALIGN + reg->off + off) % size != 0) {
		verbose("misaligned packet access off %d+%d+%d size %d\n",
			NET_IP_ALIGN, reg->off, off, size);
		return -EACCES;
	}
	return 0;
}
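A worked example of the alignment arithmetic (assuming NET_IP_ALIGN == 2,
not part of the patch):

	/* reg->off = 14 (skb->data + sizeof(struct ethhdr)), 4-byte load:
	 *   off = 0:  (2 + 14 + 0) % 4 == 0  -> allowed
	 *   off = 2:  (2 + 14 + 2) % 4 == 2  ->
	 *             "misaligned packet access off 2+14+2 size 4"
	 */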
/* check whether memory at (regno + off) is accessible for t = (read | write)
 * if t==write, value_regno is a register which value is stored into memory
 * if t==read, value_regno is a register which will receive the value from memory
@@ -686,21 +777,21 @@ static int check_mem_access(struct verifier_env *env, u32 regno, int off,
			    int value_regno)
{
	struct verifier_state *state = &env->cur_state;
+	struct reg_state *reg = &state->regs[regno];
	int size, err = 0;

-	if (state->regs[regno].type == PTR_TO_STACK)
-		off += state->regs[regno].imm;
+	if (reg->type == PTR_TO_STACK)
+		off += reg->imm;

	size = bpf_size_to_bytes(bpf_size);
	if (size < 0)
		return size;

-	if (off % size != 0) {
-		verbose("misaligned access off %d size %d\n", off, size);
-		return -EACCES;
-	}
+	err = check_ptr_alignment(env, reg, off, size);
+	if (err)
+		return err;

-	if (state->regs[regno].type == PTR_TO_MAP_VALUE) {
+	if (reg->type == PTR_TO_MAP_VALUE) {
		if (t == BPF_WRITE && value_regno >= 0 &&
		    is_pointer_value(env, value_regno)) {
			verbose("R%d leaks addr into map\n", value_regno);

@@ -710,18 +801,25 @@ static int check_mem_access(struct verifier_env *env, u32 regno, int off,
		if (!err && t == BPF_READ && value_regno >= 0)
			mark_reg_unknown_value(state->regs, value_regno);

-	} else if (state->regs[regno].type == PTR_TO_CTX) {
+	} else if (reg->type == PTR_TO_CTX) {
		if (t == BPF_WRITE && value_regno >= 0 &&
		    is_pointer_value(env, value_regno)) {
			verbose("R%d leaks addr into ctx\n", value_regno);
			return -EACCES;
		}
		err = check_ctx_access(env, off, size, t);
-		if (!err && t == BPF_READ && value_regno >= 0)
+		if (!err && t == BPF_READ && value_regno >= 0) {
			mark_reg_unknown_value(state->regs, value_regno);
+			if (off == offsetof(struct __sk_buff, data) &&
+			    env->allow_ptr_leaks)
+				/* note that reg.[id|off|range] == 0 */
+				state->regs[value_regno].type = PTR_TO_PACKET;
+			else if (off == offsetof(struct __sk_buff, data_end) &&
+				 env->allow_ptr_leaks)
+				state->regs[value_regno].type = PTR_TO_PACKET_END;
+		}

-	} else if (state->regs[regno].type == FRAME_PTR ||
-		   state->regs[regno].type == PTR_TO_STACK) {
+	} else if (reg->type == FRAME_PTR || reg->type == PTR_TO_STACK) {
		if (off >= 0 || off < -MAX_BPF_STACK) {
			verbose("invalid stack off=%d size=%d\n", off, size);
			return -EACCES;

@@ -737,11 +835,28 @@ static int check_mem_access(struct verifier_env *env, u32 regno, int off,
		} else {
			err = check_stack_read(state, off, size, value_regno);
		}
+	} else if (state->regs[regno].type == PTR_TO_PACKET) {
+		if (t == BPF_WRITE) {
+			verbose("cannot write into packet\n");
+			return -EACCES;
+		}
+		err = check_packet_access(env, regno, off, size);
+		if (!err && t == BPF_READ && value_regno >= 0)
+			mark_reg_unknown_value(state->regs, value_regno);
	} else {
		verbose("R%d invalid mem access '%s'\n",
-			regno, reg_type_str[state->regs[regno].type]);
+			regno, reg_type_str[reg->type]);
		return -EACCES;
	}
+
+	if (!err && size <= 2 && value_regno >= 0 && env->allow_ptr_leaks &&
+	    state->regs[value_regno].type == UNKNOWN_VALUE) {
+		/* 1 or 2 byte load zero-extends, determine the number of
+		 * zero upper bits. Not doing it for 4 byte load, since
+		 * such values cannot be added to ptr_to_packet anyway.
+		 */
+		state->regs[value_regno].imm = 64 - size * 8;
+	}
	return err;
}
@@ -999,6 +1114,29 @@ static int check_raw_mode(const struct bpf_func_proto *fn)
	return count > 1 ? -EINVAL : 0;
}

static void clear_all_pkt_pointers(struct verifier_env *env)
{
	struct verifier_state *state = &env->cur_state;
	struct reg_state *regs = state->regs, *reg;
	int i;

	for (i = 0; i < MAX_BPF_REG; i++)
		if (regs[i].type == PTR_TO_PACKET ||
		    regs[i].type == PTR_TO_PACKET_END)
			mark_reg_unknown_value(regs, i);

	for (i = 0; i < MAX_BPF_STACK; i += BPF_REG_SIZE) {
		if (state->stack_slot_type[i] != STACK_SPILL)
			continue;
		reg = &state->spilled_regs[i / BPF_REG_SIZE];
		if (reg->type != PTR_TO_PACKET &&
		    reg->type != PTR_TO_PACKET_END)
			continue;
		reg->type = UNKNOWN_VALUE;
		reg->imm = 0;
	}
}
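The consequence for program authors (a sketch; it assumes
bpf_skb_store_bytes() is among the helpers flagged by
bpf_helper_changes_skb_data(), since it may reallocate the skb head):
packet pointers must be re-derived and re-checked after such a call:

	void *data = (void *)(long)skb->data;
	void *data_end = (void *)(long)skb->data_end;
	__u32 val = 0;

	if (data + 14 > data_end)
		return 0;
	bpf_skb_store_bytes(skb, 0, &val, sizeof(val), 0);
	/* all pkt pointers are now UNKNOWN_VALUE; start over */
	data = (void *)(long)skb->data;
	data_end = (void *)(long)skb->data_end;
	if (data + 14 > data_end)
		return 0;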
static int check_call(struct verifier_env *env, int func_id)
{
	struct verifier_state *state = &env->cur_state;

@@ -1006,6 +1144,7 @@ static int check_call(struct verifier_env *env, int func_id)
	struct reg_state *regs = state->regs;
	struct reg_state *reg;
	struct bpf_call_arg_meta meta;
+	bool changes_data;
	int i, err;

	/* find function prototype */

@@ -1028,6 +1167,8 @@ static int check_call(struct verifier_env *env, int func_id)
		return -EINVAL;
	}

+	changes_data = bpf_helper_changes_skb_data(fn->func);
+
	memset(&meta, 0, sizeof(meta));

	/* We only support one arg being in raw mode at the moment, which

@@ -1098,13 +1239,196 @@ static int check_call(struct verifier_env *env, int func_id)
	if (err)
		return err;

+	if (changes_data)
+		clear_all_pkt_pointers(env);
	return 0;
}
static int check_packet_ptr_add(struct verifier_env *env, struct bpf_insn *insn)
{
	struct reg_state *regs = env->cur_state.regs;
	struct reg_state *dst_reg = &regs[insn->dst_reg];
	struct reg_state *src_reg = &regs[insn->src_reg];
	s32 imm;

	if (BPF_SRC(insn->code) == BPF_K) {
		/* pkt_ptr += imm */
		imm = insn->imm;

add_imm:
		if (imm <= 0) {
			verbose("addition of negative constant to packet pointer is not allowed\n");
			return -EACCES;
		}
		if (imm >= MAX_PACKET_OFF ||
		    imm + dst_reg->off >= MAX_PACKET_OFF) {
			verbose("constant %d is too large to add to packet pointer\n",
				imm);
			return -EACCES;
		}
		/* a constant was added to pkt_ptr.
		 * Remember it while keeping the same 'id'
		 */
		dst_reg->off += imm;
	} else {
		if (src_reg->type == CONST_IMM) {
			/* pkt_ptr += reg where reg is known constant */
			imm = src_reg->imm;
			goto add_imm;
		}
		/* disallow pkt_ptr += reg
		 * if reg is not unknown_value with guaranteed zero upper bits
		 * otherwise pkt_ptr may overflow and addition will become
		 * subtraction which is not allowed
		 */
		if (src_reg->type != UNKNOWN_VALUE) {
			verbose("cannot add '%s' to ptr_to_packet\n",
				reg_type_str[src_reg->type]);
			return -EACCES;
		}
		if (src_reg->imm < 48) {
			verbose("cannot add integer value with %lld upper zero bits to ptr_to_packet\n",
				src_reg->imm);
			return -EACCES;
		}
		/* dst_reg stays as pkt_ptr type and since some positive
		 * integer value was added to the pointer, increment its 'id'
		 */
		dst_reg->id++;

		/* something was added to pkt_ptr, set range and off to zero */
		dst_reg->off = 0;
		dst_reg->range = 0;
	}
	return 0;
}
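For instance (hypothetical C that llvm would lower to packet-pointer adds):
stepping forward by a positive constant keeps the pointer valid and bumps
'off', while stepping backwards is caught by the imm <= 0 branch above:

	void *data = (void *)(long)skb->data;
	/* data + 14: positive constant, dst_reg->off becomes 14 */
	struct iphdr *iph = data + 14;
	/* (void *)iph - 2 would be 'pkt_ptr += -2' and is rejected:
	 * "addition of negative constant to packet pointer is not allowed"
	 */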
static int evaluate_reg_alu(struct verifier_env *env, struct bpf_insn *insn)
{
	struct reg_state *regs = env->cur_state.regs;
	struct reg_state *dst_reg = &regs[insn->dst_reg];
	u8 opcode = BPF_OP(insn->code);
	s64 imm_log2;

	/* for type == UNKNOWN_VALUE:
	 * imm > 0 -> number of zero upper bits
	 * imm == 0 -> don't track which is the same as all bits can be non-zero
	 */

	if (BPF_SRC(insn->code) == BPF_X) {
		struct reg_state *src_reg = &regs[insn->src_reg];

		if (src_reg->type == UNKNOWN_VALUE && src_reg->imm > 0 &&
		    dst_reg->imm && opcode == BPF_ADD) {
			/* dreg += sreg
			 * where both have zero upper bits. Adding them
			 * can only result making one more bit non-zero
			 * in the larger value.
			 * Ex. 0xffff (imm=48) + 1 (imm=63) = 0x10000 (imm=47)
			 *     0xffff (imm=48) + 0xffff = 0x1fffe (imm=47)
			 */
			dst_reg->imm = min(dst_reg->imm, src_reg->imm);
			dst_reg->imm--;
			return 0;
		}
		if (src_reg->type == CONST_IMM && src_reg->imm > 0 &&
		    dst_reg->imm && opcode == BPF_ADD) {
			/* dreg += sreg
			 * where dreg has zero upper bits and sreg is const.
			 * Adding them can only result making one more bit
			 * non-zero in the larger value.
			 */
			imm_log2 = __ilog2_u64((long long)src_reg->imm);
			dst_reg->imm = min(dst_reg->imm, 63 - imm_log2);
			dst_reg->imm--;
			return 0;
		}
		/* all other cases are not supported yet, just mark dst_reg */
		dst_reg->imm = 0;
		return 0;
	}

	/* sign extend 32-bit imm into 64-bit to make sure that
	 * negative values occupy bit 63. Note ilog2() would have
	 * been incorrect, since sizeof(insn->imm) == 4
	 */
	imm_log2 = __ilog2_u64((long long)insn->imm);

	if (dst_reg->imm && opcode == BPF_LSH) {
		/* reg <<= imm
		 * if reg was a result of 2 byte load, then its imm == 48
		 * which means that upper 48 bits are zero and shifting this reg
		 * left by 4 would mean that upper 44 bits are still zero
		 */
		dst_reg->imm -= insn->imm;
	} else if (dst_reg->imm && opcode == BPF_MUL) {
		/* reg *= imm
		 * if multiplying by 14 subtract 4
		 * This is conservative calculation of upper zero bits.
		 * It's not trying to special case insn->imm == 1 or 0 cases
		 */
		dst_reg->imm -= imm_log2 + 1;
	} else if (opcode == BPF_AND) {
		/* reg &= imm */
		dst_reg->imm = 63 - imm_log2;
	} else if (dst_reg->imm && opcode == BPF_ADD) {
		/* reg += imm */
		dst_reg->imm = min(dst_reg->imm, 63 - imm_log2);
		dst_reg->imm--;
	} else if (opcode == BPF_RSH) {
		/* reg >>= imm
		 * which means that after right shift, upper bits will be zero
		 * note that verifier already checked that
		 * 0 <= imm < 64 for shift insn
		 */
		dst_reg->imm += insn->imm;
		if (unlikely(dst_reg->imm > 64))
			/* some dumb code did:
			 * r2 = *(u32 *)mem;
			 * r2 >>= 32;
			 * and all bits are zero now */
			dst_reg->imm = 64;
	} else {
		/* all other alu ops, means that we don't know what will
		 * happen to the value, mark it with unknown number of zero bits
		 */
		dst_reg->imm = 0;
	}

	if (dst_reg->imm < 0) {
		/* all 64 bits of the register can contain non-zero bits
		 * and such value cannot be added to ptr_to_packet, since it
		 * may overflow, mark it as unknown to avoid further eval
		 */
		dst_reg->imm = 0;
	}
	return 0;
}
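Tracing insns #7-#13 of the documentation example through this function (a
recap; imm counts the guaranteed zero upper bits):

	r4 = *(u8 *)(r3 +12)	/* 1-byte load: imm = 64 - 8 = 56, inv56 */
	r4 *= 14		/* BPF_MUL: 56 - (__ilog2_u64(14) + 1) = 52, inv52 */
	r2 = r1; r2 <<= 48	/* shift of a ctx pointer: r2 unknown, imm = 0 */
	r2 >>= 48		/* BPF_RSH: imm += 48, inv48 - safe to add */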
static int evaluate_reg_imm_alu(struct verifier_env *env, struct bpf_insn *insn)
{
	struct reg_state *regs = env->cur_state.regs;
	struct reg_state *dst_reg = &regs[insn->dst_reg];
	struct reg_state *src_reg = &regs[insn->src_reg];
	u8 opcode = BPF_OP(insn->code);

	/* dst_reg->type == CONST_IMM here, simulate execution of 'add' insn.
	 * Don't care about overflow or negative values, just add them
	 */
	if (opcode == BPF_ADD && BPF_SRC(insn->code) == BPF_K)
		dst_reg->imm += insn->imm;
	else if (opcode == BPF_ADD && BPF_SRC(insn->code) == BPF_X &&
		 src_reg->type == CONST_IMM)
		dst_reg->imm += src_reg->imm;
	else
		mark_reg_unknown_value(regs, insn->dst_reg);
	return 0;
}
/* check validity of 32-bit and 64-bit arithmetic operations */
static int check_alu_op(struct verifier_env *env, struct bpf_insn *insn)
{
-	struct reg_state *regs = env->cur_state.regs;
+	struct reg_state *regs = env->cur_state.regs, *dst_reg;
	u8 opcode = BPF_OP(insn->code);
	int err;

@@ -1193,8 +1517,6 @@ static int check_alu_op(struct verifier_env *env, struct bpf_insn *insn)

	} else {	/* all other ALU ops: and, sub, xor, add, ... */

-		bool stack_relative = false;
-
		if (BPF_SRC(insn->code) == BPF_X) {
			if (insn->imm != 0 || insn->off != 0) {
				verbose("BPF_ALU uses reserved fields\n");

@@ -1232,11 +1554,34 @@ static int check_alu_op(struct verifier_env *env, struct bpf_insn *insn)
			}
		}

+		/* check dest operand */
+		err = check_reg_arg(regs, insn->dst_reg, DST_OP_NO_MARK);
+		if (err)
+			return err;
+
+		dst_reg = &regs[insn->dst_reg];
+
		/* pattern match 'bpf_add Rx, imm' instruction */
		if (opcode == BPF_ADD && BPF_CLASS(insn->code) == BPF_ALU64 &&
-		    regs[insn->dst_reg].type == FRAME_PTR &&
-		    BPF_SRC(insn->code) == BPF_K) {
-			stack_relative = true;
+		    dst_reg->type == FRAME_PTR && BPF_SRC(insn->code) == BPF_K) {
+			dst_reg->type = PTR_TO_STACK;
+			dst_reg->imm = insn->imm;
+			return 0;
+		} else if (opcode == BPF_ADD &&
+			   BPF_CLASS(insn->code) == BPF_ALU64 &&
+			   dst_reg->type == PTR_TO_PACKET) {
+			/* ptr_to_packet += K|X */
+			return check_packet_ptr_add(env, insn);
+		} else if (BPF_CLASS(insn->code) == BPF_ALU64 &&
+			   dst_reg->type == UNKNOWN_VALUE &&
+			   env->allow_ptr_leaks) {
+			/* unknown += K|X */
+			return evaluate_reg_alu(env, insn);
+		} else if (BPF_CLASS(insn->code) == BPF_ALU64 &&
+			   dst_reg->type == CONST_IMM &&
+			   env->allow_ptr_leaks) {
+			/* reg_imm += K|X */
+			return evaluate_reg_imm_alu(env, insn);
		} else if (is_pointer_value(env, insn->dst_reg)) {
			verbose("R%d pointer arithmetic prohibited\n",
				insn->dst_reg);
@@ -1248,24 +1593,45 @@ static int check_alu_op(struct verifier_env *env, struct bpf_insn *insn)
			return -EACCES;
		}

-		/* check dest operand */
-		err = check_reg_arg(regs, insn->dst_reg, DST_OP);
-		if (err)
-			return err;
-
-		if (stack_relative) {
-			regs[insn->dst_reg].type = PTR_TO_STACK;
-			regs[insn->dst_reg].imm = insn->imm;
-		}
+		/* mark dest operand */
+		mark_reg_unknown_value(regs, insn->dst_reg);
	}

	return 0;
}
static void find_good_pkt_pointers(struct verifier_env *env,
				   struct reg_state *dst_reg)
{
	struct verifier_state *state = &env->cur_state;
	struct reg_state *regs = state->regs, *reg;
	int i;

	/* r2 = r3;
	 * r2 += 8
	 * if (r2 > pkt_end) goto somewhere
	 * r2 == dst_reg, pkt_end == src_reg,
	 * r2=pkt(id=n,off=8,r=0)
	 * r3=pkt(id=n,off=0,r=0)
	 * find register r3 and mark its range as r3=pkt(id=n,off=0,r=8)
	 * so that range of bytes [r3, r3 + 8) is safe to access
	 */
	for (i = 0; i < MAX_BPF_REG; i++)
		if (regs[i].type == PTR_TO_PACKET && regs[i].id == dst_reg->id)
			regs[i].range = dst_reg->off;

	for (i = 0; i < MAX_BPF_STACK; i += BPF_REG_SIZE) {
		if (state->stack_slot_type[i] != STACK_SPILL)
			continue;
		reg = &state->spilled_regs[i / BPF_REG_SIZE];
		if (reg->type == PTR_TO_PACKET && reg->id == dst_reg->id)
			reg->range = dst_reg->off;
	}
}
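In C terms, this is the pattern being recognized (a sketch of the source
that produces the insns in the comment above):

	void *data = (void *)(long)skb->data;	/* r3 = pkt(id=n,off=0,r=0) */
	void *data_end = (void *)(long)skb->data_end;

	if (data + 8 > data_end)	/* r2 = r3 + 8; if (r2 > pkt_end) */
		return 0;
	/* fall-through: every register carrying id=n gains range 8,
	 * so bytes [data, data + 8) are now directly readable
	 */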
|
||||
static int check_cond_jmp_op(struct verifier_env *env,
|
||||
struct bpf_insn *insn, int *insn_idx)
|
||||
{
|
||||
struct reg_state *regs = env->cur_state.regs;
|
||||
struct reg_state *regs = env->cur_state.regs, *dst_reg;
|
||||
struct verifier_state *other_branch;
|
||||
u8 opcode = BPF_OP(insn->code);
|
||||
int err;
|
||||
|
@ -1303,11 +1669,12 @@ static int check_cond_jmp_op(struct verifier_env *env,
|
|||
if (err)
|
||||
return err;
|
||||
|
||||
dst_reg = ®s[insn->dst_reg];
|
||||
|
||||
/* detect if R == 0 where R was initialized to zero earlier */
|
||||
if (BPF_SRC(insn->code) == BPF_K &&
|
||||
(opcode == BPF_JEQ || opcode == BPF_JNE) &&
|
||||
regs[insn->dst_reg].type == CONST_IMM &&
|
||||
regs[insn->dst_reg].imm == insn->imm) {
|
||||
dst_reg->type == CONST_IMM && dst_reg->imm == insn->imm) {
|
||||
if (opcode == BPF_JEQ) {
|
||||
/* if (imm == imm) goto pc+off;
|
||||
* only follow the goto, ignore fall-through
|
||||
|
@ -1329,44 +1696,30 @@ static int check_cond_jmp_op(struct verifier_env *env,
|
|||
|
||||
/* detect if R == 0 where R is returned value from bpf_map_lookup_elem() */
|
||||
if (BPF_SRC(insn->code) == BPF_K &&
|
||||
insn->imm == 0 && (opcode == BPF_JEQ ||
|
||||
opcode == BPF_JNE) &&
|
||||
regs[insn->dst_reg].type == PTR_TO_MAP_VALUE_OR_NULL) {
|
||||
insn->imm == 0 && (opcode == BPF_JEQ || opcode == BPF_JNE) &&
|
||||
dst_reg->type == PTR_TO_MAP_VALUE_OR_NULL) {
|
||||
if (opcode == BPF_JEQ) {
|
||||
/* next fallthrough insn can access memory via
|
||||
* this register
|
||||
*/
|
||||
regs[insn->dst_reg].type = PTR_TO_MAP_VALUE;
|
||||
/* branch targer cannot access it, since reg == 0 */
|
||||
other_branch->regs[insn->dst_reg].type = CONST_IMM;
|
||||
other_branch->regs[insn->dst_reg].imm = 0;
|
||||
mark_reg_unknown_value(other_branch->regs,
|
||||
insn->dst_reg);
|
||||
} else {
|
||||
other_branch->regs[insn->dst_reg].type = PTR_TO_MAP_VALUE;
|
||||
regs[insn->dst_reg].type = CONST_IMM;
|
||||
regs[insn->dst_reg].imm = 0;
|
||||
mark_reg_unknown_value(regs, insn->dst_reg);
|
||||
}
|
||||
} else if (BPF_SRC(insn->code) == BPF_X && opcode == BPF_JGT &&
|
||||
dst_reg->type == PTR_TO_PACKET &&
|
||||
regs[insn->src_reg].type == PTR_TO_PACKET_END) {
|
||||
find_good_pkt_pointers(env, dst_reg);
|
||||
} else if (is_pointer_value(env, insn->dst_reg)) {
|
||||
verbose("R%d pointer comparison prohibited\n", insn->dst_reg);
|
||||
return -EACCES;
|
||||
} else if (BPF_SRC(insn->code) == BPF_K &&
|
||||
(opcode == BPF_JEQ || opcode == BPF_JNE)) {
|
||||
|
||||
if (opcode == BPF_JEQ) {
|
||||
/* detect if (R == imm) goto
|
||||
* and in the target state recognize that R = imm
|
||||
*/
|
||||
other_branch->regs[insn->dst_reg].type = CONST_IMM;
|
||||
other_branch->regs[insn->dst_reg].imm = insn->imm;
|
||||
} else {
|
||||
/* detect if (R != imm) goto
|
||||
* and in the fall-through state recognize that R = imm
|
||||
*/
|
||||
regs[insn->dst_reg].type = CONST_IMM;
|
||||
regs[insn->dst_reg].imm = insn->imm;
|
||||
}
|
||||
}
|
||||
if (log_level)
|
||||
print_verifier_state(env);
|
||||
print_verifier_state(&env->cur_state);
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@@ -1444,14 +1797,14 @@ static int check_ld_abs(struct verifier_env *env, struct bpf_insn *insn)
	int i, err;

	if (!may_access_skb(env->prog->type)) {
-		verbose("BPF_LD_ABS|IND instructions not allowed for this program type\n");
+		verbose("BPF_LD_[ABS|IND] instructions not allowed for this program type\n");
		return -EINVAL;
	}

	if (insn->dst_reg != BPF_REG_0 || insn->off != 0 ||
	    BPF_SIZE(insn->code) == BPF_DW ||
	    (mode == BPF_ABS && insn->src_reg != BPF_REG_0)) {
-		verbose("BPF_LD_ABS uses reserved fields\n");
+		verbose("BPF_LD_[ABS|IND] uses reserved fields\n");
		return -EINVAL;
	}
@@ -1684,6 +2037,58 @@ static int check_cfg(struct verifier_env *env)
	return ret;
}

/* the following conditions reduce the number of explored insns
 * from ~140k to ~80k for ultra large programs that use a lot of ptr_to_packet
 */
static bool compare_ptrs_to_packet(struct reg_state *old, struct reg_state *cur)
{
	if (old->id != cur->id)
		return false;

	/* old ptr_to_packet is more conservative, since it allows smaller
	 * range. Ex:
	 * old(off=0,r=10) is equal to cur(off=0,r=20), because
	 * old(off=0,r=10) means that with range=10 the verifier proceeded
	 * further and found no issues with the program. Now we're in the same
	 * spot with cur(off=0,r=20), so we're safe too, since anything further
	 * will only be looking at most 10 bytes after this pointer.
	 */
	if (old->off == cur->off && old->range < cur->range)
		return true;

	/* old(off=20,r=10) is equal to cur(off=22,r=22 or 5 or 0)
	 * since both cannot be used for packet access and safe(old)
	 * pointer has smaller off that could be used for further
	 * 'if (ptr > data_end)' check
	 * Ex:
	 * old(off=20,r=10) and cur(off=22,r=22) and cur(off=22,r=0) mean
	 * that we cannot access the packet.
	 * The safe range is:
	 * [ptr, ptr + range - off)
	 * so whenever off >= range, it means no safe bytes from this pointer.
	 * When comparing old->off <= cur->off, it means that older code
	 * went with smaller offset and that offset was later
	 * used to figure out the safe range after 'if (ptr > data_end)' check
	 * Say, 'old' state was explored like:
	 * ... R3(off=0, r=0)
	 * R4 = R3 + 20
	 * ... now R4(off=20,r=0)  <-- here
	 * if (R4 > data_end)
	 * ... R4(off=20,r=20), R3(off=0,r=20) and R3 can be used to access.
	 * ... the code further went all the way to bpf_exit.
	 * Now the 'cur' state at the mark 'here' has R4(off=30,r=0).
	 * old_R4(off=20,r=0) equal to cur_R4(off=30,r=0), since if the verifier
	 * goes further, such cur_R4 will give larger safe packet range after
	 * 'if (R4 > data_end)' and all further insn were already good with r=20,
	 * so they will be good with r=30 and we can prune the search.
	 */
	if (old->off <= cur->off &&
	    old->off >= old->range && cur->off >= cur->range)
		return true;

	return false;
}
/* compare two verifier states
 *
 * all states stored in state_list are known to be valid, since

@@ -1712,17 +2117,25 @@ static int check_cfg(struct verifier_env *env)
 */
static bool states_equal(struct verifier_state *old, struct verifier_state *cur)
{
+	struct reg_state *rold, *rcur;
	int i;

	for (i = 0; i < MAX_BPF_REG; i++) {
-		if (memcmp(&old->regs[i], &cur->regs[i],
-			   sizeof(old->regs[0])) != 0) {
-			if (old->regs[i].type == NOT_INIT ||
-			    (old->regs[i].type == UNKNOWN_VALUE &&
-			     cur->regs[i].type != NOT_INIT))
-				continue;
-			return false;
-		}
+		rold = &old->regs[i];
+		rcur = &cur->regs[i];
+
+		if (memcmp(rold, rcur, sizeof(*rold)) == 0)
+			continue;
+
+		if (rold->type == NOT_INIT ||
+		    (rold->type == UNKNOWN_VALUE && rcur->type != NOT_INIT))
+			continue;
+
+		if (rold->type == PTR_TO_PACKET && rcur->type == PTR_TO_PACKET &&
+		    compare_ptrs_to_packet(rold, rcur))
+			continue;
+
+		return false;
	}

	for (i = 0; i < MAX_BPF_STACK; i++) {
@@ -1844,7 +2257,7 @@ static int do_check(struct verifier_env *env)

		if (log_level && do_print_state) {
			verbose("\nfrom %d to %d:", prev_insn_idx, insn_idx);
-			print_verifier_state(env);
+			print_verifier_state(&env->cur_state);
			do_print_state = false;
		}

@@ -2056,6 +2469,7 @@ static int do_check(struct verifier_env *env)
		insn_idx++;
	}

+	verbose("processed %d insns\n", insn_processed);
	return 0;
}
net/core/filter.c:

@@ -1344,6 +1344,21 @@ struct bpf_scratchpad {

static DEFINE_PER_CPU(struct bpf_scratchpad, bpf_sp);

+static inline int bpf_try_make_writable(struct sk_buff *skb,
+					unsigned int write_len)
+{
+	int err;
+
+	if (!skb_cloned(skb))
+		return 0;
+	if (skb_clone_writable(skb, write_len))
+		return 0;
+	err = pskb_expand_head(skb, 0, 0, GFP_ATOMIC);
+	if (!err)
+		bpf_compute_data_end(skb);
+	return err;
+}
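The recompute matters because pskb_expand_head() may reallocate the skb
head, which would leave the data_end cached in skb->cb pointing into freed
memory. As a sketch of the intended usage (bpf_example_store is a
hypothetical helper, not part of this patch), anything that may touch the
linear data calls this first:

	static u64 bpf_example_store(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
	{
		struct sk_buff *skb = (struct sk_buff *) (long) r1;

		if (unlikely(bpf_try_make_writable(skb, 16)))
			return -EFAULT;
		/* skb head is private now and cb->data_end is up to date */
		return 0;
	}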
static u64 bpf_skb_store_bytes(u64 r1, u64 r2, u64 r3, u64 r4, u64 flags)
{
	struct bpf_scratchpad *sp = this_cpu_ptr(&bpf_sp);

@@ -1366,7 +1381,7 @@ static u64 bpf_skb_store_bytes(u64 r1, u64 r2, u64 r3, u64 r4, u64 flags)
	 */
	if (unlikely((u32) offset > 0xffff || len > sizeof(sp->buff)))
		return -EFAULT;
-	if (unlikely(skb_try_make_writable(skb, offset + len)))
+	if (unlikely(bpf_try_make_writable(skb, offset + len)))
		return -EFAULT;

	ptr = skb_header_pointer(skb, offset, len, sp->buff);

@@ -1444,7 +1459,7 @@ static u64 bpf_l3_csum_replace(u64 r1, u64 r2, u64 from, u64 to, u64 flags)
		return -EINVAL;
	if (unlikely((u32) offset > 0xffff))
		return -EFAULT;
-	if (unlikely(skb_try_make_writable(skb, offset + sizeof(sum))))
+	if (unlikely(bpf_try_make_writable(skb, offset + sizeof(sum))))
		return -EFAULT;

	ptr = skb_header_pointer(skb, offset, sizeof(sum), &sum);

@@ -1499,7 +1514,7 @@ static u64 bpf_l4_csum_replace(u64 r1, u64 r2, u64 from, u64 to, u64 flags)
		return -EINVAL;
	if (unlikely((u32) offset > 0xffff))
		return -EFAULT;
-	if (unlikely(skb_try_make_writable(skb, offset + sizeof(sum))))
+	if (unlikely(bpf_try_make_writable(skb, offset + sizeof(sum))))
		return -EFAULT;

	ptr = skb_header_pointer(skb, offset, sizeof(sum), &sum);
@@ -1699,12 +1714,15 @@ static u64 bpf_skb_vlan_push(u64 r1, u64 r2, u64 vlan_tci, u64 r4, u64 r5)
{
	struct sk_buff *skb = (struct sk_buff *) (long) r1;
	__be16 vlan_proto = (__force __be16) r2;
+	int ret;

	if (unlikely(vlan_proto != htons(ETH_P_8021Q) &&
		     vlan_proto != htons(ETH_P_8021AD)))
		vlan_proto = htons(ETH_P_8021Q);

-	return skb_vlan_push(skb, vlan_proto, vlan_tci);
+	ret = skb_vlan_push(skb, vlan_proto, vlan_tci);
+	bpf_compute_data_end(skb);
+	return ret;
}

const struct bpf_func_proto bpf_skb_vlan_push_proto = {

@@ -1720,8 +1738,11 @@ EXPORT_SYMBOL_GPL(bpf_skb_vlan_push_proto);
static u64 bpf_skb_vlan_pop(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
{
	struct sk_buff *skb = (struct sk_buff *) (long) r1;
+	int ret;

-	return skb_vlan_pop(skb);
+	ret = skb_vlan_pop(skb);
+	bpf_compute_data_end(skb);
+	return ret;
}

const struct bpf_func_proto bpf_skb_vlan_pop_proto = {
@@ -2066,8 +2087,12 @@ static bool __is_valid_access(int off, int size, enum bpf_access_type type)
static bool sk_filter_is_valid_access(int off, int size,
				      enum bpf_access_type type)
{
-	if (off == offsetof(struct __sk_buff, tc_classid))
+	switch (off) {
+	case offsetof(struct __sk_buff, tc_classid):
+	case offsetof(struct __sk_buff, data):
+	case offsetof(struct __sk_buff, data_end):
		return false;
+	}

	if (type == BPF_WRITE) {
		switch (off) {
@@ -2215,6 +2240,20 @@ static u32 bpf_net_convert_ctx_access(enum bpf_access_type type, int dst_reg,
		*insn++ = BPF_LDX_MEM(BPF_H, dst_reg, src_reg, ctx_off);
		break;

+	case offsetof(struct __sk_buff, data):
+		*insn++ = BPF_LDX_MEM(bytes_to_bpf_size(FIELD_SIZEOF(struct sk_buff, data)),
+				      dst_reg, src_reg,
+				      offsetof(struct sk_buff, data));
+		break;
+
+	case offsetof(struct __sk_buff, data_end):
+		ctx_off -= offsetof(struct __sk_buff, data_end);
+		ctx_off += offsetof(struct sk_buff, cb);
+		ctx_off += offsetof(struct bpf_skb_data_end, data_end);
+		*insn++ = BPF_LDX_MEM(bytes_to_bpf_size(sizeof(void *)),
+				      dst_reg, src_reg, ctx_off);
+		break;
+
	case offsetof(struct __sk_buff, tc_index):
#ifdef CONFIG_NET_SCHED
		BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, tc_index) != 2);
net/sched/act_bpf.c:

@@ -53,9 +53,11 @@ static int tcf_bpf(struct sk_buff *skb, const struct tc_action *act,
	filter = rcu_dereference(prog->filter);
	if (at_ingress) {
		__skb_push(skb, skb->mac_len);
+		bpf_compute_data_end(skb);
		filter_res = BPF_PROG_RUN(filter, skb);
		__skb_pull(skb, skb->mac_len);
	} else {
+		bpf_compute_data_end(skb);
		filter_res = BPF_PROG_RUN(filter, skb);
	}
	rcu_read_unlock();
net/sched/cls_bpf.c:

@@ -96,9 +96,11 @@ static int cls_bpf_classify(struct sk_buff *skb, const struct tcf_proto *tp,
	if (at_ingress) {
		/* It is safe to push/pull even if skb_shared() */
		__skb_push(skb, skb->mac_len);
+		bpf_compute_data_end(skb);
		filter_res = BPF_PROG_RUN(prog->filter, skb);
		__skb_pull(skb, skb->mac_len);
	} else {
+		bpf_compute_data_end(skb);
		filter_res = BPF_PROG_RUN(prog->filter, skb);
	}
samples/bpf/Makefile:

@@ -60,6 +60,7 @@ always += spintest_kern.o
always += map_perf_test_kern.o
always += test_overhead_tp_kern.o
always += test_overhead_kprobe_kern.o
+always += parse_varlen.o parse_simple.o parse_ldabs.o

HOSTCFLAGS += -I$(objtree)/usr/include

@@ -120,4 +121,5 @@ $(src)/*.c: verify_target_bpf
$(obj)/%.o: $(src)/%.c
	$(CLANG) $(NOSTDINC_FLAGS) $(LINUXINCLUDE) $(EXTRA_CFLAGS) \
		-D__KERNEL__ -D__ASM_SYSREG_H -Wno-unused-value -Wno-pointer-sign \
+		-Wno-compare-distinct-pointer-types \
		-O2 -emit-llvm -c $< -o -| $(LLC) -march=bpf -filetype=obj -o $@
samples/bpf/parse_ldabs.c (new file, 41 lines):

@@ -0,0 +1,41 @@
/* Copyright (c) 2016 Facebook
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of version 2 of the GNU General Public
 * License as published by the Free Software Foundation.
 */
#include <linux/ip.h>
#include <linux/ipv6.h>
#include <linux/in.h>
#include <linux/tcp.h>
#include <linux/udp.h>
#include <uapi/linux/bpf.h>
#include "bpf_helpers.h"

#define DEFAULT_PKTGEN_UDP_PORT	9
#define IP_MF			0x2000
#define IP_OFFSET		0x1FFF

static inline int ip_is_fragment(struct __sk_buff *ctx, __u64 nhoff)
{
	return load_half(ctx, nhoff + offsetof(struct iphdr, frag_off))
		& (IP_MF | IP_OFFSET);
}

SEC("ldabs")
int handle_ingress(struct __sk_buff *skb)
{
	__u64 troff = ETH_HLEN + sizeof(struct iphdr);

	if (load_half(skb, offsetof(struct ethhdr, h_proto)) != ETH_P_IP)
		return 0;
	if (load_byte(skb, ETH_HLEN + offsetof(struct iphdr, protocol)) != IPPROTO_UDP ||
	    load_byte(skb, ETH_HLEN) != 0x45)
		return 0;
	if (ip_is_fragment(skb, ETH_HLEN))
		return 0;
	if (load_half(skb, troff + offsetof(struct udphdr, dest)) == DEFAULT_PKTGEN_UDP_PORT)
		return TC_ACT_SHOT;
	return 0;
}
char _license[] SEC("license") = "GPL";
samples/bpf/parse_simple.c (new file, 48 lines):

@@ -0,0 +1,48 @@
/* Copyright (c) 2016 Facebook
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of version 2 of the GNU General Public
 * License as published by the Free Software Foundation.
 */
#include <linux/ip.h>
#include <linux/ipv6.h>
#include <linux/in.h>
#include <linux/tcp.h>
#include <linux/udp.h>
#include <uapi/linux/bpf.h>
#include <net/ip.h>
#include "bpf_helpers.h"

#define DEFAULT_PKTGEN_UDP_PORT 9

/* copy of 'struct ethhdr' without __packed */
struct eth_hdr {
	unsigned char   h_dest[ETH_ALEN];
	unsigned char   h_source[ETH_ALEN];
	unsigned short  h_proto;
};

SEC("simple")
int handle_ingress(struct __sk_buff *skb)
{
	void *data = (void *)(long)skb->data;
	struct eth_hdr *eth = data;
	struct iphdr *iph = data + sizeof(*eth);
	struct udphdr *udp = data + sizeof(*eth) + sizeof(*iph);
	void *data_end = (void *)(long)skb->data_end;

	/* single length check */
	if (data + sizeof(*eth) + sizeof(*iph) + sizeof(*udp) > data_end)
		return 0;

	if (eth->h_proto != htons(ETH_P_IP))
		return 0;
	if (iph->protocol != IPPROTO_UDP || iph->ihl != 5)
		return 0;
	if (ip_is_fragment(iph))
		return 0;
	if (udp->dest == htons(DEFAULT_PKTGEN_UDP_PORT))
		return TC_ACT_SHOT;
	return 0;
}
char _license[] SEC("license") = "GPL";
samples/bpf/parse_varlen.c (new file, 153 lines):

@@ -0,0 +1,153 @@
/* Copyright (c) 2016 Facebook
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of version 2 of the GNU General Public
 * License as published by the Free Software Foundation.
 */
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/ipv6.h>
#include <linux/in.h>
#include <linux/tcp.h>
#include <linux/udp.h>
#include <uapi/linux/bpf.h>
#include <net/ip.h>
#include "bpf_helpers.h"

#define DEFAULT_PKTGEN_UDP_PORT 9
#define DEBUG 0

static int tcp(void *data, uint64_t tp_off, void *data_end)
{
	struct tcphdr *tcp = data + tp_off;

	if (tcp + 1 > data_end)
		return 0;
	if (tcp->dest == htons(80) || tcp->source == htons(80))
		return TC_ACT_SHOT;
	return 0;
}

static int udp(void *data, uint64_t tp_off, void *data_end)
{
	struct udphdr *udp = data + tp_off;

	if (udp + 1 > data_end)
		return 0;
	if (udp->dest == htons(DEFAULT_PKTGEN_UDP_PORT) ||
	    udp->source == htons(DEFAULT_PKTGEN_UDP_PORT)) {
		if (DEBUG) {
			char fmt[] = "udp port 9 indeed\n";

			bpf_trace_printk(fmt, sizeof(fmt));
		}
		return TC_ACT_SHOT;
	}
	return 0;
}

static int parse_ipv4(void *data, uint64_t nh_off, void *data_end)
{
	struct iphdr *iph;
	uint64_t ihl_len;

	iph = data + nh_off;
	if (iph + 1 > data_end)
		return 0;

	if (ip_is_fragment(iph))
		return 0;
	ihl_len = iph->ihl * 4;

	if (iph->protocol == IPPROTO_IPIP) {
		iph = data + nh_off + ihl_len;
		if (iph + 1 > data_end)
			return 0;
		ihl_len += iph->ihl * 4;
	}

	if (iph->protocol == IPPROTO_TCP)
		return tcp(data, nh_off + ihl_len, data_end);
	else if (iph->protocol == IPPROTO_UDP)
		return udp(data, nh_off + ihl_len, data_end);
	return 0;
}

static int parse_ipv6(void *data, uint64_t nh_off, void *data_end)
{
	struct ipv6hdr *ip6h;
	struct iphdr *iph;
	uint64_t ihl_len = sizeof(struct ipv6hdr);
	uint64_t nexthdr;

	ip6h = data + nh_off;
	if (ip6h + 1 > data_end)
		return 0;

	nexthdr = ip6h->nexthdr;

	if (nexthdr == IPPROTO_IPIP) {
		iph = data + nh_off + ihl_len;
		if (iph + 1 > data_end)
			return 0;
		ihl_len += iph->ihl * 4;
		nexthdr = iph->protocol;
	} else if (nexthdr == IPPROTO_IPV6) {
		ip6h = data + nh_off + ihl_len;
		if (ip6h + 1 > data_end)
			return 0;
		ihl_len += sizeof(struct ipv6hdr);
		nexthdr = ip6h->nexthdr;
	}

	if (nexthdr == IPPROTO_TCP)
		return tcp(data, nh_off + ihl_len, data_end);
	else if (nexthdr == IPPROTO_UDP)
		return udp(data, nh_off + ihl_len, data_end);
	return 0;
}

struct vlan_hdr {
	uint16_t h_vlan_TCI;
	uint16_t h_vlan_encapsulated_proto;
};

SEC("varlen")
int handle_ingress(struct __sk_buff *skb)
{
	void *data = (void *)(long)skb->data;
	struct ethhdr *eth = data;
	void *data_end = (void *)(long)skb->data_end;
	uint64_t h_proto, nh_off;

	nh_off = sizeof(*eth);
	if (data + nh_off > data_end)
		return 0;

	h_proto = eth->h_proto;

	if (h_proto == htons(ETH_P_8021Q) || h_proto == htons(ETH_P_8021AD)) {
		struct vlan_hdr *vhdr;

		vhdr = data + nh_off;
		nh_off += sizeof(struct vlan_hdr);
		if (data + nh_off > data_end)
			return 0;
		h_proto = vhdr->h_vlan_encapsulated_proto;
	}
	if (h_proto == htons(ETH_P_8021Q) || h_proto == htons(ETH_P_8021AD)) {
		struct vlan_hdr *vhdr;

		vhdr = data + nh_off;
		nh_off += sizeof(struct vlan_hdr);
		if (data + nh_off > data_end)
			return 0;
		h_proto = vhdr->h_vlan_encapsulated_proto;
	}
	if (h_proto == htons(ETH_P_IP))
		return parse_ipv4(data, nh_off, data_end);
	else if (h_proto == htons(ETH_P_IPV6))
		return parse_ipv6(data, nh_off, data_end);
	return 0;
}
char _license[] SEC("license") = "GPL";
samples/bpf/test_cls_bpf.sh (new executable file, 37 lines):

@@ -0,0 +1,37 @@
#!/bin/bash

function pktgen {
    ../pktgen/pktgen_bench_xmit_mode_netif_receive.sh -i $IFC -s 64 \
        -m 90:e2:ba:ff:ff:ff -d 192.168.0.1 -t 4
    local dropped=`tc -s qdisc show dev $IFC | tail -3 | awk '/drop/{print $7}'`
    if [ "$dropped" == "0," ]; then
        echo "FAIL"
    else
        echo "Successfully filtered " $dropped " packets"
    fi
}

function test {
    echo -n "Loading bpf program '$2'... "
    tc qdisc add dev $IFC clsact
    tc filter add dev $IFC ingress bpf da obj $1 sec $2
    local status=$?
    if [ $status -ne 0 ]; then
        echo "FAIL"
    else
        echo "ok"
        pktgen
    fi
    tc qdisc del dev $IFC clsact
}

IFC=test_veth

ip link add name $IFC type veth peer name pair_$IFC
ip link set $IFC up
ip link set pair_$IFC up

test ./parse_simple.o simple
test ./parse_varlen.o varlen
test ./parse_ldabs.o ldabs
ip link del dev $IFC
samples/bpf/test_verifier.c:

@@ -1448,6 +1448,86 @@ static struct bpf_test tests[] = {
		.result = ACCEPT,
		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	},
	{
		"pkt: test1",
		.insns = {
			BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
				    offsetof(struct __sk_buff, data)),
			BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
				    offsetof(struct __sk_buff, data_end)),
			BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
			BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
			BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
			BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
			BPF_MOV64_IMM(BPF_REG_0, 0),
			BPF_EXIT_INSN(),
		},
		.result = ACCEPT,
		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	},
	{
		"pkt: test2",
		.insns = {
			BPF_MOV64_IMM(BPF_REG_0, 1),
			BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1,
				    offsetof(struct __sk_buff, data_end)),
			BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
				    offsetof(struct __sk_buff, data)),
			BPF_MOV64_REG(BPF_REG_5, BPF_REG_3),
			BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 14),
			BPF_JMP_REG(BPF_JGT, BPF_REG_5, BPF_REG_4, 15),
			BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_3, 7),
			BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_3, 12),
			BPF_ALU64_IMM(BPF_MUL, BPF_REG_4, 14),
			BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
				    offsetof(struct __sk_buff, data)),
			BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_4),
			BPF_MOV64_REG(BPF_REG_2, BPF_REG_1),
			BPF_ALU64_IMM(BPF_LSH, BPF_REG_2, 48),
			BPF_ALU64_IMM(BPF_RSH, BPF_REG_2, 48),
			BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_2),
			BPF_MOV64_REG(BPF_REG_2, BPF_REG_3),
			BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 8),
			BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_1,
				    offsetof(struct __sk_buff, data_end)),
			BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 1),
			BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_3, 4),
			BPF_MOV64_IMM(BPF_REG_0, 0),
			BPF_EXIT_INSN(),
		},
		.result = ACCEPT,
		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	},
	{
		"pkt: test3",
		.insns = {
			BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
				    offsetof(struct __sk_buff, data)),
			BPF_MOV64_IMM(BPF_REG_0, 0),
			BPF_EXIT_INSN(),
		},
		.errstr = "invalid bpf_context access off=76",
		.result = REJECT,
		.prog_type = BPF_PROG_TYPE_SOCKET_FILTER,
	},
	{
		"pkt: test4",
		.insns = {
			BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
				    offsetof(struct __sk_buff, data)),
			BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
				    offsetof(struct __sk_buff, data_end)),
			BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
			BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
			BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
			BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0),
			BPF_MOV64_IMM(BPF_REG_0, 0),
			BPF_EXIT_INSN(),
		},
		.errstr = "cannot write",
		.result = REJECT,
		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	},
};

static int probe_filter_length(struct bpf_insn *fp)