# fba961ab | 22-Dec-2017 | David S. Miller <davem@davemloft.net>
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Lots of overlapping changes. Also on the net-next side the XDP state management is handled more in the generic layers, so undo the 'net' nfp fix which isn't applicable in net-next. Include a necessary change by Jakub Kicinski, with log message:

====================
cls_bpf no longer takes care of offload tracking. Make sure netdevsim performs necessary checks. This fixes a warning caused by TC trying to remove a filter it has not added.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
====================

Signed-off-by: David S. Miller <davem@davemloft.net>

# d3f89b98 | 19-Dec-2017 | Jakub Kicinski <jakub.kicinski@netronome.com>
nfp: bpf: keep track of the offloaded program

After TC offloads were converted to callbacks we have no choice but to keep track of the offloaded filter in the driver. The check for nn->dp.bpf_offload_xdp was a stopgap solution to make sure a failed TC offload won't disable XDP; it's no longer necessary. nfp_net_bpf_offload() will return -EBUSY on TC vs XDP conflicts.

Fixes: 3f7889c4c79b ("net: sched: cls_bpf: call block callbacks for offload")
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

# 8231f844 | 14-Dec-2017 | Jakub Kicinski <jakub.kicinski@netronome.com>
nfp: bpf: optimize the adjust_head calls in trivial cases

If the program is simple and has only one adjust head call with constant parameters, we can check that the call will always succeed at translation time. We need to track the location of the call and make sure parameters are always the same. We also have to check the parameters against datapath constraints and ETH_HLEN.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
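As a rough illustration of that translation-time check, a hedged C sketch follows; the struct and its fields are invented stand-ins for the driver's bookkeeping, not its actual identifiers:

    #include <stdbool.h>

    #define ETH_HLEN 14

    /* Hypothetical per-program state; the real driver keeps similar
     * bookkeeping in struct nfp_prog. */
    struct adjust_head_state {
        int location;    /* insn index of the one allowed call, -1 if none */
        bool imm_set;
        int imm;         /* constant parameter seen at that call */
        int off_min;     /* datapath limits on the packet offset */
        int off_max;
    };

    /* Return true if the single adjust_head call can be proven safe at
     * translation time, so the runtime checks can be elided. */
    static bool adjust_head_is_trivial(struct adjust_head_state *s,
                                       int insn_idx, int imm)
    {
        if (s->location >= 0 && s->location != insn_idx)
            return false;    /* more than one call site */
        if (s->imm_set && s->imm != imm)
            return false;    /* parameter changed between visits */

        s->location = insn_idx;
        s->imm_set = true;
        s->imm = imm;

        /* Constant offset must respect datapath constraints and keep at
         * least an Ethernet header's worth of packet. */
        return imm >= s->off_min && imm <= s->off_max - ETH_HLEN;
    }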

# 0d49eaf4 | 14-Dec-2017 | Jakub Kicinski <jakub.kicinski@netronome.com>
nfp: bpf: add basic support for adjust head call

Support bpf_xdp_adjust_head(). We need to check whether the packet offset after adjustment is within the datapath's limits. We also check if the frame is at least ETH_HLEN long (similar to the kernel implementation).

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
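The checks amount to something like the following behavioral sketch; the real implementation emits NFP insns rather than C, and the limit names here are assumptions:

    #include <stdbool.h>

    #define ETH_HLEN 14

    /* Semantics the generated code must enforce for a head adjustment of
     * `delta` bytes; off_min/off_max stand in for the datapath's limits. */
    static bool adjust_head_ok(int pkt_off, int pkt_len, int delta,
                               int off_min, int off_max)
    {
        int new_off = pkt_off + delta;
        int new_len = pkt_len - delta;

        if (new_off < off_min || new_off > off_max)
            return false;               /* offset outside datapath limits */
        return new_len >= ETH_HLEN;     /* must keep a full Ethernet header */
    }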

# 77a844ee | 14-Dec-2017 | Jakub Kicinski <jakub.kicinski@netronome.com>
nfp: bpf: prepare for parsing BPF FW capabilities

BPF FW creates a run-time symbol called bpf_capabilities which contains TLV-formatted capability information. Allocate an app private structure to store parsed capabilities and add a skeleton of the parsing logic.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
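A skeleton TLV walk might look as follows; this is hedged — the actual bpf_capabilities layout and field widths are firmware-defined and may differ:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Assumed flat type/length/value records with 32-bit type and length. */
    struct cap_tlv { uint32_t type; uint32_t length; };

    static int parse_bpf_caps(const uint8_t *data, size_t size)
    {
        size_t off = 0;

        while (off + sizeof(struct cap_tlv) <= size) {
            struct cap_tlv hdr;

            memcpy(&hdr, data + off, sizeof(hdr));   /* avoid unaligned read */
            if (off + sizeof(hdr) + hdr.length > size)
                return -1;                           /* truncated TLV */

            switch (hdr.type) {
            default:
                break;    /* skip unknown caps: stays forward compatible */
            }
            off += sizeof(hdr) + hdr.length;
        }
        return 0;
    }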

# 9879a381 | 30-Nov-2017 | Jiong Wang <jiong.wang@netronome.com>
nfp: bpf: implement memory bulk copy for length within 32-bytes

For NFP, we want to re-group a sequence of load/store pairs lowered from memcpy/memmove into a single memory bulk operation which can then be accelerated using the NFP CPP bus.

This patch extends the existing load/store auxiliary information by adding two new fields:

    struct bpf_insn *paired_st;
    s16 ldst_gather_len;

Both fields are carried by the load instruction at the head of the sequence. "paired_st" is the corresponding store instruction at the head and "ldst_gather_len" is the gathered length. If "ldst_gather_len" is negative, the sequence is doing the memory load/store in descending order, otherwise in ascending order. We need this information to detect overlapped memory accesses.

This patch then optimizes memory bulk copy when the copy length is within 32 bytes. The read/write strategy used is:

  * Read: use read32 (direct_ref), always.
  * Write:
    - length <= 8 bytes: write8 (direct_ref).
    - length <= 32 bytes and 4-byte aligned: write32 (direct_ref).
    - length <= 32 bytes but not 4-byte aligned: write8 (indirect_ref).

NOTE: the optimization must not change program semantics. The destination register of the last load instruction should contain the same value before and after this optimization.

Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
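The write-side selection reduces to a small decision function; names are illustrative, and "aligned" here refers to the copy length being a multiple of 4 bytes, per the strategy above:

    #include <assert.h>

    enum wr_op { WRITE8_DIRECT, WRITE32_DIRECT, WRITE8_INDIRECT };

    static enum wr_op pick_write_op(unsigned int len)
    {
        assert(len > 0 && len <= 32);    /* this optimization caps at 32 bytes */

        if (len <= 8)
            return WRITE8_DIRECT;
        if (len % 4 == 0)
            return WRITE32_DIRECT;       /* 4-byte-aligned length */
        return WRITE8_INDIRECT;          /* unaligned length needs indirect_ref */
    }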

# 5e4d6d20 | 30-Nov-2017 | Jiong Wang <jiong.wang@netronome.com>
nfp: bpf: factor out is_mbpf_load & is_mbpf_store

We often need to check whether a BPF insn is loading/storing data from/to memory. Therefore, it makes sense to factor the related code out into common helper functions.

Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
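In terms of the kernel's BPF opcode encoding, the helpers boil down to class and mode tests roughly like these (a sketch; the driver's versions take its own insn metadata struct rather than a raw opcode):

    #include <stdbool.h>
    #include <stdint.h>

    /* BPF opcode fields, matching the kernel's encoding. */
    #define BPF_CLASS(code) ((code) & 0x07)
    #define BPF_MODE(code)  ((code) & 0xe0)
    #define BPF_LDX  0x01
    #define BPF_STX  0x03
    #define BPF_MEM  0x60

    /* A memory load is an LDX insn in MEM mode... */
    static inline bool is_mbpf_load(uint8_t code)
    {
        return BPF_CLASS(code) == BPF_LDX && BPF_MODE(code) == BPF_MEM;
    }

    /* ...and a memory store is an STX insn in MEM mode. */
    static inline bool is_mbpf_store(uint8_t code)
    {
        return BPF_CLASS(code) == BPF_STX && BPF_MODE(code) == BPF_MEM;
    }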

# a09d5c52 | 30-Nov-2017 | Jiong Wang <jiong.wang@netronome.com>
nfp: bpf: flag jump destination to guide insn combine optimizations

The NFP eBPF offload JIT engine performs some instruction-combine optimizations which, however, are not safe if the combined sequences cross basic block borders.

Currently, there are post checks while fixing up jump destinations. If a jump destination is found to be an eBPF insn that has been combined into another one, the JIT engine raises an error and aborts. This is not optimal; the JIT engine ought to disable the optimization on such cross-basic-block sequences instead of aborting.

As the eBPF infrastructure provides no control flow information, we can't do basic-block-based optimizations. Instead, this patch extends the existing jump destination record pass to also flag the jump destinations; the instruction combine passes can then skip the optimization if any insn in the sequence is a jump target.

Suggested-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
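Conceptually, the combine passes gain a guard like the one below (a sketch with invented field names): an insn flagged as a jump destination may start a combined sequence but must never be swallowed into one, or a branch could land in the middle of a combined insn.

    #include <stdbool.h>
    #include <stddef.h>

    struct insn_meta {
        bool jmp_dst;    /* set by the jump-destination record pass */
        /* ... */
    };

    /* Only the first insn of a candidate sequence may be a jump target. */
    static bool can_combine(const struct insn_meta *seq, size_t seq_len)
    {
        for (size_t i = 1; i < seq_len; i++)
            if (seq[i].jmp_dst)
                return false;
        return true;
    }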

# 5b674140 | 30-Nov-2017 | Jiong Wang <jiong.wang@netronome.com>
nfp: bpf: record jump destination to simplify jump fixup

eBPF insns are organized internally as a doubly-linked list inside the NFP offload JIT. Random access to an insn needs to be done by either forward or backward traversal along the list.

One place we need to do such traversal is nfp_fixup_branches, where one traversal is needed for each jump insn to find the destination. Such traversals could be avoided if jump destinations were collected in a single traversal in a pre-scan pass, and such information is also useful in other places where jump destination info is needed.

This patch adds such jump destination collection in nfp_prog_prepare.

Suggested-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
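Over an array-indexed view, the pre-scan is a single pass like this (a hedged sketch; the driver walks its insn list instead, and the offsets are assumed already validated by the verifier):

    #include <stdbool.h>
    #include <stddef.h>

    struct insn_meta {
        bool is_jmp;
        int jmp_off;                   /* eBPF branch offset */
        struct insn_meta *jmp_dst;     /* filled in by the pass below */
    };

    /* eBPF jumps land at insn_idx + off + 1; resolving every destination
     * up front spares nfp_fixup_branches a traversal per jump insn. */
    static void collect_jmp_dsts(struct insn_meta *meta, size_t cnt)
    {
        for (size_t i = 0; i < cnt; i++)
            if (meta[i].is_jmp)
                meta[i].jmp_dst = &meta[i + 1 + meta[i].jmp_off];
    }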

# 854dc87d | 30-Nov-2017 | Jiong Wang <jiong.wang@netronome.com>
nfp: bpf: support backward jump

This patch adds support for backward jumps on NFP:

  - restrictions on backward jumps in various functions have been removed;
  - nfp_fixup_branches now supports backward jumps.

There is one thing to note: currently an input eBPF JMP insn may generate several NFP insns, for example:

    NFP imm move insn A \
    NFP compare insn  B  --> 3 NFP insns jited from eBPF JMP insn M
    NFP branch insn   C /
    ---
    NFP insn X           --> 1 NFP insn jited from eBPF insn N
    ---
    ...

Therefore, we do a sanity check to make sure the last jited insn from an eBPF JMP is an NFP branch instruction.

Once backward jumps are allowed, an eBPF JMP insn may sit at the end of the program. This, however, causes trouble for the sanity check: the check requires the end index of the NFP insns jited from one eBPF insn, while before this patch only the start index was recorded, so the end index could only be derived as:

    start_index_of_the_next_eBPF_insn - 1

or, for the above example:

    start_index_of_eBPF_insn_N (which is the index of NFP insn X) - 1

nfp_fixup_branches was using nfp_for_each_insn_walk2 to expose the *next* insn to each iteration of the traversal, so the last index could be calculated from it. Now it needs some extra code to handle the last insn. Meanwhile, the use of walk2 is actually unnecessary: we can simply use a generic single-instruction walk, and the next insn can easily be obtained via list_next_entry. So this patch migrates the jump fixup traversal to *list_for_each_entry*, which simplifies the code logic a little bit.

The other thing to note is that a new state variable "last_bpf_off" is introduced to track the index of the last jited NFP insn. This is necessary because NFP generates special-purpose epilogue sequences, so the index of the last jited NFP insn is *not* always nfp_prog->prog_len - 1.

Suggested-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
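The shape of the migrated traversal, as a hedged sketch against the kernel's list API (field names are illustrative, not the driver's):

    #include <linux/list.h>
    #include <linux/types.h>

    struct insn_meta {
        struct list_head l;
        unsigned int off;    /* first NFP insn jited from this eBPF insn */
        bool is_jmp;
    };

    static void fixup_branches(struct list_head *insns,
                               unsigned int last_bpf_off)
    {
        struct insn_meta *meta;

        list_for_each_entry(meta, insns, l) {
            unsigned int end;

            if (!meta->is_jmp)
                continue;
            /* Last NFP insn of this eBPF insn: one before the next
             * entry's start, or last_bpf_off for the final insn (epilogue
             * sequences mean prog_len - 1 would be wrong). */
            if (list_is_last(&meta->l, insns))
                end = last_bpf_off;
            else
                end = list_next_entry(meta, l)->off - 1;

            /* ... sanity-check insn `end` is a branch, then patch it ... */
            (void)end;
        }
    }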

Revision tags: v4.13.16, v4.14

# c6c580d7 | 03-Nov-2017 | Jakub Kicinski <jakub.kicinski@netronome.com>
nfp: bpf: move to new BPF program offload infrastructure

The following steps are taken in the driver to offload an XDP program:

XDP_SETUP_PROG:
  * prepare:
    - allocate program state;
    - run verifier (bpf_analyzer());
    - run translation;
  * load:
    - stop old program if needed;
    - load program;
    - enable BPF if not enabled;
  * clean up:
    - free program image.

With the new infrastructure the flow will look like this:

BPF_OFFLOAD_VERIFIER_PREP:
  - allocate program state;
BPF_OFFLOAD_TRANSLATE:
  - run translation;
XDP_SETUP_PROG:
  - stop old program if needed;
  - load program;
  - enable BPF if not enabled;
BPF_OFFLOAD_DESTROY:
  - free program image.

Take advantage of the new infrastructure. Allocation of driver metadata has to be moved from jit.c to offload.c since it's now done at a different stage. Since there is no separate driver private data for the verification step, move the temporary nfp_meta pointer into nfp_prog. We will now use user space context offsets.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
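In driver terms, the new flow maps onto a .ndo_bpf dispatch along these lines; this is a hedged sketch — the my_* handlers are placeholders, and the exact struct netdev_bpf fields of that kernel era are not reproduced here:

    #include <linux/netdevice.h>

    /* Placeholder stage handlers -- not real driver functions. */
    static int my_verifier_prep(struct net_device *dev, struct netdev_bpf *bpf);
    static int my_translate(struct net_device *dev, struct netdev_bpf *bpf);
    static int my_xdp_setup(struct net_device *dev, struct netdev_bpf *bpf);
    static int my_destroy(struct net_device *dev, struct netdev_bpf *bpf);

    static int my_ndo_bpf(struct net_device *dev, struct netdev_bpf *bpf)
    {
        switch (bpf->command) {
        case BPF_OFFLOAD_VERIFIER_PREP:    /* allocate program state */
            return my_verifier_prep(dev, bpf);
        case BPF_OFFLOAD_TRANSLATE:        /* run translation */
            return my_translate(dev, bpf);
        case XDP_SETUP_PROG:               /* stop old, load, enable */
            return my_xdp_setup(dev, bpf);
        case BPF_OFFLOAD_DESTROY:          /* free program image */
            return my_destroy(dev, bpf);
        default:
            return -EINVAL;
        }
    }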

# 9314c442 | 03-Nov-2017 | Jakub Kicinski <jakub.kicinski@netronome.com>
nfp: bpf: move translation prepare to offload.c

struct nfp_prog is currently only used internally by the translator. This means there is a lot of parameter passing going on between the translator and the different stages of offload. Simplify things by allocating nfp_prog in offload.c already.

We will now use kmalloc() to allocate the program area and only DMA map it for the time of loading (instead of allocating DMA coherent memory upfront).

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
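The allocation change amounts to the following pattern (a hedged sketch; hw_load_program() is a hypothetical stand-in for the device-specific load):

    #include <linux/dma-mapping.h>
    #include <linux/slab.h>

    int hw_load_program(dma_addr_t addr, size_t size);    /* hypothetical */

    /* The program area lives in plain kmalloc() memory; it is DMA-mapped
     * only around the load instead of being DMA-coherent for its whole
     * lifetime. */
    static int prog_load(struct device *dev, void *img, size_t size)
    {
        dma_addr_t dma;
        int err;

        dma = dma_map_single(dev, img, size, DMA_TO_DEVICE);
        if (dma_mapping_error(dev, dma))
            return -ENOMEM;

        err = hw_load_program(dma, size);

        dma_unmap_single(dev, dma, size, DMA_TO_DEVICE);
        return err;
    }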

# c1c88eae | 03-Nov-2017 | Jakub Kicinski <jakub.kicinski@netronome.com>
nfp: bpf: move program prepare and free into offload.c

Most of the offload/translation prepare logic will be moved to offload.c. To help git generate more reasonable diffs, move the nfp_prog_prepare() and nfp_prog_free() functions there as a first step.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

# e4a91cd5 | 03-Nov-2017 | Jakub Kicinski <jakub.kicinski@netronome.com>
nfp: bpf: require seamless reload for program replace

Firmware has supported live replacement of programs for quite some time now. Remove the software-fallback related logic and depend on the FW for program replace. Seamless reload will become a requirement if maps are present, anyway.

Load and start stages have to be split now, since replace only needs a load; start has already been done on add.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

# 9ce7a956 | 03-Nov-2017 | Jakub Kicinski <jakub.kicinski@netronome.com>
nfp: bpf: refactor offload logic

We currently create a fake cls_bpf offload object when we want to offload XDP. Simplify and clarify the code by moving the TC/XDP-specific logic out of the common offload code. This is easy now that we don't support legacy TC actions. We only need the bpf program and the state of the skip_sw flag.

Temporarily set @code to NULL in nfp_net_bpf_offload(), since compilers seem to have trouble recognizing it's always initialized. The next patches will eliminate that variable.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

# 5559eedb | 03-Nov-2017 | Jakub Kicinski <jakub.kicinski@netronome.com>
nfp: bpf: remove unnecessary include of nfp_net.h

BPF offload's main header does not need to include nfp_net.h.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

# 94508438 | 03-Nov-2017 | Jakub Kicinski <jakub.kicinski@netronome.com>
nfp: bpf: remove the register renumbering leftovers

The register renumbering was removed and will not be coming back in its old, naive form, given that it would be fundamentally incompatible with calling functions. Remove the leftovers.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

# 012bb8a8 | 03-Nov-2017 | Jakub Kicinski <jakub.kicinski@netronome.com>
nfp: bpf: drop support for cls_bpf with legacy actions

Only support BPF_PROG_TYPE_SCHED_CLS programs in direct action mode. This simplifies preparing the offload since there will now be only one mode of operation for that type of program. We need to know the attachment mode type of cls_bpf programs, because exit codes are interpreted differently for legacy vs DA mode.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
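The offload-time check this enables is small; the sketch below is hedged against the cls_bpf offload interface of that period, and the driver's exact checks may differ:

    #include <linux/errno.h>
    #include <net/pkt_cls.h>

    /* Reject anything that is not a pure direct-action program. */
    static int check_da_only(struct tc_cls_bpf_offload *cls_bpf)
    {
        if (!cls_bpf->exts_integrated ||
            tcf_exts_has_actions(cls_bpf->exts))
            return -EOPNOTSUPP;
        return 0;
    }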

# 3248f77f | 25-Oct-2017 | Kees Cook <keescook@chromium.org>
drivers/net: netronome: Convert timers to use timer_setup()

In preparation for unconditionally passing the struct timer_list pointer to all timer callbacks, switch to using the new timer_setup() and from_timer() to pass the timer pointer explicitly.

Cc: Jakub Kicinski <jakub.kicinski@netronome.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jiri Pirko <jiri@mellanox.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Simon Horman <simon.horman@netronome.com>
Cc: oss-drivers@netronome.com
Cc: netdev@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
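The conversion pattern, shown on an invented structure for illustration (the driver's actual struct and timer names differ):

    #include <linux/timer.h>

    struct my_state {
        struct timer_list tmr;
    };

    /* New-style callback: receives the timer itself; from_timer()
     * recovers the enclosing structure via container_of(). */
    static void my_timer_cb(struct timer_list *t)
    {
        struct my_state *st = from_timer(st, t, tmr);

        /* ... use st ... */
        (void)st;
    }

    static void my_init(struct my_state *st)
    {
        /* Was: setup_timer(&st->tmr, my_timer_cb, (unsigned long)st); */
        timer_setup(&st->tmr, my_timer_cb, 0);
    }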

# b14157ee | 23-Oct-2017 | Jakub Kicinski <jakub.kicinski@netronome.com>
nfp: bpf: support stack accesses via non-constant pointers

If the stack pointer has a different value on different paths but the alignment to words (4B) remains the same, we can set a new LMEM access pointer to the calculated value and access whichever word it's pointing to.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

# d3488480 | 23-Oct-2017 | Jakub Kicinski <jakub.kicinski@netronome.com>
nfp: bpf: allow stack accesses via modified stack registers

As long as the verifier tells us the stack offset exactly, we can render the LMEM reads quite easily. Simply make sure that the offset is constant for a given instruction and add it to the instruction's offset.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

# ee9133a8 | 23-Oct-2017 | Jakub Kicinski <jakub.kicinski@netronome.com>
nfp: bpf: add stack write support

The stack is implemented by the LMEM register file. Unaligned accesses to LMEM are not allowed. Accesses also have to be 4B wide.

To support the stack we need to make sure offsets of pointers are known at translation time (for now) and perform the correct load/mask/shift operations. Since we can access the first 64B of LMEM without much effort, support only stacks not bigger than 64B. Following commits will extend the possible sizes beyond that.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
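A host-side model of the required load/mask/shift for a register file that only supports aligned 4-byte accesses (a sketch only; it assumes little-endian byte order within each word, and the real JIT emits NFP insns instead):

    #include <stdint.h>

    static void lmem_write(uint32_t *lmem, unsigned int byte_off,
                           const uint8_t *src, unsigned int len)
    {
        while (len) {
            unsigned int word = byte_off / 4;
            unsigned int shift = (byte_off % 4) * 8;
            unsigned int chunk = 4 - byte_off % 4;
            uint32_t mask, val = 0;

            if (chunk > len)
                chunk = len;
            mask = (chunk == 4 ? 0xffffffffu
                               : ((1u << (chunk * 8)) - 1)) << shift;

            for (unsigned int i = 0; i < chunk; i++)
                val |= (uint32_t)src[i] << (shift + 8 * i);

            /* Aligned 4B read-modify-write: mask out the target bytes,
             * then insert the new ones. */
            lmem[word] = (lmem[word] & ~mask) | val;

            byte_off += chunk;
            src += chunk;
            len -= chunk;
        }
    }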

# 2ca71441 | 12-Oct-2017 | Jakub Kicinski <jakub.kicinski@netronome.com>
nfp: bpf: add support for direct packet access - read

In direct packet access the bounds checks are already done, so we can simply dereference the packet pointer.

Verifier/parser logic needs to record the pointer type. Note that although the verifier does protect us from CTX vs other pointer changes, we will also want to differentiate between PACKET vs MAP_VALUE or STACK, so we can add the check already.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
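The bookkeeping being added is essentially the following (a hedged sketch; the enum values are illustrative, and the real driver works in terms of the verifier's register types):

    enum ptr_kind { PTR_UNKNOWN, PTR_CTX, PTR_PACKET, PTR_MAP_VALUE, PTR_STACK };

    struct insn_meta {
        enum ptr_kind ptr_type;
    };

    /* Record what this memory insn dereferences; reject mixed or
     * untranslatable pointer kinds early. */
    static int record_ptr_type(struct insn_meta *meta, enum ptr_kind kind)
    {
        if (kind != PTR_CTX && kind != PTR_PACKET)
            return -1;    /* MAP_VALUE/STACK not handled by this patch */
        if (meta->ptr_type != PTR_UNKNOWN && meta->ptr_type != kind)
            return -1;    /* same insn must not see two pointer types */
        meta->ptr_type = kind;
        return 0;
    }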

# 18e53b6c | 08-Oct-2017 | Jakub Kicinski <jakub.kicinski@netronome.com>
nfp: bpf: move to datapath ABI version 2

Datapath ABI version 2 stores the packet information in LMEM instead of NNRs. We also have strict restrictions on which GPRs we can use: only GPRs 0-23 are reserved for BPF.

Adjust the static register locations and "ABI" registers. Note that the packet length is packed with other info, so we have to extract it into one of the scratch registers; OTOH, since LMEM can be used in restricted operands, we don't have to extract the packet pointer.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
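The extraction mentioned above is a plain mask-and-shift into a scratch register; modeled in C below, where the field position and width are assumptions rather than the real LMEM layout:

    #include <stdint.h>

    #define PKT_LEN_SHIFT 0
    #define PKT_LEN_MASK  0x3fff    /* hypothetical 14-bit length field */

    static inline uint32_t pkt_len(uint32_t packed_word)
    {
        return (packed_word >> PKT_LEN_SHIFT) & PKT_LEN_MASK;
    }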

# 509144e2 | 08-Oct-2017 | Jakub Kicinski <jakub.kicinski@netronome.com>
nfp: bpf: remove packet marking support

Temporarily drop support for skb->mark. We are primarily focusing on XDP offload, and implementing skb->mark on the new datapath has lower priority.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>