Searched refs:inference (Results 1 – 11 of 11) sorted by relevance
87 u32 inference; member
110 lp->inference = 0; in tcp_lp_init()
284 lp->inference = 3 * delta; in tcp_lp_pkts_acked()
287 if (lp->last_drop && (now - lp->last_drop < lp->inference)) in tcp_lp_pkts_acked()
14 inference accelerator for Computer Vision and Deep Learning applications.
1 SUMMARY = "An abstract syntax tree for Python with inference support."
15 designed to accelerate Deep Learning inference workloads.
18 designed to accelerate Deep Learning inference and training workloads.
6 This helps cross compile when tag inference via heuristics
113 …envino] = "-DWITH_OPENVINO=ON,-DWITH_OPENVINO=OFF,openvino-inference-engine,openvino-inference-eng…
19 - Edge AI - doing inference at an edge device. It can be an embedded ASIC/FPGA,
13 inference workloads. They are AI accelerators.
33 Dynamic path inference can be avoided by passing a *.qemuboot.conf to\n \