--- utils.c (b7c15a3ce6fea5da3aa836c897a78ac628467d54)
+++ utils.c (81091d7696ae71627ff80bbf2c6b0986d2c1cce3)
 // SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB
 /* Copyright (c) 2015 - 2021 Intel Corporation */
 #include "main.h"

 /**
  * irdma_arp_table -manage arp table
  * @rf: RDMA PCI function
  * @ip_addr: ip address for device
--- 244 unchanged lines hidden ---
 	struct neighbour *neigh = ptr;
 	struct net_device *real_dev, *netdev = (struct net_device *)neigh->dev;
 	struct irdma_device *iwdev;
 	struct ib_device *ibdev;
 	__be32 *p;
 	u32 local_ipaddr[4] = {};
 	bool ipv4 = true;

+	real_dev = rdma_vlan_dev_real_dev(netdev);
+	if (!real_dev)
+		real_dev = netdev;
+
+	ibdev = ib_device_get_by_netdev(real_dev, RDMA_DRIVER_IRDMA);
+	if (!ibdev)
+		return NOTIFY_DONE;
+
+	iwdev = to_iwdev(ibdev);
+
 	switch (event) {
 	case NETEVENT_NEIGH_UPDATE:
-		real_dev = rdma_vlan_dev_real_dev(netdev);
-		if (!real_dev)
-			real_dev = netdev;
-		ibdev = ib_device_get_by_netdev(real_dev, RDMA_DRIVER_IRDMA);
-		if (!ibdev)
-			return NOTIFY_DONE;
-
-		iwdev = to_iwdev(ibdev);
 		p = (__be32 *)neigh->primary_key;
 		if (neigh->tbl->family == AF_INET6) {
 			ipv4 = false;
 			irdma_copy_ip_ntohl(local_ipaddr, p);
 		} else {
 			local_ipaddr[0] = ntohl(*p);
 		}

--- 4 unchanged lines hidden ---

 		if (neigh->nud_state & NUD_VALID)
 			irdma_add_arp(iwdev->rf, local_ipaddr, ipv4, neigh->ha);

 		else
 			irdma_manage_arp_cache(iwdev->rf, neigh->ha,
 					       local_ipaddr, ipv4,
 					       IRDMA_ARP_DELETE);
-		ib_device_put(ibdev);
 		break;
 	default:
 		break;
 	}

+	ib_device_put(ibdev);
+
 	return NOTIFY_DONE;
 }

 /**
  * irdma_netdevice_event - system notifier for netdev events
  * @notifier: not used
  * @event: event for notifier
  * @ptr: netdev
--- 2185 unchanged lines hidden ---

 	ukcq = &iwcq->sc_cq.cq_uk;
 	cqe = IRDMA_GET_CURRENT_CQ_ELEM(ukcq);
 	get_64bit_val(cqe, 24, &qword3);
 	polarity = (u8)FIELD_GET(IRDMA_CQ_VALID, qword3);

 	return polarity != ukcq->polarity;
 }
+
+void irdma_remove_cmpls_list(struct irdma_cq *iwcq)
+{
+	struct irdma_cmpl_gen *cmpl_node;
+	struct list_head *tmp_node, *list_node;
+
+	list_for_each_safe (list_node, tmp_node, &iwcq->cmpl_generated) {
+		cmpl_node = list_entry(list_node, struct irdma_cmpl_gen, list);
+		list_del(&cmpl_node->list);
+		kfree(cmpl_node);
+	}
+}
+
+int irdma_generated_cmpls(struct irdma_cq *iwcq, struct irdma_cq_poll_info *cq_poll_info)
+{
+	struct irdma_cmpl_gen *cmpl;
+
+	if (list_empty(&iwcq->cmpl_generated))
+		return -ENOENT;
+	cmpl = list_first_entry_or_null(&iwcq->cmpl_generated, struct irdma_cmpl_gen, list);
+	list_del(&cmpl->list);
+	memcpy(cq_poll_info, &cmpl->cpi, sizeof(*cq_poll_info));
+	kfree(cmpl);
+
+	ibdev_dbg(iwcq->ibcq.device,
+		  "VERBS: %s: Poll artificially generated completion for QP 0x%X, op %u, wr_id=0x%llx\n",
+		  __func__, cq_poll_info->qp_id, cq_poll_info->op_type,
+		  cq_poll_info->wr_id);
+
+	return 0;
+}
+
+/**
+ * irdma_set_cpi_common_values - fill in values for polling info struct
+ * @cpi: resulting structure of cq_poll_info type
+ * @qp: QPair
+ * @qp_num: id of the QP
+ */
+static void irdma_set_cpi_common_values(struct irdma_cq_poll_info *cpi,
+					struct irdma_qp_uk *qp, u32 qp_num)
+{
+	cpi->comp_status = IRDMA_COMPL_STATUS_FLUSHED;
+	cpi->error = true;
+	cpi->major_err = IRDMA_FLUSH_MAJOR_ERR;
+	cpi->minor_err = FLUSH_GENERAL_ERR;
+	cpi->qp_handle = (irdma_qp_handle)(uintptr_t)qp;
+	cpi->qp_id = qp_num;
+}
+
+static inline void irdma_comp_handler(struct irdma_cq *cq)
+{
+	if (!cq->ibcq.comp_handler)
+		return;
+	if (atomic_cmpxchg(&cq->armed, 1, 0))
+		cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context);
+}
+
+void irdma_generate_flush_completions(struct irdma_qp *iwqp)
+{
+	struct irdma_qp_uk *qp = &iwqp->sc_qp.qp_uk;
+	struct irdma_ring *sq_ring = &qp->sq_ring;
+	struct irdma_ring *rq_ring = &qp->rq_ring;
+	struct irdma_cmpl_gen *cmpl;
+	__le64 *sw_wqe;
+	u64 wqe_qword;
+	u32 wqe_idx;
+	bool compl_generated = false;
+	unsigned long flags1;
+
+	spin_lock_irqsave(&iwqp->iwscq->lock, flags1);
+	if (irdma_cq_empty(iwqp->iwscq)) {
+		unsigned long flags2;
+
+		spin_lock_irqsave(&iwqp->lock, flags2);
+		while (IRDMA_RING_MORE_WORK(*sq_ring)) {
+			cmpl = kzalloc(sizeof(*cmpl), GFP_ATOMIC);
+			if (!cmpl) {
+				spin_unlock_irqrestore(&iwqp->lock, flags2);
+				spin_unlock_irqrestore(&iwqp->iwscq->lock, flags1);
+				return;
+			}
+
+			wqe_idx = sq_ring->tail;
+			irdma_set_cpi_common_values(&cmpl->cpi, qp, qp->qp_id);
+
+			cmpl->cpi.wr_id = qp->sq_wrtrk_array[wqe_idx].wrid;
+			sw_wqe = qp->sq_base[wqe_idx].elem;
+			get_64bit_val(sw_wqe, 24, &wqe_qword);
+			cmpl->cpi.op_type = (u8)FIELD_GET(IRDMAQPSQ_OPCODE, IRDMAQPSQ_OPCODE);
+			/* remove the SQ WR by moving SQ tail*/
+			IRDMA_RING_SET_TAIL(*sq_ring,
+					    sq_ring->tail + qp->sq_wrtrk_array[sq_ring->tail].quanta);
+
+			ibdev_dbg(iwqp->iwscq->ibcq.device,
+				  "DEV: %s: adding wr_id = 0x%llx SQ Completion to list qp_id=%d\n",
+				  __func__, cmpl->cpi.wr_id, qp->qp_id);
+			list_add_tail(&cmpl->list, &iwqp->iwscq->cmpl_generated);
+			compl_generated = true;
+		}
+		spin_unlock_irqrestore(&iwqp->lock, flags2);
+		spin_unlock_irqrestore(&iwqp->iwscq->lock, flags1);
+		if (compl_generated)
+			irdma_comp_handler(iwqp->iwrcq);
+	} else {
+		spin_unlock_irqrestore(&iwqp->iwscq->lock, flags1);
+		mod_delayed_work(iwqp->iwdev->cleanup_wq, &iwqp->dwork_flush,
+				 msecs_to_jiffies(IRDMA_FLUSH_DELAY_MS));
+	}
+
+	spin_lock_irqsave(&iwqp->iwrcq->lock, flags1);
+	if (irdma_cq_empty(iwqp->iwrcq)) {
+		unsigned long flags2;
+
+		spin_lock_irqsave(&iwqp->lock, flags2);
+		while (IRDMA_RING_MORE_WORK(*rq_ring)) {
+			cmpl = kzalloc(sizeof(*cmpl), GFP_ATOMIC);
+			if (!cmpl) {
+				spin_unlock_irqrestore(&iwqp->lock, flags2);
+				spin_unlock_irqrestore(&iwqp->iwrcq->lock, flags1);
+				return;
+			}
+
+			wqe_idx = rq_ring->tail;
+			irdma_set_cpi_common_values(&cmpl->cpi, qp, qp->qp_id);
+
+			cmpl->cpi.wr_id = qp->rq_wrid_array[wqe_idx];
+			cmpl->cpi.op_type = IRDMA_OP_TYPE_REC;
+			/* remove the RQ WR by moving RQ tail */
+			IRDMA_RING_SET_TAIL(*rq_ring, rq_ring->tail + 1);
+			ibdev_dbg(iwqp->iwrcq->ibcq.device,
+				  "DEV: %s: adding wr_id = 0x%llx RQ Completion to list qp_id=%d, wqe_idx=%d\n",
+				  __func__, cmpl->cpi.wr_id, qp->qp_id,
+				  wqe_idx);
+			list_add_tail(&cmpl->list, &iwqp->iwrcq->cmpl_generated);
+
+			compl_generated = true;
+		}
+		spin_unlock_irqrestore(&iwqp->lock, flags2);
+		spin_unlock_irqrestore(&iwqp->iwrcq->lock, flags1);
+		if (compl_generated)
+			irdma_comp_handler(iwqp->iwrcq);
+	} else {
+		spin_unlock_irqrestore(&iwqp->iwrcq->lock, flags1);
+		mod_delayed_work(iwqp->iwdev->cleanup_wq, &iwqp->dwork_flush,
+				 msecs_to_jiffies(IRDMA_FLUSH_DELAY_MS));
+	}
+}