--- loop.c (1e05a7e6ebc4a5a5c53dce32e7e6d0ff5e7e08d1)
+++ loop.c (c0f2f45be2976abe973c8cd544f38e2d928771b0)
 // SPDX-License-Identifier: GPL-2.0
 /*
  * NVMe over Fabrics loopback device.
  * Copyright (c) 2015-2016 HGST, a Western Digital Company.
  */
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 #include <linux/scatterlist.h>
 #include <linux/blk-mq.h>
--- 355 unchanged lines hidden ---
 	}
 
 	error = nvmf_connect_admin_queue(&ctrl->ctrl);
 	if (error)
 		goto out_cleanup_queue;
 
 	set_bit(NVME_LOOP_Q_LIVE, &ctrl->queues[0].flags);
 
-	error = nvmf_reg_read64(&ctrl->ctrl, NVME_REG_CAP, &ctrl->ctrl.cap);
-	if (error) {
-		dev_err(ctrl->ctrl.device,
-			"prop_get NVME_REG_CAP failed\n");
-		goto out_cleanup_queue;
-	}
-
-	ctrl->ctrl.sqsize =
-		min_t(int, NVME_CAP_MQES(ctrl->ctrl.cap), ctrl->ctrl.sqsize);
-
-	error = nvme_enable_ctrl(&ctrl->ctrl, ctrl->ctrl.cap);
+	error = nvme_enable_ctrl(&ctrl->ctrl);
 	if (error)
 		goto out_cleanup_queue;
 
 	ctrl->ctrl.max_hw_sectors =
 		(NVME_LOOP_MAX_SEGMENTS - 1) << (PAGE_SHIFT - 9);
 
 	error = nvme_init_identify(&ctrl->ctrl);
 	if (error)
--- 11 unchanged lines hidden ---
 }
 
 static void nvme_loop_shutdown_ctrl(struct nvme_loop_ctrl *ctrl)
 {
 	if (ctrl->ctrl.queue_count > 1) {
 		nvme_stop_queues(&ctrl->ctrl);
 		blk_mq_tagset_busy_iter(&ctrl->tag_set,
 			nvme_cancel_request, &ctrl->ctrl);
+		blk_mq_tagset_wait_completed_request(&ctrl->tag_set);
 		nvme_loop_destroy_io_queues(ctrl);
 	}
 
 	if (ctrl->ctrl.state == NVME_CTRL_LIVE)
 		nvme_shutdown_ctrl(&ctrl->ctrl);
 
 	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
 	blk_mq_tagset_busy_iter(&ctrl->admin_tag_set,
 		nvme_cancel_request, &ctrl->ctrl);
+	blk_mq_tagset_wait_completed_request(&ctrl->admin_tag_set);
 	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
 	nvme_loop_destroy_admin_queue(ctrl);
 }
 
 static void nvme_loop_delete_ctrl_host(struct nvme_ctrl *ctrl)
 {
 	nvme_loop_shutdown_ctrl(to_loop_ctrl(ctrl));
 }
--- 222 unchanged lines hidden ---
 	return 0;
 }
 
 static void nvme_loop_remove_port(struct nvmet_port *port)
 {
 	mutex_lock(&nvme_loop_ports_mutex);
 	list_del_init(&port->entry);
 	mutex_unlock(&nvme_loop_ports_mutex);
-
-	/*
-	 * Ensure any ctrls that are in the process of being
-	 * deleted are in fact deleted before we return
-	 * and free the port. This is to prevent active
-	 * ctrls from using a port after it's freed.
-	 */
-	flush_workqueue(nvme_delete_wq);
 }
 
 static const struct nvmet_fabrics_ops nvme_loop_ops = {
 	.owner = THIS_MODULE,
 	.type = NVMF_TRTYPE_LOOP,
 	.add_port = nvme_loop_add_port,
 	.remove_port = nvme_loop_remove_port,
 	.queue_response = nvme_loop_queue_response,
--- 45 unchanged lines hidden ---