Revision tags: v4.15

# 4b056682 | 08-Dec-2017 | James Smart <jsmart2021@gmail.com>
scsi: lpfc: Beef up stat counters for debug
If log verbose is not turned on, it's hard to tell when certain error paths get hit. Add stat counters and corresponding logic to debugfs/sysfs to aid in understanding which paths were traversed.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <james.smart@broadcom.com> Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>

# a51e41b6 | 08-Dec-2017 | James Smart <jsmart2021@gmail.com>
scsi: lpfc: Increase SCSI CQ and WQ sizes.
Increased the sizes of the SCSI WQs and CQs so that SCSI operation is similar to that used by NVME. However, the size increase is restricted to those newer adapters that can support the larger WQE size and, thus, the bigger queue sizes.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <james.smart@broadcom.com> Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>

# cf1a1d3e | 08-Dec-2017 | James Smart <jsmart2021@gmail.com>
scsi: lpfc: Fix random heartbeat timeouts during heavy IO
NVME targets appear to randomly disconnect from the initiator when running heavy IO.
The error occurred because the host's aggregate (across all controllers) io load was beyond the maximum exchange count for nvme on the adapter. The driver was properly returning a resource-busy status, but the io load was so great that heartbeat commands would be bounced and would not get a successful retry within the fuzz amount for the nvme heartbeat (yes, a very high io load!). Thus the target was terminating the controller due to a keep-alive failure.
Resolve by reserving a few exchanges (by counters) that can be used when the adapter is out of normal exchanges and the command is an NVME heartbeat command. Because counters are used, while the reserved command is outstanding, as soon as any other exchange completes the counters are adjusted and the reserved count is replenished. The heartbeat then completes execution in a normal fashion.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <james.smart@broadcom.com> Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
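
The reserve/replenish scheme reads, in outline, like the sketch below; the names (struct exch_pool, RSVD_EXCH_MAX, etc.) are illustrative assumptions, not the driver's actual code.

#include <linux/atomic.h>
#include <linux/types.h>

#define RSVD_EXCH_MAX 4 /* assumed size of the heartbeat reserve */

struct exch_pool {
	atomic_t avail; /* normal exchanges remaining */
	atomic_t rsvd;  /* reserved exchanges remaining */
};

/* Returns true if the command may be issued now. */
static bool exch_get(struct exch_pool *p, bool is_nvme_heartbeat)
{
	if (atomic_dec_if_positive(&p->avail) >= 0)
		return true;
	/* Out of normal exchanges: only a heartbeat may use the reserve. */
	if (is_nvme_heartbeat && atomic_dec_if_positive(&p->rsvd) >= 0)
		return true;
	return false; /* caller returns resource-busy */
}

/* On any exchange completion, top up the reserve before the normal pool. */
static void exch_put(struct exch_pool *p)
{
	if (atomic_read(&p->rsvd) < RSVD_EXCH_MAX)
		atomic_inc(&p->rsvd);
	else
		atomic_inc(&p->avail);
}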

Revision tags: v4.13.16

# add9d6be | 20-Nov-2017 | James Smart <jsmart2021@gmail.com>
scsi: lpfc: Correct driver deregistrations with host nvme transport
The driver's interaction with the host nvme transport has been incorrect for a while. The driver did not wait for the unregister callbacks (it waited only 5 jiffies). Thus the driver could remove objects still referenced by subsequent abort commands from the transport, and the actual unregister callback was effectively a noop. This was especially problematic if the driver was unloaded.
The driver now waits for the unregister callbacks, as it should, before continuing with teardown.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <james.smart@broadcom.com> Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
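
A minimal sketch of the corrected teardown, assuming hypothetical my_* names; nvme_fc_unregister_localport() and the localport_delete template callback are the transport interface being waited on.

#include <linux/completion.h>
#include <linux/nvme-fc-driver.h>

struct my_lport {
	struct nvme_fc_local_port *localport;
	struct completion unreg_done; /* signalled by the delete callback */
};

/* The transport invokes this only when teardown has truly completed. */
static void my_localport_delete(struct nvme_fc_local_port *localport)
{
	struct my_lport *lp = localport->private;

	complete(&lp->unreg_done);
}

static void my_destroy_localport(struct my_lport *lp)
{
	init_completion(&lp->unreg_done);
	nvme_fc_unregister_localport(lp->localport);
	/* Block on the callback instead of sleeping a few jiffies. */
	wait_for_completion(&lp->unreg_done);
	/* Only now is it safe to free objects the transport may reference. */
}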

# 81b96eda | 20-Nov-2017 | James Smart <jsmart2021@gmail.com>
scsi: lpfc: Expand WQE capability of every NVME hardware queue
Hardware queues are a fast staging area to push commands into the adapter. The adapter should drain them extremely quickly. However, under heavy io load, the host cpu pushes commands faster than the adapter's drain rate, causing the driver to return a resource-busy status for commands.
Enlarge the hardware queues (wq & cq) to support a larger number of queue entries (4x the prior size) before backpressure. Enlarging the queues requires larger contiguous buffers (16k) per logical page for the hardware. This changed calling sequences that expected 4K page sizes: they must now pass a parameter with the page size. It also required use of a new version of an adapter command that can vary the page size values.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <james.smart@broadcom.com> Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
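
The arithmetic behind passing the page size down the call chain can be sketched as follows; the helper name and macros are hypothetical.

#include <linux/kernel.h> /* DIV_ROUND_UP */
#include <linux/types.h>

#define HW_PAGE_SIZE_4K  4096u
#define HW_PAGE_SIZE_16K 16384u /* larger contiguous buffer per logical page */

/* Callers that once assumed 4K pages now pass the page size explicitly. */
static u32 queue_pages_needed(u32 entry_size, u32 entry_count, u32 page_size)
{
	return DIV_ROUND_UP(entry_size * entry_count, page_size);
}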

Revision tags: v4.14, v4.13.5, v4.13, v4.12

# 80cc0043 | 01-Jun-2017 | James Smart <jsmart2021@gmail.com>
scsi: lpfc: Fix transition nvme-i rport handling to nport only.
As the devloss API was implemented in the nvmei driver, an evaluation of the nvme transport and the lpfc driver showed dual management of the rports. This creates the possibility of bugs as the thread count and SAN size increase.
The nvmei driver code was based on a very early transport and was not revisited until the devloss API was introduced.
Remove the listhead in the driver's rport data structure and the listhead in the driver's lport data structure, and remove all rport_list traversal. Convert the driver to use the nrport (nvme rport) pointer, which is now NULL or non-NULL depending on devloss action. Convert debugfs and nvme_info in sysfs to use the fc_nodes list in the vport.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <james.smart@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
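
The end state can be pictured with this stand-in structure; the real driver's node layout and locking differ, but the NULL/non-NULL pointer is the whole rport state.

struct nvme_rport; /* opaque transport-side remote port */

/* Stand-in for the driver's per-nport entry on the vport fc_nodes list. */
struct node_entry {
	struct nvme_rport *nrport; /* set at registration, NULL after devloss */
};

/* No rport_list walk needed: the pointer itself carries the state. */
static int node_registered_with_transport(const struct node_entry *ndlp)
{
	return ndlp->nrport != NULL;
}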

Revision tags: v4.10.17, v4.10.16, v4.10.15, v4.10.14, v4.10.13

# bbe3012b | 21-Apr-2017 | James Smart <jsmart2021@gmail.com>
lpfc: Fix memory corruption of the lpfc_ncmd->list pointers
lpfc was changing the private pointer that is set/maintained by the nvme_fc transport. This caused two issues: a) the transport, on teardown, may erroneously attempt to free whatever address was set; and b) lpfc uses any value set in lpfc_nvme_fcp_abort() and assumes it's a valid io request.
Correct the issue by properly defining a context structure for lpfc. Lpfc is also updated to clear the private context structure on io completion.
Since this bug caused scrutiny of the way lpfc moves local request structures between lists, the list_del() calls were also cleaned up to list_del_init().
This is an nvme-specific bug. The patch was cut against the linux-block tree, for-4.12/block tree. It should be pulled in through that tree.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <james.smart@broadcom.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
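
The list_del_init() point matters because a node removed with plain list_del() has poisoned pointers and cannot safely be re-deleted or tested; list_del_init() leaves it as a valid empty list. A self-contained illustration with a hypothetical command struct:

#include <linux/list.h>

struct ncmd {
	struct list_head list; /* lives on one of several driver lists */
};

static void move_to_abort_list(struct ncmd *cmd, struct list_head *abort_list)
{
	list_del_init(&cmd->list);          /* node stays a valid empty list */
	list_add_tail(&cmd->list, abort_list);
}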

# 06909bb9 | 21-Apr-2017 | James Smart <jsmart2021@gmail.com>
Remove unused defines for NVME PostBuf.
These defines for the posting of buffers for nvmet target were not used.
Removing the unused defines.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <james.smart@broadcom.com> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>

# a44e4e8b | 21-Apr-2017 | James Smart <jsmart2021@gmail.com>
Standardize nvme SGL segment count
Standardize default SGL segment count for nvme target and initiator
The driver needs to make them the same for clarity.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <james.smart@broadcom.com> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>

Revision tags: v4.10.12, v4.10.11, v4.10.10, v4.10.9, v4.10.8, v4.10.7, v4.10.6, v4.10.5, v4.10.4, v4.10.3, v4.10.2

# 318083ad | 04-Mar-2017 | James Smart <jsmart2021@gmail.com>
scsi: lpfc: add NVME exchange aborts
Previous code did little more than log a message.
This patch adds abort path support, modeled after the SCSI code paths. It currently addresses only the initiator path; the target path is under development but stubbed out.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <james.smart@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>

Revision tags: v4.10.1, v4.10

# d080abe0 | 12-Feb-2017 | James Smart <jsmart2021@gmail.com>
scsi: lpfc: Update copyrights
Update copyrights to 2017 for all files touched in this patch set
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <james.smart@broadcom.com> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>

# bd2cdd5e | 12-Feb-2017 | James Smart <jsmart2021@gmail.com>
scsi: lpfc: NVME Initiator: Add debugfs support
NVME Initiator: Add debugfs support
Adds debugfs snippets to cover the new NVME initiator functionality.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <james.smart@broadcom.com> Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>

# 01649561 | 12-Feb-2017 | James Smart <jsmart2021@gmail.com>
scsi: lpfc: NVME Initiator: bind to nvme_fc api
NVME Initiator: Tie in to NVME Fabrics nvme_fc LLDD initiator api
Adds the routines to:
- register and deregister the FC port as an nvme-fc initiator localport
- register and deregister remote FC ports as an nvme-fc remoteport
- bind nvme queues to adapter WQs
- send/perform NVME LS's
- send/perform NVME FCP initiator io operations
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <james.smart@broadcom.com> Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
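
The localport half of the binding looks roughly like the sketch below; nvme_fc_register_localport() and struct nvme_fc_port_template are the transport API, while the template contents and WWN values are placeholder assumptions.

#include <linux/device.h>
#include <linux/nvme-fc-driver.h>

static struct nvme_fc_port_template my_lport_template = {
	/* .localport_delete, .remoteport_delete, .ls_req, .fcp_io, ... */
	.max_hw_queues    = 4,
	.max_sgl_segments = 64,
};

static int my_register_localport(struct device *dev,
				 struct nvme_fc_local_port **lport)
{
	struct nvme_fc_port_info pinfo = {
		.node_name = 0x200000109b000000ULL, /* placeholder WWNN */
		.port_name = 0x100000109b000000ULL, /* placeholder WWPN */
		.port_role = FC_PORT_ROLE_NVME_INITIATOR,
	};

	/* The transport hands back the localport that queues bind against. */
	return nvme_fc_register_localport(&pinfo, &my_lport_template,
					  dev, lport);
}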

# 895427bd | 12-Feb-2017 | James Smart <jsmart2021@gmail.com>
scsi: lpfc: NVME Initiator: Base modifications
NVME Initiator: Base modifications
This patch adds base modifications for NVME initiator support.
The base modifications consist of:
- Formal split of SLI3 rings from SLI-4 WQs (sometimes referred to as rings as well), as the implementation now widely varies between the two.
- Addition of configuration modes: SCSI initiator only; NVME initiator only; NVME target only; and SCSI and NVME initiator. The configuration mode drives overall adapter configuration, offloads enabled, and resource splits. NVME support is only available on SLI-4 devices and newer fw.
- Implements the following based on configuration mode:
  - Exchange resources are split by protocol; obviously, if only 1 mode, then no split occurs. Default is 50/50. A module attribute allows tuning.
  - Pools and config parameters are separated per protocol.
  - Each protocol has its own set of queues, but they share interrupt vectors.
    SCSI: SLI3 devices have few queues and the original style of queue allocation remains. SLI4 devices piggyback on an "io-channel" concept that eventually needs to merge with scsi-mq/blk-mq support (it is underway). For now, the paradigm continues as it existed prior: an io channel allocates N msix and N WQs (N=4 default) and either round-robins or uses cpu # modulo N for scheduling. A bunch of module parameters allow the configuration to be tuned.
    NVME (initiator): Allocates an msix per cpu (or whatever pci_alloc_irq_vectors gets). Allocates a WQ per cpu, and maps the WQs to msix on a WQ # modulo msix vector count basis. Module parameters exist to cap/control the config if desired.
  - Each protocol has its own buffer and dma pools.
I apologize for the size of the patch.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <james.smart@broadcom.com> Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
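
The NVME-side msix/WQ mapping described above can be sketched as follows; setup_nvme_channels() and bind_wq_to_vector() are hypothetical names, while pci_alloc_irq_vectors() is the real PCI helper the text references.

#include <linux/pci.h>

/* Hypothetical: attach a WQ (and its CQ/EQ) to a given msix vector. */
void bind_wq_to_vector(struct pci_dev *pdev, int wq, int vec);

static int setup_nvme_channels(struct pci_dev *pdev, int ncpus)
{
	int nvec, wq;

	/* Ask for one vector per cpu; settle for whatever PCI grants. */
	nvec = pci_alloc_irq_vectors(pdev, 1, ncpus, PCI_IRQ_MSIX);
	if (nvec < 0)
		return nvec;

	for (wq = 0; wq < ncpus; wq++)
		bind_wq_to_vector(pdev, wq, wq % nvec); /* WQ # modulo count */

	return nvec;
}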

Revision tags: v5.8.17, v5.8.16, v5.8.15, v5.9, v5.8.14, v5.8.13, v5.8.12, v5.8.11, v5.8.10, v5.8.9, v5.8.8, v5.8.7, v5.8.6, v5.4.62, v5.8.5, v5.8.4, v5.4.61, v5.8.3, v5.4.60, v5.8.2, v5.4.59, v5.8.1, v5.4.58, v5.4.57, v5.4.56, v5.8, v5.7.12, v5.4.55, v5.7.11, v5.4.54, v5.7.10, v5.4.53, v5.4.52, v5.7.9, v5.7.8, v5.4.51, v5.4.50, v5.7.7, v5.4.49, v5.7.6, v5.7.5, v5.4.48, v5.7.4, v5.7.3, v5.4.47, v5.4.46, v5.7.2, v5.4.45, v5.7.1, v5.4.44, v5.7, v5.4.43, v5.4.42, v5.4.41, v5.4.40, v5.4.39, v5.4.38, v5.4.37, v5.4.36, v5.4.35, v5.4.34, v5.4.33, v5.4.32, v5.4.31, v5.4.30, v5.4.29

# 4c2805aa | 31-Mar-2020 | James Smart <jsmart2021@gmail.com>
lpfc: nvmet: Add support for NVME LS request hosthandle
As the nvmet layer does not have the concept of a remoteport object, which can be used to identify the entity on the other end of the fabric that is to receive an LS, the hosthandle was introduced. The driver passes the hosthandle, a value representative of the remote port, with an LS request receive. The LS request will create the association. The transport will remember the hosthandle for the association, and if there is a need to initiate an LS request to the remote port for the association, the hosthandle will be used. When the driver loses connectivity with the remote port, it needs to notify the transport that the hosthandle is no longer valid, allowing the transport to terminate associations related to the hosthandle.
This patch adds support to the driver for the hosthandle. The driver will use the ndlp pointer of the remote port as the hosthandle in calls to nvmet_fc_rcv_ls_req(). The discovery engine is updated to invalidate the hosthandle whenever connectivity with the remote port is lost.
Signed-off-by: Paul Ely <paul.ely@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
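
The two hosthandle touchpoints, sketched under the assumption of the reworked transport entry points nvmet_fc_rcv_ls_req() (now taking a hosthandle) and nvmet_fc_invalidate_host(); the wrapper names and the bare void * standing in for the ndlp are simplifications.

#include <linux/nvme-fc-driver.h>

static int deliver_ls_to_nvmet(struct nvmet_fc_target_port *tgtport,
			       void *ndlp, /* remote node doubles as hosthandle */
			       struct nvmet_fc_ls_rsp *lsrsp,
			       void *lsreqbuf, u32 lsreqbuf_len)
{
	/* The transport files the new association under this hosthandle. */
	return nvmet_fc_rcv_ls_req(tgtport, ndlp, lsrsp, lsreqbuf, lsreqbuf_len);
}

static void on_connectivity_loss(struct nvmet_fc_target_port *tgtport,
				 void *ndlp)
{
	/* Handle is dead: let the transport tear down its associations. */
	nvmet_fc_invalidate_host(tgtport, ndlp);
}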

# fe1bedec | 31-Mar-2020 | James Smart <jsmart2021@gmail.com>
lpfc: Refactor Send LS Response support
Currently, the ability to send an NVME LS response is limited to the nvmet (controller/target) side of the driver. In preparation for both the nvme and nvmet sides supporting Send LS Response, rework the existing send ls_rsp and ls_rsp completion routines such that there is common code that can be used by both sides.
Signed-off-by: Paul Ely <paul.ely@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>

# e96a22b0 | 31-Mar-2020 | James Smart <jsmart2021@gmail.com>
lpfc: Refactor Send LS Abort support
Send LS Abort support is needed when Send LS Request is supported. Currently, the ability to abort an NVME LS request is limited to the nvme (host) side of the driver. In preparation for both the nvme and nvmet sides supporting Send LS Abort, rework the existing ls_req abort routines such that there is common code that can be used by both sides.
While refactoring, it was seen that the logic in the abort routine was incorrect: it attempted to abort all NVME LS's on the indicated port. As such, the routine was reworked to abort only the NVME LS request that was specified.
Signed-off-by: Paul Ely <paul.ely@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>

# 6514b25d | 31-Mar-2020 | James Smart <jsmart2021@gmail.com>
lpfc: Refactor Send LS Request support
Currently, the ability to send an NVME LS request is limited to the nvme (host) side of the driver. In preparation for both the nvme and nvmet sides supporting Send LS Request, rework the existing send ls_req and ls_req completion routines such that there is common code that can be used by both sides.
Signed-off-by: Paul Ely <paul.ely@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>

# 3a8070c5 | 31-Mar-2020 | James Smart <jsmart2021@gmail.com>
lpfc: Refactor NVME LS receive handling
In preparation for supporting both initiator mode and target mode receiving NVME LS's, commonize the existing NVME LS request receive handling found in the base driver and in the nvmet side.
Using the original lpfc_nvmet_unsol_ls_event() and lpfc_nvme_unsol_ls_buffer() routines as templates, commonize the reception of an NVME LS request. The common routine will validate the LS request, confirm it was received from a logged-in node, and allocate a lpfc_async_xchg_ctx that is used to manage the LS request. The role of the port is then inspected to determine which handler is to receive the LS - nvme or nvmet. As such, the nvmet handler is tied back in. A handler is created in nvme and is stubbed out.
Signed-off-by: Paul Ely <paul.ely@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
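
The role dispatch at the tail of the common routine might look like this sketch; the enum, context type, and handler names are hypothetical stand-ins for the driver's actual symbols.

#include <linux/errno.h>

struct lpfc_async_xchg_ctx; /* shared LS exchange context (opaque here) */

/* Hypothetical per-role handlers; the nvme one is the new stub. */
int nvmet_handle_ls_rcv(struct lpfc_async_xchg_ctx *axchg);
int nvme_handle_ls_rcv(struct lpfc_async_xchg_ctx *axchg);

enum port_role { ROLE_NVME_INITIATOR, ROLE_NVME_TARGET };

/* After validating the LS and allocating the context, route by role. */
static int dispatch_unsol_ls(enum port_role role,
			     struct lpfc_async_xchg_ctx *axchg)
{
	switch (role) {
	case ROLE_NVME_TARGET:
		return nvmet_handle_ls_rcv(axchg); /* nvmet tied back in */
	case ROLE_NVME_INITIATOR:
		return nvme_handle_ls_rcv(axchg);  /* stubbed for now */
	}
	return -EINVAL;
}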

# 7b7f551b | 31-Mar-2020 | James Smart <jsmart2021@gmail.com>
lpfc: Commonize lpfc_async_xchg_ctx state and flag definitions
The last step of commonization is to remove the 'T' suffix from state and flag field definitions. This is minor, but it removes the mental association that they apply solely to nvmet use.
Signed-off-by: Paul Ely <paul.ely@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>

# 7cacae2a | 31-Mar-2020 | James Smart <jsmart2021@gmail.com>
lpfc: Refactor nvmet_rcv_ctx to create lpfc_async_xchg_ctx
To support FC-NVME-2 (actually FC-NVME (rev 1) with Amendment 1), both the nvme (host) and nvmet (controller/target) sides will need to be able to receive LS requests. Currently, this support is in the nvmet side only. To prepare for both sides supporting LS receive, rename lpfc_nvmet_rcv_ctx to lpfc_async_xchg_ctx and commonize the definition.
Signed-off-by: Paul Ely <paul.ely@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>

# 2a1160a0 | 31-Mar-2020 | James Smart <jsmart2021@gmail.com>
lpfc: Refactor lpfc nvme headers
A lot of files in lpfc include nvme headers, building up relationships that require a file to change for its headers when no other change is necessary. It would be better to localize the nvme headers.
There is also no need for separate nvme (initiator) and nvmet (tgt) header files. Refactor the inclusion of nvme headers so that all nvme items are included by lpfc_nvme.h. Merge lpfc_nvmet.h into lpfc_nvme.h so that there is a single header used by both the nvme and nvmet sides. This prepares for structure sharing between the two roles and for adding shared function prototypes for upcoming shared routines.
Signed-off-by: Paul Ely <paul.ely@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>

# 92fff53b | 09-Mar-2019 | Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Pull SCSI updates from James Bottomley: "This is mostly update of the usual drivers: arcmsr, qla2xxx, lpfc, hisi_sas, target/iscsi and target/core. Additionally Christoph refactored gdth as part of the dma changes. The major mid-layer change this time is the removal of bidi commands and with them the whole of the osd/exofs driver and filesystem. This is a major simplification for block and mq in particular"
* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (240 commits)
  scsi: cxgb4i: validate tcp sequence number only if chip version <= T5
  scsi: cxgb4i: get pf number from lldi->pf
  scsi: core: replace GFP_ATOMIC with GFP_KERNEL in scsi_scan.c
  scsi: mpt3sas: Add missing breaks in switch statements
  scsi: aacraid: Fix missing break in switch statement
  scsi: kill command serial number
  scsi: csiostor: drop serial_number usage
  scsi: mvumi: use request tag instead of serial_number
  scsi: dpt_i2o: remove serial number usage
  scsi: st: osst: Remove negative constant left-shifts
  scsi: ufs-bsg: Allow reading descriptors
  scsi: ufs: Allow reading descriptor via raw upiu
  scsi: ufs-bsg: Change the calling convention for write descriptor
  scsi: ufs: Remove unused device quirks
  Revert "scsi: ufs: disable vccq if it's not needed by UFS device"
  scsi: megaraid_sas: Remove a bunch of set but not used variables
  scsi: clean obsolete return values of eh_timed_out
  scsi: sd: Optimal I/O size should be a multiple of physical block size
  scsi: MAINTAINERS: SCSI initiator and target tweaks
  scsi: fcoe: make use of fip_mode enum complete
  ...

Revision tags: v5.6, v5.4.28, v5.4.27, v5.4.26, v5.4.25, v5.4.24, v5.4.23, v5.4.22, v5.4.21, v5.4.20, v5.4.19, v5.4.18, v5.4.17, v5.4.16, v5.5, v5.4.15, v5.4.14, v5.4.13, v5.4.12, v5.4.11, v5.4.10, v5.4.9, v5.4.8, v5.4.7, v5.4.6, v5.4.5, v5.4.4, v5.4.3, v5.3.15, v5.4.2, v5.4.1, v5.3.14, v5.4, v5.3.13, v5.3.12, v5.3.11, v5.3.10, v5.3.9, v5.3.8, v5.3.7, v5.3.6, v5.3.5, v5.3.4, v5.3.3, v5.3.2, v5.3.1, v5.3, v5.2.14, v5.3-rc8, v5.2.13, v5.2.12, v5.2.11, v5.2.10, v5.2.9, v5.2.8, v5.2.7, v5.2.6, v5.2.5, v5.2.4, v5.2.3, v5.2.2, v5.2.1, v5.2, v5.1.16, v5.1.15, v5.1.14, v5.1.13, v5.1.12, v5.1.11, v5.1.10, v5.1.9, v5.1.8, v5.1.7, v5.1.6, v5.1.5, v5.1.4, v5.1.3, v5.1.2, v5.1.1, v5.0.14, v5.1, v5.0.13, v5.0.12, v5.0.11, v5.0.10, v5.0.9, v5.0.8, v5.0.7, v5.0.6, v5.0.5, v5.0.4, v5.0.3, v4.19.29, v5.0.2, v4.19.28, v5.0.1, v4.19.27, v5.0, v4.19.26, v4.19.25, v4.19.24, v4.19.23, v4.19.22, v4.19.21, v4.19.20, v4.19.19

# 0d041215 | 28-Jan-2019 | James Smart <jsmart2021@gmail.com>
scsi: lpfc: Update 12.2.0.0 file copyrights to 2019
For files modified as part of 12.2.0.0 patches, update the copyright to 2019.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>

# c490850a | 28-Jan-2019 | James Smart <jsmart2021@gmail.com>
scsi: lpfc: Adapt partitioned XRI lists to efficient sharing
The XRI get/put lists were partitioned per hardware queue. However, the adapter rarely had sufficient resources to give a large number of resources per queue. As such, it became common for a cpu to encounter a lack of XRI resource and request the upper io stack to retry after returning a BUSY condition. This occurred even though other cpus were idle and not using their resources.
Create as efficient a scheme as possible to move resources to the cpus that need them. Each cpu maintains a small private pool which it allocates from for io. There is a watermark that the cpu attempts to keep in the private pool. The private pool, when empty, pulls from a global pool from the cpu. When the cpu's global pool is empty, it will pull from other cpus' global pools. As there are many cpu global pools (1 per cpu or hardware queue count), and as each cpu selects what cpu to pull from at different rates and at different times, a randomizing effect is created that minimizes the number of cpus that will contend with each other when they steal XRIs from another cpu's global pool.
On io completion, a cpu will push the XRI back onto its private pool. A watermark level is maintained for the private pool such that, when it is exceeded, XRIs are moved to the cpu global pool so that other cpus may allocate them.
On NVME, as heartbeat commands are critical to get placed on the wire, a single expedite pool is maintained. When a heartbeat is to be sent, it will allocate an XRI from the expedite pool rather than the normal cpu private/global pools. On any io completion, if a reduction in the expedite pool is seen, it will be replenished before the XRI is placed on the cpu private pool.
Statistics are added to aid understanding of the XRI levels on each cpu and their behavior.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
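
A compact sketch of the private/global pool movement described above; the watermark value, locking granularity, and the omitted steal and expedite paths are assumptions for illustration.

#include <linux/list.h>
#include <linux/spinlock.h>

#define PVT_POOL_HIGH_WM 16 /* assumed private-pool watermark */

struct xri_pools {
	spinlock_t lock;
	struct list_head pvt;  /* this cpu allocates from here first */
	struct list_head glob; /* other cpus may steal from here */
	int pvt_count;
};

static struct list_head *xri_get(struct xri_pools *p)
{
	struct list_head *xri = NULL;

	spin_lock(&p->lock);
	if (!list_empty(&p->pvt)) {
		xri = p->pvt.next;
		list_del_init(xri);
		p->pvt_count--;
	} else if (!list_empty(&p->glob)) {
		xri = p->glob.next;
		list_del_init(xri);
	}
	spin_unlock(&p->lock);
	/* else: caller walks other cpus' glob pools (steal), omitted */
	return xri;
}

static void xri_put(struct xri_pools *p, struct list_head *xri)
{
	spin_lock(&p->lock);
	if (p->pvt_count < PVT_POOL_HIGH_WM) {
		list_add(xri, &p->pvt);
		p->pvt_count++;
	} else {
		list_add_tail(xri, &p->glob); /* over watermark: share it */
	}
	spin_unlock(&p->lock);
}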