Allow a struct fc_rport_priv to have no fc_rport associated with it.
This sets up to remove the need for "rogue" rports.
Add a few fields to fc_rport_priv that are needed before the fc_rport
is created. These are the ids, maxframe_size, classes, and rport pointer.
Remove the macro PRIV_TO_RPORT(). Just use rdata->rport where appropriate.
To take the place of the get_device()/put_device ops that were used to
hold both the rport and rdata, add a reference count to rdata structures
using kref. When kref_put() drops the refcount to zero, a new template
function is called to release the rdata. This will take care of
freeing the rdata and releasing the hold on the rport (for now). After
subsequent patches make the rport truly optional, this release function
will simply free the rdata.
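A minimal sketch of the intended refcounting pattern follows; the struct and
release-function names here are placeholders, and the real field layout and
release callback are whatever the patch itself defines:

#include <linux/kref.h>
#include <linux/slab.h>

/* Illustration only: fields beyond the kref are placeholders. */
struct example_rport_priv {
	struct kref kref;
	/* ids, maxframe_size, classes, rport pointer, ... */
};

static void example_rport_destroy(struct kref *kref)
{
	struct example_rport_priv *rdata =
		container_of(kref, struct example_rport_priv, kref);

	/* Drop the hold on the rport here (for now), then free rdata. */
	kfree(rdata);
}

/* Callers hold and drop references instead of get_device()/put_device():
 *	kref_get(&rdata->kref);
 *	kref_put(&rdata->kref, example_rport_destroy);
 */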
Remove the simple inline function fc_rport_set_name(), which would
otherwise become semantically ambiguous. The caller now sets the port_name
and node_name in rdata->ids, which will later be copied to the rport
when it is created.
Signed-off-by: Joe Eykholt <jeykholt@cisco.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
tt.elsct_send is used by both FCP and the rport state machine.
After further patches, these two modules will use different
structures for the remote port.
So, change elsct_send to use the FC_ID instead of the fc_rport_priv
as its argument. It currently only uses the FC_ID anyway.
For CT requests the destination FC_ID is still implicitly 0xfffffc.
After further patches the did arg on CT requests will be used to
specify the FC_ID being inquired about for GPN_ID or other queries.
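A rough sketch of the changed callback shape (the exact parameter list in
libfc may differ slightly from this):

struct fc_seq *(*elsct_send)(struct fc_lport *lport, u32 did,
			     struct fc_frame *fp, unsigned int op,
			     void (*resp)(struct fc_seq *,
					  struct fc_frame *, void *),
			     void *arg, u32 timer_msec);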
Signed-off-by: Joe Eykholt <jeykholt@cisco.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
The rport and discovery modules deal with remote ports
before fc_remote_port_add() can be done, because the
full set of rport identifiers is not known at early stages.
In preparation for splitting the fc_rport/fc_rport_priv allocation,
make fc_rport_priv the primary interface for the remote port and
discovery engines.
The FCP / SCSI layers still deal with fc_rport and
fc_rport_libfc_priv, however.
Signed-off-by: Joe Eykholt <jeykholt@cisco.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
These macros introduce extra undesirable semicolons that keep
them from being used in expressions, and they don't protect
against being passed an expression.
Add parens and remove the semicolons.
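The shape of the problem and the fix, using an illustrative macro rather than
the actual libfc definitions:

/* Before: trailing semicolon and unprotected argument */
#define EXAMPLE_TO_PRIV_OLD(x)	(struct example_priv *)x->priv;

/* After: argument parenthesized, no trailing semicolon, so the macro
 * can be used inside expressions and in if/else bodies.
 */
#define EXAMPLE_TO_PRIV(x)	((struct example_priv *)((x)->priv))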
Signed-off-by: Joe Eykholt <jeykholt@cisco.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
The interface for lport->tt.rport_create() takes a fc_disc_port arg,
which is unnatural for most calls. The only reason for this was
to avoid passing in the local port as an argument, but it otherwise
added complexity.
Simplify by just using lport and fc_rport_identifiers.
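A sketch of the simplified callback signature (assumed; the exact libfc
prototype may differ):

struct fc_rport_priv *(*rport_create)(struct fc_lport *lport,
				      struct fc_rport_identifiers *ids);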
Signed-off-by: Joe Eykholt <jeykholt@cisco.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
While the I/O and LLD interfaces use fc_rport_libfc_priv, the
disc and rport interfaces will use fc_rport_priv, which will
be separately allocated.
Change the disc and rport usage of fc_rport_libfc_priv to fc_rport_priv.
Use #define temporarily to make both names equivalent until a
subsequent patch splits them.
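The temporary aliasing amounts to a one-line define roughly like this (the
direction of the alias is an assumption here):

#define fc_rport_priv fc_rport_libfc_priv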
Signed-off-by: Joe Eykholt <jeykholt@cisco.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
This just cuts down on the number of locks we're dealing with, and
eliminates the need to take another lock in the netdev notifier.
Signed-off-by: Chris Leech <christopher.leech@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Fixes reference counting on fcoe_instance and net_device, and adds
NETDEV_UNREGISTER notifier handling so that you can unload network drivers.
FCoE no longer increments the module use count for the network driver.
On a NETDEV_UNREGISTER event, destroying the FCoE instance is deferred to a
workqueue context to avoid RTNL deadlocks.
Based in part on an earlier patch from John Fastabend.
John's patch description:
Currently, the netdev module ref count is not decremented with module_put()
when the module is unloaded while fcoe instances are present. To fix this,
the reference count on the netdev module is removed completely and
NETDEV_UNREGISTER handling is added to the netdev event notifier.
This allows fcoe to remove devices cleanly when the netdev module is unloaded
so we no longer need to hold a reference count for the netdev module.
Signed-off-by: Chris Leech <christopher.leech@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
We only want the FCoE create and destroy routines to deal with top-level
N_Ports; the VN_Ports are tracked on the vport list (see scsi_transport_fc).
Signed-off-by: Chris Leech <christopher.leech@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Rather than rely on the hostlist_lock to be held while creating exchange
managers, serialize fcoe instance creation and destruction with a mutex.
This will allow the hostlist addition to be moved out of fcoe_if_create(),
which will simplify NPIV support.
Signed-off-by: Chris Leech <christopher.leech@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
fcoe_netdev_config() is called during initialization of a libfc instance.
Much of what was there only needs to be done once for each net_device.
The same goes for the corresponding cleanup.
The FIP controller initialization is moved to interface creation time.
Otherwise it will keep getting re-initialized for every VN_Port once NPIV is
enabled.
fcoe_if_destroy() has some reordering to deal with the changes. Receives are
not stopped until after fcoe_interface_put() is called, but transmits must be
stopped before. So care is taken to stop libfc transmits and the
transmit backlog timer, then call fcoe_interface_put(), which will stop
receives and clean up the FIP controller; after that the receive queues can
be cleaned and the port freed.
Signed-off-by: Chris Leech <christopher.leech@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Up to this point the fcoe_instance structure was simply kzalloc/kfreed. This
patch introduces create and destroy functions as well as kref based reference
counting. The create function will grow as the initialization code is moved
there.
Signed-off-by: Chris Leech <christopher.leech@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
The priv pointer is no longer needed, and once NPIV is enabled
fcoe_interface:fc_lport becomes a one-to-many relationship.
Remove the single pointer.
Signed-off-by: Chris Leech <christopher.leech@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
The offload EM pointer is only used when setting up a new libfc instance, but
as it's designed to be shared among NPIV VN_Ports it should be tracked in
fcoe_interface.
With the host-list changed to track fcoe_interfaces as well, this is needed
before we can remove the priv pointer from that structure (which is only there
to help in the transition, and stops making sense once NPIV is enabled).
Signed-off-by: Chris Leech <christopher.leech@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
There is only one FIP state per net_device, so the FIP controller needs to be
moved from the per-SCSI-host fcoe_port to the per-net_device fcoe_interface
structure.
Signed-off-by: Chris Leech <christopher.leech@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
The packet handlers need to be tracked in fcoe_interface so there is only one
set per net_device. When NPIV is enabled there will be multiple SCSI hosts
and multiple fcoe_port structures on a single net_device.
The packet handlers match by ethertype and netdev. If the same handler gets
registered on a single netdev multiple times, the receive function will be
called multiple times for each frame.
Signed-off-by: Chris Leech <christopher.leech@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
The network interface needs to be shared between all NPIV VN_Ports; therefore
it should be tracked in the fcoe_interface and not for each SCSI host in
fcoe_port.
Signed-off-by: Chris Leech <christopher.leech@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
In preparation for NPIV support, I'm splitting the fcoe instance structure
into two to remove the assumptions about it being 1:1 with the net_device.
There will now be two structures, one which is 1:1 with the underlying
net_device and one which is allocated per virtual SCSI/FC host.
fcoe_softc is renamed to fcoe_port for the per Scsi_Host FCoE private data.
Later patches will start moving shared items from fcoe_port to fcoe_interface.
Signed-off-by: Chris Leech <christopher.leech@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
By passing in the parent device instead of assuming the netdev is what
should be used, fcoe_if_create becomes usable for NPIV vports as well.
You still need a netdev, because that's how FCoE works. Also removed some
duplicate checks from fcoe_if_create that are already in fcoe_create.
fcoe_if_destroy needs to take an lport as its only argument, not a netdev.
That removes the 1:1 netdev:lport assumption from the destroy path.
Signed-off-by: Chris Leech <christopher.leech@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
The hostlist and the hostlist_lock were initialized both in
the declaration and in fcoe_init(). Remove the unneeded code.
Signed-off-by: Joe Eykholt <jeykholt@cisco.com>
Signed-off-by: Chris Leech <christopher.leech@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
fcoe_if_init() can fail, but its return value wasn't checked.
Signed-off-by: Chris Leech <christopher.leech@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Use cancel_work_sync() in place of flush_work(), so that
fcoe_ctlr_destroy() can be called from a workqueue.
Also, purge the receive queue after recv_work has been cancelled, because
if recv_work never ran the queue is no longer guaranteed to be empty.
Signed-off-by: Chris Leech <christopher.leech@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
This adds fcoe_ddp_min as a module parameter for fcoe module to:
/sys/module/fcoe/parameters/ddp_min
It is observed that for some hardware, particularly Intel 82599, there is too
much overhead in setting up context for direct data placement (DDP) read when
the requested read I/O size is small. This is added as a module parameter for
performance tuning; it defaults to 0 and users can change it based on their
own hardware.
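A minimal sketch of how such a tunable is typically exposed; the exact
variable name, type, and permissions used by fcoe are assumptions here:

#include <linux/module.h>

static unsigned int fcoe_ddp_min;	/* default 0: always attempt DDP */
module_param_named(ddp_min, fcoe_ddp_min, uint, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(ddp_min,
		 "Minimum read I/O size in bytes before setting up DDP");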
Signed-off-by: Yi Zou <yi.zou@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
1. Updates fcoe_rcv() to queue incoming frames to the fcoe per-cpu
thread on which the frame's exchange originated, and simply uses the
current cpu for request exchanges not originated by the initiator.
It is redundant to guard this code with CONFIG_SMP, so the CONFIG_SMP
conditionals around it are removed.
2. Updates fc_exch_em_alloc, fc_exch_delete, and fc_exch_find to use
per-cpu exch pools; fc_exch_delete is a rename of the older
fc_exch_mgr_delete_ep, since exchanges are now deleted from the EM's
pools and the shorter name is sufficient and clearer.
Updates these functions to map an exchange id to its index into the
exch pool using fc_cpu_mask, fc_cpu_order, and the EM min_xid.
This mapping follows the detailed explanation in the previous patch:
the lower fc_cpu_mask bits of the exchange id carry the cpu number,
and the upper bits carry the sum of the EM min_xid and the exchange's
index in the pool (see the sketch after this list).
Uses the pool's next_index to track exchange allocation from the pool,
with pool_max_index as the upper bound of the exches array in the pool.
3. Adds an exch pool pointer to fc_exch so that fc_exch_delete can free
an exchange back to its pool.
4. Updates fc_exch_mgr_reset to reset all exch pools of an EM. This
required adding an fc_exch_pool_reset function to reset the exchanges
in a pool, and having fc_exch_mgr_reset call fc_exch_pool_reset for
each pool within each EM of a lport.
5. Removes the no-longer-needed exches array, em_lock, next_xid, and
total_exches from struct fc_exch_mgr; these are not needed once the
per-cpu exch pools are used. Also removes the unused max_read and
last_read fields from struct fc_exch_mgr.
6. Updates the locking notes for the exch pool lock versus the fc_exch
lock, and uses the pool lock in exchange allocation, lookup, and reset.
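A sketch of the exchange id <-> (cpu, pool index) mapping referenced in
item 2 above. The helper names are illustrative; fc_cpu_mask and
fc_cpu_order are the globals described in the text, and min_xid is assumed
to have its lower fc_cpu_mask bits clear:

/* lower fc_cpu_mask bits of the xid carry the originating cpu */
static inline u16 example_xid_to_cpu(u16 xid)
{
	return xid & fc_cpu_mask;
}

/* remaining upper bits give the exchange's index within its pool */
static inline u16 example_xid_to_index(u16 xid, u16 min_xid)
{
	return (xid - min_xid) >> fc_cpu_order;
}

/* and the reverse, when allocating a new exchange on this cpu */
static inline u16 example_index_to_xid(u16 index, u16 cpu, u16 min_xid)
{
	return min_xid + ((index << fc_cpu_order) | cpu);
}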
Signed-off-by: Vasu Dev <vasu.dev@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Adds per-cpu exch pools for these reasons:
1. Currently an EM instance is shared across all cpus to manage
all exchanges for all cpus. This requires taking em_lock across all
cpus for every exchange alloc, free, lookup, and reset on each frame,
which makes em_lock expensive. Having per-cpu exch pools, each with
its own per-cpu pool lock, will likely reduce locking contention in
the fast path for exchange alloc, free, and lookup.
2. Per-cpu exch pools will likely improve the cache hit ratio, since
all frames of an exchange are processed on the same cpu on which the
exchange originated.
This patch is only prep work to help keep the complexity of the next
patch low, so this patch only sets up the per-cpu exch pools and related
helper functions to be used by the next patch. The next patch makes full
use of the per-cpu exch pools in all code paths, i.e. tx, rx, and reset.
Divides the per-EM exchange id range equally across all cpus to set up
the per-cpu exch pools. The division is such that the lower bits of the
exchange id carry the number of the cpu on which the exchange originated;
later, a simple bitwise AND of an incoming frame's exchange id with
fc_cpu_mask retrieves that cpu number, directing all frames to the same
cpu on which the exchange originated. This requires global fc_cpu_mask
and fc_cpu_order values, initialized from the maximum possible cpu count
(nr_cpu_ids) rounded up to a power of two; they are used to map between
exchange ids and exch pointer array indexes in a pool during the exchange
allocation, find, and reset code paths.
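A sketch of that initialization; the assumption here is only that it lives
somewhere in the exchange-manager setup path:

#include <linux/cpumask.h>
#include <linux/log2.h>

static u16 fc_cpu_mask;		/* the globals described above */
static u16 fc_cpu_order;

static void example_setup_cpu_mask(void)
{
	/* round nr_cpu_ids up to a power of two, as described above */
	fc_cpu_order = ilog2(roundup_pow_of_two(nr_cpu_ids));
	fc_cpu_mask = (1 << fc_cpu_order) - 1;
}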
Adds a check in fc_exch_mgr_alloc() to ensure the specified min_xid's
lower bits are zero, since these bits are used to carry the cpu info.
Adds and initializes struct fc_exch_pool with all the fields required
to manage the exchanges in a pool.
Allocates a per-cpu struct fc_exch_pool along with memory for the exches
array covering the range of exchanges per pool; the exches array memory
follows struct fc_exch_pool.
Adds fc_exch_ptr_get/set() helper functions to get/set an exch pointer
in a pool's exches array at the specified index.
Increases the default FCOE_MAX_XID to 0x0FFF from 0x07EF, so that more
exchanges are available per cpu after the exchange id range is divided
across all cpus into the pools as described above.
Signed-off-by: Vasu Dev <vasu.dev@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
If using code like this:
if (foo)
FCOE_DBG("foo\n);
else
FCOE_DBG("bar\n");
one gets compile errors because FCOE_DBG expands with its own semicolon,
making one too many for the if-statement.
Remove the offending semicolon in fcoe.h and also a similar case
in libfcoe.c.
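The fixed macro has roughly this shape (illustrative, not the exact fcoe.h
definition): the body supplies no trailing semicolon, so the caller's
semicolon terminates the statement and if/else works as expected.

#define EXAMPLE_DBG(fmt, args...)				\
	do {							\
		printk(KERN_INFO "fcoe: " fmt, ##args);		\
	} while (0)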
Signed-off-by: Joe Eykholt <jeykholt@cisco.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
The statement reads, "Exchange timed out, notifying the upper layer",
however, this statement is printed whenever the timer is armed. This
is confusing to someone debugging the code because every time an
exchange is initialized, there is an incorrect statement stating that
the timer has already timed out. This patch changes the statement to
read, "Exchange timer armed" which is more accurate.
This patch also adds a debug statement in the timeout handler to
properly indicate that the exchange has timed out.
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
There's currently no space between the interface name and the
user specified format/string. This patch adds a space and a colon
to the output to separate the interface name and the user
specified string.
So, instead of "ethXfoo" it will read "ethX: foo".
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
If we had multiple tasks on the cmd or requeue lists, and iscsi_tcp
returns an error, the write_space function can still run and queue
iscsi_data_xmit. If it was a legitimate problem and iscsi_conn_failure
was run, but we raced and iscsi_data_xmit ran first, it could miss
the suspend bit checks and start trying to send data again, hitting
another timeout. A similar problem is present when using cxgb3i.
This has libiscsi check the suspend bit before calling the xmit
task callout, so we at least do not try sending multiple tasks
(one could be sent).
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
If a target closed the connection, we will detect it in the
state_changed or data_ready callout. This adds a new conn
error value to use for this problem, so it is not confused with the
case where the initiator throws a conn error and drops the
connection.
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Logging for connections and sessions in the scsi_transport_iscsi module
is now controlled by module parameters.
Signed-off-by: Erez Zilber <erezzi.list@gmail.com>
[Mike Christie: newline fixups and modification of some dbg statements]
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
The residual variable is only valid for underrun, so do
not print it out in the overrun case.
Signed-off-by: Karen Higgins <karen.higgins@qlogic.com>
[Mike Christie: Fix coding style issues in patch]
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
If we sent multiple pdus as immediate, the target could be
rejecting some, and we have just been dropping the rejection
notification. This adds code to handle nop-out rejections,
so if a nop-out was sent as a ping and rejected we do not
mark the connection bad. Instead we just clean up the timers,
since a pdu has made a round trip and we know the connection
is good.
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
We increment session->cmdsn at the top of iscsi_prep_scsi_cmd_pdu, but
if the prep ecb, prep bidi, or init_task calls fail, then we leave the
session->cmdsn incremented. This moves the cmdsn manipulation to the end
of the function when we know it has succeeded.
It also adds a session->cmdsn-- in queuecommand for the case where a driver
like bnx2i tries to send a task from that context but it fails. We do not
have to do this in the xmit thread context because that code will retry
the same task if the initial call fails.
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
The network core will call the state_change() callback
prior to the data_ready() callback, which might cause
us to lose a connection state change.
So we have to evaluate the socket state at the end
of the data_ready() callback, too.
Signed-off-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
ISPs which support this feature include 23xx and above.
Signed-off-by: Andrew Vasquez <andrew.vasquez@qlogic.com>
Signed-off-by: Giridhar Malavali <giridhar.malavali@qlogic.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Lay the groundwork for adding alternative asynchronous operations by
generalizing and extending the SRB structure. This allows
follow-on patches to add support for:
- Asynchronous logins.
- ELS/CT passthru requests.
- Loopback requests.
- Non-blocking mailbox commands (ABTS, Task Management, etc).
Signed-off-by: Andrew Vasquez <andrew.vasquez@qlogic.com>
Signed-off-by: Giridhar Malavali <giridhar.malavali@qlogic.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Bump version to 01.100.06.00
Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
Reviewed-by: Eric Moore <Eric.moore@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Cleaned up the base_interrupt routine to be more efficient.
Deleted about a third of the config page API by moving redundant code from all
the calling functions to _config_request.
Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
Reviewed-by: Eric Moore <Eric.moore@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
This patch modifies the slave_configure callback so the messages sent
to the system log for RAID1E volumes contain the string "RAID10" instead of
"RAID1E". These messages contain information regarding what kind of scsi device
is being added. Certain OEMs can enable displaying the RAID10 string instead of
RAID1E via manufacturing page 10. The driver reads this config page at
driver load time, then determines from the GenericFlags0 bits whether to
display the RAID10 or RAID1E string; an even drive count is also taken into
consideration.
Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
Reviewed-by: Eric Moore <Eric.moore@lsi.com>
Cc: Stable Tree <stable@kernel.org>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Change the SDEV running state from interrupt context. Previously this was
handled in the work queue thread; with this change we no longer wait for the
work queue thread to execute scsih_ublock_io_device to put the SDEV into the
running state. This reduces the delay before the device becomes RUNNING.
This patch was modified per James' comment not to change the SDEV state
using the scsi_device_set_state API, and to instead use the
scsi_internal_device_unblock / scsi_internal_device_block APIs.
Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
Reviewed-by: Eric Moore <Eric.moore@lsi.com>
Cc: Stable Tree <stable@kernel.org>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Deleted the wrapper function called _scsih_link_change. This function was
implemented for compatibility reasons only, between different kernel versions.
Currently this function is no longer needed. The calling functions are
converted to call mpt2sas_transport_update_phy_link_change directly in the
transport layer.
Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
Reviewed-by: Eric Moore <Eric.moore@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
This patch renames the flag for indicating host reset from
ioc_reset_in_progress to shost_recovery. It also removes the spin locks
surrounding the setting of this flag, which are unnecessary. Sanity checks on
the shost_recovery flag were added throughout the code to prevent sending
firmware commands during host reset. Also, setting the shost state to
SHOST_RECOVERY was removed to prevent deadlocks; this is better
handled by the shost_recovery flag.
Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
Reviewed-by: Eric Moore <Eric.moore@lsi.com>
Cc: Stable Tree <stable@kernel.org>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Following host reset it's possible that the controller firmware could
assign new handles for devices, as well as add or delete devices. There is
code in the driver that rescans the topology following host reset, updating
device handles and removing devices that are no longer responding. This patch
improves responsiveness by moving this rescanning from the delayed hotplug
worker thread to immediately following the host reset.
Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
Reviewed-by: Eric Moore <Eric.moore@lsi.com>
Cc: Stable Tree <stable@kernel.org>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Add reset related code for st_yel.
1. Set the SS_H2I_INT_RESET bit.
2. Wait for the SS_MU_OPERATIONAL flag. This is also part of the
normal handshake process, so move it to the handshake routine.
3. Continue handshake with the firmware.
Signed-off-by: Ed Lin <ed.lin@promise.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Remove expensive ktime_get()/ktime_us_delta() functions from the hot
path and use get_clock_monotonic() instead. This eliminates seven
function calls and avoids a lot of unnecessary calculations.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Christof Schmitt <christof.schmitt@de.ibm.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
The timestamp calculation used for s390dbf output is the same in a
private zfcp function and in debug.c. Replace both with a common
inline function.
Reviewed-by: Swen Schillig <swen@vnet.ibm.com>
Signed-off-by: Christof Schmitt <christof.schmitt@de.ibm.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
dev_set_name tries to allocate memory, so check the return value for
allocation failures. After dev_set_name succeeds, call device_register
as next step to be able to use put_device during error handling.
Reviewed-by: Swen Schillig <swen@vnet.ibm.com>
Signed-off-by: Christof Schmitt <christof.schmitt@de.ibm.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Don't use kfree directly after device registration has started.
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Christof Schmitt <christof.schmitt@de.ibm.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
The config semaphore is only used as a mutex, so replace it with a
simple mutex.
Reviewed-by: Swen Schillig <swen@vnet.ibm.com>
Signed-off-by: Christof Schmitt <christof.schmitt@de.ibm.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>