Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq: (55 commits)
  workqueue: mark init_workqueues() as early_initcall()
  workqueue: explain for_each_*cwq_cpu() iterators
  fscache: fix build on !CONFIG_SYSCTL
  slow-work: kill it
  gfs2: use workqueue instead of slow-work
  drm: use workqueue instead of slow-work
  cifs: use workqueue instead of slow-work
  fscache: drop references to slow-work
  fscache: convert operation to use workqueue instead of slow-work
  fscache: convert object to use workqueue instead of slow-work
  workqueue: fix how cpu number is stored in work->data
  workqueue: fix mayday_mask handling on UP
  workqueue: fix build problem on !CONFIG_SMP
  workqueue: fix locking in retry path of maybe_create_worker()
  async: use workqueue for worker pool
  workqueue: remove WQ_SINGLE_CPU and use WQ_UNBOUND instead
  workqueue: implement unbound workqueue
  workqueue: prepare for WQ_UNBOUND implementation
  libata: take advantage of cmwq and remove concurrency limitations
  workqueue: fix worker management invocation without pending works
  ...

Fixed up conflicts in fs/cifs/* as per Tejun. Other trivial conflicts in
include/linux/workqueue.h, kernel/trace/Kconfig and kernel/workqueue.c
Linus Torvalds 2010-08-07 12:42:58 -07:00
commit 3b7433b8a8
58 changed files with 3525 additions and 2960 deletions
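The unifying change across these conversions is mechanical: a slow_work item and its ops table become a plain work_struct processed by the concurrency-managed workqueue (cmwq) core. The sketch below is not taken from any file in this merge; it only illustrates the before/after shape of the conversion, with hypothetical names (my_object, my_object_work):

    #include <linux/kernel.h>
    #include <linux/workqueue.h>

    /* hypothetical object that used to embed a struct slow_work */
    struct my_object {
            struct work_struct work;        /* was: struct slow_work work; */
            int payload;
    };

    static void my_object_work(struct work_struct *work)
    {
            /* was: the .execute op in a struct slow_work_ops table */
            struct my_object *obj = container_of(work, struct my_object, work);

            /* long-running, possibly blocking processing of obj->payload */
            (void)obj;
    }

    static void my_object_setup(struct my_object *obj)
    {
            /* was: slow_work_init(&obj->work, &my_object_ops); */
            INIT_WORK(&obj->work, my_object_work);
    }

    static void my_object_kick(struct my_object *obj)
    {
            /* was: slow_work_enqueue(&obj->work); cmwq manages the worker pool */
            schedule_work(&obj->work);
    }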


@@ -343,8 +343,8 @@ This will look something like:
 [root@andromeda ~]# head /proc/fs/fscache/objects
 OBJECT PARENT STAT CHLDN OPS OOP IPR EX READS EM EV F S | NETFS_COOKIE_DEF TY FL NETFS_DATA OBJECT_KEY, AUX_DATA
 ======== ======== ==== ===== === === === == ===== == == = = | ================ == == ================ ================
-17e4b 2 ACTV 0 0 0 0 0 0 7b 4 0 8 | NFS.fh DT 0 ffff88001dd82820 010006017edcf8bbc93b43298fdfbe71e50b57b13a172c0117f38472, e567634700000000000000000000000063f2404a000000000000000000000000c9030000000000000000000063f2404a
-1693a 2 ACTV 0 0 0 0 0 0 7b 4 0 8 | NFS.fh DT 0 ffff88002db23380 010006017edcf8bbc93b43298fdfbe71e50b57b1e0162c01a2df0ea6, 420ebc4a000000000000000000000000420ebc4a0000000000000000000000000e1801000000000000000000420ebc4a
+17e4b 2 ACTV 0 0 0 0 0 0 7b 4 0 0 | NFS.fh DT 0 ffff88001dd82820 010006017edcf8bbc93b43298fdfbe71e50b57b13a172c0117f38472, e567634700000000000000000000000063f2404a000000000000000000000000c9030000000000000000000063f2404a
+1693a 2 ACTV 0 0 0 0 0 0 7b 4 0 0 | NFS.fh DT 0 ffff88002db23380 010006017edcf8bbc93b43298fdfbe71e50b57b1e0162c01a2df0ea6, 420ebc4a000000000000000000000000420ebc4a0000000000000000000000000e1801000000000000000000420ebc4a
 where the first set of columns before the '|' describe the object:
@@ -362,7 +362,7 @@ where the first set of columns before the '|' describe the object:
 EM Object's event mask
 EV Events raised on this object
 F Object flags
-S Object slow-work work item flags
+S Object work item busy state mask (1:pending 2:running)
 and the second set of columns describe the object's cookie, if present:
@@ -395,8 +395,8 @@ and the following paired letters:
 w Show objects that don't have pending writes
 R Show objects that have outstanding reads
 r Show objects that don't have outstanding reads
-S Show objects that have slow work queued
-s Show objects that don't have slow work queued
+S Show objects that have work queued
+s Show objects that don't have work queued
 If neither side of a letter pair is given, then both are implied. For example:


@@ -1,322 +0,0 @@
====================================
SLOW WORK ITEM EXECUTION THREAD POOL
====================================
By: David Howells <dhowells@redhat.com>
The slow work item execution thread pool is a pool of threads for performing
things that take a relatively long time, such as making mkdir calls.
Typically, when processing something, these items will spend a lot of time
blocking a thread on I/O, thus making that thread unavailable for doing other
work.
The standard workqueue model is unsuitable for this class of work item as that
limits the owner to a single thread or a single thread per CPU. For some
tasks, however, more threads - or fewer - are required.
There is just one pool per system. It contains no threads unless something
wants to use it - and that something must register its interest first. When
the pool is active, the number of threads it contains is dynamic, varying
between a maximum and minimum setting, depending on the load.
====================
CLASSES OF WORK ITEM
====================
This pool supports two classes of work items:
(*) Slow work items.
(*) Very slow work items.
The former are expected to finish much quicker than the latter.
An operation of the very slow class may do a batch combination of several
lookups, mkdirs, and a create for instance.
An operation of the ordinarily slow class may, for example, write stuff or
expand files, provided the time taken to do so isn't too long.
Operations of both types may sleep during execution, thus tying up the thread
loaned to it.
A further class of work item is available, based on the slow work item class:
(*) Delayed slow work items.
These are slow work items that have a timer to defer queueing of the item for
a while.
THREAD-TO-CLASS ALLOCATION
--------------------------
Not all the threads in the pool are available to work on very slow work items.
The number will be between one and one fewer than the number of active threads.
This is configurable (see the "Pool Configuration" section).
All the threads are available to work on ordinarily slow work items, but a
percentage of the threads will prefer to work on very slow work items.
The configuration ensures that at least one thread will be available to work on
very slow work items, and at least one thread will be available that won't work
on very slow work items at all.
=====================
USING SLOW WORK ITEMS
=====================
Firstly, a module or subsystem wanting to make use of slow work items must
register its interest:
int ret = slow_work_register_user(struct module *module);
This will return 0 if successful, or a -ve error upon failure. The module
pointer should be the module interested in using this facility (almost
certainly THIS_MODULE).
Slow work items may then be set up by:
(1) Declaring a slow_work struct type variable:
#include <linux/slow-work.h>
struct slow_work myitem;
(2) Declaring the operations to be used for this item:
struct slow_work_ops myitem_ops = {
.get_ref = myitem_get_ref,
.put_ref = myitem_put_ref,
.execute = myitem_execute,
};
[*] For a description of the ops, see section "Item Operations".
(3) Initialising the item:
slow_work_init(&myitem, &myitem_ops);
or:
delayed_slow_work_init(&myitem, &myitem_ops);
or:
vslow_work_init(&myitem, &myitem_ops);
depending on its class.
A suitably set up work item can then be enqueued for processing:
int ret = slow_work_enqueue(&myitem);
This will return a -ve error if the thread pool is unable to gain a reference
on the item, 0 otherwise, or (for delayed work):
int ret = delayed_slow_work_enqueue(&myitem, my_jiffy_delay);
The items are reference counted, so there ought to be no need for a flush
operation. But as the reference counting is optional, means to cancel
existing work items are also included:
cancel_slow_work(&myitem);
cancel_delayed_slow_work(&myitem);
can be used to cancel pending work. The above cancel functions wait for
existing work to have been executed (or prevent it from executing, depending
on timing).
When all a module's slow work items have been processed, and the
module has no further interest in the facility, it should unregister its
interest:
slow_work_unregister_user(struct module *module);
The module pointer is used to wait for all outstanding work items for that
module before completing the unregistration. This prevents the put_ref() code
from being taken away before it completes. module should almost certainly be
THIS_MODULE.
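Taken together, the registration, initialisation, enqueueing and cancellation steps above amount to something like the following minimal sketch of the documented API (names frobber/frobber_execute are hypothetical, and this is an illustration rather than code from the tree; the .owner field appears in the in-tree ops tables removed later in this merge):

    #include <linux/kernel.h>
    #include <linux/module.h>
    #include <linux/slow-work.h>

    struct frobber {
            struct slow_work work;
            int payload;
    };

    static void frobber_execute(struct slow_work *work)
    {
            struct frobber *f = container_of(work, struct frobber, work);

            /* long-running, possibly blocking processing of f->payload */
            (void)f;
    }

    static const struct slow_work_ops frobber_ops = {
            .owner   = THIS_MODULE,
            .execute = frobber_execute,     /* get_ref/put_ref are optional */
    };

    static struct frobber my_frobber = { .payload = 1 };

    static int __init frobber_init(void)
    {
            int ret = slow_work_register_user(THIS_MODULE);

            if (ret < 0)
                    return ret;
            slow_work_init(&my_frobber.work, &frobber_ops);
            return slow_work_enqueue(&my_frobber.work);
    }

    static void __exit frobber_exit(void)
    {
            cancel_slow_work(&my_frobber.work);
            slow_work_unregister_user(THIS_MODULE);
    }

    module_init(frobber_init);
    module_exit(frobber_exit);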
================
HELPER FUNCTIONS
================
The slow-work facility provides a function by which it can be determined
whether or not an item is queued for later execution:
bool queued = slow_work_is_queued(struct slow_work *work);
If it returns false, then the item is not on the queue (it may be executing
with a requeue pending). This can be used to work out whether an item on which
another depends is on the queue, thus allowing a dependent item to be queued
after it.
If the above shows an item on which another depends not to be queued, then the
owner of the dependent item might need to wait. However, to avoid locking up
the threads unnecessarily by sleeping in them, it can make sense under some
circumstances to return the work item to the queue, thus deferring it until
some other items have had a chance to make use of the yielded thread.
To yield a thread and defer an item, the work function should simply enqueue
the work item again and return. However, this doesn't work if there's nothing
actually on the queue, as the thread just vacated will jump straight back into
the item's work function, thus busy waiting on a CPU.
Instead, the item should use the thread to wait for the dependency to go away,
but rather than using schedule() or schedule_timeout() to sleep, it should use
the following function:
bool requeue = slow_work_sleep_till_thread_needed(
struct slow_work *work,
signed long *_timeout);
This will add the caller to a second wait queue and then sleep, such that it will be woken up if
either something appears on the queue that could usefully make use of the
thread - and behind which this item can be queued, or if the event the caller
set up to wait for happens. True will be returned if something else appeared
on the queue and this work function should perhaps return, or false if
something else woke it up. The timeout is as for schedule_timeout().
For example:
wq = bit_waitqueue(&my_flags, MY_BIT);
init_wait(&wait);
requeue = false;
do {
prepare_to_wait(wq, &wait, TASK_UNINTERRUPTIBLE);
if (!test_bit(MY_BIT, &my_flags))
break;
requeue = slow_work_sleep_till_thread_needed(&my_work,
&timeout);
} while (timeout > 0 && !requeue);
finish_wait(wq, &wait);
if (!test_bit(MY_BIT, &my_flags))
goto do_my_thing;
if (requeue)
return; // to slow_work
===============
ITEM OPERATIONS
===============
Each work item requires a table of operations of type struct slow_work_ops.
Only ->execute() is required; the getting and putting of a reference and the
describing of an item are all optional.
(*) Get a reference on an item:
int (*get_ref)(struct slow_work *work);
This allows the thread pool to attempt to pin an item by getting a
reference on it. This function should return 0 if the reference was
granted, or a -ve error otherwise. If an error is returned,
slow_work_enqueue() will fail.
The reference is held whilst the item is queued and whilst it is being
executed. The item may then be requeued with the same reference held, or
the reference will be released.
(*) Release a reference on an item:
void (*put_ref)(struct slow_work *work);
This allows the thread pool to unpin an item by releasing the reference on
it. The thread pool will not touch the item again once this has been
called.
(*) Execute an item:
void (*execute)(struct slow_work *work);
This should perform the work required of the item. It may sleep, it may
perform disk I/O and it may wait for locks.
(*) View an item through /proc:
void (*desc)(struct slow_work *work, struct seq_file *m);
If supplied, this should print to 'm' a small string describing the work
the item is to do. This should be no more than about 40 characters, and
shouldn't include a newline character.
See the 'Viewing executing and queued items' section below.
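The reference-counting contract described for get_ref()/put_ref() above can be illustrated with a kref-backed item. This is a hypothetical sketch under the documented semantics (cached_item and friends are made-up names), not code from the tree:

    #include <linux/kernel.h>
    #include <linux/kref.h>
    #include <linux/module.h>
    #include <linux/slab.h>
    #include <linux/slow-work.h>

    struct cached_item {
            struct kref ref;
            struct slow_work work;
    };

    static int cached_item_get_ref(struct slow_work *work)
    {
            struct cached_item *item = container_of(work, struct cached_item, work);

            kref_get(&item->ref);
            return 0;               /* 0 => reference granted, enqueue may proceed */
    }

    static void cached_item_free(struct kref *ref)
    {
            kfree(container_of(ref, struct cached_item, ref));
    }

    static void cached_item_put_ref(struct slow_work *work)
    {
            struct cached_item *item = container_of(work, struct cached_item, work);

            kref_put(&item->ref, cached_item_free);
    }

    static void cached_item_execute(struct slow_work *work)
    {
            /* the pool holds the reference taken by get_ref() for the whole run */
    }

    static const struct slow_work_ops cached_item_ops = {
            .owner   = THIS_MODULE,
            .get_ref = cached_item_get_ref,
            .put_ref = cached_item_put_ref,
            .execute = cached_item_execute,
    };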
==================
POOL CONFIGURATION
==================
The slow-work thread pool has a number of configurables:
(*) /proc/sys/kernel/slow-work/min-threads
The minimum number of threads that should be in the pool whilst it is in
use. This may be anywhere between 2 and max-threads.
(*) /proc/sys/kernel/slow-work/max-threads
The maximum number of threads that should be in the pool. This may be
anywhere between min-threads and 255 or NR_CPUS * 2, whichever is greater.
(*) /proc/sys/kernel/slow-work/vslow-percentage
The percentage of active threads in the pool that may be used to execute
very slow work items. This may be between 1 and 99. The resultant number
is bounded to between 1 and one fewer than the number of active threads.
This ensures there is always at least one thread that can process very
slow work items, and always at least one thread that won't.
==================================
VIEWING EXECUTING AND QUEUED ITEMS
==================================
If CONFIG_SLOW_WORK_DEBUG is enabled, a debugfs file is made available:
/sys/kernel/debug/slow_work/runqueue
through which the list of work items being executed and the queues of items to
be executed may be viewed. The owner of a work item is given the chance to
add some information of its own.
The contents look something like the following:
THR PID ITEM ADDR FL MARK DESC
=== ===== ================ == ===== ==========
0 3005 ffff880023f52348 a 952ms FSC: OBJ17d3: LOOK
1 3006 ffff880024e33668 2 160ms FSC: OBJ17e5 OP60d3b: Write1/Store fl=2
2 3165 ffff8800296dd180 a 424ms FSC: OBJ17e4: LOOK
3 4089 ffff8800262c8d78 a 212ms FSC: OBJ17ea: CRTN
4 4090 ffff88002792bed8 2 388ms FSC: OBJ17e8 OP60d36: Write1/Store fl=2
5 4092 ffff88002a0ef308 2 388ms FSC: OBJ17e7 OP60d2e: Write1/Store fl=2
6 4094 ffff88002abaf4b8 2 132ms FSC: OBJ17e2 OP60d4e: Write1/Store fl=2
7 4095 ffff88002bb188e0 a 388ms FSC: OBJ17e9: CRTN
vsq - ffff880023d99668 1 308ms FSC: OBJ17e0 OP60f91: Write1/EnQ fl=2
vsq - ffff8800295d1740 1 212ms FSC: OBJ16be OP4d4b6: Write1/EnQ fl=2
vsq - ffff880025ba3308 1 160ms FSC: OBJ179a OP58dec: Write1/EnQ fl=2
vsq - ffff880024ec83e0 1 160ms FSC: OBJ17ae OP599f2: Write1/EnQ fl=2
vsq - ffff880026618e00 1 160ms FSC: OBJ17e6 OP60d33: Write1/EnQ fl=2
vsq - ffff880025a2a4b8 1 132ms FSC: OBJ16a2 OP4d583: Write1/EnQ fl=2
vsq - ffff880023cbe6d8 9 212ms FSC: OBJ17eb: LOOK
vsq - ffff880024d37590 9 212ms FSC: OBJ17ec: LOOK
vsq - ffff880027746cb0 9 212ms FSC: OBJ17ed: LOOK
vsq - ffff880024d37ae8 9 212ms FSC: OBJ17ee: LOOK
vsq - ffff880024d37cb0 9 212ms FSC: OBJ17ef: LOOK
vsq - ffff880025036550 9 212ms FSC: OBJ17f0: LOOK
vsq - ffff8800250368e0 9 212ms FSC: OBJ17f1: LOOK
vsq - ffff880025036aa8 9 212ms FSC: OBJ17f2: LOOK
In the 'THR' column, executing items show the thread they're occupying and
queued threads indicate which queue they're on. 'PID' shows the process ID of
a slow-work thread that's executing something. 'FL' shows the work item flags.
'MARK' indicates how long since an item was queued or began executing. Lastly,
the 'DESC' column permits the owner of an item to give some information.


@@ -519,7 +519,7 @@ do_boot_cpu (int sapicid, int cpu)
 /*
 * We can't use kernel_thread since we must avoid to reschedule the child.
 */
-if (!keventd_up() || current_is_keventd())
+if (!keventd_up())
 c_idle.work.func(&c_idle.work);
 else {
 schedule_work(&c_idle.work);


@@ -735,7 +735,7 @@ static int __cpuinit do_boot_cpu(int apicid, int cpu)
 goto do_rest;
 }
-if (!keventd_up() || current_is_keventd())
+if (!keventd_up())
 c_idle.work.func(&c_idle.work);
 else {
 schedule_work(&c_idle.work);


@@ -191,36 +191,11 @@ acpi_status __init acpi_os_initialize(void)
 return AE_OK;
 }
-static void bind_to_cpu0(struct work_struct *work)
-{
-set_cpus_allowed_ptr(current, cpumask_of(0));
-kfree(work);
-}
-static void bind_workqueue(struct workqueue_struct *wq)
-{
-struct work_struct *work;
-work = kzalloc(sizeof(struct work_struct), GFP_KERNEL);
-INIT_WORK(work, bind_to_cpu0);
-queue_work(wq, work);
-}
 acpi_status acpi_os_initialize1(void)
 {
-/*
-* On some machines, a software-initiated SMI causes corruption unless
-* the SMI runs on CPU 0. An SMI can be initiated by any AML, but
-* typically it's done in GPE-related methods that are run via
-* workqueues, so we can avoid the known corruption cases by binding
-* the workqueues to CPU 0.
-*/
-kacpid_wq = create_singlethread_workqueue("kacpid");
-bind_workqueue(kacpid_wq);
-kacpi_notify_wq = create_singlethread_workqueue("kacpi_notify");
-bind_workqueue(kacpi_notify_wq);
-kacpi_hotplug_wq = create_singlethread_workqueue("kacpi_hotplug");
-bind_workqueue(kacpi_hotplug_wq);
+kacpid_wq = create_workqueue("kacpid");
+kacpi_notify_wq = create_workqueue("kacpi_notify");
+kacpi_hotplug_wq = create_workqueue("kacpi_hotplug");
 BUG_ON(!kacpid_wq);
 BUG_ON(!kacpi_notify_wq);
 BUG_ON(!kacpi_hotplug_wq);
@@ -766,7 +741,14 @@ static acpi_status __acpi_os_execute(acpi_execute_type type,
 else
 INIT_WORK(&dpc->work, acpi_os_execute_deferred);
-ret = queue_work(queue, &dpc->work);
+/*
+* On some machines, a software-initiated SMI causes corruption unless
+* the SMI runs on CPU 0. An SMI can be initiated by any AML, but
+* typically it's done in GPE-related methods that are run via
+* workqueues, so we can avoid the known corruption cases by always
+* queueing on CPU 0.
+*/
+ret = queue_work_on(0, queue, &dpc->work);
 if (!ret) {
 printk(KERN_ERR PREFIX


@@ -98,8 +98,6 @@ static unsigned long ata_dev_blacklisted(const struct ata_device *dev);
 unsigned int ata_print_id = 1;
-struct workqueue_struct *ata_aux_wq;
 struct ata_force_param {
 const char *name;
 unsigned int cbl;
@@ -5594,6 +5592,7 @@ struct ata_port *ata_port_alloc(struct ata_host *host)
 ap->msg_enable = ATA_MSG_DRV | ATA_MSG_ERR | ATA_MSG_WARN;
 #endif
+mutex_init(&ap->scsi_scan_mutex);
 INIT_DELAYED_WORK(&ap->hotplug_task, ata_scsi_hotplug);
 INIT_WORK(&ap->scsi_rescan_task, ata_scsi_dev_rescan);
 INIT_LIST_HEAD(&ap->eh_done_q);
@@ -6532,29 +6531,20 @@ static int __init ata_init(void)
 ata_parse_force_param();
-ata_aux_wq = create_singlethread_workqueue("ata_aux");
-if (!ata_aux_wq)
-goto fail;
 rc = ata_sff_init();
-if (rc)
-goto fail;
+if (rc) {
+kfree(ata_force_tbl);
+return rc;
+}
 printk(KERN_DEBUG "libata version " DRV_VERSION " loaded.\n");
 return 0;
-fail:
-kfree(ata_force_tbl);
-if (ata_aux_wq)
-destroy_workqueue(ata_aux_wq);
-return rc;
 }
 static void __exit ata_exit(void)
 {
 ata_sff_exit();
 kfree(ata_force_tbl);
-destroy_workqueue(ata_aux_wq);
 }
 subsys_initcall(ata_init);


@@ -727,7 +727,7 @@ void ata_scsi_error(struct Scsi_Host *host)
 if (ap->pflags & ATA_PFLAG_LOADING)
 ap->pflags &= ~ATA_PFLAG_LOADING;
 else if (ap->pflags & ATA_PFLAG_SCSI_HOTPLUG)
-queue_delayed_work(ata_aux_wq, &ap->hotplug_task, 0);
+schedule_delayed_work(&ap->hotplug_task, 0);
 if (ap->pflags & ATA_PFLAG_RECOVERED)
 ata_port_printk(ap, KERN_INFO, "EH complete\n");
@@ -2945,7 +2945,7 @@ static int ata_eh_revalidate_and_attach(struct ata_link *link,
 ehc->i.flags |= ATA_EHI_SETMODE;
 /* schedule the scsi_rescan_device() here */
-queue_work(ata_aux_wq, &(ap->scsi_rescan_task));
+schedule_work(&(ap->scsi_rescan_task));
 } else if (dev->class == ATA_DEV_UNKNOWN &&
 ehc->tries[dev->devno] &&
 ata_class_enabled(ehc->classes[dev->devno])) {


@@ -3435,7 +3435,7 @@ void ata_scsi_scan_host(struct ata_port *ap, int sync)
 " switching to async\n");
 }
-queue_delayed_work(ata_aux_wq, &ap->hotplug_task,
+queue_delayed_work(system_long_wq, &ap->hotplug_task,
 round_jiffies_relative(HZ));
 }
@@ -3582,6 +3582,7 @@ void ata_scsi_hotplug(struct work_struct *work)
 }
 DPRINTK("ENTER\n");
+mutex_lock(&ap->scsi_scan_mutex);
 /* Unplug detached devices. We cannot use link iterator here
 * because PMP links have to be scanned even if PMP is
@@ -3595,6 +3596,7 @@ void ata_scsi_hotplug(struct work_struct *work)
 /* scan for new ones */
 ata_scsi_scan_host(ap, 0);
+mutex_unlock(&ap->scsi_scan_mutex);
 DPRINTK("EXIT\n");
 }
@@ -3673,9 +3675,7 @@ static int ata_scsi_user_scan(struct Scsi_Host *shost, unsigned int channel,
 * @work: Pointer to ATA port to perform scsi_rescan_device()
 *
 * After ATA pass thru (SAT) commands are executed successfully,
-* libata need to propagate the changes to SCSI layer. This
-* function must be executed from ata_aux_wq such that sdev
-* attach/detach don't race with rescan.
+* libata need to propagate the changes to SCSI layer.
 *
 * LOCKING:
 * Kernel thread context (may sleep).
@@ -3688,6 +3688,7 @@ void ata_scsi_dev_rescan(struct work_struct *work)
 struct ata_device *dev;
 unsigned long flags;
+mutex_lock(&ap->scsi_scan_mutex);
 spin_lock_irqsave(ap->lock, flags);
 ata_for_each_link(link, ap, EDGE) {
@@ -3707,6 +3708,7 @@ void ata_scsi_dev_rescan(struct work_struct *work)
 }
 spin_unlock_irqrestore(ap->lock, flags);
+mutex_unlock(&ap->scsi_scan_mutex);
 }
 /**


@@ -3318,14 +3318,7 @@ void ata_sff_port_init(struct ata_port *ap)
 int __init ata_sff_init(void)
 {
-/*
-* FIXME: In UP case, there is only one workqueue thread and if you
-* have more than one PIO device, latency is bloody awful, with
-* occasional multi-second "hiccups" as one PIO device waits for
-* another. It's an ugly wart that users DO occasionally complain
-* about; luckily most users have at most one PIO polled device.
-*/
-ata_sff_wq = create_workqueue("ata_sff");
+ata_sff_wq = alloc_workqueue("ata_sff", WQ_RESCUER, WQ_MAX_ACTIVE);
 if (!ata_sff_wq)
 return -ENOMEM;


@@ -54,7 +54,6 @@ enum {
 };
 extern unsigned int ata_print_id;
-extern struct workqueue_struct *ata_aux_wq;
 extern int atapi_passthru16;
 extern int libata_fua;
 extern int libata_noacpi;


@@ -831,13 +831,11 @@ int drm_helper_resume_force_mode(struct drm_device *dev)
 }
 EXPORT_SYMBOL(drm_helper_resume_force_mode);
-static struct slow_work_ops output_poll_ops;
 #define DRM_OUTPUT_POLL_PERIOD (10*HZ)
-static void output_poll_execute(struct slow_work *work)
+static void output_poll_execute(struct work_struct *work)
 {
-struct delayed_slow_work *delayed_work = container_of(work, struct delayed_slow_work, work);
-struct drm_device *dev = container_of(delayed_work, struct drm_device, mode_config.output_poll_slow_work);
+struct delayed_work *delayed_work = to_delayed_work(work);
+struct drm_device *dev = container_of(delayed_work, struct drm_device, mode_config.output_poll_work);
 struct drm_connector *connector;
 enum drm_connector_status old_status, status;
 bool repoll = false, changed = false;
@@ -877,7 +875,7 @@ static void output_poll_execute(struct slow_work *work)
 }
 if (repoll) {
-ret = delayed_slow_work_enqueue(delayed_work, DRM_OUTPUT_POLL_PERIOD);
+ret = queue_delayed_work(system_nrt_wq, delayed_work, DRM_OUTPUT_POLL_PERIOD);
 if (ret)
 DRM_ERROR("delayed enqueue failed %d\n", ret);
 }
@@ -887,7 +885,7 @@ void drm_kms_helper_poll_disable(struct drm_device *dev)
 {
 if (!dev->mode_config.poll_enabled)
 return;
-delayed_slow_work_cancel(&dev->mode_config.output_poll_slow_work);
+cancel_delayed_work_sync(&dev->mode_config.output_poll_work);
 }
 EXPORT_SYMBOL(drm_kms_helper_poll_disable);
@@ -903,7 +901,7 @@ void drm_kms_helper_poll_enable(struct drm_device *dev)
 }
 if (poll) {
-ret = delayed_slow_work_enqueue(&dev->mode_config.output_poll_slow_work, DRM_OUTPUT_POLL_PERIOD);
+ret = queue_delayed_work(system_nrt_wq, &dev->mode_config.output_poll_work, DRM_OUTPUT_POLL_PERIOD);
 if (ret)
 DRM_ERROR("delayed enqueue failed %d\n", ret);
 }
@@ -912,9 +910,7 @@ EXPORT_SYMBOL(drm_kms_helper_poll_enable);
 void drm_kms_helper_poll_init(struct drm_device *dev)
 {
-slow_work_register_user(THIS_MODULE);
-delayed_slow_work_init(&dev->mode_config.output_poll_slow_work,
-&output_poll_ops);
+INIT_DELAYED_WORK(&dev->mode_config.output_poll_work, output_poll_execute);
 dev->mode_config.poll_enabled = true;
 drm_kms_helper_poll_enable(dev);
@@ -924,7 +920,6 @@ EXPORT_SYMBOL(drm_kms_helper_poll_init);
 void drm_kms_helper_poll_fini(struct drm_device *dev)
 {
 drm_kms_helper_poll_disable(dev);
-slow_work_unregister_user(THIS_MODULE);
 }
 EXPORT_SYMBOL(drm_kms_helper_poll_fini);
@@ -932,12 +927,8 @@ void drm_helper_hpd_irq_event(struct drm_device *dev)
 {
 if (!dev->mode_config.poll_enabled)
 return;
-delayed_slow_work_cancel(&dev->mode_config.output_poll_slow_work);
-/* schedule a slow work asap */
-delayed_slow_work_enqueue(&dev->mode_config.output_poll_slow_work, 0);
+/* kill timer and schedule immediate execution, this doesn't block */
+cancel_delayed_work(&dev->mode_config.output_poll_work);
+queue_delayed_work(system_nrt_wq, &dev->mode_config.output_poll_work, 0);
 }
 EXPORT_SYMBOL(drm_helper_hpd_irq_event);
-static struct slow_work_ops output_poll_ops = {
-.execute = output_poll_execute,
-};


@@ -705,6 +705,8 @@ static void ivtv_process_options(struct ivtv *itv)
 */
 static int __devinit ivtv_init_struct1(struct ivtv *itv)
 {
+struct sched_param param = { .sched_priority = 99 };
 itv->base_addr = pci_resource_start(itv->pdev, 0);
 itv->enc_mbox.max_mbox = 2; /* the encoder has 3 mailboxes (0-2) */
 itv->dec_mbox.max_mbox = 1; /* the decoder has 2 mailboxes (0-1) */
@@ -716,13 +718,17 @@ static int __devinit ivtv_init_struct1(struct ivtv *itv)
 spin_lock_init(&itv->lock);
 spin_lock_init(&itv->dma_reg_lock);
-itv->irq_work_queues = create_singlethread_workqueue(itv->v4l2_dev.name);
-if (itv->irq_work_queues == NULL) {
-IVTV_ERR("Could not create ivtv workqueue\n");
+init_kthread_worker(&itv->irq_worker);
+itv->irq_worker_task = kthread_run(kthread_worker_fn, &itv->irq_worker,
+itv->v4l2_dev.name);
+if (IS_ERR(itv->irq_worker_task)) {
+IVTV_ERR("Could not create ivtv task\n");
 return -1;
 }
+/* must use the FIFO scheduler as it is realtime sensitive */
+sched_setscheduler(itv->irq_worker_task, SCHED_FIFO, &param);
-INIT_WORK(&itv->irq_work_queue, ivtv_irq_work_handler);
+init_kthread_work(&itv->irq_work, ivtv_irq_work_handler);
 /* start counting open_id at 1 */
 itv->open_id = 1;
@@ -1006,7 +1012,7 @@ static int __devinit ivtv_probe(struct pci_dev *pdev,
 /* PCI Device Setup */
 retval = ivtv_setup_pci(itv, pdev, pci_id);
 if (retval == -EIO)
-goto free_workqueue;
+goto free_worker;
 if (retval == -ENXIO)
 goto free_mem;
@@ -1218,8 +1224,8 @@ static int __devinit ivtv_probe(struct pci_dev *pdev,
 release_mem_region(itv->base_addr + IVTV_REG_OFFSET, IVTV_REG_SIZE);
 if (itv->has_cx23415)
 release_mem_region(itv->base_addr + IVTV_DECODER_OFFSET, IVTV_DECODER_SIZE);
-free_workqueue:
-destroy_workqueue(itv->irq_work_queues);
+free_worker:
+kthread_stop(itv->irq_worker_task);
 err:
 if (retval == 0)
 retval = -ENODEV;
@@ -1363,9 +1369,9 @@ static void ivtv_remove(struct pci_dev *pdev)
 ivtv_set_irq_mask(itv, 0xffffffff);
 del_timer_sync(&itv->dma_timer);
-/* Stop all Work Queues */
-flush_workqueue(itv->irq_work_queues);
-destroy_workqueue(itv->irq_work_queues);
+/* Kill irq worker */
+flush_kthread_worker(&itv->irq_worker);
+kthread_stop(itv->irq_worker_task);
 ivtv_streams_cleanup(itv, 1);
 ivtv_udma_free(itv);


@@ -51,7 +51,7 @@
 #include <linux/unistd.h>
 #include <linux/pagemap.h>
 #include <linux/scatterlist.h>
-#include <linux/workqueue.h>
+#include <linux/kthread.h>
 #include <linux/mutex.h>
 #include <linux/slab.h>
 #include <asm/uaccess.h>
@@ -260,7 +260,6 @@ struct ivtv_mailbox_data {
 #define IVTV_F_I_DEC_PAUSED 20 /* the decoder is paused */
 #define IVTV_F_I_INITED 21 /* set after first open */
 #define IVTV_F_I_FAILED 22 /* set if first open failed */
-#define IVTV_F_I_WORK_INITED 23 /* worker thread was initialized */
 /* Event notifications */
 #define IVTV_F_I_EV_DEC_STOPPED 28 /* decoder stopped event */
@@ -666,8 +665,9 @@ struct ivtv {
 /* Interrupts & DMA */
 u32 irqmask; /* active interrupts */
 u32 irq_rr_idx; /* round-robin stream index */
-struct workqueue_struct *irq_work_queues; /* workqueue for PIO/YUV/VBI actions */
-struct work_struct irq_work_queue; /* work entry */
+struct kthread_worker irq_worker; /* kthread worker for PIO/YUV/VBI actions */
+struct task_struct *irq_worker_task; /* task for irq_worker */
+struct kthread_work irq_work; /* kthread work entry */
 spinlock_t dma_reg_lock; /* lock access to DMA engine registers */
 int cur_dma_stream; /* index of current stream doing DMA (-1 if none) */
 int cur_pio_stream; /* index of current stream doing PIO (-1 if none) */


@@ -71,19 +71,10 @@ static void ivtv_pio_work_handler(struct ivtv *itv)
 write_reg(IVTV_IRQ_ENC_PIO_COMPLETE, 0x44);
 }
-void ivtv_irq_work_handler(struct work_struct *work)
+void ivtv_irq_work_handler(struct kthread_work *work)
 {
-struct ivtv *itv = container_of(work, struct ivtv, irq_work_queue);
-DEFINE_WAIT(wait);
-if (test_and_clear_bit(IVTV_F_I_WORK_INITED, &itv->i_flags)) {
-struct sched_param param = { .sched_priority = 99 };
-/* This thread must use the FIFO scheduler as it
-is realtime sensitive. */
-sched_setscheduler(current, SCHED_FIFO, &param);
-}
+struct ivtv *itv = container_of(work, struct ivtv, irq_work);
 if (test_and_clear_bit(IVTV_F_I_WORK_HANDLER_PIO, &itv->i_flags))
 ivtv_pio_work_handler(itv);
@@ -975,7 +966,7 @@ irqreturn_t ivtv_irq_handler(int irq, void *dev_id)
 }
 if (test_and_clear_bit(IVTV_F_I_HAVE_WORK, &itv->i_flags)) {
-queue_work(itv->irq_work_queues, &itv->irq_work_queue);
+queue_kthread_work(&itv->irq_worker, &itv->irq_work);
 }
 spin_unlock(&itv->dma_reg_lock);


@@ -46,7 +46,7 @@
 irqreturn_t ivtv_irq_handler(int irq, void *dev_id);
-void ivtv_irq_work_handler(struct work_struct *work);
+void ivtv_irq_work_handler(struct kthread_work *work);
 void ivtv_dma_stream_dec_prepare(struct ivtv_stream *s, u32 offset, int lock);
 void ivtv_unfinished_dma(unsigned long arg);
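For reference, the ivtv changes above follow the generic kthread_worker pattern introduced alongside cmwq. A condensed, hypothetical sketch of that pattern (mydev and its functions are illustrative names, not part of the driver) looks roughly like this:

    #include <linux/err.h>
    #include <linux/kernel.h>
    #include <linux/kthread.h>

    struct mydev {
            struct kthread_worker worker;
            struct task_struct *worker_task;
            struct kthread_work irq_work;
    };

    static void mydev_irq_work(struct kthread_work *work)
    {
            struct mydev *dev = container_of(work, struct mydev, irq_work);

            /* deferred, sleepable processing on the dedicated kernel thread */
            (void)dev;
    }

    static int mydev_start(struct mydev *dev)
    {
            init_kthread_worker(&dev->worker);
            dev->worker_task = kthread_run(kthread_worker_fn, &dev->worker, "mydev");
            if (IS_ERR(dev->worker_task))
                    return PTR_ERR(dev->worker_task);
            init_kthread_work(&dev->irq_work, mydev_irq_work);
            return 0;
    }

    /* typically called from the interrupt handler */
    static void mydev_kick(struct mydev *dev)
    {
            queue_kthread_work(&dev->worker, &dev->irq_work);
    }

    static void mydev_stop(struct mydev *dev)
    {
            flush_kthread_worker(&dev->worker);
            kthread_stop(dev->worker_task);
    }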


@@ -37,9 +37,9 @@ void __cachefiles_printk_object(struct cachefiles_object *object,
 printk(KERN_ERR "%sobject: OBJ%x\n",
 prefix, object->fscache.debug_id);
-printk(KERN_ERR "%sobjstate=%s fl=%lx swfl=%lx ev=%lx[%lx]\n",
+printk(KERN_ERR "%sobjstate=%s fl=%lx wbusy=%x ev=%lx[%lx]\n",
 prefix, fscache_object_states[object->fscache.state],
-object->fscache.flags, object->fscache.work.flags,
+object->fscache.flags, work_busy(&object->fscache.work),
 object->fscache.events,
 object->fscache.event_mask & FSCACHE_OBJECT_EVENTS_MASK);
 printk(KERN_ERR "%sops=%u inp=%u exc=%u\n",
@@ -212,7 +212,7 @@ static int cachefiles_mark_object_active(struct cachefiles_cache *cache,
 /* if the object we're waiting for is queued for processing,
 * then just put ourselves on the queue behind it */
-if (slow_work_is_queued(&xobject->fscache.work)) {
+if (work_pending(&xobject->fscache.work)) {
 _debug("queue OBJ%x behind OBJ%x immediately",
 object->fscache.debug_id,
 xobject->fscache.debug_id);
@@ -220,8 +220,7 @@ static int cachefiles_mark_object_active(struct cachefiles_cache *cache,
 }
 /* otherwise we sleep until either the object we're waiting for
-* is done, or the slow-work facility wants the thread back to
-* do other work */
+* is done, or the fscache_object is congested */
 wq = bit_waitqueue(&xobject->flags, CACHEFILES_OBJECT_ACTIVE);
 init_wait(&wait);
 requeue = false;
@@ -229,8 +228,8 @@ static int cachefiles_mark_object_active(struct cachefiles_cache *cache,
 prepare_to_wait(wq, &wait, TASK_UNINTERRUPTIBLE);
 if (!test_bit(CACHEFILES_OBJECT_ACTIVE, &xobject->flags))
 break;
-requeue = slow_work_sleep_till_thread_needed(
-&object->fscache.work, &timeout);
+requeue = fscache_object_sleep_till_congested(&timeout);
 } while (timeout > 0 && !requeue);
 finish_wait(wq, &wait);


@@ -422,7 +422,7 @@ int cachefiles_read_or_alloc_page(struct fscache_retrieval *op,
 shift = PAGE_SHIFT - inode->i_sb->s_blocksize_bits;
 op->op.flags &= FSCACHE_OP_KEEP_FLAGS;
-op->op.flags |= FSCACHE_OP_FAST;
+op->op.flags |= FSCACHE_OP_ASYNC;
 op->op.processor = cachefiles_read_copier;
 pagevec_init(&pagevec, 0);
@@ -729,7 +729,7 @@ int cachefiles_read_or_alloc_pages(struct fscache_retrieval *op,
 pagevec_init(&pagevec, 0);
 op->op.flags &= FSCACHE_OP_KEEP_FLAGS;
-op->op.flags |= FSCACHE_OP_FAST;
+op->op.flags |= FSCACHE_OP_ASYNC;
 op->op.processor = cachefiles_read_copier;
 INIT_LIST_HEAD(&backpages);


@@ -2,7 +2,6 @@ config CIFS
 tristate "CIFS support (advanced network filesystem, SMBFS successor)"
 depends on INET
 select NLS
-select SLOW_WORK
 help
 This is the client VFS module for the Common Internet File System
 (CIFS) protocol which is the successor to the Server Message Block


@@ -939,15 +939,10 @@ init_cifs(void)
 if (rc)
 goto out_unregister_key_type;
 #endif
-rc = slow_work_register_user(THIS_MODULE);
-if (rc)
-goto out_unregister_resolver_key;
 return 0;
-out_unregister_resolver_key:
 #ifdef CONFIG_CIFS_DFS_UPCALL
-cifs_exit_dns_resolver();
 out_unregister_key_type:
 #endif
 #ifdef CONFIG_CIFS_UPCALL


@@ -22,7 +22,7 @@
 #include <linux/in.h>
 #include <linux/in6.h>
 #include <linux/slab.h>
-#include <linux/slow-work.h>
+#include <linux/workqueue.h>
 #include "cifs_fs_sb.h"
 #include "cifsacl.h"
 /*
@@ -356,7 +356,7 @@ struct cifsFileInfo {
 atomic_t count; /* reference count */
 struct mutex fh_mutex; /* prevents reopen race after dead ses*/
 struct cifs_search_info srch_inf;
-struct slow_work oplock_break; /* slow_work job for oplock breaks */
+struct work_struct oplock_break; /* work for oplock breaks */
 };
 /* Take a reference on the file private data */
@@ -728,6 +728,10 @@ GLOBAL_EXTERN unsigned int cifs_min_rcv; /* min size of big ntwrk buf pool */
 GLOBAL_EXTERN unsigned int cifs_min_small; /* min size of small buf pool */
 GLOBAL_EXTERN unsigned int cifs_max_pending; /* MAX requests at once to server*/
+void cifs_oplock_break(struct work_struct *work);
+void cifs_oplock_break_get(struct cifsFileInfo *cfile);
+void cifs_oplock_break_put(struct cifsFileInfo *cfile);
 extern const struct slow_work_ops cifs_oplock_break_ops;
 #endif /* _CIFS_GLOB_H */


@@ -157,7 +157,7 @@ cifs_new_fileinfo(struct inode *newinode, __u16 fileHandle,
 mutex_init(&pCifsFile->lock_mutex);
 INIT_LIST_HEAD(&pCifsFile->llist);
 atomic_set(&pCifsFile->count, 1);
-slow_work_init(&pCifsFile->oplock_break, &cifs_oplock_break_ops);
+INIT_WORK(&pCifsFile->oplock_break, cifs_oplock_break);
 write_lock(&GlobalSMBSeslock);
 list_add(&pCifsFile->tlist, &cifs_sb->tcon->openFileList);


@@ -2307,8 +2307,7 @@ static void cifs_invalidate_page(struct page *page, unsigned long offset)
 cifs_fscache_invalidate_page(page, &cifsi->vfs_inode);
 }
-static void
-cifs_oplock_break(struct slow_work *work)
+void cifs_oplock_break(struct work_struct *work)
 {
 struct cifsFileInfo *cfile = container_of(work, struct cifsFileInfo,
 oplock_break);
@@ -2345,33 +2344,30 @@ cifs_oplock_break(struct slow_work *work)
 LOCKING_ANDX_OPLOCK_RELEASE, false);
 cFYI(1, "Oplock release rc = %d", rc);
 }
+/*
+* We might have kicked in before is_valid_oplock_break()
+* finished grabbing reference for us. Make sure it's done by
+* waiting for GlobalSMSSeslock.
+*/
+write_lock(&GlobalSMBSeslock);
+write_unlock(&GlobalSMBSeslock);
+cifs_oplock_break_put(cfile);
 }
-static int
-cifs_oplock_break_get(struct slow_work *work)
+void cifs_oplock_break_get(struct cifsFileInfo *cfile)
 {
-struct cifsFileInfo *cfile = container_of(work, struct cifsFileInfo,
-oplock_break);
 mntget(cfile->mnt);
 cifsFileInfo_get(cfile);
-return 0;
 }
-static void
-cifs_oplock_break_put(struct slow_work *work)
+void cifs_oplock_break_put(struct cifsFileInfo *cfile)
 {
-struct cifsFileInfo *cfile = container_of(work, struct cifsFileInfo,
-oplock_break);
 mntput(cfile->mnt);
 cifsFileInfo_put(cfile);
 }
-const struct slow_work_ops cifs_oplock_break_ops = {
-.get_ref = cifs_oplock_break_get,
-.put_ref = cifs_oplock_break_put,
-.execute = cifs_oplock_break,
-};
 const struct address_space_operations cifs_addr_ops = {
 .readpage = cifs_readpage,
 .readpages = cifs_readpages,


@@ -498,7 +498,6 @@ is_valid_oplock_break(struct smb_hdr *buf, struct TCP_Server_Info *srv)
 struct cifsTconInfo *tcon;
 struct cifsInodeInfo *pCifsInode;
 struct cifsFileInfo *netfile;
-int rc;
 cFYI(1, "Checking for oplock break or dnotify response");
 if ((pSMB->hdr.Command == SMB_COM_NT_TRANSACT) &&
@@ -583,13 +582,18 @@ is_valid_oplock_break(struct smb_hdr *buf, struct TCP_Server_Info *srv)
 pCifsInode->clientCanCacheAll = false;
 if (pSMB->OplockLevel == 0)
 pCifsInode->clientCanCacheRead = false;
-rc = slow_work_enqueue(&netfile->oplock_break);
-if (rc) {
-cERROR(1, "failed to enqueue oplock "
-"break: %d\n", rc);
-} else {
-netfile->oplock_break_cancelled = false;
-}
+/*
+* cifs_oplock_break_put() can't be called
+* from here. Get reference after queueing
+* succeeded. cifs_oplock_break() will
+* synchronize using GlobalSMSSeslock.
+*/
+if (queue_work(system_nrt_wq,
+&netfile->oplock_break))
+cifs_oplock_break_get(netfile);
+netfile->oplock_break_cancelled = false;
 read_unlock(&GlobalSMBSeslock);
 read_unlock(&cifs_tcp_ses_lock);
 return true;


@@ -1,7 +1,6 @@
 config FSCACHE
 tristate "General filesystem local caching manager"
-select SLOW_WORK
 help
 This option enables a generic filesystem caching manager that can be
 used by various network and other filesystems to cache data locally.


@@ -82,6 +82,14 @@ extern unsigned fscache_defer_lookup;
 extern unsigned fscache_defer_create;
 extern unsigned fscache_debug;
 extern struct kobject *fscache_root;
+extern struct workqueue_struct *fscache_object_wq;
+extern struct workqueue_struct *fscache_op_wq;
+DECLARE_PER_CPU(wait_queue_head_t, fscache_object_cong_wait);
+static inline bool fscache_object_congested(void)
+{
+return workqueue_congested(WORK_CPU_UNBOUND, fscache_object_wq);
+}
 extern int fscache_wait_bit(void *);
 extern int fscache_wait_bit_interruptible(void *);


@@ -15,6 +15,7 @@
 #include <linux/sched.h>
 #include <linux/completion.h>
 #include <linux/slab.h>
+#include <linux/seq_file.h>
 #include "internal.h"
 MODULE_DESCRIPTION("FS Cache Manager");
@@ -40,22 +41,105 @@ MODULE_PARM_DESC(fscache_debug,
 "FS-Cache debugging mask");
 struct kobject *fscache_root;
+struct workqueue_struct *fscache_object_wq;
+struct workqueue_struct *fscache_op_wq;
+DEFINE_PER_CPU(wait_queue_head_t, fscache_object_cong_wait);
+/* these values serve as lower bounds, will be adjusted in fscache_init() */
+static unsigned fscache_object_max_active = 4;
+static unsigned fscache_op_max_active = 2;
+#ifdef CONFIG_SYSCTL
+static struct ctl_table_header *fscache_sysctl_header;
+static int fscache_max_active_sysctl(struct ctl_table *table, int write,
+void __user *buffer,
+size_t *lenp, loff_t *ppos)
+{
+struct workqueue_struct **wqp = table->extra1;
+unsigned int *datap = table->data;
+int ret;
+ret = proc_dointvec(table, write, buffer, lenp, ppos);
+if (ret == 0)
+workqueue_set_max_active(*wqp, *datap);
+return ret;
+}
+ctl_table fscache_sysctls[] = {
+{
+.procname = "object_max_active",
+.data = &fscache_object_max_active,
+.maxlen = sizeof(unsigned),
+.mode = 0644,
+.proc_handler = fscache_max_active_sysctl,
+.extra1 = &fscache_object_wq,
+},
+{
+.procname = "operation_max_active",
+.data = &fscache_op_max_active,
+.maxlen = sizeof(unsigned),
+.mode = 0644,
+.proc_handler = fscache_max_active_sysctl,
+.extra1 = &fscache_op_wq,
+},
+{}
+};
+ctl_table fscache_sysctls_root[] = {
+{
+.procname = "fscache",
+.mode = 0555,
+.child = fscache_sysctls,
+},
+{}
+};
+#endif
 /*
 * initialise the fs caching module
 */
 static int __init fscache_init(void)
 {
+unsigned int nr_cpus = num_possible_cpus();
+unsigned int cpu;
 int ret;
-ret = slow_work_register_user(THIS_MODULE);
-if (ret < 0)
-goto error_slow_work;
+fscache_object_max_active =
+clamp_val(nr_cpus,
+fscache_object_max_active, WQ_UNBOUND_MAX_ACTIVE);
+ret = -ENOMEM;
+fscache_object_wq = alloc_workqueue("fscache_object", WQ_UNBOUND,
+fscache_object_max_active);
+if (!fscache_object_wq)
+goto error_object_wq;
+fscache_op_max_active =
+clamp_val(fscache_object_max_active / 2,
+fscache_op_max_active, WQ_UNBOUND_MAX_ACTIVE);
+ret = -ENOMEM;
+fscache_op_wq = alloc_workqueue("fscache_operation", WQ_UNBOUND,
+fscache_op_max_active);
+if (!fscache_op_wq)
+goto error_op_wq;
+for_each_possible_cpu(cpu)
+init_waitqueue_head(&per_cpu(fscache_object_cong_wait, cpu));
 ret = fscache_proc_init();
 if (ret < 0)
 goto error_proc;
+#ifdef CONFIG_SYSCTL
+ret = -ENOMEM;
+fscache_sysctl_header = register_sysctl_table(fscache_sysctls_root);
+if (!fscache_sysctl_header)
+goto error_sysctl;
+#endif
 fscache_cookie_jar = kmem_cache_create("fscache_cookie_jar",
 sizeof(struct fscache_cookie),
 0,
@@ -78,10 +162,16 @@ static int __init fscache_init(void)
 error_kobj:
 kmem_cache_destroy(fscache_cookie_jar);
 error_cookie_jar:
+#ifdef CONFIG_SYSCTL
+unregister_sysctl_table(fscache_sysctl_header);
+error_sysctl:
+#endif
 fscache_proc_cleanup();
 error_proc:
-slow_work_unregister_user(THIS_MODULE);
-error_slow_work:
+destroy_workqueue(fscache_op_wq);
+error_op_wq:
+destroy_workqueue(fscache_object_wq);
+error_object_wq:
 return ret;
 }
@@ -96,8 +186,12 @@ static void __exit fscache_exit(void)
 kobject_put(fscache_root);
 kmem_cache_destroy(fscache_cookie_jar);
+#ifdef CONFIG_SYSCTL
+unregister_sysctl_table(fscache_sysctl_header);
+#endif
 fscache_proc_cleanup();
-slow_work_unregister_user(THIS_MODULE);
+destroy_workqueue(fscache_op_wq);
+destroy_workqueue(fscache_object_wq);
 printk(KERN_NOTICE "FS-Cache: Unloaded\n");
 }


@@ -34,8 +34,8 @@ struct fscache_objlist_data {
 #define FSCACHE_OBJLIST_CONFIG_NOREADS 0x00000200 /* show objects without active reads */
 #define FSCACHE_OBJLIST_CONFIG_EVENTS 0x00000400 /* show objects with events */
 #define FSCACHE_OBJLIST_CONFIG_NOEVENTS 0x00000800 /* show objects without no events */
-#define FSCACHE_OBJLIST_CONFIG_WORK 0x00001000 /* show objects with slow work */
-#define FSCACHE_OBJLIST_CONFIG_NOWORK 0x00002000 /* show objects without slow work */
+#define FSCACHE_OBJLIST_CONFIG_WORK 0x00001000 /* show objects with work */
+#define FSCACHE_OBJLIST_CONFIG_NOWORK 0x00002000 /* show objects without work */
 u8 buf[512]; /* key and aux data buffer */
 };
@@ -231,12 +231,11 @@ static int fscache_objlist_show(struct seq_file *m, void *v)
 READS, NOREADS);
 FILTER(obj->events & obj->event_mask,
 EVENTS, NOEVENTS);
-FILTER(obj->work.flags & ~(1UL << SLOW_WORK_VERY_SLOW),
-WORK, NOWORK);
+FILTER(work_busy(&obj->work), WORK, NOWORK);
 }
 seq_printf(m,
-"%8x %8x %s %5u %3u %3u %3u %2u %5u %2lx %2lx %1lx %1lx | ",
+"%8x %8x %s %5u %3u %3u %3u %2u %5u %2lx %2lx %1lx %1x | ",
 obj->debug_id,
 obj->parent ? obj->parent->debug_id : -1,
 fscache_object_states_short[obj->state],
@@ -249,7 +248,7 @@ static int fscache_objlist_show(struct seq_file *m, void *v)
 obj->event_mask & FSCACHE_OBJECT_EVENTS_MASK,
 obj->events,
 obj->flags,
-obj->work.flags);
+work_busy(&obj->work));
 no_cookie = true;
 keylen = auxlen = 0;


@ -14,7 +14,6 @@
#define FSCACHE_DEBUG_LEVEL COOKIE #define FSCACHE_DEBUG_LEVEL COOKIE
#include <linux/module.h> #include <linux/module.h>
#include <linux/seq_file.h>
#include "internal.h" #include "internal.h"
const char *fscache_object_states[FSCACHE_OBJECT__NSTATES] = { const char *fscache_object_states[FSCACHE_OBJECT__NSTATES] = {
@ -50,12 +49,8 @@ const char fscache_object_states_short[FSCACHE_OBJECT__NSTATES][5] = {
[FSCACHE_OBJECT_DEAD] = "DEAD", [FSCACHE_OBJECT_DEAD] = "DEAD",
}; };
static void fscache_object_slow_work_put_ref(struct slow_work *); static int fscache_get_object(struct fscache_object *);
static int fscache_object_slow_work_get_ref(struct slow_work *); static void fscache_put_object(struct fscache_object *);
static void fscache_object_slow_work_execute(struct slow_work *);
#ifdef CONFIG_SLOW_WORK_DEBUG
static void fscache_object_slow_work_desc(struct slow_work *, struct seq_file *);
#endif
static void fscache_initialise_object(struct fscache_object *); static void fscache_initialise_object(struct fscache_object *);
static void fscache_lookup_object(struct fscache_object *); static void fscache_lookup_object(struct fscache_object *);
static void fscache_object_available(struct fscache_object *); static void fscache_object_available(struct fscache_object *);
@ -64,17 +59,6 @@ static void fscache_withdraw_object(struct fscache_object *);
static void fscache_enqueue_dependents(struct fscache_object *); static void fscache_enqueue_dependents(struct fscache_object *);
static void fscache_dequeue_object(struct fscache_object *); static void fscache_dequeue_object(struct fscache_object *);
const struct slow_work_ops fscache_object_slow_work_ops = {
.owner = THIS_MODULE,
.get_ref = fscache_object_slow_work_get_ref,
.put_ref = fscache_object_slow_work_put_ref,
.execute = fscache_object_slow_work_execute,
#ifdef CONFIG_SLOW_WORK_DEBUG
.desc = fscache_object_slow_work_desc,
#endif
};
EXPORT_SYMBOL(fscache_object_slow_work_ops);
/* /*
* we need to notify the parent when an op completes that we had outstanding * we need to notify the parent when an op completes that we had outstanding
* upon it * upon it
@ -345,7 +329,7 @@ static void fscache_object_state_machine(struct fscache_object *object)
/* /*
* execute an object * execute an object
*/ */
static void fscache_object_slow_work_execute(struct slow_work *work) void fscache_object_work_func(struct work_struct *work)
{ {
struct fscache_object *object = struct fscache_object *object =
container_of(work, struct fscache_object, work); container_of(work, struct fscache_object, work);
@ -359,23 +343,9 @@ static void fscache_object_slow_work_execute(struct slow_work *work)
if (object->events & object->event_mask) if (object->events & object->event_mask)
fscache_enqueue_object(object); fscache_enqueue_object(object);
clear_bit(FSCACHE_OBJECT_EV_REQUEUE, &object->events); clear_bit(FSCACHE_OBJECT_EV_REQUEUE, &object->events);
fscache_put_object(object);
} }
EXPORT_SYMBOL(fscache_object_work_func);
/*
* describe an object for slow-work debugging
*/
#ifdef CONFIG_SLOW_WORK_DEBUG
static void fscache_object_slow_work_desc(struct slow_work *work,
struct seq_file *m)
{
struct fscache_object *object =
container_of(work, struct fscache_object, work);
seq_printf(m, "FSC: OBJ%x: %s",
object->debug_id,
fscache_object_states_short[object->state]);
}
#endif
/* /*
* initialise an object * initialise an object
@ -393,7 +363,6 @@ static void fscache_initialise_object(struct fscache_object *object)
_enter(""); _enter("");
ASSERT(object->cookie != NULL); ASSERT(object->cookie != NULL);
ASSERT(object->cookie->parent != NULL); ASSERT(object->cookie->parent != NULL);
ASSERT(list_empty(&object->work.link));
if (object->events & ((1 << FSCACHE_OBJECT_EV_ERROR) | if (object->events & ((1 << FSCACHE_OBJECT_EV_ERROR) |
(1 << FSCACHE_OBJECT_EV_RELEASE) | (1 << FSCACHE_OBJECT_EV_RELEASE) |
@ -671,10 +640,8 @@ static void fscache_drop_object(struct fscache_object *object)
object->parent = NULL; object->parent = NULL;
} }
/* this just shifts the object release to the slow work processor */ /* this just shifts the object release to the work processor */
fscache_stat(&fscache_n_cop_put_object); fscache_put_object(object);
object->cache->ops->put_object(object);
fscache_stat_d(&fscache_n_cop_put_object);
_leave(""); _leave("");
} }
@ -758,12 +725,10 @@ void fscache_withdrawing_object(struct fscache_cache *cache,
} }
/* /*
* allow the slow work item processor to get a ref on an object * get a ref on an object
*/ */
static int fscache_object_slow_work_get_ref(struct slow_work *work) static int fscache_get_object(struct fscache_object *object)
{ {
struct fscache_object *object =
container_of(work, struct fscache_object, work);
int ret; int ret;
fscache_stat(&fscache_n_cop_grab_object); fscache_stat(&fscache_n_cop_grab_object);
@ -773,13 +738,10 @@ static int fscache_object_slow_work_get_ref(struct slow_work *work)
} }
/* /*
* allow the slow work item processor to discard a ref on a work item * discard a ref on a work item
*/ */
static void fscache_object_slow_work_put_ref(struct slow_work *work) static void fscache_put_object(struct fscache_object *object)
{ {
struct fscache_object *object =
container_of(work, struct fscache_object, work);
fscache_stat(&fscache_n_cop_put_object); fscache_stat(&fscache_n_cop_put_object);
object->cache->ops->put_object(object); object->cache->ops->put_object(object);
fscache_stat_d(&fscache_n_cop_put_object); fscache_stat_d(&fscache_n_cop_put_object);
@ -792,9 +754,49 @@ void fscache_enqueue_object(struct fscache_object *object)
{ {
_enter("{OBJ%x}", object->debug_id); _enter("{OBJ%x}", object->debug_id);
slow_work_enqueue(&object->work); if (fscache_get_object(object) >= 0) {
wait_queue_head_t *cong_wq =
&get_cpu_var(fscache_object_cong_wait);
if (queue_work(fscache_object_wq, &object->work)) {
if (fscache_object_congested())
wake_up(cong_wq);
} else
fscache_put_object(object);
put_cpu_var(fscache_object_cong_wait);
}
} }
/**
* fscache_object_sleep_till_congested - Sleep until object wq is congested
* @timeoutp: Scheduler sleep timeout
*
* Allow an object handler to sleep until the object workqueue is congested.
*
* The caller must set up a wake up event before calling this and must have set
* the appropriate sleep mode (such as TASK_UNINTERRUPTIBLE) and tested its own
* condition before calling this function as no test is made here.
*
* %true is returned if the object wq is congested, %false otherwise.
*/
bool fscache_object_sleep_till_congested(signed long *timeoutp)
{
wait_queue_head_t *cong_wq = &__get_cpu_var(fscache_object_cong_wait);
DEFINE_WAIT(wait);
if (fscache_object_congested())
return true;
add_wait_queue_exclusive(cong_wq, &wait);
if (!fscache_object_congested())
*timeoutp = schedule_timeout(*timeoutp);
finish_wait(cong_wq, &wait);
return fscache_object_congested();
}
EXPORT_SYMBOL_GPL(fscache_object_sleep_till_congested);
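
As a rough illustration of the calling protocol required by the comment above, a cache backend might wrap the helper like this. This is a sketch only: example_cache, slot_waitq and example_slot_available() are invented for the example and are not part of fscache.

#include <linux/fscache-cache.h>
#include <linux/sched.h>
#include <linux/wait.h>

/* hypothetical backend state: stands in for whatever resource the
 * backend is actually waiting on */
struct example_cache {
	wait_queue_head_t	slot_waitq;
	atomic_t		free_slots;
};

static bool example_slot_available(struct example_cache *cache)
{
	return atomic_read(&cache->free_slots) > 0;
}

static void example_wait_for_slot(struct example_cache *cache)
{
	signed long timeout = 60 * HZ;
	DEFINE_WAIT(wait);

	while (!example_slot_available(cache)) {
		/* wake-up source and sleep state are set up first and the
		 * condition re-tested, as the kerneldoc above requires */
		prepare_to_wait(&cache->slot_waitq, &wait, TASK_UNINTERRUPTIBLE);
		if (example_slot_available(cache))
			break;
		if (fscache_object_sleep_till_congested(&timeout))
			break;		/* object wq congested: back off */
		if (timeout <= 0)
			break;		/* gave up waiting */
	}
	finish_wait(&cache->slot_waitq, &wait);
}
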
/* /*
* enqueue the dependents of an object for metadata-type processing * enqueue the dependents of an object for metadata-type processing
* - the caller must hold the object's lock * - the caller must hold the object's lock
@ -819,9 +821,7 @@ static void fscache_enqueue_dependents(struct fscache_object *object)
/* sort onto appropriate lists */ /* sort onto appropriate lists */
fscache_enqueue_object(dep); fscache_enqueue_object(dep);
fscache_stat(&fscache_n_cop_put_object); fscache_put_object(dep);
dep->cache->ops->put_object(dep);
fscache_stat_d(&fscache_n_cop_put_object);
if (!list_empty(&object->dependents)) if (!list_empty(&object->dependents))
cond_resched_lock(&object->lock); cond_resched_lock(&object->lock);


@ -42,16 +42,12 @@ void fscache_enqueue_operation(struct fscache_operation *op)
fscache_stat(&fscache_n_op_enqueue); fscache_stat(&fscache_n_op_enqueue);
switch (op->flags & FSCACHE_OP_TYPE) { switch (op->flags & FSCACHE_OP_TYPE) {
case FSCACHE_OP_FAST: case FSCACHE_OP_ASYNC:
_debug("queue fast"); _debug("queue async");
atomic_inc(&op->usage); atomic_inc(&op->usage);
if (!schedule_work(&op->fast_work)) if (!queue_work(fscache_op_wq, &op->work))
fscache_put_operation(op); fscache_put_operation(op);
break; break;
case FSCACHE_OP_SLOW:
_debug("queue slow");
slow_work_enqueue(&op->slow_work);
break;
case FSCACHE_OP_MYTHREAD: case FSCACHE_OP_MYTHREAD:
_debug("queue for caller's attention"); _debug("queue for caller's attention");
break; break;
@ -455,36 +451,13 @@ void fscache_operation_gc(struct work_struct *work)
} }
/* /*
* allow the slow work item processor to get a ref on an operation * execute an operation using fscache_op_wq to provide processing context -
* the caller holds a ref to this object, so we don't need to hold one
*/ */
static int fscache_op_get_ref(struct slow_work *work) void fscache_op_work_func(struct work_struct *work)
{ {
struct fscache_operation *op = struct fscache_operation *op =
container_of(work, struct fscache_operation, slow_work); container_of(work, struct fscache_operation, work);
atomic_inc(&op->usage);
return 0;
}
/*
* allow the slow work item processor to discard a ref on an operation
*/
static void fscache_op_put_ref(struct slow_work *work)
{
struct fscache_operation *op =
container_of(work, struct fscache_operation, slow_work);
fscache_put_operation(op);
}
/*
* execute an operation using the slow thread pool to provide processing context
* - the caller holds a ref to this object, so we don't need to hold one
*/
static void fscache_op_execute(struct slow_work *work)
{
struct fscache_operation *op =
container_of(work, struct fscache_operation, slow_work);
unsigned long start; unsigned long start;
_enter("{OBJ%x OP%x,%d}", _enter("{OBJ%x OP%x,%d}",
@ -494,31 +467,7 @@ static void fscache_op_execute(struct slow_work *work)
start = jiffies; start = jiffies;
op->processor(op); op->processor(op);
fscache_hist(fscache_ops_histogram, start); fscache_hist(fscache_ops_histogram, start);
fscache_put_operation(op);
_leave(""); _leave("");
} }
/*
* describe an operation for slow-work debugging
*/
#ifdef CONFIG_SLOW_WORK_DEBUG
static void fscache_op_desc(struct slow_work *work, struct seq_file *m)
{
struct fscache_operation *op =
container_of(work, struct fscache_operation, slow_work);
seq_printf(m, "FSC: OBJ%x OP%x: %s/%s fl=%lx",
op->object->debug_id, op->debug_id,
op->name, op->state, op->flags);
}
#endif
const struct slow_work_ops fscache_op_slow_work_ops = {
.owner = THIS_MODULE,
.get_ref = fscache_op_get_ref,
.put_ref = fscache_op_put_ref,
.execute = fscache_op_execute,
#ifdef CONFIG_SLOW_WORK_DEBUG
.desc = fscache_op_desc,
#endif
};


@ -105,7 +105,7 @@ bool __fscache_maybe_release_page(struct fscache_cookie *cookie,
page_busy: page_busy:
/* we might want to wait here, but that could deadlock the allocator as /* we might want to wait here, but that could deadlock the allocator as
* the slow-work threads writing to the cache may all end up sleeping * the work threads writing to the cache may all end up sleeping
* on memory allocation */ * on memory allocation */
fscache_stat(&fscache_n_store_vmscan_busy); fscache_stat(&fscache_n_store_vmscan_busy);
return false; return false;
@ -188,9 +188,8 @@ int __fscache_attr_changed(struct fscache_cookie *cookie)
return -ENOMEM; return -ENOMEM;
} }
fscache_operation_init(op, NULL); fscache_operation_init(op, fscache_attr_changed_op, NULL);
fscache_operation_init_slow(op, fscache_attr_changed_op); op->flags = FSCACHE_OP_ASYNC | (1 << FSCACHE_OP_EXCLUSIVE);
op->flags = FSCACHE_OP_SLOW | (1 << FSCACHE_OP_EXCLUSIVE);
fscache_set_op_name(op, "Attr"); fscache_set_op_name(op, "Attr");
spin_lock(&cookie->lock); spin_lock(&cookie->lock);
@ -217,24 +216,6 @@ int __fscache_attr_changed(struct fscache_cookie *cookie)
} }
EXPORT_SYMBOL(__fscache_attr_changed); EXPORT_SYMBOL(__fscache_attr_changed);
/*
* handle secondary execution given to a retrieval op on behalf of the
* cache
*/
static void fscache_retrieval_work(struct work_struct *work)
{
struct fscache_retrieval *op =
container_of(work, struct fscache_retrieval, op.fast_work);
unsigned long start;
_enter("{OP%x}", op->op.debug_id);
start = jiffies;
op->op.processor(&op->op);
fscache_hist(fscache_ops_histogram, start);
fscache_put_operation(&op->op);
}
/* /*
* release a retrieval op reference * release a retrieval op reference
*/ */
@ -269,13 +250,12 @@ static struct fscache_retrieval *fscache_alloc_retrieval(
return NULL; return NULL;
} }
fscache_operation_init(&op->op, fscache_release_retrieval_op); fscache_operation_init(&op->op, NULL, fscache_release_retrieval_op);
op->op.flags = FSCACHE_OP_MYTHREAD | (1 << FSCACHE_OP_WAITING); op->op.flags = FSCACHE_OP_MYTHREAD | (1 << FSCACHE_OP_WAITING);
op->mapping = mapping; op->mapping = mapping;
op->end_io_func = end_io_func; op->end_io_func = end_io_func;
op->context = context; op->context = context;
op->start_time = jiffies; op->start_time = jiffies;
INIT_WORK(&op->op.fast_work, fscache_retrieval_work);
INIT_LIST_HEAD(&op->to_do); INIT_LIST_HEAD(&op->to_do);
fscache_set_op_name(&op->op, "Retr"); fscache_set_op_name(&op->op, "Retr");
return op; return op;
@ -795,9 +775,9 @@ int __fscache_write_page(struct fscache_cookie *cookie,
if (!op) if (!op)
goto nomem; goto nomem;
fscache_operation_init(&op->op, fscache_release_write_op); fscache_operation_init(&op->op, fscache_write_op,
fscache_operation_init_slow(&op->op, fscache_write_op); fscache_release_write_op);
op->op.flags = FSCACHE_OP_SLOW | (1 << FSCACHE_OP_WAITING); op->op.flags = FSCACHE_OP_ASYNC | (1 << FSCACHE_OP_WAITING);
fscache_set_op_name(&op->op, "Write1"); fscache_set_op_name(&op->op, "Write1");
ret = radix_tree_preload(gfp & ~__GFP_HIGHMEM); ret = radix_tree_preload(gfp & ~__GFP_HIGHMEM);
@ -852,7 +832,7 @@ int __fscache_write_page(struct fscache_cookie *cookie,
fscache_stat(&fscache_n_store_ops); fscache_stat(&fscache_n_store_ops);
fscache_stat(&fscache_n_stores_ok); fscache_stat(&fscache_n_stores_ok);
/* the slow work queue now carries its own ref on the object */ /* the work queue now carries its own ref on the object */
fscache_put_operation(&op->op); fscache_put_operation(&op->op);
_leave(" = 0"); _leave(" = 0");
return 0; return 0;


@ -7,7 +7,6 @@ config GFS2_FS
select IP_SCTP if DLM_SCTP select IP_SCTP if DLM_SCTP
select FS_POSIX_ACL select FS_POSIX_ACL
select CRC32 select CRC32
select SLOW_WORK
select QUOTACTL select QUOTACTL
help help
A cluster filesystem. A cluster filesystem.


@ -12,7 +12,6 @@
#include <linux/fs.h> #include <linux/fs.h>
#include <linux/workqueue.h> #include <linux/workqueue.h>
#include <linux/slow-work.h>
#include <linux/dlm.h> #include <linux/dlm.h>
#include <linux/buffer_head.h> #include <linux/buffer_head.h>
@ -383,7 +382,7 @@ struct gfs2_journal_extent {
struct gfs2_jdesc { struct gfs2_jdesc {
struct list_head jd_list; struct list_head jd_list;
struct list_head extent_list; struct list_head extent_list;
struct slow_work jd_work; struct work_struct jd_work;
struct inode *jd_inode; struct inode *jd_inode;
unsigned long jd_flags; unsigned long jd_flags;
#define JDF_RECOVERY 1 #define JDF_RECOVERY 1


@ -15,7 +15,6 @@
#include <linux/init.h> #include <linux/init.h>
#include <linux/gfs2_ondisk.h> #include <linux/gfs2_ondisk.h>
#include <asm/atomic.h> #include <asm/atomic.h>
#include <linux/slow-work.h>
#include "gfs2.h" #include "gfs2.h"
#include "incore.h" #include "incore.h"
@ -24,6 +23,7 @@
#include "util.h" #include "util.h"
#include "glock.h" #include "glock.h"
#include "quota.h" #include "quota.h"
#include "recovery.h"
static struct shrinker qd_shrinker = { static struct shrinker qd_shrinker = {
.shrink = gfs2_shrink_qd_memory, .shrink = gfs2_shrink_qd_memory,
@ -138,9 +138,11 @@ static int __init init_gfs2_fs(void)
if (error) if (error)
goto fail_unregister; goto fail_unregister;
error = slow_work_register_user(THIS_MODULE); error = -ENOMEM;
if (error) gfs_recovery_wq = alloc_workqueue("gfs_recovery",
goto fail_slow; WQ_NON_REENTRANT | WQ_RESCUER, 0);
if (!gfs_recovery_wq)
goto fail_wq;
gfs2_register_debugfs(); gfs2_register_debugfs();
@ -148,7 +150,7 @@ static int __init init_gfs2_fs(void)
return 0; return 0;
fail_slow: fail_wq:
unregister_filesystem(&gfs2meta_fs_type); unregister_filesystem(&gfs2meta_fs_type);
fail_unregister: fail_unregister:
unregister_filesystem(&gfs2_fs_type); unregister_filesystem(&gfs2_fs_type);
@ -190,7 +192,7 @@ static void __exit exit_gfs2_fs(void)
gfs2_unregister_debugfs(); gfs2_unregister_debugfs();
unregister_filesystem(&gfs2_fs_type); unregister_filesystem(&gfs2_fs_type);
unregister_filesystem(&gfs2meta_fs_type); unregister_filesystem(&gfs2meta_fs_type);
slow_work_unregister_user(THIS_MODULE); destroy_workqueue(gfs_recovery_wq);
kmem_cache_destroy(gfs2_quotad_cachep); kmem_cache_destroy(gfs2_quotad_cachep);
kmem_cache_destroy(gfs2_rgrpd_cachep); kmem_cache_destroy(gfs2_rgrpd_cachep);


@ -17,7 +17,6 @@
#include <linux/namei.h> #include <linux/namei.h>
#include <linux/mount.h> #include <linux/mount.h>
#include <linux/gfs2_ondisk.h> #include <linux/gfs2_ondisk.h>
#include <linux/slow-work.h>
#include <linux/quotaops.h> #include <linux/quotaops.h>
#include "gfs2.h" #include "gfs2.h"
@ -673,7 +672,7 @@ static int gfs2_jindex_hold(struct gfs2_sbd *sdp, struct gfs2_holder *ji_gh)
break; break;
INIT_LIST_HEAD(&jd->extent_list); INIT_LIST_HEAD(&jd->extent_list);
slow_work_init(&jd->jd_work, &gfs2_recover_ops); INIT_WORK(&jd->jd_work, gfs2_recover_func);
jd->jd_inode = gfs2_lookupi(sdp->sd_jindex, &name, 1); jd->jd_inode = gfs2_lookupi(sdp->sd_jindex, &name, 1);
if (!jd->jd_inode || IS_ERR(jd->jd_inode)) { if (!jd->jd_inode || IS_ERR(jd->jd_inode)) {
if (!jd->jd_inode) if (!jd->jd_inode)
@ -782,7 +781,8 @@ static int init_journal(struct gfs2_sbd *sdp, int undo)
if (sdp->sd_lockstruct.ls_first) { if (sdp->sd_lockstruct.ls_first) {
unsigned int x; unsigned int x;
for (x = 0; x < sdp->sd_journals; x++) { for (x = 0; x < sdp->sd_journals; x++) {
error = gfs2_recover_journal(gfs2_jdesc_find(sdp, x)); error = gfs2_recover_journal(gfs2_jdesc_find(sdp, x),
true);
if (error) { if (error) {
fs_err(sdp, "error recovering journal %u: %d\n", fs_err(sdp, "error recovering journal %u: %d\n",
x, error); x, error);
@ -792,7 +792,7 @@ static int init_journal(struct gfs2_sbd *sdp, int undo)
gfs2_others_may_mount(sdp); gfs2_others_may_mount(sdp);
} else if (!sdp->sd_args.ar_spectator) { } else if (!sdp->sd_args.ar_spectator) {
error = gfs2_recover_journal(sdp->sd_jdesc); error = gfs2_recover_journal(sdp->sd_jdesc, true);
if (error) { if (error) {
fs_err(sdp, "error recovering my journal: %d\n", error); fs_err(sdp, "error recovering my journal: %d\n", error);
goto fail_jinode_gh; goto fail_jinode_gh;


@ -14,7 +14,6 @@
#include <linux/buffer_head.h> #include <linux/buffer_head.h>
#include <linux/gfs2_ondisk.h> #include <linux/gfs2_ondisk.h>
#include <linux/crc32.h> #include <linux/crc32.h>
#include <linux/slow-work.h>
#include "gfs2.h" #include "gfs2.h"
#include "incore.h" #include "incore.h"
@ -28,6 +27,8 @@
#include "util.h" #include "util.h"
#include "dir.h" #include "dir.h"
struct workqueue_struct *gfs_recovery_wq;
int gfs2_replay_read_block(struct gfs2_jdesc *jd, unsigned int blk, int gfs2_replay_read_block(struct gfs2_jdesc *jd, unsigned int blk,
struct buffer_head **bh) struct buffer_head **bh)
{ {
@ -443,23 +444,7 @@ static void gfs2_recovery_done(struct gfs2_sbd *sdp, unsigned int jid,
kobject_uevent_env(&sdp->sd_kobj, KOBJ_CHANGE, envp); kobject_uevent_env(&sdp->sd_kobj, KOBJ_CHANGE, envp);
} }
static int gfs2_recover_get_ref(struct slow_work *work) void gfs2_recover_func(struct work_struct *work)
{
struct gfs2_jdesc *jd = container_of(work, struct gfs2_jdesc, jd_work);
if (test_and_set_bit(JDF_RECOVERY, &jd->jd_flags))
return -EBUSY;
return 0;
}
static void gfs2_recover_put_ref(struct slow_work *work)
{
struct gfs2_jdesc *jd = container_of(work, struct gfs2_jdesc, jd_work);
clear_bit(JDF_RECOVERY, &jd->jd_flags);
smp_mb__after_clear_bit();
wake_up_bit(&jd->jd_flags, JDF_RECOVERY);
}
static void gfs2_recover_work(struct slow_work *work)
{ {
struct gfs2_jdesc *jd = container_of(work, struct gfs2_jdesc, jd_work); struct gfs2_jdesc *jd = container_of(work, struct gfs2_jdesc, jd_work);
struct gfs2_inode *ip = GFS2_I(jd->jd_inode); struct gfs2_inode *ip = GFS2_I(jd->jd_inode);
@ -578,7 +563,7 @@ static void gfs2_recover_work(struct slow_work *work)
gfs2_glock_dq_uninit(&j_gh); gfs2_glock_dq_uninit(&j_gh);
fs_info(sdp, "jid=%u: Done\n", jd->jd_jid); fs_info(sdp, "jid=%u: Done\n", jd->jd_jid);
return; goto done;
fail_gunlock_tr: fail_gunlock_tr:
gfs2_glock_dq_uninit(&t_gh); gfs2_glock_dq_uninit(&t_gh);
@ -590,32 +575,35 @@ static void gfs2_recover_work(struct slow_work *work)
} }
fs_info(sdp, "jid=%u: %s\n", jd->jd_jid, (error) ? "Failed" : "Done"); fs_info(sdp, "jid=%u: %s\n", jd->jd_jid, (error) ? "Failed" : "Done");
fail: fail:
gfs2_recovery_done(sdp, jd->jd_jid, LM_RD_GAVEUP); gfs2_recovery_done(sdp, jd->jd_jid, LM_RD_GAVEUP);
done:
clear_bit(JDF_RECOVERY, &jd->jd_flags);
smp_mb__after_clear_bit();
wake_up_bit(&jd->jd_flags, JDF_RECOVERY);
} }
struct slow_work_ops gfs2_recover_ops = {
.owner = THIS_MODULE,
.get_ref = gfs2_recover_get_ref,
.put_ref = gfs2_recover_put_ref,
.execute = gfs2_recover_work,
};
static int gfs2_recovery_wait(void *word) static int gfs2_recovery_wait(void *word)
{ {
schedule(); schedule();
return 0; return 0;
} }
int gfs2_recover_journal(struct gfs2_jdesc *jd) int gfs2_recover_journal(struct gfs2_jdesc *jd, bool wait)
{ {
int rv; int rv;
rv = slow_work_enqueue(&jd->jd_work);
if (rv) if (test_and_set_bit(JDF_RECOVERY, &jd->jd_flags))
return rv; return -EBUSY;
wait_on_bit(&jd->jd_flags, JDF_RECOVERY, gfs2_recovery_wait, TASK_UNINTERRUPTIBLE);
/* we have JDF_RECOVERY, queue should always succeed */
rv = queue_work(gfs_recovery_wq, &jd->jd_work);
BUG_ON(!rv);
if (wait)
wait_on_bit(&jd->jd_flags, JDF_RECOVERY, gfs2_recovery_wait,
TASK_UNINTERRUPTIBLE);
return 0; return 0;
} }


@ -12,6 +12,8 @@
#include "incore.h" #include "incore.h"
extern struct workqueue_struct *gfs_recovery_wq;
static inline void gfs2_replay_incr_blk(struct gfs2_sbd *sdp, unsigned int *blk) static inline void gfs2_replay_incr_blk(struct gfs2_sbd *sdp, unsigned int *blk)
{ {
if (++*blk == sdp->sd_jdesc->jd_blocks) if (++*blk == sdp->sd_jdesc->jd_blocks)
@ -27,8 +29,8 @@ extern void gfs2_revoke_clean(struct gfs2_sbd *sdp);
extern int gfs2_find_jhead(struct gfs2_jdesc *jd, extern int gfs2_find_jhead(struct gfs2_jdesc *jd,
struct gfs2_log_header_host *head); struct gfs2_log_header_host *head);
extern int gfs2_recover_journal(struct gfs2_jdesc *gfs2_jd); extern int gfs2_recover_journal(struct gfs2_jdesc *gfs2_jd, bool wait);
extern struct slow_work_ops gfs2_recover_ops; extern void gfs2_recover_func(struct work_struct *work);
#endif /* __RECOVERY_DOT_H__ */ #endif /* __RECOVERY_DOT_H__ */


@ -25,6 +25,7 @@
#include "quota.h" #include "quota.h"
#include "util.h" #include "util.h"
#include "glops.h" #include "glops.h"
#include "recovery.h"
struct gfs2_attr { struct gfs2_attr {
struct attribute attr; struct attribute attr;
@ -376,7 +377,7 @@ static ssize_t recover_store(struct gfs2_sbd *sdp, const char *buf, size_t len)
list_for_each_entry(jd, &sdp->sd_jindex_list, jd_list) { list_for_each_entry(jd, &sdp->sd_jindex_list, jd_list) {
if (jd->jd_jid != jid) if (jd->jd_jid != jid)
continue; continue;
rv = slow_work_enqueue(&jd->jd_work); rv = gfs2_recover_journal(jd, false);
break; break;
} }
out: out:


@ -31,7 +31,6 @@
#include <linux/idr.h> #include <linux/idr.h>
#include <linux/fb.h> #include <linux/fb.h>
#include <linux/slow-work.h>
struct drm_device; struct drm_device;
struct drm_mode_set; struct drm_mode_set;
@ -595,7 +594,7 @@ struct drm_mode_config {
/* output poll support */ /* output poll support */
bool poll_enabled; bool poll_enabled;
struct delayed_slow_work output_poll_slow_work; struct delayed_work output_poll_work;
/* pointers to standard properties */ /* pointers to standard properties */
struct list_head property_blob_list; struct list_head property_blob_list;


@ -71,6 +71,8 @@ enum {
/* migration should happen before other stuff but after perf */ /* migration should happen before other stuff but after perf */
CPU_PRI_PERF = 20, CPU_PRI_PERF = 20,
CPU_PRI_MIGRATION = 10, CPU_PRI_MIGRATION = 10,
/* prepare workqueues for other notifiers */
CPU_PRI_WORKQUEUE = 5,
}; };
#ifdef CONFIG_SMP #ifdef CONFIG_SMP


@ -20,7 +20,7 @@
#include <linux/fscache.h> #include <linux/fscache.h>
#include <linux/sched.h> #include <linux/sched.h>
#include <linux/slow-work.h> #include <linux/workqueue.h>
#define NR_MAXCACHES BITS_PER_LONG #define NR_MAXCACHES BITS_PER_LONG
@ -76,18 +76,14 @@ typedef void (*fscache_operation_release_t)(struct fscache_operation *op);
typedef void (*fscache_operation_processor_t)(struct fscache_operation *op); typedef void (*fscache_operation_processor_t)(struct fscache_operation *op);
struct fscache_operation { struct fscache_operation {
union { struct work_struct work; /* record for async ops */
struct work_struct fast_work; /* record for fast ops */
struct slow_work slow_work; /* record for (very) slow ops */
};
struct list_head pend_link; /* link in object->pending_ops */ struct list_head pend_link; /* link in object->pending_ops */
struct fscache_object *object; /* object to be operated upon */ struct fscache_object *object; /* object to be operated upon */
unsigned long flags; unsigned long flags;
#define FSCACHE_OP_TYPE 0x000f /* operation type */ #define FSCACHE_OP_TYPE 0x000f /* operation type */
#define FSCACHE_OP_FAST 0x0001 /* - fast op, processor may not sleep for disk */ #define FSCACHE_OP_ASYNC 0x0001 /* - async op, processor may sleep for disk */
#define FSCACHE_OP_SLOW 0x0002 /* - (very) slow op, processor may sleep for disk */ #define FSCACHE_OP_MYTHREAD 0x0002 /* - processing is done by issuing thread, not pool */
#define FSCACHE_OP_MYTHREAD 0x0003 /* - processing is done be issuing thread, not pool */
#define FSCACHE_OP_WAITING 4 /* cleared when op is woken */ #define FSCACHE_OP_WAITING 4 /* cleared when op is woken */
#define FSCACHE_OP_EXCLUSIVE 5 /* exclusive op, other ops must wait */ #define FSCACHE_OP_EXCLUSIVE 5 /* exclusive op, other ops must wait */
#define FSCACHE_OP_DEAD 6 /* op is now dead */ #define FSCACHE_OP_DEAD 6 /* op is now dead */
@ -105,7 +101,8 @@ struct fscache_operation {
/* operation releaser */ /* operation releaser */
fscache_operation_release_t release; fscache_operation_release_t release;
#ifdef CONFIG_SLOW_WORK_DEBUG #ifdef CONFIG_WORKQUEUE_DEBUGFS
struct work_struct put_work; /* work to delay operation put */
const char *name; /* operation name */ const char *name; /* operation name */
const char *state; /* operation state */ const char *state; /* operation state */
#define fscache_set_op_name(OP, N) do { (OP)->name = (N); } while(0) #define fscache_set_op_name(OP, N) do { (OP)->name = (N); } while(0)
@ -117,7 +114,7 @@ struct fscache_operation {
}; };
extern atomic_t fscache_op_debug_id; extern atomic_t fscache_op_debug_id;
extern const struct slow_work_ops fscache_op_slow_work_ops; extern void fscache_op_work_func(struct work_struct *work);
extern void fscache_enqueue_operation(struct fscache_operation *); extern void fscache_enqueue_operation(struct fscache_operation *);
extern void fscache_put_operation(struct fscache_operation *); extern void fscache_put_operation(struct fscache_operation *);
@ -128,33 +125,21 @@ extern void fscache_put_operation(struct fscache_operation *);
* @release: The release function to assign * @release: The release function to assign
* *
* Do basic initialisation of an operation. The caller must still set flags, * Do basic initialisation of an operation. The caller must still set flags,
* object, either fast_work or slow_work if necessary, and processor if needed. * object and processor if needed.
*/ */
static inline void fscache_operation_init(struct fscache_operation *op, static inline void fscache_operation_init(struct fscache_operation *op,
fscache_operation_release_t release) fscache_operation_processor_t processor,
fscache_operation_release_t release)
{ {
INIT_WORK(&op->work, fscache_op_work_func);
atomic_set(&op->usage, 1); atomic_set(&op->usage, 1);
op->debug_id = atomic_inc_return(&fscache_op_debug_id); op->debug_id = atomic_inc_return(&fscache_op_debug_id);
op->processor = processor;
op->release = release; op->release = release;
INIT_LIST_HEAD(&op->pend_link); INIT_LIST_HEAD(&op->pend_link);
fscache_set_op_state(op, "Init"); fscache_set_op_state(op, "Init");
} }
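
The attribute-change and write-page conversions earlier in this diff all reduce to the same pattern; here is a condensed, hypothetical sketch (example_op_processor and the "Xmpl" name are invented, and the locking and submission done by the real callers are only hinted at in the comment):

#include <linux/fscache-cache.h>

static void example_op_processor(struct fscache_operation *op)
{
	/* runs asynchronously in fscache_op_wq via fscache_op_work_func() */
}

static void example_prepare_async_op(struct fscache_operation *op)
{
	fscache_operation_init(op, example_op_processor, NULL);
	op->flags = FSCACHE_OP_ASYNC | (1 << FSCACHE_OP_EXCLUSIVE);
	fscache_set_op_name(op, "Xmpl");

	/* the real callers then take the cookie lock and hand the op to an
	 * object via the internal submission helpers, which do the actual
	 * queueing through fscache_enqueue_operation() */
}
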
/**
* fscache_operation_init_slow - Do additional initialisation of a slow op
* @op: The operation to initialise
* @processor: The processor function to assign
*
* Do additional initialisation of an operation as required for slow work.
*/
static inline
void fscache_operation_init_slow(struct fscache_operation *op,
fscache_operation_processor_t processor)
{
op->processor = processor;
slow_work_init(&op->slow_work, &fscache_op_slow_work_ops);
}
/* /*
* data read operation * data read operation
*/ */
@ -389,7 +374,7 @@ struct fscache_object {
struct fscache_cache *cache; /* cache that supplied this object */ struct fscache_cache *cache; /* cache that supplied this object */
struct fscache_cookie *cookie; /* netfs's file/index object */ struct fscache_cookie *cookie; /* netfs's file/index object */
struct fscache_object *parent; /* parent object */ struct fscache_object *parent; /* parent object */
struct slow_work work; /* attention scheduling record */ struct work_struct work; /* attention scheduling record */
struct list_head dependents; /* FIFO of dependent objects */ struct list_head dependents; /* FIFO of dependent objects */
struct list_head dep_link; /* link in parent's dependents list */ struct list_head dep_link; /* link in parent's dependents list */
struct list_head pending_ops; /* unstarted operations on this object */ struct list_head pending_ops; /* unstarted operations on this object */
@ -411,7 +396,7 @@ extern const char *fscache_object_states[];
(test_bit(FSCACHE_IOERROR, &(obj)->cache->flags) && \ (test_bit(FSCACHE_IOERROR, &(obj)->cache->flags) && \
(obj)->state >= FSCACHE_OBJECT_DYING) (obj)->state >= FSCACHE_OBJECT_DYING)
extern const struct slow_work_ops fscache_object_slow_work_ops; extern void fscache_object_work_func(struct work_struct *work);
/** /**
* fscache_object_init - Initialise a cache object description * fscache_object_init - Initialise a cache object description
@ -433,7 +418,7 @@ void fscache_object_init(struct fscache_object *object,
spin_lock_init(&object->lock); spin_lock_init(&object->lock);
INIT_LIST_HEAD(&object->cache_link); INIT_LIST_HEAD(&object->cache_link);
INIT_HLIST_NODE(&object->cookie_link); INIT_HLIST_NODE(&object->cookie_link);
vslow_work_init(&object->work, &fscache_object_slow_work_ops); INIT_WORK(&object->work, fscache_object_work_func);
INIT_LIST_HEAD(&object->dependents); INIT_LIST_HEAD(&object->dependents);
INIT_LIST_HEAD(&object->dep_link); INIT_LIST_HEAD(&object->dep_link);
INIT_LIST_HEAD(&object->pending_ops); INIT_LIST_HEAD(&object->pending_ops);
@ -534,6 +519,8 @@ extern void fscache_io_error(struct fscache_cache *cache);
extern void fscache_mark_pages_cached(struct fscache_retrieval *op, extern void fscache_mark_pages_cached(struct fscache_retrieval *op,
struct pagevec *pagevec); struct pagevec *pagevec);
extern bool fscache_object_sleep_till_congested(signed long *timeoutp);
extern enum fscache_checkaux fscache_check_aux(struct fscache_object *object, extern enum fscache_checkaux fscache_check_aux(struct fscache_object *object,
const void *data, const void *data,
uint16_t datalen); uint16_t datalen);

View file

@ -30,8 +30,73 @@ struct task_struct *kthread_create(int (*threadfn)(void *data),
void kthread_bind(struct task_struct *k, unsigned int cpu); void kthread_bind(struct task_struct *k, unsigned int cpu);
int kthread_stop(struct task_struct *k); int kthread_stop(struct task_struct *k);
int kthread_should_stop(void); int kthread_should_stop(void);
void *kthread_data(struct task_struct *k);
int kthreadd(void *unused); int kthreadd(void *unused);
extern struct task_struct *kthreadd_task; extern struct task_struct *kthreadd_task;
/*
* Simple work processor based on kthread.
*
* This provides easier way to make use of kthreads. A kthread_work
* can be queued and flushed using queue/flush_kthread_work()
* respectively. Queued kthread_works are processed by a kthread
* running kthread_worker_fn().
*
* A kthread_work can't be freed while it is executing.
*/
struct kthread_work;
typedef void (*kthread_work_func_t)(struct kthread_work *work);
struct kthread_worker {
spinlock_t lock;
struct list_head work_list;
struct task_struct *task;
};
struct kthread_work {
struct list_head node;
kthread_work_func_t func;
wait_queue_head_t done;
atomic_t flushing;
int queue_seq;
int done_seq;
};
#define KTHREAD_WORKER_INIT(worker) { \
.lock = SPIN_LOCK_UNLOCKED, \
.work_list = LIST_HEAD_INIT((worker).work_list), \
}
#define KTHREAD_WORK_INIT(work, fn) { \
.node = LIST_HEAD_INIT((work).node), \
.func = (fn), \
.done = __WAIT_QUEUE_HEAD_INITIALIZER((work).done), \
.flushing = ATOMIC_INIT(0), \
}
#define DEFINE_KTHREAD_WORKER(worker) \
struct kthread_worker worker = KTHREAD_WORKER_INIT(worker)
#define DEFINE_KTHREAD_WORK(work, fn) \
struct kthread_work work = KTHREAD_WORK_INIT(work, fn)
static inline void init_kthread_worker(struct kthread_worker *worker)
{
*worker = (struct kthread_worker)KTHREAD_WORKER_INIT(*worker);
}
static inline void init_kthread_work(struct kthread_work *work,
kthread_work_func_t fn)
{
*work = (struct kthread_work)KTHREAD_WORK_INIT(*work, fn);
}
int kthread_worker_fn(void *worker_ptr);
bool queue_kthread_work(struct kthread_worker *worker,
struct kthread_work *work);
void flush_kthread_work(struct kthread_work *work);
void flush_kthread_worker(struct kthread_worker *worker);
#endif /* _LINUX_KTHREAD_H */ #endif /* _LINUX_KTHREAD_H */
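
A minimal, self-contained sketch of how this interface is intended to be used (the example_* names are invented; only the kthread_* and pr_info() calls come from the header above):

#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/kthread.h>

static struct kthread_worker example_worker;
static struct task_struct *example_task;

static void example_work_fn(struct kthread_work *work)
{
	pr_info("example: running in the dedicated worker kthread\n");
}

static DEFINE_KTHREAD_WORK(example_work, example_work_fn);

static int example_start(void)
{
	init_kthread_worker(&example_worker);

	/* dedicate one kthread to processing example_worker's work_list */
	example_task = kthread_run(kthread_worker_fn, &example_worker,
				   "example_worker");
	if (IS_ERR(example_task))
		return PTR_ERR(example_task);

	queue_kthread_work(&example_worker, &example_work);
	flush_kthread_work(&example_work);	/* wait for that one item */
	return 0;
}

static void example_stop(void)
{
	flush_kthread_worker(&example_worker);	/* drain anything still queued */
	kthread_stop(example_task);
}
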

View file

@ -751,6 +751,7 @@ struct ata_port {
struct ata_host *host; struct ata_host *host;
struct device *dev; struct device *dev;
struct mutex scsi_scan_mutex;
struct delayed_work hotplug_task; struct delayed_work hotplug_task;
struct work_struct scsi_rescan_task; struct work_struct scsi_rescan_task;


@ -1,163 +0,0 @@
/* Worker thread pool for slow items, such as filesystem lookups or mkdirs
*
* Copyright (C) 2008 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public Licence
* as published by the Free Software Foundation; either version
* 2 of the Licence, or (at your option) any later version.
*
* See Documentation/slow-work.txt
*/
#ifndef _LINUX_SLOW_WORK_H
#define _LINUX_SLOW_WORK_H
#ifdef CONFIG_SLOW_WORK
#include <linux/sysctl.h>
#include <linux/timer.h>
struct slow_work;
#ifdef CONFIG_SLOW_WORK_DEBUG
struct seq_file;
#endif
/*
* The operations used to support slow work items
*/
struct slow_work_ops {
/* owner */
struct module *owner;
/* get a ref on a work item
* - return 0 if successful, -ve if not
*/
int (*get_ref)(struct slow_work *work);
/* discard a ref to a work item */
void (*put_ref)(struct slow_work *work);
/* execute a work item */
void (*execute)(struct slow_work *work);
#ifdef CONFIG_SLOW_WORK_DEBUG
/* describe a work item for debugfs */
void (*desc)(struct slow_work *work, struct seq_file *m);
#endif
};
/*
* A slow work item
* - A reference is held on the parent object by the thread pool when it is
* queued
*/
struct slow_work {
struct module *owner; /* the owning module */
unsigned long flags;
#define SLOW_WORK_PENDING 0 /* item pending (further) execution */
#define SLOW_WORK_EXECUTING 1 /* item currently executing */
#define SLOW_WORK_ENQ_DEFERRED 2 /* item enqueue deferred */
#define SLOW_WORK_VERY_SLOW 3 /* item is very slow */
#define SLOW_WORK_CANCELLING 4 /* item is being cancelled, don't enqueue */
#define SLOW_WORK_DELAYED 5 /* item is struct delayed_slow_work with active timer */
const struct slow_work_ops *ops; /* operations table for this item */
struct list_head link; /* link in queue */
#ifdef CONFIG_SLOW_WORK_DEBUG
struct timespec mark; /* jiffies at which queued or exec begun */
#endif
};
struct delayed_slow_work {
struct slow_work work;
struct timer_list timer;
};
/**
* slow_work_init - Initialise a slow work item
* @work: The work item to initialise
* @ops: The operations to use to handle the slow work item
*
* Initialise a slow work item.
*/
static inline void slow_work_init(struct slow_work *work,
const struct slow_work_ops *ops)
{
work->flags = 0;
work->ops = ops;
INIT_LIST_HEAD(&work->link);
}
/**
* slow_work_init - Initialise a delayed slow work item
* @work: The work item to initialise
* @ops: The operations to use to handle the slow work item
*
* Initialise a delayed slow work item.
*/
static inline void delayed_slow_work_init(struct delayed_slow_work *dwork,
const struct slow_work_ops *ops)
{
init_timer(&dwork->timer);
slow_work_init(&dwork->work, ops);
}
/**
* vslow_work_init - Initialise a very slow work item
* @work: The work item to initialise
* @ops: The operations to use to handle the slow work item
*
* Initialise a very slow work item. This item will be restricted such that
* only a certain number of the pool threads will be able to execute items of
* this type.
*/
static inline void vslow_work_init(struct slow_work *work,
const struct slow_work_ops *ops)
{
work->flags = 1 << SLOW_WORK_VERY_SLOW;
work->ops = ops;
INIT_LIST_HEAD(&work->link);
}
/**
* slow_work_is_queued - Determine if a slow work item is on the work queue
* work: The work item to test
*
* Determine if the specified slow-work item is on the work queue. This
* returns true if it is actually on the queue.
*
* If the item is executing and has been marked for requeue when execution
* finishes, then false will be returned.
*
* Anyone wishing to wait for completion of execution can wait on the
* SLOW_WORK_EXECUTING bit.
*/
static inline bool slow_work_is_queued(struct slow_work *work)
{
unsigned long flags = work->flags;
return flags & SLOW_WORK_PENDING && !(flags & SLOW_WORK_EXECUTING);
}
extern int slow_work_enqueue(struct slow_work *work);
extern void slow_work_cancel(struct slow_work *work);
extern int slow_work_register_user(struct module *owner);
extern void slow_work_unregister_user(struct module *owner);
extern int delayed_slow_work_enqueue(struct delayed_slow_work *dwork,
unsigned long delay);
static inline void delayed_slow_work_cancel(struct delayed_slow_work *dwork)
{
slow_work_cancel(&dwork->work);
}
extern bool slow_work_sleep_till_thread_needed(struct slow_work *work,
signed long *_timeout);
#ifdef CONFIG_SYSCTL
extern ctl_table slow_work_sysctls[];
#endif
#endif /* CONFIG_SLOW_WORK */
#endif /* _LINUX_SLOW_WORK_H */


@ -9,6 +9,7 @@
#include <linux/linkage.h> #include <linux/linkage.h>
#include <linux/bitops.h> #include <linux/bitops.h>
#include <linux/lockdep.h> #include <linux/lockdep.h>
#include <linux/threads.h>
#include <asm/atomic.h> #include <asm/atomic.h>
struct workqueue_struct; struct workqueue_struct;
@ -22,12 +23,59 @@ typedef void (*work_func_t)(struct work_struct *work);
*/ */
#define work_data_bits(work) ((unsigned long *)(&(work)->data)) #define work_data_bits(work) ((unsigned long *)(&(work)->data))
enum {
WORK_STRUCT_PENDING_BIT = 0, /* work item is pending execution */
WORK_STRUCT_CWQ_BIT = 1, /* data points to cwq */
WORK_STRUCT_LINKED_BIT = 2, /* next work is linked to this one */
#ifdef CONFIG_DEBUG_OBJECTS_WORK
WORK_STRUCT_STATIC_BIT = 3, /* static initializer (debugobjects) */
WORK_STRUCT_COLOR_SHIFT = 4, /* color for workqueue flushing */
#else
WORK_STRUCT_COLOR_SHIFT = 3, /* color for workqueue flushing */
#endif
WORK_STRUCT_COLOR_BITS = 4,
WORK_STRUCT_PENDING = 1 << WORK_STRUCT_PENDING_BIT,
WORK_STRUCT_CWQ = 1 << WORK_STRUCT_CWQ_BIT,
WORK_STRUCT_LINKED = 1 << WORK_STRUCT_LINKED_BIT,
#ifdef CONFIG_DEBUG_OBJECTS_WORK
WORK_STRUCT_STATIC = 1 << WORK_STRUCT_STATIC_BIT,
#else
WORK_STRUCT_STATIC = 0,
#endif
/*
* The last color is "no color", used for works which don't
* participate in workqueue flushing.
*/
WORK_NR_COLORS = (1 << WORK_STRUCT_COLOR_BITS) - 1,
WORK_NO_COLOR = WORK_NR_COLORS,
/* special cpu IDs */
WORK_CPU_UNBOUND = NR_CPUS,
WORK_CPU_NONE = NR_CPUS + 1,
WORK_CPU_LAST = WORK_CPU_NONE,
/*
* Reserve 7 bits off of cwq pointer w/ debugobjects turned
* off. This makes cwqs aligned to 128 bytes which isn't too
* excessive while allowing 15 workqueue flush colors.
*/
WORK_STRUCT_FLAG_BITS = WORK_STRUCT_COLOR_SHIFT +
WORK_STRUCT_COLOR_BITS,
WORK_STRUCT_FLAG_MASK = (1UL << WORK_STRUCT_FLAG_BITS) - 1,
WORK_STRUCT_WQ_DATA_MASK = ~WORK_STRUCT_FLAG_MASK,
WORK_STRUCT_NO_CPU = WORK_CPU_NONE << WORK_STRUCT_FLAG_BITS,
/* bit mask for work_busy() return values */
WORK_BUSY_PENDING = 1 << 0,
WORK_BUSY_RUNNING = 1 << 1,
};
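
As a small, hypothetical illustration, the WORK_BUSY_* bits above are what the new work_busy() helper (declared further down in this header) hands back to callers:

#include <linux/workqueue.h>

static bool example_work_is_running(struct work_struct *work)
{
	/* work_busy() reports a mask of the WORK_BUSY_* bits defined above */
	return work_busy(work) & WORK_BUSY_RUNNING;
}
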
struct work_struct { struct work_struct {
atomic_long_t data; atomic_long_t data;
#define WORK_STRUCT_PENDING 0 /* T if work item pending execution */
#define WORK_STRUCT_STATIC 1 /* static initializer (debugobjects) */
#define WORK_STRUCT_FLAG_MASK (3UL)
#define WORK_STRUCT_WQ_DATA_MASK (~WORK_STRUCT_FLAG_MASK)
struct list_head entry; struct list_head entry;
work_func_t func; work_func_t func;
#ifdef CONFIG_LOCKDEP #ifdef CONFIG_LOCKDEP
@ -35,8 +83,9 @@ struct work_struct {
#endif #endif
}; };
#define WORK_DATA_INIT() ATOMIC_LONG_INIT(0) #define WORK_DATA_INIT() ATOMIC_LONG_INIT(WORK_STRUCT_NO_CPU)
#define WORK_DATA_STATIC_INIT() ATOMIC_LONG_INIT(2) #define WORK_DATA_STATIC_INIT() \
ATOMIC_LONG_INIT(WORK_STRUCT_NO_CPU | WORK_STRUCT_STATIC)
struct delayed_work { struct delayed_work {
struct work_struct work; struct work_struct work;
@ -96,9 +145,14 @@ struct execute_work {
#ifdef CONFIG_DEBUG_OBJECTS_WORK #ifdef CONFIG_DEBUG_OBJECTS_WORK
extern void __init_work(struct work_struct *work, int onstack); extern void __init_work(struct work_struct *work, int onstack);
extern void destroy_work_on_stack(struct work_struct *work); extern void destroy_work_on_stack(struct work_struct *work);
static inline unsigned int work_static(struct work_struct *work)
{
return *work_data_bits(work) & WORK_STRUCT_STATIC;
}
#else #else
static inline void __init_work(struct work_struct *work, int onstack) { } static inline void __init_work(struct work_struct *work, int onstack) { }
static inline void destroy_work_on_stack(struct work_struct *work) { } static inline void destroy_work_on_stack(struct work_struct *work) { }
static inline unsigned int work_static(struct work_struct *work) { return 0; }
#endif #endif
/* /*
@ -162,7 +216,7 @@ static inline void destroy_work_on_stack(struct work_struct *work) { }
* @work: The work item in question * @work: The work item in question
*/ */
#define work_pending(work) \ #define work_pending(work) \
test_bit(WORK_STRUCT_PENDING, work_data_bits(work)) test_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))
/** /**
* delayed_work_pending - Find out whether a delayable work item is currently * delayed_work_pending - Find out whether a delayable work item is currently
@ -177,16 +231,56 @@ static inline void destroy_work_on_stack(struct work_struct *work) { }
* @work: The work item in question * @work: The work item in question
*/ */
#define work_clear_pending(work) \ #define work_clear_pending(work) \
clear_bit(WORK_STRUCT_PENDING, work_data_bits(work)) clear_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))
enum {
WQ_NON_REENTRANT = 1 << 0, /* guarantee non-reentrance */
WQ_UNBOUND = 1 << 1, /* not bound to any cpu */
WQ_FREEZEABLE = 1 << 2, /* freeze during suspend */
WQ_RESCUER = 1 << 3, /* has a rescue worker */
WQ_HIGHPRI = 1 << 4, /* high priority */
WQ_CPU_INTENSIVE = 1 << 5, /* cpu intensive workqueue */
WQ_MAX_ACTIVE = 512, /* I like 512, better ideas? */
WQ_MAX_UNBOUND_PER_CPU = 4, /* 4 * #cpus for unbound wq */
WQ_DFL_ACTIVE = WQ_MAX_ACTIVE / 2,
};
/* unbound wq's aren't per-cpu, scale max_active according to #cpus */
#define WQ_UNBOUND_MAX_ACTIVE \
max_t(int, WQ_MAX_ACTIVE, num_possible_cpus() * WQ_MAX_UNBOUND_PER_CPU)
/*
* System-wide workqueues which are always present.
*
* system_wq is the one used by schedule[_delayed]_work[_on]().
* Multi-CPU multi-threaded. There are users which expect relatively
* short queue flush time. Don't queue works which can run for too
* long.
*
* system_long_wq is similar to system_wq but may host long running
* works. Queue flushing might take relatively long.
*
* system_nrt_wq is non-reentrant and guarantees that any given work
* item is never executed in parallel by multiple CPUs. Queue
* flushing might take relatively long.
*
* system_unbound_wq is an unbound workqueue. Workers are not bound to
* any specific CPU, not concurrency managed, and all queued works are
* executed immediately as long as max_active limit is not reached and
* resources are available.
*/
extern struct workqueue_struct *system_wq;
extern struct workqueue_struct *system_long_wq;
extern struct workqueue_struct *system_nrt_wq;
extern struct workqueue_struct *system_unbound_wq;
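
A brief sketch of the distinction drawn in the comment above; example_fn and the two work items are invented names:

#include <linux/workqueue.h>

static void example_fn(struct work_struct *work)
{
	/* runs in process context on one of the shared worker pools */
}

static DECLARE_WORK(example_short_work, example_fn);
static DECLARE_WORK(example_long_work, example_fn);

static void example_queue_both(void)
{
	/* short-running item: the default queue behind schedule_work() */
	queue_work(system_wq, &example_short_work);

	/* potentially long or blocking item: workers are not bound to a CPU
	 * and not concurrency managed, so it cannot clog the per-cpu pools */
	queue_work(system_unbound_wq, &example_long_work);
}
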
extern struct workqueue_struct * extern struct workqueue_struct *
__create_workqueue_key(const char *name, int singlethread, __alloc_workqueue_key(const char *name, unsigned int flags, int max_active,
int freezeable, int rt, struct lock_class_key *key, struct lock_class_key *key, const char *lock_name);
const char *lock_name);
#ifdef CONFIG_LOCKDEP #ifdef CONFIG_LOCKDEP
#define __create_workqueue(name, singlethread, freezeable, rt) \ #define alloc_workqueue(name, flags, max_active) \
({ \ ({ \
static struct lock_class_key __key; \ static struct lock_class_key __key; \
const char *__lock_name; \ const char *__lock_name; \
@ -196,20 +290,20 @@ __create_workqueue_key(const char *name, int singlethread,
else \ else \
__lock_name = #name; \ __lock_name = #name; \
\ \
__create_workqueue_key((name), (singlethread), \ __alloc_workqueue_key((name), (flags), (max_active), \
(freezeable), (rt), &__key, \ &__key, __lock_name); \
__lock_name); \
}) })
#else #else
#define __create_workqueue(name, singlethread, freezeable, rt) \ #define alloc_workqueue(name, flags, max_active) \
__create_workqueue_key((name), (singlethread), (freezeable), (rt), \ __alloc_workqueue_key((name), (flags), (max_active), NULL, NULL)
NULL, NULL)
#endif #endif
#define create_workqueue(name) __create_workqueue((name), 0, 0, 0) #define create_workqueue(name) \
#define create_rt_workqueue(name) __create_workqueue((name), 0, 0, 1) alloc_workqueue((name), WQ_RESCUER, 1)
#define create_freezeable_workqueue(name) __create_workqueue((name), 1, 1, 0) #define create_freezeable_workqueue(name) \
#define create_singlethread_workqueue(name) __create_workqueue((name), 1, 0, 0) alloc_workqueue((name), WQ_FREEZEABLE | WQ_UNBOUND | WQ_RESCUER, 1)
#define create_singlethread_workqueue(name) \
alloc_workqueue((name), WQ_UNBOUND | WQ_RESCUER, 1)
extern void destroy_workqueue(struct workqueue_struct *wq); extern void destroy_workqueue(struct workqueue_struct *wq);
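
Putting the new interface together, a caller roughly mirroring the gfs2 recovery queue created earlier in this diff might do the following; the "example" queue name and functions are illustrative only:

#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *example_wq;

static int example_create_queue(void)
{
	/* non-reentrant queue with a rescuer thread, as in the gfs2
	 * conversion above; max_active of 0 selects the default limit */
	example_wq = alloc_workqueue("example", WQ_NON_REENTRANT | WQ_RESCUER, 0);
	if (!example_wq)
		return -ENOMEM;
	return 0;
}

static void example_destroy_queue(void)
{
	destroy_workqueue(example_wq);
}
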
@ -231,16 +325,19 @@ extern int schedule_delayed_work(struct delayed_work *work, unsigned long delay)
extern int schedule_delayed_work_on(int cpu, struct delayed_work *work, extern int schedule_delayed_work_on(int cpu, struct delayed_work *work,
unsigned long delay); unsigned long delay);
extern int schedule_on_each_cpu(work_func_t func); extern int schedule_on_each_cpu(work_func_t func);
extern int current_is_keventd(void);
extern int keventd_up(void); extern int keventd_up(void);
extern void init_workqueues(void);
int execute_in_process_context(work_func_t fn, struct execute_work *); int execute_in_process_context(work_func_t fn, struct execute_work *);
extern int flush_work(struct work_struct *work); extern int flush_work(struct work_struct *work);
extern int cancel_work_sync(struct work_struct *work); extern int cancel_work_sync(struct work_struct *work);
extern void workqueue_set_max_active(struct workqueue_struct *wq,
int max_active);
extern bool workqueue_congested(unsigned int cpu, struct workqueue_struct *wq);
extern unsigned int work_cpu(struct work_struct *work);
extern unsigned int work_busy(struct work_struct *work);
/* /*
* Kill off a pending schedule_delayed_work(). Note that the work callback * Kill off a pending schedule_delayed_work(). Note that the work callback
* function may still be running on return from cancel_delayed_work(), unless * function may still be running on return from cancel_delayed_work(), unless
@ -298,7 +395,14 @@ static inline long work_on_cpu(unsigned int cpu, long (*fn)(void *), void *arg)
long work_on_cpu(unsigned int cpu, long (*fn)(void *), void *arg); long work_on_cpu(unsigned int cpu, long (*fn)(void *), void *arg);
#endif /* CONFIG_SMP */ #endif /* CONFIG_SMP */
#ifdef CONFIG_FREEZER
extern void freeze_workqueues_begin(void);
extern bool freeze_workqueues_busy(void);
extern void thaw_workqueues(void);
#endif /* CONFIG_FREEZER */
#ifdef CONFIG_LOCKDEP #ifdef CONFIG_LOCKDEP
int in_workqueue_context(struct workqueue_struct *wq); int in_workqueue_context(struct workqueue_struct *wq);
#endif #endif
#endif #endif


@ -1,92 +0,0 @@
#undef TRACE_SYSTEM
#define TRACE_SYSTEM workqueue
#if !defined(_TRACE_WORKQUEUE_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_WORKQUEUE_H
#include <linux/workqueue.h>
#include <linux/sched.h>
#include <linux/tracepoint.h>
DECLARE_EVENT_CLASS(workqueue,
TP_PROTO(struct task_struct *wq_thread, struct work_struct *work),
TP_ARGS(wq_thread, work),
TP_STRUCT__entry(
__array(char, thread_comm, TASK_COMM_LEN)
__field(pid_t, thread_pid)
__field(work_func_t, func)
),
TP_fast_assign(
memcpy(__entry->thread_comm, wq_thread->comm, TASK_COMM_LEN);
__entry->thread_pid = wq_thread->pid;
__entry->func = work->func;
),
TP_printk("thread=%s:%d func=%pf", __entry->thread_comm,
__entry->thread_pid, __entry->func)
);
DEFINE_EVENT(workqueue, workqueue_insertion,
TP_PROTO(struct task_struct *wq_thread, struct work_struct *work),
TP_ARGS(wq_thread, work)
);
DEFINE_EVENT(workqueue, workqueue_execution,
TP_PROTO(struct task_struct *wq_thread, struct work_struct *work),
TP_ARGS(wq_thread, work)
);
/* Trace the creation of one workqueue thread on a cpu */
TRACE_EVENT(workqueue_creation,
TP_PROTO(struct task_struct *wq_thread, int cpu),
TP_ARGS(wq_thread, cpu),
TP_STRUCT__entry(
__array(char, thread_comm, TASK_COMM_LEN)
__field(pid_t, thread_pid)
__field(int, cpu)
),
TP_fast_assign(
memcpy(__entry->thread_comm, wq_thread->comm, TASK_COMM_LEN);
__entry->thread_pid = wq_thread->pid;
__entry->cpu = cpu;
),
TP_printk("thread=%s:%d cpu=%d", __entry->thread_comm,
__entry->thread_pid, __entry->cpu)
);
TRACE_EVENT(workqueue_destruction,
TP_PROTO(struct task_struct *wq_thread),
TP_ARGS(wq_thread),
TP_STRUCT__entry(
__array(char, thread_comm, TASK_COMM_LEN)
__field(pid_t, thread_pid)
),
TP_fast_assign(
memcpy(__entry->thread_comm, wq_thread->comm, TASK_COMM_LEN);
__entry->thread_pid = wq_thread->pid;
),
TP_printk("thread=%s:%d", __entry->thread_comm, __entry->thread_pid)
);
#endif /* _TRACE_WORKQUEUE_H */
/* This part must be outside protection */
#include <trace/define_trace.h>


@ -1143,30 +1143,6 @@ config TRACEPOINTS
source "arch/Kconfig" source "arch/Kconfig"
config SLOW_WORK
default n
bool
help
The slow work thread pool provides a number of dynamically allocated
threads that can be used by the kernel to perform operations that
take a relatively long time.
An example of this would be CacheFiles doing a path lookup followed
by a series of mkdirs and a create call, all of which have to touch
disk.
See Documentation/slow-work.txt.
config SLOW_WORK_DEBUG
bool "Slow work debugging through debugfs"
default n
depends on SLOW_WORK && DEBUG_FS
help
Display the contents of the slow work run queue through debugfs,
including items currently executing.
See Documentation/slow-work.txt.
endmenu # General setup endmenu # General setup
config HAVE_GENERIC_DMA_COHERENT config HAVE_GENERIC_DMA_COHERENT


@ -32,7 +32,6 @@
#include <linux/start_kernel.h> #include <linux/start_kernel.h>
#include <linux/security.h> #include <linux/security.h>
#include <linux/smp.h> #include <linux/smp.h>
#include <linux/workqueue.h>
#include <linux/profile.h> #include <linux/profile.h>
#include <linux/rcupdate.h> #include <linux/rcupdate.h>
#include <linux/moduleparam.h> #include <linux/moduleparam.h>
@ -789,7 +788,6 @@ static void __init do_initcalls(void)
*/ */
static void __init do_basic_setup(void) static void __init do_basic_setup(void)
{ {
init_workqueues();
cpuset_init_smp(); cpuset_init_smp();
usermodehelper_init(); usermodehelper_init();
init_tmpfs(); init_tmpfs();


@ -99,8 +99,6 @@ obj-$(CONFIG_TRACING) += trace/
obj-$(CONFIG_X86_DS) += trace/ obj-$(CONFIG_X86_DS) += trace/
obj-$(CONFIG_RING_BUFFER) += trace/ obj-$(CONFIG_RING_BUFFER) += trace/
obj-$(CONFIG_SMP) += sched_cpupri.o obj-$(CONFIG_SMP) += sched_cpupri.o
obj-$(CONFIG_SLOW_WORK) += slow-work.o
obj-$(CONFIG_SLOW_WORK_DEBUG) += slow-work-debugfs.o
obj-$(CONFIG_PERF_EVENTS) += perf_event.o obj-$(CONFIG_PERF_EVENTS) += perf_event.o
obj-$(CONFIG_HAVE_HW_BREAKPOINT) += hw_breakpoint.o obj-$(CONFIG_HAVE_HW_BREAKPOINT) += hw_breakpoint.o
obj-$(CONFIG_USER_RETURN_NOTIFIER) += user-return-notifier.o obj-$(CONFIG_USER_RETURN_NOTIFIER) += user-return-notifier.o


@ -49,40 +49,33 @@ asynchronous and synchronous parts of the kernel.
*/ */
#include <linux/async.h> #include <linux/async.h>
#include <linux/bug.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/wait.h> #include <linux/wait.h>
#include <linux/sched.h> #include <linux/sched.h>
#include <linux/init.h>
#include <linux/kthread.h>
#include <linux/delay.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/workqueue.h>
#include <asm/atomic.h> #include <asm/atomic.h>
static async_cookie_t next_cookie = 1; static async_cookie_t next_cookie = 1;
#define MAX_THREADS 256
#define MAX_WORK 32768 #define MAX_WORK 32768
static LIST_HEAD(async_pending); static LIST_HEAD(async_pending);
static LIST_HEAD(async_running); static LIST_HEAD(async_running);
static DEFINE_SPINLOCK(async_lock); static DEFINE_SPINLOCK(async_lock);
static int async_enabled = 0;
struct async_entry { struct async_entry {
struct list_head list; struct list_head list;
async_cookie_t cookie; struct work_struct work;
async_func_ptr *func; async_cookie_t cookie;
void *data; async_func_ptr *func;
struct list_head *running; void *data;
struct list_head *running;
}; };
static DECLARE_WAIT_QUEUE_HEAD(async_done); static DECLARE_WAIT_QUEUE_HEAD(async_done);
static DECLARE_WAIT_QUEUE_HEAD(async_new);
static atomic_t entry_count; static atomic_t entry_count;
static atomic_t thread_count;
extern int initcall_debug; extern int initcall_debug;
@ -117,27 +110,23 @@ static async_cookie_t lowest_in_progress(struct list_head *running)
spin_unlock_irqrestore(&async_lock, flags); spin_unlock_irqrestore(&async_lock, flags);
return ret; return ret;
} }
/* /*
* pick the first pending entry and run it * pick the first pending entry and run it
*/ */
static void run_one_entry(void) static void async_run_entry_fn(struct work_struct *work)
{ {
struct async_entry *entry =
container_of(work, struct async_entry, work);
unsigned long flags; unsigned long flags;
struct async_entry *entry;
ktime_t calltime, delta, rettime; ktime_t calltime, delta, rettime;
/* 1) pick one task from the pending queue */ /* 1) move self to the running queue */
spin_lock_irqsave(&async_lock, flags); spin_lock_irqsave(&async_lock, flags);
if (list_empty(&async_pending))
goto out;
entry = list_first_entry(&async_pending, struct async_entry, list);
/* 2) move it to the running queue */
list_move_tail(&entry->list, entry->running); list_move_tail(&entry->list, entry->running);
spin_unlock_irqrestore(&async_lock, flags); spin_unlock_irqrestore(&async_lock, flags);
/* 3) run it (and print duration)*/ /* 2) run (and print duration) */
if (initcall_debug && system_state == SYSTEM_BOOTING) { if (initcall_debug && system_state == SYSTEM_BOOTING) {
printk("calling %lli_%pF @ %i\n", (long long)entry->cookie, printk("calling %lli_%pF @ %i\n", (long long)entry->cookie,
entry->func, task_pid_nr(current)); entry->func, task_pid_nr(current));
@ -153,31 +142,25 @@ static void run_one_entry(void)
(long long)ktime_to_ns(delta) >> 10); (long long)ktime_to_ns(delta) >> 10);
} }
/* 4) remove it from the running queue */ /* 3) remove self from the running queue */
spin_lock_irqsave(&async_lock, flags); spin_lock_irqsave(&async_lock, flags);
list_del(&entry->list); list_del(&entry->list);
/* 5) free the entry */ /* 4) free the entry */
kfree(entry); kfree(entry);
atomic_dec(&entry_count); atomic_dec(&entry_count);
spin_unlock_irqrestore(&async_lock, flags); spin_unlock_irqrestore(&async_lock, flags);
/* 6) wake up any waiters. */ /* 5) wake up any waiters */
wake_up(&async_done); wake_up(&async_done);
return;
out:
spin_unlock_irqrestore(&async_lock, flags);
} }
static async_cookie_t __async_schedule(async_func_ptr *ptr, void *data, struct list_head *running) static async_cookie_t __async_schedule(async_func_ptr *ptr, void *data, struct list_head *running)
{ {
struct async_entry *entry; struct async_entry *entry;
unsigned long flags; unsigned long flags;
async_cookie_t newcookie; async_cookie_t newcookie;
/* allow irq-off callers */ /* allow irq-off callers */
entry = kzalloc(sizeof(struct async_entry), GFP_ATOMIC); entry = kzalloc(sizeof(struct async_entry), GFP_ATOMIC);
@ -186,7 +169,7 @@ static async_cookie_t __async_schedule(async_func_ptr *ptr, void *data, struct l
* If we're out of memory or if there's too much work * If we're out of memory or if there's too much work
* pending already, we execute synchronously. * pending already, we execute synchronously.
*/ */
if (!async_enabled || !entry || atomic_read(&entry_count) > MAX_WORK) { if (!entry || atomic_read(&entry_count) > MAX_WORK) {
kfree(entry); kfree(entry);
spin_lock_irqsave(&async_lock, flags); spin_lock_irqsave(&async_lock, flags);
newcookie = next_cookie++; newcookie = next_cookie++;
@ -196,6 +179,7 @@ static async_cookie_t __async_schedule(async_func_ptr *ptr, void *data, struct l
ptr(data, newcookie); ptr(data, newcookie);
return newcookie; return newcookie;
} }
INIT_WORK(&entry->work, async_run_entry_fn);
entry->func = ptr; entry->func = ptr;
entry->data = data; entry->data = data;
entry->running = running; entry->running = running;
@ -205,7 +189,10 @@ static async_cookie_t __async_schedule(async_func_ptr *ptr, void *data, struct l
list_add_tail(&entry->list, &async_pending); list_add_tail(&entry->list, &async_pending);
atomic_inc(&entry_count); atomic_inc(&entry_count);
spin_unlock_irqrestore(&async_lock, flags); spin_unlock_irqrestore(&async_lock, flags);
wake_up(&async_new);
/* schedule for execution */
queue_work(system_unbound_wq, &entry->work);
return newcookie; return newcookie;
} }
@ -312,87 +299,3 @@ void async_synchronize_cookie(async_cookie_t cookie)
async_synchronize_cookie_domain(cookie, &async_running); async_synchronize_cookie_domain(cookie, &async_running);
} }
EXPORT_SYMBOL_GPL(async_synchronize_cookie); EXPORT_SYMBOL_GPL(async_synchronize_cookie);
static int async_thread(void *unused)
{
DECLARE_WAITQUEUE(wq, current);
add_wait_queue(&async_new, &wq);
while (!kthread_should_stop()) {
int ret = HZ;
set_current_state(TASK_INTERRUPTIBLE);
/*
* check the list head without lock.. false positives
* are dealt with inside run_one_entry() while holding
* the lock.
*/
rmb();
if (!list_empty(&async_pending))
run_one_entry();
else
ret = schedule_timeout(HZ);
if (ret == 0) {
/*
* we timed out, this means we as thread are redundant.
* we sign off and die, but we to avoid any races there
* is a last-straw check to see if work snuck in.
*/
atomic_dec(&thread_count);
wmb(); /* manager must see our departure first */
if (list_empty(&async_pending))
break;
/*
* woops work came in between us timing out and us
* signing off; we need to stay alive and keep working.
*/
atomic_inc(&thread_count);
}
}
remove_wait_queue(&async_new, &wq);
return 0;
}
static int async_manager_thread(void *unused)
{
DECLARE_WAITQUEUE(wq, current);
add_wait_queue(&async_new, &wq);
while (!kthread_should_stop()) {
int tc, ec;
set_current_state(TASK_INTERRUPTIBLE);
tc = atomic_read(&thread_count);
rmb();
ec = atomic_read(&entry_count);
while (tc < ec && tc < MAX_THREADS) {
if (IS_ERR(kthread_run(async_thread, NULL, "async/%i",
tc))) {
msleep(100);
continue;
}
atomic_inc(&thread_count);
tc++;
}
schedule();
}
remove_wait_queue(&async_new, &wq);
return 0;
}
static int __init async_init(void)
{
async_enabled =
!IS_ERR(kthread_run(async_manager_thread, NULL, "async/mgr"));
WARN_ON(!async_enabled);
return 0;
}
core_initcall(async_init);


@ -14,6 +14,8 @@
#include <linux/file.h> #include <linux/file.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/mutex.h> #include <linux/mutex.h>
#include <linux/slab.h>
#include <linux/freezer.h>
#include <trace/events/sched.h> #include <trace/events/sched.h>
static DEFINE_SPINLOCK(kthread_create_lock); static DEFINE_SPINLOCK(kthread_create_lock);
@ -35,6 +37,7 @@ struct kthread_create_info
struct kthread { struct kthread {
int should_stop; int should_stop;
void *data;
struct completion exited; struct completion exited;
}; };
@ -54,6 +57,19 @@ int kthread_should_stop(void)
} }
EXPORT_SYMBOL(kthread_should_stop); EXPORT_SYMBOL(kthread_should_stop);
/**
* kthread_data - return data value specified on kthread creation
* @task: kthread task in question
*
* Return the data value specified when kthread @task was created.
* The caller is responsible for ensuring the validity of @task when
* calling this function.
*/
void *kthread_data(struct task_struct *task)
{
return to_kthread(task)->data;
}
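A hedged sketch of what kthread_data() buys a subsystem: given only the task_struct of a kthread it created, it can get its private state back (kernel/workqueue.c uses this from the scheduler hooks to map a worker task to its struct worker). The demo_* names below are invented.

#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/kthread.h>
#include <linux/sched.h>

struct demo_worker {
	int id;
};

static int demo_thread_fn(void *data)
{
	struct demo_worker *w = data;

	while (!kthread_should_stop())
		schedule_timeout_interruptible(HZ);
	return w->id;
}

static struct demo_worker *demo_worker_of(struct task_struct *task)
{
	return kthread_data(task);	/* same pointer passed to kthread_create() */
}

static int demo_start(void)
{
	struct demo_worker w = { .id = 42 };
	struct task_struct *task;

	task = kthread_run(demo_thread_fn, &w, "demo/%d", w.id);
	if (IS_ERR(task))
		return PTR_ERR(task);
	WARN_ON(demo_worker_of(task) != &w);
	return kthread_stop(task);	/* w is on-stack, so stop before returning */
}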
static int kthread(void *_create) static int kthread(void *_create)
{ {
/* Copy data: it's on kthread's stack */ /* Copy data: it's on kthread's stack */
@ -64,6 +80,7 @@ static int kthread(void *_create)
int ret; int ret;
self.should_stop = 0; self.should_stop = 0;
self.data = data;
init_completion(&self.exited); init_completion(&self.exited);
current->vfork_done = &self.exited; current->vfork_done = &self.exited;
@ -247,3 +264,150 @@ int kthreadd(void *unused)
return 0; return 0;
} }
/**
* kthread_worker_fn - kthread function to process kthread_worker
* @worker_ptr: pointer to initialized kthread_worker
*
* This function can be used as @threadfn to kthread_create() or
* kthread_run() with @worker_ptr argument pointing to an initialized
* kthread_worker. The started kthread will process work_list until
* it is stopped with kthread_stop(). A kthread can also call
* this function directly after extra initialization.
*
* Different kthreads can be used for the same kthread_worker as long
* as there's only one kthread attached to it at any given time. A
* kthread_worker without an attached kthread simply collects queued
* kthread_works.
*/
int kthread_worker_fn(void *worker_ptr)
{
struct kthread_worker *worker = worker_ptr;
struct kthread_work *work;
WARN_ON(worker->task);
worker->task = current;
repeat:
set_current_state(TASK_INTERRUPTIBLE); /* mb paired w/ kthread_stop */
if (kthread_should_stop()) {
__set_current_state(TASK_RUNNING);
spin_lock_irq(&worker->lock);
worker->task = NULL;
spin_unlock_irq(&worker->lock);
return 0;
}
work = NULL;
spin_lock_irq(&worker->lock);
if (!list_empty(&worker->work_list)) {
work = list_first_entry(&worker->work_list,
struct kthread_work, node);
list_del_init(&work->node);
}
spin_unlock_irq(&worker->lock);
if (work) {
__set_current_state(TASK_RUNNING);
work->func(work);
smp_wmb(); /* wmb worker-b0 paired with flush-b1 */
work->done_seq = work->queue_seq;
smp_mb(); /* mb worker-b1 paired with flush-b0 */
if (atomic_read(&work->flushing))
wake_up_all(&work->done);
} else if (!freezing(current))
schedule();
try_to_freeze();
goto repeat;
}
EXPORT_SYMBOL_GPL(kthread_worker_fn);
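A hedged sketch of attaching a dedicated thread to a kthread_worker by running kthread_worker_fn() in it. It assumes the init_kthread_worker() helper that accompanies this code in <linux/kthread.h>; the demo_* names are invented.

#include <linux/err.h>
#include <linux/kthread.h>

static struct kthread_worker demo_worker;
static struct task_struct *demo_worker_task;

static int demo_worker_start(void)
{
	init_kthread_worker(&demo_worker);
	demo_worker_task = kthread_run(kthread_worker_fn, &demo_worker,
				       "demo_worker");
	if (IS_ERR(demo_worker_task))
		return PTR_ERR(demo_worker_task);
	return 0;
}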
/**
* queue_kthread_work - queue a kthread_work
* @worker: target kthread_worker
* @work: kthread_work to queue
*
* Queue @work to work processor @worker for async execution. @worker
* must have been initialized (see kthread_worker_fn()). Returns %true
* if @work was successfully queued, %false if it was already pending.
*/
bool queue_kthread_work(struct kthread_worker *worker,
struct kthread_work *work)
{
bool ret = false;
unsigned long flags;
spin_lock_irqsave(&worker->lock, flags);
if (list_empty(&work->node)) {
list_add_tail(&work->node, &worker->work_list);
work->queue_seq++;
if (likely(worker->task))
wake_up_process(worker->task);
ret = true;
}
spin_unlock_irqrestore(&worker->lock, flags);
return ret;
}
EXPORT_SYMBOL_GPL(queue_kthread_work);
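A hedged sketch of feeding work to the demo worker created in the previous sketch; it assumes the init_kthread_work() helper from the same series. The demo_* names are invented, and the kthread_work is initialised once and re-queued as needed.

#include <linux/kernel.h>
#include <linux/kthread.h>

static struct kthread_work demo_work;

static void demo_work_fn(struct kthread_work *work)
{
	pr_info("running in the dedicated demo worker thread\n");
}

static void demo_work_setup(void)
{
	init_kthread_work(&demo_work, demo_work_fn);
}

static void demo_kick(void)
{
	/* a false return means the work was already pending on the worker */
	if (!queue_kthread_work(&demo_worker, &demo_work))
		pr_debug("demo_work already queued\n");
}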
/**
* flush_kthread_work - flush a kthread_work
* @work: work to flush
*
* If @work is queued or executing, wait for it to finish execution.
*/
void flush_kthread_work(struct kthread_work *work)
{
int seq = work->queue_seq;
atomic_inc(&work->flushing);
/*
* mb flush-b0 paired with worker-b1, to make sure either
* worker sees the above increment or we see done_seq update.
*/
smp_mb__after_atomic_inc();
/* A - B <= 0 tests whether B is in front of A regardless of overflow */
wait_event(work->done, seq - work->done_seq <= 0);
atomic_dec(&work->flushing);
/*
* rmb flush-b1 paired with worker-b0, to make sure our caller
* sees every change made by work->func().
*/
smp_mb__after_atomic_dec();
}
EXPORT_SYMBOL_GPL(flush_kthread_work);
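An aside on the flush test above: comparing sequence counters by signed difference ("seq - work->done_seq <= 0") keeps working when the counters wrap around. A tiny, hedged illustration of the same idiom with invented names:

#include <linux/types.h>

/* true once @done has caught up with @wanted, even across integer wrap */
static inline bool demo_seq_reached(int wanted, int done)
{
	return done - wanted >= 0;
}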
struct kthread_flush_work {
struct kthread_work work;
struct completion done;
};
static void kthread_flush_work_fn(struct kthread_work *work)
{
struct kthread_flush_work *fwork =
container_of(work, struct kthread_flush_work, work);
complete(&fwork->done);
}
/**
* flush_kthread_worker - flush all current works on a kthread_worker
* @worker: worker to flush
*
* Wait until all currently executing or pending works on @worker are
* finished.
*/
void flush_kthread_worker(struct kthread_worker *worker)
{
struct kthread_flush_work fwork = {
KTHREAD_WORK_INIT(fwork.work, kthread_flush_work_fn),
COMPLETION_INITIALIZER_ONSTACK(fwork.done),
};
queue_kthread_work(worker, &fwork.work);
wait_for_completion(&fwork.done);
}
EXPORT_SYMBOL_GPL(flush_kthread_worker);
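A hedged teardown sketch for the demo worker from the earlier sketches: flush everything queued so far, then stop the servicing thread, which makes kthread_worker_fn() clear worker->task and return.

static void demo_worker_stop(void)
{
	flush_kthread_worker(&demo_worker);	/* drain pending and running work */
	kthread_stop(demo_worker_task);		/* kthread_worker_fn() returns 0 */
}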

View file

@ -15,6 +15,7 @@
#include <linux/syscalls.h> #include <linux/syscalls.h>
#include <linux/freezer.h> #include <linux/freezer.h>
#include <linux/delay.h> #include <linux/delay.h>
#include <linux/workqueue.h>
/* /*
* Timeout for stopping processes * Timeout for stopping processes
@ -35,6 +36,7 @@ static int try_to_freeze_tasks(bool sig_only)
struct task_struct *g, *p; struct task_struct *g, *p;
unsigned long end_time; unsigned long end_time;
unsigned int todo; unsigned int todo;
bool wq_busy = false;
struct timeval start, end; struct timeval start, end;
u64 elapsed_csecs64; u64 elapsed_csecs64;
unsigned int elapsed_csecs; unsigned int elapsed_csecs;
@ -42,6 +44,10 @@ static int try_to_freeze_tasks(bool sig_only)
do_gettimeofday(&start); do_gettimeofday(&start);
end_time = jiffies + TIMEOUT; end_time = jiffies + TIMEOUT;
if (!sig_only)
freeze_workqueues_begin();
while (true) { while (true) {
todo = 0; todo = 0;
read_lock(&tasklist_lock); read_lock(&tasklist_lock);
@ -63,6 +69,12 @@ static int try_to_freeze_tasks(bool sig_only)
todo++; todo++;
} while_each_thread(g, p); } while_each_thread(g, p);
read_unlock(&tasklist_lock); read_unlock(&tasklist_lock);
if (!sig_only) {
wq_busy = freeze_workqueues_busy();
todo += wq_busy;
}
if (!todo || time_after(jiffies, end_time)) if (!todo || time_after(jiffies, end_time))
break; break;
@ -86,8 +98,12 @@ static int try_to_freeze_tasks(bool sig_only)
*/ */
printk("\n"); printk("\n");
printk(KERN_ERR "Freezing of tasks failed after %d.%02d seconds " printk(KERN_ERR "Freezing of tasks failed after %d.%02d seconds "
"(%d tasks refusing to freeze):\n", "(%d tasks refusing to freeze, wq_busy=%d):\n",
elapsed_csecs / 100, elapsed_csecs % 100, todo); elapsed_csecs / 100, elapsed_csecs % 100,
todo - wq_busy, wq_busy);
thaw_workqueues();
read_lock(&tasklist_lock); read_lock(&tasklist_lock);
do_each_thread(g, p) { do_each_thread(g, p) {
task_lock(p); task_lock(p);
@ -157,6 +173,7 @@ void thaw_processes(void)
oom_killer_enable(); oom_killer_enable();
printk("Restarting tasks ... "); printk("Restarting tasks ... ");
thaw_workqueues();
thaw_tasks(true); thaw_tasks(true);
thaw_tasks(false); thaw_tasks(false);
schedule(); schedule();
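For freeze_workqueues_begin() above to cover a driver's work items, the workqueue has to be created freezeable. A hedged sketch using the flag spelling this series uses (WQ_FREEZEABLE); the queue name and demo_* identifiers are invented.

#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *demo_wq;

static int demo_wq_setup(void)
{
	/* frozen by freeze_workqueues_begin(), thawed by thaw_workqueues() */
	demo_wq = alloc_workqueue("demo_freezeable", WQ_FREEZEABLE, 0);
	if (!demo_wq)
		return -ENOMEM;
	return 0;
}

static void demo_wq_teardown(void)
{
	destroy_workqueue(demo_wq);
}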

View file

@ -1,227 +0,0 @@
/* Slow work debugging
*
* Copyright (C) 2009 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public Licence
* as published by the Free Software Foundation; either version
* 2 of the Licence, or (at your option) any later version.
*/
#include <linux/module.h>
#include <linux/slow-work.h>
#include <linux/fs.h>
#include <linux/time.h>
#include <linux/seq_file.h>
#include "slow-work.h"
#define ITERATOR_SHIFT (BITS_PER_LONG - 4)
#define ITERATOR_SELECTOR (0xfUL << ITERATOR_SHIFT)
#define ITERATOR_COUNTER (~ITERATOR_SELECTOR)
void slow_work_new_thread_desc(struct slow_work *work, struct seq_file *m)
{
seq_puts(m, "Slow-work: New thread");
}
/*
* Render the time mark field on a work item into a 5-char time with units plus
* a space
*/
static void slow_work_print_mark(struct seq_file *m, struct slow_work *work)
{
struct timespec now, diff;
now = CURRENT_TIME;
diff = timespec_sub(now, work->mark);
if (diff.tv_sec < 0)
seq_puts(m, " -ve ");
else if (diff.tv_sec == 0 && diff.tv_nsec < 1000)
seq_printf(m, "%3luns ", diff.tv_nsec);
else if (diff.tv_sec == 0 && diff.tv_nsec < 1000000)
seq_printf(m, "%3luus ", diff.tv_nsec / 1000);
else if (diff.tv_sec == 0 && diff.tv_nsec < 1000000000)
seq_printf(m, "%3lums ", diff.tv_nsec / 1000000);
else if (diff.tv_sec <= 1)
seq_puts(m, " 1s ");
else if (diff.tv_sec < 60)
seq_printf(m, "%4lus ", diff.tv_sec);
else if (diff.tv_sec < 60 * 60)
seq_printf(m, "%4lum ", diff.tv_sec / 60);
else if (diff.tv_sec < 60 * 60 * 24)
seq_printf(m, "%4luh ", diff.tv_sec / 3600);
else
seq_puts(m, "exces ");
}
/*
* Describe a slow work item for debugfs
*/
static int slow_work_runqueue_show(struct seq_file *m, void *v)
{
struct slow_work *work;
struct list_head *p = v;
unsigned long id;
switch ((unsigned long) v) {
case 1:
seq_puts(m, "THR PID ITEM ADDR FL MARK DESC\n");
return 0;
case 2:
seq_puts(m, "=== ===== ================ == ===== ==========\n");
return 0;
case 3 ... 3 + SLOW_WORK_THREAD_LIMIT - 1:
id = (unsigned long) v - 3;
read_lock(&slow_work_execs_lock);
work = slow_work_execs[id];
if (work) {
smp_read_barrier_depends();
seq_printf(m, "%3lu %5d %16p %2lx ",
id, slow_work_pids[id], work, work->flags);
slow_work_print_mark(m, work);
if (work->ops->desc)
work->ops->desc(work, m);
seq_putc(m, '\n');
}
read_unlock(&slow_work_execs_lock);
return 0;
default:
work = list_entry(p, struct slow_work, link);
seq_printf(m, "%3s - %16p %2lx ",
work->flags & SLOW_WORK_VERY_SLOW ? "vsq" : "sq",
work, work->flags);
slow_work_print_mark(m, work);
if (work->ops->desc)
work->ops->desc(work, m);
seq_putc(m, '\n');
return 0;
}
}
/*
* map the iterator to a work item
*/
static void *slow_work_runqueue_index(struct seq_file *m, loff_t *_pos)
{
struct list_head *p;
unsigned long count, id;
switch (*_pos >> ITERATOR_SHIFT) {
case 0x0:
if (*_pos == 0)
*_pos = 1;
if (*_pos < 3)
return (void *)(unsigned long) *_pos;
if (*_pos < 3 + SLOW_WORK_THREAD_LIMIT)
for (id = *_pos - 3;
id < SLOW_WORK_THREAD_LIMIT;
id++, (*_pos)++)
if (slow_work_execs[id])
return (void *)(unsigned long) *_pos;
*_pos = 0x1UL << ITERATOR_SHIFT;
case 0x1:
count = *_pos & ITERATOR_COUNTER;
list_for_each(p, &slow_work_queue) {
if (count == 0)
return p;
count--;
}
*_pos = 0x2UL << ITERATOR_SHIFT;
case 0x2:
count = *_pos & ITERATOR_COUNTER;
list_for_each(p, &vslow_work_queue) {
if (count == 0)
return p;
count--;
}
*_pos = 0x3UL << ITERATOR_SHIFT;
default:
return NULL;
}
}
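An aside on the removed iterator above: it packs a 4-bit "which list" selector into the top bits of the seq_file position and a per-list counter into the rest. A hedged, self-contained illustration of the same encoding, with invented names:

#include <linux/bitops.h>
#include <linux/types.h>

#define DEMO_ITER_SHIFT		(BITS_PER_LONG - 4)
#define DEMO_ITER_SELECTOR	(0xfUL << DEMO_ITER_SHIFT)
#define DEMO_ITER_COUNTER	(~DEMO_ITER_SELECTOR)

static inline loff_t demo_iter_pos(unsigned long selector, unsigned long count)
{
	return (selector << DEMO_ITER_SHIFT) | (count & DEMO_ITER_COUNTER);
}

static inline unsigned long demo_iter_selector(loff_t pos)
{
	return ((unsigned long)pos & DEMO_ITER_SELECTOR) >> DEMO_ITER_SHIFT;
}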
/*
* set up the iterator to start reading from the first line
*/
static void *slow_work_runqueue_start(struct seq_file *m, loff_t *_pos)
{
spin_lock_irq(&slow_work_queue_lock);
return slow_work_runqueue_index(m, _pos);
}
/*
* move to the next line
*/
static void *slow_work_runqueue_next(struct seq_file *m, void *v, loff_t *_pos)
{
struct list_head *p = v;
unsigned long selector = *_pos >> ITERATOR_SHIFT;
(*_pos)++;
switch (selector) {
case 0x0:
return slow_work_runqueue_index(m, _pos);
case 0x1:
if (*_pos >> ITERATOR_SHIFT == 0x1) {
p = p->next;
if (p != &slow_work_queue)
return p;
}
*_pos = 0x2UL << ITERATOR_SHIFT;
p = &vslow_work_queue;
case 0x2:
if (*_pos >> ITERATOR_SHIFT == 0x2) {
p = p->next;
if (p != &vslow_work_queue)
return p;
}
*_pos = 0x3UL << ITERATOR_SHIFT;
default:
return NULL;
}
}
/*
* clean up after reading
*/
static void slow_work_runqueue_stop(struct seq_file *m, void *v)
{
spin_unlock_irq(&slow_work_queue_lock);
}
static const struct seq_operations slow_work_runqueue_ops = {
.start = slow_work_runqueue_start,
.stop = slow_work_runqueue_stop,
.next = slow_work_runqueue_next,
.show = slow_work_runqueue_show,
};
/*
* open "/sys/kernel/debug/slow_work/runqueue" to list queue contents
*/
static int slow_work_runqueue_open(struct inode *inode, struct file *file)
{
return seq_open(file, &slow_work_runqueue_ops);
}
const struct file_operations slow_work_runqueue_fops = {
.owner = THIS_MODULE,
.open = slow_work_runqueue_open,
.read = seq_read,
.llseek = seq_lseek,
.release = seq_release,
};

File diff suppressed because it is too large

View file

@ -1,72 +0,0 @@
/* Slow work private definitions
*
* Copyright (C) 2009 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public Licence
* as published by the Free Software Foundation; either version
* 2 of the Licence, or (at your option) any later version.
*/
#define SLOW_WORK_CULL_TIMEOUT (5 * HZ) /* cull threads 5s after running out of
* things to do */
#define SLOW_WORK_OOM_TIMEOUT (5 * HZ) /* can't start new threads for 5s after
* OOM */
#define SLOW_WORK_THREAD_LIMIT 255 /* abs maximum number of slow-work threads */
/*
* slow-work.c
*/
#ifdef CONFIG_SLOW_WORK_DEBUG
extern struct slow_work *slow_work_execs[];
extern pid_t slow_work_pids[];
extern rwlock_t slow_work_execs_lock;
#endif
extern struct list_head slow_work_queue;
extern struct list_head vslow_work_queue;
extern spinlock_t slow_work_queue_lock;
/*
* slow-work-debugfs.c
*/
#ifdef CONFIG_SLOW_WORK_DEBUG
extern const struct file_operations slow_work_runqueue_fops;
extern void slow_work_new_thread_desc(struct slow_work *, struct seq_file *);
#endif
/*
* Helper functions
*/
static inline void slow_work_set_thread_pid(int id, pid_t pid)
{
#ifdef CONFIG_SLOW_WORK_DEBUG
slow_work_pids[id] = pid;
#endif
}
static inline void slow_work_mark_time(struct slow_work *work)
{
#ifdef CONFIG_SLOW_WORK_DEBUG
work->mark = CURRENT_TIME;
#endif
}
static inline void slow_work_begin_exec(int id, struct slow_work *work)
{
#ifdef CONFIG_SLOW_WORK_DEBUG
slow_work_execs[id] = work;
#endif
}
static inline void slow_work_end_exec(int id, struct slow_work *work)
{
#ifdef CONFIG_SLOW_WORK_DEBUG
write_lock(&slow_work_execs_lock);
slow_work_execs[id] = NULL;
write_unlock(&slow_work_execs_lock);
#endif
}

View file

@ -50,7 +50,6 @@
#include <linux/acpi.h> #include <linux/acpi.h>
#include <linux/reboot.h> #include <linux/reboot.h>
#include <linux/ftrace.h> #include <linux/ftrace.h>
#include <linux/slow-work.h>
#include <linux/perf_event.h> #include <linux/perf_event.h>
#include <linux/kprobes.h> #include <linux/kprobes.h>
#include <linux/pipe_fs_i.h> #include <linux/pipe_fs_i.h>
@ -917,13 +916,6 @@ static struct ctl_table kern_table[] = {
.proc_handler = proc_dointvec, .proc_handler = proc_dointvec,
}, },
#endif #endif
#ifdef CONFIG_SLOW_WORK
{
.procname = "slow-work",
.mode = 0555,
.child = slow_work_sysctls,
},
#endif
#ifdef CONFIG_PERF_EVENTS #ifdef CONFIG_PERF_EVENTS
{ {
.procname = "perf_event_paranoid", .procname = "perf_event_paranoid",

View file

@ -323,17 +323,6 @@ config STACK_TRACER
Say N if unsure. Say N if unsure.
config WORKQUEUE_TRACER
bool "Trace workqueues"
select GENERIC_TRACER
help
The workqueue tracer provides some statistical information
about each cpu workqueue thread such as the number of the
works inserted and executed since their creation. It can help
to evaluate the amount of work each of them has to perform.
For example it can help a developer to decide whether he should
choose a per-cpu workqueue instead of a singlethreaded one.
config BLK_DEV_IO_TRACE config BLK_DEV_IO_TRACE
bool "Support for tracing block IO actions" bool "Support for tracing block IO actions"
depends on SYSFS depends on SYSFS

File diff suppressed because it is too large

View file

@ -4,13 +4,6 @@
* Scheduler hooks for concurrency managed workqueue. Only to be * Scheduler hooks for concurrency managed workqueue. Only to be
* included from sched.c and workqueue.c. * included from sched.c and workqueue.c.
*/ */
static inline void wq_worker_waking_up(struct task_struct *task,
				       unsigned int cpu)
{
}
static inline struct task_struct *wq_worker_sleeping(struct task_struct *task,
						     unsigned int cpu)
{
	return NULL;
}
void wq_worker_waking_up(struct task_struct *task, unsigned int cpu);
struct task_struct *wq_worker_sleeping(struct task_struct *task,
				       unsigned int cpu);
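A hedged, simplified sketch of how the scheduler side is expected to call these hooks (the real call sites are in kernel/sched.c in this series, keyed off PF_WQ_WORKER and run under the runqueue lock; this is not a literal excerpt and the demo_* name is invented):

#include <linux/sched.h>

static void demo_note_worker_sleeping(struct task_struct *prev, unsigned int cpu)
{
	struct task_struct *to_wakeup;

	if (!(prev->flags & PF_WQ_WORKER))
		return;
	/* a worker is blocking: ask workqueue whether another one should run */
	to_wakeup = wq_worker_sleeping(prev, cpu);
	if (to_wakeup)
		wake_up_process(to_wakeup);
}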