18 months ago, commit 5bbf89fc26 changed us to load bzImages directly
rather than manually ungzipping them, so we no longer need libz.
Also, -m32 is useful for those on 64-bit platforms (and harmless on
32-bit).
Reported-by: Ron Minnich <rminnich@gmail.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The downside of the last patch which made restore_flags and irq_enable
check interrupts is that they are now too big to be patched directly
into the callsites, so the C versions are always used.
But the C versions go via PV_CALLEE_SAVE_REGS_THUNK which saves all
the registers. In fact, we don't need any registers in the fast path,
so we can do better than this if we actually code them in assembler.
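As a rough sketch (names assumed, not the exact lguest code), this is the
shape of the slow path today: the C helpers are registered through
callee-save thunks, which is where all the register saving comes from.

    /* Today's slow path: each C helper is wrapped in a thunk that spills
     * and reloads every call-clobbered register around the call.  A
     * hand-written assembler stub can touch only what it actually needs. */
    PV_CALLEE_SAVE_REGS_THUNK(lguest_irq_enable);   /* names assumed */
    PV_CALLEE_SAVE_REGS_THUNK(lguest_restore_fl);

    pv_irq_ops.irq_enable = PV_CALLEE_SAVE(lguest_irq_enable);
    pv_irq_ops.restore_fl = PV_CALLEE_SAVE(lguest_restore_fl);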
The results are in the noise, but since it's about the same amount of
code, it's worth applying.
1GB Guest->Host: input(suppressed),output(suppressed)
Before:
Seconds: 0:16.53
Packets: 377268,753673
Interrupts: 22461,24297
Notifications: 1(5245),21303(732370)
Net IRQs triggered: 377023(245),42578(711095)
After:
Seconds: 0:16.48
Packets: 377289,753673
Interrupts: 22281,24465
Notifications: 1(5245),21296(732377)
Net IRQs triggered: 377060(229),42564(711109)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
lguest never checked for pending interrupts when enabling interrupts, and
things still worked. However, checking does make a significant difference to TCP
performance, so it's time we fixed it by introducing a pending_irq flag
and checking it on irq_restore and irq_enable.
These two routines are now too big to patch into the 8/10-byte
patch space, so we drop that code.
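A minimal sketch of the Guest-side check (field, function, and hypercall
names here are illustrative assumptions, not the exact code):

    /* Re-enable interrupts, then ask the Host to deliver anything it had
     * to hold back while they were disabled. */
    static void lguest_irq_enable(void)
    {
            lguest_data.irq_enabled = X86_EFLAGS_IF;
            if (lguest_data.irq_pending)            /* set by the Host */
                    hcall(LHCALL_SEND_INTERRUPTS, 0, 0, 0);
    }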
Note: The high latency on interrupt delivery had a very curious
effect: once everything else was optimized, networking without GSO was
faster than networking with GSO, since more interrupts were sent and
hence there was a greater chance of one getting through to the Guest!
Note2: (Almost) Closing the same loophole for iret doesn't have any
measurable effect, so I'm leaving that patch for the moment.
Before:
1GB tcpblast Guest->Host: 30.7 seconds
1GB tcpblast Guest->Host (no GSO): 76.0 seconds
After:
1GB tcpblast Guest->Host: 6.8 seconds
1GB tcpblast Guest->Host (no GSO): 27.8 seconds
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When the Guest does the LHCALL_HALT hypercall, we go to sleep, expecting
that a timer or the Waker will wake_up_process() us.
But we do it in a stupid way, leaving a classic missing wakeup race.
So split maybe_do_interrupt() into interrupt_pending() and
try_deliver_interrupt(), and check interrupt_pending() and the
"break_out" flag before calling schedule.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2088761152 (lguest: notify on empty) introduced
lguest support for the VIRTIO_F_NOTIFY_ON_EMPTY flag, but in fact it turned on
interrupts all the time.
Because we always process one buffer at a time, the inflight count is always 0
when we call trigger_irq, and so we always ignore VRING_AVAIL_F_NO_INTERRUPT
from the Guest.
It should be looking to see if there are more buffers in the Guest's queue:
if it's empty, then we force an interrupt.
This makes little difference, since we usually have an empty queue; but
that's the subject of another patch.
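A hypothetical sketch of the check the Launcher should be making (field
names are borrowed loosely from the Launcher's virtqueue bookkeeping, so
treat them as assumptions):

    /* Decide whether to bother the Guest with an interrupt. */
    static bool need_interrupt(struct virtqueue *vq)
    {
            /* If the Guest's avail ring is now empty, force an interrupt
             * regardless of the suppression flag, so it refills the queue. */
            if (vq->vring.avail->idx == vq->last_avail_idx)
                    return true;

            /* Otherwise respect VRING_AVAIL_F_NO_INTERRUPT. */
            return !(vq->vring.avail->flags & VRING_AVAIL_F_NO_INTERRUPT);
    }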
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The Launcher could be inside the Guest on another CPU; wake_up_process
will do nothing because it is "running". kick_process will knock it
back into our kernel in this case; otherwise we'll miss it until the
next guest exit.
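In code, the fix amounts to something like this (the task pointer name is
an assumption):

    /* wake_up_process() returns 0 if the task was already runnable -- for
     * example, the Launcher is off running the Guest on another CPU -- so
     * fall back to kick_process() to yank it back into our kernel. */
    if (!wake_up_process(cpu->tsk))
            kick_process(cpu->tsk);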
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
lguest needs kick_process: wake_up_process() does nothing if a process
is running, which isn't sufficient (we need it in the kernel).
And lguest support is usually modular.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Ingo Molnar <mingo@elte.hu>
Since the Launcher process runs the Guest, it doesn't have to be very
serious about its barriers: the Guest isn't running while we are (Guest
is UP).
Before we change to use threads to service devices, we need to fix this.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Copy from arch/x86/kernel/irqinit_32.c: we don't use the vectors beyond
LGUEST_IRQS (if any), but we might as well set them all.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We hand the /dev/lguest fd everywhere; it's far neater to just make it
a global (it already is, in fact, hidden in the waker_fds struct).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We can't trust the values in the device descriptor table once the
Guest has booted, so keep local copies. The Guest could set them to
strange values and cause us to segv (they're 8-bit values, so it
can't make our pointers go too wild).
This becomes more important with the following patches, which read them.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This patch allows a virtio driver to use VIRTIO_DEV_ANY_ID for the
device id. This will be used by a test module that can be bound to
any virtio device.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This bug never appeared, since all current virtio drivers use
VIRTIO_DEV_ANY_ID for the vendor field. If a real vendor were used,
the check in virtio_id_match would be wrong: it returns 0 when
id->vendor == dev->id.vendor.
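For illustration, the corrected match would look roughly like this (a
simplified sketch of the virtio core helper):

    /* Match the device id first, then the vendor: VIRTIO_DEV_ANY_ID acts
     * as a wildcard, and a real vendor must be equal -- not unequal -- to
     * the device's vendor for the driver to match. */
    static inline int virtio_id_match(const struct virtio_device *dev,
                                      const struct virtio_device_id *id)
    {
            if (id->device != dev->id.device && id->device != VIRTIO_DEV_ANY_ID)
                    return 0;

            return id->vendor == VIRTIO_DEV_ANY_ID || id->vendor == dev->id.vendor;
    }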
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If the device fills fewer than 4 bytes of our random buffer, we hit a
BUG_ON. It's nicer to handle the case where it partially fills the
buffer (the protocol doesn't explicitly forbid that).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The remove member of the virtio_driver structure uses __devexit_p(), so
the remove function itself should be marked with __devexit. And where
there be __devexit on the remove, so shall there be __devinit on the probe.
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Add a new feature flag for indirect ring entries. These are ring
entries which point to a table of buffer descriptors.
The idea here is to increase the effective ring capacity: the ring
size then dictates the number of requests that may be outstanding,
rather than the size of those requests.
This should be most effective in the case of block I/O where we can
potentially benefit by concurrently dispatching a large number of
large requests. Even in the simple case of single segment block
requests, this results in a threefold increase in ring capacity.
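For reference, a sketch of the descriptor layout involved (per the virtio
ring ABI; the indirect flag is the new bit):

    /* With VRING_DESC_F_INDIRECT set, a single ring slot no longer
     * describes one buffer: addr points at an out-of-ring array of
     * descriptors and len gives that table's size in bytes, so a whole
     * multi-segment request consumes just one slot in the ring. */
    struct vring_desc {
            __u64 addr;     /* guest-physical address of buffer (or table) */
            __u32 len;      /* buffer length (or table size in bytes) */
            __u16 flags;    /* VRING_DESC_F_NEXT / _WRITE / _INDIRECT */
            __u16 next;     /* next descriptor in chain, if _NEXT is set */
    };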
Signed-off-by: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Drivers don't add transport features to their table, so we
shouldn't check these with virtio_check_driver_offered_feature().
We could perhaps add an ->offered_feature() virtio_config_op,
but that would be overkill for a consistency check like this.
Signed-off-by: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This implements optional MSI-X support in virtio_pci.
MSI-X is used whenever the host supports at least 2 MSI-X
vectors: 1 for configuration changes and 1 for virtqueues.
Per-virtqueue vectors are allocated if enough vectors are available.
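The allocation policy, in rough pseudo-C (the helper names here are
hypothetical, not virtio_pci's actual functions):

    /* Preferred: one vector per virtqueue plus one for config changes.
     * Fallback: two vectors (config changes + all virtqueues shared).
     * Last resort: no MSI-X at all, use the legacy INTx interrupt. */
    if (try_msix_vectors(vp_dev, nvqs + 1, true /* per-vq */) == 0)
            return 0;
    if (try_msix_vectors(vp_dev, 2, false) == 0)
            return 0;
    return use_legacy_intx(vp_dev);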
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (+ whitespace, style)
This reorganizes virtio-pci code in vp_interrupt slightly, so that
it's easier to add per-vq MSI support on top.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This replaces find_vq/del_vq with find_vqs/del_vqs virtio operations,
and updates all drivers. This is needed for MSI support, because MSI
needs to know the total number of vectors upfront.
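The new config operations look roughly like this (prototypes reconstructed
from the description above, so treat the details as assumptions):

    /* Allocate all of a device's virtqueues in one call, so the transport
     * knows the total vector count up front; tear them down symmetrically. */
    int (*find_vqs)(struct virtio_device *vdev, unsigned nvqs,
                    struct virtqueue *vqs[],
                    vq_callback_t *callbacks[],
                    const char *names[]);
    void (*del_vqs)(struct virtio_device *vdev);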
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (+ lguest/9p compile fixes)
Add a linked list of all virtqueues for a virtio device: this helps with
debugging and is also needed for an upcoming interface change.
Also, add a "name" field for clearer debug messages.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Virtio devices are supposed to negotiate features before they start using
the device, but the current code doesn't do this. This is because the
driver's probe() function invariably has to add buffers to a virtqueue,
or probe the disk (virtio_blk).
This currently doesn't matter since no existing backend is strict about
the feature negotiation. But it's possible to imagine a future feature
which completely changes how a device operates: in this case, we'd need
to acknowledge it before using the device.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Everyone cut and pasted this comment from my original one. We now do
it generically, so cut the comments.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Amerigo Wang <amwang@redhat.com>
It's theoretically possible that there are exception table entries
which point into the (freed) init text of modules. These could cause
future problems if other modules get loaded into that memory and cause
an exception as we'd see the wrong fixup. The only case I know of is
kvm-intel.ko (when CONFIG_CC_OPTIMIZE_FOR_SIZE=n).
Amerigo fixed this long-standing FIXME in the x86 version, but this
patch is more general.
This implements trim_init_extable(); most archs are simple since they
use the standard lib/extable.c sort code. Alpha and IA64 use relative
addresses in their fixups, so their trimming is a slight variation.
Sparc32 is unique; it doesn't seem to define ARCH_HAS_SORT_EXTABLE,
yet it defines its own sort_extable() which overrides the one in lib.
It doesn't sort, so we have to mark deleted entries instead of
actually trimming them.
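For the sorted-table archs, the generic helper can just shave entries off
either end, along these lines (a sketch modelled on the lib/extable.c
approach; the details are assumptions):

    /* A module's init text lives in its own allocation, so in a table
     * sorted by address the init-text entries form one run at the start
     * or the end -- trim both ends rather than re-sorting anything. */
    void trim_init_extable(struct module *m)
    {
            while (m->num_exentries &&
                   within_module_init(m->extable[0].insn, m)) {
                    m->extable++;
                    m->num_exentries--;
            }
            while (m->num_exentries &&
                   within_module_init(m->extable[m->num_exentries - 1].insn, m))
                    m->num_exentries--;
    }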
Inspired-by: Amerigo Wang <amwang@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: linux-alpha@vger.kernel.org
Cc: sparclinux@vger.kernel.org
Cc: linux-ia64@vger.kernel.org
As Christoph Hellwig suggested, module_alloc() can actually be
unified for i386 and x86_64 (and, of course, UML).
Signed-off-by: WANG Cong <amwang@redhat.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: 'Ingo Molnar' <mingo@elte.hu>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Due to the previous merge, uml needs to be fixed.
Signed-off-by: WANG Cong <amwang@redhat.com>
Cc: Jeff Dike <jdike@addtoit.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Merge the functions that are identical in module_32.c and module_64.c into
module.c.
This is the first step towards merging the two files completely.
Signed-off-by: WANG Cong <amwang@redhat.com>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
1) module_param(..., invbool, ...) now requires a bool, and
module_param(..., bool, ...) now allows one, so change pmi_setpal to a bool.
2) #define param_get_scroll to NULL, since it can never be called (perm
argument to module_param_named is 0).
3) Return -EINVAL from param_set_scroll if the value is bad, so it's
reported.
Note that I don't think the old fb_get_options() is required for new
drivers: the parameters automatically work as uvesafb.XXX=... anyway.
Acked-by: Michał Januszewski <spock@gentoo.org>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Impact: API cleanup
For historical reasons, 'bool' parameters must be an int, not a bool.
But there are around 600 users, so a conversion seems like useless churn.
So we use __same_type() to distinguish, and handle both cases.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Impact: new API
__builtin_types_compatible_p() is a little awkward to use: it takes two
types rather than types or variables, and it's just damn long.
(typeof(type) == type, so this works on types as well as vars).
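The resulting wrapper is a one-liner (sketch):

    /* typeof() of a type is just the type, so this works for both. */
    #define __same_type(a, b) __builtin_types_compatible_p(typeof(a), typeof(b))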
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Impact: cleanup
Rather than hack KPARAM_KMALLOCED into the perm field, separate it out.
Since the perm field was 32 bits and only needs 16, we don't add bloat.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It takes an 'int' for historical reasons, and there are only two
users: simply switch it over to bool.
The other user (uvesafb.c) will get a (harmless-on-x86) warning until
the next patch is applied.
Cc: Brad Douglas <brad@neruo.com>
Cc: Michal Januszewski <spock@gentoo.org>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Russell explains the __module_get():
> cyber2000fb.c does it in its module initialization function
> to prevent the module (when built for Shark) from being unloaded. It
> does this because it's from the days of 2.2 kernels and no one bothered
> writing the module unload support for Shark.
Since 2.4, the correct answer has been to not define an unload fn.
Cc: Russell King <rmk+lkml@arm.linux.org.uk>
Cc: alex@shark-linux.de
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
SLAB is now configured at a very early stage, so it can be used from init
routines. But replacing alloc_bootmem() in FLAT/DISCONTIGMEM's page_cgroup()
initialization breaks the allocation. (It works well in the SPARSEMEM case:
SPARSEMEM supports MEMORY_HOTPLUG, and its page_cgroup allocations are of
reasonable size, < 1 << MAX_ORDER.)
This patch revives FLATMEM + memory cgroup by going back to alloc_bootmem.
In the future, we will either stop supporting FLATMEM (if there are no users)
or rewrite the code for FLATMEM completely, but that would add messier code
and more overhead.
Reported-by: Li Zefan <lizf@cn.fujitsu.com>
Tested-by: Li Zefan <lizf@cn.fujitsu.com>
Tested-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
This patch adds the ability to trace various aspects of the GFS2
filesystem. The trace points are divided into three groups:
glocks, logging and bmap. These points have been chosen because
they allow inspection of the major internal functions of GFS2
and they are also generic enough that they are unlikely to need
any major changes as the filesystem evolves.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
_sdata is a common symbol defined by many architectures and made
available to the kernel via asm-generic/sections.h. Kmemleak uses this
symbol when scanning the data sections.
[ Impact: add new global symbol ]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
LKML-Reference: <20090511122105.26556.96593.stgit@pc1117.cambridge.arm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs-2.6: (87 commits)
nilfs2: get rid of bd_mount_sem use from nilfs
nilfs2: correct exclusion control in nilfs_remount function
nilfs2: simplify remaining sget() use
nilfs2: get rid of sget use for checking if current mount is present
nilfs2: get rid of sget use for acquiring nilfs object
nilfs2: remove meaningless EBUSY case from nilfs_get_sb function
remove the call to ->write_super in __sync_filesystem
nilfs2: call nilfs2_write_super from nilfs2_sync_fs
jffs2: call jffs2_write_super from jffs2_sync_fs
ufs: add ->sync_fs
sysv: add ->sync_fs
hfsplus: add ->sync_fs
hfs: add ->sync_fs
fat: add ->sync_fs
ext2: add ->sync_fs
exofs: add ->sync_fs
bfs: add ->sync_fs
affs: add ->sync_fs
sanitize ->fsync() for affs
repair bfs_write_inode(), switch bfs to simple_fsync()
...
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/gerg/m68knommu:
m68knommu: remove unecessary include of thread_info.h in entry.S
m68knommu: enumerate INIT_THREAD fields properly
headers_check fix: m68k, swab.h
arch/m68knommu: Convert #ifdef DEBUG printk(KERN_DEBUG to pr_debug(
m68knommu: remove obsolete reset code
m68knommu: move CPU reset code for the 5272 ColdFire into its platform code
m68knommu: move CPU reset code for the 528x ColdFire into its platform code
m68knommu: move CPU reset code for the 527x ColdFire into its platform code
m68knommu: move CPU reset code for the 523x ColdFire into its platform code
m68knommu: move CPU reset code for the 520x ColdFire into its platform code
m68knommu: add CPU reset code for the 532x ColdFire
m68knommu: add CPU reset code for the 5249 ColdFire
m68knommu: add CPU reset code for the 5206e ColdFire
m68knommu: add CPU reset code for the 5206 ColdFire
m68knommu: add CPU reset code for the 5407 ColdFire
m68knommu: add CPU reset code for the 5307 ColdFire
m68knommu: merge system reset for code ColdFire 523x family
m68knommu: fix system reset for ColdFire 527x family
So we make sure MAXSMP gets a cleared cpumask
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This removes every bd_mount_sem use in nilfs.
The intended exclusion control for nilfs_remount() was already replaced
by the previous patch ("nilfs2: correct exclusion control in
nilfs_remount function"), and this patch replaces the remaining uses
with a new mutex inserted in the nilfs object.
Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
Cc: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
nilfs_remount() changes the mount state of a superblock instance. Even
though nilfs accesses other superblock instances during mount or
remount, the mount state was not properly protected in
nilfs_remount().
Moreover, nilfs_remount() has a lock order reversal problem;
nilfs_get_sb() holds:
1. bdev->bd_mount_sem
2. sb->s_umount (sget acquires)
and nilfs_remount() holds:
1. sb->s_umount (locked by the caller in vfs)
2. bdev->bd_mount_sem
To avoid these problems, this patch splits a separate semaphore for
protecting super block instances out of nilfs->ns_sem, and applies it to
the mount state protection in nilfs_remount().
With this change, the bd_mount_sem use is removed from nilfs_remount() and
the lock order reversal is resolved. The new rw-semaphore,
nilfs->ns_super_sem, properly protects the mount state, except for
modifications from the nilfs_error function.
Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>