Disable MSI support in the HD-audio driver by default, since there are too
many broken devices.
The module option is changed from disable_msi to enable_msi, too. To turn
MSI support on, pass enable_msi=1 instead.
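For reference, the option boils down to something like the sketch below
(not the actual hda_intel.c hunk; the probe-path use is illustrative):
    static int enable_msi;                  /* MSI stays off unless requested */
    module_param(enable_msi, int, 0444);
    MODULE_PARM_DESC(enable_msi, "Enable Message Signaled Interrupt (MSI)");
    /* ... later, in the PCI probe path ... */
    if (enable_msi)
            pci_enable_msi(pci);            /* only opt in when the user asked */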
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
port is dereferenced even when it is NULL. Dereference it _after_ the
check if (!port)... Thanks to Eric <ef87@yahoo.com> for reporting this.
This fixes
http://bugzilla.kernel.org/show_bug.cgi?id=7527
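The fix is purely an ordering change; a minimal sketch (the field name is
a placeholder, not the driver's exact code):
    if (!port)
            return;                 /* bail out before touching the structure */
    info = port->info;              /* safe: dereferenced only after the check */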
Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* 'for-linus' of git://one.firstfloor.org/home/andi/git/linux-2.6:
[PATCH] x86-64: Fix race in exit_idle
[PATCH] x86-64: Fix vgetcpu when CONFIG_HOTPLUG_CPU is disabled
[PATCH] x86: Add acpi_user_timer_override option for Asus boards
[PATCH] x86-64: setup saved_max_pfn correctly (kdump)
[PATCH] x86-64: Handle reserve_bootmem_generic beyond end_pfn
[PATCH] x86-64: shorten the x86_64 boot setup GDT to what the comment says
[PATCH] x86-64: Fix PTRACE_[SG]ET_THREAD_AREA regression with ia32 emulation.
[PATCH] x86-64: Fix partial page check to ensure unusable memory is not being marked usable.
Revert "[PATCH] MMCONFIG and new Intel motherboards"
This reverts commit 0130b0b32e.
Sergey Vlasov points out (and Vadim Lobanov concurs) that the bug it was
supposed to fix must be some unrelated memory corruption, and the "fix"
actually causes more problems:
"However, the new code does not look safe in all cases. If some other
task has opened more files while dup_fd() released oldf->file_lock, the
new code will update open_files to the new larger value. But newf was
allocated with the old smaller value of open_files, therefore subsequent
accesses to newf may try to write into unallocated memory."
so revert it.
Cc: Sharyathi Nagesh <sharyath@in.ibm.com>
Cc: Sergey Vlasov <vsu@altlinux.ru>
Cc: Vadim Lobanov <vlobanov@speakeasy.net>
Cc: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Commit cb07c9a186 causes the wrong return
value: is_hugepage_only_range() is a boolean, so we should return
-EINVAL rather than 1.
Also, we can use "mm" instead of looking up "current->mm" again.
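A minimal sketch of the corrected pattern, assuming mm, addr and len are
already in scope as in the surrounding code:
    if (is_hugepage_only_range(mm, addr, len))
            return -EINVAL;         /* return an errno, not the boolean itself */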
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
When building a monolithic kernel, the load order of drivers does not
work for SAS libata users, resulting in a kernel oops.
Convert libata to use subsys_initcall instead of module_init, which
ensures that libata gets loaded before any LLDD.
This is the same thing that scsi core does to solve the problem. The
load order problem was observed on ipr SAS adapters and should exist for
other SAS users as well.
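A sketch of the change, assuming the usual libata entry point name:
    static int __init ata_init(void)
    {
            /* ... set up libata core state ... */
            return 0;
    }
    subsys_initcall(ata_init);      /* was module_init(ata_init); now runs
                                       before the LLDDs' device-level initcalls */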
Signed-off-by: Brian King <brking@us.ibm.com>
Acked-by: Jeff Garzik <jgarzik@pobox.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Unlike mmap(), the codepath for brk() creates a vma without first checking
that it doesn't touch a region exclusively reserved for hugepages. On
powerpc, this can allow it to create a normal page vma in a hugepage
region, causing oopses and other badness.
Add a test to prevent this. With this patch, brk() will simply fail if it
attempts to move the break into a hugepage reserved region.
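The added test looks roughly like this (placement and arguments are
illustrative, not the exact hunk):
    /* refuse to grow the break into a hugepage-only region */
    if (is_hugepage_only_range(mm, oldbrk, newbrk - oldbrk))
            goto out;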
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Cc: Adam Litke <agl@us.ibm.com>
Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
(David:)
If hugetlbfs_file_mmap() returns a failure to do_mmap_pgoff() - for example,
because the given file offset is not hugepage aligned - then do_mmap_pgoff
will go to the unmap_and_free_vma backout path.
But at this stage the vma hasn't been marked as hugepage, and the backout path
will call unmap_region() on it. That will eventually call down to the
non-hugepage version of unmap_page_range(). On ppc64, at least, that will
cause serious problems if there are any existing hugepage pagetable entries in
the vicinity - for example if there are any other hugepage mappings under the
same PUD. unmap_page_range() will trigger a bad_pud() on the hugepage pud
entries. I suspect this will also cause bad problems on ia64, though I don't
have a machine to test it on.
(Hugh:)
prepare_hugepage_range() should check file offset alignment when it checks
virtual address and length, to stop MAP_FIXED with a bad huge offset from
unmapping before it fails further down. PowerPC should apply the same
prepare_hugepage_range alignment checks as ia64 and all the others do.
Then none of the alignment checks in hugetlbfs_file_mmap are required (nor
is the check for too small a mapping); but even so, move up setting of
VM_HUGETLB and add a comment to warn of what David Gibson discovered - if
hugetlbfs_file_mmap fails before setting it, do_mmap_pgoff's unmap_region
when unwinding from error will go the non-huge way, which may cause bad
behaviour on architectures (powerpc and ia64) which segregate their huge
mappings into a separate region of the address space.
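A sketch of the ordering Hugh describes for hugetlbfs_file_mmap()
(illustrative, not the exact hunk):
    vma->vm_flags |= VM_HUGETLB | VM_RESERVED;  /* set before anything can fail */
    vma->vm_ops = &hugetlb_vm_ops;
    /* only then do the checks and reservations that may return an error,
     * so an error unwind in do_mmap_pgoff() takes the hugepage path */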
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Acked-by: Adam Litke <agl@us.ibm.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Looks like I still take care of the USB gadget/peripheral framework.
Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
Acked-by: Greg KH <greg@kroah.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Fix a binary/logical operator typo which leads to unreachable code. Noticed
while looking at other issues; I don't have the relevant hardware to test
this.
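The general shape of the bug, with a made-up flag name for illustration:
    if (status && FB_STAT_BUSY)     /* bug: logical AND, effectively "status != 0" */
            ...
    if (status & FB_STAT_BUSY)      /* fix: bitwise AND tests the intended bit */
            ...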
Signed-off-by: Nathan Lynch <ntl@pobox.com>
Cc: "Antonino A. Daplas" <adaplas@pol.net>
Acked-by: James Simmons <jsimmons@infradead.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Resolve the panic on failed mount of an autofs filesystem originally
reported by Mao Bibo.
It addresses two issues that occur after the mount failure: first, a NULL
pointer dereference of a field (pipe) in the autofs superblock info
structure, and second, the lack of super block cleanup by the autofs and
autofs4 modules.
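A rough sketch of the two pieces (field and function names are
approximate):
    if (!sbi->pipe)
            goto out;               /* mount failed: the pipe was never set up */
    ...
    static void autofs4_kill_sb(struct super_block *sb)
    {
            struct autofs_sb_info *sbi = autofs4_sbi(sb);
            kill_anon_super(sb);    /* generic superblock teardown */
            kfree(sbi);             /* free what the failed mount left behind */
    }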
Signed-off-by: Ian Kent <raven@themaw.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Stray bracket in debug code.
Signed-off-by: Nicolas Kaiser <nikai@nikai.net>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Fix http://bugzilla.kernel.org/show_bug.cgi?id=7264
We need to target this quirk a little more tightly, using the T20 DMI string.
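DMI-keyed targeting of this kind looks roughly like the sketch below (the
match string and helper are examples, not the exact patch):
    static struct dmi_system_id __initdata t20_dmi_table[] = {
            {
                    .ident   = "IBM ThinkPad T20",
                    .matches = { DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T20") },
            },
            { }
    };
    ...
    if (dmi_check_system(t20_dmi_table))
            apply_t20_quirk();      /* hypothetical helper for the quirk */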
Cc: Pavel Kysilka <goldenfish@bsys.cz>
Acked-by: Kristen Carlson Accardi <kristen.c.accardi@intel.com>
Cc: Greg Kroah-Hartman <gregkh@suse.de>
Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Cc: Daniel Ritz <daniel.ritz@gmx.ch>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Fix interrupt routing for VIA 586 bridges. pirq can be 5, which needs to be
mapped to INTD, but currently the access functions can handle only pirq
1-4. This is similar to the other VIA chipsets, where pirq 4 and 5 are both
mapped to INTD. Fixes bugzilla #7490.
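The idea, sketched against the VIA 586 router (table values are
illustrative):
    static const unsigned int pirqmap[5] = { 3, 2, 5, 1, 1 };  /* 4 and 5 -> INTD */
    ...
    return read_config_nybble(router, 0x55, pirqmap[pirq - 1]);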
Cc: Daniel Paschka <monkey20181@gmx.net>
Cc: Adrian Bunk <bunk@susta.de>
Signed-off-by: Daniel Ritz <daniel.ritz@gmx.ch>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
When we get a mismatch between handlers on the same IRQ, all we get is "IRQ
handler type mismatch for IRQ n". Let's print the name of the
presently-registered handler with which we got the mismatch.
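Roughly, in setup_irq()'s mismatch path (assuming "old" is the
already-registered struct irqaction):
    printk(KERN_ERR "IRQ handler type mismatch for IRQ %d\n", irq);
    printk(KERN_ERR "current handler: %s\n", old->name);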
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
When another interrupt happens in exit_idle, the exit-idle notifier
could be called an incorrect number of times.
Add a test_and_clear_bit_pda() and use it to handle the bit
atomically against interrupts, avoiding this.
Pointed out by Stephane Eranian
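The resulting exit path looks roughly like this sketch (the accessor name
follows the description above; the rest is illustrative):
    static void __exit_idle(void)
    {
            if (test_and_clear_bit_pda(0, isidle) == 0)
                    return;         /* another path already left idle */
            atomic_notifier_call_chain(&idle_notifier, IDLE_END, NULL);
    }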
Signed-off-by: Andi Kleen <ak@suse.de>
The vgetcpu per-CPU initialization previously relied on CPU hotplug
events for all CPUs to initialize the per-CPU state. That worked
only on kernels with CONFIG_HOTPLUG_CPU enabled; on the others,
some CPUs didn't get their state initialized properly
and vgetcpu wouldn't work.
Change the initialization sequence to instead run in a normal
initcall (which runs after the normal CPU bootup) and initialize
all running CPUs there. Later hotplug CPUs are still handled
with a hotplug notifier.
This actually simplifies the code somewhat.
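A sketch of the new sequence (function and notifier names are
approximate):
    static int __init vsyscall_init_cpus(void)
    {
            /* initialize every CPU that is already online ... */
            on_each_cpu(cpu_vsyscall_init, NULL, 0, 1);
            /* ... and catch CPUs that are hotplugged later */
            hotcpu_notifier(cpu_vsyscall_notifier, 0);
            return 0;
    }
    __initcall(vsyscall_init_cpus);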
Signed-off-by: Andi Kleen <ak@suse.de>
Timer overrides are normally disabled on Nvidia boards because
they are commonly wrong, except on new ones with HPET support.
Unfortunately there are quite a few Asus boards around that
don't have HPET, but need a timer override.
We don't know yet how to handle this transparently,
but at least add a command line option to force the timer override
and let them boot.
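Such a boot option hook looks roughly like the sketch below (the option
and variable names are assumptions, not necessarily the exact patch):
    static int __init setup_acpi_use_timer_override(char *s)
    {
            acpi_skip_timer_override = 0;   /* honour the override even on Nvidia */
            return 0;
    }
    early_param("acpi_use_timer_override", setup_acpi_use_timer_override);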
Cc: len.brown@intel.com
Signed-off-by: Andi Kleen <ak@suse.de>
x86_64: setup saved_max_pfn correctly
2.6.19-rc4 has broken CONFIG_CRASH_DUMP support on x86_64: it is impossible
to read out the kernel contents from /proc/vmcore because saved_max_pfn is set
to zero instead of the max_pfn value before the user map is set up.
This happens because saved_max_pfn is initialized at parse_early_param() time,
when no active regions have been registered yet. saved_max_pfn is set up
from e820_end_of_ram(), more exactly find_max_pfn_with_active_regions(), which
returns 0 because no regions exist.
This patch fixes this by registering the active regions before, and removing
them after, the call to e820_end_of_ram().
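A sketch of the described ordering (illustrative):
    e820_register_active_regions(0, 0, -1UL);   /* make the e820 map visible */
    saved_max_pfn = e820_end_of_ram();          /* now returns the real max_pfn */
    remove_all_active_ranges();                 /* restore the previous state */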
Signed-off-by: Magnus Damm <magnus@valinux.co.jp>
Signed-off-by: Andi Kleen <ak@suse.de>
This can happen on kexec kernels with some configurations, in particular
on Unisys ES7000 systems.
Analysis by Amul Shah
Cc: Amul Shah <amul.shah@unisys.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Stephen Tweedie, Herbert Xu, and myself have been struggling with a very
nasty bug in Xen. But it also pointed out a small bug in the x86_64
kernel boot setup.
The GDT limit being setup by the initial bzImage code when entering into
protected mode is way too big. The comment by the code states that the
size of the GDT is 2048, but the actual size being set up is much bigger
(32768). This happens simply because of one extra '0'.
Instead of setting up a 0x800 size, 0x8000 is set up. On bare metal this
is fine because the CPU won't load any segments unless they are
explicitly used. But unfortunately, this breaks Xen on VMX FV, since it
(for now) blindly loads all the segments into the VMCS if they are less
than the GDT limit. Since the real mode segments are around 0x3000, we are
getting junk into the VMCS and that later causes an exception.
Stephen Tweedie has written up a patch to fix the Xen side and will be
submitting that to those folks. But that doesn't excuse the GDT limit
being an order of magnitude too big.
AK: changed to compute true gdt size in assembler, fixed comment
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Andi Kleen <ak@suse.de>
ptrace(PTRACE_[SG]ET_THREAD_AREA) calls from ia32 code
should be passed on to the x86_64 implementation.
The default case in sys32_ptrace used to call sys_ptrace(), but now
returns EINVAL. This patch fixes a regression caused by that change.
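The forwarding amounts to something like this sketch in sys32_ptrace():
    case PTRACE_GET_THREAD_AREA:
    case PTRACE_SET_THREAD_AREA:
            return sys_ptrace(request, pid, addr, data);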
Signed-off-by: Mike McCormack <mike@codeweavers.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Fix the partial page check in e820_register_active_regions to ensure that
partial pages are not marked as active in the memory pool.
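The idea, sketched with explicit arithmetic (names are illustrative, not
the exact e820 helper):
    start_pfn = (start + PAGE_SIZE - 1) >> PAGE_SHIFT;  /* round up to a page */
    end_pfn   = end >> PAGE_SHIFT;                      /* round down to a page */
    if (start_pfn >= end_pfn)
            continue;       /* no complete page in this range; skip it */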
Signed-off-by: Aaron Durbin <adurbin@google.com>
Signed-off-by: Andi Kleen <ak@suse.de>
OP_MAX_COUNTER is never referenced, and is a remnant of an earlier
oprofile implementation. Remove it.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
A curious thing happens, however, when ata_qc_new_init fails to get
an ata_queued_cmd:
First, ata_qc_new_init handles the failure like this:
    cmd->result = (DID_OK << 16) | (QUEUE_FULL << 1);
    done(cmd);
Then, we return to ata_scsi_translate and do this:
    err_mem:
            cmd->result = (DID_ERROR << 16);
            done(cmd);
It appears to me that first we set a status code indicating that we're
ok but the device queue is full and finish the command, but then
we blow away that status code and replace it with an error flag and
finish the command a second time! That does not seem to be desirable
behavior since we merely want the I/O to wait until a command slot
frees up, not send errors up the block layer.
In the err_mem case, we should simply exit out of ata_scsi_translate
instead.
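In other words, roughly (a sketch, not the exact hunk):
    err_mem:
            /* ata_qc_new_init() already completed cmd with QUEUE_FULL;
             * back out without setting DID_ERROR or calling done() again */
            goto out;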
Signed-off-by: Darrick J. Wong <djwong@us.ibm.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Helps for PATA, but SATA bridged devices lie and always set all the bits,
so they will need the error handling fixes from Tejun.
Signed-off-by: Alan Cox <alan@redhat.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/drzeus/mmc:
MMC: Do not set unsupported bits in OCR response
MMC: Poll card status after rescanning cards
* 'for-linus' of master.kernel.org:/pub/scm/linux/kernel/git/roland/infiniband:
IB/mad: Fix race between cancel and receive completion
RDMA/amso1100: Fix && typo
RDMA/amso1100: Fix uninitialized pseudo_netdev accessed in c2_register_device
IB/ehca: Activate scaling code by default
IB/ehca: Use named constant for max mtu
IB/ehca: Assure 4K alignment for firmware control blocks
We should only set ->errors to CHECK_CONDITION and so on for requests
that use this field in the SCSI manner.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Contrary to what the name misleads you to believe, SG_DXFER_TO_FROM_DEV
is really just a normal read seen from the device side.
This patch fixes http://lkml.org/lkml/2006/10/13/100
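The direction mapping this implies looks roughly like the following
(illustrative, not the exact block-layer hunk):
    switch (hdr->dxfer_direction) {
    case SG_DXFER_TO_DEV:
            writing = 1;            /* host -> device */
            break;
    case SG_DXFER_TO_FROM_DEV:
    case SG_DXFER_FROM_DEV:
            writing = 0;            /* data comes back from the device */
            break;
    }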
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
When ib_cancel_mad() is called, it puts the canceled send on a list
and schedules a "flushed" callback from process context. However,
this leaves a window where a receive completion could be processed
before the send is fully flushed.
This is fine, except that ib_find_send_mad() will find the MAD and
return it to the receive processing, which results in the sender
getting both a successful receive and a "flushed" send completion for
the same request. Understandably, this confuses the sender, which is
expecting only one of these two callbacks, and leads to grief such as
a use-after-free in IPoIB.
Fix this by changing ib_find_send_mad() to return a send struct only
if the status is still successful (and not "flushed"). The search of
the send_list already had this check, so this patch just adds the same
check to the search of the wait_list.
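A sketch of the wait_list search with the added check (the matching logic
is elided and the names are approximate):
    list_for_each_entry(mad_send_wr, &mad_agent_priv->wait_list, agent_list) {
            /* ... match against the received MAD as before ... */
            if (mad_send_wr->status == IB_WC_SUCCESS)
                    return mad_send_wr;     /* flushed sends are no longer returned */
    }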
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Fix the AMSO1100 firmware version computation, which was broken
due to "&&" being used where "&" should have been.
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Rework some load-time error handling: c2_register_device() leaked when
it failed, and the function that called it didn't check the return code.
Signed-off-by: Tom Tucker <tom@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Change ehca's Kconfig to activate the scaling code by default. After
several measurements we saw that this feature prevents dropped packets
(UD) in stress situations. Thus, enabling it helps to improve ehca's
bandwidth through IPoIB.
Signed-off-by: Hoang-Nam Nguyen <hnguyen@de.ibm.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Define and use a constant EHCA_MAX_MTU instead of a hardcoded value.
Signed-off-by: Hoang-Nam Nguyen <hnguyen@de.ibm.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
* 'master' of master.kernel.org:/pub/scm/linux/kernel/git/mchehab/v4l-dvb:
V4L/DVB (4818): Flexcop-usb: fix debug printk
V4L/DVB (4817): Fix uses of "&&" where "&" was intended
V4L/DVB (4816): Change tuner type for Avermedia A16AR
V4L/DVB (4815): Remote support for Avermedia A16AR
V4L/DVB (4814): Remote support for Avermedia 777
V4L/DVB (4804): Fix missing i2c dependency for saa7110
V4L/DVB (4802): Cx88: fix remote control on WinFast 2000XP Expert
V4L/DVB (4795): Tda826x: use correct max frequency
* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc:
[POWERPC] cell: set ARCH_SPARSEMEM_DEFAULT in Kconfig
[POWERPC] Fix cell "new style" mapping and add debug
[POWERPC] pseries: Force 4k update_flash block and list sizes
[POWERPC] CPM_UART: Fix non-console initialisation
[POWERPC] CPM_UART: Fix non-console transmit
[POWERPC] Make sure initrd and dtb sections get into zImage correctly
* git://oss.sgi.com:8090/xfs/xfs-2.6:
[XFS] Remove KERNEL_VERSION macros from xfs_dmapi.h
[XFS] Prevent a deadlock when xfslogd unpins inodes.
[XFS] Clean up i_flags and i_flags_lock handling.
[XFS] 956664: dm_read_invis() changes i_atime
[XFS] rename uio_read() to xfs_uio_read()
[XFS] Keep lockdep happy.
[XFS] 956618: Linux crashes on boot with XFS-DMAPI filesystem when
* master.kernel.org:/pub/scm/linux/kernel/git/sfrench/cifs-2.6:
[CIFS] Fix minor problem with previous patch
[CIFS] Fix mount failure when domain not specified
[CIFS] Explicitly set stat->blksize
[CIFS] NFS stress test generates flood of "close with pending write" messages
This patch (as810c) copies a minimum of 36 bytes of INQUIRY data, even if
the device claims that not all of them are valid. Often badly behaved
devices put plausible data in the Vendor, Product, and Revision strings but
set the Additional Length byte to a small value. Using potentially valid
data is certainly better than allocating a short buffer and then reading
beyond the end of it, which is what we do now.
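The idea, sketched (field names are illustrative):
    response_len = inq_result[4] + 5;   /* ADDITIONAL LENGTH byte + header */
    if (response_len < 36)
            response_len = 36;          /* never trust less than the standard 36 bytes */
    memcpy(sdev->inquiry, inq_result, response_len);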
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Cc: James Bottomley <James.Bottomley@steeleye.com>
Cc: Greg KH <greg@kroah.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
While testing a kernel on a machine with the "irqpoll" option, I caught
the following lockup:
    __do_IRQ()
        spin_lock(&desc->lock);
        desc->chip->ack(); /* IRQ is ACKed */
        note_interrupt()
            misrouted_irq()
                handle_IRQ_event()
                    if (...)
                        local_irq_enable_in_hardirq();
                        /* interrupts are enabled from now */
    ...
    __do_IRQ() /* same IRQ we've started from */
        spin_lock(&desc->lock); /* LOCKUP */
Looking at misrouted_irq() code I've found that a potential deadlock like
this can also take place:
    1CPU:
        __do_IRQ()
            spin_lock(&desc->lock); /* irq = A */
            misrouted_irq()
                for (i = 1; i < NR_IRQS; i++) {
                    spin_lock(&desc->lock); /* irq = B */
                    if (desc->status & IRQ_INPROGRESS) {

    2CPU:
        __do_IRQ()
            spin_lock(&desc->lock); /* irq = B */
            misrouted_irq()
                for (i = 1; i < NR_IRQS; i++) {
                    spin_lock(&desc->lock); /* irq = A */
                    if (desc->status & IRQ_INPROGRESS) {
As the second lock on both CPUs is taken before checking whether this irq is
being handled on another processor, this may cause a deadlock. The issue
is only theoretical.
I propose the attached patch to fix both problems: when trying to handle a
misrouted IRQ, the active desc->lock may be dropped.
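Illustrative sketch only (not necessarily the exact upstream fix): skip
the IRQ we came from, skip descriptors already in progress, and never spin
on another desc->lock while holding our own:
    for (i = 1; i < NR_IRQS; i++) {
            struct irq_desc *desc = irq_desc + i;

            if (i == irq)
                    continue;       /* the interrupt we are already servicing */
            if (!spin_trylock(&desc->lock))
                    continue;       /* busy on another CPU; don't deadlock */
            if (!(desc->status & IRQ_INPROGRESS) && desc->action)
                    /* ... try this descriptor's handlers ... */;
            spin_unlock(&desc->lock);
    }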
Acked-by: Ingo Molnar <mingo@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
While running a stress test on a machine for more than 72 hours, the
following error message was observed.
0:mon> e
cpu 0x0: Vector: 300 (Data Access) at [c00000007ce2f7f0]
pc: c000000000060d90: .dup_fd+0x240/0x39c
lr: c000000000060d6c: .dup_fd+0x21c/0x39c
sp: c00000007ce2fa70
msr: 800000000000b032
dar: ffffffff00000028
dsisr: 40000000
current = 0xc000000074950980
paca = 0xc000000000454500
pid = 27330, comm = bash
0:mon> t
[c00000007ce2fa70] c000000000060d28 .dup_fd+0x1d8/0x39c (unreliable)
[c00000007ce2fb30] c000000000060f48 .copy_files+0x5c/0x88
[c00000007ce2fbd0] c000000000061f5c .copy_process+0x574/0x1520
[c00000007ce2fcd0] c000000000062f88 .do_fork+0x80/0x1c4
[c00000007ce2fdc0] c000000000011790 .sys_clone+0x5c/0x74
[c00000007ce2fe30] c000000000008950 .ppc_clone+0x8/0xc
The problem is caused by a race window: when the if (expand) block is
executed in dup_fd(), the unlocking of oldf->file_lock gives a window for
the fdtable in oldf to be modified, so the actual number of open files in
oldf may no longer match the open_files variable.
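Roughly, the racy shape in dup_fd() is (a simplified sketch):
    spin_lock(&oldf->file_lock);
    open_files = count_open_files(old_fdt);     /* sampled under the lock */
    spin_unlock(&oldf->file_lock);              /* dropped: allocation may sleep */
    new_fdt = alloc_fdtable(open_files);        /* sized from the stale sample */
    spin_lock(&oldf->file_lock);                /* old_fdt may have grown meanwhile */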
Cc: Vadim Lobanov <vlobanov@speakeasy.net>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>