The windfarm_pm112 module relies on smu_sat_get_sdb_partition(), which lives in
windfarm_smu_sat.c but is not exported to modules. So although Kconfig offers
the option to build pm112 as a module, that module can never be loaded.
This patch fixes that by exporting smu_sat_get_sdb_partition with
EXPORT_SYMBOL_GPL.
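A minimal sketch of the change: next to the function's definition in
windfarm_smu_sat.c (assuming linux/module.h is already included there, as it
is for any module), add

    EXPORT_SYMBOL_GPL(smu_sat_get_sdb_partition);

so that windfarm_pm112 can resolve the symbol when it is loaded as a module.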
Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
There's a rather theoretical case of the BUG triggering in
fuse_reset_request():
- iget() fails because of OOM after a successful CREATE_OPEN request
- during I/O on the resulting RELEASE request the connection is aborted
Fix this and add a warning to fuse_reset_request().
Signed-off-by: Miklos Szeredi <miklos@szeredi.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Restore compatibility with the older code and make it possible to suspend
if the kernel command line doesn't contain the "resume=" argument.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Cc: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Just rename the compat system call to keep the name consistent with all the
other *64 compat system calls.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The last changes that introduced the additional_cpus command line parameter
also introduced a regression regarding smp initialization speed. In
smp_setup_cpu_possible_map() cpu_present_map is set to the same value as
cpu_possible_map. In particular, this means that bits in the present map will be
set for cpus that are not present. This causes a slowdown in the initial
cpu_up() loop in smp_init(), since trying to bring cpus online that aren't
present takes a while.
Fix this by setting only the bits for present cpus in cpu_present_map, and set
cpu_present_map to cpu_possible_map in smp_cpus_done().
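A rough sketch of the intended split (the detection test is hypothetical; the
cpumask helpers are the generic ones):

    /* smp_setup_cpu_possible_map(): mark only cpus that were actually detected */
    for (cpu = 0; cpu < NR_CPUS; cpu++)
            if (cpu_detected(cpu))          /* hypothetical "is this cpu there?" test */
                    cpu_set(cpu, cpu_present_map);

    /* smp_cpus_done(): widen the present map only after the initial cpu_up() loop */
    void __init smp_cpus_done(unsigned int max_cpus)
    {
            cpu_present_map = cpu_possible_map;
    }

This way smp_init() only tries to bring up cpus that really exist, while cpu
hotplug can still use the full possible map later on.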
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Introduce the possible_cpus command line option. It hard-sets the number of bits
set in cpu_possible_map. Unlike the additional_cpus parameter, this one guarantees
that num_possible_cpus() stays constant even if the system gets rebooted and a
different number of cpus is present at startup.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Introduce the additional_cpus command line option. By default no additional cpus
can be attached to the system anymore; only the cpus present at IPL time can
be switched on/off. If additional cpus should be attachable to the system, the
maximum number of additional cpus needs to be specified with this option.
This change is necessary in order to limit the waste of per_cpu data
structures.
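A minimal sketch of how such an option can be wired up (handler and variable
names are illustrative; the real parsing in the s390 setup code may differ):

    static int additional_cpus __initdata = 0;

    static int __init setup_additional_cpus(char *s)
    {
            additional_cpus = simple_strtoul(s, NULL, 0);
            return 1;
    }
    __setup("additional_cpus=", setup_additional_cpus);

Booting with e.g. additional_cpus=2 then reserves room for two cpus beyond
those present at IPL, so they can be attached later.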
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Set the preempt_count of the idle thread to zero before switching off a cpu.
Otherwise the preempt_count will be wrong when the cpu is switched on again,
since the idle thread gets reused.
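A minimal sketch of the idea, assuming the offline path runs on the cpu's idle
thread (exactly where this goes in the s390 code is not shown here):

    /* just before the cpu is switched off: reset the idle thread's count so
     * the reused thread starts clean when the cpu comes back online */
    task_thread_info(current)->preempt_count = 0;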
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
If __ccw_device_disband_start() fails to initiate disbanding, it should finish
with ccw_device_disband_done() (which leaves the device in offline state)
instead of ccw_device_verify_done() (which leaves the device in online state).
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Heiko Carstens <heiko.carstens@de.ibm.com> wrote:
The boot sequence on s390 sometimes takes ages and we spend a very long
time (up to one or two minutes) in calibrate_migration_costs. The time
spent there differs from boot to boot. Also the calculated costs differ
a lot. I've seen differences of up to a factor of 15 (yes, factor not
percent). Also I doubt that making these measurements makes much sense on
a completely virtualized architecture where you cannot tell how much cpu
time you will get anyway.
So introduce the CONFIG_DEFAULT_MIGRATION_COST method for an architecture
to set the scheduler migration costs. This turns off automatic detection of
migration costs, which makes sense on virtual platforms where migration costs
are hard to measure accurately.
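A rough sketch of the mechanism on the scheduler side (identifiers
illustrative, not the literal sched.c code):

    #ifdef CONFIG_DEFAULT_MIGRATION_COST
    static unsigned long long migration_cost = CONFIG_DEFAULT_MIGRATION_COST;
    #else
    static unsigned long long migration_cost = -1LL;    /* -1: measure at boot */
    #endif

    /* ... later, during scheduler setup ... */
    if (migration_cost == -1LL)
            calibrate_migration_costs();    /* only when no default was configured */

An architecture that knows calibration is meaningless (e.g. a fully virtualized
one) simply selects a fixed default in its Kconfig.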
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Jean-Luc Leger <reiga@dspnet.fr.eu.org> found this obvious typo.
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Fix an IO-port leak from request_region in case of an error during TPM
initialization, add more PnP verification, and fix a WTX bug.
Signed-off-by: Marcel Selhorst <selhorst@crypto.rub.de>
Acked-by: Kylene Jo Hall <kjhall@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Fix a deadlock possible in the ext2 file system implementation. This
deadlock occurs when a file is removed from an ext2 file system which was
mounted with the "sync" mount option.
The problem is that ext2_xattr_delete_inode() was invoking the routine,
sync_dirty_buffer(), using a buffer head which was previously locked via
lock_buffer(). The first thing that sync_dirty_buffer() does is to lock
the buffer head that it was passed. It does this via lock_buffer(). Oops.
The solution is to unlock the buffer head in ext2_xattr_delete_inode()
before invoking sync_dirty_buffer(). This makes the code in
ext2_xattr_delete_inode() obey the same locking rules as all other callers
of sync_dirty_buffer() in the ext2 file system implementation.
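A minimal sketch of the corrected ordering in ext2_xattr_delete_inode()
(surrounding logic omitted):

    lock_buffer(bh);
    /* ... drop the block from the xattr header while holding the lock ... */
    mark_buffer_dirty(bh);
    unlock_buffer(bh);              /* drop the lock first ...              */
    if (IS_SYNC(inode))
            sync_dirty_buffer(bh);  /* ... because this locks the bh again  */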
Signed-off-by: Peter Staubach <staubach@redhat.com>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
- Fix the array index value in ata_rwcmd_protocol() for the added FUA commands.
- Filter out ATAPI packet command error messages in ata_pio_error().
Signed-off-by: Albert Lee <albertcc@tw.ibm.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
* libata does not care about error interrupts, so handle them locally
* the interrupts that are ignored only appear to happen at init time
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
Change the find_next_best_node algorithm to correctly skip
over holes in the node online mask. Previously it would not handle
missing nodes correctly and cause crashes at boot.
[Written by Linus, tested by AK]
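Conceptually the candidate loop now skips offline node numbers instead of
stopping at the first hole; a rough sketch (not the literal x86-64 code):

    int n, best = -1, dist, best_dist = INT_MAX;

    for_each_node(n) {                      /* walk all node numbers ...         */
            if (!node_online(n))            /* ... but skip holes in the map     */
                    continue;
            if (node_isset(n, *used_nodes)) /* already chosen in an earlier pass */
                    continue;
            dist = node_distance(node, n);
            if (dist < best_dist) {
                    best_dist = dist;
                    best = n;
            }
    }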
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
bond_release returns EINVAL without releasing the bond lock if the
given device is not a slave of the bond. The following patch
ensures that the lock is released in this case.
Signed-off-by: Stephen J. Bevan <stephen@dino.dnsalias.com>
Acked-by: Jay Vosburgh <fubar@us.ibm.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
[patch 2/2] s390: some qeth driver fixes
From: Frank Pavlic <fpavlic@de.ibm.com>
- fix kernel panic when using EDDP support in Layer 2 mode.
- fix NULL pointer exception in qeth_set_offline.
- setting EDDP in Layer 2 mode did not set the NETIF_F_(SG/TSO)
  flags when the device became online.
- use sscanf for parsing and converting IPv4 addresses
  from string to __u8 values.
- fix qeth_string_to_ipaddr6: in case of a double colon, the IPv6
  address converted from the string was not correct in the previous
  implementation.
Signed-off-by: Frank Pavlic <fpavlic@de.ibm.com>
diffstat:
qeth.h | 112 +++++++++++++++++++++++++-----------------------------------
qeth_eddp.c | 11 ++++-
qeth_main.c | 17 +++------
3 files changed, 63 insertions(+), 77 deletions(-)
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
[patch 1/2] s390: lcs performance enhancements
From: Klaus Wacker <kdwacker@de.ibm.com>
- when flood pinging (with large packet size) an LCS device,
  about 90% of all packets were dropped by the driver.
- increased the number of lcs IO buffers to 32.
- use netif_stop_queue/netif_wake_queue in the lcs_start_xmit routine.
- don't lock the whole xmit routine, but just the piece of code where
  the tx_buffer is touched.
Signed-off-by: Frank Pavlic <fpavlic@de.ibm.com>
diffstat:
lcs.c | 31 +++++++++++++++++--------------
lcs.h | 2 +-
2 files changed, 18 insertions(+), 15 deletions(-)
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
drivers/net/tokenring/smctr.c: In function `smctr_load_firmware':
drivers/net/tokenring/smctr.c:2981: warning: assignment discards qualifiers from pointer target type
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
Users report problems with auto-negotiation disabled and the link set
to 100/Half or 10/Half. Problems range from poor performance to no
link at all.
The current sky2 code does not set things up properly on link up when
autonegotiation is disabled, and it does not handle a 10 Mbit setting
at all. This patch corrects that.
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
This is a clone of John Linville's fix for speed setting on the sky2 driver.
The skge driver has the same code (and bug): it would not allow manually
forcing 100 or 10 Mbit.
Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
Take the experimental dependency off the skge driver; it is as stable as the
others.
Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
The sk98lin driver was changed a while ago to remove support for the
D-Link 530T card because that hardware has no working VPD data. The help
text for Kconfig was not updated.
Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
Some bitfields were incorrectly initialised in wavelan_cs,
causing some compiler warnings. Also kill an error message that should
not be there.
Signed-off-by: Jean Tourrilhes <jt@hpl.hp.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
Do not mask TIF_SINGLESTEP bit in _TIF_WORK_MASK. Masking this stopped
do_notify_resume() from being called when it should have been.
Signed-off-by: Chuck Ebbert <76306.1226@compuserve.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
A sysfs function call uses the wrong parameter and thus breaks the build on
SGI O2.
CC drivers/video/gbefb.o
drivers/video/gbefb.c: In function ‘gbefb_remove’:
drivers/video/gbefb.c:1246: error: ‘dev’ undeclared (first use in this function)
drivers/video/gbefb.c:1246: error: (Each undeclared identifier is reported only once
drivers/video/gbefb.c:1246: error: for each function it appears in.)
make[2]: *** [drivers/video/gbefb.o] Error 1
Signed-off-by: Joshua Kinard <kumba@gentoo.org>
Signed-off-by: Antonino Daplas <adaplas@pol.net>
Signed-off-by: Martin Michlmayr <tbm@cyrius.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This provides an interface for arch code to find out how many
nanoseconds are going to be added on to xtime by the next call to
do_timer. The value returned is a fixed-point number in 52.12 format
in nanoseconds. The reason for this format is that it gives the
full precision that the timekeeping code is using internally.
The motivation for this is to fix a problem that has arisen on 32-bit
powerpc in that the value returned by do_gettimeofday drifts apart
from xtime if NTP is being used. PowerPC is now using a lockless
do_gettimeofday based on reading the timebase register and performing
some simple arithmetic. (This method of getting the time is also
exported to userspace via the VDSO.) However, the factor and offset
it uses were calculated based on the nominal tick length and weren't
being adjusted when NTP varied the tick length.
Note that 64-bit powerpc has had the lockless do_gettimeofday for a
long time now. It also had an extremely hairy routine that got called
from the 32-bit compat routine for adjtimex, which adjusted the
factor and offset according to what it thought the timekeeping code
was going to do. Not only was this only called if a 32-bit task did
adjtimex (i.e. not if a 64-bit task did adjtimex), it was also
duplicating computations from kernel/timer.c and it wasn't clear that
it was (still) correct.
The simple solution is to ask the timekeeping code how long the
current jiffy will be on each timer interrupt, after calling
do_timer. If this jiffy will be a different length from the last one,
we then need to compute new values for the factor and offset used in
the lockless do_gettimeofday. In this way we can keep xtime and
do_gettimeofday in sync, even when NTP is varying the tick length.
Note that when adjtimex varies the tick length, it almost always
introduces the variation from the next tick on. The only case I could
see where adjtimex would vary the length of the current tick is when
an old-style adjtime adjustment is being cancelled. (It's not clear
to me why the adjustment has to be cancelled immediately rather than
from the next tick on.) Thus I don't see any real need for a hook in
adjtimex; the rare case of an old-style adjustment being cancelled can
be fixed up at the next tick.
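A sketch of how an architecture's timer interrupt might consume this value
(the helper and the recompute function named here are assumptions for
illustration, not names taken from this patch):

    void arch_timer_interrupt(struct pt_regs *regs)
    {
            static u64 last_tick_len;
            u64 tick_len;

            do_timer(regs);                     /* advances jiffies and xtime    */

            tick_len = current_tick_length();   /* 52.12 fixed-point nanoseconds */
            /* tick_len >> 12 is the whole number of nanoseconds in this jiffy */
            if (tick_len != last_tick_len) {
                    /* NTP changed the tick length: redo the factor and offset
                     * used by the lockless do_gettimeofday() (and the VDSO) */
                    recompute_gtod_factors(tick_len);
                    last_tick_len = tick_len;
            }
    }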
Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: john stultz <johnstul@us.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The memory allocator doesn't like empty zones (which have an
uninitialized freelist), so an x86-64 system with a node that lies
entirely in GFP_DMA32 would crash on mbind.
Fix that up by putting all possible zones as fallback into the zonelist
and skipping the empty ones.
In fact the code always allocated enough space for all zones,
but only used it for the highest one. This change just uses all the
memory that was allocated before.
This should work fine for now, but whoever implements node hot removal
needs to fix this somewhere else too (or make sure the zone data structures
themselves never go away, only their memory).
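A rough sketch of the idea when filling the policy zonelist (illustrative,
not the literal mempolicy code):

    /* add the zones of one node, highest first, skipping empty ones */
    static int add_node_zones(struct zonelist *zl, pg_data_t *pgdat,
                              int nzones, int highest_zone)
    {
            int i;

            for (i = highest_zone; i >= 0; i--) {
                    struct zone *z = &pgdat->node_zones[i];

                    if (!z->present_pages)  /* empty zone: uninitialized freelist */
                            continue;
                    zl->zones[nzones++] = z;
            }
            return nzones;
    }

The caller NULL-terminates zl->zones[] after all nodes have been added, so the
allocator only ever sees zones that actually contain pages.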
Signed-off-by: Andi Kleen <ak@suse.de>
Acked-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Previously the numa hash code would be confused by holes in the node space
and stop early. This is the first part of the fix for the boot failure
with empty nodes on Opterons.
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The code was refusing good SRATs because about 12K got lost somewhere.
Allow less than 1MB of difference before rejecting it.
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
But do it after everything else to risk less from recursive
crashes.
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Many laptops have problems with ticking the local APIC timer in C2/C3.
The code added earlier to use it by default on ATI didn't really work
for them. Don't enable it when the system supports C2/C3.
This doesn't fix the problem fully, but at least it's not worse than before.
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This caused a sigreturn with bad argument on a preemptible kernel
to complain with
Debug: sleeping function called from invalid context at /home/lsrc/quilt/linux/include/linux/rwsem.h:43
in_atomic():0, irqs_disabled():1
Call Trace: {__might_sleep+190} {profile_task_exit+21}
{__do_exit+34} {do_wait+0}
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
AMD SimNow!'s JIT doesn't like them at all in the guest. For distribution
installation it's easiest if it's a boot time option.
Also I moved the variable to a more appropriate place and made it
independent of sysctl.
And marked it __read_mostly, which it is.
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Along with that, also suppress the memory touching altogether when the
watchdog is not running, to eliminate needless crosstalk. Plus add a call
to it to make things consistent (one could also consider removing the call
in enable_timer_nmi_watchdog()).
Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This patch fixes a number of bugs in the authentication process:
1) When falling back to Shared Key authentication mode from Open System,
a missing 'return' would cause the auth request to be sent, but would
drop the card into Management Error state. When falling back, the
driver should also indicate that it is switching to Shared Key mode by
setting exclude_unencrypted.
2) Initial authentication modes were apparently wrong in some cases,
causing the driver to attempt Shared Key authentication mode when in
fact the access point didn't support that mode or even had WEP disabled.
The driver should set the correct initial authentication mode based on
wep_is_on and exclude_unencrypted.
3) Authentication response packets from the access point in Open System
mode were getting ignored because the driver was expecting the sequence
number of a Shared Key mode response. The patch separates the OS and SK
mode handling to provide the correct behavior.
Signed-off-by: Dan Williams <dcbw@redhat.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
The previous patch that added ENCODEEXT and AUTH support to the atmel
driver contained a slight error which would cause just setting the TX
key index to also set the encryption key again. This patch allows any
combination of setting the TX key index and setting an encryption key.
Signed-off-by: Dan Williams <dcbw@redhat.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>