* 'for-linus' of git://one.firstfloor.org/home/andi/git/linux-2.6:
[PATCH] x86-64: Use stricter in process stack check for unwinder
[PATCH] i386: Fix compilation with UP genericarch
[PATCH] x86-64: Fix warning in io_apic.c
[PATCH] x86-64: work around gcc4 issue with -Os in Dwarf2 stack unwind
[PATCH] x86_64: Align data segment to PAGE_SIZE boundary
Previously it would check for alignment only, which could break
if the stack pointer was unaligned. Now explicitly check if the
stack pointer is in the stack page of the current process.
Ported from i386.
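A minimal sketch of such a range check (helper name and exact bounds are assumptions, not the patch itself):
/* hedged sketch: reject stack pointers outside the current task's stack page */
static inline int in_process_stack(struct task_struct *task, unsigned long sp)
{
	unsigned long stack = (unsigned long)task_stack_page(task);

	return sp > stack && sp < stack + THREAD_SIZE;
}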
Signed-off-by: Andi Kleen <ak@suse.de>
Commit 2c8c0e6b8d ("[PATCH] Convert x86-64
to early param") broke the earlyprintk=...,keep feature.
This restores that functionality. Tested on x86_64. Must-have for
v2.6.19, no risk.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
o Explicitly align the data segment to a PAGE_SIZE boundary; otherwise, depending on
config options and tool chain, it might be placed on a non PAGE_SIZE aligned
boundary, and vmlinux loaders like kexec fail when they encounter a
PT_LOAD type segment which is not aligned to a PAGE_SIZE boundary.
Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
The scheduler on Andreas Friedrich's hyperthreading system stopped
working properly: the scheduler would never move tasks to another CPU!
The last known working kernel was 2.6.8.
After a couple of attempts to corner the bug, the following smoking gun
was found:
BIOS reported wrong ACPI id for the processor
CPU#1: set_cpus_allowed(), swapper:1, 3 -> 2
[<c0103bbe>] show_trace_log_lvl+0x34/0x4a
[<c0103ceb>] show_trace+0x2c/0x2e
[<c01045f8>] dump_stack+0x2b/0x2d
[<c0116a77>] set_cpus_allowed+0x52/0xec
[<c0101d86>] cpu_idle_wait+0x2e/0x100
[<c0259c57>] acpi_processor_power_exit+0x45/0x58
[<c0259752>] acpi_processor_remove+0x46/0xea
[<c025c6fb>] acpi_start_single_object+0x47/0x54
[<c025cee5>] acpi_bus_register_driver+0xa4/0xd3
[<c04ab2d7>] acpi_processor_init+0x57/0x77
[<c01004d7>] init+0x146/0x2fd
[<c0103a87>] kernel_thread_helper+0x7/0x10
a quick look at cpu_idle_wait() shows how broken that code is
on i386: it changes the init task's affinity map but never
restores it ...
and because all userspace tasks get forked by init, they all
inherited that single-CPU affinity mask. x86_64 cloned this
bug too.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: Andreas Friedrich <andreas.friedrich@fujitsu-siemens.com>
Cc: Wolfgang Erig <Wolfgang.Erig@fujitsu-siemens.com>
Cc: Andrew Morton <akpm@osdl.org>
Cc: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
the new dwarf2 unwinder crashes while trying to dump the stack:
Leftover inexact backtrace:
Unable to handle kernel paging request at ffffffff82800000 RIP:
[<ffffffff8026cf26>] dump_trace+0x35b/0x3d2
PGD 203027 PUD 205027 PMD 0
Oops: 0000 [2] PREEMPT SMP
CPU 0
Modules linked in:
Pid: 30, comm: khelper Not tainted 2.6.19-rc6-rt1 #11
RIP: 0010:[<ffffffff8026cf26>] [<ffffffff8026cf26>] dump_trace+0x35b/0x3d2
RSP: 0000:ffff81003fb9d848 EFLAGS: 00010006
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
RDX: 0000000000000000 RSI: ffffffff805b3520 RDI: 0000000000000000
RBP: ffffffff827ffff9 R08: ffffffff80aad000 R09: 0000000000000005
R10: ffffffff80aae000 R11: ffffffff8037961b R12: ffff81003fb9d858
R13: 0000000000000000 R14: ffffffff80598460 R15: ffffffff80ab1fc0
FS: 0000000000000000(0000) GS:ffffffff806c4200(0000) knlGS:0000000000000000
CS: 0010 DS: 0018 ES: 0018 CR0: 000000008005003b
CR2: ffffffff82800000 CR3: 0000000000201000 CR4: 00000000000006e0
this crash happened because it did not sanitize the dwarf2 data it
got, and got an unaligned stack pointer - which happily walked past
the process stack (and eventually reached the end of kernel memory
and pagefaulted there) due to this naive iteration condition:
HANDLE_STACK (((long) stack & (THREAD_SIZE-1)) != 0);
note that i386 is a lot more conservative when it comes to trusting
stack pointers:
static inline int valid_stack_ptr(struct thread_info *tinfo, void *p)
{
return p > (void *)tinfo &&
p < (void *)tinfo + THREAD_SIZE - 3;
}
but the x86_64 code did not pick up this bit of i386 code.
The fix is to align the stack pointer.
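A hedged sketch of that alignment step (variable name assumed):
/* sanitize the dwarf2-provided value before walking the stack */
stack = (unsigned long *)((unsigned long)stack & ~(sizeof(long) - 1));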
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: Andi Kleen <ak@suse.de>
Cc: Jan Beulich <jbeulich@novell.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Komuro reports that ISA interrupts do not work after a disable_irq(),
causing some PCMCIA drivers to not work, with messages like
eth0: Asix AX88190: io 0x300, irq 3, hw_addr xx:xx:xx:xx:xx:xx
eth0: found link beat
eth0: autonegotiation complete: 100baseT-FD selected
eth0: interrupt(s) dropped!
eth0: interrupt(s) dropped!
eth0: interrupt(s) dropped!
...
Linus Torvalds <torvalds@osdl.org> said:
"Now, edge-triggered interrupts are a _lot_ harder to mask, because the
Intel APIC is an unbelievable piece of sh*t, and has the edge-detect logic
_before_ the mask logic, so if a edge happens _while_ the device is
masked, you'll never ever see the edge ever again (unmasking will not
cause a new edge, so you simply lost the interrupt).
So when you "mask" an edge-triggered IRQ, you can't really mask it at all,
because if you did that, you'd lose it forever if the IRQ comes in while
you masked it. Instead, we're supposed to leave it active, and set a flag,
and IF the IRQ comes in, we just remember it, and mask it at that point
instead, and then on unmasking, we have to replay it by sending a
self-IPI."
This trivial patch solves the problem.
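A rough sketch of that mask-flag-replay scheme (fragments with illustrative names, not the actual patch):
/* in the ack path: an edge arrived while the irq was logically disabled */
if (desc->status & IRQ_DISABLED) {
	desc->status |= IRQ_PENDING;	/* remember the lost edge */
	mask_IO_APIC_irq(irq);		/* now it is safe to really mask */
}

/* in the enable/unmask path: replay the remembered edge via a self-IPI */
if (desc->status & IRQ_PENDING) {
	desc->status &= ~IRQ_PENDING;
	unmask_IO_APIC_irq(irq);
	send_IPI_self(irq_vector[irq]);
}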
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Cc: Ingo Molnar <mingo@redhat.com>
Acked-by: Komuro <komurojun-mbn@nifty.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
When another interrupt happens in exit_idle the exit idle notifier
could be called an incorrect number of times.
Add a test_and_clear_bit_pda and use it to handle the bit
atomically against interrupts to avoid this.
Pointed out by Stephane Eranian
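A hedged sketch of the intended use (the exact pda helper is in the patch; the idle notifier names follow the existing x86-64 code):
static void __exit_idle(void)
{
	/* only the path that actually clears the bit runs the notifiers,
	 * even if an interrupt races with exit_idle */
	if (test_and_clear_bit_pda(0, isidle) == 0)
		return;
	atomic_notifier_call_chain(&idle_notifier, IDLE_END, NULL);
}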
Signed-off-by: Andi Kleen <ak@suse.de>
The vgetcpu per CPU initialization previously relied on CPU hotplug
events for all CPUs to initialize the per CPU state. That
worked only on kernels with CONFIG_HOTPLUG_CPU enabled. On the
others some CPUs didn't get their state initialized properly
and vgetcpu wouldn't work.
Change the initialization sequence to instead run in a normal
initcall (which runs after the normal CPU bootup) and initialize
all running CPUs there. Later hotplug CPUs are still handled
with a hotplug notifier.
This actually simplifies the code somewhat.
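A hedged sketch of the new initialization flow (the helper names are assumptions for helpers defined elsewhere in the patch):
static int __init vgetcpu_init(void)
{
	/* cover every cpu that is already up ... */
	on_each_cpu(cpu_vsyscall_init, NULL, 0, 1);
	/* ... and any cpu that is hotplugged later */
	hotcpu_notifier(cpu_vsyscall_notifier, 0);
	return 0;
}
__initcall(vgetcpu_init);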
Signed-off-by: Andi Kleen <ak@suse.de>
Timer overrides are normally disabled on Nvidia boards because
they are commonly wrong, except on new ones with HPET support.
Unfortunately there are quite a few Asus boards around that
don't have HPET, but need a timer override.
We don't know yet how to handle this transparently,
but at least add a command line option to force the timer override
and let them boot.
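A hedged sketch of what such an option looks like (the exact option and flag names in the patch may differ):
static int __init setup_acpi_use_timer_override(char *arg)
{
	acpi_use_timer_override = 1;	/* assumed flag checked by the nvidia quirk */
	return 0;
}
early_param("acpi_use_timer_override", setup_acpi_use_timer_override);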
Cc: len.brown@intel.com
Signed-off-by: Andi Kleen <ak@suse.de>
x86_64: setup saved_max_pfn correctly
2.6.19-rc4 has broken CONFIG_CRASH_DUMP support on x86_64. It is impossible
to read out the kernel contents from /proc/vmcore because saved_max_pfn is set
to zero instead of the max_pfn value before the user map is setup.
This happens because saved_max_pfn is initialized at parse_early_param() time,
and at this time no active regions have been registered. saved_max_pfn is set up
from e820_end_of_ram(), more exactly from find_max_pfn_with_active_regions(),
which returns 0 because no regions exist.
This patch fixes this by registering the active regions before the call
to e820_end_of_ram() and removing them afterwards.
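A hedged sketch of the fix (function names as used in e820.c of that time; the pfn bounds are illustrative):
/* temporarily register the e820 ranges so the pfn walk sees them */
e820_register_active_regions(0, 0, -1UL);
saved_max_pfn = e820_end_of_ram();
/* drop them again; the real registration still happens later */
remove_all_active_ranges();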
Signed-off-by: Magnus Damm <magnus@valinux.co.jp>
Signed-off-by: Andi Kleen <ak@suse.de>
Fix the partial page check in e820_register_active_regions to ensure that
partial pages are not marked as active in the memory pool.
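A hedged sketch of the idea (variable names assumed):
/* mark only whole pages inside [start, end) as active */
unsigned long start_pfn = round_up(start, PAGE_SIZE) >> PAGE_SHIFT;
unsigned long end_pfn = round_down(end, PAGE_SIZE) >> PAGE_SHIFT;

if (start_pfn < end_pfn)
	add_active_range(nid, start_pfn, end_pfn);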
Signed-off-by: Aaron Durbin <adurbin@google.com>
Signed-off-by: Andi Kleen <ak@suse.de>
This refactoring actually optimizes the code a little by caching the value
that we think the device is programmed with instead of reading it back from
the hardware, which simplifies the code a little and should speed things up a
bit.
This patch introduces the concept of a ht_irq_msg and modifies the
architecture read/write routines to update this cached value.
There is a minor consistency fix here as well, as x86_64 forgot to initialize
the htirq as masked.
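A hedged sketch of the cached message and the arch hooks it implies (declarations only; field and function names as described above):
struct ht_irq_msg {
	u32	address_lo;	/* low 32 bits of the ht irq message address */
	u32	address_hi;	/* high 32 bits of the ht irq message address */
};

/* the arch read/write routines now move a whole cached message around
 * instead of the driver re-reading the hardware */
void fetch_ht_irq_msg(unsigned int irq, struct ht_irq_msg *msg);
void write_ht_irq_msg(unsigned int irq, struct ht_irq_msg *msg);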
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Cc: Andi Kleen <ak@suse.de>
Acked-by: Bryan O'Sullivan <bos@pathscale.com>
Cc: <olson@pathscale.com>
Cc: Roland Dreier <rolandd@cisco.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This is the x86-64 version of f9dadfa71b
that did the same thing on i386.
Since the "mask" bit is in the low word, when we write a new entry, we
need to write the high word first, before we potentially unmask it.
The exception is when we actually want to mask the interrupt, in which
case we want to write the low word first to make sure that the high word
doesn't change while the interrupt routing is still active.
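A hedged sketch of the resulting helper, modeled on the i386 version referenced above:
union entry_union {
	struct { u32 w1, w2; };
	struct IO_APIC_route_entry entry;
};

static void ioapic_write_entry(int apic, int pin, struct IO_APIC_route_entry e)
{
	unsigned long flags;
	union entry_union eu;

	eu.entry = e;
	spin_lock_irqsave(&ioapic_lock, flags);
	/* high word first: a masked->unmasked transition must happen last */
	io_apic_write(apic, 0x11 + 2*pin, eu.w2);
	io_apic_write(apic, 0x10 + 2*pin, eu.w1);
	spin_unlock_irqrestore(&ioapic_lock, flags);
}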
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This is just commit 130fe05dbc ported to
x86-64, for all the same reasons. It cleans up the IO-APIC accesses in
order to then fix the ordering issues.
We move the accessor functions (that were only used by io_apic.c) out of
a header file, and use proper memory-mapped accesses rather than making
up our own "volatile" pointers.
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Add a vmlinux.lds.h helper macro for defining the eight-level initcall table,
and teach all the architectures to use it.
This is a prerequisite for a patch which performs initcall synchronisation for
multithreaded-probing.
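A hedged sketch of the shape of such a macro (the exact levels and section names come from the patch; this only illustrates the idea):
/* collected in include/asm-generic/vmlinux.lds.h so every arch's linker
 * script can pull in the initcall levels in one place, in one order */
#define INITCALLS			\
	*(.initcall1.init)		\
	*(.initcall2.init)		\
	*(.initcall3.init)		\
	*(.initcall4.init)		\
	*(.initcall5.init)		\
	*(.initcall6.init)		\
	*(.initcall7.init)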
Cc: Greg KH <greg@kroah.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
[ Added AVR32 as well ]
Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
When I generalized __assign_irq_vector I failed to pay attention
to what happens when you access a per cpu data structure for
a cpu that is not online. It is an undefined case making any
code that does it have undefined behavior as well.
The code still needs to be able to allocate a vector across cpus
that are not online to properly handle combinations like lowest
priority interrupt delivery and cpu_hotplug. Not that we can do
that today but the infrastructure shouldn't prevent it.
So this patch updates the places where we touch per cpu data
to only touch online cpus, it makes cpu vector allocation
an atomic operation with respect to cpu hotplug, and it updates
the cpu start code to properly initialize vector_irq so we
don't have inconsistencies.
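A hedged fragment illustrating the first two points (mask and lock names are assumptions):
spin_lock_irqsave(&vector_lock, flags);	/* atomic wrt cpu hotplug */
cpus_and(domain, irq_domain[irq], cpu_online_map);
for_each_cpu_mask(new_cpu, domain)	/* only walk online cpus */
	per_cpu(vector_irq, new_cpu)[vector] = irq;
spin_unlock_irqrestore(&vector_lock, flags);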
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Andi Kleen <ak@suse.de>
There is no reason to remember a per cpu position of which vector
to try. Keeping a global position is simpler and more likely to
result in a global vector allocation even if I don't need or require
it. For level triggered interrupts this means we are less likely to
acknowledge another cpu's irq, and cause the level triggered irq to
harmlessly refire.
This simplification makes it easier to only access data structures
of online cpus, by having fewer special cases to deal with.
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Andi Kleen <ak@suse.de>
This patch increases the timeout for PCI split transactions on PHB1 on
the first Calgary to work around an issue with the aic94xx
adapter. Fixes kernel.org bugzilla #7180
(http://bugzilla.kernel.org/show_bug.cgi?id=7180)
Based on excellent debugging and a patch by Darrick J. Wong
<djwong@us.ibm.com>
Signed-off-by: Muli Ben-Yehuda <muli@il.ibm.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
Signed-off-by: Andi Kleen <ak@suse.de>
Acked-by: Darrick J. Wong <djwong@us.ibm.com>
There was a typo in the C3 latency test to decide whether the TSC
should be used or not. It used the C2 latency threshold, not the
C3 one. Fix that.
This should fix the time on various dual core laptops.
Signed-off-by: Andi Kleen <ak@suse.de>
By default route the 8254 over the 8259 and only disable
it on ATI boards where this causes double timer interrupts.
This should unbreak some Nvidia boards where the timer doesn't
seem to tick if it isn't enabled in the 8259. At least one
VIA board also seemed to have a little trouble with the disabled
8259.
For 2.6.20 we'll try both dynamically without blacklisting, but I think
for .19 this is the safer approach because it has been already well tested
in earlier kernels. This also makes the x86-64 behaviour the same
as i386.
Command line options can change all this of course.
Signed-off-by: Andi Kleen <ak@suse.de>
o A recent change to the vmlinux.lds.S file broke kexec, as the resulting vmlinux
program headers now overlap in physical address space.
o Now all the vsyscall related sections are placed after data, and after
that mostly init data sections are placed. To avoid physical overlap
among phdrs, there are three possible solutions.
- Place vsyscall sections also in the data phdr instead of the user phdr.
- Move vsyscall sections after init data in bss.
- Create another phdr, say data.init, and move all the sections
after vsyscall into this new phdr.
o This patch implements the third solution.
Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Magnus Damm <magnus@valinux.co.jp>
Cc: Andi Kleen <ak@suse.de>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
TARGET_CPUS is the default irq routing policy. It specifies which cpus the
kernel should aim an irq at. In physflat delivery mode we can route an irq to
a single cpu. But that doesn't mean our default policy should be that only a
single cpu is allowed.
By allowing the irq routing code to select from multiple cpus, this enables
systems with more irqs than we can service on a single processor to actually
work.
I just audited and tested the code and irqbalance doesn't care, and
io_apic.c doesn't care if we have extra cpus in the mask. Everything will use
or assume we are using the lowest numbered cpu in the mask if we can't use
them all.
So this should result in no behavior changes except on systems that need it.
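A hedged sketch of the change in physflat mode (function shape as in genapic_flat.c; the old return value is an assumption):
static cpumask_t physflat_target_cpus(void)
{
	/* previously a single fixed cpu; let the irq routing code pick
	 * among all online cpus, falling back to the lowest numbered cpu
	 * in the mask when only one can be targeted */
	return cpu_online_map;
}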
Thanks to YH Lu for spotting this problem in his testing.
Cc: Yinghai Lu <yinghai.lu@amd.com>
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Andi Kleen <ak@suse.de>
Jan convinced me that it was unnecessary because the assembly stubs do
this already on the stack.
Cc: jbeulich@novell.com
Signed-off-by: Andi Kleen <ak@suse.de>
Thanks to YH Lu for spotting this. It appears I missed this function when I
refactored allocate_irq_vector and introduced irq_domain, with the result that
all retriggered irqs would go to cpu 0 even if we were not prepared to receive
them there.
While reviewing YH's patch I also noticed that this function was missing
locking, and since I am now reading two values from two different arrays that
looks like a race we might be able to hit in the real world.
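A hedged sketch of the retrigger path with both fixes (names follow the io_apic.c of that time):
static void ioapic_retrigger_irq(unsigned int irq)
{
	cpumask_t mask;
	unsigned long flags;

	spin_lock_irqsave(&vector_lock, flags);
	/* aim the IPI at a cpu inside the irq's domain, not blindly at cpu 0 */
	cpus_clear(mask);
	cpu_set(first_cpu(irq_domain[irq]), mask);
	send_IPI_mask(mask, irq_vector[irq]);
	spin_unlock_irqrestore(&vector_lock, flags);
}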
Cc: Yinghai Lu <yinghai.lu@amd.com>
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Andi Kleen <ak@suse.de>
This patch fixes:
- out of range system calls failing to return -ENOSYS under
system call tracing
[AK: split out from another patch by Jan as separate bugfix]
Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Currently some code pieces assume that the addresses returned by find_e820_area()
are page aligned. But it looks like find_e820_area() had no such intention,
and hence one might end up stomping over some of the data. One such case
is the bootmem allocator initialization code stomping over the bss.
This patch modifies find_e820_area() to return a page aligned address. This
might be a little wasteful of memory, but page aligned memory is probably
easier to handle.
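A hedged fragment of the idea (variable names assumed):
/* round the candidate up before checking that the block still fits */
unsigned long addr = round_up(start, PAGE_SIZE);

if (addr + size <= end)
	return addr;	/* callers now always see a page aligned block */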
Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Andi Kleen <ak@muc.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Introduce desc->name and eliminate the handle_irq_name() hack. Add
set_irq_chip_and_handler_name() to set the flow type and name at once.
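A hedged example of the new call as an io_apic user might invoke it:
/* set chip, flow handler and a human readable name in one step */
set_irq_chip_and_handler_name(irq, &ioapic_chip, handle_edge_irq, "edge");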
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Matthew Wilcox <willy@debian.org>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Intel processors starting with the Core Duo support
processor native C-states using the MWAIT instruction.
Refer: Intel Architecture Software Developer's Manual
http://www.intel.com/design/Pentium4/manuals/253668.htm
Platform firmware exports the support for Native C-state to OS using
ACPI _PDC and _CST methods.
Refer: Intel Processor Vendor-Specific ACPI: Interface Specification
http://www.intel.com/technology/iapc/acpi/downloads/302223.htm
With processor native C-states, we use the MWAIT instruction on the processor
to enter different C-states (C1, C2, C3). We no longer use the special IO
ports to enter C-states, and no SMM mode etc. is required.
Overall this means better C-state support.
One major advantage of using MWAIT for all C-states is that, combined with the
"treat interrupt as break event" feature of MWAIT, we can now get accurate
timing for the time spent in the C1, C2, .. states.
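A hedged sketch of how a C-state entry via MWAIT looks, following the existing mwait_idle() pattern (the hint values in eax/ecx come from the ACPI _CST data):
static void mwait_idle_with_hints(unsigned long eax, unsigned long ecx)
{
	/* arm the monitor on a per-thread flag word, then mwait into the
	 * C-state selected by the hint in eax; an interrupt breaks us out */
	__monitor((void *)&current_thread_info()->flags, 0, 0);
	smp_mb();
	if (!need_resched())
		__mwait(eax, ecx);
}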
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Len Brown <len.brown@intel.com>
The kernel build breaks with CONFIG_X86_VSMP, probably due to some header
file cleanups in 2.6.19-rc1.
Signed-off-by: Ravikiran Thirumalai <kiran@scalex86.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This patch fixes my one line thinko where I was clearing
the vector_irq entries on the wrong cpus.
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
hw_interrupt_type is deprecated in favour of struct irq_chip.
[mingo@elte.hu: do x86_64 too]
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Due to code bugs or misbehaving hardware it is possible that we can
receive an interrupt that we have not mapped into a linux irq. Calling
BUG when that happens is very rude, and if the problem is mild enough
prevents anything else from getting done.
So instead of calling BUG just scream loudly about the problem and
continue running. We don't have enough knowledge to know which
interrupt triggered this behavior so we don't acknowledge it. This will
likely prevent a recurrence of the problem by jamming up the works with
an unacknowledged interrupt.
If the interrupt was something important it is quite possible that
nothing productive will happen past this point. But it is now at least
possible to keep working if the kernel can survive without the interrupt
we dropped on the floor.
Solutions like irqpoll should generally make dropped irqs non-fatal.
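A hedged sketch of the resulting behaviour in the interrupt entry path (message text and variable names illustrative):
int irq = __get_cpu_var(vector_irq)[vector];

if (unlikely(irq < 0))
	/* scream but keep running; the unknown vector is deliberately
	 * not acked, which also stops a stuck source from looping */
	printk(KERN_EMERG "%s: %d.%d No irq handler for vector\n",
	       __func__, smp_processor_id(), vector);
else
	generic_handle_irq(irq);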
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The problem is that we can't take advantage of lowest priority delivery mode if
the vectors are allocated for only one cpu at a time. Nor can we work
around hardware that assumes lowest priority delivery mode is always
used with several cpus.
So this patch introduces the concept of a vector_allocation_domain: a
set of cpus that will receive an irq on the same vector. Currently the
code for implementing this is placed in the genapic structure so we can
vary this depending on how we are using the io_apics.
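A hedged sketch of the two flavours of such a domain function (the return values are illustrative simplifications):
/* flat logical mode: all cpus in the cluster share the vector, which
 * keeps lowest priority delivery working */
static cpumask_t flat_vector_allocation_domain(int cpu)
{
	return cpu_online_map;
}

/* physflat mode: each cpu keeps its own independent vector space */
static cpumask_t physflat_vector_allocation_domain(int cpu)
{
	return cpumask_of_cpu(cpu);
}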
This allows us to restore the previous behaviour of genapic_flat without
removing the benefits of having separate vector allocation for large
machines.
This should also fix the problem report where a hyperthreaded cpu was
receiving the irq on the wrong hyperthread when in logical delivery mode
because the previous behaviour is restored.
This patch properly records our allocation of the first 16 irqs to the
first 16 available vectors on all cpus. This should be fine but it may
run into problems with multiple interrupts at the same interrupt level.
Except for some badly maintained comments in the code and the behaviour
of the interrupt allocator I have no real understanding of that problem.
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Acked-by: Muli Ben-Yehuda <muli@il.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Which vector an irq is assigned to now varies dynamically and is
not needed outside of io_apic.c. So remove the possibility
of accessing the information outside of io_apic.c and remove
the silly macro that makes looking for users of irq_vector
difficult.
The fact this compiles ensures there aren't any more pieces
of the old CONFIG_PCI_MSI weirdness that I failed to remove.
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
smp_apic_timer_interrupt() needs to stack the pt_regs* for profile_tick.
If any other of those APIC interrupt handlers want to run get_irq_regs() then
their C entrypoint handlers will need the same treatment.
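A hedged sketch of the treatment described, using the pattern from the new irq-regs infrastructure:
void smp_apic_timer_interrupt(struct pt_regs *regs)
{
	struct pt_regs *old_regs = set_irq_regs(regs);

	/* existing body: ack the APIC, irq_enter(), profile_tick(), irq_exit();
	 * profile_tick() can now pick the regs up via get_irq_regs() */

	set_irq_regs(old_regs);
}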
Cc: Andi Kleen <ak@muc.de>
Cc: David Howells <dhowells@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Acked-by: Andrew Vasquez <andrew.vasquez@qlogic.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* git://git.infradead.org/~dhowells/irq-2.6:
IRQ: Maintain regs pointer globally rather than passing to IRQ handlers
IRQ: Typedef the IRQ handler function type
IRQ: Typedef the IRQ flow handler function type
This reverts an earlier patch that was found to cause FPU
state corruption. I think the corruption happens because
unlazy_fpu() can cause FPU exceptions, and when that happens
after the switch of current, some of the processing would affect
the state in the wrong process.
Thanks to Douglas Crosher and Tom Hughes for testing.
Cc: jbeulich@novell.com
Signed-off-by: Andi Kleen <ak@suse.de>
Always make sure RIP/EIP is 0 in the registers stored on the top
of the stack of a kernel thread. This makes sure the unwinder code
won't try a fallback but knows the stack has ended.
AK: this patch is a bit mysterious. In theory they should be terminated
anyway, but it seems to fix at least one crash. Anyway, double termination
probably doesn't hurt.
Signed-off-by: Andi Kleen <ak@suse.de>
Print the bus number references in hex instead of decimal, as
that is the way that lspci prints out the bus numbers.
Signed-off-by: Jon Mason <jdmason@kudzu.us>
Signed-off-by: Muli Ben-Yehuda <muli@il.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Also add copyright for work done after leaving IBM.
Signed-off-by: Jon Mason <jdmason@kudzu.us>
Signed-off-by: Muli Ben-Yehuda <muli@il.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
The purpose of the code being modified is to determine the location
of the calgary chip address space. This is done by a magical formula
of FE0MB-8MB*OneBasedChassisNumber+1MB*(RioNodeId-ChassisBase) to
find the offset where BIOS puts it. In this formula,
OneBasedChassisNumber corresponds to the NUMA node, and rionodeid is
always 2 or 3 depending on which chip in the system it is. The
problem was that we had an off by one error that caused us to account
some busses to the wrong chip and thus give them the wrong address
space.
Fixes RH bugzilla #203971.
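Rendered as code, the formula reads roughly like this (all names and the base parameter are illustrative only):
#define MB	(1024UL * 1024UL)

static unsigned long calgary_chip_space(unsigned long fe0mb_base,
					int one_based_chassis_number,
					int rio_node_id, int chassis_base)
{
	return fe0mb_base - 8*MB*one_based_chassis_number
			  + 1*MB*(rio_node_id - chassis_base);
}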
Signed-off-by: Jon Mason <jdmason@kudzu.us>
Signed-off-by: Muli Ben-Yehuda <muli@il.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
calgary_init's for loop does not correspond to the actual device being
checked, which makes its upper-bound check for array overflow useless.
Changing this to a do-while loop is the correct way of doing this.
There should be no possibility of spinning forever in this loop, as
pci_get_device states that it will go through all iterations, then
return NULL (thus breaking the loop).
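A hedged sketch of the do-while shape described above (device ID constant as used by the Calgary code):
struct pci_dev *dev = NULL;

do {
	dev = pci_get_device(PCI_VENDOR_ID_IBM,
			     PCI_DEVICE_ID_IBM_CALGARY, dev);
	if (!dev)
		break;	/* pci_get_device() returns NULL after the last match */
	/* validate and set up this Calgary before looking for the next one */
} while (1);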
Signed-off-by: Jon Mason <jdmason@kudzu.us>
Signed-off-by: Muli Ben-Yehuda <muli@il.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>