Merge branch 'akpm' (patch-bomb from Andrew Morton)

Merge patches from Andrew Morton:
 - memstick fixes

 - the rest of MM

 - various misc bits that were awaiting merges from linux-next into
   mainline: seq_file, printk, rtc, completions, w1, softirqs, llist,
   kfifo, hfsplus

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (72 commits)
  cmdline-parser: fix build
  hfsplus: Fix undefined __divdi3 in hfsplus_init_header_node()
  kfifo API type safety
  kfifo: kfifo_copy_{to,from}_user: fix copied bytes calculation
  sound/core/memalloc.c: use gen_pool_dma_alloc() to allocate iram buffer
  llists-move-llist_reverse_order-from-raid5-to-llistc-fix
  llists: move llist_reverse_order from raid5 to llist.c
  kernel: fix generic_exec_single indentation
  kernel-provide-a-__smp_call_function_single-stub-for-config_smp-fix
  kernel: provide a __smp_call_function_single stub for !CONFIG_SMP
  kernel: remove CONFIG_USE_GENERIC_SMP_HELPERS
  revert "softirq: Add support for triggering softirq work on softirqs"
  drivers/w1/masters/w1-gpio.c: use dev_get_platdata()
  sched: remove INIT_COMPLETION
  tree-wide: use reinit_completion instead of INIT_COMPLETION
  sched: replace INIT_COMPLETION with reinit_completion
  drivers/rtc/rtc-hid-sensor-time.c: enable HID input processing early
  drivers/rtc/rtc-hid-sensor-time.c: use dev_get_platdata()
  vsprintf: ignore %n again
  seq_file: remove "%n" usage from seq_file users
  ...
Linus Torvalds 2013-11-15 09:32:31 +09:00
commit d8fe4acc88
213 changed files with 1165 additions and 890 deletions


@ -0,0 +1,94 @@
Split page table lock
=====================
Originally, the mm->page_table_lock spinlock protected all page tables of the
mm_struct. But this approach leads to poor page fault scalability in
multi-threaded applications due to high contention on the lock. To improve
scalability, the split page table lock was introduced.
With the split page table lock we have a separate per-table lock to serialize
access to the table. At the moment we use the split lock for PTE and PMD
tables. Access to higher-level tables is protected by mm->page_table_lock.
There are helpers to lock/unlock a table and other accessor functions; a
usage sketch follows the list:
- pte_offset_map_lock()
maps pte and takes PTE table lock, returns pointer to the taken
lock;
- pte_unmap_unlock()
unlocks and unmaps PTE table;
- pte_alloc_map_lock()
allocates PTE table if needed and takes the lock, returns pointer
to taken lock or NULL if allocation failed;
- pte_lockptr()
returns pointer to PTE table lock;
- pmd_lock()
takes PMD table lock, returns pointer to taken lock;
- pmd_lockptr()
returns pointer to PMD table lock;
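
As a usage sketch (illustrative only; do_something() is a hypothetical
helper standing in for real work on the entry):

	spinlock_t *ptl;
	pte_t *pte;

	pte = pte_offset_map_lock(mm, pmd, address, &ptl);
	/* the PTE table lock is held: the entry is stable */
	if (pte_present(*pte))
		do_something(pte);	/* hypothetical */
	pte_unmap_unlock(pte, ptl);
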
Split page table lock for PTE tables is enabled at compile time if
CONFIG_SPLIT_PTLOCK_CPUS (usually 4) is less than or equal to NR_CPUS.
If the split lock is disabled, all tables are guarded by mm->page_table_lock.
Split page table lock for PMD tables is enabled if it's enabled for PTE
tables and the architecture supports it (see below).
Hugetlb and split page table lock
---------------------------------
Hugetlb can support several page sizes. We use the split lock only for the
PMD level, but not for the PUD level.
Hugetlb-specific helpers (an example follows the list):
- huge_pte_lock()
takes pmd split lock for PMD_SIZE page, mm->page_table_lock
otherwise;
- huge_pte_lockptr()
returns pointer to table lock;
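
For example, a hugetlb path would take the lock roughly like this (a
sketch; hstate_vma() supplies the page size of the mapping):

	pte_t entry;
	spinlock_t *ptl;

	ptl = huge_pte_lock(hstate_vma(vma), mm, ptep);
	/* the huge PTE is stable while ptl is held */
	entry = huge_ptep_get(ptep);
	spin_unlock(ptl);
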
Support of split page table lock by an architecture
---------------------------------------------------
There's no need for special enabling of the PTE split page table lock:
everything required is done by pgtable_page_ctor() and pgtable_page_dtor(),
which must be called on PTE table allocation / freeing.
Make sure the architecture doesn't use the slab allocator for page table
allocation: slab uses page->slab_cache and page->first_page for its pages.
These fields share storage with page->ptl.
PMD split lock only makes sense if you have more than two page table
levels.
PMD split lock enabling requires pgtable_pmd_page_ctor() call on PMD table
allocation and pgtable_pmd_page_dtor() on freeing.
Allocation usually happens in pmd_alloc_one(), freeing in pmd_free(), but
make sure you cover all PMD table allocation / freeing paths: e.g., X86_PAE
preallocates a few PMDs in pgd_alloc().
With everything in place you can set CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK.
NOTE: pgtable_page_ctor() and pgtable_pmd_page_ctor() can fail -- the
failure must be handled properly.
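
The resulting allocation pattern, repeated with small variations across the
architecture diffs below, looks roughly like this (a representative sketch,
not any single architecture verbatim):

	pgtable_t pte_alloc_one(struct mm_struct *mm, unsigned long address)
	{
		struct page *pte;

		pte = alloc_pages(GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO, 0);
		if (!pte)
			return NULL;
		if (!pgtable_page_ctor(pte)) {
			/* ctor can fail when the split lock is allocated
			   dynamically; undo the page allocation */
			__free_page(pte);
			return NULL;
		}
		return pte;
	}
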
page->ptl
---------
page->ptl is used to access the split page table lock, where 'page' is the
struct page of the page containing the table. It shares storage with
page->private (and a few other fields in the union).
To avoid increasing the size of struct page and to get the best performance,
we use a trick:
- if spinlock_t fits into a long, we use page->ptl as the spinlock itself,
so we can avoid indirect access and save a cache line;
- if spinlock_t is bigger than a long, we use page->ptl as a pointer to a
dynamically allocated spinlock_t. This allows using the split lock with
DEBUG_SPINLOCK or DEBUG_LOCK_ALLOC enabled, but costs one more cache line
for the indirect access.
The spinlock_t is allocated in pgtable_page_ctor() for PTE tables and in
pgtable_pmd_page_ctor() for PMD tables.
Please never access page->ptl directly -- use the appropriate helper.
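
The Xen patch below uses the ptlock_ptr() accessor for this. In sketch form,
the two representations are selected roughly like this (simplified; the
config symbol guarding the choice is an assumption here):

	#ifdef CONFIG_ALLOC_SPLIT_PTLOCKS	/* assumed name: lock too big to embed */
	static inline spinlock_t *ptlock_ptr(struct page *page)
	{
		return page->ptl;	/* pointer to dynamically allocated lock */
	}
	#else
	static inline spinlock_t *ptlock_ptr(struct page *page)
	{
		return &page->ptl;	/* lock embedded in struct page */
	}
	#endif
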


@ -207,9 +207,6 @@ config HAVE_DMA_ATTRS
config HAVE_DMA_CONTIGUOUS
bool
config USE_GENERIC_SMP_HELPERS
bool
config GENERIC_SMP_IDLE_THREAD
bool


@ -522,7 +522,6 @@ config ARCH_MAY_HAVE_PC_FDC
config SMP
bool "Symmetric multi-processing support"
depends on ALPHA_SABLE || ALPHA_LYNX || ALPHA_RAWHIDE || ALPHA_DP264 || ALPHA_WILDFIRE || ALPHA_TITAN || ALPHA_GENERIC || ALPHA_SHARK || ALPHA_MARVEL
select USE_GENERIC_SMP_HELPERS
---help---
This enables support for systems with more than one CPU. If you have
a system with only one CPU, like most personal computers, say N. If


@ -72,7 +72,10 @@ pte_alloc_one(struct mm_struct *mm, unsigned long address)
if (!pte)
return NULL;
page = virt_to_page(pte);
pgtable_page_ctor(page);
if (!pgtable_page_ctor(page)) {
__free_page(page);
return NULL;
}
return page;
}


@ -125,7 +125,6 @@ config ARC_PLAT_NEEDS_CPU_TO_DMA
config SMP
bool "Symmetric Multi-Processing (Incomplete)"
default n
select USE_GENERIC_SMP_HELPERS
help
This enables support for systems with more than one CPU. If you have
a system with only one CPU, like most personal computers, say N. If


@ -105,11 +105,16 @@ static inline pgtable_t
pte_alloc_one(struct mm_struct *mm, unsigned long address)
{
pgtable_t pte_pg;
struct page *page;
pte_pg = __get_free_pages(GFP_KERNEL | __GFP_REPEAT, __get_order_pte());
if (pte_pg) {
memzero((void *)pte_pg, PTRS_PER_PTE * 4);
pgtable_page_ctor(virt_to_page(pte_pg));
if (!pte_pg)
return 0;
memzero((void *)pte_pg, PTRS_PER_PTE * 4);
page = virt_to_page(pte_pg);
if (!pgtable_page_ctor(page)) {
__free_page(page);
return 0;
}
return pte_pg;


@ -1432,7 +1432,6 @@ config SMP
depends on GENERIC_CLOCKEVENTS
depends on HAVE_SMP
depends on MMU || ARM_MPU
select USE_GENERIC_SMP_HELPERS
help
This enables support for systems with more than one CPU. If you have
a system with only one CPU, like most personal computers, say N. If


@ -102,12 +102,14 @@ pte_alloc_one(struct mm_struct *mm, unsigned long addr)
#else
pte = alloc_pages(PGALLOC_GFP, 0);
#endif
if (pte) {
if (!PageHighMem(pte))
clean_pte_table(page_address(pte));
pgtable_page_ctor(pte);
if (!pte)
return NULL;
if (!PageHighMem(pte))
clean_pte_table(page_address(pte));
if (!pgtable_page_ctor(pte)) {
__free_page(pte);
return NULL;
}
return pte;
}


@ -114,7 +114,7 @@ static int do_dma_transfer(unsigned long apb_add,
dma_desc->callback = apb_dma_complete;
dma_desc->callback_param = NULL;
INIT_COMPLETION(tegra_apb_wait);
reinit_completion(&tegra_apb_wait);
dmaengine_submit(dma_desc);
dma_async_issue_pending(tegra_apb_dma_chan);


@ -65,7 +65,7 @@ static int do_adjust_pte(struct vm_area_struct *vma, unsigned long address,
return ret;
}
#if USE_SPLIT_PTLOCKS
#if USE_SPLIT_PTE_PTLOCKS
/*
* If we are using split PTE locks, then we need to take the page
* lock here. Otherwise we are using shared mm->page_table_lock
@ -84,10 +84,10 @@ static inline void do_pte_unlock(spinlock_t *ptl)
{
spin_unlock(ptl);
}
#else /* !USE_SPLIT_PTLOCKS */
#else /* !USE_SPLIT_PTE_PTLOCKS */
static inline void do_pte_lock(spinlock_t *ptl) {}
static inline void do_pte_unlock(spinlock_t *ptl) {}
#endif /* USE_SPLIT_PTLOCKS */
#endif /* USE_SPLIT_PTE_PTLOCKS */
static int adjust_pte(struct vm_area_struct *vma, unsigned long address,
unsigned long pfn)


@ -143,7 +143,6 @@ config CPU_BIG_ENDIAN
config SMP
bool "Symmetric Multi-Processing"
select USE_GENERIC_SMP_HELPERS
help
This enables support for systems with more than one CPU. If
you say N here, the kernel will run on single and


@ -63,9 +63,12 @@ pte_alloc_one(struct mm_struct *mm, unsigned long addr)
struct page *pte;
pte = alloc_pages(PGALLOC_GFP, 0);
if (pte)
pgtable_page_ctor(pte);
if (!pte)
return NULL;
if (!pgtable_page_ctor(pte)) {
__free_page(pte);
return NULL;
}
return pte;
}


@ -68,7 +68,10 @@ static inline pgtable_t pte_alloc_one(struct mm_struct *mm,
return NULL;
page = virt_to_page(pg);
pgtable_page_ctor(page);
if (!pgtable_page_ctor(page)) {
quicklist_free(QUICK_PT, NULL, pg);
return NULL;
}
return page;
}


@ -34,7 +34,6 @@ config BLACKFIN
select ARCH_WANT_IPC_PARSE_VERSION
select GENERIC_ATOMIC64
select GENERIC_IRQ_PROBE
select USE_GENERIC_SMP_HELPERS if SMP
select HAVE_NMI_WATCHDOG if NMI_WATCHDOG
select GENERIC_SMP_IDLE_THREAD
select ARCH_USES_GETTIMEOFFSET if !GENERIC_CLOCKEVENTS


@ -32,7 +32,12 @@ static inline pgtable_t pte_alloc_one(struct mm_struct *mm, unsigned long addres
{
struct page *pte;
pte = alloc_pages(GFP_KERNEL|__GFP_REPEAT|__GFP_ZERO, 0);
pgtable_page_ctor(pte);
if (!pte)
return NULL;
if (!pgtable_page_ctor(pte)) {
__free_page(pte);
return NULL;
}
return pte;
}


@ -37,11 +37,15 @@ pgtable_t pte_alloc_one(struct mm_struct *mm, unsigned long address)
#else
page = alloc_pages(GFP_KERNEL|__GFP_REPEAT, 0);
#endif
if (page) {
clear_highpage(page);
pgtable_page_ctor(page);
flush_dcache_page(page);
if (!page)
return NULL;
clear_highpage(page);
if (!pgtable_page_ctor(page)) {
__free_page(page);
return NULL;
}
flush_dcache_page(page);
return page;
}


@ -4,7 +4,6 @@ comment "Linux Kernel Configuration for Hexagon"
config HEXAGON
def_bool y
select HAVE_OPROFILE
select USE_GENERIC_SMP_HELPERS if SMP
# Other pending projects/to-do items.
# select HAVE_REGS_AND_STACK_ACCESS_API
# select HAVE_HW_BREAKPOINT if PERF_EVENTS


@ -65,10 +65,12 @@ static inline struct page *pte_alloc_one(struct mm_struct *mm,
struct page *pte;
pte = alloc_page(GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO);
if (pte)
pgtable_page_ctor(pte);
if (!pte)
return NULL;
if (!pgtable_page_ctor(pte)) {
__free_page(pte);
return NULL;
}
return pte;
}


@ -343,7 +343,6 @@ config FORCE_MAX_ZONEORDER
config SMP
bool "Symmetric multi-processing support"
select USE_GENERIC_SMP_HELPERS
help
This enables support for systems with more than one CPU. If you have
a system with only one CPU, say N. If you have a system with more


@ -91,7 +91,10 @@ static inline pgtable_t pte_alloc_one(struct mm_struct *mm, unsigned long addr)
if (!pg)
return NULL;
page = virt_to_page(pg);
pgtable_page_ctor(page);
if (!pgtable_page_ctor(page)) {
quicklist_free(0, NULL, pg);
return NULL;
}
return page;
}


@ -275,7 +275,6 @@ source "kernel/Kconfig.preempt"
config SMP
bool "Symmetric multi-processing support"
select USE_GENERIC_SMP_HELPERS
---help---
This enables support for systems with more than one CPU. If you have
a system with only one CPU, like most personal computers, say N. If


@ -43,7 +43,12 @@ static __inline__ pgtable_t pte_alloc_one(struct mm_struct *mm,
{
struct page *pte = alloc_page(GFP_KERNEL|__GFP_ZERO);
pgtable_page_ctor(pte);
if (!pte)
return NULL;
if (!pgtable_page_ctor(pte)) {
__free_page(pte);
return NULL;
}
return pte;
}


@ -56,6 +56,10 @@ static inline struct page *pte_alloc_one(struct mm_struct *mm,
if (!page)
return NULL;
if (!pgtable_page_ctor(page)) {
__free_page(page);
return NULL;
}
pte = kmap(page);
if (pte) {


@ -29,18 +29,22 @@ static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
static inline pgtable_t pte_alloc_one(struct mm_struct *mm, unsigned long address)
{
struct page *page = alloc_pages(GFP_KERNEL|__GFP_REPEAT|__GFP_ZERO, 0);
struct page *page;
pte_t *pte;
page = alloc_pages(GFP_KERNEL|__GFP_REPEAT|__GFP_ZERO, 0);
if(!page)
return NULL;
if (!pgtable_page_ctor(page)) {
__free_page(page);
return NULL;
}
pte = kmap(page);
__flush_page_to_ram(pte);
flush_tlb_kernel_page(pte);
nocache_page(pte);
kunmap(page);
pgtable_page_ctor(page);
return page;
}


@ -59,7 +59,10 @@ static inline pgtable_t pte_alloc_one(struct mm_struct *mm,
return NULL;
clear_highpage(page);
pgtable_page_ctor(page);
if (!pgtable_page_ctor(page)) {
__free_page(page);
return NULL;
}
return page;
}


@ -111,7 +111,6 @@ config METAG_META21
config SMP
bool "Symmetric multi-processing support"
depends on METAG_META21 && METAG_META21_MMU
select USE_GENERIC_SMP_HELPERS
help
This enables support for systems with more than one thread running
Linux. If you have a system with only one thread running Linux,


@ -52,8 +52,12 @@ static inline pgtable_t pte_alloc_one(struct mm_struct *mm,
{
struct page *pte;
pte = alloc_pages(GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO, 0);
if (pte)
pgtable_page_ctor(pte);
if (!pte)
return NULL;
if (!pgtable_page_ctor(pte)) {
__free_page(pte);
return NULL;
}
return pte;
}


@ -122,8 +122,13 @@ static inline struct page *pte_alloc_one(struct mm_struct *mm,
#endif
ptepage = alloc_pages(flags, 0);
if (ptepage)
clear_highpage(ptepage);
if (!ptepage)
return NULL;
clear_highpage(ptepage);
if (!pgtable_page_ctor(ptepage)) {
__free_page(ptepage);
return NULL;
}
return ptepage;
}
@ -158,8 +163,9 @@ extern inline void pte_free_slow(struct page *ptepage)
__free_page(ptepage);
}
extern inline void pte_free(struct mm_struct *mm, struct page *ptepage)
static inline void pte_free(struct mm_struct *mm, struct page *ptepage)
{
pgtable_page_dtor(ptepage);
__free_page(ptepage);
}


@ -2125,7 +2125,6 @@ source "mm/Kconfig"
config SMP
bool "Multi-Processing support"
depends on SYS_SUPPORTS_SMP
select USE_GENERIC_SMP_HELPERS
help
This enables support for systems with more than one CPU. If you have
a system with only one CPU, like most personal computers, say N. If


@ -80,9 +80,12 @@ static inline struct page *pte_alloc_one(struct mm_struct *mm,
struct page *pte;
pte = alloc_pages(GFP_KERNEL | __GFP_REPEAT, PTE_ORDER);
if (pte) {
clear_highpage(pte);
pgtable_page_ctor(pte);
if (!pte)
return NULL;
clear_highpage(pte);
if (!pgtable_page_ctor(pte)) {
__free_page(pte);
return NULL;
}
return pte;
}


@ -181,7 +181,6 @@ endmenu
config SMP
bool "Symmetric multi-processing support"
default y
select USE_GENERIC_SMP_HELPERS
depends on MN10300_PROC_MN2WS0038 || MN10300_PROC_MN2WS0050
---help---
This enables support for systems with more than one CPU. If you have


@ -46,6 +46,7 @@ static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
static inline void pte_free(struct mm_struct *mm, struct page *pte)
{
pgtable_page_dtor(pte);
__free_page(pte);
}


@ -78,8 +78,13 @@ struct page *pte_alloc_one(struct mm_struct *mm, unsigned long address)
#else
pte = alloc_pages(GFP_KERNEL|__GFP_REPEAT, 0);
#endif
if (pte)
clear_highpage(pte);
if (!pte)
return NULL;
clear_highpage(pte);
if (!pgtable_page_ctor(pte)) {
__free_page(pte);
return NULL;
}
return pte;
}


@ -78,8 +78,13 @@ static inline struct page *pte_alloc_one(struct mm_struct *mm,
{
struct page *pte;
pte = alloc_pages(GFP_KERNEL|__GFP_REPEAT, 0);
if (pte)
clear_page(page_address(pte));
if (!pte)
return NULL;
clear_page(page_address(pte));
if (!pgtable_page_ctor(pte)) {
__free_page(pte);
return NULL;
}
return pte;
}
@ -90,6 +95,7 @@ static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
static inline void pte_free(struct mm_struct *mm, struct page *pte)
{
pgtable_page_dtor(pte);
__free_page(pte);
}


@ -226,7 +226,6 @@ endchoice
config SMP
bool "Symmetric multi-processing support"
select USE_GENERIC_SMP_HELPERS
---help---
This enables support for systems with more than one CPU. If you have
a system with only one CPU, like most personal computers, say N. If


@ -121,8 +121,12 @@ static inline pgtable_t
pte_alloc_one(struct mm_struct *mm, unsigned long address)
{
struct page *page = alloc_page(GFP_KERNEL|__GFP_REPEAT|__GFP_ZERO);
if (page)
pgtable_page_ctor(page);
if (!page)
return NULL;
if (!pgtable_page_ctor(page)) {
__free_page(page);
return NULL;
}
return page;
}


@ -106,7 +106,6 @@ config PPC
select HAVE_MEMBLOCK_NODE_MAP
select HAVE_DMA_ATTRS
select HAVE_DMA_API_DEBUG
select USE_GENERIC_SMP_HELPERS if SMP
select HAVE_OPROFILE
select HAVE_DEBUG_KMEMLEAK
select GENERIC_ATOMIC64 if PPC32


@ -91,7 +91,10 @@ static inline pgtable_t pte_alloc_one(struct mm_struct *mm,
if (!pte)
return NULL;
page = virt_to_page(pte);
pgtable_page_ctor(page);
if (!pgtable_page_ctor(page)) {
__free_page(page);
return NULL;
}
return page;
}


@ -121,7 +121,10 @@ pgtable_t pte_alloc_one(struct mm_struct *mm, unsigned long address)
ptepage = alloc_pages(flags, 0);
if (!ptepage)
return NULL;
pgtable_page_ctor(ptepage);
if (!pgtable_page_ctor(ptepage)) {
__free_page(ptepage);
return NULL;
}
return ptepage;
}


@ -378,6 +378,10 @@ static pte_t *__alloc_for_cache(struct mm_struct *mm, int kernel)
__GFP_REPEAT | __GFP_ZERO);
if (!page)
return NULL;
if (!kernel && !pgtable_page_ctor(page)) {
__free_page(page);
return NULL;
}
ret = page_address(page);
spin_lock(&mm->page_table_lock);
@ -392,9 +396,6 @@ static pte_t *__alloc_for_cache(struct mm_struct *mm, int kernel)
}
spin_unlock(&mm->page_table_lock);
if (!kernel)
pgtable_page_ctor(page);
return (pte_t *)ret;
}


@ -452,7 +452,7 @@ static int kw_i2c_xfer(struct pmac_i2c_bus *bus, u8 addrdir, int subsize,
*/
if (use_irq) {
/* Clear completion */
INIT_COMPLETION(host->complete);
reinit_completion(&host->complete);
/* Ack stale interrupts */
kw_write_reg(reg_isr, kw_read_reg(reg_isr));
/* Arm timeout */
@ -717,7 +717,7 @@ static int pmu_i2c_xfer(struct pmac_i2c_bus *bus, u8 addrdir, int subsize,
return -EINVAL;
}
INIT_COMPLETION(comp);
reinit_completion(&comp);
req->data[0] = PMU_I2C_CMD;
req->reply[0] = 0xff;
req->nbytes = sizeof(struct pmu_i2c_hdr) + 1;
@ -748,7 +748,7 @@ static int pmu_i2c_xfer(struct pmac_i2c_bus *bus, u8 addrdir, int subsize,
hdr->bus = PMU_I2C_BUS_STATUS;
INIT_COMPLETION(comp);
reinit_completion(&comp);
req->data[0] = PMU_I2C_CMD;
req->reply[0] = 0xff;
req->nbytes = 2;


@ -106,7 +106,7 @@ static int pseries_prepare_late(void)
atomic_set(&suspend_data.done, 0);
atomic_set(&suspend_data.error, 0);
suspend_data.complete = &suspend_work;
INIT_COMPLETION(suspend_work);
reinit_completion(&suspend_work);
return 0;
}


@ -141,7 +141,6 @@ config S390
select OLD_SIGACTION
select OLD_SIGSUSPEND3
select SYSCTL_EXCEPTION_TRACE
select USE_GENERIC_SMP_HELPERS if SMP
select VIRT_CPU_ACCOUNTING
select VIRT_TO_BUS


@ -772,7 +772,11 @@ static inline unsigned long *page_table_alloc_pgste(struct mm_struct *mm,
__free_page(page);
return NULL;
}
pgtable_page_ctor(page);
if (!pgtable_page_ctor(page)) {
kfree(mp);
__free_page(page);
return NULL;
}
mp->vmaddr = vmaddr & PMD_MASK;
INIT_LIST_HEAD(&mp->mapper);
page->index = (unsigned long) mp;
@ -902,7 +906,10 @@ unsigned long *page_table_alloc(struct mm_struct *mm, unsigned long vmaddr)
page = alloc_page(GFP_KERNEL|__GFP_REPEAT);
if (!page)
return NULL;
pgtable_page_ctor(page);
if (!pgtable_page_ctor(page)) {
__free_page(page);
return NULL;
}
atomic_set(&page->_mapcount, 1);
table = (unsigned long *) page_to_phys(page);
clear_table(table, _PAGE_INVALID, PAGE_SIZE);
@ -1244,11 +1251,11 @@ void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
assert_spin_locked(&mm->page_table_lock);
/* FIFO */
if (!mm->pmd_huge_pte)
if (!pmd_huge_pte(mm, pmdp))
INIT_LIST_HEAD(lh);
else
list_add(lh, (struct list_head *) mm->pmd_huge_pte);
mm->pmd_huge_pte = pgtable;
list_add(lh, (struct list_head *) pmd_huge_pte(mm, pmdp));
pmd_huge_pte(mm, pmdp) = pgtable;
}
pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
@ -1260,12 +1267,12 @@ pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
assert_spin_locked(&mm->page_table_lock);
/* FIFO */
pgtable = mm->pmd_huge_pte;
pgtable = pmd_huge_pte(mm, pmdp);
lh = (struct list_head *) pgtable;
if (list_empty(lh))
mm->pmd_huge_pte = NULL;
pmd_huge_pte(mm, pmdp) = NULL;
else {
mm->pmd_huge_pte = (pgtable_t) lh->next;
pmd_huge_pte(mm, pmdp) = (pgtable_t) lh->next;
list_del(lh);
}
ptep = (pte_t *) pgtable;


@ -54,9 +54,12 @@ static inline struct page *pte_alloc_one(struct mm_struct *mm,
struct page *pte;
pte = alloc_pages(GFP_KERNEL | __GFP_REPEAT, PTE_ORDER);
if (pte) {
clear_highpage(pte);
pgtable_page_ctor(pte);
if (!pte)
return NULL;
clear_highpage(pte);
if (!pgtable_page_ctor(pte)) {
__free_page(pte);
return NULL;
}
return pte;
}


@ -711,7 +711,6 @@ config CC_STACKPROTECTOR
config SMP
bool "Symmetric multi-processing support"
depends on SYS_SUPPORTS_SMP
select USE_GENERIC_SMP_HELPERS
---help---
This enables support for systems with more than one CPU. If you have
a system with only one CPU, like most personal computers, say N. If


@ -47,7 +47,10 @@ static inline pgtable_t pte_alloc_one(struct mm_struct *mm,
if (!pg)
return NULL;
page = virt_to_page(pg);
pgtable_page_ctor(page);
if (!pgtable_page_ctor(page)) {
quicklist_free(QUICK_PT, NULL, pg);
return NULL;
}
return page;
}


@ -28,7 +28,6 @@ config SPARC
select HAVE_ARCH_JUMP_LABEL
select GENERIC_IRQ_SHOW
select ARCH_WANT_IPC_PARSE_VERSION
select USE_GENERIC_SMP_HELPERS if SMP
select GENERIC_PCI_IOMAP
select HAVE_NMI_WATCHDOG if SPARC64
select HAVE_BPF_JIT


@ -2519,12 +2519,13 @@ pgtable_t pte_alloc_one(struct mm_struct *mm,
return pte;
page = __alloc_for_cache(mm);
if (page) {
pgtable_page_ctor(page);
pte = (pte_t *) page_address(page);
if (!page)
return NULL;
if (!pgtable_page_ctor(page)) {
free_hot_cold_page(page, 0);
return NULL;
}
return pte;
return (pte_t *) page_address(page);
}
void pte_free_kernel(struct mm_struct *mm, pte_t *pte)


@ -345,7 +345,10 @@ pgtable_t pte_alloc_one(struct mm_struct *mm, unsigned long address)
if ((pte = (unsigned long)pte_alloc_one_kernel(mm, address)) == 0)
return NULL;
page = pfn_to_page(__nocache_pa(pte) >> PAGE_SHIFT);
pgtable_page_ctor(page);
if (!pgtable_page_ctor(page)) {
__free_page(page);
return NULL;
}
return page;
}


@ -196,11 +196,11 @@ void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
assert_spin_locked(&mm->page_table_lock);
/* FIFO */
if (!mm->pmd_huge_pte)
if (!pmd_huge_pte(mm, pmdp))
INIT_LIST_HEAD(lh);
else
list_add(lh, (struct list_head *) mm->pmd_huge_pte);
mm->pmd_huge_pte = pgtable;
list_add(lh, (struct list_head *) pmd_huge_pte(mm, pmdp));
pmd_huge_pte(mm, pmdp) = pgtable;
}
pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
@ -211,12 +211,12 @@ pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
assert_spin_locked(&mm->page_table_lock);
/* FIFO */
pgtable = mm->pmd_huge_pte;
pgtable = pmd_huge_pte(mm, pmdp);
lh = (struct list_head *) pgtable;
if (list_empty(lh))
mm->pmd_huge_pte = NULL;
pmd_huge_pte(mm, pmdp) = NULL;
else {
mm->pmd_huge_pte = (pgtable_t) lh->next;
pmd_huge_pte(mm, pmdp) = (pgtable_t) lh->next;
list_del(lh);
}
pte_val(pgtable[0]) = 0;


@ -8,7 +8,6 @@ config TILE
select HAVE_KVM if !TILEGX
select GENERIC_FIND_FIRST_BIT
select SYSCTL_EXCEPTION_TRACE
select USE_GENERIC_SMP_HELPERS
select CC_OPTIMIZE_FOR_SIZE
select HAVE_DEBUG_KMEMLEAK
select GENERIC_IRQ_PROBE


@ -241,6 +241,11 @@ struct page *pgtable_alloc_one(struct mm_struct *mm, unsigned long address,
if (p == NULL)
return NULL;
if (!pgtable_page_ctor(p)) {
__free_pages(p, L2_USER_PGTABLE_ORDER);
return NULL;
}
/*
* Make every page have a page_count() of one, not just the first.
* We don't use __GFP_COMP since it doesn't look like it works
@ -251,7 +256,6 @@ struct page *pgtable_alloc_one(struct mm_struct *mm, unsigned long address,
inc_zone_page_state(p+i, NR_PAGETABLE);
}
pgtable_page_ctor(p);
return p;
}


@ -279,8 +279,12 @@ pgtable_t pte_alloc_one(struct mm_struct *mm, unsigned long address)
struct page *pte;
pte = alloc_page(GFP_KERNEL|__GFP_REPEAT|__GFP_ZERO);
if (pte)
pgtable_page_ctor(pte);
if (!pte)
return NULL;
if (!pgtable_page_ctor(pte)) {
__free_page(pte);
return NULL;
}
return pte;
}


@ -51,12 +51,14 @@ pte_alloc_one(struct mm_struct *mm, unsigned long addr)
struct page *pte;
pte = alloc_pages(PGALLOC_GFP, 0);
if (pte) {
if (!PageHighMem(pte)) {
void *page = page_address(pte);
clean_dcache_area(page, PTRS_PER_PTE * sizeof(pte_t));
}
pgtable_page_ctor(pte);
if (!pte)
return NULL;
if (!PageHighMem(pte)) {
void *page = page_address(pte);
clean_dcache_area(page, PTRS_PER_PTE * sizeof(pte_t));
}
if (!pgtable_page_ctor(pte)) {
__free_page(pte);
return NULL;
}
return pte;


@ -90,7 +90,6 @@ config X86
select GENERIC_IRQ_SHOW
select GENERIC_CLOCKEVENTS_MIN_ADJUST
select IRQ_FORCED_THREADING
select USE_GENERIC_SMP_HELPERS if SMP
select HAVE_BPF_JIT if X86_64
select HAVE_ARCH_TRANSPARENT_HUGEPAGE
select CLKEVT_I8253
@ -1885,6 +1884,10 @@ config USE_PERCPU_NUMA_NODE_ID
def_bool y
depends on NUMA
config ARCH_ENABLE_SPLIT_PMD_PTLOCK
def_bool y
depends on X86_64 || X86_PAE
menu "Power management and ACPI options"
config ARCH_HIBERNATION_HEADER


@ -80,12 +80,21 @@ static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
#if PAGETABLE_LEVELS > 2
static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
{
return (pmd_t *)get_zeroed_page(GFP_KERNEL|__GFP_REPEAT);
struct page *page;
page = alloc_pages(GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO, 0);
if (!page)
return NULL;
if (!pgtable_pmd_page_ctor(page)) {
__free_pages(page, 0);
return NULL;
}
return (pmd_t *)page_address(page);
}
static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
{
BUG_ON((unsigned long)pmd & (PAGE_SIZE-1));
pgtable_pmd_page_dtor(virt_to_page(pmd));
free_page((unsigned long)pmd);
}


@ -25,8 +25,12 @@ pgtable_t pte_alloc_one(struct mm_struct *mm, unsigned long address)
struct page *pte;
pte = alloc_pages(__userpte_alloc_gfp, 0);
if (pte)
pgtable_page_ctor(pte);
if (!pte)
return NULL;
if (!pgtable_page_ctor(pte)) {
__free_page(pte);
return NULL;
}
return pte;
}
@ -189,8 +193,10 @@ static void free_pmds(pmd_t *pmds[])
int i;
for(i = 0; i < PREALLOCATED_PMDS; i++)
if (pmds[i])
if (pmds[i]) {
pgtable_pmd_page_dtor(virt_to_page(pmds[i]));
free_page((unsigned long)pmds[i]);
}
}
static int preallocate_pmds(pmd_t *pmds[])
@ -200,8 +206,13 @@ static int preallocate_pmds(pmd_t *pmds[])
for(i = 0; i < PREALLOCATED_PMDS; i++) {
pmd_t *pmd = (pmd_t *)__get_free_page(PGALLOC_GFP);
if (pmd == NULL)
if (!pmd)
failed = true;
if (pmd && !pgtable_pmd_page_ctor(virt_to_page(pmd))) {
free_page((unsigned long)pmds[i]);
pmd = NULL;
failed = true;
}
pmds[i] = pmd;
}


@ -796,8 +796,8 @@ static spinlock_t *xen_pte_lock(struct page *page, struct mm_struct *mm)
{
spinlock_t *ptl = NULL;
#if USE_SPLIT_PTLOCKS
ptl = __pte_lockptr(page);
#if USE_SPLIT_PTE_PTLOCKS
ptl = ptlock_ptr(page);
spin_lock_nest_lock(ptl, &mm->page_table_lock);
#endif
@ -1637,7 +1637,7 @@ static inline void xen_alloc_ptpage(struct mm_struct *mm, unsigned long pfn,
__set_pfn_prot(pfn, PAGE_KERNEL_RO);
if (level == PT_PTE && USE_SPLIT_PTLOCKS)
if (level == PT_PTE && USE_SPLIT_PTE_PTLOCKS)
__pin_pagetable_pfn(MMUEXT_PIN_L1_TABLE, pfn);
xen_mc_issue(PARAVIRT_LAZY_MMU);
@ -1671,7 +1671,7 @@ static inline void xen_release_ptpage(unsigned long pfn, unsigned level)
if (!PageHighMem(page)) {
xen_mc_batch();
if (level == PT_PTE && USE_SPLIT_PTLOCKS)
if (level == PT_PTE && USE_SPLIT_PTE_PTLOCKS)
__pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, pfn);
__set_pfn_prot(pfn, PAGE_KERNEL);


@ -38,35 +38,46 @@ static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
free_page((unsigned long)pgd);
}
/* Use a slab cache for the pte pages (see also sparc64 implementation) */
extern struct kmem_cache *pgtable_cache;
static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
unsigned long address)
{
return kmem_cache_alloc(pgtable_cache, GFP_KERNEL|__GFP_REPEAT);
pte_t *ptep;
int i;
ptep = (pte_t *)__get_free_page(GFP_KERNEL|__GFP_REPEAT);
if (!ptep)
return NULL;
for (i = 0; i < 1024; i++)
pte_clear(NULL, 0, ptep + i);
return ptep;
}
static inline pgtable_t pte_alloc_one(struct mm_struct *mm,
unsigned long addr)
{
pte_t *pte;
struct page *page;
page = virt_to_page(pte_alloc_one_kernel(mm, addr));
pgtable_page_ctor(page);
pte = pte_alloc_one_kernel(mm, addr);
if (!pte)
return NULL;
page = virt_to_page(pte);
if (!pgtable_page_ctor(page)) {
__free_page(page);
return NULL;
}
return page;
}
static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
{
kmem_cache_free(pgtable_cache, pte);
free_page((unsigned long)pte);
}
static inline void pte_free(struct mm_struct *mm, pgtable_t pte)
{
pgtable_page_dtor(pte);
kmem_cache_free(pgtable_cache, page_address(pte));
__free_page(pte);
}
#define pmd_pgtable(pmd) pmd_page(pmd)


@ -220,12 +220,11 @@ extern unsigned long empty_zero_page[1024];
#ifdef CONFIG_MMU
extern pgd_t swapper_pg_dir[PAGE_SIZE/sizeof(pgd_t)];
extern void paging_init(void);
extern void pgtable_cache_init(void);
#else
# define swapper_pg_dir NULL
static inline void paging_init(void) { }
static inline void pgtable_cache_init(void) { }
#endif
static inline void pgtable_cache_init(void) { }
/*
* The pmd contains the kernel virtual address of the pte page.


@ -50,23 +50,3 @@ void __init init_mmu(void)
*/
set_ptevaddr_register(PGTABLE_START);
}
struct kmem_cache *pgtable_cache __read_mostly;
static void pgd_ctor(void *addr)
{
pte_t *ptep = (pte_t *)addr;
int i;
for (i = 0; i < 1024; i++, ptep++)
pte_clear(NULL, 0, ptep);
}
void __init pgtable_cache_init(void)
{
pgtable_cache = kmem_cache_create("pgd",
PAGE_SIZE, PAGE_SIZE,
SLAB_HWCACHE_ALIGN,
pgd_ctor);
}


@ -319,7 +319,7 @@ void __blk_mq_end_io(struct request *rq, int error)
blk_mq_complete_request(rq, error);
}
#if defined(CONFIG_SMP) && defined(CONFIG_USE_GENERIC_SMP_HELPERS)
#if defined(CONFIG_SMP)
/*
* Called with interrupts disabled.
@ -361,7 +361,7 @@ static int ipi_remote_cpu(struct blk_mq_ctx *ctx, const int cpu,
return true;
}
#else /* CONFIG_SMP && CONFIG_USE_GENERIC_SMP_HELPERS */
#else /* CONFIG_SMP */
static int ipi_remote_cpu(struct blk_mq_ctx *ctx, const int cpu,
struct request *rq, const int error)
{


@ -36,7 +36,7 @@ static void blk_done_softirq(struct softirq_action *h)
}
}
#if defined(CONFIG_SMP) && defined(CONFIG_USE_GENERIC_SMP_HELPERS)
#ifdef CONFIG_SMP
static void trigger_softirq(void *data)
{
struct request *rq = data;
@ -71,7 +71,7 @@ static int raise_blk_irq(int cpu, struct request *rq)
return 1;
}
#else /* CONFIG_SMP && CONFIG_USE_GENERIC_SMP_HELPERS */
#else /* CONFIG_SMP */
static int raise_blk_irq(int cpu, struct request *rq)
{
return 1;


@ -288,7 +288,7 @@ static ssize_t
queue_rq_affinity_store(struct request_queue *q, const char *page, size_t count)
{
ssize_t ret = -EINVAL;
#if defined(CONFIG_USE_GENERIC_SMP_HELPERS)
#ifdef CONFIG_SMP
unsigned long val;
ret = queue_var_store(&val, page, count);


@ -434,7 +434,7 @@ int af_alg_wait_for_completion(int err, struct af_alg_completion *completion)
case -EINPROGRESS:
case -EBUSY:
wait_for_completion(&completion->completion);
INIT_COMPLETION(completion->completion);
reinit_completion(&completion->completion);
err = completion->err;
break;
};


@ -493,7 +493,7 @@ static inline int do_one_ahash_op(struct ahash_request *req, int ret)
ret = wait_for_completion_interruptible(&tr->completion);
if (!ret)
ret = tr->err;
INIT_COMPLETION(tr->completion);
reinit_completion(&tr->completion);
}
return ret;
}
@ -721,7 +721,7 @@ static inline int do_one_acipher_op(struct ablkcipher_request *req, int ret)
ret = wait_for_completion_interruptible(&tr->completion);
if (!ret)
ret = tr->err;
INIT_COMPLETION(tr->completion);
reinit_completion(&tr->completion);
}
return ret;


@ -179,7 +179,7 @@ static int do_one_async_hash_op(struct ahash_request *req,
ret = wait_for_completion_interruptible(&tr->completion);
if (!ret)
ret = tr->err;
INIT_COMPLETION(tr->completion);
reinit_completion(&tr->completion);
}
return ret;
}
@ -336,7 +336,7 @@ static int __test_hash(struct crypto_ahash *tfm, struct hash_testvec *template,
ret = wait_for_completion_interruptible(
&tresult.completion);
if (!ret && !(ret = tresult.err)) {
INIT_COMPLETION(tresult.completion);
reinit_completion(&tresult.completion);
break;
}
/* fall through */
@ -543,7 +543,7 @@ static int __test_aead(struct crypto_aead *tfm, int enc,
ret = wait_for_completion_interruptible(
&result.completion);
if (!ret && !(ret = result.err)) {
INIT_COMPLETION(result.completion);
reinit_completion(&result.completion);
break;
}
case -EBADMSG:
@ -697,7 +697,7 @@ static int __test_aead(struct crypto_aead *tfm, int enc,
ret = wait_for_completion_interruptible(
&result.completion);
if (!ret && !(ret = result.err)) {
INIT_COMPLETION(result.completion);
reinit_completion(&result.completion);
break;
}
case -EBADMSG:
@ -983,7 +983,7 @@ static int __test_skcipher(struct crypto_ablkcipher *tfm, int enc,
ret = wait_for_completion_interruptible(
&result.completion);
if (!ret && !((ret = result.err))) {
INIT_COMPLETION(result.completion);
reinit_completion(&result.completion);
break;
}
/* fall through */
@ -1086,7 +1086,7 @@ static int __test_skcipher(struct crypto_ablkcipher *tfm, int enc,
ret = wait_for_completion_interruptible(
&result.completion);
if (!ret && !((ret = result.err))) {
INIT_COMPLETION(result.completion);
reinit_completion(&result.completion);
break;
}
/* fall through */


@ -3017,7 +3017,7 @@ static inline void ata_eh_pull_park_action(struct ata_port *ap)
* ourselves at the beginning of each pass over the loop.
*
* Additionally, all write accesses to &ap->park_req_pending
* through INIT_COMPLETION() (see below) or complete_all()
* through reinit_completion() (see below) or complete_all()
* (see ata_scsi_park_store()) are protected by the host lock.
* As a result we have that park_req_pending.done is zero on
* exit from this function, i.e. when ATA_EH_PARK actions for
@ -3031,7 +3031,7 @@ static inline void ata_eh_pull_park_action(struct ata_port *ap)
*/
spin_lock_irqsave(ap->lock, flags);
INIT_COMPLETION(ap->park_req_pending);
reinit_completion(&ap->park_req_pending);
ata_for_each_link(link, ap, EDGE) {
ata_for_each_dev(dev, link, ALL) {
struct ata_eh_info *ehi = &link->eh_info;


@ -757,7 +757,7 @@ void dpm_resume(pm_message_t state)
async_error = 0;
list_for_each_entry(dev, &dpm_suspended_list, power.entry) {
INIT_COMPLETION(dev->power.completion);
reinit_completion(&dev->power.completion);
if (is_async(dev)) {
get_device(dev);
async_schedule(async_resume, dev);
@ -1237,7 +1237,7 @@ static void async_suspend(void *data, async_cookie_t cookie)
static int device_suspend(struct device *dev)
{
INIT_COMPLETION(dev->power.completion);
reinit_completion(&dev->power.completion);
if (pm_async_enabled && dev->power.async_suspend) {
get_device(dev);


@ -343,7 +343,7 @@ static int fd_motor_on(int nr)
unit[nr].motor = 1;
fd_select(nr);
INIT_COMPLETION(motor_on_completion);
reinit_completion(&motor_on_completion);
motor_on_timer.data = nr;
mod_timer(&motor_on_timer, jiffies + HZ/2);


@ -2808,7 +2808,7 @@ static int sendcmd_withirq_core(ctlr_info_t *h, CommandList_struct *c,
/* erase the old error information */
memset(c->err_info, 0, sizeof(ErrorInfo_struct));
return_status = IO_OK;
INIT_COMPLETION(wait);
reinit_completion(&wait);
goto resend_cmd2;
}
@ -3669,7 +3669,7 @@ static int add_to_scan_list(struct ctlr_info *h)
}
}
if (!found && !h->busy_scanning) {
INIT_COMPLETION(h->scan_wait);
reinit_completion(&h->scan_wait);
list_add_tail(&h->scan_list, &scan_q);
ret = 1;
}


@ -79,7 +79,7 @@ static int timeriomem_rng_data_read(struct hwrng *rng, u32 *data)
priv->expires = cur + delay;
priv->present = 0;
INIT_COMPLETION(priv->completion);
reinit_completion(&priv->completion);
mod_timer(&priv->timer, priv->expires);
return 4;


@ -268,7 +268,7 @@ static int aes_start_crypt(struct tegra_aes_dev *dd, u32 in_addr, u32 out_addr,
aes_writel(dd, value, TEGRA_AES_SECURE_INPUT_SELECT);
aes_writel(dd, out_addr, TEGRA_AES_SECURE_DEST_ADDR);
INIT_COMPLETION(dd->op_complete);
reinit_completion(&dd->op_complete);
for (i = 0; i < AES_HW_MAX_ICQ_LENGTH - 1; i++) {
do {


@ -477,7 +477,7 @@ void fw_send_phy_config(struct fw_card *card,
phy_config_packet.header[1] = data;
phy_config_packet.header[2] = ~data;
phy_config_packet.generation = generation;
INIT_COMPLETION(phy_config_done);
reinit_completion(&phy_config_done);
card->driver->send_request(card, &phy_config_packet);
wait_for_completion_timeout(&phy_config_done, timeout);


@ -34,7 +34,7 @@
*/
void drm_flip_work_queue(struct drm_flip_work *work, void *val)
{
if (kfifo_put(&work->fifo, (const void **)&val)) {
if (kfifo_put(&work->fifo, val)) {
atomic_inc(&work->pending);
} else {
DRM_ERROR("%s fifo full!\n", work->name);


@ -99,7 +99,7 @@ static int xfer_read(struct i2c_adapter *adap, struct i2c_msg *pmsg)
i2c_dev->status = I2C_STAT_INIT;
i2c_dev->msg = pmsg;
i2c_dev->buf_offset = 0;
INIT_COMPLETION(i2c_dev->complete);
reinit_completion(&i2c_dev->complete);
/* Enable I2C transaction */
temp = ((pmsg->len) << 20) | HI2C_EDID_READ | HI2C_ENABLE_TRANSACTION;


@ -327,7 +327,7 @@ static inline void wiimote_cmd_acquire_noint(struct wiimote_data *wdata)
static inline void wiimote_cmd_set(struct wiimote_data *wdata, int cmd,
__u32 opt)
{
INIT_COMPLETION(wdata->state.ready);
reinit_completion(&wdata->state.ready);
wdata->state.cmd = cmd;
wdata->state.opt = opt;
}


@ -66,7 +66,7 @@ static ssize_t jz4740_hwmon_read_adcin(struct device *dev,
mutex_lock(&hwmon->lock);
INIT_COMPLETION(*completion);
reinit_completion(completion);
enable_irq(hwmon->irq);
hwmon->cell->enable(to_platform_device(dev));


@ -371,7 +371,7 @@ static int at91_do_twi_transfer(struct at91_twi_dev *dev)
dev_dbg(dev->dev, "transfer: %s %d bytes.\n",
(dev->msg->flags & I2C_M_RD) ? "read" : "write", dev->buf_len);
INIT_COMPLETION(dev->cmd_complete);
reinit_completion(&dev->cmd_complete);
dev->transfer_status = 0;
if (!dev->buf_len) {


@ -151,7 +151,7 @@ static int bcm2835_i2c_xfer_msg(struct bcm2835_i2c_dev *i2c_dev,
i2c_dev->msg_buf = msg->buf;
i2c_dev->msg_buf_remaining = msg->len;
INIT_COMPLETION(i2c_dev->completion);
reinit_completion(&i2c_dev->completion);
bcm2835_i2c_writel(i2c_dev, BCM2835_I2C_C, BCM2835_I2C_C_CLEAR);


@ -323,7 +323,7 @@ i2c_davinci_xfer_msg(struct i2c_adapter *adap, struct i2c_msg *msg, int stop)
davinci_i2c_write_reg(dev, DAVINCI_I2C_CNT_REG, dev->buf_len);
INIT_COMPLETION(dev->cmd_complete);
reinit_completion(&dev->cmd_complete);
dev->cmd_err = 0;
/* Take I2C out of reset and configure it as master */


@ -613,7 +613,7 @@ i2c_dw_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[], int num)
mutex_lock(&dev->lock);
pm_runtime_get_sync(dev->dev);
INIT_COMPLETION(dev->cmd_complete);
reinit_completion(&dev->cmd_complete);
dev->msgs = msgs;
dev->msgs_num = num;
dev->cmd_err = 0;


@ -541,7 +541,7 @@ static int ismt_access(struct i2c_adapter *adap, u16 addr,
desc->dptr_high = upper_32_bits(dma_addr);
}
INIT_COMPLETION(priv->cmp);
reinit_completion(&priv->cmp);
/* Add the descriptor */
ismt_submit_desc(priv);


@ -505,7 +505,7 @@ static int mxs_i2c_xfer_msg(struct i2c_adapter *adap, struct i2c_msg *msg,
return err;
}
} else {
INIT_COMPLETION(i2c->cmd_complete);
reinit_completion(&i2c->cmd_complete);
ret = mxs_i2c_dma_setup_xfer(adap, msg, flags);
if (ret)
return ret;


@ -543,7 +543,7 @@ static int omap_i2c_xfer_msg(struct i2c_adapter *adap,
w |= OMAP_I2C_BUF_RXFIF_CLR | OMAP_I2C_BUF_TXFIF_CLR;
omap_i2c_write_reg(dev, OMAP_I2C_BUF_REG, w);
INIT_COMPLETION(dev->cmd_complete);
reinit_completion(&dev->cmd_complete);
dev->cmd_err = 0;
w = OMAP_I2C_CON_EN | OMAP_I2C_CON_MST | OMAP_I2C_CON_STT;


@ -544,7 +544,7 @@ static int tegra_i2c_xfer_msg(struct tegra_i2c_dev *i2c_dev,
i2c_dev->msg_buf_remaining = msg->len;
i2c_dev->msg_err = I2C_ERR_NONE;
i2c_dev->msg_read = (msg->flags & I2C_M_RD);
INIT_COMPLETION(i2c_dev->msg_complete);
reinit_completion(&i2c_dev->msg_complete);
packet_header = (0 << PACKET_HEADER0_HEADER_SIZE_SHIFT) |
PACKET_HEADER0_PROTOCOL_I2C |


@ -158,7 +158,7 @@ static int wmt_i2c_write(struct i2c_adapter *adap, struct i2c_msg *pmsg,
writew(val, i2c_dev->base + REG_CR);
}
INIT_COMPLETION(i2c_dev->complete);
reinit_completion(&i2c_dev->complete);
if (i2c_dev->mode == I2C_MODE_STANDARD)
tcr_val = TCR_STANDARD_MODE;
@ -247,7 +247,7 @@ static int wmt_i2c_read(struct i2c_adapter *adap, struct i2c_msg *pmsg,
writew(val, i2c_dev->base + REG_CR);
}
INIT_COMPLETION(i2c_dev->complete);
reinit_completion(&i2c_dev->complete);
if (i2c_dev->mode == I2C_MODE_STANDARD)
tcr_val = TCR_STANDARD_MODE;


@ -188,7 +188,7 @@ static int ad_sd_calibrate(struct ad_sigma_delta *sigma_delta,
spi_bus_lock(sigma_delta->spi->master);
sigma_delta->bus_locked = true;
INIT_COMPLETION(sigma_delta->completion);
reinit_completion(&sigma_delta->completion);
ret = ad_sigma_delta_set_mode(sigma_delta, mode);
if (ret < 0)
@ -259,7 +259,7 @@ int ad_sigma_delta_single_conversion(struct iio_dev *indio_dev,
spi_bus_lock(sigma_delta->spi->master);
sigma_delta->bus_locked = true;
INIT_COMPLETION(sigma_delta->completion);
reinit_completion(&sigma_delta->completion);
ad_sigma_delta_set_mode(sigma_delta, AD_SD_MODE_SINGLE);
@ -343,7 +343,7 @@ static int ad_sd_buffer_postdisable(struct iio_dev *indio_dev)
{
struct ad_sigma_delta *sigma_delta = iio_device_get_drvdata(indio_dev);
INIT_COMPLETION(sigma_delta->completion);
reinit_completion(&sigma_delta->completion);
wait_for_completion_timeout(&sigma_delta->completion, HZ);
if (!sigma_delta->irq_dis) {


@ -190,7 +190,7 @@ static int nau7802_read_irq(struct iio_dev *indio_dev,
struct nau7802_state *st = iio_priv(indio_dev);
int ret;
INIT_COMPLETION(st->value_ok);
reinit_completion(&st->value_ok);
enable_irq(st->client->irq);
nau7802_sync(st);


@ -56,7 +56,7 @@ int iio_push_event(struct iio_dev *indio_dev, u64 ev_code, s64 timestamp)
ev.id = ev_code;
ev.timestamp = timestamp;
copied = kfifo_put(&ev_int->det_events, &ev);
copied = kfifo_put(&ev_int->det_events, ev);
if (copied != 0)
wake_up_locked_poll(&ev_int->wait, POLLIN);
}


@ -242,7 +242,7 @@ static int cyttsp_soft_reset(struct cyttsp *ts)
int retval;
/* wait for interrupt to set ready completion */
INIT_COMPLETION(ts->bl_ready);
reinit_completion(&ts->bl_ready);
ts->state = CY_BL_STATE;
enable_irq(ts->irq);


@ -1212,7 +1212,10 @@ static int arm_smmu_alloc_init_pte(struct arm_smmu_device *smmu, pmd_t *pmd,
arm_smmu_flush_pgtable(smmu, page_address(table),
ARM_SMMU_PTE_HWTABLE_SIZE);
pgtable_page_ctor(table);
if (!pgtable_page_ctor(table)) {
__free_page(table);
return -ENOMEM;
}
pmd_populate(NULL, pmd, table);
arm_smmu_flush_pgtable(smmu, pmd, sizeof(*pmd));
}

View file

@ -950,7 +950,7 @@ static int crypt_convert(struct crypt_config *cc,
/* async */
case -EBUSY:
wait_for_completion(&ctx->restart);
INIT_COMPLETION(ctx->restart);
reinit_completion(&ctx->restart);
/* fall through*/
case -EINPROGRESS:
this_cc->req = NULL;


@ -293,20 +293,6 @@ static void __release_stripe(struct r5conf *conf, struct stripe_head *sh)
do_release_stripe(conf, sh);
}
static struct llist_node *llist_reverse_order(struct llist_node *head)
{
struct llist_node *new_head = NULL;
while (head) {
struct llist_node *tmp = head;
head = head->next;
tmp->next = new_head;
new_head = tmp;
}
return new_head;
}
/* should hold conf->device_lock already */
static int release_stripe_list(struct r5conf *conf)
{


@ -422,7 +422,7 @@ static int bcap_start_streaming(struct vb2_queue *vq, unsigned int count)
return ret;
}
INIT_COMPLETION(bcap_dev->comp);
reinit_completion(&bcap_dev->comp);
bcap_dev->stop = false;
return 0;
}


@ -375,7 +375,7 @@ static int wl1273_fm_set_tx_freq(struct wl1273_device *radio, unsigned int freq)
if (r)
return r;
INIT_COMPLETION(radio->busy);
reinit_completion(&radio->busy);
/* wait for the FR IRQ */
r = wait_for_completion_timeout(&radio->busy, msecs_to_jiffies(2000));
@ -389,7 +389,7 @@ static int wl1273_fm_set_tx_freq(struct wl1273_device *radio, unsigned int freq)
if (r)
return r;
INIT_COMPLETION(radio->busy);
reinit_completion(&radio->busy);
/* wait for the POWER_ENB IRQ */
r = wait_for_completion_timeout(&radio->busy, msecs_to_jiffies(1000));
@ -444,7 +444,7 @@ static int wl1273_fm_set_rx_freq(struct wl1273_device *radio, unsigned int freq)
goto err;
}
INIT_COMPLETION(radio->busy);
reinit_completion(&radio->busy);
r = wait_for_completion_timeout(&radio->busy, msecs_to_jiffies(2000));
if (!r) {
@ -805,7 +805,7 @@ static int wl1273_fm_set_seek(struct wl1273_device *radio,
if (level < SCHAR_MIN || level > SCHAR_MAX)
return -EINVAL;
INIT_COMPLETION(radio->busy);
reinit_completion(&radio->busy);
dev_dbg(radio->dev, "%s: BUSY\n", __func__);
r = core->write(core, WL1273_INT_MASK_SET, radio->irq_flags);
@ -847,7 +847,7 @@ static int wl1273_fm_set_seek(struct wl1273_device *radio,
if (r)
goto out;
INIT_COMPLETION(radio->busy);
reinit_completion(&radio->busy);
dev_dbg(radio->dev, "%s: BUSY\n", __func__);
r = core->write(core, WL1273_TUNER_MODE_SET, TUNER_MODE_AUTO_SEEK);


@ -218,7 +218,7 @@ static int si470x_set_chan(struct si470x_device *radio, unsigned short chan)
goto done;
/* wait till tune operation has completed */
INIT_COMPLETION(radio->completion);
reinit_completion(&radio->completion);
retval = wait_for_completion_timeout(&radio->completion,
msecs_to_jiffies(tune_timeout));
if (!retval)
@ -341,7 +341,7 @@ static int si470x_set_seek(struct si470x_device *radio,
return retval;
/* wait till tune operation has completed */
INIT_COMPLETION(radio->completion);
reinit_completion(&radio->completion);
retval = wait_for_completion_timeout(&radio->completion,
msecs_to_jiffies(seek_timeout));
if (!retval)


@ -207,7 +207,7 @@ static int iguanair_send(struct iguanair *ir, unsigned size)
{
int rc;
INIT_COMPLETION(ir->completion);
reinit_completion(&ir->completion);
ir->urb_out->transfer_buffer_length = size;
rc = usb_submit_urb(ir->urb_out, GFP_KERNEL);


@ -253,7 +253,7 @@ void memstick_new_req(struct memstick_host *host)
{
if (host->card) {
host->retries = cmd_retries;
INIT_COMPLETION(host->card->mrq_complete);
reinit_completion(&host->card->mrq_complete);
host->request(host);
}
}

Some files were not shown because too many files have changed in this diff.