Commit f227e588 authored by Stephen Rothwell

Merge branch 'akpm/master'

parents 6234346e b12bf9fe
......@@ -75,9 +75,10 @@ number of times a page is mapped.
20. NOPAGE
21. KSM
22. THP
23. BALLOON
23. OFFLINE
24. ZERO_PAGE
25. IDLE
26. PGTABLE
* ``/proc/kpagecgroup``. This file contains a 64-bit inode number of the
memory cgroup each page is charged to, indexed by PFN. Only available when
......@@ -118,8 +119,8 @@ Short descriptions to the page flags
identical memory pages dynamically shared between one or more processes
22 - THP
contiguous pages which construct transparent hugepages
23 - BALLOON
balloon compaction page
23 - OFFLINE
page is logically offline
24 - ZERO_PAGE
zero page for pfn_zero or huge_zero page
25 - IDLE
......@@ -128,6 +129,8 @@ Short descriptions to the page flags
Note that this flag may be stale in case the page was accessed via
a PTE. To make sure the flag is up-to-date one has to read
``/sys/kernel/mm/page_idle/bitmap`` first.
26 - PGTABLE
page is in use as a page table
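These flags can be inspected from user space by reading the 64-bit entry for a
PFN out of ``/proc/kpageflags``. A minimal sketch (the PFN used here is purely
illustrative)::

#include <stdint.h>
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
        uint64_t flags;
        unsigned long pfn = 0x1000;     /* illustrative PFN */
        int fd = open("/proc/kpageflags", O_RDONLY);

        /* Each PFN has one little-endian 64-bit flags word at offset pfn * 8. */
        if (fd < 0 || pread(fd, &flags, 8, pfn * 8) != 8)
                return 1;
        printf("pfn 0x%lx: OFFLINE=%llu PGTABLE=%llu\n", pfn,
               (unsigned long long)(flags >> 23 & 1),
               (unsigned long long)(flags >> 26 & 1));
        close(fd);
        return 0;
}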
IO related page flags
---------------------
......
===================================
Using flexible arrays in the kernel
===================================
Large contiguous memory allocations can be unreliable in the Linux kernel.
Kernel programmers will sometimes respond to this problem by allocating
pages with :c:func:`vmalloc()`. This solution is not ideal, though. On 32-bit
systems, memory from vmalloc() must be mapped into a relatively small address
space; it's easy to run out. On SMP systems, the page table changes required
by vmalloc() allocations can require expensive cross-processor interrupts on
all CPUs. And, on all systems, use of space in the vmalloc() range increases
pressure on the translation lookaside buffer (TLB), reducing the performance
of the system.
In many cases, the need for memory from vmalloc() can be eliminated by piecing
together an array from smaller parts; the flexible array library exists to make
this task easier.
A flexible array holds an arbitrary (within limits) number of fixed-sized
objects, accessed via an integer index. Sparse arrays are handled
reasonably well. Only single-page allocations are made, so memory
allocation failures should be relatively rare. The downsides are that the
arrays cannot be indexed directly, individual object size cannot exceed the
system page size, and putting data into a flexible array requires a copy
operation. It's also worth noting that flexible arrays do no internal
locking at all; if concurrent access to an array is possible, then the
caller must arrange for appropriate mutual exclusion.
The creation of a flexible array is done with :c:func:`flex_array_alloc()`::
#include <linux/flex_array.h>
struct flex_array *flex_array_alloc(int element_size,
unsigned int total,
gfp_t flags);
The individual object size is provided by ``element_size``, while ``total`` is the
maximum number of objects which can be stored in the array. The flags
argument is passed directly to the internal memory allocation calls. With
the current code, using flags to ask for high memory is likely to lead to
notably unpleasant side effects.
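As a minimal sketch (the ``struct foo`` type, ``MAX_FOOS`` bound, and
``foo_setup()`` helper are hypothetical names invented for illustration)::

#include <linux/flex_array.h>
#include <linux/errno.h>

#define MAX_FOOS 1024           /* hypothetical upper bound */

struct foo {                    /* hypothetical element type */
        int a;
        int b;
};

static struct flex_array *foos;

static int foo_setup(void)
{
        /* Room for up to MAX_FOOS objects; GFP_KERNEL allocations may sleep. */
        foos = flex_array_alloc(sizeof(struct foo), MAX_FOOS, GFP_KERNEL);
        if (!foos)
                return -ENOMEM;
        return 0;
}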
It is also possible to define flexible arrays at compile time with::
DEFINE_FLEX_ARRAY(name, element_size, total);
This macro will result in a definition of an array with the given name; the
element size and total will be checked for validity at compile time.
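Continuing the hypothetical example above, a compile-time equivalent of the
``foos`` array might look like::

/* Size and total are validated when this is compiled. */
DEFINE_FLEX_ARRAY(static_foos, sizeof(struct foo), MAX_FOOS);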
Storing data into a flexible array is accomplished with a call to
:c:func:`flex_array_put()`::
int flex_array_put(struct flex_array *array, unsigned int element_nr,
void *src, gfp_t flags);
This call will copy the data from ``src`` into the array, in the position
indicated by ``element_nr`` (which must be less than the maximum specified when
the array was created). If any memory allocations must be performed, flags
will be used. The return value is zero on success, a negative error code
otherwise.
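A sketch of a store, reusing the hypothetical ``foos`` array from above::

static int store_foo(unsigned int i, struct foo *f)
{
        /* Copies *f into slot i; may allocate a backing page, so it can sleep. */
        return flex_array_put(foos, i, f, GFP_KERNEL);
}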
There may be a need to store data into a flexible array while
running in some sort of atomic context; in this situation, sleeping in the
memory allocator would be a bad thing. That can be avoided by using
``GFP_ATOMIC`` for the flags value, but, often, there is a better way. The
trick is to ensure that any needed memory allocations are done before
entering atomic context, using :c:func:`flex_array_prealloc()`::
int flex_array_prealloc(struct flex_array *array, unsigned int start,
unsigned int nr_elements, gfp_t flags);
This function will ensure that memory for the elements indexed in the range
defined by ``start`` and ``nr_elements`` has been allocated. Thereafter, a
``flex_array_put()`` call on an element in that range is guaranteed not to
block.
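One plausible pattern, with a purely illustrative ``foo_lock`` spinlock and
``update_foo_atomically()`` helper (neither is part of the API)::

static DEFINE_SPINLOCK(foo_lock);       /* illustrative lock */

static int update_foo_atomically(unsigned int i, struct foo *new_foo)
{
        int ret;

        /* In process context: back slots 0..MAX_FOOS-1 with memory first. */
        ret = flex_array_prealloc(foos, 0, MAX_FOOS, GFP_KERNEL);
        if (ret)
                return ret;

        spin_lock(&foo_lock);
        /* Within the preallocated range this cannot block or fail. */
        flex_array_put(foos, i, new_foo, GFP_ATOMIC);
        spin_unlock(&foo_lock);
        return 0;
}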
Getting data back out of the array is done with :c:func:`flex_array_get()`::
void *flex_array_get(struct flex_array *fa, unsigned int element_nr);
The return value is a pointer to the data element, or NULL if that
particular element has never been allocated.
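For example, reading back the element stored by the hypothetical
``store_foo()`` above::

struct foo *f = flex_array_get(foos, i);

if (f)
        pr_info("foo %u: a=%d, b=%d\n", i, f->a, f->b);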
Note that it is possible to get back a valid pointer for an element which
has never been stored in the array. Memory for array elements is allocated
one page at a time; a single allocation could provide memory for several
adjacent elements. Flexible array elements are normally initialized to the
value ``FLEX_ARRAY_FREE`` (defined as 0x6c in <linux/poison.h>), so errors
involving that number probably result from use of unstored array entries.
Note that, if array elements are allocated with ``__GFP_ZERO``, they will be
initialized to zero and this poisoning will not happen.
Individual elements in the array can be cleared with
:c:func:`flex_array_clear()`::
int flex_array_clear(struct flex_array *array, unsigned int element_nr);
This function will set the given element to ``FLEX_ARRAY_FREE`` and return
zero. If storage for the indicated element is not allocated for the array,
``flex_array_clear()`` will return ``-EINVAL`` instead. Note that clearing an
element does not release the storage associated with it; to reduce the
allocated size of an array, call :c:func:`flex_array_shrink()`::
int flex_array_shrink(struct flex_array *array);
The return value will be the number of pages of memory actually freed.
This function works by scanning the array for pages containing nothing but
``FLEX_ARRAY_FREE`` bytes, so (1) it can be expensive, and (2) it will not work
if the array's pages are allocated with ``__GFP_ZERO``.
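A sketch combining the two calls on the hypothetical ``foos`` array::

/* Poison slot i, then hand fully-freed pages back to the system. */
if (flex_array_clear(foos, i) == 0) {
        int freed = flex_array_shrink(foos);

        pr_debug("shrink freed %d page(s)\n", freed);
}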
It is possible to remove all elements of an array with a call to
:c:func:`flex_array_free_parts()`::
void flex_array_free_parts(struct flex_array *array);
This call frees all elements, but leaves the array itself in place.
Freeing the entire array is done with :c:func:`flex_array_free()`::
void flex_array_free(struct flex_array *array);
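Teardown of the hypothetical ``foos`` array then reduces to::

flex_array_free(foos);          /* frees element pages and the array itself */
foos = NULL;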
As of this writing, there are no users of flexible arrays in the mainline
kernel. The functions described here are also not exported to modules;
that will probably be fixed when somebody comes up with a need for it.
Flexible array functions
------------------------
.. kernel-doc:: include/linux/flex_array.h
=================================
Generic radix trees/sparse arrays
=================================
.. kernel-doc:: include/linux/generic-radix-tree.h
:doc: Generic radix trees/sparse arrays
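As a hedged sketch of typical usage, based on the macros and functions
declared in include/linux/generic-radix-tree.h (the ``struct foo`` element
type and ``foo_genradix_demo()`` helper are hypothetical)::

#include <linux/generic-radix-tree.h>

struct foo {                    /* hypothetical element type */
        int a;
};

static GENRADIX(struct foo) foo_genradix;

static int foo_genradix_demo(void)
{
        struct foo *f;

        genradix_init(&foo_genradix);

        /* Allocate backing pages as needed; return a pointer to entry 42. */
        f = genradix_ptr_alloc(&foo_genradix, 42, GFP_KERNEL);
        if (!f)
                return -ENOMEM;
        f->a = 1;

        /* Lookup only: returns NULL if the entry was never allocated. */
        f = genradix_ptr(&foo_genradix, 42);

        genradix_free(&foo_genradix);
        return 0;
}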
generic radix tree functions
----------------------------
.. kernel-doc:: include/linux/generic-radix-tree.h
:functions:
......@@ -28,6 +28,7 @@ Core utilities
errseq
printk-formats
circular-buffers
generic-radix-tree
memory-allocation
mm-api
gfp_mask-from-fs-io
......
......@@ -510,7 +510,7 @@ tracking your trees, and to people trying to troubleshoot bugs in your
tree.
12) When to use Acked-by:, Cc:, and Co-Developed-by:
12) When to use Acked-by:, Cc:, and Co-developed-by:
-------------------------------------------------------
The Signed-off-by: tag indicates that the signer was involved in the
......@@ -543,7 +543,7 @@ person it names - but it should indicate that this person was copied on the
patch. This tag documents that potentially interested parties
have been included in the discussion.
A Co-Developed-by: states that the patch was also created by another developer
A Co-developed-by: states that the patch was also created by another developer
along with the original author. This is useful at times when multiple people
work on a single patch. Note that this person also needs to have a
Signed-off-by: line in the patch.
......
......@@ -6,8 +6,7 @@
# 2) Generate timeconst.h
# 3) Generate asm-offsets.h (may need bounds.h and timeconst.h)
# 4) Check for missing system calls
# 5) check atomics headers are up-to-date
# 6) Generate constants.py (may need bounds.h)
# 5) Generate constants.py (may need bounds.h)
#####
# 1) Generate bounds.h
......@@ -66,20 +65,7 @@ missing-syscalls: scripts/checksyscalls.sh $(offsets-file) FORCE
$(call cmd,syscalls)
#####
# 5) Check atomic headers are up-to-date
#
always += old-atomics
targets += old-atomics
quiet_cmd_atomics = CALL $<
cmd_atomics = $(CONFIG_SHELL) $<
old-atomics: scripts/atomic/check-atomics.sh FORCE
$(call cmd,atomics)
#####
# 6) Generate constants for Python GDB integration
# 5) Generate constants for Python GDB integration
#
extra-$(CONFIG_GDB_SCRIPTS) += build_constants_py
......
......@@ -234,7 +234,7 @@ clean-targets := %clean mrproper cleandocs
no-dot-config-targets := $(clean-targets) \
cscope gtags TAGS tags help% %docs check% coccicheck \
$(version_h) headers_% archheaders archscripts \
%asm-generic kernelversion %src-pkg
%asm-generic genheader kernelversion %src-pkg
no-sync-config-targets := $(no-dot-config-targets) install %install \
kernelrelease
......@@ -1099,7 +1099,7 @@ endif
# prepare2 creates a makefile if using a separate output directory.
# From this point forward, .config has been reprocessed, so any rules
# that need to depend on updated CONFIG_* values can be checked here.
prepare2: prepare3 outputmakefile asm-generic
prepare2: prepare3 outputmakefile asm-generic genheader
prepare1: prepare2 $(version_h) $(autoksyms_h) include/generated/utsrelease.h
$(cmd_crmodverdir)
......@@ -1122,6 +1122,10 @@ asm-generic: uapi-asm-generic
uapi-asm-generic:
$(Q)$(MAKE) $(asm-generic)=arch/$(SRCARCH)/include/generated/uapi/asm
PHONY += genheader
genheader:
$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.genheader obj=include/generated
PHONY += prepare-objtool
prepare-objtool: $(objtool_target)
ifeq ($(SKIP_STACK_VALIDATION),1)
......
......@@ -535,6 +535,11 @@ config HAVE_IRQ_TIME_ACCOUNTING
Archs need to ensure they use a high enough resolution clock to
support irq time accounting and then call enable_sched_clock_irqtime().
config HAVE_MOVE_PMD
bool
help
Archs that select this are able to move page tables at the PMD level.
config HAVE_ARCH_TRANSPARENT_HUGEPAGE
bool
......
......@@ -52,7 +52,7 @@ pmd_free(struct mm_struct *mm, pmd_t *pmd)
}
static inline pte_t *
pte_alloc_one_kernel(struct mm_struct *mm, unsigned long address)
pte_alloc_one_kernel(struct mm_struct *mm)
{
pte_t *pte = (pte_t *)__get_free_page(GFP_KERNEL|__GFP_ZERO);
return pte;
......@@ -65,9 +65,9 @@ pte_free_kernel(struct mm_struct *mm, pte_t *pte)
}
static inline pgtable_t
pte_alloc_one(struct mm_struct *mm, unsigned long address)
pte_alloc_one(struct mm_struct *mm)
{
pte_t *pte = pte_alloc_one_kernel(mm, address);
pte_t *pte = pte_alloc_one_kernel(mm);
struct page *page;
if (!pte)
......
......@@ -90,8 +90,7 @@ static inline int __get_order_pte(void)
return get_order(PTRS_PER_PTE * sizeof(pte_t));
}
static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
unsigned long address)
static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
{
pte_t *pte;
......@@ -102,7 +101,7 @@ static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
}
static inline pgtable_t
pte_alloc_one(struct mm_struct *mm, unsigned long address)
pte_alloc_one(struct mm_struct *mm)
{
pgtable_t pte_pg;
struct page *page;
......
......@@ -142,7 +142,7 @@ void do_page_fault(unsigned long address, struct pt_regs *regs)
fault = handle_mm_fault(vma, address, flags);
/* If Pagefault was interrupted by SIGKILL, exit page fault "early" */
if (unlikely(fatal_signal_pending(current))) {
if (fatal_signal_pending(current)) {
if ((fault & VM_FAULT_ERROR) && !(fault & VM_FAULT_RETRY))
up_read(&mm->mmap_sem);
if (user_mode(regs))
......
......@@ -81,7 +81,7 @@ static inline void clean_pte_table(pte_t *pte)
* +------------+
*/
static inline pte_t *
pte_alloc_one_kernel(struct mm_struct *mm, unsigned long addr)
pte_alloc_one_kernel(struct mm_struct *mm)
{
pte_t *pte;
......@@ -93,7 +93,7 @@ pte_alloc_one_kernel(struct mm_struct *mm, unsigned long addr)
}
static inline pgtable_t
pte_alloc_one(struct mm_struct *mm, unsigned long addr)
pte_alloc_one(struct mm_struct *mm)
{
struct page *pte;
......
......@@ -166,7 +166,7 @@
#define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive
#include <asm-generic/atomic-instrumented.h>
#include <generated/atomic-instrumented.h>
#endif
#endif
......@@ -91,13 +91,13 @@ extern pgd_t *pgd_alloc(struct mm_struct *mm);
extern void pgd_free(struct mm_struct *mm, pgd_t *pgdp);
static inline pte_t *
pte_alloc_one_kernel(struct mm_struct *mm, unsigned long addr)
pte_alloc_one_kernel(struct mm_struct *mm)
{
return (pte_t *)__get_free_page(PGALLOC_GFP);
}
static inline pgtable_t
pte_alloc_one(struct mm_struct *mm, unsigned long addr)
pte_alloc_one(struct mm_struct *mm)
{
struct page *pte;
......
......@@ -59,8 +59,7 @@ static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
free_page((unsigned long) pgd);
}
static inline struct page *pte_alloc_one(struct mm_struct *mm,
unsigned long address)
static inline struct page *pte_alloc_one(struct mm_struct *mm)
{
struct page *pte;
......@@ -75,8 +74,7 @@ static inline struct page *pte_alloc_one(struct mm_struct *mm,
}
/* _kernel variant gets to use a different allocator */
static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
unsigned long address)
static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
{
gfp_t flags = GFP_KERNEL | __GFP_ZERO;
return (pte_t *) __get_free_page(flags);
......
......@@ -83,7 +83,7 @@ pmd_populate_kernel(struct mm_struct *mm, pmd_t * pmd_entry, pte_t * pte)
pmd_val(*pmd_entry) = __pa(pte);
}
static inline pgtable_t pte_alloc_one(struct mm_struct *mm, unsigned long addr)
static inline pgtable_t pte_alloc_one(struct mm_struct *mm)
{
struct page *page;
void *pg;
......@@ -99,8 +99,7 @@ static inline pgtable_t pte_alloc_one(struct mm_struct *mm, unsigned long addr)
return page;
}
static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
unsigned long addr)
static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
{
return quicklist_alloc(0, GFP_KERNEL, NULL);
}
......
......@@ -12,8 +12,7 @@ extern inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
extern const char bad_pmd_string[];
extern inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
unsigned long address)
extern inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
{
unsigned long page = __get_free_page(GFP_DMA);
......@@ -32,8 +31,6 @@ extern inline pmd_t *pmd_alloc_kernel(pgd_t *pgd, unsigned long address)
#define pmd_alloc_one_fast(mm, address) ({ BUG(); ((pmd_t *)1); })
#define pmd_alloc_one(mm, address) ({ BUG(); ((pmd_t *)2); })
#define pte_alloc_one_fast(mm, addr) pte_alloc_one(mm, addr)
#define pmd_populate(mm, pmd, page) (pmd_val(*pmd) = \
(unsigned long)(page_address(page)))
......@@ -50,8 +47,7 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t page,
#define __pmd_free_tlb(tlb, pmd, address) do { } while (0)
static inline struct page *pte_alloc_one(struct mm_struct *mm,
unsigned long address)
static inline struct page *pte_alloc_one(struct mm_struct *mm)
{
struct page *page = alloc_pages(GFP_DMA, 0);
pte_t *pte;
......
......@@ -8,7 +8,7 @@
extern pmd_t *get_pointer_table(void);
extern int free_pointer_table(pmd_t *);
static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm, unsigned long address)
static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
{
pte_t *pte;
......@@ -28,7 +28,7 @@ static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
free_page((unsigned long) pte);
}
static inline pgtable_t pte_alloc_one(struct mm_struct *mm, unsigned long address)
static inline pgtable_t pte_alloc_one(struct mm_struct *mm)
{
struct page *page;
pte_t *pte;
......
......@@ -35,8 +35,7 @@ do { \
tlb_remove_page((tlb), pte); \
} while (0)
static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
unsigned long address)
static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
{
unsigned long page = __get_free_page(GFP_KERNEL);
......@@ -47,8 +46,7 @@ static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
return (pte_t *) (page);
}
static inline pgtable_t pte_alloc_one(struct mm_struct *mm,
unsigned long address)
static inline pgtable_t pte_alloc_one(struct mm_struct *mm)
{
struct page *page = alloc_pages(GFP_KERNEL, 0);
......
......@@ -108,10 +108,9 @@ static inline void free_pgd_slow(pgd_t *pgd)
#define pmd_alloc_one_fast(mm, address) ({ BUG(); ((pmd_t *)1); })
#define pmd_alloc_one(mm, address) ({ BUG(); ((pmd_t *)2); })
extern pte_t *pte_alloc_one_kernel(struct mm_struct *mm, unsigned long addr);
extern pte_t *pte_alloc_one_kernel(struct mm_struct *mm);
static inline struct page *pte_alloc_one(struct mm_struct *mm,
unsigned long address)
static inline struct page *pte_alloc_one(struct mm_struct *mm)
{
struct page *ptepage;
......@@ -132,20 +131,6 @@ static inline struct page *pte_alloc_one(struct mm_struct *mm,
return ptepage;
}
static inline pte_t *pte_alloc_one_fast(struct mm_struct *mm,
unsigned long address)
{
unsigned long *ret;
ret = pte_quicklist;
if (ret != NULL) {
pte_quicklist = (unsigned long *)(*ret);
ret[0] = 0;
pgtable_cache_size--;
}
return (pte_t *)ret;
}
static inline void pte_free_fast(pte_t *pte)
{
*(unsigned long **)pte = pte_quicklist;
......
......@@ -235,8 +235,7 @@ unsigned long iopa(unsigned long addr)
return pa;
}
__ref pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
unsigned long address)
__ref pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
{
pte_t *pte;
if (mem_init_done) {
......
......@@ -50,14 +50,12 @@ static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
free_pages((unsigned long)pgd, PGD_ORDER);
}
static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
unsigned long address)
static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
{
return (pte_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, PTE_ORDER);
}
static inline struct page *pte_alloc_one(struct mm_struct *mm,
unsigned long address)
static inline struct page *pte_alloc_one(struct mm_struct *mm)
{
struct page *pte;
......
......@@ -22,8 +22,7 @@ extern void pgd_free(struct mm_struct *mm, pgd_t * pgd);
#define check_pgt_cache() do { } while (0)
static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
unsigned long addr)
static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
{
pte_t *pte;
......@@ -34,7 +33,7 @@ static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
return pte;
}
static inline pgtable_t pte_alloc_one(struct mm_struct *mm, unsigned long addr)
static inline pgtable_t pte_alloc_one(struct mm_struct *mm)
{
pgtable_t pte;
......
......@@ -37,8 +37,7 @@ static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
free_pages((unsigned long)pgd, PGD_ORDER);
}
static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
unsigned long address)
static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
{
pte_t *pte;
......@@ -47,8 +46,7 @@ static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
return pte;
}
static inline pgtable_t pte_alloc_one(struct mm_struct *mm,
unsigned long address)
static inline pgtable_t pte_alloc_one(struct mm_struct *mm)
{