Merge branch 'akpm' (patches from Andrew)
Merge second patch-bomb from Andrew Morton:

 - most of the rest of MM

 - procfs

 - lib/ updates

 - printk updates

 - bitops infrastructure tweaks

 - checkpatch updates

 - nilfs2 update

 - signals

 - various other misc bits: coredump, seqfile, kexec, pidns, zlib, ipc,
   dma-debug, dma-mapping, ...

* emailed patches from Andrew Morton <[email protected]>: (102 commits)
  ipc,msg: drop dst nil validation in copy_msg
  include/linux/zutil.h: fix usage example of zlib_adler32()
  panic: release stale console lock to always get the logbuf printed out
  dma-debug: check nents in dma_sync_sg*
  dma-mapping: tidy up dma_parms default handling
  pidns: fix set/getpriority and ioprio_set/get in PRIO_USER mode
  kexec: use file name as the output message prefix
  fs, seqfile: always allow oom killer
  seq_file: reuse string_escape_str()
  fs/seq_file: use seq_* helpers in seq_hex_dump()
  coredump: change zap_threads() and zap_process() to use for_each_thread()
  coredump: ensure all coredumping tasks have SIGNAL_GROUP_COREDUMP
  signal: remove jffs2_garbage_collect_thread()->allow_signal(SIGCONT)
  signal: introduce kernel_signal_stop() to fix jffs2_garbage_collect_thread()
  signal: turn dequeue_signal_lock() into kernel_dequeue_signal()
  signals: kill block_all_signals() and unblock_all_signals()
  nilfs2: fix gcc uninitialized-variable warnings in powerpc build
  nilfs2: fix gcc unused-but-set-variable warnings
  MAINTAINERS: nilfs2: add header file for tracing
  nilfs2: add tracepoints for analyzing reading and writing metadata files
  ...
torvalds committed Nov 7, 2015
2 parents ab9f2fa + 5f2a2d5 commit ad804a0
Showing 211 changed files with 2,298 additions and 1,632 deletions.
29 changes: 29 additions & 0 deletions Documentation/printk-formats.txt
@@ -23,6 +23,10 @@ Example:
 
 Reminder: sizeof() result is of type size_t.
 
+The kernel's printf does not support %n. For obvious reasons, floating
+point formats (%e, %f, %g, %a) are also not recognized. Use of any
+unsupported specifier or length qualifier results in a WARN and early
+return from vsnprintf.
 
 Raw pointer value SHOULD be printed with %p. The kernel supports
 the following extended format specifiers for pointer types:
@@ -119,6 +123,7 @@ Raw buffer as an escaped string:
 	If field width is omitted the 1 byte only will be escaped.
 
 Raw buffer as a hex string:
+
 	%*ph	00 01 02 ... 3f
 	%*phC	00:01:02: ... :3f
 	%*phD	00-01-02- ... -3f
@@ -234,6 +239,7 @@ UUID/GUID addresses:
 	Passed by reference.
 
 dentry names:
+
 	%pd{,2,3,4}
 	%pD{,2,3,4}
 
@@ -256,6 +262,8 @@ struct va_format:
 		va_list *va;
 	};
 
+	Implements a "recursive vsnprintf".
+
 	Do not use this feature without some mechanism to verify the
 	correctness of the format string and va_list arguments.
@@ -284,6 +292,27 @@ bitmap and its derivatives such as cpumask and nodemask:
 
 	Passed by reference.
 
+Network device features:
+
+	%pNF	0x000000000000c000
+
+	For printing netdev_features_t.
+
+	Passed by reference.
+
+Command from struct task_struct
+
+	%pT	ls
+
+	For printing executable name excluding path from struct
+	task_struct.
+
+	Passed by reference.
+
+If you add other %p extensions, please extend lib/test_printf.c with
+one or more test cases, if at all feasible.
+
+
 Thank you for your cooperation and attention.
 
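
A note on the "recursive vsnprintf" above: %pV consumes a struct va_format passed by reference, letting a wrapper forward its own format string and arguments through a single printk. A minimal sketch, assuming a hypothetical driver wrapper (my_warn() and the "mydrv" prefix are illustrative, not from the tree):

#include <linux/kernel.h>
#include <linux/printk.h>
#include <stdarg.h>

static void my_warn(const char *fmt, ...)
{
	struct va_format vaf;
	va_list args;

	va_start(args, fmt);
	vaf.fmt = fmt;
	vaf.va = &args;
	/* one printk emits the driver prefix plus the caller's format */
	printk(KERN_WARNING "mydrv: %pV", &vaf);
	va_end(args);
}
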
14 changes: 8 additions & 6 deletions Documentation/vm/balance
@@ -1,12 +1,14 @@
 Started Jan 2000 by Kanoj Sarcar <[email protected]>
 
-Memory balancing is needed for non __GFP_WAIT as well as for non
-__GFP_IO allocations.
+Memory balancing is needed for !__GFP_ATOMIC and !__GFP_KSWAPD_RECLAIM as
+well as for non __GFP_IO allocations.
 
-There are two reasons to be requesting non __GFP_WAIT allocations:
-the caller can not sleep (typically intr context), or does not want
-to incur cost overheads of page stealing and possible swap io for
-whatever reasons.
+The first reason why a caller may avoid reclaim is that the caller can not
+sleep due to holding a spinlock or is in interrupt context. The second may
+be that the caller is willing to fail the allocation without incurring the
+overhead of page reclaim. This may happen for opportunistic high-order
+allocation requests that have order-0 fallback options. In such cases,
+the caller may also wish to avoid waking kswapd.
 
 __GFP_IO allocation requests are made to prevent file system deadlocks.
 
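
The flag names in the rewritten text come from the __GFP_WAIT split applied throughout this series. As a rough map of the 4.4-era relationships (a sketch of include/linux/gfp.h, not a verbatim quote; the underlying ___GFP_* bit values are elided):

/* reclaim behaviour is now two separate bits */
#define __GFP_RECLAIM	(__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM)

#define GFP_ATOMIC	(__GFP_HIGH | __GFP_ATOMIC | __GFP_KSWAPD_RECLAIM)
#define GFP_NOWAIT	(__GFP_KSWAPD_RECLAIM)	/* may wake kswapd, never direct-reclaims */
#define GFP_KERNEL	(__GFP_RECLAIM | __GFP_IO | __GFP_FS)
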
4 changes: 2 additions & 2 deletions Documentation/vm/split_page_table_lock
@@ -54,8 +54,8 @@ everything required is done by pgtable_page_ctor() and pgtable_page_dtor(),
 which must be called on PTE table allocation / freeing.
 
 Make sure the architecture doesn't use slab allocator for page table
-allocation: slab uses page->slab_cache and page->first_page for its pages.
-These fields share storage with page->ptl.
+allocation: slab uses page->slab_cache for its pages.
+This field shares storage with page->ptl.
 
 PMD split lock only makes sense if you have more than two page table
 levels.
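
For context, the ctor/dtor contract referenced above looks roughly like this in an architecture's PTE-table allocator; a sketch only, since the gfp flags and naming vary per architecture:

#include <linux/mm.h>
#include <linux/gfp.h>

/* pgtable_page_ctor() initializes the split lock that shares storage with
 * page->ptl -- which is exactly why the page must not come from the slab
 * allocator, whose pages already use that storage for page->slab_cache. */
static struct page *pte_alloc_one_sketch(struct mm_struct *mm,
					 unsigned long addr)
{
	struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);

	if (!page)
		return NULL;
	if (!pgtable_page_ctor(page)) {	/* can fail if the ptl is allocated dynamically */
		__free_page(page);
		return NULL;
	}
	return page;
}
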
4 changes: 4 additions & 0 deletions MAINTAINERS
@@ -4209,7 +4209,10 @@ L:	[email protected]
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/chanwoo/extcon.git
 S:	Maintained
 F:	drivers/extcon/
 F:	include/linux/extcon/
+F:	include/linux/extcon.h
+F:	Documentation/extcon/
+F:	Documentation/devicetree/bindings/extcon/
 
 EXYNOS DP DRIVER
 M:	Jingoo Han <[email protected]>
@@ -7490,6 +7493,7 @@ S:	Supported
 F:	Documentation/filesystems/nilfs2.txt
 F:	fs/nilfs2/
 F:	include/linux/nilfs2_fs.h
+F:	include/trace/events/nilfs2.h
 
 NINJA SCSI-3 / NINJA SCSI-32Bi (16bit/CardBus) PCMCIA SCSI HOST ADAPTER DRIVER
 M:	YOKOTA Hiroshi <[email protected]>
6 changes: 3 additions & 3 deletions arch/arm/mm/dma-mapping.c
@@ -651,12 +651,12 @@ static void *__dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
 
 	if (nommu())
 		addr = __alloc_simple_buffer(dev, size, gfp, &page);
-	else if (dev_get_cma_area(dev) && (gfp & __GFP_WAIT))
+	else if (dev_get_cma_area(dev) && (gfp & __GFP_DIRECT_RECLAIM))
 		addr = __alloc_from_contiguous(dev, size, prot, &page,
 					caller, want_vaddr);
 	else if (is_coherent)
 		addr = __alloc_simple_buffer(dev, size, gfp, &page);
-	else if (!(gfp & __GFP_WAIT))
+	else if (!gfpflags_allow_blocking(gfp))
 		addr = __alloc_from_pool(size, &page);
 	else
 		addr = __alloc_remap_buffer(dev, size, gfp, prot, &page,
@@ -1363,7 +1363,7 @@ static void *arm_iommu_alloc_attrs(struct device *dev, size_t size,
 	*handle = DMA_ERROR_CODE;
 	size = PAGE_ALIGN(size);
 
-	if (!(gfp & __GFP_WAIT))
+	if (!gfpflags_allow_blocking(gfp))
 		return __iommu_alloc_atomic(dev, size, handle);
 
 	/*
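
The gfpflags_allow_blocking() helper that replaces the open-coded __GFP_WAIT tests in these hunks boils down to a single bit test; essentially (paraphrasing the 4.4-era include/linux/gfp.h):

static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
{
	/* the caller may sleep, take mutexes or enter direct reclaim
	 * iff __GFP_DIRECT_RECLAIM is set */
	return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
}
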
2 changes: 1 addition & 1 deletion arch/arm/xen/mm.c
@@ -25,7 +25,7 @@
 unsigned long xen_get_swiotlb_free_pages(unsigned int order)
 {
 	struct memblock_region *reg;
-	gfp_t flags = __GFP_NOWARN;
+	gfp_t flags = __GFP_NOWARN|__GFP_KSWAPD_RECLAIM;
 
 	for_each_memblock(memory, reg) {
 		if (reg->base < (phys_addr_t)0xffffffff) {
4 changes: 2 additions & 2 deletions arch/arm64/mm/dma-mapping.c
@@ -100,7 +100,7 @@ static void *__dma_alloc_coherent(struct device *dev, size_t size,
 	if (IS_ENABLED(CONFIG_ZONE_DMA) &&
 	    dev->coherent_dma_mask <= DMA_BIT_MASK(32))
 		flags |= GFP_DMA;
-	if (dev_get_cma_area(dev) && (flags & __GFP_WAIT)) {
+	if (dev_get_cma_area(dev) && gfpflags_allow_blocking(flags)) {
 		struct page *page;
 		void *addr;
 
@@ -148,7 +148,7 @@ static void *__dma_alloc(struct device *dev, size_t size,
 
 	size = PAGE_ALIGN(size);
 
-	if (!coherent && !(flags & __GFP_WAIT)) {
+	if (!coherent && !gfpflags_allow_blocking(flags)) {
 		struct page *page = NULL;
 		void *addr = __alloc_from_pool(size, &page, flags);
 
2 changes: 1 addition & 1 deletion arch/sh/kernel/cpu/sh5/unwind.c
@@ -159,7 +159,7 @@ static int lookup_prev_stack_frame(unsigned long fp, unsigned long pc,
 
 			/* Sign extend */
 			regcache[dest] =
-				((((s64)(u64)op >> 10) & 0xffff) << 54) >> 54;
+				sign_extend64((((u64)op >> 10) & 0xffff), 9);
 			break;
 		case (0xd0 >> 2): /* addi */
 		case (0xd4 >> 2): /* addi.l */
2 changes: 1 addition & 1 deletion arch/sh/kernel/traps_64.c
@@ -101,7 +101,7 @@ static int generate_and_check_address(struct pt_regs *regs,
 	if (displacement_not_indexed) {
 		__s64 displacement;
 		displacement = (opcode >> 10) & 0x3ff;
-		displacement = ((displacement << 54) >> 54); /* sign extend */
+		displacement = sign_extend64(displacement, 9);
 		addr = (__u64)((__s64)base_address + (displacement << width_shift));
 	} else {
 		__u64 offset;
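
Both sh hunks, like the perf_event_msr.c hunk below, replace a hand-rolled shift pair with sign_extend64(). The helper (reproduced here from include/linux/bitops.h as a reference, from memory) is equivalent to the old idiom: for a 10-bit field the sign bit has index 9, so the shift count is 63 - 9 = 54, matching the old (x << 54) >> 54 on a signed type:

static inline __s64 sign_extend64(__u64 value, int index)
{
	__u8 shift = 63 - index;	/* index is the 0-based position of the sign bit */

	return (__s64)(value << shift) >> shift;
}
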
7 changes: 3 additions & 4 deletions arch/x86/kernel/cpu/perf_event_msr.c
@@ -163,10 +163,9 @@ static void msr_event_update(struct perf_event *event)
 		goto again;
 
 	delta = now - prev;
-	if (unlikely(event->hw.event_base == MSR_SMI_COUNT)) {
-		delta <<= 32;
-		delta >>= 32; /* sign extend */
-	}
+	if (unlikely(event->hw.event_base == MSR_SMI_COUNT))
+		delta = sign_extend64(delta, 31);
+
 	local64_add(now - prev, &event->count);
 }
 
2 changes: 1 addition & 1 deletion arch/x86/kernel/pci-dma.c
@@ -90,7 +90,7 @@ void *dma_generic_alloc_coherent(struct device *dev, size_t size,
 again:
 	page = NULL;
 	/* CMA can be used only in the context which permits sleeping */
-	if (flag & __GFP_WAIT) {
+	if (gfpflags_allow_blocking(flag)) {
 		page = dma_alloc_from_contiguous(dev, count, get_order(size));
 		if (page && page_to_phys(page) + size > dma_mask) {
 			dma_release_from_contiguous(dev, page, count);
1 change: 0 additions & 1 deletion arch/xtensa/configs/iss_defconfig
@@ -169,7 +169,6 @@ CONFIG_FLATMEM_MANUAL=y
 # CONFIG_SPARSEMEM_MANUAL is not set
 CONFIG_FLATMEM=y
 CONFIG_FLAT_NODE_MEM_MAP=y
-CONFIG_PAGEFLAGS_EXTENDED=y
 CONFIG_SPLIT_PTLOCK_CPUS=4
 # CONFIG_PHYS_ADDR_T_64BIT is not set
 CONFIG_ZONE_DMA_FLAG=1
26 changes: 13 additions & 13 deletions block/bio.c
@@ -211,7 +211,7 @@ struct bio_vec *bvec_alloc(gfp_t gfp_mask, int nr, unsigned long *idx,
 		bvl = mempool_alloc(pool, gfp_mask);
 	} else {
 		struct biovec_slab *bvs = bvec_slabs + *idx;
-		gfp_t __gfp_mask = gfp_mask & ~(__GFP_WAIT | __GFP_IO);
+		gfp_t __gfp_mask = gfp_mask & ~(__GFP_DIRECT_RECLAIM | __GFP_IO);
 
 		/*
 		 * Make this allocation restricted and don't dump info on
@@ -221,11 +221,11 @@ struct bio_vec *bvec_alloc(gfp_t gfp_mask, int nr, unsigned long *idx,
 		__gfp_mask |= __GFP_NOMEMALLOC | __GFP_NORETRY | __GFP_NOWARN;
 
 		/*
-		 * Try a slab allocation. If this fails and __GFP_WAIT
+		 * Try a slab allocation. If this fails and __GFP_DIRECT_RECLAIM
 		 * is set, retry with the 1-entry mempool
 		 */
 		bvl = kmem_cache_alloc(bvs->slab, __gfp_mask);
-		if (unlikely(!bvl && (gfp_mask & __GFP_WAIT))) {
+		if (unlikely(!bvl && (gfp_mask & __GFP_DIRECT_RECLAIM))) {
 			*idx = BIOVEC_MAX_IDX;
 			goto fallback;
 		}
@@ -395,12 +395,12 @@ static void punt_bios_to_rescuer(struct bio_set *bs)
  * If @bs is NULL, uses kmalloc() to allocate the bio; else the allocation is
  * backed by the @bs's mempool.
  *
- * When @bs is not NULL, if %__GFP_WAIT is set then bio_alloc will always be
- * able to allocate a bio. This is due to the mempool guarantees. To make this
- * work, callers must never allocate more than 1 bio at a time from this pool.
- * Callers that need to allocate more than 1 bio must always submit the
- * previously allocated bio for IO before attempting to allocate a new one.
- * Failure to do so can cause deadlocks under memory pressure.
+ * When @bs is not NULL, if %__GFP_DIRECT_RECLAIM is set then bio_alloc will
+ * always be able to allocate a bio. This is due to the mempool guarantees.
+ * To make this work, callers must never allocate more than 1 bio at a time
+ * from this pool. Callers that need to allocate more than 1 bio must always
+ * submit the previously allocated bio for IO before attempting to allocate
+ * a new one. Failure to do so can cause deadlocks under memory pressure.
  *
  * Note that when running under generic_make_request() (i.e. any block
  * driver), bios are not submitted until after you return - see the code in
@@ -459,13 +459,13 @@ struct bio *bio_alloc_bioset(gfp_t gfp_mask, int nr_iovecs, struct bio_set *bs)
 		 * We solve this, and guarantee forward progress, with a rescuer
 		 * workqueue per bio_set. If we go to allocate and there are
 		 * bios on current->bio_list, we first try the allocation
-		 * without __GFP_WAIT; if that fails, we punt those bios we
-		 * would be blocking to the rescuer workqueue before we retry
-		 * with the original gfp_flags.
+		 * without __GFP_DIRECT_RECLAIM; if that fails, we punt those
+		 * bios we would be blocking to the rescuer workqueue before
+		 * we retry with the original gfp_flags.
 		 */
 
 		if (current->bio_list && !bio_list_empty(current->bio_list))
-			gfp_mask &= ~__GFP_WAIT;
+			gfp_mask &= ~__GFP_DIRECT_RECLAIM;
 
 		p = mempool_alloc(bs->bio_pool, gfp_mask);
 		if (!p && gfp_mask != saved_gfp) {
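
A sketch of the allocation discipline the bio_alloc_bioset() comment demands: at most one bio outstanding from the pool, each submitted before the next is allocated. write_one_chunk() and the parameters are illustrative, not from the tree:

static void write_chunks_sketch(struct bio_set *bs, int nchunks, int nr_vecs)
{
	int i;

	for (i = 0; i < nchunks; i++) {
		/* GFP_NOIO includes __GFP_DIRECT_RECLAIM, so with a bio_set this
		 * cannot fail -- provided the previous bio was already submitted */
		struct bio *bio = bio_alloc_bioset(GFP_NOIO, nr_vecs, bs);

		write_one_chunk(bio, i);	/* illustrative: set up bi_iter, pages, end_io */
		submit_bio(WRITE, bio);
	}
}
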
20 changes: 10 additions & 10 deletions block/blk-core.c
@@ -638,7 +638,7 @@ int blk_queue_enter(struct request_queue *q, gfp_t gfp)
 	if (percpu_ref_tryget_live(&q->q_usage_counter))
 		return 0;
 
-	if (!(gfp & __GFP_WAIT))
+	if (!gfpflags_allow_blocking(gfp))
 		return -EBUSY;
 
 	ret = wait_event_interruptible(q->mq_freeze_wq,
@@ -1206,8 +1206,8 @@ static struct request *__get_request(struct request_list *rl, int rw_flags,
 * @bio: bio to allocate request for (can be %NULL)
 * @gfp_mask: allocation mask
 *
- * Get a free request from @q. If %__GFP_WAIT is set in @gfp_mask, this
- * function keeps retrying under memory pressure and fails iff @q is dead.
+ * Get a free request from @q. If %__GFP_DIRECT_RECLAIM is set in @gfp_mask,
+ * this function keeps retrying under memory pressure and fails iff @q is dead.
 *
 * Must be called with @q->queue_lock held and,
 * Returns ERR_PTR on failure, with @q->queue_lock held.
@@ -1227,7 +1227,7 @@ static struct request *get_request(struct request_queue *q, int rw_flags,
 	if (!IS_ERR(rq))
 		return rq;
 
-	if (!(gfp_mask & __GFP_WAIT) || unlikely(blk_queue_dying(q))) {
+	if (!gfpflags_allow_blocking(gfp_mask) || unlikely(blk_queue_dying(q))) {
 		blk_put_rl(rl);
 		return rq;
 	}
@@ -1305,11 +1305,11 @@ EXPORT_SYMBOL(blk_get_request);
 * BUG.
 *
 * WARNING: When allocating/cloning a bio-chain, careful consideration should be
- * given to how you allocate bios. In particular, you cannot use __GFP_WAIT for
- * anything but the first bio in the chain. Otherwise you risk waiting for IO
- * completion of a bio that hasn't been submitted yet, thus resulting in a
- * deadlock. Alternatively bios should be allocated using bio_kmalloc() instead
- * of bio_alloc(), as that avoids the mempool deadlock.
+ * given to how you allocate bios. In particular, you cannot use
+ * __GFP_DIRECT_RECLAIM for anything but the first bio in the chain. Otherwise
+ * you risk waiting for IO completion of a bio that hasn't been submitted yet,
+ * thus resulting in a deadlock. Alternatively bios should be allocated using
+ * bio_kmalloc() instead of bio_alloc(), as that avoids the mempool deadlock.
 * If possible a big IO should be split into smaller parts when allocation
 * fails. Partial allocation should not be an error, or you risk a live-lock.
 */
@@ -2038,7 +2038,7 @@ void generic_make_request(struct bio *bio)
 	do {
 		struct request_queue *q = bdev_get_queue(bio->bi_bdev);
 
-		if (likely(blk_queue_enter(q, __GFP_WAIT) == 0)) {
+		if (likely(blk_queue_enter(q, __GFP_DIRECT_RECLAIM) == 0)) {
 
 			q->make_request_fn(q, bio);
 
2 changes: 1 addition & 1 deletion block/blk-ioc.c
@@ -289,7 +289,7 @@ struct io_context *get_task_io_context(struct task_struct *task,
 {
 	struct io_context *ioc;
 
-	might_sleep_if(gfp_flags & __GFP_WAIT);
+	might_sleep_if(gfpflags_allow_blocking(gfp_flags));
 
 	do {
 		task_lock(task);
2 changes: 1 addition & 1 deletion block/blk-mq-tag.c
@@ -268,7 +268,7 @@ static int bt_get(struct blk_mq_alloc_data *data,
 	if (tag != -1)
 		return tag;
 
-	if (!(data->gfp & __GFP_WAIT))
+	if (!gfpflags_allow_blocking(data->gfp))
 		return -1;
 
 	bs = bt_wait_ptr(bt, hctx);
6 changes: 3 additions & 3 deletions block/blk-mq.c
@@ -244,11 +244,11 @@ struct request *blk_mq_alloc_request(struct request_queue *q, int rw, gfp_t gfp,
 
 	ctx = blk_mq_get_ctx(q);
 	hctx = q->mq_ops->map_queue(q, ctx->cpu);
-	blk_mq_set_alloc_data(&alloc_data, q, gfp & ~__GFP_WAIT,
+	blk_mq_set_alloc_data(&alloc_data, q, gfp & ~__GFP_DIRECT_RECLAIM,
 			reserved, ctx, hctx);
 
 	rq = __blk_mq_alloc_request(&alloc_data, rw);
-	if (!rq && (gfp & __GFP_WAIT)) {
+	if (!rq && (gfp & __GFP_DIRECT_RECLAIM)) {
 		__blk_mq_run_hw_queue(hctx);
 		blk_mq_put_ctx(ctx);
 
@@ -1186,7 +1186,7 @@ static struct request *blk_mq_map_request(struct request_queue *q,
 	ctx = blk_mq_get_ctx(q);
 	hctx = q->mq_ops->map_queue(q, ctx->cpu);
 	blk_mq_set_alloc_data(&alloc_data, q,
-			__GFP_WAIT|GFP_ATOMIC, false, ctx, hctx);
+			__GFP_RECLAIM|__GFP_HIGH, false, ctx, hctx);
 	rq = __blk_mq_alloc_request(&alloc_data, rw);
 	ctx = alloc_data.ctx;
 	hctx = alloc_data.hctx;
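
On the blk_mq_map_request() hunk just above: before this series GFP_ATOMIC was simply __GFP_HIGH, so the old mask was the odd spelling "__GFP_WAIT or-ed with the atomic-reserve flag". The new mask states the same intent directly; roughly, under the 4.4-era definitions:

/*
 * old: __GFP_WAIT | GFP_ATOMIC  ==  __GFP_WAIT | __GFP_HIGH
 * new: __GFP_RECLAIM | __GFP_HIGH
 *      ==  __GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM | __GFP_HIGH
 *
 * i.e. the allocation may block for direct reclaim, wakes kswapd, and may
 * still dip into the reserves.
 */
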
6 changes: 4 additions & 2 deletions block/ioprio.c
@@ -123,7 +123,8 @@ SYSCALL_DEFINE3(ioprio_set, int, which, int, who, int, ioprio)
 			break;
 
 		do_each_thread(g, p) {
-			if (!uid_eq(task_uid(p), uid))
+			if (!uid_eq(task_uid(p), uid) ||
+			    !task_pid_vnr(p))
 				continue;
 			ret = set_task_ioprio(p, ioprio);
 			if (ret)
@@ -220,7 +221,8 @@ SYSCALL_DEFINE2(ioprio_get, int, which, int, who)
 			break;
 
 		do_each_thread(g, p) {
-			if (!uid_eq(task_uid(p), user->uid))
+			if (!uid_eq(task_uid(p), user->uid) ||
+			    !task_pid_vnr(p))
 				continue;
 			tmpio = get_task_ioprio(p);
 			if (tmpio < 0)
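
The extra task_pid_vnr() test in both ioprio loops is the PRIO_USER pidns fix from the series summary: it confines the traversal to tasks visible in the caller's pid namespace. Conceptually (this helper is illustrative, not in the tree):

static inline bool visible_in_caller_pidns(struct task_struct *p)
{
	/* task_pid_vnr(p) returns p's pid as seen from the caller's pid
	 * namespace, and 0 when p has no pid there -- i.e. when p belongs
	 * to an unrelated namespace despite sharing the uid */
	return task_pid_vnr(p) != 0;
}
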
