diff --git a/Documentation/arm/small_task_packing.txt b/Documentation/arm/small_task_packing.txt new file mode 100644 index 000000000000..43f0a8b80234 --- /dev/null +++ b/Documentation/arm/small_task_packing.txt @@ -0,0 +1,136 @@ +Small Task Packing in the big.LITTLE MP Reference Patch Set + +What is small task packing? +---- +Simply that the scheduler will fit as many small tasks on a single CPU +as possible before using other CPUs. A small task is defined as one +whose tracked load is less than 90% of a NICE_0 task. This is a change +from the usual behaviour, since the scheduler will normally use an idle +CPU for a waking task unless that task is considered cache hot. + + +How is it implemented? +---- +Since all small tasks must wake up relatively frequently, the main +requirement for packing small tasks is to select a partly-busy CPU when +waking rather than looking for an idle CPU. We use the tracked load of +the CPU runqueue to determine how heavily loaded each CPU is and the +tracked load of the task to determine if it will fit on the CPU. We +always start with the lowest-numbered CPU in a sched domain and stop +looking when we find a CPU with enough space for the task. + +Some further tweaks are necessary to suppress load balancing when the +CPU is not fully loaded; otherwise the scheduler attempts to spread +tasks evenly across the domain. + + +How does it interact with the HMP patches? +---- +Firstly, we only enable packing on the little domain. The big domain +is intended to spread tasks amongst the available CPUs +one-task-per-CPU. The little domain, however, attempts to use as +little power as possible while servicing its tasks. + +Secondly, since we offload big tasks onto little CPUs in order to try +to devote one CPU to each task, we have a threshold above which we do +not try to pack a task and instead will select an idle CPU if possible. +This maintains maximum forward progress for busy tasks temporarily +demoted from big CPUs. + + +Can the behaviour be tuned? +---- +Yes, the load level of a 'full' CPU can be easily modified in the source +and is exposed through sysfs as /sys/kernel/hmp/packing_limit to be +changed at runtime. The presence of the packing behaviour is controlled +by CONFIG_SCHED_HMP_LITTLE_PACKING and can be disabled at runtime +using /sys/kernel/hmp/packing_enable. +The definition of a small task is hard-coded as 90% of NICE_0_LOAD +and cannot be modified at runtime. + + +Why do I need to tune it? +---- +The optimal configuration is likely to be different depending upon the +design and manufacturing of your SoC. + +In the main, there are two system effects from enabling small task +packing: + +1. The CPU operating point may increase. +2. The wakeup latency of tasks may increase. + +There are also likely to be secondary effects from loading one CPU +rather than spreading tasks. + +Note that all of these system effects are dependent upon the workload +under consideration. + + +CPU Operating Point +---- +The primary impact of loading one CPU with a number of light tasks is to +increase the compute requirement of that CPU, since it is no longer idle +as often. Increased compute requirement causes an increase in the +frequency of the CPU through CPUfreq. + +Consider this example: +We have a system with 3 CPUs which can operate at any frequency between +350MHz and 1GHz. The system has 6 tasks which would each produce 10% +load at 1GHz. The scheduler has frequency-invariant load scaling +enabled.
Our DVFS governor aims for 80% utilization at the chosen +frequency. + +Without task packing, these tasks will be spread out amongst all CPUs +such that each has 2. This will produce roughly 20% system load, and +the frequency of the package will remain at 350MHz. + +With task packing set to the default packing_limit, all of these tasks +will sit on one CPU and require a package frequency of ~750MHz to reach +80% utilization (0.75 = 0.6 / 0.8). + +When a package operates on a single frequency domain, all CPUs in that +package share frequency and voltage. + +Depending upon the SoC implementation, there can be a significant amount +of energy lost to leakage from idle CPUs. The decision about how +loaded a CPU must be to be considered 'full' is therefore controllable +through sysfs (/sys/kernel/hmp/packing_limit) and directly in the code. + +Continuing the example, let's set packing_limit to 450, which means we +will pack tasks until the total load of all running tasks >= 450. In +practice, this is very similar to a 55% idle 1GHz CPU. + +Now we are only able to place 4 tasks on CPU0, and two will overflow +onto CPU1. CPU0 will have a load of 40% and CPU1 will have a load of +20%. In order to still hit 80% utilization, CPU0 now needs to +operate at (0.5 = 0.4 / 0.8) 500MHz. CPU1 remains at the lowest +operating point, and CPU2 is no longer needed and can be power-gated. + +In order to use less energy, the saving from power-gating CPU2 must be +more than the extra energy spent running CPU0 at its higher operating +point. This depends upon the SoC implementation. + +This is obviously a contrived example requiring all the tasks to +be runnable at the same time, but it illustrates the point. + + +Wakeup Latency +---- +This is an unavoidable consequence of trying to pack tasks together +rather than giving them a CPU each. If you cannot find an acceptable +level of wakeup latency, you should turn packing off. + +Cyclictest is a good test application for determining the added latency +when configuring packing. + + +Why is it turned off for the VersatileExpress V2P_CA15A7 CoreTile? +---- +Simply, this core tile only has power gating for the whole A7 package. +When small task packing is enabled, all our low-energy use cases +normally fit onto one A7 CPU. We therefore end up with 2 mostly-idle +CPUs and one mostly-busy CPU. This decreases the amount of time +available where the whole package is idle and can be turned off. + diff --git a/Makefile b/Makefile index 11f0b5836b4a..a1594bbbe9d5 100644 --- a/Makefile +++ b/Makefile @@ -1,6 +1,6 @@ VERSION = 3 PATCHLEVEL = 10 -SUBLEVEL = 19 +SUBLEVEL = 21 EXTRAVERSION = NAME = TOSSUG Baby Fish diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index eb0523897d6c..fe8d330568e5 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -1513,6 +1513,17 @@ config SCHED_HMP There is currently no support for migration of task groups, hence !SCHED_AUTOGROUP. Furthermore, normal load-balancing must be disabled between cpus of different type (DISABLE_CPU_SCHED_DOMAIN_BALANCE). + When turned on, this option adds a /sys/kernel/hmp directory which + contains the following files: + up_threshold - the load average threshold used for up migration + (0 - 1023) + down_threshold - the load average threshold used for down migration + (0 - 1023) + hmp_domains - a list of cpumasks for the present HMP domains, + starting with the 'biggest' and ending with the + 'smallest'.
+ Note that both the threshold files can be written at runtime to + control scheduler behaviour. config SCHED_HMP_PRIO_FILTER bool "(EXPERIMENTAL) Filter HMP migrations by task priority" @@ -1547,28 +1558,24 @@ config HMP_VARIABLE_SCALE bool "Allows changing the load tracking scale through sysfs" depends on SCHED_HMP help - When turned on, this option exports the thresholds and load average - period value for the load tracking patches through sysfs. + When turned on, this option exports the load average period value + for the load tracking patches through sysfs. The values can be modified to change the rate of load accumulation - and the thresholds used for HMP migration. - The load_avg_period_ms is the time in ms to reach a load average of - 0.5 for an idle task of 0 load average ratio that start a busy loop. - The up_threshold and down_threshold is the value to go to a faster - CPU or to go back to a slower cpu. - The {up,down}_threshold are devided by 1024 before being compared - to the load average. - For examples, with load_avg_period_ms = 128 and up_threshold = 512, + used for HMP migration. 'load_avg_period_ms' is the time in ms to + reach a load average of 0.5 for an idle task of 0 load average + ratio which becomes 100% busy. + For example, with load_avg_period_ms = 128 and up_threshold = 512, a running task with a load of 0 will be migrated to a bigger CPU after 128ms, because after 128ms its load_avg_ratio is 0.5 and the real up_threshold is 0.5. This patch has the same behavior as changing the Y of the load average computation to (1002/1024)^(LOAD_AVG_PERIOD/load_avg_period_ms) - but it remove intermadiate overflows in computation. + but removes intermediate overflows in computation. config HMP_FREQUENCY_INVARIANT_SCALE bool "(EXPERIMENTAL) Frequency-Invariant Tracked Load for HMP" - depends on HMP_VARIABLE_SCALE && CPU_FREQ + depends on SCHED_HMP && CPU_FREQ help Scales the current load contribution in line with the frequency of the CPU that the task was executed on. 
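The packing policy and the operating-point arithmetic described in small_task_packing.txt above condense into a short standalone model. The sketch below is illustrative only and uses invented names (select_packing_cpu(), rq_load[], required_freq_mhz()) rather than the kernel's real symbols; the real implementation works on runqueue tracked load inside the scheduler, and the DVFS governor is reduced here to "pick the lowest frequency that keeps utilization at or below 80%".

    #include <stdio.h>

    #define NICE_0_LOAD     1024                     /* load of a 100%-busy nice-0 task */
    #define SMALL_TASK_LOAD (NICE_0_LOAD * 90 / 100) /* packing cut-off from the doc */
    #define PACKING_LIMIT   450                      /* example /sys/kernel/hmp/packing_limit */
    #define NR_LITTLE_CPUS  3
    #define FMIN_MHZ        350
    #define FMAX_MHZ        1000
    #define TARGET_UTIL_PCT 80                       /* governor target utilization */

    static unsigned int rq_load[NR_LITTLE_CPUS];     /* stand-in for tracked runqueue load */

    /* Walk the little domain from CPU0 and return the first CPU with room
     * below the packing limit; -1 means nothing fits and the caller should
     * fall back to the usual idle-CPU search. */
    static int select_packing_cpu(unsigned int load)
    {
        int cpu;

        if (load >= SMALL_TASK_LOAD)
            return -1;  /* not a small task: keep forward progress */
        for (cpu = 0; cpu < NR_LITTLE_CPUS; cpu++)
            if (rq_load[cpu] + load <= PACKING_LIMIT)
                return cpu;
        return -1;
    }

    /* Lowest frequency at which 'load' runs at or below the target
     * utilization: f = fmax * (load / NICE_0_LOAD) / 0.8, floored at fmin. */
    static unsigned int required_freq_mhz(unsigned int load)
    {
        unsigned int f = FMAX_MHZ * load * 100 / (NICE_0_LOAD * TARGET_UTIL_PCT);

        return f < FMIN_MHZ ? FMIN_MHZ : f;
    }

    int main(void)
    {
        unsigned int task_load = NICE_0_LOAD / 10;   /* 10% load at 1GHz */
        int i, cpu;

        for (i = 0; i < 6; i++) {
            cpu = select_packing_cpu(task_load);
            if (cpu < 0)
                cpu = 0;  /* idle-CPU fallback, not modelled here */
            rq_load[cpu] += task_load;
        }
        for (cpu = 0; cpu < NR_LITTLE_CPUS; cpu++)
            printf("CPU%d: load %4u -> %uMHz\n",
                   cpu, rq_load[cpu], required_freq_mhz(rq_load[cpu]));
        return 0;
    }

Built with any C compiler, this prints ~500MHz for CPU0 (4 tasks, load 408) and the 350MHz floor for CPU1 (the 2 overflow tasks) and the idle CPU2, which is free to be power-gated. Raising PACKING_LIMIT far enough for all six tasks reproduces the other half of the worked example: one CPU at load 612 needs ~750MHz (0.6 / 0.8).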
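The HMP_VARIABLE_SCALE help text above packs a lot into the substitution Y = (1002/1024)^(LOAD_AVG_PERIOD/load_avg_period_ms). As a numerical sanity check rather than a model of the kernel's fixed-point code, the sketch below (load_avg_ratio() is an invented name; LOAD_AVG_PERIOD is the kernel's 32ms half-life constant) evaluates the worked example: a task that goes from idle to 100% busy reaches a load average ratio of 0.5, i.e. an up_threshold of 512 out of 1024, after roughly load_avg_period_ms milliseconds.

    #include <math.h>
    #include <stdio.h>

    #define LOAD_AVG_PERIOD 32  /* ms; the kernel's default half-life */

    /* Load average ratio of a previously idle task after 'ms' milliseconds
     * of running flat out, for a given load_avg_period_ms. Every elapsed
     * millisecond decays the history by y and adds one fully-busy sample,
     * so the normalized ratio is the geometric sum 1 - y^ms. */
    static double load_avg_ratio(int ms, int load_avg_period_ms)
    {
        double y = pow(1002.0 / 1024.0,
                       (double)LOAD_AVG_PERIOD / load_avg_period_ms);

        return 1.0 - pow(y, ms);
    }

    int main(void)
    {
        /* The example from the help text: load_avg_period_ms = 128 and
         * up_threshold = 512 give up-migration after roughly 128ms. */
        printf("128ms period, after 128ms: %.3f\n", load_avg_ratio(128, 128));
        /* Default behaviour: 0.5 is reached after LOAD_AVG_PERIOD ms. */
        printf(" 32ms period, after  32ms: %.3f\n", load_avg_ratio(32, 32));
        return 0;
    }

Both lines print ~0.501 (link with -lm); shrinking load_avg_period_ms makes the tracked load, and hence up and down migration, converge proportionally faster.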
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c index 84ba67b982c0..e04613906f1b 100644 --- a/arch/arm/kvm/mmu.c +++ b/arch/arm/kvm/mmu.c @@ -313,6 +313,17 @@ out: return err; } +static phys_addr_t kvm_kaddr_to_phys(void *kaddr) +{ + if (!is_vmalloc_addr(kaddr)) { + BUG_ON(!virt_addr_valid(kaddr)); + return __pa(kaddr); + } else { + return page_to_phys(vmalloc_to_page(kaddr)) + + offset_in_page(kaddr); + } +} + /** * create_hyp_mappings - duplicate a kernel virtual address range in Hyp mode * @from: The virtual kernel start address of the range @@ -324,16 +335,27 @@ out: */ int create_hyp_mappings(void *from, void *to) { - unsigned long phys_addr = virt_to_phys(from); + phys_addr_t phys_addr; + unsigned long virt_addr; unsigned long start = KERN_TO_HYP((unsigned long)from); unsigned long end = KERN_TO_HYP((unsigned long)to); - /* Check for a valid kernel memory mapping */ - if (!virt_addr_valid(from) || !virt_addr_valid(to - 1)) - return -EINVAL; + start = start & PAGE_MASK; + end = PAGE_ALIGN(end); - return __create_hyp_mappings(hyp_pgd, start, end, - __phys_to_pfn(phys_addr), PAGE_HYP); + for (virt_addr = start; virt_addr < end; virt_addr += PAGE_SIZE) { + int err; + + phys_addr = kvm_kaddr_to_phys(from + virt_addr - start); + err = __create_hyp_mappings(hyp_pgd, virt_addr, + virt_addr + PAGE_SIZE, + __phys_to_pfn(phys_addr), + PAGE_HYP); + if (err) + return err; + } + + return 0; } /** diff --git a/arch/arm/mach-vexpress/tc2_pm.c b/arch/arm/mach-vexpress/tc2_pm.c index 9c742edb7ae6..2b519eee84d8 100644 --- a/arch/arm/mach-vexpress/tc2_pm.c +++ b/arch/arm/mach-vexpress/tc2_pm.c @@ -122,7 +122,15 @@ static void tc2_pm_down(u64 residency) } else BUG(); - gic_cpu_if_down(); + /* + * If the CPU is committed to power down, make sure + * the power controller will be in charge of waking it + * up upon IRQ, ie IRQ lines are cut from GIC CPU IF + * to the CPU by disabling the GIC CPU IF to prevent wfi + * from completing execution behind power controller back + */ + if (!skip_wfi) + gic_cpu_if_down(); if (last_man && __mcpm_outbound_enter_critical(cpu, cluster)) { arch_spin_unlock(&tc2_pm_lock); diff --git a/arch/cris/include/asm/io.h b/arch/cris/include/asm/io.h index ac12ae2b9286..db9a16c704f3 100644 --- a/arch/cris/include/asm/io.h +++ b/arch/cris/include/asm/io.h @@ -3,6 +3,7 @@ #include <asm/page.h> /* for __va, __pa */ #include <arch/io.h> +#include <asm-generic/iomap.h> #include <linux/kernel.h> struct cris_io_operations diff --git a/arch/ia64/include/asm/processor.h b/arch/ia64/include/asm/processor.h index e0a899a1a8a6..5a84b3a50741 100644 --- a/arch/ia64/include/asm/processor.h +++ b/arch/ia64/include/asm/processor.h @@ -319,7 +319,7 @@ struct thread_struct { regs->loadrs = 0; \ regs->r8 = get_dumpable(current->mm); /* set "don't zap registers" flag */ \ regs->r12 = new_sp - 16; /* allocate 16 byte scratch area */ \ - if (unlikely(!get_dumpable(current->mm))) { \ + if (unlikely(get_dumpable(current->mm) != SUID_DUMP_USER)) { \ /* \ * Zap scratch regs to avoid leaking bits between processes with different \ * uid/privileges.
\ diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c index 0f83122e6676..323309963cd3 100644 --- a/arch/powerpc/kernel/signal_32.c +++ b/arch/powerpc/kernel/signal_32.c @@ -454,7 +454,15 @@ static int save_user_regs(struct pt_regs *regs, struct mcontext __user *frame, if (copy_vsx_to_user(&frame->mc_vsregs, current)) return 1; msr |= MSR_VSX; - } + } else if (!ctx_has_vsx_region) + /* + * With a small context structure we can't hold the VSX + * registers, hence clear the MSR value to indicate the state + * was not saved. + */ + msr &= ~MSR_VSX; + + #endif /* CONFIG_VSX */ #ifdef CONFIG_SPE /* save spe registers */ diff --git a/arch/powerpc/kernel/vio.c b/arch/powerpc/kernel/vio.c index 2d845d8199fc..56d2e72c85de 100644 --- a/arch/powerpc/kernel/vio.c +++ b/arch/powerpc/kernel/vio.c @@ -1530,12 +1530,12 @@ static ssize_t modalias_show(struct device *dev, struct device_attribute *attr, dn = dev->of_node; if (!dn) { - strcat(buf, "\n"); + strcpy(buf, "\n"); return strlen(buf); } cp = of_get_property(dn, "compatible", NULL); if (!cp) { - strcat(buf, "\n"); + strcpy(buf, "\n"); return strlen(buf); } diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c index 3e99c149271a..7ce9cf3b6988 100644 --- a/arch/powerpc/mm/slice.c +++ b/arch/powerpc/mm/slice.c @@ -258,7 +258,7 @@ static bool slice_scan_available(unsigned long addr, slice = GET_HIGH_SLICE_INDEX(addr); *boundary_addr = (slice + end) ? ((slice + end) << SLICE_HIGH_SHIFT) : SLICE_LOW_TOP; - return !!(available.high_slices & (1u << slice)); + return !!(available.high_slices & (1ul << slice)); } } diff --git a/arch/powerpc/platforms/52xx/Kconfig b/arch/powerpc/platforms/52xx/Kconfig index 90f4496017e4..af54174801f7 100644 --- a/arch/powerpc/platforms/52xx/Kconfig +++ b/arch/powerpc/platforms/52xx/Kconfig @@ -57,5 +57,5 @@ config PPC_MPC5200_BUGFIX config PPC_MPC5200_LPBFIFO tristate "MPC5200 LocalPlus bus FIFO driver" - depends on PPC_MPC52xx + depends on PPC_MPC52xx && PPC_BESTCOMM select PPC_BESTCOMM_GEN_BD diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c index 7816beff1db8..f75607c93e8a 100644 --- a/arch/powerpc/platforms/powernv/pci-ioda.c +++ b/arch/powerpc/platforms/powernv/pci-ioda.c @@ -151,13 +151,23 @@ static int pnv_ioda_configure_pe(struct pnv_phb *phb, struct pnv_ioda_pe *pe) rid_end = pe->rid + 1; } - /* Associate PE in PELT */ + /* + * Associate PE in PELT. We need add the PE into the + * corresponding PELT-V as well. Otherwise, the error + * originated from the PE might contribute to other + * PEs. 
+ */ rc = opal_pci_set_pe(phb->opal_id, pe->pe_number, pe->rid, bcomp, dcomp, fcomp, OPAL_MAP_PE); if (rc) { pe_err(pe, "OPAL error %ld trying to setup PELT table\n", rc); return -ENXIO; } + + rc = opal_pci_set_peltv(phb->opal_id, pe->pe_number, + pe->pe_number, OPAL_ADD_PE_TO_DOMAIN); + if (rc) + pe_warn(pe, "OPAL error %d adding self to PELTV\n", rc); opal_pci_eeh_freeze_clear(phb->opal_id, pe->pe_number, OPAL_EEH_ACTION_CLEAR_FREEZE_ALL); diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c index b4dbade8ca24..2e4b5be31a1b 100644 --- a/arch/s390/crypto/aes_s390.c +++ b/arch/s390/crypto/aes_s390.c @@ -35,7 +35,6 @@ static u8 *ctrblk; static char keylen_flag; struct s390_aes_ctx { - u8 iv[AES_BLOCK_SIZE]; u8 key[AES_MAX_KEY_SIZE]; long enc; long dec; @@ -441,30 +440,36 @@ static int cbc_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key, return aes_set_key(tfm, in_key, key_len); } -static int cbc_aes_crypt(struct blkcipher_desc *desc, long func, void *param, +static int cbc_aes_crypt(struct blkcipher_desc *desc, long func, struct blkcipher_walk *walk) { + struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(desc->tfm); int ret = blkcipher_walk_virt(desc, walk); unsigned int nbytes = walk->nbytes; + struct { + u8 iv[AES_BLOCK_SIZE]; + u8 key[AES_MAX_KEY_SIZE]; + } param; if (!nbytes) goto out; - memcpy(param, walk->iv, AES_BLOCK_SIZE); + memcpy(param.iv, walk->iv, AES_BLOCK_SIZE); + memcpy(param.key, sctx->key, sctx->key_len); do { /* only use complete blocks */ unsigned int n = nbytes & ~(AES_BLOCK_SIZE - 1); u8 *out = walk->dst.virt.addr; u8 *in = walk->src.virt.addr; - ret = crypt_s390_kmc(func, param, out, in, n); + ret = crypt_s390_kmc(func, &param, out, in, n); if (ret < 0 || ret != n) return -EIO; nbytes &= AES_BLOCK_SIZE - 1; ret = blkcipher_walk_done(desc, walk, nbytes); } while ((nbytes = walk->nbytes)); - memcpy(walk->iv, param, AES_BLOCK_SIZE); + memcpy(walk->iv, param.iv, AES_BLOCK_SIZE); out: return ret; @@ -481,7 +486,7 @@ static int cbc_aes_encrypt(struct blkcipher_desc *desc, return fallback_blk_enc(desc, dst, src, nbytes); blkcipher_walk_init(&walk, dst, src, nbytes); - return cbc_aes_crypt(desc, sctx->enc, sctx->iv, &walk); + return cbc_aes_crypt(desc, sctx->enc, &walk); } static int cbc_aes_decrypt(struct blkcipher_desc *desc, @@ -495,7 +500,7 @@ static int cbc_aes_decrypt(struct blkcipher_desc *desc, return fallback_blk_dec(desc, dst, src, nbytes); blkcipher_walk_init(&walk, dst, src, nbytes); - return cbc_aes_crypt(desc, sctx->dec, sctx->iv, &walk); + return cbc_aes_crypt(desc, sctx->dec, &walk); } static struct crypto_alg cbc_aes_alg = { diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c index 4f977d0d25c2..14647fe09d0c 100644 --- a/arch/s390/kernel/smp.c +++ b/arch/s390/kernel/smp.c @@ -933,7 +933,7 @@ static ssize_t show_idle_count(struct device *dev, idle_count = ACCESS_ONCE(idle->idle_count); if (ACCESS_ONCE(idle->clock_idle_enter)) idle_count++; - } while ((sequence & 1) || (idle->sequence != sequence)); + } while ((sequence & 1) || (ACCESS_ONCE(idle->sequence) != sequence)); return sprintf(buf, "%llu\n", idle_count); } static DEVICE_ATTR(idle_count, 0444, show_idle_count, NULL); @@ -951,7 +951,7 @@ static ssize_t show_idle_time(struct device *dev, idle_time = ACCESS_ONCE(idle->idle_time); idle_enter = ACCESS_ONCE(idle->clock_idle_enter); idle_exit = ACCESS_ONCE(idle->clock_idle_exit); - } while ((sequence & 1) || (idle->sequence != sequence)); + } while ((sequence & 1) || (ACCESS_ONCE(idle->sequence) != sequence)); idle_time += idle_enter
? ((idle_exit ? : now) - idle_enter) : 0; return sprintf(buf, "%llu\n", idle_time >> 12); } diff --git a/arch/s390/kernel/vtime.c b/arch/s390/kernel/vtime.c index 3fb09359eda6..737d50caa4fe 100644 --- a/arch/s390/kernel/vtime.c +++ b/arch/s390/kernel/vtime.c @@ -190,7 +190,7 @@ cputime64_t s390_get_idle_time(int cpu) sequence = ACCESS_ONCE(idle->sequence); idle_enter = ACCESS_ONCE(idle->clock_idle_enter); idle_exit = ACCESS_ONCE(idle->clock_idle_exit); - } while ((sequence & 1) || (idle->sequence != sequence)); + } while ((sequence & 1) || (ACCESS_ONCE(idle->sequence) != sequence)); return idle_enter ? ((idle_exit ?: now) - idle_enter) : 0; } diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c index 42a392a9fd02..d4bdd253fea7 100644 --- a/arch/x86/kernel/ftrace.c +++ b/arch/x86/kernel/ftrace.c @@ -248,6 +248,15 @@ int ftrace_update_ftrace_func(ftrace_func_t func) return ret; } +static int is_ftrace_caller(unsigned long ip) +{ + if (ip == (unsigned long)(&ftrace_call) || + ip == (unsigned long)(&ftrace_regs_call)) + return 1; + + return 0; +} + /* * A breakpoint was added to the code address we are about to * modify, and this is the handle that will just skip over it. @@ -257,10 +266,13 @@ int ftrace_update_ftrace_func(ftrace_func_t func) */ int ftrace_int3_handler(struct pt_regs *regs) { + unsigned long ip; + if (WARN_ON_ONCE(!regs)) return 0; - if (!ftrace_location(regs->ip - 1)) + ip = regs->ip - 1; + if (!ftrace_location(ip) && !is_ftrace_caller(ip)) return 0; regs->ip += MCOUNT_INSN_SIZE - 1; diff --git a/arch/x86/kernel/microcode_amd.c b/arch/x86/kernel/microcode_amd.c index efdec7cd8e01..b516dfb411ec 100644 --- a/arch/x86/kernel/microcode_amd.c +++ b/arch/x86/kernel/microcode_amd.c @@ -430,7 +430,7 @@ static enum ucode_state request_microcode_amd(int cpu, struct device *device, snprintf(fw_name, sizeof(fw_name), "amd-ucode/microcode_amd_fam%.2xh.bin", c->x86); if (request_firmware(&fw, (const char *)fw_name, device)) { - pr_err("failed to load file %s\n", fw_name); + pr_debug("failed to load file %s\n", fw_name); goto out; } diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c index 1ce8966f2488..48f439953436 100644 --- a/arch/x86/kernel/process.c +++ b/arch/x86/kernel/process.c @@ -378,9 +378,9 @@ static void amd_e400_idle(void) * The switch back from broadcast mode needs to be * called with interrupts disabled. 
*/ - local_irq_disable(); - clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_EXIT, &cpu); - local_irq_enable(); + local_irq_disable(); + clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_EXIT, &cpu); + local_irq_enable(); } else default_idle(); } diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c index 5953dcea752d..5484d54582ca 100644 --- a/arch/x86/kvm/emulate.c +++ b/arch/x86/kvm/emulate.c @@ -4207,7 +4207,10 @@ static int decode_operand(struct x86_emulate_ctxt *ctxt, struct operand *op, case OpMem8: ctxt->memop.bytes = 1; if (ctxt->memop.type == OP_REG) { - ctxt->memop.addr.reg = decode_register(ctxt, ctxt->modrm_rm, 1); + int highbyte_regs = ctxt->rex_prefix == 0; + + ctxt->memop.addr.reg = decode_register(ctxt, ctxt->modrm_rm, + highbyte_regs); fetch_register_operand(&ctxt->memop); } goto mem_common; diff --git a/block/blk-core.c b/block/blk-core.c index 0852e5d43436..f6fcc9709347 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -2229,6 +2229,7 @@ void blk_start_request(struct request *req) if (unlikely(blk_bidi_rq(req))) req->next_rq->resid_len = blk_rq_bytes(req->next_rq); + BUG_ON(test_bit(REQ_ATOM_COMPLETE, &req->atomic_flags)); blk_add_timer(req); } EXPORT_SYMBOL(blk_start_request); diff --git a/block/blk-settings.c b/block/blk-settings.c index c50ecf0ea3b1..53309333c2f0 100644 --- a/block/blk-settings.c +++ b/block/blk-settings.c @@ -144,6 +144,7 @@ void blk_set_stacking_limits(struct queue_limits *lim) lim->discard_zeroes_data = 1; lim->max_segments = USHRT_MAX; lim->max_hw_sectors = UINT_MAX; + lim->max_segment_size = UINT_MAX; lim->max_sectors = UINT_MAX; lim->max_write_same_sectors = UINT_MAX; } diff --git a/block/blk-timeout.c b/block/blk-timeout.c index 6e4744cbfb56..5a6296ef9a81 100644 --- a/block/blk-timeout.c +++ b/block/blk-timeout.c @@ -90,8 +90,8 @@ static void blk_rq_timed_out(struct request *req) __blk_complete_request(req); break; case BLK_EH_RESET_TIMER: - blk_clear_rq_complete(req); blk_add_timer(req); + blk_clear_rq_complete(req); break; case BLK_EH_NOT_HANDLED: /* @@ -173,7 +173,6 @@ void blk_add_timer(struct request *req) return; BUG_ON(!list_empty(&req->timeout_list)); - BUG_ON(test_bit(REQ_ATOM_COMPLETE, &req->atomic_flags)); /* * Some LLDs, like scsi, peek at the timeout to prevent a diff --git a/crypto/ansi_cprng.c b/crypto/ansi_cprng.c index c0bb3778f1ae..666f1962a160 100644 --- a/crypto/ansi_cprng.c +++ b/crypto/ansi_cprng.c @@ -230,11 +230,11 @@ remainder: */ if (byte_count < DEFAULT_BLK_SZ) { empty_rbuf: - for (; ctx->rand_data_valid < DEFAULT_BLK_SZ; - ctx->rand_data_valid++) { + while (ctx->rand_data_valid < DEFAULT_BLK_SZ) { *ptr = ctx->rand_data[ctx->rand_data_valid]; ptr++; byte_count--; + ctx->rand_data_valid++; if (byte_count == 0) goto done; } diff --git a/drivers/acpi/acpica/exoparg1.c b/drivers/acpi/acpica/exoparg1.c index b60c877f5906..c3241b188434 100644 --- a/drivers/acpi/acpica/exoparg1.c +++ b/drivers/acpi/acpica/exoparg1.c @@ -963,10 +963,17 @@ acpi_status acpi_ex_opcode_1A_0T_1R(struct acpi_walk_state *walk_state) */ return_desc = *(operand[0]->reference.where); - if (return_desc) { - acpi_ut_add_reference - (return_desc); + if (!return_desc) { + /* + * Element is NULL, do not allow the dereference. + * This provides compatibility with other ACPI + * implementations. 
+ */ + return_ACPI_STATUS + (AE_AML_UNINITIALIZED_ELEMENT); } + + acpi_ut_add_reference(return_desc); break; default: @@ -991,11 +998,40 @@ acpi_status acpi_ex_opcode_1A_0T_1R(struct acpi_walk_state *walk_state) acpi_namespace_node *) return_desc); + if (!return_desc) { + break; + } + + /* + * June 2013: + * buffer_fields/field_units require additional resolution + */ + switch (return_desc->common.type) { + case ACPI_TYPE_BUFFER_FIELD: + case ACPI_TYPE_LOCAL_REGION_FIELD: + case ACPI_TYPE_LOCAL_BANK_FIELD: + case ACPI_TYPE_LOCAL_INDEX_FIELD: + + status = + acpi_ex_read_data_from_field + (walk_state, return_desc, + &temp_desc); + if (ACPI_FAILURE(status)) { + goto cleanup; + } + + return_desc = temp_desc; + break; + + default: + + /* Add another reference to the object */ + + acpi_ut_add_reference + (return_desc); + break; + } } - - /* Add another reference to the object! */ - - acpi_ut_add_reference(return_desc); break; default: diff --git a/drivers/acpi/acpica/exstore.c b/drivers/acpi/acpica/exstore.c index 93c6049c2d75..b1ad39443cb6 100644 --- a/drivers/acpi/acpica/exstore.c +++ b/drivers/acpi/acpica/exstore.c @@ -57,6 +57,11 @@ acpi_ex_store_object_to_index(union acpi_operand_object *val_desc, union acpi_operand_object *dest_desc, struct acpi_walk_state *walk_state); +static acpi_status +acpi_ex_store_direct_to_node(union acpi_operand_object *source_desc, + struct acpi_namespace_node *node, + struct acpi_walk_state *walk_state); + /******************************************************************************* * * FUNCTION: acpi_ex_store @@ -376,7 +381,11 @@ acpi_ex_store_object_to_index(union acpi_operand_object *source_desc, * When storing into an object the data is converted to the * target object type then stored in the object. This means * that the target object type (for an initialized target) will - * not be changed by a store operation. + * not be changed by a store operation. A copy_object can change + * the target type, however. + * + * The implicit_conversion flag is set to NO/FALSE only when + * storing to an arg_x -- as per the rules of the ACPI spec. * * Assumes parameters are already validated. * @@ -400,7 +409,7 @@ acpi_ex_store_object_to_node(union acpi_operand_object *source_desc, target_type = acpi_ns_get_type(node); target_desc = acpi_ns_get_attached_object(node); - ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Storing %p(%s) into node %p(%s)\n", + ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Storing %p (%s) to node %p (%s)\n", source_desc, acpi_ut_get_object_type_name(source_desc), node, acpi_ut_get_type_name(target_type))); @@ -414,46 +423,31 @@ acpi_ex_store_object_to_node(union acpi_operand_object *source_desc, return_ACPI_STATUS(status); } - /* If no implicit conversion, drop into the default case below */ - - if ((!implicit_conversion) || - ((walk_state->opcode == AML_COPY_OP) && - (target_type != ACPI_TYPE_LOCAL_REGION_FIELD) && - (target_type != ACPI_TYPE_LOCAL_BANK_FIELD) && - (target_type != ACPI_TYPE_LOCAL_INDEX_FIELD))) { - /* - * Force execution of default (no implicit conversion). Note: - * copy_object does not perform an implicit conversion, as per the ACPI - * spec -- except in case of region/bank/index fields -- because these - * objects must retain their original type permanently. 
- */ - target_type = ACPI_TYPE_ANY; - } - /* Do the actual store operation */ switch (target_type) { - case ACPI_TYPE_BUFFER_FIELD: - case ACPI_TYPE_LOCAL_REGION_FIELD: - case ACPI_TYPE_LOCAL_BANK_FIELD: - case ACPI_TYPE_LOCAL_INDEX_FIELD: - - /* For fields, copy the source data to the target field. */ - - status = acpi_ex_write_data_to_field(source_desc, target_desc, - &walk_state->result_obj); - break; - case ACPI_TYPE_INTEGER: case ACPI_TYPE_STRING: case ACPI_TYPE_BUFFER: /* - * These target types are all of type Integer/String/Buffer, and - * therefore support implicit conversion before the store. - * - * Copy and/or convert the source object to a new target object + * The simple data types all support implicit source operand + * conversion before the store. */ + + if ((walk_state->opcode == AML_COPY_OP) || !implicit_conversion) { + /* + * However, copy_object and Stores to arg_x do not perform + * an implicit conversion, as per the ACPI specification. + * A direct store is performed instead. + */ + status = acpi_ex_store_direct_to_node(source_desc, node, + walk_state); + break; + } + + /* Store with implicit source operand conversion support */ + status = acpi_ex_store_object_to_object(source_desc, target_desc, &new_desc, walk_state); @@ -467,13 +461,12 @@ acpi_ex_store_object_to_node(union acpi_operand_object *source_desc, * the Name's type to that of the value being stored in it. * source_desc reference count is incremented by attach_object. * - * Note: This may change the type of the node if an explicit store - * has been performed such that the node/object type has been - * changed. + * Note: This may change the type of the node if an explicit + * store has been performed such that the node/object type + * has been changed. */ - status = - acpi_ns_attach_object(node, new_desc, - new_desc->common.type); + status = acpi_ns_attach_object(node, new_desc, + new_desc->common.type); ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Store %s into %s via Convert/Attach\n", @@ -484,38 +477,83 @@ acpi_ex_store_object_to_node(union acpi_operand_object *source_desc, } break; + case ACPI_TYPE_BUFFER_FIELD: + case ACPI_TYPE_LOCAL_REGION_FIELD: + case ACPI_TYPE_LOCAL_BANK_FIELD: + case ACPI_TYPE_LOCAL_INDEX_FIELD: + /* + * For all fields, always write the source data to the target + * field. Any required implicit source operand conversion is + * performed in the function below as necessary. Note, field + * objects must retain their original type permanently. + */ + status = acpi_ex_write_data_to_field(source_desc, target_desc, + &walk_state->result_obj); + break; + default: - - ACPI_DEBUG_PRINT((ACPI_DB_EXEC, - "Storing [%s] (%p) directly into node [%s] (%p)" - " with no implicit conversion\n", - acpi_ut_get_object_type_name(source_desc), - source_desc, - acpi_ut_get_object_type_name(target_desc), - node)); - /* * No conversions for all other types. Directly store a copy of - * the source object. NOTE: This is a departure from the ACPI - * spec, which states "If conversion is impossible, abort the - * running control method". + * the source object. This is the ACPI spec-defined behavior for + * the copy_object operator. * - * This code implements "If conversion is impossible, treat the - * Store operation as a CopyObject". + * NOTE: For the Store operator, this is a departure from the + * ACPI spec, which states "If conversion is impossible, abort + * the running control method". Instead, this code implements + * "If conversion is impossible, treat the Store operation as + * a CopyObject". 
*/ - status = - acpi_ut_copy_iobject_to_iobject(source_desc, &new_desc, - walk_state); - if (ACPI_FAILURE(status)) { - return_ACPI_STATUS(status); - } - - status = - acpi_ns_attach_object(node, new_desc, - new_desc->common.type); - acpi_ut_remove_reference(new_desc); + status = acpi_ex_store_direct_to_node(source_desc, node, + walk_state); break; } return_ACPI_STATUS(status); } + +/******************************************************************************* + * + * FUNCTION: acpi_ex_store_direct_to_node + * + * PARAMETERS: source_desc - Value to be stored + * node - Named object to receive the value + * walk_state - Current walk state + * + * RETURN: Status + * + * DESCRIPTION: "Store" an object directly to a node. This involves a copy + * and an attach. + * + ******************************************************************************/ + +static acpi_status +acpi_ex_store_direct_to_node(union acpi_operand_object *source_desc, + struct acpi_namespace_node *node, + struct acpi_walk_state *walk_state) +{ + acpi_status status; + union acpi_operand_object *new_desc; + + ACPI_FUNCTION_TRACE(ex_store_direct_to_node); + + ACPI_DEBUG_PRINT((ACPI_DB_EXEC, + "Storing [%s] (%p) directly into node [%s] (%p)" + " with no implicit conversion\n", + acpi_ut_get_object_type_name(source_desc), + source_desc, acpi_ut_get_type_name(node->type), + node)); + + /* Copy the source object to a new object */ + + status = + acpi_ut_copy_iobject_to_iobject(source_desc, &new_desc, walk_state); + if (ACPI_FAILURE(status)) { + return_ACPI_STATUS(status); + } + + /* Attach the new object to the node */ + + status = acpi_ns_attach_object(node, new_desc, new_desc->common.type); + acpi_ut_remove_reference(new_desc); + return_ACPI_STATUS(status); +} diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c index 45af90a1ec1b..1ad5a4f9e0c3 100644 --- a/drivers/acpi/ec.c +++ b/drivers/acpi/ec.c @@ -175,9 +175,10 @@ static void start_transaction(struct acpi_ec *ec) static void advance_transaction(struct acpi_ec *ec, u8 status) { unsigned long flags; - struct transaction *t = ec->curr; + struct transaction *t; spin_lock_irqsave(&ec->lock, flags); + t = ec->curr; if (!t) goto unlock; if (t->wlen > t->wi) { diff --git a/drivers/acpi/pci_root.c b/drivers/acpi/pci_root.c index e427dc516c76..e36842b9e1fa 100644 --- a/drivers/acpi/pci_root.c +++ b/drivers/acpi/pci_root.c @@ -614,9 +614,12 @@ static void handle_root_bridge_removal(struct acpi_device *device) ej_event->device = device; ej_event->event = ACPI_NOTIFY_EJECT_REQUEST; + get_device(&device->dev); status = acpi_os_hotplug_execute(acpi_bus_hot_remove_device, ej_event); - if (ACPI_FAILURE(status)) + if (ACPI_FAILURE(status)) { + put_device(&device->dev); kfree(ej_event); + } } static void _handle_hotplug_event_root(struct work_struct *work) diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c index eb133c77aadb..4056d3175178 100644 --- a/drivers/acpi/processor_idle.c +++ b/drivers/acpi/processor_idle.c @@ -121,17 +121,10 @@ static struct dmi_system_id __cpuinitdata processor_power_dmi_table[] = { */ static void acpi_safe_halt(void) { - current_thread_info()->status &= ~TS_POLLING; - /* - * TS_POLLING-cleared state must be visible before we - * test NEED_RESCHED: - */ - smp_mb(); - if (!need_resched()) { + if (!tif_need_resched()) { safe_halt(); local_irq_disable(); } - current_thread_info()->status |= TS_POLLING; } #ifdef ARCH_APICTIMER_STOPS_ON_C3 @@ -739,6 +732,11 @@ static int acpi_idle_enter_c1(struct cpuidle_device *dev, if (unlikely(!pr)) return 
-EINVAL; + if (cx->entry_method == ACPI_CSTATE_FFH) { + if (current_set_polling_and_test()) + return -EINVAL; + } + lapic_timer_state_broadcast(pr, cx, 1); acpi_idle_do_entry(cx); @@ -792,18 +790,9 @@ static int acpi_idle_enter_simple(struct cpuidle_device *dev, if (unlikely(!pr)) return -EINVAL; - if (cx->entry_method != ACPI_CSTATE_FFH) { - current_thread_info()->status &= ~TS_POLLING; - /* - * TS_POLLING-cleared state must be visible before we test - * NEED_RESCHED: - */ - smp_mb(); - - if (unlikely(need_resched())) { - current_thread_info()->status |= TS_POLLING; + if (cx->entry_method == ACPI_CSTATE_FFH) { + if (current_set_polling_and_test()) return -EINVAL; - } } /* @@ -821,9 +810,6 @@ static int acpi_idle_enter_simple(struct cpuidle_device *dev, sched_clock_idle_wakeup_event(0); - if (cx->entry_method != ACPI_CSTATE_FFH) - current_thread_info()->status |= TS_POLLING; - lapic_timer_state_broadcast(pr, cx, 0); return index; } @@ -860,18 +846,9 @@ static int acpi_idle_enter_bm(struct cpuidle_device *dev, } } - if (cx->entry_method != ACPI_CSTATE_FFH) { - current_thread_info()->status &= ~TS_POLLING; - /* - * TS_POLLING-cleared state must be visible before we test - * NEED_RESCHED: - */ - smp_mb(); - - if (unlikely(need_resched())) { - current_thread_info()->status |= TS_POLLING; + if (cx->entry_method == ACPI_CSTATE_FFH) { + if (current_set_polling_and_test()) return -EINVAL; - } } acpi_unlazy_tlb(smp_processor_id()); @@ -917,9 +894,6 @@ static int acpi_idle_enter_bm(struct cpuidle_device *dev, sched_clock_idle_wakeup_event(0); - if (cx->entry_method != ACPI_CSTATE_FFH) - current_thread_info()->status |= TS_POLLING; - lapic_timer_state_broadcast(pr, cx, 0); return index; } diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c index af658b2ff279..362f0c2aa1ea 100644 --- a/drivers/acpi/scan.c +++ b/drivers/acpi/scan.c @@ -244,8 +244,6 @@ static void acpi_scan_bus_device_check(acpi_handle handle, u32 ost_source) goto out; } } - acpi_evaluate_hotplug_ost(handle, ost_source, - ACPI_OST_SC_INSERT_IN_PROGRESS, NULL); error = acpi_bus_scan(handle); if (error) { acpi_handle_warn(handle, "Namespace scan failure\n"); diff --git a/drivers/acpi/video.c b/drivers/acpi/video.c index 0e4b96b62c75..055dfdfd7348 100644 --- a/drivers/acpi/video.c +++ b/drivers/acpi/video.c @@ -846,7 +846,7 @@ acpi_video_init_brightness(struct acpi_video_device *device) for (i = 2; i < br->count; i++) if (level_old == br->levels[i]) break; - if (i == br->count) + if (i == br->count || !level) level = max_level; } diff --git a/drivers/block/brd.c b/drivers/block/brd.c index 9bf4371755f2..d91f1a56e861 100644 --- a/drivers/block/brd.c +++ b/drivers/block/brd.c @@ -545,7 +545,7 @@ static struct kobject *brd_probe(dev_t dev, int *part, void *data) mutex_lock(&brd_devices_mutex); brd = brd_init_one(MINOR(dev) >> part_shift); - kobj = brd ? get_disk(brd->brd_disk) : ERR_PTR(-ENOMEM); + kobj = brd ? 
get_disk(brd->brd_disk) : NULL; mutex_unlock(&brd_devices_mutex); *part = 0; diff --git a/drivers/block/loop.c b/drivers/block/loop.c index d92d50fd84b7..00559736cee4 100644 --- a/drivers/block/loop.c +++ b/drivers/block/loop.c @@ -1741,7 +1741,7 @@ static struct kobject *loop_probe(dev_t dev, int *part, void *data) if (err < 0) err = loop_add(&lo, MINOR(dev) >> part_shift); if (err < 0) - kobj = ERR_PTR(err); + kobj = NULL; else kobj = get_disk(lo->lo_disk); mutex_unlock(&loop_index_mutex); diff --git a/drivers/cpufreq/cpufreq_stats.c b/drivers/cpufreq/cpufreq_stats.c index 7dc9c4efbcfb..038c7cca0cf3 100644 --- a/drivers/cpufreq/cpufreq_stats.c +++ b/drivers/cpufreq/cpufreq_stats.c @@ -21,7 +21,9 @@ #include #include #include +#ifdef CONFIG_BL_SWITCHER #include <asm/bL_switcher.h> +#endif static spinlock_t cpufreq_stats_lock; @@ -448,6 +450,7 @@ static void cpufreq_stats_cleanup(void) } } +#ifdef CONFIG_BL_SWITCHER static int cpufreq_stats_switcher_notifier(struct notifier_block *nfb, unsigned long action, void *_arg) { @@ -472,6 +475,7 @@ static int cpufreq_stats_switcher_notifier(struct notifier_block *nfb, static struct notifier_block switcher_notifier = { .notifier_call = cpufreq_stats_switcher_notifier, }; +#endif static int __init cpufreq_stats_init(void) { @@ -479,15 +483,18 @@ spin_lock_init(&cpufreq_stats_lock); ret = cpufreq_stats_setup(); +#ifdef CONFIG_BL_SWITCHER if (!ret) bL_switcher_register_notifier(&switcher_notifier); - +#endif return ret; } static void __exit cpufreq_stats_exit(void) { +#ifdef CONFIG_BL_SWITCHER bL_switcher_unregister_notifier(&switcher_notifier); +#endif cpufreq_stats_cleanup(); } diff --git a/drivers/firmware/dmi_scan.c b/drivers/firmware/dmi_scan.c index b95159b33c39..eb760a218da4 100644 --- a/drivers/firmware/dmi_scan.c +++ b/drivers/firmware/dmi_scan.c @@ -551,9 +551,15 @@ static bool dmi_matches(const struct dmi_system_id *dmi) int s = dmi->matches[i].slot; if (s == DMI_NONE) break; - if (dmi_ident[s] - && strstr(dmi_ident[s], dmi->matches[i].substr)) - continue; + if (dmi_ident[s]) { + if (!dmi->matches[i].exact_match && + strstr(dmi_ident[s], dmi->matches[i].substr)) + continue; + else if (dmi->matches[i].exact_match && + !strcmp(dmi_ident[s], dmi->matches[i].substr)) + continue; + } + /* No match */ return false; } diff --git a/drivers/gpu/drm/i915/intel_lvds.c b/drivers/gpu/drm/i915/intel_lvds.c index 29412cc89c7a..f77d42f74427 100644 --- a/drivers/gpu/drm/i915/intel_lvds.c +++ b/drivers/gpu/drm/i915/intel_lvds.c @@ -869,6 +869,30 @@ static const struct dmi_system_id intel_no_lvds[] = { DMI_MATCH(DMI_PRODUCT_NAME, "ESPRIMO Q900"), }, }, + { + .callback = intel_no_lvds_dmi_callback, + .ident = "Intel D410PT", + .matches = { + DMI_MATCH(DMI_BOARD_VENDOR, "Intel"), + DMI_MATCH(DMI_BOARD_NAME, "D410PT"), + }, + }, + { + .callback = intel_no_lvds_dmi_callback, + .ident = "Intel D425KT", + .matches = { + DMI_MATCH(DMI_BOARD_VENDOR, "Intel"), + DMI_EXACT_MATCH(DMI_BOARD_NAME, "D425KT"), + }, + }, + { + .callback = intel_no_lvds_dmi_callback, + .ident = "Intel D510MO", + .matches = { + DMI_MATCH(DMI_BOARD_VENDOR, "Intel"), + DMI_EXACT_MATCH(DMI_BOARD_NAME, "D510MO"), + }, + }, { } /* terminating entry */ }; diff --git a/drivers/gpu/drm/nouveau/core/engine/disp/hdanva3.c b/drivers/gpu/drm/nouveau/core/engine/disp/hdanva3.c index 373dbcc523b2..a19e7d79b847 100644 --- a/drivers/gpu/drm/nouveau/core/engine/disp/hdanva3.c +++ b/drivers/gpu/drm/nouveau/core/engine/disp/hdanva3.c @@ -36,6 +36,8 @@ nva3_hda_eld(struct nv50_disp_priv
*priv, int or, u8 *data, u32 size) if (data && data[0]) { for (i = 0; i < size; i++) nv_wr32(priv, 0x61c440 + soff, (i << 8) | data[i]); + for (; i < 0x60; i++) + nv_wr32(priv, 0x61c440 + soff, (i << 8)); nv_mask(priv, 0x61c448 + soff, 0x80000003, 0x80000003); } else if (data) { diff --git a/drivers/gpu/drm/nouveau/core/engine/disp/hdanvd0.c b/drivers/gpu/drm/nouveau/core/engine/disp/hdanvd0.c index dc57e24fc1df..717639386ced 100644 --- a/drivers/gpu/drm/nouveau/core/engine/disp/hdanvd0.c +++ b/drivers/gpu/drm/nouveau/core/engine/disp/hdanvd0.c @@ -41,6 +41,8 @@ nvd0_hda_eld(struct nv50_disp_priv *priv, int or, u8 *data, u32 size) if (data && data[0]) { for (i = 0; i < size; i++) nv_wr32(priv, 0x10ec00 + soff, (i << 8) | data[i]); + for (; i < 0x60; i++) + nv_wr32(priv, 0x10ec00 + soff, (i << 8)); nv_mask(priv, 0x10ec10 + soff, 0x80000003, 0x80000003); } else if (data) { diff --git a/drivers/gpu/drm/nouveau/core/engine/disp/sornv50.c b/drivers/gpu/drm/nouveau/core/engine/disp/sornv50.c index ab1e918469a8..526b75242899 100644 --- a/drivers/gpu/drm/nouveau/core/engine/disp/sornv50.c +++ b/drivers/gpu/drm/nouveau/core/engine/disp/sornv50.c @@ -47,14 +47,8 @@ int nv50_sor_mthd(struct nouveau_object *object, u32 mthd, void *args, u32 size) { struct nv50_disp_priv *priv = (void *)object->engine; - struct nouveau_bios *bios = nouveau_bios(priv); - const u16 type = (mthd & NV50_DISP_SOR_MTHD_TYPE) >> 12; const u8 head = (mthd & NV50_DISP_SOR_MTHD_HEAD) >> 3; - const u8 link = (mthd & NV50_DISP_SOR_MTHD_LINK) >> 2; const u8 or = (mthd & NV50_DISP_SOR_MTHD_OR); - const u16 mask = (0x0100 << head) | (0x0040 << link) | (0x0001 << or); - struct dcb_output outp; - u8 ver, hdr; u32 data; int ret = -EINVAL; @@ -62,8 +56,6 @@ nv50_sor_mthd(struct nouveau_object *object, u32 mthd, void *args, u32 size) return -EINVAL; data = *(u32 *)args; - if (type && !dcb_outp_match(bios, type, mask, &ver, &hdr, &outp)) - return -ENODEV; switch (mthd & ~0x3f) { case NV50_DISP_SOR_PWR: diff --git a/drivers/hwmon/lm90.c b/drivers/hwmon/lm90.c index 8eeb141c85ac..74813130d211 100644 --- a/drivers/hwmon/lm90.c +++ b/drivers/hwmon/lm90.c @@ -278,7 +278,7 @@ static const struct lm90_params lm90_params[] = { [max6696] = { .flags = LM90_HAVE_EMERGENCY | LM90_HAVE_EMERGENCY_ALARM | LM90_HAVE_TEMP3, - .alert_alarms = 0x187c, + .alert_alarms = 0x1c7c, .max_convrate = 6, .reg_local_ext = MAX6657_REG_R_LOCAL_TEMPL, }, @@ -1500,19 +1500,22 @@ static void lm90_alert(struct i2c_client *client, unsigned int flag) if ((alarms & 0x7f) == 0 && (alarms2 & 0xfe) == 0) { dev_info(&client->dev, "Everything OK\n"); } else { - if (alarms & 0x61) + if ((alarms & 0x61) || (alarms2 & 0x80)) dev_warn(&client->dev, "temp%d out of range, please check!\n", 1); - if (alarms & 0x1a) + if ((alarms & 0x1a) || (alarms2 & 0x20)) dev_warn(&client->dev, "temp%d out of range, please check!\n", 2); if (alarms & 0x04) dev_warn(&client->dev, "temp%d diode open, please check!\n", 2); - if (alarms2 & 0x18) + if (alarms2 & 0x5a) dev_warn(&client->dev, "temp%d out of range, please check!\n", 3); + if (alarms2 & 0x04) + dev_warn(&client->dev, + "temp%d diode open, please check!\n", 3); /* * Disable ALERT# output, because these chips don't implement diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c index fa6964d8681a..f116d664b473 100644 --- a/drivers/idle/intel_idle.c +++ b/drivers/idle/intel_idle.c @@ -359,7 +359,7 @@ static int intel_idle(struct cpuidle_device *dev, if (!(lapic_timer_reliable_states & (1 << (cstate)))) 
clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_ENTER, &cpu); - if (!need_resched()) { + if (!current_set_polling_and_test()) { __monitor((void *)&current_thread_info()->flags, 0, 0); smp_mb(); diff --git a/drivers/media/platform/sh_vou.c b/drivers/media/platform/sh_vou.c index 7d0235069c87..5d538e7cd1bb 100644 --- a/drivers/media/platform/sh_vou.c +++ b/drivers/media/platform/sh_vou.c @@ -776,7 +776,7 @@ static int sh_vou_try_fmt_vid_out(struct file *file, void *priv, v4l_bound_align_image(&pix->width, 0, VOU_MAX_IMAGE_WIDTH, 1, &pix->height, 0, VOU_MAX_IMAGE_HEIGHT, 1, 0); - for (i = 0; ARRAY_SIZE(vou_fmt); i++) + for (i = 0; i < ARRAY_SIZE(vou_fmt); i++) if (vou_fmt[i].pfmt == pix->pixelformat) return 0; diff --git a/drivers/misc/atmel_pwm.c b/drivers/misc/atmel_pwm.c index 494d0500bda6..a6dc56e1bc58 100644 --- a/drivers/misc/atmel_pwm.c +++ b/drivers/misc/atmel_pwm.c @@ -90,8 +90,10 @@ int pwm_channel_alloc(int index, struct pwm_channel *ch) unsigned long flags; int status = 0; - /* insist on PWM init, with this signal pinned out */ - if (!pwm || !(pwm->mask & 1 << index)) + if (!pwm) + return -EPROBE_DEFER; + + if (!(pwm->mask & 1 << index)) return -ENODEV; if (index < 0 || index >= PWM_NCHAN || !ch) diff --git a/drivers/misc/mei/nfc.c b/drivers/misc/mei/nfc.c index d0c6907dfd92..994ca4aff1a3 100644 --- a/drivers/misc/mei/nfc.c +++ b/drivers/misc/mei/nfc.c @@ -485,8 +485,11 @@ int mei_nfc_host_init(struct mei_device *dev) if (ndev->cl_info) return 0; - cl_info = mei_cl_allocate(dev); - cl = mei_cl_allocate(dev); + ndev->cl_info = mei_cl_allocate(dev); + ndev->cl = mei_cl_allocate(dev); + + cl = ndev->cl; + cl_info = ndev->cl_info; if (!cl || !cl_info) { ret = -ENOMEM; @@ -527,10 +530,9 @@ int mei_nfc_host_init(struct mei_device *dev) cl->device_uuid = mei_nfc_guid; + list_add_tail(&cl->device_link, &dev->device_list); - ndev->cl_info = cl_info; - ndev->cl = cl; ndev->req_id = 1; INIT_WORK(&ndev->init_work, mei_nfc_init); diff --git a/drivers/net/can/c_can/c_can.c b/drivers/net/can/c_can/c_can.c index a668cd491cb3..e3fc07cf2f62 100644 --- a/drivers/net/can/c_can/c_can.c +++ b/drivers/net/can/c_can/c_can.c @@ -814,9 +814,6 @@ static int c_can_do_rx_poll(struct net_device *dev, int quota) msg_ctrl_save = priv->read_reg(priv, C_CAN_IFACE(MSGCTRL_REG, 0)); - if (msg_ctrl_save & IF_MCONT_EOB) - return num_rx_pkts; - if (msg_ctrl_save & IF_MCONT_MSGLST) { c_can_handle_lost_msg_obj(dev, 0, msg_obj); num_rx_pkts++; @@ -824,6 +821,9 @@ static int c_can_do_rx_poll(struct net_device *dev, int quota) continue; } + if (msg_ctrl_save & IF_MCONT_EOB) + return num_rx_pkts; + if (!(msg_ctrl_save & IF_MCONT_NEWDAT)) continue; diff --git a/drivers/net/can/usb/kvaser_usb.c b/drivers/net/can/usb/kvaser_usb.c index 3b9546588240..4b2d5ed62b11 100644 --- a/drivers/net/can/usb/kvaser_usb.c +++ b/drivers/net/can/usb/kvaser_usb.c @@ -1544,9 +1544,9 @@ static int kvaser_usb_init_one(struct usb_interface *intf, return 0; } -static void kvaser_usb_get_endpoints(const struct usb_interface *intf, - struct usb_endpoint_descriptor **in, - struct usb_endpoint_descriptor **out) +static int kvaser_usb_get_endpoints(const struct usb_interface *intf, + struct usb_endpoint_descriptor **in, + struct usb_endpoint_descriptor **out) { const struct usb_host_interface *iface_desc; struct usb_endpoint_descriptor *endpoint; @@ -1557,12 +1557,18 @@ static void kvaser_usb_get_endpoints(const struct usb_interface *intf, for (i = 0; i < iface_desc->desc.bNumEndpoints; ++i) { endpoint = &iface_desc->endpoint[i].desc; - if
(usb_endpoint_is_bulk_in(endpoint)) + if (!*in && usb_endpoint_is_bulk_in(endpoint)) *in = endpoint; - if (usb_endpoint_is_bulk_out(endpoint)) + if (!*out && usb_endpoint_is_bulk_out(endpoint)) *out = endpoint; + + /* use first bulk endpoint for in and out */ + if (*in && *out) + return 0; } + + return -ENODEV; } static int kvaser_usb_probe(struct usb_interface *intf, @@ -1576,8 +1582,8 @@ static int kvaser_usb_probe(struct usb_interface *intf, if (!dev) return -ENOMEM; - kvaser_usb_get_endpoints(intf, &dev->bulk_in, &dev->bulk_out); - if (!dev->bulk_in || !dev->bulk_out) { + err = kvaser_usb_get_endpoints(intf, &dev->bulk_in, &dev->bulk_out); + if (err) { dev_err(&intf->dev, "Cannot get usb endpoint(s)"); return err; } diff --git a/drivers/net/ethernet/chelsio/cxgb3/sge.c b/drivers/net/ethernet/chelsio/cxgb3/sge.c index f12e6b85a653..f057a189d975 100644 --- a/drivers/net/ethernet/chelsio/cxgb3/sge.c +++ b/drivers/net/ethernet/chelsio/cxgb3/sge.c @@ -1600,7 +1600,8 @@ static void write_ofld_wr(struct adapter *adap, struct sk_buff *skb, flits = skb_transport_offset(skb) / 8; sgp = ndesc == 1 ? (struct sg_ent *)&d->flit[flits] : sgl; sgl_flits = make_sgl(skb, sgp, skb_transport_header(skb), - skb->tail - skb->transport_header, + skb_tail_pointer(skb) - + skb_transport_header(skb), adap->pdev); if (need_skb_unmap()) { setup_deferred_unmapping(skb, adap->pdev, sgp, sgl_flits); diff --git a/drivers/net/ethernet/mellanox/mlx4/cmd.c b/drivers/net/ethernet/mellanox/mlx4/cmd.c index 0e572a527154..28d706bd12eb 100644 --- a/drivers/net/ethernet/mellanox/mlx4/cmd.c +++ b/drivers/net/ethernet/mellanox/mlx4/cmd.c @@ -1544,7 +1544,7 @@ static void mlx4_master_deactivate_admin_state(struct mlx4_priv *priv, int slave vp_oper->vlan_idx = NO_INDX; } if (NO_INDX != vp_oper->mac_idx) { - __mlx4_unregister_mac(&priv->dev, port, vp_oper->mac_idx); + __mlx4_unregister_mac(&priv->dev, port, vp_oper->state.mac); vp_oper->mac_idx = NO_INDX; } } diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 1d01534c2020..64cf70247048 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -1096,11 +1096,6 @@ static int virtnet_cpu_callback(struct notifier_block *nfb, { struct virtnet_info *vi = container_of(nfb, struct virtnet_info, nb); - mutex_lock(&vi->config_lock); - - if (!vi->config_enable) - goto done; - switch(action & ~CPU_TASKS_FROZEN) { case CPU_ONLINE: case CPU_DOWN_FAILED: @@ -1114,8 +1109,6 @@ static int virtnet_cpu_callback(struct notifier_block *nfb, break; } -done: - mutex_unlock(&vi->config_lock); return NOTIFY_OK; } @@ -1672,6 +1665,8 @@ static int virtnet_freeze(struct virtio_device *vdev) struct virtnet_info *vi = vdev->priv; int i; + unregister_hotcpu_notifier(&vi->nb); + /* Prevent config work handler from accessing the device */ mutex_lock(&vi->config_lock); vi->config_enable = false; @@ -1720,6 +1715,10 @@ static int virtnet_restore(struct virtio_device *vdev) virtnet_set_queues(vi, vi->curr_queue_pairs); rtnl_unlock(); + err = register_hotcpu_notifier(&vi->nb); + if (err) + return err; + return 0; } #endif diff --git a/drivers/net/wireless/iwlwifi/iwl-7000.c b/drivers/net/wireless/iwlwifi/iwl-7000.c index dc94d44d95cd..822443c5a3b5 100644 --- a/drivers/net/wireless/iwlwifi/iwl-7000.c +++ b/drivers/net/wireless/iwlwifi/iwl-7000.c @@ -125,7 +125,7 @@ static const struct iwl_ht_params iwl7000_ht_params = { const struct iwl_cfg iwl7260_2ac_cfg = { - .name = "Intel(R) Dual Band Wireless AC7260", + .name = "Intel(R) Dual Band Wireless AC 7260", .fw_name_pre = 
IWL7260_FW_PRE, IWL_DEVICE_7000, .ht_params = &iwl7000_ht_params, @@ -133,8 +133,44 @@ const struct iwl_cfg iwl7260_2ac_cfg = { .nvm_calib_ver = IWL7260_TX_POWER_VERSION, }; -const struct iwl_cfg iwl3160_ac_cfg = { - .name = "Intel(R) Dual Band Wireless AC3160", +const struct iwl_cfg iwl7260_2n_cfg = { + .name = "Intel(R) Dual Band Wireless N 7260", + .fw_name_pre = IWL7260_FW_PRE, + IWL_DEVICE_7000, + .ht_params = &iwl7000_ht_params, + .nvm_ver = IWL7260_NVM_VERSION, + .nvm_calib_ver = IWL7260_TX_POWER_VERSION, +}; + +const struct iwl_cfg iwl7260_n_cfg = { + .name = "Intel(R) Wireless N 7260", + .fw_name_pre = IWL7260_FW_PRE, + IWL_DEVICE_7000, + .ht_params = &iwl7000_ht_params, + .nvm_ver = IWL7260_NVM_VERSION, + .nvm_calib_ver = IWL7260_TX_POWER_VERSION, +}; + +const struct iwl_cfg iwl3160_2ac_cfg = { + .name = "Intel(R) Dual Band Wireless AC 3160", + .fw_name_pre = IWL3160_FW_PRE, + IWL_DEVICE_7000, + .ht_params = &iwl7000_ht_params, + .nvm_ver = IWL3160_NVM_VERSION, + .nvm_calib_ver = IWL3160_TX_POWER_VERSION, +}; + +const struct iwl_cfg iwl3160_2n_cfg = { + .name = "Intel(R) Dual Band Wireless N 3160", + .fw_name_pre = IWL3160_FW_PRE, + IWL_DEVICE_7000, + .ht_params = &iwl7000_ht_params, + .nvm_ver = IWL3160_NVM_VERSION, + .nvm_calib_ver = IWL3160_TX_POWER_VERSION, +}; + +const struct iwl_cfg iwl3160_n_cfg = { + .name = "Intel(R) Wireless N 3160", .fw_name_pre = IWL3160_FW_PRE, IWL_DEVICE_7000, .ht_params = &iwl7000_ht_params, diff --git a/drivers/net/wireless/iwlwifi/iwl-config.h b/drivers/net/wireless/iwlwifi/iwl-config.h index c67e29655b2d..44e3370ce343 100644 --- a/drivers/net/wireless/iwlwifi/iwl-config.h +++ b/drivers/net/wireless/iwlwifi/iwl-config.h @@ -321,6 +321,10 @@ extern const struct iwl_cfg iwl105_bgn_cfg; extern const struct iwl_cfg iwl105_bgn_d_cfg; extern const struct iwl_cfg iwl135_bgn_cfg; extern const struct iwl_cfg iwl7260_2ac_cfg; -extern const struct iwl_cfg iwl3160_ac_cfg; +extern const struct iwl_cfg iwl7260_2n_cfg; +extern const struct iwl_cfg iwl7260_n_cfg; +extern const struct iwl_cfg iwl3160_2ac_cfg; +extern const struct iwl_cfg iwl3160_2n_cfg; +extern const struct iwl_cfg iwl3160_n_cfg; #endif /* __IWL_CONFIG_H__ */ diff --git a/drivers/net/wireless/iwlwifi/pcie/drv.c b/drivers/net/wireless/iwlwifi/pcie/drv.c index b7858a595973..b53e5c3f403b 100644 --- a/drivers/net/wireless/iwlwifi/pcie/drv.c +++ b/drivers/net/wireless/iwlwifi/pcie/drv.c @@ -267,10 +267,83 @@ static DEFINE_PCI_DEVICE_TABLE(iwl_hw_card_ids) = { /* 7000 Series */ {IWL_PCI_DEVICE(0x08B1, 0x4070, iwl7260_2ac_cfg)}, - {IWL_PCI_DEVICE(0x08B1, 0x4062, iwl7260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0x4072, iwl7260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0x4170, iwl7260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0x4060, iwl7260_2n_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0x406A, iwl7260_2n_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0x4160, iwl7260_2n_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0x4062, iwl7260_n_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0x4162, iwl7260_n_cfg)}, + {IWL_PCI_DEVICE(0x08B2, 0x4270, iwl7260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B2, 0x4272, iwl7260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B2, 0x4260, iwl7260_2n_cfg)}, + {IWL_PCI_DEVICE(0x08B2, 0x426A, iwl7260_2n_cfg)}, + {IWL_PCI_DEVICE(0x08B2, 0x4262, iwl7260_n_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0x4470, iwl7260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0x4472, iwl7260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0x4460, iwl7260_2n_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0x446A, iwl7260_2n_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0x4462, iwl7260_n_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0x4870, iwl7260_2ac_cfg)}, + 
{IWL_PCI_DEVICE(0x08B1, 0x486E, iwl7260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0x4570, iwl7260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0x4560, iwl7260_2n_cfg)}, + {IWL_PCI_DEVICE(0x08B2, 0x4370, iwl7260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B2, 0x4360, iwl7260_2n_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0x5070, iwl7260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0x4020, iwl7260_2n_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0x402A, iwl7260_2n_cfg)}, + {IWL_PCI_DEVICE(0x08B2, 0x4220, iwl7260_2n_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0x4420, iwl7260_2n_cfg)}, {IWL_PCI_DEVICE(0x08B1, 0xC070, iwl7260_2ac_cfg)}, - {IWL_PCI_DEVICE(0x08B3, 0x0070, iwl3160_ac_cfg)}, - {IWL_PCI_DEVICE(0x08B3, 0x8070, iwl3160_ac_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0xC072, iwl7260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0xC170, iwl7260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0xC060, iwl7260_2n_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0xC06A, iwl7260_2n_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0xC160, iwl7260_2n_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0xC062, iwl7260_n_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0xC162, iwl7260_n_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0xC770, iwl7260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0xC760, iwl7260_2n_cfg)}, + {IWL_PCI_DEVICE(0x08B2, 0xC270, iwl7260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B2, 0xC272, iwl7260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B2, 0xC260, iwl7260_2n_cfg)}, + {IWL_PCI_DEVICE(0x08B2, 0xC26A, iwl7260_n_cfg)}, + {IWL_PCI_DEVICE(0x08B2, 0xC262, iwl7260_n_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0xC470, iwl7260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0xC472, iwl7260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0xC460, iwl7260_2n_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0xC462, iwl7260_n_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0xC570, iwl7260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0xC560, iwl7260_2n_cfg)}, + {IWL_PCI_DEVICE(0x08B2, 0xC370, iwl7260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0xC360, iwl7260_2n_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0xC020, iwl7260_2n_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0xC02A, iwl7260_2n_cfg)}, + {IWL_PCI_DEVICE(0x08B2, 0xC220, iwl7260_2n_cfg)}, + {IWL_PCI_DEVICE(0x08B1, 0xC420, iwl7260_2n_cfg)}, + +/* 3160 Series */ + {IWL_PCI_DEVICE(0x08B3, 0x0070, iwl3160_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B3, 0x0072, iwl3160_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B3, 0x0170, iwl3160_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B3, 0x0172, iwl3160_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B3, 0x0060, iwl3160_2n_cfg)}, + {IWL_PCI_DEVICE(0x08B3, 0x0062, iwl3160_n_cfg)}, + {IWL_PCI_DEVICE(0x08B4, 0x0270, iwl3160_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B4, 0x0272, iwl3160_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B3, 0x0470, iwl3160_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B3, 0x0472, iwl3160_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B4, 0x0370, iwl3160_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B3, 0x8070, iwl3160_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B3, 0x8072, iwl3160_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B3, 0x8170, iwl3160_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B3, 0x8172, iwl3160_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B3, 0x8060, iwl3160_2n_cfg)}, + {IWL_PCI_DEVICE(0x08B3, 0x8062, iwl3160_n_cfg)}, + {IWL_PCI_DEVICE(0x08B4, 0x8270, iwl3160_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B3, 0x8470, iwl3160_2ac_cfg)}, + {IWL_PCI_DEVICE(0x08B3, 0x8570, iwl3160_2ac_cfg)}, {0} }; diff --git a/drivers/net/wireless/libertas/debugfs.c b/drivers/net/wireless/libertas/debugfs.c index 668dd27616a0..cc6a0a586f0b 100644 --- a/drivers/net/wireless/libertas/debugfs.c +++ b/drivers/net/wireless/libertas/debugfs.c @@ -913,7 +913,10 @@ static ssize_t lbs_debugfs_write(struct file *f, const char __user *buf, char *p2; struct debug_data *d = f->private_data; - pdata = kmalloc(cnt, GFP_KERNEL); + if (cnt == 0) + return 0; + + 
pdata = kmalloc(cnt + 1, GFP_KERNEL); if (pdata == NULL) return 0; @@ -922,6 +925,7 @@ static ssize_t lbs_debugfs_write(struct file *f, const char __user *buf, kfree(pdata); return 0; } + pdata[cnt] = '\0'; p0 = pdata; for (i = 0; i < num_of_items; i++) { diff --git a/drivers/net/wireless/rt2x00/rt2800lib.c b/drivers/net/wireless/rt2x00/rt2800lib.c index f281971919f5..12652d204f7e 100644 --- a/drivers/net/wireless/rt2x00/rt2800lib.c +++ b/drivers/net/wireless/rt2x00/rt2800lib.c @@ -3400,10 +3400,13 @@ void rt2800_link_tuner(struct rt2x00_dev *rt2x00dev, struct link_qual *qual, vgc = rt2800_get_default_vgc(rt2x00dev); - if (rt2x00_rt(rt2x00dev, RT5592) && qual->rssi > -65) - vgc += 0x20; - else if (qual->rssi > -80) - vgc += 0x10; + if (rt2x00_rt(rt2x00dev, RT5592)) { + if (qual->rssi > -65) + vgc += 0x20; + } else { + if (qual->rssi > -80) + vgc += 0x10; + } rt2800_set_vgc(rt2x00dev, qual, vgc); } diff --git a/drivers/net/wireless/rt2x00/rt2800usb.c b/drivers/net/wireless/rt2x00/rt2800usb.c index ac854d75bd6c..9ef0711a5cc1 100644 --- a/drivers/net/wireless/rt2x00/rt2800usb.c +++ b/drivers/net/wireless/rt2x00/rt2800usb.c @@ -148,6 +148,8 @@ static bool rt2800usb_txstatus_timeout(struct rt2x00_dev *rt2x00dev) return false; } +#define TXSTATUS_READ_INTERVAL 1000000 + static bool rt2800usb_tx_sta_fifo_read_completed(struct rt2x00_dev *rt2x00dev, int urb_status, u32 tx_status) { @@ -176,8 +178,9 @@ static bool rt2800usb_tx_sta_fifo_read_completed(struct rt2x00_dev *rt2x00dev, queue_work(rt2x00dev->workqueue, &rt2x00dev->txdone_work); if (rt2800usb_txstatus_pending(rt2x00dev)) { - /* Read register after 250 us */ - hrtimer_start(&rt2x00dev->txstatus_timer, ktime_set(0, 250000), + /* Read register after 1 ms */ + hrtimer_start(&rt2x00dev->txstatus_timer, + ktime_set(0, TXSTATUS_READ_INTERVAL), HRTIMER_MODE_REL); return false; } @@ -202,8 +205,9 @@ static void rt2800usb_async_read_tx_status(struct rt2x00_dev *rt2x00dev) if (test_and_set_bit(TX_STATUS_READING, &rt2x00dev->flags)) return; - /* Read TX_STA_FIFO register after 500 us */ - hrtimer_start(&rt2x00dev->txstatus_timer, ktime_set(0, 500000), + /* Read TX_STA_FIFO register after 2 ms */ + hrtimer_start(&rt2x00dev->txstatus_timer, + ktime_set(0, 2*TXSTATUS_READ_INTERVAL), HRTIMER_MODE_REL); } diff --git a/drivers/net/wireless/rt2x00/rt2x00dev.c b/drivers/net/wireless/rt2x00/rt2x00dev.c index 90dc14336980..a2889d1cfe37 100644 --- a/drivers/net/wireless/rt2x00/rt2x00dev.c +++ b/drivers/net/wireless/rt2x00/rt2x00dev.c @@ -181,6 +181,7 @@ static void rt2x00lib_autowakeup(struct work_struct *work) static void rt2x00lib_bc_buffer_iter(void *data, u8 *mac, struct ieee80211_vif *vif) { + struct ieee80211_tx_control control = {}; struct rt2x00_dev *rt2x00dev = data; struct sk_buff *skb; @@ -195,7 +196,7 @@ static void rt2x00lib_bc_buffer_iter(void *data, u8 *mac, */ skb = ieee80211_get_buffered_bc(rt2x00dev->hw, vif); while (skb) { - rt2x00mac_tx(rt2x00dev->hw, NULL, skb); + rt2x00mac_tx(rt2x00dev->hw, &control, skb); skb = ieee80211_get_buffered_bc(rt2x00dev->hw, vif); } } diff --git a/drivers/net/wireless/rt2x00/rt2x00lib.h b/drivers/net/wireless/rt2x00/rt2x00lib.h index a0935987fa3a..7f40ab8e1bd8 100644 --- a/drivers/net/wireless/rt2x00/rt2x00lib.h +++ b/drivers/net/wireless/rt2x00/rt2x00lib.h @@ -146,7 +146,7 @@ void rt2x00queue_remove_l2pad(struct sk_buff *skb, unsigned int header_length); * @local: frame is not from mac80211 */ int rt2x00queue_write_tx_frame(struct data_queue *queue, struct sk_buff *skb, - bool local); + struct ieee80211_sta 
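The libertas fix above is the canonical way to accept a text command from userspace: reject a zero-length write (kmalloc(0) returns ZERO_SIZE_PTR, and the terminator store would then be out of bounds), allocate one extra byte, and NUL-terminate before any string parsing. A minimal sketch of the same pattern; the handler name is hypothetical and the error returns use conventional errnos where the driver returns 0:

#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/uaccess.h>

static ssize_t ex_debugfs_write(struct file *f, const char __user *buf,
				size_t cnt, loff_t *ppos)
{
	char *pdata;

	if (cnt == 0)
		return 0;

	pdata = kmalloc(cnt + 1, GFP_KERNEL);	/* +1 for the terminator */
	if (!pdata)
		return -ENOMEM;

	if (copy_from_user(pdata, buf, cnt)) {
		kfree(pdata);
		return -EFAULT;
	}
	pdata[cnt] = '\0';	/* sscanf()/simple_strtoul() now stop here */

	/* ... parse pdata as the driver does ... */

	kfree(pdata);
	return cnt;
}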
*sta, bool local); /** * rt2x00queue_update_beacon - Send new beacon from mac80211 diff --git a/drivers/net/wireless/rt2x00/rt2x00mac.c b/drivers/net/wireless/rt2x00/rt2x00mac.c index f883802f3505..f8cff1f0b6b7 100644 --- a/drivers/net/wireless/rt2x00/rt2x00mac.c +++ b/drivers/net/wireless/rt2x00/rt2x00mac.c @@ -90,7 +90,7 @@ static int rt2x00mac_tx_rts_cts(struct rt2x00_dev *rt2x00dev, frag_skb->data, data_length, tx_info, (struct ieee80211_rts *)(skb->data)); - retval = rt2x00queue_write_tx_frame(queue, skb, true); + retval = rt2x00queue_write_tx_frame(queue, skb, NULL, true); if (retval) { dev_kfree_skb_any(skb); rt2x00_warn(rt2x00dev, "Failed to send RTS/CTS frame\n"); @@ -151,7 +151,7 @@ void rt2x00mac_tx(struct ieee80211_hw *hw, goto exit_fail; } - if (unlikely(rt2x00queue_write_tx_frame(queue, skb, false))) + if (unlikely(rt2x00queue_write_tx_frame(queue, skb, control->sta, false))) goto exit_fail; /* @@ -754,6 +754,9 @@ void rt2x00mac_flush(struct ieee80211_hw *hw, u32 queues, bool drop) struct rt2x00_dev *rt2x00dev = hw->priv; struct data_queue *queue; + if (!test_bit(DEVICE_STATE_PRESENT, &rt2x00dev->flags)) + return; + tx_queue_for_each(rt2x00dev, queue) rt2x00queue_flush_queue(queue, drop); } diff --git a/drivers/net/wireless/rt2x00/rt2x00queue.c b/drivers/net/wireless/rt2x00/rt2x00queue.c index d955741e48ff..1f17f5b64625 100644 --- a/drivers/net/wireless/rt2x00/rt2x00queue.c +++ b/drivers/net/wireless/rt2x00/rt2x00queue.c @@ -635,7 +635,7 @@ static void rt2x00queue_bar_check(struct queue_entry *entry) } int rt2x00queue_write_tx_frame(struct data_queue *queue, struct sk_buff *skb, - bool local) + struct ieee80211_sta *sta, bool local) { struct ieee80211_tx_info *tx_info; struct queue_entry *entry; @@ -649,7 +649,7 @@ int rt2x00queue_write_tx_frame(struct data_queue *queue, struct sk_buff *skb, * after that we are free to use the skb->cb array * for our information. */ - rt2x00queue_create_tx_descriptor(queue->rt2x00dev, skb, &txdesc, NULL); + rt2x00queue_create_tx_descriptor(queue->rt2x00dev, skb, &txdesc, sta); /* * All information is retrieved from the skb->cb array, diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h index 1a285083d24a..f2faa779e3fe 100644 --- a/drivers/net/xen-netback/common.h +++ b/drivers/net/xen-netback/common.h @@ -88,6 +88,7 @@ struct xenvif { unsigned long credit_usec; unsigned long remaining_credit; struct timer_list credit_timeout; + u64 credit_window_start; /* Statistics */ unsigned long rx_gso_checksum_fixup; diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c index 3a294c2528d5..c4a2eb2cd8a0 100644 --- a/drivers/net/xen-netback/interface.c +++ b/drivers/net/xen-netback/interface.c @@ -275,8 +275,7 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid, vif->credit_bytes = vif->remaining_credit = ~0UL; vif->credit_usec = 0UL; init_timer(&vif->credit_timeout); - /* Initialize 'expires' now: it's used to track the credit window. 
*/ - vif->credit_timeout.expires = jiffies; + vif->credit_window_start = get_jiffies_64(); dev->netdev_ops = &xenvif_netdev_ops; dev->hw_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_TSO; diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c index 0071f211a08a..36efb418c26f 100644 --- a/drivers/net/xen-netback/netback.c +++ b/drivers/net/xen-netback/netback.c @@ -1423,9 +1423,8 @@ out: static bool tx_credit_exceeded(struct xenvif *vif, unsigned size) { - unsigned long now = jiffies; - unsigned long next_credit = - vif->credit_timeout.expires + + u64 now = get_jiffies_64(); + u64 next_credit = vif->credit_window_start + msecs_to_jiffies(vif->credit_usec / 1000); /* Timer could already be pending in rare cases. */ @@ -1433,8 +1432,8 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size) return true; /* Passed the point where we can replenish credit? */ - if (time_after_eq(now, next_credit)) { - vif->credit_timeout.expires = now; + if (time_after_eq64(now, next_credit)) { + vif->credit_window_start = now; tx_add_credit(vif); } @@ -1446,6 +1445,7 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size) tx_credit_callback; mod_timer(&vif->credit_timeout, next_credit); + vif->credit_window_start = next_credit; return true; } diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c index abe24ff000f0..8a9e8750703f 100644 --- a/drivers/net/xen-netback/xenbus.c +++ b/drivers/net/xen-netback/xenbus.c @@ -24,6 +24,12 @@ struct backend_info { struct xenbus_device *dev; struct xenvif *vif; + + /* This is the state that will be reflected in xenstore when any + * active hotplug script completes. + */ + enum xenbus_state state; + enum xenbus_state frontend_state; struct xenbus_watch hotplug_status_watch; u8 have_hotplug_status_watch:1; @@ -33,11 +39,15 @@ static int connect_rings(struct backend_info *); static void connect(struct backend_info *); static void backend_create_xenvif(struct backend_info *be); static void unregister_hotplug_status_watch(struct backend_info *be); +static void set_backend_state(struct backend_info *be, + enum xenbus_state state); static int netback_remove(struct xenbus_device *dev) { struct backend_info *be = dev_get_drvdata(&dev->dev); + set_backend_state(be, XenbusStateClosed); + unregister_hotplug_status_watch(be); if (be->vif) { kobject_uevent(&dev->dev.kobj, KOBJ_OFFLINE); @@ -126,6 +136,8 @@ static int netback_probe(struct xenbus_device *dev, if (err) goto fail; + be->state = XenbusStateInitWait; + /* This kicks hotplug scripts, so do it immediately. 
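Taken together, the xen-netback hunks above replace a 32-bit timestamp borrowed from credit_timeout.expires with a dedicated u64 credit_window_start. On 32-bit systems jiffies wraps (roughly every 49.7 days at HZ=1000), which can make time_after_eq() misjudge the window; get_jiffies_64() plus time_after_eq64() is the wrap-safe form, and a dedicated field no longer overloads a timer-internal member. The replenish test, condensed into a sketch:

#include <linux/jiffies.h>

struct ex_vif {				/* illustrative subset of struct xenvif */
	u64 credit_window_start;	/* 64-bit: immune to jiffies wrap */
	unsigned long credit_usec;	/* length of one credit window */
};

/* Returns true when a new credit window has opened. */
static bool ex_credit_window_elapsed(struct ex_vif *vif)
{
	u64 now = get_jiffies_64();
	u64 next_credit = vif->credit_window_start +
			  msecs_to_jiffies(vif->credit_usec / 1000);

	if (time_after_eq64(now, next_credit)) {
		vif->credit_window_start = now;	/* open the next window */
		return true;	/* caller may now call tx_add_credit() */
	}
	return false;
}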
*/ backend_create_xenvif(be); @@ -198,24 +210,113 @@ static void backend_create_xenvif(struct backend_info *be) kobject_uevent(&dev->dev.kobj, KOBJ_ONLINE); } - -static void disconnect_backend(struct xenbus_device *dev) +static void backend_disconnect(struct backend_info *be) { - struct backend_info *be = dev_get_drvdata(&dev->dev); - if (be->vif) xenvif_disconnect(be->vif); } -static void destroy_backend(struct xenbus_device *dev) +static void backend_connect(struct backend_info *be) { - struct backend_info *be = dev_get_drvdata(&dev->dev); + if (be->vif) + connect(be); +} - if (be->vif) { - kobject_uevent(&dev->dev.kobj, KOBJ_OFFLINE); - xenbus_rm(XBT_NIL, dev->nodename, "hotplug-status"); - xenvif_free(be->vif); - be->vif = NULL; +static inline void backend_switch_state(struct backend_info *be, + enum xenbus_state state) +{ + struct xenbus_device *dev = be->dev; + + pr_debug("%s -> %s\n", dev->nodename, xenbus_strstate(state)); + be->state = state; + + /* If we are waiting for a hotplug script then defer the + * actual xenbus state change. + */ + if (!be->have_hotplug_status_watch) + xenbus_switch_state(dev, state); +} + +/* Handle backend state transitions: + * + * The backend state starts in InitWait and the following transitions are + * allowed. + * + * InitWait -> Connected + * + * ^ \ | + * | \ | + * | \ | + * | \ | + * | \ | + * | \ | + * | V V + * + * Closed <-> Closing + * + * The state argument specifies the eventual state of the backend and the + * function transitions to that state via the shortest path. + */ +static void set_backend_state(struct backend_info *be, + enum xenbus_state state) +{ + while (be->state != state) { + switch (be->state) { + case XenbusStateClosed: + switch (state) { + case XenbusStateInitWait: + case XenbusStateConnected: + pr_info("%s: prepare for reconnect\n", + be->dev->nodename); + backend_switch_state(be, XenbusStateInitWait); + break; + case XenbusStateClosing: + backend_switch_state(be, XenbusStateClosing); + break; + default: + BUG(); + } + break; + case XenbusStateInitWait: + switch (state) { + case XenbusStateConnected: + backend_connect(be); + backend_switch_state(be, XenbusStateConnected); + break; + case XenbusStateClosing: + case XenbusStateClosed: + backend_switch_state(be, XenbusStateClosing); + break; + default: + BUG(); + } + break; + case XenbusStateConnected: + switch (state) { + case XenbusStateInitWait: + case XenbusStateClosing: + case XenbusStateClosed: + backend_disconnect(be); + backend_switch_state(be, XenbusStateClosing); + break; + default: + BUG(); + } + break; + case XenbusStateClosing: + switch (state) { + case XenbusStateInitWait: + case XenbusStateConnected: + case XenbusStateClosed: + backend_switch_state(be, XenbusStateClosed); + break; + default: + BUG(); + } + break; + default: + BUG(); + } } } @@ -227,41 +328,33 @@ static void frontend_changed(struct xenbus_device *dev, { struct backend_info *be = dev_get_drvdata(&dev->dev); - pr_debug("frontend state %s", xenbus_strstate(frontend_state)); + pr_debug("%s -> %s\n", dev->otherend, xenbus_strstate(frontend_state)); be->frontend_state = frontend_state; switch (frontend_state) { case XenbusStateInitialising: - if (dev->state == XenbusStateClosed) { - printk(KERN_INFO "%s: %s: prepare for reconnect\n", - __func__, dev->nodename); - xenbus_switch_state(dev, XenbusStateInitWait); - } + set_backend_state(be, XenbusStateInitWait); break; case XenbusStateInitialised: break; case XenbusStateConnected: - if (dev->state == XenbusStateConnected) - break; - if (be->vif) - 
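The set_backend_state() loop above deserves a note: rather than encoding every (current, target) pair, each iteration performs one legal hop toward the target, so the side effects of every intermediate state (disconnect, hotplug-script deferral, xenstore writes) run in order, and BUG() catches any request the diagram does not allow. The control shape, reduced to a sketch with hypothetical states (the real code runs each transition's side effects where noted):

enum ex_state { EX_INIT, EX_CONNECTED, EX_CLOSING, EX_CLOSED };

static void ex_set_state(enum ex_state *cur, enum ex_state target)
{
	while (*cur != target) {
		switch (*cur) {
		case EX_INIT:
			/* may go straight to Connected; any teardown
			 * request must pass through Closing */
			*cur = (target == EX_CONNECTED) ? EX_CONNECTED
							: EX_CLOSING;
			break;
		case EX_CONNECTED:
			*cur = EX_CLOSING;	/* always tear down first */
			break;
		case EX_CLOSING:
			*cur = EX_CLOSED;
			break;
		case EX_CLOSED:
			*cur = (target == EX_CLOSING) ? EX_CLOSING : EX_INIT;
			break;
		}
		/* perform the side effects of the hop just taken here */
	}
}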
connect(be); + set_backend_state(be, XenbusStateConnected); break; case XenbusStateClosing: - disconnect_backend(dev); - xenbus_switch_state(dev, XenbusStateClosing); + set_backend_state(be, XenbusStateClosing); break; case XenbusStateClosed: - xenbus_switch_state(dev, XenbusStateClosed); + set_backend_state(be, XenbusStateClosed); if (xenbus_dev_is_online(dev)) break; - destroy_backend(dev); /* fall through if not online */ case XenbusStateUnknown: + set_backend_state(be, XenbusStateClosed); device_unregister(&dev->dev); break; @@ -354,7 +447,9 @@ static void hotplug_status_changed(struct xenbus_watch *watch, if (IS_ERR(str)) return; if (len == sizeof("connected")-1 && !memcmp(str, "connected", len)) { - xenbus_switch_state(be->dev, XenbusStateConnected); + /* Complete any pending state change */ + xenbus_switch_state(be->dev, be->state); + /* Not interested in this watch anymore. */ unregister_hotplug_status_watch(be); } @@ -384,12 +479,8 @@ static void connect(struct backend_info *be) err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch, hotplug_status_changed, "%s/%s", dev->nodename, "hotplug-status"); - if (err) { - /* Switch now, since we can't do a watch. */ - xenbus_switch_state(dev, XenbusStateConnected); - } else { + if (!err) be->have_hotplug_status_watch = 1; - } netif_wake_queue(be->vif->dev); } diff --git a/drivers/pci/access.c b/drivers/pci/access.c index 1cc23661f79b..061da8c3ab4b 100644 --- a/drivers/pci/access.c +++ b/drivers/pci/access.c @@ -484,28 +484,29 @@ static inline bool pcie_cap_has_lnkctl(const struct pci_dev *dev) { int type = pci_pcie_type(dev); - return pcie_cap_version(dev) > 1 || + return type == PCI_EXP_TYPE_ENDPOINT || + type == PCI_EXP_TYPE_LEG_END || type == PCI_EXP_TYPE_ROOT_PORT || - type == PCI_EXP_TYPE_ENDPOINT || - type == PCI_EXP_TYPE_LEG_END; + type == PCI_EXP_TYPE_UPSTREAM || + type == PCI_EXP_TYPE_DOWNSTREAM || + type == PCI_EXP_TYPE_PCI_BRIDGE || + type == PCI_EXP_TYPE_PCIE_BRIDGE; } static inline bool pcie_cap_has_sltctl(const struct pci_dev *dev) { int type = pci_pcie_type(dev); - return pcie_cap_version(dev) > 1 || - type == PCI_EXP_TYPE_ROOT_PORT || - (type == PCI_EXP_TYPE_DOWNSTREAM && - pcie_caps_reg(dev) & PCI_EXP_FLAGS_SLOT); + return (type == PCI_EXP_TYPE_ROOT_PORT || + type == PCI_EXP_TYPE_DOWNSTREAM) && + pcie_caps_reg(dev) & PCI_EXP_FLAGS_SLOT; } static inline bool pcie_cap_has_rtctl(const struct pci_dev *dev) { int type = pci_pcie_type(dev); - return pcie_cap_version(dev) > 1 || - type == PCI_EXP_TYPE_ROOT_PORT || + return type == PCI_EXP_TYPE_ROOT_PORT || type == PCI_EXP_TYPE_RC_EC; } diff --git a/drivers/scsi/aacraid/commctrl.c b/drivers/scsi/aacraid/commctrl.c index 1ef041bc60c8..ee6caddd978c 100644 --- a/drivers/scsi/aacraid/commctrl.c +++ b/drivers/scsi/aacraid/commctrl.c @@ -510,7 +510,8 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg) goto cleanup; } - if (fibsize > (dev->max_fib_size - sizeof(struct aac_fibhdr))) { + if ((fibsize < (sizeof(struct user_aac_srb) - sizeof(struct user_sgentry))) || + (fibsize > (dev->max_fib_size - sizeof(struct aac_fibhdr)))) { rcode = -EINVAL; goto cleanup; } diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c index d53547d2e4c7..d3aa353908aa 100644 --- a/drivers/usb/core/hcd.c +++ b/drivers/usb/core/hcd.c @@ -1010,6 +1010,7 @@ static int register_root_hub(struct usb_hcd *hcd) dev_name(&usb_dev->dev), retval); return retval; } + usb_dev->lpm_capable = usb_device_supports_lpm(usb_dev); } retval = usb_new_device (usb_dev); diff --git a/drivers/usb/core/hub.c 
b/drivers/usb/core/hub.c index 6cf2ae0aa1f7..1424a8988849 100644 --- a/drivers/usb/core/hub.c +++ b/drivers/usb/core/hub.c @@ -135,7 +135,7 @@ struct usb_hub *usb_hub_to_struct_hub(struct usb_device *hdev) return usb_get_intfdata(hdev->actconfig->interface[0]); } -static int usb_device_supports_lpm(struct usb_device *udev) +int usb_device_supports_lpm(struct usb_device *udev) { /* USB 2.1 (and greater) devices indicate LPM support through * their USB 2.0 Extended Capabilities BOS descriptor. @@ -156,6 +156,11 @@ static int usb_device_supports_lpm(struct usb_device *udev) "Power management will be impacted.\n"); return 0; } + + /* udev is root hub */ + if (!udev->parent) + return 1; + if (udev->parent->lpm_capable) return 1; @@ -1124,6 +1129,11 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type) usb_clear_port_feature(hub->hdev, port1, USB_PORT_FEAT_C_ENABLE); } + if (portchange & USB_PORT_STAT_C_RESET) { + need_debounce_delay = true; + usb_clear_port_feature(hub->hdev, port1, + USB_PORT_FEAT_C_RESET); + } if ((portchange & USB_PORT_STAT_C_BH_RESET) && hub_is_superspeed(hub->hdev)) { need_debounce_delay = true; @@ -1557,10 +1567,15 @@ static int hub_configure(struct usb_hub *hub, if (hub->has_indicators && blinkenlights) hub->indicator [0] = INDICATOR_CYCLE; - for (i = 0; i < hdev->maxchild; i++) - if (usb_hub_create_port_device(hub, i + 1) < 0) + for (i = 0; i < hdev->maxchild; i++) { + ret = usb_hub_create_port_device(hub, i + 1); + if (ret < 0) { dev_err(hub->intfdev, "couldn't create port%d device.\n", i + 1); + hdev->maxchild = i; + goto fail_keep_maxchild; + } + } usb_hub_adjust_deviceremovable(hdev, hub->descriptor); @@ -1568,6 +1583,8 @@ static int hub_configure(struct usb_hub *hub, return 0; fail: + hdev->maxchild = 0; +fail_keep_maxchild: dev_err (hub_dev, "config failed, %s (err %d)\n", message, ret); /* hub_disconnect() frees urb and descriptor */ diff --git a/drivers/usb/core/usb.h b/drivers/usb/core/usb.h index 823857767a16..c49383669cd8 100644 --- a/drivers/usb/core/usb.h +++ b/drivers/usb/core/usb.h @@ -35,6 +35,7 @@ extern int usb_get_device_descriptor(struct usb_device *dev, unsigned int size); extern int usb_get_bos_descriptor(struct usb_device *dev); extern void usb_release_bos_descriptor(struct usb_device *dev); +extern int usb_device_supports_lpm(struct usb_device *udev); extern char *usb_cache_string(struct usb_device *udev, int index); extern int usb_set_configuration(struct usb_device *dev, int configuration); extern int usb_choose_configuration(struct usb_device *udev); diff --git a/drivers/usb/serial/mos7840.c b/drivers/usb/serial/mos7840.c index 2c1749da1f7e..8b3d0abe7e14 100644 --- a/drivers/usb/serial/mos7840.c +++ b/drivers/usb/serial/mos7840.c @@ -1593,7 +1593,11 @@ static int mos7840_tiocmget(struct tty_struct *tty) return -ENODEV; status = mos7840_get_uart_reg(port, MODEM_STATUS_REGISTER, &msr); + if (status != 1) + return -EIO; status = mos7840_get_uart_reg(port, MODEM_CONTROL_REGISTER, &mcr); + if (status != 1) + return -EIO; result = ((mcr & MCR_DTR) ? TIOCM_DTR : 0) | ((mcr & MCR_RTS) ? TIOCM_RTS : 0) | ((mcr & MCR_LOOPBACK) ? 
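The mos7840_tiocmget() fix above encodes a simple rule: an out-parameter from a failed register read is uninitialized stack data and must never reach userspace. Sketch with a hypothetical helper and illustrative register/bit values (the driver's helper returns 1 on success, hence the != 1 test):

#include <linux/types.h>
#include <linux/tty.h>		/* TIOCM_* flags */

struct ex_port;						/* hypothetical */
int ex_get_uart_reg(struct ex_port *p, int reg, u16 *val); /* 1 on success */

static int ex_tiocmget(struct ex_port *port)
{
	u16 msr, mcr;

	if (ex_get_uart_reg(port, 0x06 /* MSR */, &msr) != 1)
		return -EIO;	/* msr never written: do not use it */
	if (ex_get_uart_reg(port, 0x04 /* MCR */, &mcr) != 1)
		return -EIO;

	return ((mcr & 0x01) ? TIOCM_DTR : 0) |	/* bit masks illustrative */
	       ((msr & 0x10) ? TIOCM_CTS : 0);
}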
TIOCM_LOOP : 0) diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c index acaee066b99a..c3d94853b4ab 100644 --- a/drivers/usb/serial/option.c +++ b/drivers/usb/serial/option.c @@ -1376,6 +1376,23 @@ static const struct usb_device_id option_ids[] = { .driver_info = (kernel_ulong_t)&net_intf2_blacklist }, { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1426, 0xff, 0xff, 0xff), /* ZTE MF91 */ .driver_info = (kernel_ulong_t)&net_intf2_blacklist }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1533, 0xff, 0xff, 0xff) }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1534, 0xff, 0xff, 0xff) }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1535, 0xff, 0xff, 0xff) }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1545, 0xff, 0xff, 0xff) }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1546, 0xff, 0xff, 0xff) }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1547, 0xff, 0xff, 0xff) }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1565, 0xff, 0xff, 0xff) }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1566, 0xff, 0xff, 0xff) }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1567, 0xff, 0xff, 0xff) }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1589, 0xff, 0xff, 0xff) }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1590, 0xff, 0xff, 0xff) }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1591, 0xff, 0xff, 0xff) }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1592, 0xff, 0xff, 0xff) }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1594, 0xff, 0xff, 0xff) }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1596, 0xff, 0xff, 0xff) }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1598, 0xff, 0xff, 0xff) }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1600, 0xff, 0xff, 0xff) }, { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x2002, 0xff, 0xff, 0xff), .driver_info = (kernel_ulong_t)&zte_k3765_z_blacklist }, { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x2003, 0xff, 0xff, 0xff) }, diff --git a/drivers/video/adf/adf_client.c b/drivers/video/adf/adf_client.c index e4a792135072..bba873d34bbb 100644 --- a/drivers/video/adf/adf_client.c +++ b/drivers/video/adf/adf_client.c @@ -49,6 +49,9 @@ int adf_interface_blank(struct adf_interface *intf, u8 state) if (!intf->ops || !intf->ops->blank) return -EOPNOTSUPP; + if (state > DRM_MODE_DPMS_OFF) + return -EINVAL; + mutex_lock(&dev->client_lock); if (state != DRM_MODE_DPMS_ON) flush_kthread_worker(&dev->post_worker); diff --git a/drivers/video/adf/adf_fbdev.c b/drivers/video/adf/adf_fbdev.c index 477abd63ccc2..cac34d14cbc2 100644 --- a/drivers/video/adf/adf_fbdev.c +++ b/drivers/video/adf/adf_fbdev.c @@ -519,10 +519,10 @@ int adf_fbdev_blank(int blank, struct fb_info *info) dpms_state = DRM_MODE_DPMS_STANDBY; break; case FB_BLANK_VSYNC_SUSPEND: - dpms_state = DRM_MODE_DPMS_STANDBY; + dpms_state = DRM_MODE_DPMS_SUSPEND; break; case FB_BLANK_HSYNC_SUSPEND: - dpms_state = DRM_MODE_DPMS_SUSPEND; + dpms_state = DRM_MODE_DPMS_STANDBY; break; case FB_BLANK_POWERDOWN: dpms_state = DRM_MODE_DPMS_OFF; diff --git a/drivers/video/backlight/atmel-pwm-bl.c b/drivers/video/backlight/atmel-pwm-bl.c index a60d6afca97c..30e4ed52d701 100644 --- a/drivers/video/backlight/atmel-pwm-bl.c +++ b/drivers/video/backlight/atmel-pwm-bl.c @@ -118,7 +118,7 @@ static const struct backlight_ops atmel_pwm_bl_ops = { .update_status = atmel_pwm_bl_set_intensity, }; -static int __init atmel_pwm_bl_probe(struct platform_device *pdev) +static int atmel_pwm_bl_probe(struct platform_device *pdev) { struct 
backlight_properties props; const struct atmel_pwm_bl_platform_data *pdata; @@ -203,7 +203,7 @@ err_free_mem: return retval; } -static int __exit atmel_pwm_bl_remove(struct platform_device *pdev) +static int atmel_pwm_bl_remove(struct platform_device *pdev) { struct atmel_pwm_bl *pwmbl = platform_get_drvdata(pdev); @@ -222,10 +222,11 @@ static struct platform_driver atmel_pwm_bl_driver = { .name = "atmel-pwm-bl", }, /* REVISIT add suspend() and resume() */ - .remove = __exit_p(atmel_pwm_bl_remove), + .probe = atmel_pwm_bl_probe, + .remove = atmel_pwm_bl_remove, }; -module_platform_driver_probe(atmel_pwm_bl_driver, atmel_pwm_bl_probe); +module_platform_driver(atmel_pwm_bl_driver); MODULE_AUTHOR("Hans-Christian egtvedt "); MODULE_DESCRIPTION("Atmel PWM backlight driver"); diff --git a/drivers/video/hyperv_fb.c b/drivers/video/hyperv_fb.c index d4d2c5fe2488..0f3b33cf13ef 100644 --- a/drivers/video/hyperv_fb.c +++ b/drivers/video/hyperv_fb.c @@ -795,12 +795,21 @@ static int hvfb_remove(struct hv_device *hdev) } +static DEFINE_PCI_DEVICE_TABLE(pci_stub_id_table) = { + { + .vendor = PCI_VENDOR_ID_MICROSOFT, + .device = PCI_DEVICE_ID_HYPERV_VIDEO, + }, + { /* end of list */ } +}; + static const struct hv_vmbus_device_id id_table[] = { /* Synthetic Video Device GUID */ {HV_SYNTHVID_GUID}, {} }; +MODULE_DEVICE_TABLE(pci, pci_stub_id_table); MODULE_DEVICE_TABLE(vmbus, id_table); static struct hv_driver hvfb_drv = { @@ -810,14 +819,43 @@ static struct hv_driver hvfb_drv = { .remove = hvfb_remove, }; +static int hvfb_pci_stub_probe(struct pci_dev *pdev, + const struct pci_device_id *ent) +{ + return 0; +} + +static void hvfb_pci_stub_remove(struct pci_dev *pdev) +{ +} + +static struct pci_driver hvfb_pci_stub_driver = { + .name = KBUILD_MODNAME, + .id_table = pci_stub_id_table, + .probe = hvfb_pci_stub_probe, + .remove = hvfb_pci_stub_remove, +}; static int __init hvfb_drv_init(void) { - return vmbus_driver_register(&hvfb_drv); + int ret; + + ret = vmbus_driver_register(&hvfb_drv); + if (ret != 0) + return ret; + + ret = pci_register_driver(&hvfb_pci_stub_driver); + if (ret != 0) { + vmbus_driver_unregister(&hvfb_drv); + return ret; + } + + return 0; } static void __exit hvfb_drv_exit(void) { + pci_unregister_driver(&hvfb_pci_stub_driver); vmbus_driver_unregister(&hvfb_drv); } diff --git a/fs/configfs/dir.c b/fs/configfs/dir.c index 7aabc6ad4e9b..fa38d076697d 100644 --- a/fs/configfs/dir.c +++ b/fs/configfs/dir.c @@ -56,10 +56,19 @@ static void configfs_d_iput(struct dentry * dentry, struct configfs_dirent *sd = dentry->d_fsdata; if (sd) { - BUG_ON(sd->s_dentry != dentry); /* Coordinate with configfs_readdir */ spin_lock(&configfs_dirent_lock); - sd->s_dentry = NULL; + /* Coordinate with configfs_attach_attr where will increase + * sd->s_count and update sd->s_dentry to new allocated one. + * Only set sd->dentry to null when this dentry is the only + * sd owner. + * If not do so, configfs_d_iput may run just after + * configfs_attach_attr and set sd->s_dentry to null + * even it's still in use. 
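The atmel-pwm-bl conversion above is about object lifetime, not style: module_platform_driver_probe() runs the probe exactly once at registration, which is why an __init probe was legal; a driver with a normal .probe callback can be bound at any later point (deferred probe, a device created after boot), by which time __init text has been freed. The safe shape, as a self-contained sketch with a hypothetical device name:

#include <linux/module.h>
#include <linux/platform_device.h>

static int ex_probe(struct platform_device *pdev)	/* no __init */
{
	/* may run long after boot, so it must live in regular text */
	return 0;
}

static int ex_remove(struct platform_device *pdev)	/* no __exit */
{
	return 0;
}

static struct platform_driver ex_driver = {
	.driver = { .name = "ex-dev" },			/* hypothetical */
	.probe  = ex_probe,
	.remove = ex_remove,
};
module_platform_driver(ex_driver);

MODULE_LICENSE("GPL");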
+ */ + if (atomic_read(&sd->s_count) <= 2) + sd->s_dentry = NULL; + spin_unlock(&configfs_dirent_lock); configfs_put(sd); } @@ -426,8 +435,11 @@ static int configfs_attach_attr(struct configfs_dirent * sd, struct dentry * den struct configfs_attribute * attr = sd->s_element; int error; + spin_lock(&configfs_dirent_lock); dentry->d_fsdata = configfs_get(sd); sd->s_dentry = dentry; + spin_unlock(&configfs_dirent_lock); + error = configfs_create(dentry, (attr->ca_mode & S_IALLUGO) | S_IFREG, configfs_init_file); if (error) { diff --git a/fs/exec.c b/fs/exec.c index 1f446705636b..bb60cda5ee30 100644 --- a/fs/exec.c +++ b/fs/exec.c @@ -1669,6 +1669,12 @@ int __get_dumpable(unsigned long mm_flags) return (ret > SUID_DUMP_USER) ? SUID_DUMP_ROOT : ret; } +/* + * This returns the actual value of the suid_dumpable flag. For things + * that are using this for checking for privilege transitions, it must + * test against SUID_DUMP_USER rather than treating it as a boolean + * value. + */ int get_dumpable(struct mm_struct *mm) { return __get_dumpable(mm->flags); diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c index 28241a42f363..b69c70a1e2f3 100644 --- a/fs/nfs/nfs4proc.c +++ b/fs/nfs/nfs4proc.c @@ -1160,29 +1160,24 @@ _nfs4_opendata_reclaim_to_nfs4_state(struct nfs4_opendata *data) int ret; if (!data->rpc_done) { - ret = data->rpc_status; - goto err; + if (data->rpc_status) { + ret = data->rpc_status; + goto err; + } + /* cached opens have already been processed */ + goto update; } - ret = -ESTALE; - if (!(data->f_attr.valid & NFS_ATTR_FATTR_TYPE) || - !(data->f_attr.valid & NFS_ATTR_FATTR_FILEID) || - !(data->f_attr.valid & NFS_ATTR_FATTR_CHANGE)) - goto err; - - ret = -ENOMEM; - state = nfs4_get_open_state(inode, data->owner); - if (state == NULL) - goto err; - ret = nfs_refresh_inode(inode, &data->f_attr); if (ret) goto err; if (data->o_res.delegation_type != 0) nfs4_opendata_check_deleg(data, state); +update: update_open_stateid(state, &data->o_res.stateid, NULL, data->o_arg.fmode); + atomic_inc(&state->count); return state; err: @@ -4572,6 +4567,7 @@ static int _nfs4_proc_getlk(struct nfs4_state *state, int cmd, struct file_lock status = 0; } request->fl_ops->fl_release_private(request); + request->fl_ops = NULL; out: return status; } diff --git a/fs/nfsd/export.c b/fs/nfsd/export.c index 5f38ea36e266..af51cf9bf2e3 100644 --- a/fs/nfsd/export.c +++ b/fs/nfsd/export.c @@ -536,16 +536,12 @@ static int svc_export_parse(struct cache_detail *cd, char *mesg, int mlen) if (err) goto out3; exp.ex_anon_uid= make_kuid(&init_user_ns, an_int); - if (!uid_valid(exp.ex_anon_uid)) - goto out3; /* anon gid */ err = get_int(&mesg, &an_int); if (err) goto out3; exp.ex_anon_gid= make_kgid(&init_user_ns, an_int); - if (!gid_valid(exp.ex_anon_gid)) - goto out3; /* fsid */ err = get_int(&mesg, &an_int); @@ -583,6 +579,17 @@ static int svc_export_parse(struct cache_detail *cd, char *mesg, int mlen) exp.ex_uuid); if (err) goto out4; + /* + * For some reason exportfs has been passing down an + * invalid (-1) uid & gid on the "dummy" export which it + * uses to test export support. 
To make sure exportfs + * sees errors from check_export we therefore need to + * delay these checks till after check_export: + */ + if (!uid_valid(exp.ex_anon_uid)) + goto out4; + if (!gid_valid(exp.ex_anon_gid)) + goto out4; } expp = svc_export_lookup(&exp); diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c index baf149a85263..62fd6616801d 100644 --- a/fs/nfsd/vfs.c +++ b/fs/nfsd/vfs.c @@ -297,8 +297,104 @@ commit_metadata(struct svc_fh *fhp) } /* - * Set various file attributes. - * N.B. After this call fhp needs an fh_put + * Go over the attributes and take care of the small differences between + * NFS semantics and what Linux expects. + */ +static void +nfsd_sanitize_attrs(struct inode *inode, struct iattr *iap) +{ + /* + * NFSv2 does not differentiate between "set-[ac]time-to-now" + * which only requires access, and "set-[ac]time-to-X" which + * requires ownership. + * So if it looks like it might be "set both to the same time which + * is close to now", and if inode_change_ok fails, then we + * convert to "set to now" instead of "set to explicit time" + * + * We only call inode_change_ok as the last test as technically + * it is not an interface that we should be using. + */ +#define BOTH_TIME_SET (ATTR_ATIME_SET | ATTR_MTIME_SET) +#define MAX_TOUCH_TIME_ERROR (30*60) + if ((iap->ia_valid & BOTH_TIME_SET) == BOTH_TIME_SET && + iap->ia_mtime.tv_sec == iap->ia_atime.tv_sec) { + /* + * Looks probable. + * + * Now just make sure time is in the right ballpark. + * Solaris, at least, doesn't seem to care what the time + * request is. We require it be within 30 minutes of now. + */ + time_t delta = iap->ia_atime.tv_sec - get_seconds(); + if (delta < 0) + delta = -delta; + if (delta < MAX_TOUCH_TIME_ERROR && + inode_change_ok(inode, iap) != 0) { + /* + * Turn off ATTR_[AM]TIME_SET but leave ATTR_[AM]TIME. + * This will cause notify_change to set these times + * to "now" + */ + iap->ia_valid &= ~BOTH_TIME_SET; + } + } + + /* sanitize the mode change */ + if (iap->ia_valid & ATTR_MODE) { + iap->ia_mode &= S_IALLUGO; + iap->ia_mode |= (inode->i_mode & ~S_IALLUGO); + } + + /* Revoke setuid/setgid on chown */ + if (!S_ISDIR(inode->i_mode) && + (((iap->ia_valid & ATTR_UID) && !uid_eq(iap->ia_uid, inode->i_uid)) || + ((iap->ia_valid & ATTR_GID) && !gid_eq(iap->ia_gid, inode->i_gid)))) { + iap->ia_valid |= ATTR_KILL_PRIV; + if (iap->ia_valid & ATTR_MODE) { + /* we're setting mode too, just clear the s*id bits */ + iap->ia_mode &= ~S_ISUID; + if (iap->ia_mode & S_IXGRP) + iap->ia_mode &= ~S_ISGID; + } else { + /* set ATTR_KILL_* bits and let VFS handle it */ + iap->ia_valid |= (ATTR_KILL_SUID | ATTR_KILL_SGID); + } + } +} + +static __be32 +nfsd_get_write_access(struct svc_rqst *rqstp, struct svc_fh *fhp, + struct iattr *iap) +{ + struct inode *inode = fhp->fh_dentry->d_inode; + int host_err; + + if (iap->ia_size < inode->i_size) { + __be32 err; + + err = nfsd_permission(rqstp, fhp->fh_export, fhp->fh_dentry, + NFSD_MAY_TRUNC | NFSD_MAY_OWNER_OVERRIDE); + if (err) + return err; + } + + host_err = get_write_access(inode); + if (host_err) + goto out_nfserrno; + + host_err = locks_verify_truncate(inode, NULL, iap->ia_size); + if (host_err) + goto out_put_write_access; + return 0; + +out_put_write_access: + put_write_access(inode); +out_nfserrno: + return nfserrno(host_err); +} + +/* + * Set various file attributes. After this call fhp needs an fh_put. 
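One subtlety preserved by the nfsd_sanitize_attrs() factoring above: NFSv2 cannot express "set times to now" separately from "set times to X", yet the former needs only write access while the latter needs ownership. The heuristic treats identical atime/mtime within 30 minutes of now, where the explicit change would be refused, as a "touch" and drops the *_SET bits so notify_change() stamps the current time instead. Distilled into a sketch (helper name hypothetical, constants as in the hunk):

#include <linux/fs.h>
#include <linux/time.h>

#define EX_BOTH_TIME_SET (ATTR_ATIME_SET | ATTR_MTIME_SET)
#define EX_MAX_TOUCH_TIME_ERROR (30 * 60)

static void ex_downgrade_touch(struct inode *inode, struct iattr *iap)
{
	time_t delta;

	if ((iap->ia_valid & EX_BOTH_TIME_SET) != EX_BOTH_TIME_SET ||
	    iap->ia_mtime.tv_sec != iap->ia_atime.tv_sec)
		return;		/* not a touch-style request */

	delta = iap->ia_atime.tv_sec - get_seconds();
	if (delta < 0)
		delta = -delta;

	/* close to now, but the explicit set would be refused: convert
	 * to "set to now", which only needs write access */
	if (delta < EX_MAX_TOUCH_TIME_ERROR && inode_change_ok(inode, iap))
		iap->ia_valid &= ~EX_BOTH_TIME_SET;
}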
*/ __be32 nfsd_setattr(struct svc_rqst *rqstp, struct svc_fh *fhp, struct iattr *iap, @@ -332,114 +428,43 @@ nfsd_setattr(struct svc_rqst *rqstp, struct svc_fh *fhp, struct iattr *iap, if (!iap->ia_valid) goto out; + nfsd_sanitize_attrs(inode, iap); + /* - * NFSv2 does not differentiate between "set-[ac]time-to-now" - * which only requires access, and "set-[ac]time-to-X" which - * requires ownership. - * So if it looks like it might be "set both to the same time which - * is close to now", and if inode_change_ok fails, then we - * convert to "set to now" instead of "set to explicit time" - * - * We only call inode_change_ok as the last test as technically - * it is not an interface that we should be using. It is only - * valid if the filesystem does not define it's own i_op->setattr. - */ -#define BOTH_TIME_SET (ATTR_ATIME_SET | ATTR_MTIME_SET) -#define MAX_TOUCH_TIME_ERROR (30*60) - if ((iap->ia_valid & BOTH_TIME_SET) == BOTH_TIME_SET && - iap->ia_mtime.tv_sec == iap->ia_atime.tv_sec) { - /* - * Looks probable. - * - * Now just make sure time is in the right ballpark. - * Solaris, at least, doesn't seem to care what the time - * request is. We require it be within 30 minutes of now. - */ - time_t delta = iap->ia_atime.tv_sec - get_seconds(); - if (delta < 0) - delta = -delta; - if (delta < MAX_TOUCH_TIME_ERROR && - inode_change_ok(inode, iap) != 0) { - /* - * Turn off ATTR_[AM]TIME_SET but leave ATTR_[AM]TIME. - * This will cause notify_change to set these times - * to "now" - */ - iap->ia_valid &= ~BOTH_TIME_SET; - } - } - - /* - * The size case is special. - * It changes the file as well as the attributes. + * The size case is special, it changes the file in addition to the + * attributes. */ if (iap->ia_valid & ATTR_SIZE) { - if (iap->ia_size < inode->i_size) { - err = nfsd_permission(rqstp, fhp->fh_export, dentry, - NFSD_MAY_TRUNC|NFSD_MAY_OWNER_OVERRIDE); - if (err) - goto out; - } - - host_err = get_write_access(inode); - if (host_err) - goto out_nfserr; - + err = nfsd_get_write_access(rqstp, fhp, iap); + if (err) + goto out; size_change = 1; - host_err = locks_verify_truncate(inode, NULL, iap->ia_size); - if (host_err) { - put_write_access(inode); - goto out_nfserr; - } } - /* sanitize the mode change */ - if (iap->ia_valid & ATTR_MODE) { - iap->ia_mode &= S_IALLUGO; - iap->ia_mode |= (inode->i_mode & ~S_IALLUGO); - } - - /* Revoke setuid/setgid on chown */ - if (!S_ISDIR(inode->i_mode) && - (((iap->ia_valid & ATTR_UID) && !uid_eq(iap->ia_uid, inode->i_uid)) || - ((iap->ia_valid & ATTR_GID) && !gid_eq(iap->ia_gid, inode->i_gid)))) { - iap->ia_valid |= ATTR_KILL_PRIV; - if (iap->ia_valid & ATTR_MODE) { - /* we're setting mode too, just clear the s*id bits */ - iap->ia_mode &= ~S_ISUID; - if (iap->ia_mode & S_IXGRP) - iap->ia_mode &= ~S_ISGID; - } else { - /* set ATTR_KILL_* bits and let VFS handle it */ - iap->ia_valid |= (ATTR_KILL_SUID | ATTR_KILL_SGID); - } - } - - /* Change the attributes. 
*/ - iap->ia_valid |= ATTR_CTIME; - err = nfserr_notsync; - if (!check_guard || guardtime == inode->i_ctime.tv_sec) { - host_err = nfsd_break_lease(inode); - if (host_err) - goto out_nfserr; - fh_lock(fhp); - - host_err = notify_change(dentry, iap); - err = nfserrno(host_err); - fh_unlock(fhp); + if (check_guard && guardtime != inode->i_ctime.tv_sec) { + err = nfserr_notsync; + goto out_put_write_access; } + + host_err = nfsd_break_lease(inode); + if (host_err) + goto out_put_write_access_nfserror; + + fh_lock(fhp); + host_err = notify_change(dentry, iap); + fh_unlock(fhp); + +out_put_write_access_nfserror: + err = nfserrno(host_err); +out_put_write_access: if (size_change) put_write_access(inode); if (!err) commit_metadata(fhp); out: return err; - -out_nfserr: - err = nfserrno(host_err); - goto out; } #if defined(CONFIG_NFSD_V2_ACL) || \ diff --git a/include/linux/binfmts.h b/include/linux/binfmts.h index 70cf138690e9..df97ca4aae52 100644 --- a/include/linux/binfmts.h +++ b/include/linux/binfmts.h @@ -99,9 +99,6 @@ extern void setup_new_exec(struct linux_binprm * bprm); extern void would_dump(struct linux_binprm *, struct file *); extern int suid_dumpable; -#define SUID_DUMP_DISABLE 0 /* No setuid dumping */ -#define SUID_DUMP_USER 1 /* Dump as user of process */ -#define SUID_DUMP_ROOT 2 /* Dump as root */ /* Stack area protections */ #define EXSTACK_DEFAULT 0 /* Whatever the arch defaults to */ diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h index b508016fb76d..b3bd7e737e8b 100644 --- a/include/linux/mod_devicetable.h +++ b/include/linux/mod_devicetable.h @@ -456,7 +456,8 @@ enum dmi_field { }; struct dmi_strmatch { - unsigned char slot; + unsigned char slot:7; + unsigned char exact_match:1; char substr[79]; }; @@ -474,7 +475,8 @@ struct dmi_system_id { */ #define dmi_device_id dmi_system_id -#define DMI_MATCH(a, b) { a, b } +#define DMI_MATCH(a, b) { .slot = a, .substr = b } +#define DMI_EXACT_MATCH(a, b) { .slot = a, .substr = b, .exact_match = 1 } #define PLATFORM_NAME_SIZE 20 #define PLATFORM_MODULE_PREFIX "platform:" diff --git a/include/linux/sched.h b/include/linux/sched.h index 351294b4614b..a3c8b270931b 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -332,6 +332,10 @@ static inline void arch_pick_mmap_layout(struct mm_struct *mm) {} extern void set_dumpable(struct mm_struct *mm, int value); extern int get_dumpable(struct mm_struct *mm); +#define SUID_DUMP_DISABLE 0 /* No setuid dumping */ +#define SUID_DUMP_USER 1 /* Dump as user of process */ +#define SUID_DUMP_ROOT 2 /* Dump as root */ + /* mm flags */ /* dumpable bits */ #define MMF_DUMPABLE 0 /* core dump is permitted */ @@ -2485,34 +2489,98 @@ static inline int tsk_is_polling(struct task_struct *p) { return task_thread_info(p)->status & TS_POLLING; } -static inline void current_set_polling(void) +static inline void __current_set_polling(void) { current_thread_info()->status |= TS_POLLING; } -static inline void current_clr_polling(void) +static inline bool __must_check current_set_polling_and_test(void) +{ + __current_set_polling(); + + /* + * Polling state must be visible before we test NEED_RESCHED, + * paired by resched_task() + */ + smp_mb(); + + return unlikely(tif_need_resched()); +} + +static inline void __current_clr_polling(void) { current_thread_info()->status &= ~TS_POLLING; - smp_mb__after_clear_bit(); +} + +static inline bool __must_check current_clr_polling_and_test(void) +{ + __current_clr_polling(); + + /* + * Polling state must be visible before we test 
NEED_RESCHED, + * paired by resched_task() + */ + smp_mb(); + + return unlikely(tif_need_resched()); } #elif defined(TIF_POLLING_NRFLAG) static inline int tsk_is_polling(struct task_struct *p) { return test_tsk_thread_flag(p, TIF_POLLING_NRFLAG); } -static inline void current_set_polling(void) + +static inline void __current_set_polling(void) { set_thread_flag(TIF_POLLING_NRFLAG); } -static inline void current_clr_polling(void) +static inline bool __must_check current_set_polling_and_test(void) +{ + __current_set_polling(); + + /* + * Polling state must be visible before we test NEED_RESCHED, + * paired by resched_task() + * + * XXX: assumes set/clear bit are identical barrier wise. + */ + smp_mb__after_clear_bit(); + + return unlikely(tif_need_resched()); +} + +static inline void __current_clr_polling(void) { clear_thread_flag(TIF_POLLING_NRFLAG); } + +static inline bool __must_check current_clr_polling_and_test(void) +{ + __current_clr_polling(); + + /* + * Polling state must be visible before we test NEED_RESCHED, + * paired by resched_task() + */ + smp_mb__after_clear_bit(); + + return unlikely(tif_need_resched()); +} + #else static inline int tsk_is_polling(struct task_struct *p) { return 0; } -static inline void current_set_polling(void) { } -static inline void current_clr_polling(void) { } +static inline void __current_set_polling(void) { } +static inline void __current_clr_polling(void) { } + +static inline bool __must_check current_set_polling_and_test(void) +{ + return unlikely(tif_need_resched()); +} +static inline bool __must_check current_clr_polling_and_test(void) +{ + return unlikely(tif_need_resched()); +} #endif /* diff --git a/include/linux/thread_info.h b/include/linux/thread_info.h index e7e04736802f..4ae6f32c8033 100644 --- a/include/linux/thread_info.h +++ b/include/linux/thread_info.h @@ -107,6 +107,8 @@ static inline int test_ti_thread_flag(struct thread_info *ti, int flag) #define set_need_resched() set_thread_flag(TIF_NEED_RESCHED) #define clear_need_resched() clear_thread_flag(TIF_NEED_RESCHED) +#define tif_need_resched() test_thread_flag(TIF_NEED_RESCHED) + #if defined TIF_RESTORE_SIGMASK && !defined HAVE_SET_RESTORE_SIGMASK /* * An arch can define its own version of set_restore_sigmask() to get the diff --git a/include/net/ip6_fib.h b/include/net/ip6_fib.h index 2a601e7da1bf..665e0cee59bd 100644 --- a/include/net/ip6_fib.h +++ b/include/net/ip6_fib.h @@ -165,6 +165,7 @@ static inline struct inet6_dev *ip6_dst_idev(struct dst_entry *dst) static inline void rt6_clean_expires(struct rt6_info *rt) { rt->rt6i_flags &= ~RTF_EXPIRES; + rt->dst.expires = 0; } static inline void rt6_set_expires(struct rt6_info *rt, unsigned long expires) diff --git a/include/net/ip_tunnels.h b/include/net/ip_tunnels.h index a9942e1faefb..7ac7f91f0242 100644 --- a/include/net/ip_tunnels.h +++ b/include/net/ip_tunnels.h @@ -113,7 +113,7 @@ struct ip_tunnel *ip_tunnel_lookup(struct ip_tunnel_net *itn, __be32 key); int ip_tunnel_rcv(struct ip_tunnel *tunnel, struct sk_buff *skb, - const struct tnl_ptk_info *tpi, bool log_ecn_error); + const struct tnl_ptk_info *tpi, int hdr_len, bool log_ecn_error); int ip_tunnel_changelink(struct net_device *dev, struct nlattr *tb[], struct ip_tunnel_parm *p); int ip_tunnel_newlink(struct net_device *dev, struct nlattr *tb[], diff --git a/include/sound/compress_driver.h b/include/sound/compress_driver.h index 9031a26249b5..ae6c3b8ed2f5 100644 --- a/include/sound/compress_driver.h +++ b/include/sound/compress_driver.h @@ -171,4 +171,13 @@ static inline void 
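The *_and_test() helpers introduced above close a classic lost-wakeup window. The waker (resched_task()) stores NEED_RESCHED, issues a full barrier, then loads the polling state to decide whether an IPI is needed; the idler must mirror that ordering. If the idler's NEED_RESCHED load were allowed to pass its polling store, both sides could read stale values, no IPI would be sent, and the CPU would idle with a resched pending. The idle-side shape (TIF_POLLING_NRFLAG variant), sketched:

/*
 * waker (resched_task)		  idler
 * --------------------		  -----
 * store NEED_RESCHED		  store TIF_POLLING_NRFLAG
 * smp_mb()			  smp_mb()
 * load polling -> IPI if clear	  load NEED_RESCHED -> idle if clear
 */
static inline bool __must_check ex_set_polling_and_test(void)
{
	set_thread_flag(TIF_POLLING_NRFLAG);

	/* order the store above before the load below; pairs with the
	 * barrier in resched_task() */
	smp_mb__after_clear_bit();	/* assumes set/clear are equal
					 * barrier-wise, as the hunk notes */

	return unlikely(test_thread_flag(TIF_NEED_RESCHED));
}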
snd_compr_fragment_elapsed(struct snd_compr_stream *stream) wake_up(&stream->runtime->sleep); } +static inline void snd_compr_drain_notify(struct snd_compr_stream *stream) +{ + if (snd_BUG_ON(!stream)) + return; + + stream->runtime->state = SNDRV_PCM_STATE_SETUP; + wake_up(&stream->runtime->sleep); +} + #endif diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h index 66dc53bca19a..2afcb71857fd 100644 --- a/include/trace/events/sched.h +++ b/include/trace/events/sched.h @@ -579,6 +579,55 @@ TRACE_EVENT(sched_task_usage_ratio, __entry->ratio) ); +/* + * Tracepoint for HMP (CONFIG_SCHED_HMP) task migrations, + * marking the forced transition of runnable or running tasks. + */ +TRACE_EVENT(sched_hmp_migrate_force_running, + + TP_PROTO(struct task_struct *tsk, int running), + + TP_ARGS(tsk, running), + + TP_STRUCT__entry( + __array(char, comm, TASK_COMM_LEN) + __field(int, running) + ), + + TP_fast_assign( + memcpy(__entry->comm, tsk->comm, TASK_COMM_LEN); + __entry->running = running; + ), + + TP_printk("running=%d comm=%s", + __entry->running, __entry->comm) +); + +/* + * Tracepoint for HMP (CONFIG_SCHED_HMP) task migrations, + * marking the forced transition of runnable or running + * tasks when a task is about to go idle. + */ +TRACE_EVENT(sched_hmp_migrate_idle_running, + + TP_PROTO(struct task_struct *tsk, int running), + + TP_ARGS(tsk, running), + + TP_STRUCT__entry( + __array(char, comm, TASK_COMM_LEN) + __field(int, running) + ), + + TP_fast_assign( + memcpy(__entry->comm, tsk->comm, TASK_COMM_LEN); + __entry->running = running; + ), + + TP_printk("running=%d comm=%s", + __entry->running, __entry->comm) +); + /* * Tracepoint for HMP (CONFIG_SCHED_HMP) task migrations. */ diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h index fb104e51496e..9e59950f55cf 100644 --- a/include/uapi/linux/perf_event.h +++ b/include/uapi/linux/perf_event.h @@ -425,13 +425,15 @@ struct perf_event_mmap_page { /* * Control data for the mmap() data buffer. * - * User-space reading the @data_head value should issue an rmb(), on - * SMP capable platforms, after reading this value -- see - * perf_event_wakeup(). + * User-space reading the @data_head value should issue an smp_rmb(), + * after reading this value. * * When the mapping is PROT_WRITE the @data_tail value should be - * written by userspace to reflect the last read data. In this case - * the kernel will not over-write unread data. + * written by userspace to reflect the last read data, after issueing + * an smp_mb() to separate the data read from the ->data_tail store. + * In this case the kernel will not over-write unread data. + * + * See perf_output_put_handle() for the data ordering. 
*/ __u64 data_head; /* head in the data section */ __u64 data_tail; /* user-space written tail */ diff --git a/include/uapi/video/adf.h b/include/uapi/video/adf.h index 2ba345ca458b..38458f6428b5 100644 --- a/include/uapi/video/adf.h +++ b/include/uapi/video/adf.h @@ -22,7 +22,7 @@ #include #define ADF_NAME_LEN 32 -#define ADF_MAX_CUSTOM_DATA_SIZE PAGE_SIZE +#define ADF_MAX_CUSTOM_DATA_SIZE 4096 enum adf_interface_type { ADF_INTF_DSI = 0, @@ -126,7 +126,7 @@ struct adf_buffer_config { __s64 acquire_fence; }; -#define ADF_MAX_BUFFERS (PAGE_SIZE / sizeof(struct adf_buffer_config)) +#define ADF_MAX_BUFFERS (4096 / sizeof(struct adf_buffer_config)) /** * struct adf_post_config - request to flip to a new set of buffers @@ -152,7 +152,7 @@ struct adf_post_config { __s64 complete_fence; }; -#define ADF_MAX_INTERFACES (PAGE_SIZE / sizeof(__u32)) +#define ADF_MAX_INTERFACES (4096 / sizeof(__u32)) /** * struct adf_simple_buffer_allocate - request to allocate a "simple" buffer @@ -233,7 +233,7 @@ struct adf_device_data { size_t custom_data_size; void __user *custom_data; }; -#define ADF_MAX_ATTACHMENTS (PAGE_SIZE / sizeof(struct adf_attachment)) +#define ADF_MAX_ATTACHMENTS (4096 / sizeof(struct adf_attachment_config)) /** * struct adf_device_data - describes a display interface @@ -273,7 +273,7 @@ struct adf_interface_data { size_t custom_data_size; void __user *custom_data; }; -#define ADF_MAX_MODES (PAGE_SIZE / sizeof(struct drm_mode_modeinfo)) +#define ADF_MAX_MODES (4096 / sizeof(struct drm_mode_modeinfo)) /** * struct adf_overlay_engine_data - describes an overlay engine @@ -293,7 +293,7 @@ struct adf_overlay_engine_data { size_t custom_data_size; void __user *custom_data; }; -#define ADF_MAX_SUPPORTED_FORMATS (PAGE_SIZE / sizeof(__u32)) +#define ADF_MAX_SUPPORTED_FORMATS (4096 / sizeof(__u32)) #define ADF_SET_EVENT _IOW('D', 0, struct adf_set_event) #define ADF_BLANK _IOW('D', 1, __u8) diff --git a/ipc/shm.c b/ipc/shm.c index 7b87bea5245b..6dc55af8a29b 100644 --- a/ipc/shm.c +++ b/ipc/shm.c @@ -208,15 +208,18 @@ static void shm_open(struct vm_area_struct *vma) */ static void shm_destroy(struct ipc_namespace *ns, struct shmid_kernel *shp) { + struct file *shm_file; + + shm_file = shp->shm_file; + shp->shm_file = NULL; ns->shm_tot -= (shp->shm_segsz + PAGE_SIZE - 1) >> PAGE_SHIFT; shm_rmid(ns, shp); shm_unlock(shp); - if (!is_file_hugepages(shp->shm_file)) - shmem_lock(shp->shm_file, 0, shp->mlock_user); + if (!is_file_hugepages(shm_file)) + shmem_lock(shm_file, 0, shp->mlock_user); else if (shp->mlock_user) - user_shm_unlock(file_inode(shp->shm_file)->i_size, - shp->mlock_user); - fput (shp->shm_file); + user_shm_unlock(file_inode(shm_file)->i_size, shp->mlock_user); + fput(shm_file); ipc_rcu_putref(shp, shm_rcu_free); } @@ -974,15 +977,25 @@ SYSCALL_DEFINE3(shmctl, int, shmid, int, cmd, struct shmid_ds __user *, buf) ipc_lock_object(&shp->shm_perm); if (!ns_capable(ns->user_ns, CAP_IPC_LOCK)) { kuid_t euid = current_euid(); - err = -EPERM; if (!uid_eq(euid, shp->shm_perm.uid) && - !uid_eq(euid, shp->shm_perm.cuid)) + !uid_eq(euid, shp->shm_perm.cuid)) { + err = -EPERM; goto out_unlock0; - if (cmd == SHM_LOCK && !rlimit(RLIMIT_MEMLOCK)) + } + if (cmd == SHM_LOCK && !rlimit(RLIMIT_MEMLOCK)) { + err = -EPERM; goto out_unlock0; + } } shm_file = shp->shm_file; + + /* check if shm_destroy() is tearing down shp */ + if (shm_file == NULL) { + err = -EIDRM; + goto out_unlock0; + } + if (is_file_hugepages(shm_file)) goto out_unlock0; @@ -1101,6 +1114,14 @@ long do_shmat(int shmid, char __user *shmaddr, 
int shmflg, ulong *raddr, goto out_unlock; ipc_lock_object(&shp->shm_perm); + + /* check if shm_destroy() is tearing down shp */ + if (shp->shm_file == NULL) { + ipc_unlock_object(&shp->shm_perm); + err = -EIDRM; + goto out_unlock; + } + path = shp->shm_file->f_path; path_get(&path); shp->shm_nattch++; diff --git a/kernel/cpu/idle.c b/kernel/cpu/idle.c index e695c0a0bcb5..c261409500e4 100644 --- a/kernel/cpu/idle.c +++ b/kernel/cpu/idle.c @@ -44,7 +44,7 @@ static inline int cpu_idle_poll(void) rcu_idle_enter(); trace_cpu_idle_rcuidle(0, smp_processor_id()); local_irq_enable(); - while (!need_resched()) + while (!tif_need_resched()) cpu_relax(); trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, smp_processor_id()); rcu_idle_exit(); @@ -92,8 +92,7 @@ static void cpu_idle_loop(void) if (cpu_idle_force_poll || tick_check_broadcast_expired()) { cpu_idle_poll(); } else { - current_clr_polling(); - if (!need_resched()) { + if (!current_clr_polling_and_test()) { stop_critical_timings(); rcu_idle_enter(); arch_cpu_idle(); @@ -103,7 +102,7 @@ static void cpu_idle_loop(void) } else { local_irq_enable(); } - current_set_polling(); + __current_set_polling(); } arch_cpu_idle_exit(); } @@ -129,7 +128,7 @@ void cpu_startup_entry(enum cpuhp_state state) */ boot_init_stack_canary(); #endif - current_set_polling(); + __current_set_polling(); arch_cpu_idle_prepare(); cpu_idle_loop(); } diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c index cd55144270b5..9c2ddfbf4525 100644 --- a/kernel/events/ring_buffer.c +++ b/kernel/events/ring_buffer.c @@ -87,10 +87,31 @@ again: goto out; /* - * Publish the known good head. Rely on the full barrier implied - * by atomic_dec_and_test() order the rb->head read and this - * write. + * Since the mmap() consumer (userspace) can run on a different CPU: + * + * kernel user + * + * READ ->data_tail READ ->data_head + * smp_mb() (A) smp_rmb() (C) + * WRITE $data READ $data + * smp_wmb() (B) smp_mb() (D) + * STORE ->data_head WRITE ->data_tail + * + * Where A pairs with D, and B pairs with C. + * + * I don't think A needs to be a full barrier because we won't in fact + * write data until we see the store from userspace. So we simply don't + * issue the data WRITE until we observe it. Be conservative for now. + * + * OTOH, D needs to be a full barrier since it separates the data READ + * from the tail WRITE. + * + * For B a WMB is sufficient since it separates two WRITEs, and for C + * an RMB is sufficient since it separates two READs. + * + * See perf_output_begin(). */ + smp_wmb(); rb->user_page->data_head = head; /* @@ -154,9 +175,11 @@ int perf_output_begin(struct perf_output_handle *handle, * Userspace could choose to issue a mb() before updating the * tail pointer. So that all reads will be completed before the * write is issued. + * + * See perf_output_put_handle(). 
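The ipc/shm hunks above turn shp->shm_file into a liveness marker: shm_destroy() clears it while the object lock is held and does the heavyweight teardown on a local copy afterwards, so shmctl(SHM_LOCK/SHM_UNLOCK) and do_shmat(), which can win the lock in between, detect a dying segment by the NULL and return -EIDRM instead of dereferencing freed state. The pattern in isolation, with illustrative names:

#include <linux/fs.h>
#include <linux/spinlock.h>

struct ex_seg {
	spinlock_t lock;
	struct file *file;	/* NULL once teardown has started */
};

static void ex_destroy(struct ex_seg *seg)	/* called with lock held */
{
	struct file *f = seg->file;

	seg->file = NULL;		/* mark the segment dying */
	spin_unlock(&seg->lock);

	fput(f);			/* heavyweight work after unlock */
}

static int ex_attach(struct ex_seg *seg)
{
	spin_lock(&seg->lock);
	if (!seg->file) {		/* lost the race with ex_destroy() */
		spin_unlock(&seg->lock);
		return -EIDRM;
	}
	/* ... safe to use seg->file under the lock ... */
	spin_unlock(&seg->lock);
	return 0;
}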
*/ tail = ACCESS_ONCE(rb->user_page->data_tail); - smp_rmb(); + smp_mb(); offset = head = local_read(&rb->head); head += size; if (unlikely(!perf_output_space(rb, tail, offset, head))) diff --git a/kernel/ptrace.c b/kernel/ptrace.c index 335a7ae697f5..afadcf7b4a22 100644 --- a/kernel/ptrace.c +++ b/kernel/ptrace.c @@ -257,7 +257,8 @@ ok: if (task->mm) dumpable = get_dumpable(task->mm); rcu_read_lock(); - if (!dumpable && !ptrace_has_cap(__task_cred(task)->user_ns, mode)) { + if (dumpable != SUID_DUMP_USER && + !ptrace_has_cap(__task_cred(task)->user_ns, mode)) { rcu_read_unlock(); return -EPERM; } diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 643da90f3a7a..3e242209bc3f 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -31,7 +31,6 @@ #include #include -#ifdef CONFIG_HMP_VARIABLE_SCALE #include #include #ifdef CONFIG_HMP_FREQUENCY_INVARIANT_SCALE @@ -40,7 +39,6 @@ */ #include #endif /* CONFIG_HMP_FREQUENCY_INVARIANT_SCALE */ -#endif /* CONFIG_HMP_VARIABLE_SCALE */ #include "sched.h" @@ -1212,8 +1210,7 @@ static u32 __compute_runnable_contrib(u64 n) return contrib + runnable_avg_yN_sum[n]; } -#ifdef CONFIG_HMP_VARIABLE_SCALE - +#ifdef CONFIG_SCHED_HMP #define HMP_VARIABLE_SCALE_SHIFT 16ULL struct hmp_global_attr { struct attribute attr; @@ -1224,6 +1221,7 @@ struct hmp_global_attr { int *value; int (*to_sysfs)(int); int (*from_sysfs)(int); + ssize_t (*to_sysfs_text)(char *buf, int buf_size); }; #define HMP_DATA_SYSFS_MAX 8 @@ -1294,7 +1292,7 @@ struct cpufreq_extents { static struct cpufreq_extents freq_scale[CONFIG_NR_CPUS]; #endif /* CONFIG_HMP_FREQUENCY_INVARIANT_SCALE */ -#endif /* CONFIG_HMP_VARIABLE_SCALE */ +#endif /* CONFIG_SCHED_HMP */ /* We can represent the historical contribution to runnable average as the * coefficients of a geometric series. 
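The ring-buffer barrier comments and the rmb-to-mb change above fix the kernel half of a four-barrier contract (A-D) with the mmap()ed consumer. For completeness, a sketch of the matching user-space half using GCC builtins; perf_event_mmap_page is the real UAPI layout, ex_record_size() is a hypothetical record parser, and a full __sync_synchronize() is used conservatively even at (C), where an rmb would suffice:

#include <stdint.h>
#include <stddef.h>
#include <linux/perf_event.h>

size_t ex_record_size(const char *base, uint64_t off);	/* hypothetical */

void ex_consume(volatile struct perf_event_mmap_page *pg,
		const char *data, uint64_t mask)
{
	uint64_t head, tail;

	head = pg->data_head;
	__sync_synchronize();	/* (C) order head read before data reads */

	tail = pg->data_tail;
	while (tail != head) {
		/* parse the record at data[tail & mask] */
		tail += ex_record_size(data, tail & mask);
	}

	__sync_synchronize();	/* (D) full barrier: data reads before
				 * the tail store the kernel will read */
	pg->data_tail = tail;
}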
To do this we sub-divide our runnable @@ -1340,7 +1338,7 @@ static __always_inline int __update_entity_runnable_avg(u64 now, #endif /* CONFIG_HMP_FREQUENCY_INVARIANT_SCALE */ delta = now - sa->last_runnable_update; -#ifdef CONFIG_HMP_VARIABLE_SCALE +#ifdef CONFIG_SCHED_HMP delta = hmp_variable_scale_convert(delta); #endif /* @@ -3843,7 +3841,6 @@ static inline void hmp_next_down_delay(struct sched_entity *se, int cpu) cpu_rq(cpu)->avg.hmp_last_up_migration = 0; } -#ifdef CONFIG_HMP_VARIABLE_SCALE /* * Heterogenous multiprocessor (HMP) optimizations * @@ -3876,27 +3873,35 @@ static inline void hmp_next_down_delay(struct sched_entity *se, int cpu) * The scale factor hmp_data.multiplier is a fixed point * number: (32-HMP_VARIABLE_SCALE_SHIFT).HMP_VARIABLE_SCALE_SHIFT */ -static u64 hmp_variable_scale_convert(u64 delta) +static inline u64 hmp_variable_scale_convert(u64 delta) { +#ifdef CONFIG_HMP_VARIABLE_SCALE u64 high = delta >> 32ULL; u64 low = delta & 0xffffffffULL; low *= hmp_data.multiplier; high *= hmp_data.multiplier; return (low >> HMP_VARIABLE_SCALE_SHIFT) + (high << (32ULL - HMP_VARIABLE_SCALE_SHIFT)); +#else + return delta; +#endif } static ssize_t hmp_show(struct kobject *kobj, struct attribute *attr, char *buf) { - ssize_t ret = 0; struct hmp_global_attr *hmp_attr = container_of(attr, struct hmp_global_attr, attr); - int temp = *(hmp_attr->value); + int temp; + + if (hmp_attr->to_sysfs_text != NULL) + return hmp_attr->to_sysfs_text(buf, PAGE_SIZE); + + temp = *(hmp_attr->value); if (hmp_attr->to_sysfs != NULL) temp = hmp_attr->to_sysfs(temp); - ret = sprintf(buf, "%d\n", temp); - return ret; + + return (ssize_t)sprintf(buf, "%d\n", temp); } static ssize_t hmp_store(struct kobject *a, struct attribute *attr, @@ -3925,11 +3930,31 @@ static ssize_t hmp_store(struct kobject *a, struct attribute *attr, return ret; } +static ssize_t hmp_print_domains(char *outbuf, int outbufsize) +{ + char buf[64]; + const char nospace[] = "%s", space[] = " %s"; + const char *fmt = nospace; + struct hmp_domain *domain; + struct list_head *pos; + int outpos = 0; + list_for_each(pos, &hmp_domains) { + domain = list_entry(pos, struct hmp_domain, hmp_domains); + if (cpumask_scnprintf(buf, 64, &domain->possible_cpus)) { + outpos += sprintf(outbuf+outpos, fmt, buf); + fmt = space; + } + } + strcat(outbuf, "\n"); + return outpos+1; +} + +#ifdef CONFIG_HMP_VARIABLE_SCALE static int hmp_period_tofrom_sysfs(int value) { return (LOAD_AVG_PERIOD << HMP_VARIABLE_SCALE_SHIFT) / value; } - +#endif /* max value for threshold is 1024 */ static int hmp_theshold_from_sysfs(int value) { @@ -3937,9 +3962,10 @@ static int hmp_theshold_from_sysfs(int value) return -1; return value; } -#ifdef CONFIG_HMP_FREQUENCY_INVARIANT_SCALE -/* freqinvar control is only 0,1 off/on */ -static int hmp_freqinvar_from_sysfs(int value) +#if defined(CONFIG_SCHED_HMP_LITTLE_PACKING) || \ + defined(CONFIG_HMP_FREQUENCY_INVARIANT_SCALE) +/* toggle control is only 0,1 off/on */ +static int hmp_toggle_from_sysfs(int value) { if (value < 0 || value > 1) return -1; @@ -3959,7 +3985,9 @@ static void hmp_attr_add( const char *name, int *value, int (*to_sysfs)(int), - int (*from_sysfs)(int)) + int (*from_sysfs)(int), + ssize_t (*to_sysfs_text)(char *, int), + umode_t mode) { int i = 0; while (hmp_data.attributes[i] != NULL) { @@ -3967,13 +3995,17 @@ static void hmp_attr_add( if (i >= HMP_DATA_SYSFS_MAX) return; } - hmp_data.attr[i].attr.mode = 0644; + if (mode) + hmp_data.attr[i].attr.mode = mode; + else + hmp_data.attr[i].attr.mode = 0644; 
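hmp_variable_scale_convert() above is 64x32 fixed-point multiplication done safely: the X.16 fixed-point multiplier is applied to the low and high 32-bit halves of the delta separately, so each individual multiply stays within 64 bits. As a worked check (assuming LOAD_AVG_PERIOD is 32, as in this kernel): writing 64 to load_avg_period_ms stores (32 << 16) / 64 = 0x8000, i.e. 0.5, so every tracked delta is halved and the effective averaging period doubles. The identity, sketched:

/*
 * delta = high * 2^32 + low, mult is X.16 fixed point:
 *
 *   (delta * mult) >> 16 == (low * mult >> 16) + (high * mult << 16)
 *
 * Each 32x32 multiply fits in a u64; delta * mult directly might not.
 */
static u64 ex_fixed_scale(u64 delta, u64 mult)
{
	u64 high = delta >> 32;
	u64 low  = delta & 0xffffffffULL;

	return (low * mult >> 16) + (high * mult << (32 - 16));
}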
hmp_data.attr[i].show = hmp_show; hmp_data.attr[i].store = hmp_store; hmp_data.attr[i].attr.name = name; hmp_data.attr[i].value = value; hmp_data.attr[i].to_sysfs = to_sysfs; hmp_data.attr[i].from_sysfs = from_sysfs; + hmp_data.attr[i].to_sysfs_text = to_sysfs_text; hmp_data.attributes[i] = &hmp_data.attr[i].attr; hmp_data.attributes[i + 1] = NULL; } @@ -3982,40 +4014,59 @@ static int hmp_attr_init(void) { int ret; memset(&hmp_data, sizeof(hmp_data), 0); + hmp_attr_add("hmp_domains", + NULL, + NULL, + NULL, + hmp_print_domains, + 0444); + hmp_attr_add("up_threshold", + &hmp_up_threshold, + NULL, + hmp_theshold_from_sysfs, + NULL, + 0); + hmp_attr_add("down_threshold", + &hmp_down_threshold, + NULL, + hmp_theshold_from_sysfs, + NULL, + 0); +#ifdef CONFIG_HMP_VARIABLE_SCALE /* by default load_avg_period_ms == LOAD_AVG_PERIOD * meaning no change */ hmp_data.multiplier = hmp_period_tofrom_sysfs(LOAD_AVG_PERIOD); - hmp_attr_add("load_avg_period_ms", &hmp_data.multiplier, hmp_period_tofrom_sysfs, - hmp_period_tofrom_sysfs); - hmp_attr_add("up_threshold", - &hmp_up_threshold, + hmp_period_tofrom_sysfs, NULL, - hmp_theshold_from_sysfs); - hmp_attr_add("down_threshold", - &hmp_down_threshold, - NULL, - hmp_theshold_from_sysfs); + 0); +#endif #ifdef CONFIG_HMP_FREQUENCY_INVARIANT_SCALE /* default frequency-invariant scaling ON */ hmp_data.freqinvar_load_scale_enabled = 1; hmp_attr_add("frequency_invariant_load_scale", &hmp_data.freqinvar_load_scale_enabled, NULL, - hmp_freqinvar_from_sysfs); + hmp_toggle_from_sysfs, + NULL, + 0); #endif #ifdef CONFIG_SCHED_HMP_LITTLE_PACKING hmp_attr_add("packing_enable", &hmp_packing_enabled, NULL, - hmp_freqinvar_from_sysfs); + hmp_toggle_from_sysfs, + NULL, + 0); hmp_attr_add("packing_limit", &hmp_full_threshold, NULL, - hmp_packing_from_sysfs); + hmp_packing_from_sysfs, + NULL, + 0); #endif hmp_data.attr_group.name = "hmp"; hmp_data.attr_group.attrs = hmp_data.attributes; @@ -4024,7 +4075,6 @@ static int hmp_attr_init(void) return 0; } late_initcall(hmp_attr_init); -#endif /* CONFIG_HMP_VARIABLE_SCALE */ /* * return the load of the lowest-loaded CPU in a given HMP domain * min_cpu optionally points to an int to receive the CPU. @@ -6915,6 +6965,69 @@ out_unlock: return 0; } +/* + * Move a task in a runnable state to another CPU. + * + * Modelled on 'active_load_balance_cpu_stop' with slight + * modifications to locking and pre-transfer checks. Note + * rq->lock must be held before calling. + */ +static void hmp_migrate_runnable_task(struct rq *rq) +{ + struct sched_domain *sd; + int src_cpu = cpu_of(rq); + struct rq *src_rq = rq; + int dst_cpu = rq->push_cpu; + struct rq *dst_rq = cpu_rq(dst_cpu); + struct task_struct *p = rq->migrate_task; + /* + * One last check to make sure nobody else is playing + * with the source rq.
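The to_sysfs_text hook added above lets an attribute such as hmp_domains emit free-form text instead of going through the integer value/conversion path. A minimal user-space model of that show() dispatch — the struct is simplified and the cpumask string is made up:

#include <stdio.h>

struct attr {
	const char *name;
	int *value;
	int (*to_sysfs)(int);
	int (*to_sysfs_text)(char *buf, int buf_size);
};

static int threshold = 512;

static int domains_text(char *buf, int buf_size)
{
	/* a made-up cpumask list standing in for hmp_print_domains() */
	return snprintf(buf, buf_size, "f0 0f\n");
}

/* Mirror of the hmp_show() logic: prefer the text formatter, else
 * read the int and optionally convert it for presentation. */
static int show(const struct attr *a, char *buf, int buf_size)
{
	int v;

	if (a->to_sysfs_text)
		return a->to_sysfs_text(buf, buf_size);
	v = *a->value;
	if (a->to_sysfs)
		v = a->to_sysfs(v);
	return snprintf(buf, buf_size, "%d\n", v);
}

int main(void)
{
	const struct attr attrs[] = {
		{ "hmp_domains", NULL, NULL, domains_text },
		{ "up_threshold", &threshold, NULL, NULL },
	};
	char buf[64];
	int i;

	for (i = 0; i < 2; i++) {
		show(&attrs[i], buf, sizeof(buf));
		printf("%s: %s", attrs[i].name, buf);
	}
	return 0;
}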
+ */ + if (src_rq->active_balance) + return; + + if (src_rq->nr_running <= 1) + return; + + if (task_rq(p) != src_rq) + return; + /* + * Not sure if this applies here but one can never + * be too cautious + */ + BUG_ON(src_rq == dst_rq); + + double_lock_balance(src_rq, dst_rq); + + rcu_read_lock(); + for_each_domain(dst_cpu, sd) { + if (cpumask_test_cpu(src_cpu, sched_domain_span(sd))) + break; + } + + if (likely(sd)) { + struct lb_env env = { + .sd = sd, + .dst_cpu = dst_cpu, + .dst_rq = dst_rq, + .src_cpu = src_cpu, + .src_rq = src_rq, + .idle = CPU_IDLE, + }; + + schedstat_inc(sd, alb_count); + + if (move_specific_task(&env, p)) + schedstat_inc(sd, alb_pushed); + else + schedstat_inc(sd, alb_failed); + } + + rcu_read_unlock(); + double_unlock_balance(src_rq, dst_rq); +} + static DEFINE_SPINLOCK(hmp_force_migration); /* @@ -6927,13 +7040,14 @@ static void hmp_force_up_migration(int this_cpu) struct sched_entity *curr, *orig; struct rq *target; unsigned long flags; - unsigned int force; + unsigned int force, got_target; struct task_struct *p; if (!spin_trylock(&hmp_force_migration)) return; for_each_online_cpu(cpu) { force = 0; + got_target = 0; target = cpu_rq(cpu); raw_spin_lock_irqsave(&target->lock, flags); curr = target->cfs.curr; @@ -6956,15 +7070,14 @@ static void hmp_force_up_migration(int this_cpu) if (hmp_up_migration(cpu, &target_cpu, curr)) { if (!target->active_balance) { get_task_struct(p); - target->active_balance = 1; target->push_cpu = target_cpu; target->migrate_task = p; - force = 1; + got_target = 1; trace_sched_hmp_migrate(p, target->push_cpu, HMP_MIGRATE_FORCE); hmp_next_up_delay(&p->se, target->push_cpu); } } - if (!force && !target->active_balance) { + if (!got_target && !target->active_balance) { /* * For now we just check the currently running task. * Selecting the lightest task for offloading will @@ -6975,14 +7088,29 @@ static void hmp_force_up_migration(int this_cpu) target->push_cpu = hmp_offload_down(cpu, curr); if (target->push_cpu < NR_CPUS) { get_task_struct(p); - target->active_balance = 1; target->migrate_task = p; - force = 1; + got_target = 1; trace_sched_hmp_migrate(p, target->push_cpu, HMP_MIGRATE_OFFLOAD); hmp_next_down_delay(&p->se, target->push_cpu); } } + /* + * We have a target with no active_balance. If the task + * is not currently running move it, otherwise let the + * CPU stopper take care of it. + */ + if (got_target && !target->active_balance) { + if (!task_running(target, p)) { + trace_sched_hmp_migrate_force_running(p, 0); + hmp_migrate_runnable_task(target); + } else { + target->active_balance = 1; + force = 1; + } + } + raw_spin_unlock_irqrestore(&target->lock, flags); + if (force) stop_one_cpu_nowait(cpu_of(target), hmp_active_task_migration_cpu_stop, @@ -7002,7 +7130,7 @@ static unsigned int hmp_idle_pull(int this_cpu) int cpu; struct sched_entity *curr, *orig; struct hmp_domain *hmp_domain = NULL; - struct rq *target, *rq; + struct rq *target = NULL, *rq; unsigned long flags, ratio = 0; unsigned int force = 0; struct task_struct *p = NULL; @@ -7054,14 +7182,25 @@ static unsigned int hmp_idle_pull(int this_cpu) raw_spin_lock_irqsave(&target->lock, flags); if (!target->active_balance && task_rq(p) == target) { get_task_struct(p); - target->active_balance = 1; target->push_cpu = this_cpu; target->migrate_task = p; - force = 1; trace_sched_hmp_migrate(p, target->push_cpu, HMP_MIGRATE_IDLE_PULL); hmp_next_up_delay(&p->se, target->push_cpu); + /* + * if the task isn't running move it right away. 
+ * Otherwise set up the active_balance mechanism and let + * the CPU stopper do its job. + */ + if (!task_running(target, p)) { + trace_sched_hmp_migrate_idle_running(p, 0); + hmp_migrate_runnable_task(target); + } else { + target->active_balance = 1; + force = 1; + } } raw_spin_unlock_irqrestore(&target->lock, flags); + if (force) { stop_one_cpu_nowait(cpu_of(target), hmp_idle_pull_cpu_stop, diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c index c431a16f1866..3abfc7b31afb 100644 --- a/kernel/trace/trace.c +++ b/kernel/trace/trace.c @@ -827,9 +827,12 @@ int trace_get_user(struct trace_parser *parser, const char __user *ubuf, if (isspace(ch)) { parser->buffer[parser->idx] = 0; parser->cont = false; - } else { + } else if (parser->idx < parser->size - 1) { parser->cont = true; parser->buffer[parser->idx++] = ch; + } else { + ret = -EINVAL; + goto out; } *ppos += read; diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c index 84b1e045faba..8354dc81ae64 100644 --- a/kernel/trace/trace_event_perf.c +++ b/kernel/trace/trace_event_perf.c @@ -26,7 +26,7 @@ static int perf_trace_event_perm(struct ftrace_event_call *tp_event, { /* The ftrace function trace is allowed only for root. */ if (ftrace_event_is_function(tp_event) && - perf_paranoid_kernel() && !capable(CAP_SYS_ADMIN)) + perf_paranoid_tracepoint_raw() && !capable(CAP_SYS_ADMIN)) return -EPERM; /* No tracing, just counting, so no obvious leak */ diff --git a/mm/slub.c b/mm/slub.c index 57707f01bcfb..c34bd44e8be9 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -1201,8 +1201,8 @@ static unsigned long kmem_cache_flags(unsigned long object_size, /* * Enable debugging if selected on the kernel commandline. */ - if (slub_debug && (!slub_debug_slabs || - !strncmp(slub_debug_slabs, name, strlen(slub_debug_slabs)))) + if (slub_debug && (!slub_debug_slabs || (name && + !strncmp(slub_debug_slabs, name, strlen(slub_debug_slabs))))) flags |= slub_debug; return flags; diff --git a/net/batman-adv/main.c b/net/batman-adv/main.c index 51aafd669cbb..f1cb1f56cda9 100644 --- a/net/batman-adv/main.c +++ b/net/batman-adv/main.c @@ -61,6 +61,7 @@ static int __init batadv_init(void) batadv_recv_handler_init(); batadv_iv_init(); + batadv_nc_init(); batadv_event_workqueue = create_singlethread_workqueue("bat_events"); @@ -138,7 +139,7 @@ int batadv_mesh_init(struct net_device *soft_iface) if (ret < 0) goto err; - ret = batadv_nc_init(bat_priv); + ret = batadv_nc_mesh_init(bat_priv); if (ret < 0) goto err; @@ -163,7 +164,7 @@ void batadv_mesh_free(struct net_device *soft_iface) batadv_vis_quit(bat_priv); batadv_gw_node_purge(bat_priv); - batadv_nc_free(bat_priv); + batadv_nc_mesh_free(bat_priv); batadv_dat_free(bat_priv); batadv_bla_free(bat_priv); diff --git a/net/batman-adv/network-coding.c b/net/batman-adv/network-coding.c index e84629ece9b7..f97aeee2201c 100644 --- a/net/batman-adv/network-coding.c +++ b/net/batman-adv/network-coding.c @@ -34,6 +34,20 @@ static void batadv_nc_worker(struct work_struct *work); static int batadv_nc_recv_coded_packet(struct sk_buff *skb, struct batadv_hard_iface *recv_if); +/** + * batadv_nc_init - one-time initialization for network coding + */ +int __init batadv_nc_init(void) +{ + int ret; + + /* Register our packet type */ + ret = batadv_recv_handler_register(BATADV_CODED, + batadv_nc_recv_coded_packet); + + return ret; +} + /** * batadv_nc_start_timer - initialise the nc periodic worker * @bat_priv: the bat priv with all the soft interface information @@ -45,10 +59,10 @@ static void
batadv_nc_start_timer(struct batadv_priv *bat_priv) } /** - * batadv_nc_init - initialise coding hash table and start house keeping + * batadv_nc_mesh_init - initialise coding hash table and start house keeping * @bat_priv: the bat priv with all the soft interface information */ -int batadv_nc_init(struct batadv_priv *bat_priv) +int batadv_nc_mesh_init(struct batadv_priv *bat_priv) { bat_priv->nc.timestamp_fwd_flush = jiffies; bat_priv->nc.timestamp_sniffed_purge = jiffies; @@ -70,11 +84,6 @@ int batadv_nc_init(struct batadv_priv *bat_priv) batadv_hash_set_lock_class(bat_priv->nc.coding_hash, &batadv_nc_decoding_hash_lock_class_key); - /* Register our packet type */ - if (batadv_recv_handler_register(BATADV_CODED, - batadv_nc_recv_coded_packet) < 0) - goto err; - INIT_DELAYED_WORK(&bat_priv->nc.work, batadv_nc_worker); batadv_nc_start_timer(bat_priv); @@ -1722,12 +1731,11 @@ free_nc_packet: } /** - * batadv_nc_free - clean up network coding memory + * batadv_nc_mesh_free - clean up network coding memory * @bat_priv: the bat priv with all the soft interface information */ -void batadv_nc_free(struct batadv_priv *bat_priv) +void batadv_nc_mesh_free(struct batadv_priv *bat_priv) { - batadv_recv_handler_unregister(BATADV_CODED); cancel_delayed_work_sync(&bat_priv->nc.work); batadv_nc_purge_paths(bat_priv, bat_priv->nc.coding_hash, NULL); diff --git a/net/batman-adv/network-coding.h b/net/batman-adv/network-coding.h index 4fa6d0caddbd..bd4295fb960f 100644 --- a/net/batman-adv/network-coding.h +++ b/net/batman-adv/network-coding.h @@ -22,8 +22,9 @@ #ifdef CONFIG_BATMAN_ADV_NC -int batadv_nc_init(struct batadv_priv *bat_priv); -void batadv_nc_free(struct batadv_priv *bat_priv); +int batadv_nc_init(void); +int batadv_nc_mesh_init(struct batadv_priv *bat_priv); +void batadv_nc_mesh_free(struct batadv_priv *bat_priv); void batadv_nc_update_nc_node(struct batadv_priv *bat_priv, struct batadv_orig_node *orig_node, struct batadv_orig_node *orig_neigh_node, @@ -47,12 +48,17 @@ int batadv_nc_init_debugfs(struct batadv_priv *bat_priv); #else /* ifdef CONFIG_BATMAN_ADV_NC */ -static inline int batadv_nc_init(struct batadv_priv *bat_priv) +static inline int batadv_nc_init(void) { return 0; } -static inline void batadv_nc_free(struct batadv_priv *bat_priv) +static inline int batadv_nc_mesh_init(struct batadv_priv *bat_priv) +{ + return 0; +} + +static inline void batadv_nc_mesh_free(struct batadv_priv *bat_priv) { return; } diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c index 44db78ae6a65..f97101b4d373 100644 --- a/net/core/flow_dissector.c +++ b/net/core/flow_dissector.c @@ -40,7 +40,7 @@ again: struct iphdr _iph; ip: iph = skb_header_pointer(skb, nhoff, sizeof(_iph), &_iph); - if (!iph) + if (!iph || iph->ihl < 5) return false; if (ip_is_fragment(iph)) diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c index c52fee0976da..64e4e98c8786 100644 --- a/net/ipv4/ip_gre.c +++ b/net/ipv4/ip_gre.c @@ -335,7 +335,7 @@ static int ipgre_rcv(struct sk_buff *skb) iph->saddr, iph->daddr, tpi.key); if (tunnel) { - ip_tunnel_rcv(tunnel, skb, &tpi, log_ecn_error); + ip_tunnel_rcv(tunnel, skb, &tpi, hdr_len, log_ecn_error); return 0; } icmp_send(skb, ICMP_DEST_UNREACH, ICMP_PORT_UNREACH, 0); diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c index 92d2f0f5d7bf..46dcf32c012e 100644 --- a/net/ipv4/ip_tunnel.c +++ b/net/ipv4/ip_tunnel.c @@ -402,7 +402,7 @@ static struct ip_tunnel *ip_tunnel_create(struct net *net, } int ip_tunnel_rcv(struct ip_tunnel *tunnel, struct sk_buff *skb, - const struct 
tnl_ptk_info *tpi, bool log_ecn_error) + const struct tnl_ptk_info *tpi, int hdr_len, bool log_ecn_error) { struct pcpu_tstats *tstats; const struct iphdr *iph = ip_hdr(skb); @@ -413,7 +413,7 @@ int ip_tunnel_rcv(struct ip_tunnel *tunnel, struct sk_buff *skb, skb->protocol = tpi->proto; skb->mac_header = skb->network_header; - __pskb_pull(skb, tunnel->hlen); + __pskb_pull(skb, hdr_len); skb_postpull_rcsum(skb, skb_transport_header(skb), tunnel->hlen); #ifdef CONFIG_NET_IPGRE_BROADCAST if (ipv4_is_multicast(iph->daddr)) { diff --git a/net/ipv4/ipip.c b/net/ipv4/ipip.c index 7cfc45624b6d..f5cc7b331511 100644 --- a/net/ipv4/ipip.c +++ b/net/ipv4/ipip.c @@ -195,7 +195,7 @@ static int ipip_rcv(struct sk_buff *skb) if (tunnel) { if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb)) goto drop; - return ip_tunnel_rcv(tunnel, skb, &tpi, log_ecn_error); + return ip_tunnel_rcv(tunnel, skb, &tpi, 0, log_ecn_error); } return -1; diff --git a/net/ipv6/route.c b/net/ipv6/route.c index 3c1f493ccc63..548a1f7c1a29 100644 --- a/net/ipv6/route.c +++ b/net/ipv6/route.c @@ -1084,10 +1084,13 @@ static struct dst_entry *ip6_dst_check(struct dst_entry *dst, u32 cookie) if (rt->rt6i_genid != rt_genid(dev_net(rt->dst.dev))) return NULL; - if (rt->rt6i_node && (rt->rt6i_node->fn_sernum == cookie)) - return dst; + if (!rt->rt6i_node || (rt->rt6i_node->fn_sernum != cookie)) + return NULL; - return NULL; + if (rt6_check_expired(rt)) + return NULL; + + return dst; } static struct dst_entry *ip6_negative_advice(struct dst_entry *dst) diff --git a/net/netfilter/xt_qtaguid.c b/net/netfilter/xt_qtaguid.c index e476b88f9d68..4a16829969a6 100644 --- a/net/netfilter/xt_qtaguid.c +++ b/net/netfilter/xt_qtaguid.c @@ -1496,7 +1496,7 @@ static const struct file_operations proc_iface_stat_fmt_fops = { .open = proc_iface_stat_fmt_open, .read = seq_read, .llseek = seq_lseek, - .release = seq_release, + .release = seq_release_private, }; static int __init iface_stat_init(struct proc_dir_entry *parent_procdir) @@ -2904,7 +2904,7 @@ static const struct file_operations proc_qtaguid_ctrl_fops = { .read = seq_read, .write = qtaguid_ctrl_proc_write, .llseek = seq_lseek, - .release = seq_release, + .release = seq_release_private, }; static const struct seq_operations proc_qtaguid_stats_seqops = { diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c index 426f8fcc4c6c..5b1bf7b530f1 100644 --- a/net/sunrpc/clnt.c +++ b/net/sunrpc/clnt.c @@ -1407,9 +1407,9 @@ call_refreshresult(struct rpc_task *task) return; case -ETIMEDOUT: rpc_delay(task, 3*HZ); - case -EKEYEXPIRED: case -EAGAIN: status = -EACCES; + case -EKEYEXPIRED: if (!task->tk_cred_retry) break; task->tk_cred_retry--; diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c index ffd50348a509..8a0e04d0928a 100644 --- a/net/sunrpc/xprtsock.c +++ b/net/sunrpc/xprtsock.c @@ -391,8 +391,10 @@ static int xs_send_kvec(struct socket *sock, struct sockaddr *addr, int addrlen, return kernel_sendmsg(sock, &msg, NULL, 0, 0); } -static int xs_send_pagedata(struct socket *sock, struct xdr_buf *xdr, unsigned int base, int more) +static int xs_send_pagedata(struct socket *sock, struct xdr_buf *xdr, unsigned int base, int more, bool zerocopy) { + ssize_t (*do_sendpage)(struct socket *sock, struct page *page, + int offset, size_t size, int flags); struct page **ppage; unsigned int remainder; int err, sent = 0; @@ -401,6 +403,9 @@ static int xs_send_pagedata(struct socket *sock, struct xdr_buf *xdr, unsigned i base += xdr->page_base; ppage = xdr->pages + (base >> PAGE_SHIFT); base &= ~PAGE_MASK; + 
do_sendpage = sock->ops->sendpage; + if (!zerocopy) + do_sendpage = sock_no_sendpage; for(;;) { unsigned int len = min_t(unsigned int, PAGE_SIZE - base, remainder); int flags = XS_SENDMSG_FLAGS; @@ -408,7 +413,7 @@ static int xs_send_pagedata(struct socket *sock, struct xdr_buf *xdr, unsigned i remainder -= len; if (remainder != 0 || more) flags |= MSG_MORE; - err = sock->ops->sendpage(sock, *ppage, base, len, flags); + err = do_sendpage(sock, *ppage, base, len, flags); if (remainder == 0 || err != len) break; sent += err; @@ -429,9 +434,10 @@ static int xs_send_pagedata(struct socket *sock, struct xdr_buf *xdr, unsigned i * @addrlen: UDP only -- length of destination address * @xdr: buffer containing this request * @base: starting position in the buffer + * @zerocopy: true if it is safe to use sendpage() * */ -static int xs_sendpages(struct socket *sock, struct sockaddr *addr, int addrlen, struct xdr_buf *xdr, unsigned int base) +static int xs_sendpages(struct socket *sock, struct sockaddr *addr, int addrlen, struct xdr_buf *xdr, unsigned int base, bool zerocopy) { unsigned int remainder = xdr->len - base; int err, sent = 0; @@ -459,7 +465,7 @@ static int xs_sendpages(struct socket *sock, struct sockaddr *addr, int addrlen, if (base < xdr->page_len) { unsigned int len = xdr->page_len - base; remainder -= len; - err = xs_send_pagedata(sock, xdr, base, remainder != 0); + err = xs_send_pagedata(sock, xdr, base, remainder != 0, zerocopy); if (remainder == 0 || err != len) goto out; sent += err; @@ -562,7 +568,7 @@ static int xs_local_send_request(struct rpc_task *task) req->rq_svec->iov_base, req->rq_svec->iov_len); status = xs_sendpages(transport->sock, NULL, 0, - xdr, req->rq_bytes_sent); + xdr, req->rq_bytes_sent, true); dprintk("RPC: %s(%u) = %d\n", __func__, xdr->len - req->rq_bytes_sent, status); if (likely(status >= 0)) { @@ -618,7 +624,7 @@ static int xs_udp_send_request(struct rpc_task *task) status = xs_sendpages(transport->sock, xs_addr(xprt), xprt->addrlen, xdr, - req->rq_bytes_sent); + req->rq_bytes_sent, true); dprintk("RPC: xs_udp_send_request(%u) = %d\n", xdr->len - req->rq_bytes_sent, status); @@ -689,6 +695,7 @@ static int xs_tcp_send_request(struct rpc_task *task) struct rpc_xprt *xprt = req->rq_xprt; struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt); struct xdr_buf *xdr = &req->rq_snd_buf; + bool zerocopy = true; int status; xs_encode_stream_record_marker(&req->rq_snd_buf); @@ -696,13 +703,20 @@ static int xs_tcp_send_request(struct rpc_task *task) xs_pktdump("packet data:", req->rq_svec->iov_base, req->rq_svec->iov_len); + /* Don't use zero copy if this is a resend. If the RPC call + * completes while the socket holds a reference to the pages, + * then we may end up resending corrupted data. + */ + if (task->tk_flags & RPC_TASK_SENT) + zerocopy = false; /* Continue transmitting the packet/record. We must be careful * to cope with writespace callbacks arriving _after_ we have * called sendmsg(). 
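The zerocopy plumbing above selects the page-transmit routine once, before the loop, and the RPC_TASK_SENT check clears the flag on resends so the socket never holds references to pages the RPC layer may rewrite. A rough user-space sketch of the selection pattern — send_zerocopy and send_copy stand in for sock->ops->sendpage and sock_no_sendpage:

#include <stdio.h>

typedef int (*send_fn)(const char *buf, int len);

/* stand-in for sock->ops->sendpage: transmit pages by reference */
static int send_zerocopy(const char *buf, int len)
{
	printf("zerocopy send: %.*s\n", len, buf);
	return len;
}

/* stand-in for sock_no_sendpage: copy the data into the socket */
static int send_copy(const char *buf, int len)
{
	printf("copied send:   %.*s\n", len, buf);
	return len;
}

static int send_pages(const char *data, int len, int zerocopy)
{
	/* pick the routine once, outside the transmit loop, as
	 * xs_send_pagedata() now does with do_sendpage */
	send_fn do_send = zerocopy ? send_zerocopy : send_copy;

	return do_send(data, len);
}

int main(void)
{
	send_pages("payload", 7, 1); /* first transmission */
	send_pages("payload", 7, 0); /* resend must not pin pages */
	return 0;
}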
*/ while (1) { status = xs_sendpages(transport->sock, - NULL, 0, xdr, req->rq_bytes_sent); + NULL, 0, xdr, req->rq_bytes_sent, + zerocopy); dprintk("RPC: xs_tcp_send_request(%u) = %d\n", xdr->len - req->rq_bytes_sent, status); diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c index 399433ad614e..a9c3d3cd1990 100644 --- a/security/integrity/ima/ima_policy.c +++ b/security/integrity/ima/ima_policy.c @@ -73,7 +73,6 @@ static struct ima_rule_entry default_rules[] = { {.action = DONT_MEASURE,.fsmagic = SYSFS_MAGIC,.flags = IMA_FSMAGIC}, {.action = DONT_MEASURE,.fsmagic = DEBUGFS_MAGIC,.flags = IMA_FSMAGIC}, {.action = DONT_MEASURE,.fsmagic = TMPFS_MAGIC,.flags = IMA_FSMAGIC}, - {.action = DONT_MEASURE,.fsmagic = RAMFS_MAGIC,.flags = IMA_FSMAGIC}, {.action = DONT_MEASURE,.fsmagic = DEVPTS_SUPER_MAGIC,.flags = IMA_FSMAGIC}, {.action = DONT_MEASURE,.fsmagic = BINFMTFS_MAGIC,.flags = IMA_FSMAGIC}, {.action = DONT_MEASURE,.fsmagic = SECURITYFS_MAGIC,.flags = IMA_FSMAGIC}, diff --git a/sound/core/compress_offload.c b/sound/core/compress_offload.c index 5863ba6dd12b..19799931c51d 100644 --- a/sound/core/compress_offload.c +++ b/sound/core/compress_offload.c @@ -668,14 +668,48 @@ static int snd_compr_stop(struct snd_compr_stream *stream) return -EPERM; retval = stream->ops->trigger(stream, SNDRV_PCM_TRIGGER_STOP); if (!retval) { - stream->runtime->state = SNDRV_PCM_STATE_SETUP; - wake_up(&stream->runtime->sleep); + snd_compr_drain_notify(stream); stream->runtime->total_bytes_available = 0; stream->runtime->total_bytes_transferred = 0; } return retval; } +static int snd_compress_wait_for_drain(struct snd_compr_stream *stream) +{ + int ret; + + /* + * We are called with the lock held, so drop it while we wait for + * the drain complete notification from the driver. + * + * The driver is expected to notify drain completion, after which the + * stream moves to the SETUP state, even if draining resulted in an + * error. We can trigger the next track after this. + */ + stream->runtime->state = SNDRV_PCM_STATE_DRAINING; + mutex_unlock(&stream->device->lock); + + /* We wait for the drain to complete here; the wait can return on + * an interruption, an error, or success.
+ * For the first two cases we don't do anything different here and + * return after waking up + */ + + ret = wait_event_interruptible(stream->runtime->sleep, + (stream->runtime->state != SNDRV_PCM_STATE_DRAINING)); + if (ret == -ERESTARTSYS) + pr_debug("wait aborted by a signal"); + else if (ret) + pr_debug("wait for drain failed with %d\n", ret); + + + wake_up(&stream->runtime->sleep); + mutex_lock(&stream->device->lock); + + return ret; +} + static int snd_compr_drain(struct snd_compr_stream *stream) { int retval; @@ -683,12 +717,15 @@ static int snd_compr_drain(struct snd_compr_stream *stream) if (stream->runtime->state == SNDRV_PCM_STATE_PREPARED || stream->runtime->state == SNDRV_PCM_STATE_SETUP) return -EPERM; + retval = stream->ops->trigger(stream, SND_COMPR_TRIGGER_DRAIN); - if (!retval) { - stream->runtime->state = SNDRV_PCM_STATE_DRAINING; + if (retval) { + pr_debug("SND_COMPR_TRIGGER_DRAIN failed %d\n", retval); wake_up(&stream->runtime->sleep); + return retval; } - return retval; + + return snd_compress_wait_for_drain(stream); } static int snd_compr_next_track(struct snd_compr_stream *stream) @@ -724,9 +761,14 @@ static int snd_compr_partial_drain(struct snd_compr_stream *stream) return -EPERM; retval = stream->ops->trigger(stream, SND_COMPR_TRIGGER_PARTIAL_DRAIN); + if (retval) { + pr_debug("Partial drain returned failure\n"); + wake_up(&stream->runtime->sleep); + return retval; + } stream->next_track = false; - return retval; + return snd_compress_wait_for_drain(stream); } static long snd_compr_ioctl(struct file *f, unsigned int cmd, unsigned long arg) diff --git a/sound/isa/msnd/msnd_pinnacle.c b/sound/isa/msnd/msnd_pinnacle.c index ddabb406b14c..3a7946ebbe23 100644 --- a/sound/isa/msnd/msnd_pinnacle.c +++ b/sound/isa/msnd/msnd_pinnacle.c @@ -73,9 +73,11 @@ #ifdef MSND_CLASSIC # include "msnd_classic.h" # define LOGNAME "msnd_classic" +# define DEV_NAME "msnd-classic" #else # include "msnd_pinnacle.h" # define LOGNAME "snd_msnd_pinnacle" +# define DEV_NAME "msnd-pinnacle" #endif static void set_default_audio_parameters(struct snd_msnd *chip) @@ -1068,8 +1070,6 @@ static int snd_msnd_isa_remove(struct device *pdev, unsigned int dev) return 0; } -#define DEV_NAME "msnd-pinnacle" - static struct isa_driver snd_msnd_driver = { .match = snd_msnd_isa_match, .probe = snd_msnd_isa_probe, diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c index 31461ba32d3c..aeefec74a061 100644 --- a/sound/pci/hda/hda_codec.c +++ b/sound/pci/hda/hda_codec.c @@ -2517,9 +2517,6 @@ int snd_hda_codec_reset(struct hda_codec *codec) cancel_delayed_work_sync(&codec->jackpoll_work); #ifdef CONFIG_PM cancel_delayed_work_sync(&codec->power_work); - codec->power_on = 0; - codec->power_transition = 0; - codec->power_jiffies = jiffies; flush_workqueue(bus->workq); #endif snd_hda_ctls_clear(codec); @@ -3927,6 +3924,10 @@ static void hda_call_codec_resume(struct hda_codec *codec) * in the resume / power-save sequence */ hda_keep_power_on(codec); + if (codec->pm_down_notified) { + codec->pm_down_notified = 0; + hda_call_pm_notify(codec->bus, true); + } hda_set_power_state(codec, AC_PWRST_D0); restore_shutup_pins(codec); hda_exec_init_verbs(codec); diff --git a/sound/pci/hda/hda_generic.c b/sound/pci/hda/hda_generic.c index d0cc796f778a..26ed56f00b7c 100644 --- a/sound/pci/hda/hda_generic.c +++ b/sound/pci/hda/hda_generic.c @@ -786,10 +786,10 @@ static void set_pin_eapd(struct hda_codec *codec, hda_nid_t pin, bool enable) if (spec->own_eapd_ctl || !(snd_hda_query_pin_caps(codec, pin) & 
AC_PINCAP_EAPD)) return; - if (codec->inv_eapd) - enable = !enable; if (spec->keep_eapd_on && !enable) return; + if (codec->inv_eapd) + enable = !enable; snd_hda_codec_update_cache(codec, pin, 0, AC_VERB_SET_EAPD_BTLENABLE, enable ? 0x02 : 0x00); diff --git a/sound/pci/hda/patch_analog.c b/sound/pci/hda/patch_analog.c index d97f0d61a15b..e17b55a95bc5 100644 --- a/sound/pci/hda/patch_analog.c +++ b/sound/pci/hda/patch_analog.c @@ -1197,8 +1197,12 @@ static int alloc_ad_spec(struct hda_codec *codec) static void ad_fixup_inv_jack_detect(struct hda_codec *codec, const struct hda_fixup *fix, int action) { - if (action == HDA_FIXUP_ACT_PRE_PROBE) + struct ad198x_spec *spec = codec->spec; + + if (action == HDA_FIXUP_ACT_PRE_PROBE) { codec->inv_jack_detect = 1; + spec->gen.keep_eapd_on = 1; + } } enum { diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c index c96e1945059d..1868d3a6e310 100644 --- a/sound/pci/hda/patch_conexant.c +++ b/sound/pci/hda/patch_conexant.c @@ -3491,6 +3491,8 @@ static const struct hda_codec_preset snd_hda_preset_conexant[] = { .patch = patch_conexant_auto }, { .id = 0x14f15115, .name = "CX20757", .patch = patch_conexant_auto }, + { .id = 0x14f151d7, .name = "CX20952", + .patch = patch_conexant_auto }, {} /* terminator */ }; @@ -3517,6 +3519,7 @@ MODULE_ALIAS("snd-hda-codec-id:14f15111"); MODULE_ALIAS("snd-hda-codec-id:14f15113"); MODULE_ALIAS("snd-hda-codec-id:14f15114"); MODULE_ALIAS("snd-hda-codec-id:14f15115"); +MODULE_ALIAS("snd-hda-codec-id:14f151d7"); MODULE_LICENSE("GPL"); MODULE_DESCRIPTION("Conexant HD-audio codec"); diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c index aecf088f40af..b93799201578 100644 --- a/sound/pci/hda/patch_hdmi.c +++ b/sound/pci/hda/patch_hdmi.c @@ -738,9 +738,10 @@ static int hdmi_manual_setup_channel_mapping(struct hda_codec *codec, static void hdmi_setup_fake_chmap(unsigned char *map, int ca) { int i; + int ordered_ca = get_channel_allocation_order(ca); for (i = 0; i < 8; i++) { - if (i < channel_allocations[ca].channels) - map[i] = from_cea_slot((hdmi_channel_mapping[ca][i] >> 4) & 0x0f); + if (i < channel_allocations[ordered_ca].channels) + map[i] = from_cea_slot(hdmi_channel_mapping[ca][i] & 0x0f); else map[i] = 0; } diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index 4496e0ab693d..8bce044583ed 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -1037,6 +1037,7 @@ enum { ALC880_FIXUP_UNIWILL, ALC880_FIXUP_UNIWILL_DIG, ALC880_FIXUP_Z71V, + ALC880_FIXUP_ASUS_W5A, ALC880_FIXUP_3ST_BASE, ALC880_FIXUP_3ST, ALC880_FIXUP_3ST_DIG, @@ -1207,6 +1208,26 @@ static const struct hda_fixup alc880_fixups[] = { { } } }, + [ALC880_FIXUP_ASUS_W5A] = { + .type = HDA_FIXUP_PINS, + .v.pins = (const struct hda_pintbl[]) { + /* set up the whole pins as BIOS is utterly broken */ + { 0x14, 0x0121411f }, /* HP */ + { 0x15, 0x411111f0 }, /* N/A */ + { 0x16, 0x411111f0 }, /* N/A */ + { 0x17, 0x411111f0 }, /* N/A */ + { 0x18, 0x90a60160 }, /* mic */ + { 0x19, 0x411111f0 }, /* N/A */ + { 0x1a, 0x411111f0 }, /* N/A */ + { 0x1b, 0x411111f0 }, /* N/A */ + { 0x1c, 0x411111f0 }, /* N/A */ + { 0x1d, 0x411111f0 }, /* N/A */ + { 0x1e, 0xb743111e }, /* SPDIF out */ + { } + }, + .chained = true, + .chain_id = ALC880_FIXUP_GPIO1, + }, [ALC880_FIXUP_3ST_BASE] = { .type = HDA_FIXUP_PINS, .v.pins = (const struct hda_pintbl[]) { @@ -1328,6 +1349,7 @@ static const struct hda_fixup alc880_fixups[] = { static const struct snd_pci_quirk alc880_fixup_tbl[] = { 
SND_PCI_QUIRK(0x1019, 0x0f69, "Coeus G610P", ALC880_FIXUP_W810), + SND_PCI_QUIRK(0x1043, 0x10c3, "ASUS W5A", ALC880_FIXUP_ASUS_W5A), SND_PCI_QUIRK(0x1043, 0x1964, "ASUS Z71V", ALC880_FIXUP_Z71V), SND_PCI_QUIRK_VENDOR(0x1043, "ASUS", ALC880_FIXUP_GPIO1), SND_PCI_QUIRK(0x1558, 0x5401, "Clevo GPIO2", ALC880_FIXUP_GPIO2), @@ -1473,6 +1495,7 @@ enum { ALC260_FIXUP_KN1, ALC260_FIXUP_FSC_S7020, ALC260_FIXUP_FSC_S7020_JWSE, + ALC260_FIXUP_VAIO_PINS, }; static void alc260_gpio1_automute(struct hda_codec *codec) @@ -1613,6 +1636,24 @@ static const struct hda_fixup alc260_fixups[] = { .chained = true, .chain_id = ALC260_FIXUP_FSC_S7020, }, + [ALC260_FIXUP_VAIO_PINS] = { + .type = HDA_FIXUP_PINS, + .v.pins = (const struct hda_pintbl[]) { + /* Pin configs are missing completely on some VAIOs */ + { 0x0f, 0x01211020 }, + { 0x10, 0x0001003f }, + { 0x11, 0x411111f0 }, + { 0x12, 0x01a15930 }, + { 0x13, 0x411111f0 }, + { 0x14, 0x411111f0 }, + { 0x15, 0x411111f0 }, + { 0x16, 0x411111f0 }, + { 0x17, 0x411111f0 }, + { 0x18, 0x411111f0 }, + { 0x19, 0x411111f0 }, + { } + } + }, }; static const struct snd_pci_quirk alc260_fixup_tbl[] = { @@ -1621,6 +1662,8 @@ static const struct snd_pci_quirk alc260_fixup_tbl[] = { SND_PCI_QUIRK(0x1025, 0x008f, "Acer", ALC260_FIXUP_GPIO1), SND_PCI_QUIRK(0x103c, 0x280a, "HP dc5750", ALC260_FIXUP_HP_DC5750), SND_PCI_QUIRK(0x103c, 0x30ba, "HP Presario B1900", ALC260_FIXUP_HP_B1900), + SND_PCI_QUIRK(0x104d, 0x81bb, "Sony VAIO", ALC260_FIXUP_VAIO_PINS), + SND_PCI_QUIRK(0x104d, 0x81e2, "Sony VAIO TX", ALC260_FIXUP_HP_PIN_0F), SND_PCI_QUIRK(0x10cf, 0x1326, "FSC LifeBook S7020", ALC260_FIXUP_FSC_S7020), SND_PCI_QUIRK(0x1509, 0x4540, "Favorit 100XS", ALC260_FIXUP_GPIO1), SND_PCI_QUIRK(0x152d, 0x0729, "Quanta KN1", ALC260_FIXUP_KN1), @@ -2380,6 +2423,7 @@ static const struct hda_verb alc268_beep_init_verbs[] = { enum { ALC268_FIXUP_INV_DMIC, ALC268_FIXUP_HP_EAPD, + ALC268_FIXUP_SPDIF, }; static const struct hda_fixup alc268_fixups[] = { @@ -2394,6 +2438,13 @@ static const struct hda_fixup alc268_fixups[] = { {} } }, + [ALC268_FIXUP_SPDIF] = { + .type = HDA_FIXUP_PINS, + .v.pins = (const struct hda_pintbl[]) { + { 0x1e, 0x014b1180 }, /* enable SPDIF out */ + {} + } + }, }; static const struct hda_model_fixup alc268_fixup_models[] = { @@ -2403,6 +2454,7 @@ static const struct hda_model_fixup alc268_fixup_models[] = { }; static const struct snd_pci_quirk alc268_fixup_tbl[] = { + SND_PCI_QUIRK(0x1025, 0x0139, "Acer TravelMate 6293", ALC268_FIXUP_SPDIF), SND_PCI_QUIRK(0x1025, 0x015b, "Acer AOA 150 (ZG5)", ALC268_FIXUP_INV_DMIC), /* below is codec SSID since multiple Toshiba laptops have the * same PCI SSID 1179:ff00 @@ -2531,6 +2583,7 @@ enum { ALC269_TYPE_ALC282, ALC269_TYPE_ALC284, ALC269_TYPE_ALC286, + ALC269_TYPE_ALC255, }; /* @@ -2555,6 +2608,7 @@ static int alc269_parse_auto_config(struct hda_codec *codec) case ALC269_TYPE_ALC269VD: case ALC269_TYPE_ALC282: case ALC269_TYPE_ALC286: + case ALC269_TYPE_ALC255: ssids = alc269_ssids; break; default: @@ -2754,6 +2808,23 @@ static void alc269_fixup_mic_mute_hook(void *private_data, int enabled) snd_hda_set_pin_ctl_cache(codec, spec->mute_led_nid, pinval); } +/* Make sure the led works even in runtime suspend */ +static unsigned int led_power_filter(struct hda_codec *codec, + hda_nid_t nid, + unsigned int power_state) +{ + struct alc_spec *spec = codec->spec; + + if (power_state != AC_PWRST_D3 || nid != spec->mute_led_nid) + return power_state; + + /* Set pin ctl again, it might have just been set to 0 */ + snd_hda_set_pin_ctl(codec, nid, + 
snd_hda_codec_get_pin_target(codec, nid)); + + return AC_PWRST_D0; +} + static void alc269_fixup_hp_mute_led(struct hda_codec *codec, const struct hda_fixup *fix, int action) { @@ -2773,6 +2844,7 @@ static void alc269_fixup_hp_mute_led(struct hda_codec *codec, spec->mute_led_nid = pin - 0x0a + 0x18; spec->gen.vmaster_mute.hook = alc269_fixup_mic_mute_hook; spec->gen.vmaster_mute_enum = 1; + codec->power_filter = led_power_filter; snd_printd("Detected mute LED for %x:%d\n", spec->mute_led_nid, spec->mute_led_polarity); break; @@ -2788,6 +2860,7 @@ static void alc269_fixup_hp_mute_led_mic1(struct hda_codec *codec, spec->mute_led_nid = 0x18; spec->gen.vmaster_mute.hook = alc269_fixup_mic_mute_hook; spec->gen.vmaster_mute_enum = 1; + codec->power_filter = led_power_filter; } } @@ -2800,6 +2873,7 @@ static void alc269_fixup_hp_mute_led_mic2(struct hda_codec *codec, spec->mute_led_nid = 0x19; spec->gen.vmaster_mute.hook = alc269_fixup_mic_mute_hook; spec->gen.vmaster_mute_enum = 1; + codec->power_filter = led_power_filter; } } @@ -3040,8 +3114,10 @@ static void alc_update_headset_mode(struct hda_codec *codec) else new_headset_mode = ALC_HEADSET_MODE_HEADPHONE; - if (new_headset_mode == spec->current_headset_mode) + if (new_headset_mode == spec->current_headset_mode) { + snd_hda_gen_update_outputs(codec); return; + } switch (new_headset_mode) { case ALC_HEADSET_MODE_UNPLUGGED: @@ -3545,6 +3621,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { SND_PCI_QUIRK(0x1028, 0x0608, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1028, 0x0609, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1028, 0x0613, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), + SND_PCI_QUIRK(0x1028, 0x0614, "Dell Inspiron 3135", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1028, 0x0616, "Dell Vostro 5470", ALC290_FIXUP_MONO_SPEAKERS), SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2), SND_PCI_QUIRK(0x103c, 0x18e6, "HP", ALC269_FIXUP_HP_GPIO_LED), @@ -3765,6 +3842,9 @@ static int patch_alc269(struct hda_codec *codec) case 0x10ec0286: spec->codec_variant = ALC269_TYPE_ALC286; break; + case 0x10ec0255: + spec->codec_variant = ALC269_TYPE_ALC255; + break; } /* automatic parse from the BIOS config */ @@ -4472,6 +4552,7 @@ static int patch_alc680(struct hda_codec *codec) static const struct hda_codec_preset snd_hda_preset_realtek[] = { { .id = 0x10ec0221, .name = "ALC221", .patch = patch_alc269 }, { .id = 0x10ec0233, .name = "ALC233", .patch = patch_alc269 }, + { .id = 0x10ec0255, .name = "ALC255", .patch = patch_alc269 }, { .id = 0x10ec0260, .name = "ALC260", .patch = patch_alc260 }, { .id = 0x10ec0262, .name = "ALC262", .patch = patch_alc262 }, { .id = 0x10ec0267, .name = "ALC267", .patch = patch_alc268 }, diff --git a/sound/usb/6fire/chip.c b/sound/usb/6fire/chip.c index 4394ae796356..0716ba691398 100644 --- a/sound/usb/6fire/chip.c +++ b/sound/usb/6fire/chip.c @@ -101,7 +101,7 @@ static int usb6fire_chip_probe(struct usb_interface *intf, usb_set_intfdata(intf, chips[i]); mutex_unlock(®ister_mutex); return 0; - } else if (regidx < 0) + } else if (!devices[i] && regidx < 0) regidx = i; } if (regidx < 0) { diff --git a/virt/kvm/iommu.c b/virt/kvm/iommu.c index 72a130bc448a..c329c8fc57f4 100644 --- a/virt/kvm/iommu.c +++ b/virt/kvm/iommu.c @@ -103,6 +103,10 @@ int kvm_iommu_map_pages(struct kvm *kvm, struct kvm_memory_slot *slot) while ((gfn << PAGE_SHIFT) & (page_size - 1)) page_size >>= 1; + /* Make sure hva is aligned to the page size we want to map */ + while 
(__gfn_to_hva_memslot(slot, gfn) & (page_size - 1)) + page_size >>= 1; + /* * Pin all pages we are about to map in memory. This is * important because we unmap and unpin in 4kb steps later.
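For the final hunk: kvm_iommu_map_pages() must shrink the candidate IOMMU page size until both the guest-physical offset and, with this fix, the host virtual address are aligned to it; otherwise a large mapping could cover host memory outside the slot's backing. A small sketch of the alignment walk — fit_page_size and the addresses are illustrative, not kernel code:

#include <stdio.h>
#include <stdint.h>

/* Shrink page_size (a power of two) until it divides both the
 * guest-physical address and the host virtual address, mirroring
 * the two while-loops in kvm_iommu_map_pages(). */
static uint64_t fit_page_size(uint64_t gpa, uint64_t hva,
			      uint64_t page_size)
{
	while (gpa & (page_size - 1))
		page_size >>= 1;
	while (hva & (page_size - 1))
		page_size >>= 1;
	return page_size;
}

int main(void)
{
	/* gpa is 2MiB-aligned but hva only 4KiB-aligned, so the
	 * mapping must fall back to 4KiB pages */
	uint64_t sz = fit_page_size(0x200000, 0x7f0000001000, 0x200000);

	printf("usable page size: %llu KiB\n",
	       (unsigned long long)(sz / 1024));
	return 0;
}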