mirror of
https://github.com/hardkernel/linux.git
synced 2026-04-11 07:28:10 +09:00
Merge remote-tracking branch 'origin/upstream/linux-linaro-lsk-v3.10-android+android-common-3.10' into develop-3.10
Documentation/arm/small_task_packing.txt (new file, 136 lines)
@@ -0,0 +1,136 @@
+Small Task Packing in the big.LITTLE MP Reference Patch Set
+
+What is small task packing?
+----
+Simply that the scheduler will fit as many small tasks on a single CPU
+as possible before using other CPUs. A small task is defined as one
+whose tracked load is less than 90% of a NICE_0 task. This is a change
+from the usual behaviour, since the scheduler will normally use an idle
+CPU for a waking task unless that task is considered cache hot.
+
+
+How is it implemented?
+----
+Since all small tasks must wake up relatively frequently, the main
+requirement for packing small tasks is to select a partly-busy CPU when
+waking rather than looking for an idle CPU. We use the tracked load of
+the CPU runqueue to determine how heavily loaded each CPU is, and the
+tracked load of the task to determine whether it will fit on the CPU. We
+always start with the lowest-numbered CPU in a sched domain and stop
+looking when we find a CPU with enough space for the task.
+
+Some further tweaks are necessary to suppress load balancing when the
+CPU is not fully loaded; otherwise the scheduler attempts to spread
+tasks evenly across the domain.
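[Editor's sketch] The selection rule described above can be modelled in a few lines of plain C. This is an illustration only, not the patch set's actual code; NR_LITTLE_CPUS, the array-based runqueue loads, and the packing_limit parameter are stand-ins for the kernel's internal structures:

#define NR_LITTLE_CPUS	4

/* Plain-C model of the packing rule (loads are on the 0..1023 scale
 * used throughout this document). */
static int find_packing_cpu(const unsigned int rq_load[NR_LITTLE_CPUS],
			    unsigned int task_load,
			    unsigned int packing_limit)
{
	int cpu;

	/* Walk CPUs from the lowest-numbered upwards and stop at the
	 * first one with enough space below the 'full' threshold. */
	for (cpu = 0; cpu < NR_LITTLE_CPUS; cpu++)
		if (rq_load[cpu] + task_load <= packing_limit)
			return cpu;

	return -1;	/* no partly-busy CPU has room; pick an idle CPU */
}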
+
+
+How does it interact with the HMP patches?
+----
+Firstly, we only enable packing on the little domain. The intent is that
+the big domain spreads tasks amongst the available CPUs
+one-task-per-CPU. The little domain, however, attempts to use as
+little power as possible while servicing its tasks.
+
+Secondly, since we offload big tasks onto little CPUs in order to try
+to devote one CPU to each task, we have a threshold above which we do
+not try to pack a task and instead will select an idle CPU if possible.
+This maintains maximum forward progress for busy tasks temporarily
+demoted from big CPUs.
+
+
+Can the behaviour be tuned?
+----
+Yes, the load level of a 'full' CPU can be easily modified in the source
+and is exposed through sysfs as /sys/kernel/hmp/packing_limit to be
+changed at runtime. The presence of the packing behaviour is controlled
+by CONFIG_SCHED_HMP_LITTLE_PACKING and can be disabled at run time
+using /sys/kernel/hmp/packing_enable.
+The definition of a small task is hard-coded as 90% of NICE_0_LOAD
+and cannot be modified at run time.
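[Editor's sketch] Both knobs are ordinary sysfs attributes, so they can be driven from userspace. A minimal sketch, assuming CONFIG_SCHED_HMP_LITTLE_PACKING is enabled and root privileges; the value 450 is the one revisited in the worked example below:

#include <stdio.h>

static int write_sysfs(const char *path, int value)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return -1;
	}
	fprintf(f, "%d\n", value);
	return fclose(f);
}

int main(void)
{
	write_sysfs("/sys/kernel/hmp/packing_enable", 1);
	write_sysfs("/sys/kernel/hmp/packing_limit", 450);	/* ~44% of 1024 */
	return 0;
}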
+
+
+Why do I need to tune it?
+----
+The optimal configuration is likely to be different depending upon the
+design and manufacturing of your SoC.
+
+In the main, there are two system effects from enabling small task
+packing.
+
+1. CPU operating point may increase
+2. wakeup latency of tasks may be increased
+
+There are also likely to be secondary effects from loading one CPU
+rather than spreading tasks.
+
+Note that all of these system effects are dependent upon the workload
+under consideration.
+
+
+CPU Operating Point
+----
+The primary impact of loading one CPU with a number of light tasks is to
+increase the compute requirement of that CPU, since it is no longer idle
+as often. Increased compute requirement causes an increase in the
+frequency of the CPU through CPUfreq.
+
+Consider this example:
+We have a system with 3 CPUs which can operate at any frequency between
+350MHz and 1GHz. The system has 6 tasks which would each produce 10%
+load at 1GHz. The scheduler has frequency-invariant load scaling
+enabled. Our DVFS governor aims for 80% utilization at the chosen
+frequency.
+
+Without task packing, these tasks will be spread out amongst all CPUs
+such that each has 2. This will produce roughly 20% system load, and
+the frequency of the package will remain at 350MHz.
+
+With task packing set to the default packing_limit, all of these tasks
+will sit on one CPU and require a package frequency of ~750MHz to reach
+80% utilization (0.75 = 0.6 / 0.8).
+
+When a package operates on a single frequency domain, all CPUs in that
+package share frequency and voltage.
+
+Depending upon the SoC implementation there can be a significant amount
+of energy lost to leakage from idle CPUs. The decision about how
+loaded a CPU must be to be considered 'full' is therefore controllable
+through sysfs (/sys/kernel/hmp/packing_limit) and directly in the code.
+
+Continuing the example, let's set packing_limit to 450, which means we
+will pack tasks until the total load of all running tasks >= 450. In
+practice, this is very similar to a 55% idle 1GHz CPU.
+
+Now we are only able to place 4 tasks on CPU0, and two will overflow
+onto CPU1. CPU0 will have a load of 40% and CPU1 will have a load of
+20%. To still hit 80% utilization, CPU0 now needs to operate at
+(0.4 / 0.8 = 0.5) 500MHz, while CPU2 is no longer needed and can be
+power-gated.
+
+In order to use less energy, the saving from power-gating CPU2 must be
+more than the energy spent running CPU0 for the extra cycles. This
+depends upon the SoC implementation.
+
+This is obviously a contrived example requiring all the tasks to
+be runnable at the same time, but it illustrates the point.
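[Editor's sketch] The arithmetic in this example follows one rule: required frequency = tracked load / utilization target, with loads expressed as a fraction of capacity at the maximum frequency. A standalone check in plain C (not kernel code):

#include <stdio.h>

int main(void)
{
	const double f_max_mhz = 1000.0;	/* CPUs top out at 1GHz */
	const double task_load = 0.10;		/* each task: 10% load at 1GHz */
	const double util_target = 0.80;	/* DVFS governor aims for 80% */

	/* All 6 tasks packed on one CPU: 0.6 / 0.8 = 0.75 -> ~750MHz */
	printf("6 tasks: %.0f MHz\n", 6 * task_load / util_target * f_max_mhz);

	/* packing_limit = 450: 4 tasks fit on CPU0: 0.4 / 0.8 = 0.5 -> 500MHz */
	printf("4 tasks: %.0f MHz\n", 4 * task_load / util_target * f_max_mhz);
	return 0;
}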
+
+
+Wakeup Latency
+----
+This is an unavoidable consequence of trying to pack tasks together
+rather than giving them a CPU each. If you cannot find an acceptable
+level of wakeup latency, you should turn packing off.
+
+Cyclictest is a good test application for determining the added latency
+when configuring packing.
+
+
+Why is it turned off for the VersatileExpress V2P_CA15A7 CoreTile?
+----
+Simply, this core tile only has power gating for the whole A7 package.
+When small task packing is enabled, all our low-energy use cases
+normally fit onto one A7 CPU. We therefore end up with 2 mostly-idle
+CPUs and one mostly-busy CPU. This decreases the amount of time
+available where the whole package is idle and can be turned off.
Makefile (2 changed lines)
@@ -1,6 +1,6 @@
 VERSION = 3
 PATCHLEVEL = 10
-SUBLEVEL = 19
+SUBLEVEL = 21
 EXTRAVERSION =
 NAME = TOSSUG Baby Fish
 
@@ -1513,6 +1513,17 @@ config SCHED_HMP
 	  There is currently no support for migration of task groups, hence
 	  !SCHED_AUTOGROUP. Furthermore, normal load-balancing must be disabled
 	  between cpus of different type (DISABLE_CPU_SCHED_DOMAIN_BALANCE).
+	  When turned on, this option adds sys/kernel/hmp directory which
+	  contains the following files:
+	  up_threshold - the load average threshold used for up migration
+			 (0 - 1023)
+	  down_threshold - the load average threshold used for down migration
+			   (0 - 1023)
+	  hmp_domains - a list of cpumasks for the present HMP domains,
+			starting with the 'biggest' and ending with the
+			'smallest'.
+	  Note that both the threshold files can be written at runtime to
+	  control scheduler behaviour.
 
 config SCHED_HMP_PRIO_FILTER
 	bool "(EXPERIMENTAL) Filter HMP migrations by task priority"
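[Editor's sketch] The threshold files added by this help text are also writable sysfs attributes. A minimal userspace sketch; the threshold values are arbitrary illustrations on the 0..1023 scale, not recommendations:

#include <stdio.h>

int main(void)
{
	FILE *up = fopen("/sys/kernel/hmp/up_threshold", "w");
	FILE *down = fopen("/sys/kernel/hmp/down_threshold", "w");

	if (!up || !down)
		return 1;
	fprintf(up, "700\n");	/* migrate up above ~68% load */
	fprintf(down, "256\n");	/* migrate down below 25% load */
	fclose(up);
	fclose(down);
	return 0;
}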
@@ -1547,28 +1558,24 @@ config HMP_VARIABLE_SCALE
 	bool "Allows changing the load tracking scale through sysfs"
 	depends on SCHED_HMP
 	help
-	  When turned on, this option exports the thresholds and load average
-	  period value for the load tracking patches through sysfs.
+	  When turned on, this option exports the load average period value
+	  for the load tracking patches through sysfs.
 	  The values can be modified to change the rate of load accumulation
-	  and the thresholds used for HMP migration.
-	  The load_avg_period_ms is the time in ms to reach a load average of
-	  0.5 for an idle task of 0 load average ratio that start a busy loop.
-	  The up_threshold and down_threshold is the value to go to a faster
-	  CPU or to go back to a slower cpu.
-	  The {up,down}_threshold are devided by 1024 before being compared
-	  to the load average.
-	  For examples, with load_avg_period_ms = 128 and up_threshold = 512,
+	  used for HMP migration. 'load_avg_period_ms' is the time in ms to
+	  reach a load average of 0.5 for an idle task of 0 load average
+	  ratio which becomes 100% busy.
+	  For example, with load_avg_period_ms = 128 and up_threshold = 512,
 	  a running task with a load of 0 will be migrated to a bigger CPU after
 	  128ms, because after 128ms its load_avg_ratio is 0.5 and the real
 	  up_threshold is 0.5.
 	  This patch has the same behavior as changing the Y of the load
 	  average computation to
 		(1002/1024)^(LOAD_AVG_PERIOD/load_avg_period_ms)
-	  but it remove intermadiate overflows in computation.
+	  but removes intermediate overflows in computation.
 
 config HMP_FREQUENCY_INVARIANT_SCALE
 	bool "(EXPERIMENTAL) Frequency-Invariant Tracked Load for HMP"
-	depends on HMP_VARIABLE_SCALE && CPU_FREQ
+	depends on SCHED_HMP && CPU_FREQ
 	help
 	  Scales the current load contribution in line with the frequency
 	  of the CPU that the task was executed on.
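[Editor's sketch] The equivalence quoted in the help text above can be checked numerically: with y' = (1002/1024)^(LOAD_AVG_PERIOD/load_avg_period_ms) and LOAD_AVG_PERIOD = 32, the weight decays to roughly 0.5 after load_avg_period_ms milliseconds, because 1002/1024 is approximately 0.5^(1/32). A standalone sketch in plain C (compile with -lm):

#include <math.h>
#include <stdio.h>

int main(void)
{
	const double LOAD_AVG_PERIOD = 32.0;	/* kernel constant, in ms */
	const double period_ms = 128.0;		/* the load_avg_period_ms tunable */
	double y = pow(1002.0 / 1024.0, LOAD_AVG_PERIOD / period_ms);

	/* y^period_ms ~= 0.5: a task that just turned 100% busy reads a
	 * load average of ~0.5 after period_ms ms, as in the example. */
	printf("y = %.6f, y^%.0f = %.4f\n", y, period_ms, pow(y, period_ms));
	return 0;
}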
@@ -313,6 +313,17 @@ out:
 	return err;
 }
 
+static phys_addr_t kvm_kaddr_to_phys(void *kaddr)
+{
+	if (!is_vmalloc_addr(kaddr)) {
+		BUG_ON(!virt_addr_valid(kaddr));
+		return __pa(kaddr);
+	} else {
+		return page_to_phys(vmalloc_to_page(kaddr)) +
+		       offset_in_page(kaddr);
+	}
+}
+
 /**
  * create_hyp_mappings - duplicate a kernel virtual address range in Hyp mode
  * @from:	The virtual kernel start address of the range
@@ -324,16 +335,27 @@ out:
  */
 int create_hyp_mappings(void *from, void *to)
 {
-	unsigned long phys_addr = virt_to_phys(from);
+	phys_addr_t phys_addr;
+	unsigned long virt_addr;
 	unsigned long start = KERN_TO_HYP((unsigned long)from);
 	unsigned long end = KERN_TO_HYP((unsigned long)to);
 
-	/* Check for a valid kernel memory mapping */
-	if (!virt_addr_valid(from) || !virt_addr_valid(to - 1))
-		return -EINVAL;
+	start = start & PAGE_MASK;
+	end = PAGE_ALIGN(end);
 
-	return __create_hyp_mappings(hyp_pgd, start, end,
-				     __phys_to_pfn(phys_addr), PAGE_HYP);
+	for (virt_addr = start; virt_addr < end; virt_addr += PAGE_SIZE) {
+		int err;
+
+		phys_addr = kvm_kaddr_to_phys(from + virt_addr - start);
+		err = __create_hyp_mappings(hyp_pgd, virt_addr,
+					    virt_addr + PAGE_SIZE,
+					    __phys_to_pfn(phys_addr),
+					    PAGE_HYP);
+		if (err)
+			return err;
+	}
+
+	return 0;
 }
 
 /**
@@ -122,7 +122,15 @@ static void tc2_pm_down(u64 residency)
 	} else
 		BUG();
 
-	gic_cpu_if_down();
+	/*
+	 * If the CPU is committed to power down, make sure
+	 * the power controller will be in charge of waking it
+	 * up upon IRQ, ie IRQ lines are cut from GIC CPU IF
+	 * to the CPU by disabling the GIC CPU IF to prevent wfi
+	 * from completing execution behind power controller back
+	 */
+	if (!skip_wfi)
+		gic_cpu_if_down();
 
 	if (last_man && __mcpm_outbound_enter_critical(cpu, cluster)) {
 		arch_spin_unlock(&tc2_pm_lock);
@@ -3,6 +3,7 @@
 
 #include <asm/page.h>   /* for __va, __pa */
 #include <arch/io.h>
 #include <asm-generic/iomap.h>
+#include <linux/kernel.h>
 
 struct cris_io_operations
@@ -319,7 +319,7 @@ struct thread_struct {
 	regs->loadrs = 0;							\
 	regs->r8 = get_dumpable(current->mm);	/* set "don't zap registers" flag */	\
 	regs->r12 = new_sp - 16;	/* allocate 16 byte scratch area */	\
-	if (unlikely(!get_dumpable(current->mm))) {				\
+	if (unlikely(get_dumpable(current->mm) != SUID_DUMP_USER)) {		\
 	/*									\
 	 * Zap scratch regs to avoid leaking bits between processes with different \
 	 * uid/privileges.							\
@@ -454,7 +454,15 @@ static int save_user_regs(struct pt_regs *regs, struct mcontext __user *frame,
 		if (copy_vsx_to_user(&frame->mc_vsregs, current))
 			return 1;
 		msr |= MSR_VSX;
-	}
+	} else if (!ctx_has_vsx_region)
+		/*
+		 * With a small context structure we can't hold the VSX
+		 * registers, hence clear the MSR value to indicate the state
+		 * was not saved.
+		 */
+		msr &= ~MSR_VSX;
+
 #endif /* CONFIG_VSX */
 #ifdef CONFIG_SPE
 	/* save spe registers */
@@ -1530,12 +1530,12 @@ static ssize_t modalias_show(struct device *dev, struct device_attribute *attr,
 
 	dn = dev->of_node;
 	if (!dn) {
-		strcat(buf, "\n");
+		strcpy(buf, "\n");
 		return strlen(buf);
 	}
 	cp = of_get_property(dn, "compatible", NULL);
 	if (!cp) {
-		strcat(buf, "\n");
+		strcpy(buf, "\n");
 		return strlen(buf);
 	}
 
@@ -258,7 +258,7 @@ static bool slice_scan_available(unsigned long addr,
 		slice = GET_HIGH_SLICE_INDEX(addr);
 		*boundary_addr = (slice + end) ?
 			((slice + end) << SLICE_HIGH_SHIFT) : SLICE_LOW_TOP;
-		return !!(available.high_slices & (1u << slice));
+		return !!(available.high_slices & (1ul << slice));
 	}
 }
 
@@ -57,5 +57,5 @@ config PPC_MPC5200_BUGFIX
 
 config PPC_MPC5200_LPBFIFO
 	tristate "MPC5200 LocalPlus bus FIFO driver"
-	depends on PPC_MPC52xx
+	depends on PPC_MPC52xx && PPC_BESTCOMM
 	select PPC_BESTCOMM_GEN_BD
@@ -151,13 +151,23 @@ static int pnv_ioda_configure_pe(struct pnv_phb *phb, struct pnv_ioda_pe *pe)
 		rid_end = pe->rid + 1;
 	}
 
-	/* Associate PE in PELT */
+	/*
+	 * Associate PE in PELT. We need add the PE into the
+	 * corresponding PELT-V as well. Otherwise, the error
+	 * originated from the PE might contribute to other
+	 * PEs.
+	 */
 	rc = opal_pci_set_pe(phb->opal_id, pe->pe_number, pe->rid,
 			     bcomp, dcomp, fcomp, OPAL_MAP_PE);
 	if (rc) {
 		pe_err(pe, "OPAL error %ld trying to setup PELT table\n", rc);
 		return -ENXIO;
 	}
+
+	rc = opal_pci_set_peltv(phb->opal_id, pe->pe_number,
+				pe->pe_number, OPAL_ADD_PE_TO_DOMAIN);
+	if (rc)
+		pe_warn(pe, "OPAL error %d adding self to PELTV\n", rc);
 	opal_pci_eeh_freeze_clear(phb->opal_id, pe->pe_number,
 				  OPAL_EEH_ACTION_CLEAR_FREEZE_ALL);
 
@@ -35,7 +35,6 @@ static u8 *ctrblk;
 static char keylen_flag;
 
 struct s390_aes_ctx {
-	u8 iv[AES_BLOCK_SIZE];
 	u8 key[AES_MAX_KEY_SIZE];
 	long enc;
 	long dec;
@@ -441,30 +440,36 @@ static int cbc_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
 	return aes_set_key(tfm, in_key, key_len);
 }
 
-static int cbc_aes_crypt(struct blkcipher_desc *desc, long func, void *param,
+static int cbc_aes_crypt(struct blkcipher_desc *desc, long func,
 			 struct blkcipher_walk *walk)
 {
+	struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(desc->tfm);
 	int ret = blkcipher_walk_virt(desc, walk);
 	unsigned int nbytes = walk->nbytes;
+	struct {
+		u8 iv[AES_BLOCK_SIZE];
+		u8 key[AES_MAX_KEY_SIZE];
+	} param;
 
 	if (!nbytes)
 		goto out;
 
-	memcpy(param, walk->iv, AES_BLOCK_SIZE);
+	memcpy(param.iv, walk->iv, AES_BLOCK_SIZE);
+	memcpy(param.key, sctx->key, sctx->key_len);
 	do {
 		/* only use complete blocks */
 		unsigned int n = nbytes & ~(AES_BLOCK_SIZE - 1);
 		u8 *out = walk->dst.virt.addr;
 		u8 *in = walk->src.virt.addr;
 
-		ret = crypt_s390_kmc(func, param, out, in, n);
+		ret = crypt_s390_kmc(func, &param, out, in, n);
 		if (ret < 0 || ret != n)
 			return -EIO;
 
 		nbytes &= AES_BLOCK_SIZE - 1;
 		ret = blkcipher_walk_done(desc, walk, nbytes);
 	} while ((nbytes = walk->nbytes));
-	memcpy(walk->iv, param, AES_BLOCK_SIZE);
+	memcpy(walk->iv, param.iv, AES_BLOCK_SIZE);
 
 out:
 	return ret;
@@ -481,7 +486,7 @@ static int cbc_aes_encrypt(struct blkcipher_desc *desc,
 		return fallback_blk_enc(desc, dst, src, nbytes);
 
 	blkcipher_walk_init(&walk, dst, src, nbytes);
-	return cbc_aes_crypt(desc, sctx->enc, sctx->iv, &walk);
+	return cbc_aes_crypt(desc, sctx->enc, &walk);
 }
 
 static int cbc_aes_decrypt(struct blkcipher_desc *desc,
@@ -495,7 +500,7 @@ static int cbc_aes_decrypt(struct blkcipher_desc *desc,
 		return fallback_blk_dec(desc, dst, src, nbytes);
 
 	blkcipher_walk_init(&walk, dst, src, nbytes);
-	return cbc_aes_crypt(desc, sctx->dec, sctx->iv, &walk);
+	return cbc_aes_crypt(desc, sctx->dec, &walk);
 }
 
 static struct crypto_alg cbc_aes_alg = {
@@ -933,7 +933,7 @@ static ssize_t show_idle_count(struct device *dev,
 		idle_count = ACCESS_ONCE(idle->idle_count);
 		if (ACCESS_ONCE(idle->clock_idle_enter))
 			idle_count++;
-	} while ((sequence & 1) || (idle->sequence != sequence));
+	} while ((sequence & 1) || (ACCESS_ONCE(idle->sequence) != sequence));
 	return sprintf(buf, "%llu\n", idle_count);
 }
 static DEVICE_ATTR(idle_count, 0444, show_idle_count, NULL);
@@ -951,7 +951,7 @@ static ssize_t show_idle_time(struct device *dev,
 		idle_time = ACCESS_ONCE(idle->idle_time);
 		idle_enter = ACCESS_ONCE(idle->clock_idle_enter);
 		idle_exit = ACCESS_ONCE(idle->clock_idle_exit);
-	} while ((sequence & 1) || (idle->sequence != sequence));
+	} while ((sequence & 1) || (ACCESS_ONCE(idle->sequence) != sequence));
 	idle_time += idle_enter ? ((idle_exit ? : now) - idle_enter) : 0;
 	return sprintf(buf, "%llu\n", idle_time >> 12);
 }
@@ -190,7 +190,7 @@ cputime64_t s390_get_idle_time(int cpu)
 		sequence = ACCESS_ONCE(idle->sequence);
 		idle_enter = ACCESS_ONCE(idle->clock_idle_enter);
 		idle_exit = ACCESS_ONCE(idle->clock_idle_exit);
-	} while ((sequence & 1) || (idle->sequence != sequence));
+	} while ((sequence & 1) || (ACCESS_ONCE(idle->sequence) != sequence));
 	return idle_enter ? ((idle_exit ?: now) - idle_enter) : 0;
 }
@@ -248,6 +248,15 @@ int ftrace_update_ftrace_func(ftrace_func_t func)
 	return ret;
 }
 
+static int is_ftrace_caller(unsigned long ip)
+{
+	if (ip == (unsigned long)(&ftrace_call) ||
+	    ip == (unsigned long)(&ftrace_regs_call))
+		return 1;
+
+	return 0;
+}
+
 /*
  * A breakpoint was added to the code address we are about to
  * modify, and this is the handle that will just skip over it.
@@ -257,10 +266,13 @@ int ftrace_update_ftrace_func(ftrace_func_t func)
  */
 int ftrace_int3_handler(struct pt_regs *regs)
 {
+	unsigned long ip;
+
 	if (WARN_ON_ONCE(!regs))
 		return 0;
 
-	if (!ftrace_location(regs->ip - 1))
+	ip = regs->ip - 1;
+	if (!ftrace_location(ip) && !is_ftrace_caller(ip))
 		return 0;
 
 	regs->ip += MCOUNT_INSN_SIZE - 1;
@@ -430,7 +430,7 @@ static enum ucode_state request_microcode_amd(int cpu, struct device *device,
 	snprintf(fw_name, sizeof(fw_name), "amd-ucode/microcode_amd_fam%.2xh.bin", c->x86);
 
 	if (request_firmware(&fw, (const char *)fw_name, device)) {
-		pr_err("failed to load file %s\n", fw_name);
+		pr_debug("failed to load file %s\n", fw_name);
 		goto out;
 	}
 
@@ -378,9 +378,9 @@ static void amd_e400_idle(void)
 		 * The switch back from broadcast mode needs to be
 		 * called with interrupts disabled.
 		 */
-		 local_irq_disable();
-		 clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_EXIT, &cpu);
-		 local_irq_enable();
+		local_irq_disable();
+		clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_EXIT, &cpu);
+		local_irq_enable();
 	} else
 		default_idle();
 }
@@ -4207,7 +4207,10 @@ static int decode_operand(struct x86_emulate_ctxt *ctxt, struct operand *op,
 	case OpMem8:
 		ctxt->memop.bytes = 1;
 		if (ctxt->memop.type == OP_REG) {
-			ctxt->memop.addr.reg = decode_register(ctxt, ctxt->modrm_rm, 1);
+			int highbyte_regs = ctxt->rex_prefix == 0;
+
+			ctxt->memop.addr.reg = decode_register(ctxt, ctxt->modrm_rm,
+							       highbyte_regs);
 			fetch_register_operand(&ctxt->memop);
 		}
 		goto mem_common;
@@ -2229,6 +2229,7 @@ void blk_start_request(struct request *req)
 	if (unlikely(blk_bidi_rq(req)))
 		req->next_rq->resid_len = blk_rq_bytes(req->next_rq);
 
+	BUG_ON(test_bit(REQ_ATOM_COMPLETE, &req->atomic_flags));
 	blk_add_timer(req);
 }
 EXPORT_SYMBOL(blk_start_request);
@@ -144,6 +144,7 @@ void blk_set_stacking_limits(struct queue_limits *lim)
 	lim->discard_zeroes_data = 1;
 	lim->max_segments = USHRT_MAX;
 	lim->max_hw_sectors = UINT_MAX;
+	lim->max_segment_size = UINT_MAX;
 	lim->max_sectors = UINT_MAX;
 	lim->max_write_same_sectors = UINT_MAX;
 }
@@ -90,8 +90,8 @@ static void blk_rq_timed_out(struct request *req)
 		__blk_complete_request(req);
 		break;
 	case BLK_EH_RESET_TIMER:
-		blk_clear_rq_complete(req);
 		blk_add_timer(req);
+		blk_clear_rq_complete(req);
 		break;
 	case BLK_EH_NOT_HANDLED:
 		/*
@@ -173,7 +173,6 @@ void blk_add_timer(struct request *req)
 		return;
 
 	BUG_ON(!list_empty(&req->timeout_list));
-	BUG_ON(test_bit(REQ_ATOM_COMPLETE, &req->atomic_flags));
 
 	/*
 	 * Some LLDs, like scsi, peek at the timeout to prevent a
@@ -230,11 +230,11 @@ remainder:
 	 */
 	if (byte_count < DEFAULT_BLK_SZ) {
 empty_rbuf:
-		for (; ctx->rand_data_valid < DEFAULT_BLK_SZ;
-			ctx->rand_data_valid++) {
+		while (ctx->rand_data_valid < DEFAULT_BLK_SZ) {
 			*ptr = ctx->rand_data[ctx->rand_data_valid];
 			ptr++;
 			byte_count--;
+			ctx->rand_data_valid++;
 			if (byte_count == 0)
 				goto done;
 		}
@@ -963,10 +963,17 @@ acpi_status acpi_ex_opcode_1A_0T_1R(struct acpi_walk_state *walk_state)
 					 */
 					return_desc =
 					    *(operand[0]->reference.where);
-					if (return_desc) {
-						acpi_ut_add_reference
-						    (return_desc);
+					if (!return_desc) {
+						/*
+						 * Element is NULL, do not allow the dereference.
+						 * This provides compatibility with other ACPI
+						 * implementations.
+						 */
+						return_ACPI_STATUS
+						    (AE_AML_UNINITIALIZED_ELEMENT);
 					}
 
+					acpi_ut_add_reference(return_desc);
 					break;
 
 				default:
@@ -991,11 +998,40 @@ acpi_status acpi_ex_opcode_1A_0T_1R(struct acpi_walk_state *walk_state)
 							    acpi_namespace_node
 							    *)
 							   return_desc);
+			if (!return_desc) {
+				break;
+			}
+
+			/*
+			 * June 2013:
+			 * buffer_fields/field_units require additional resolution
+			 */
+			switch (return_desc->common.type) {
+			case ACPI_TYPE_BUFFER_FIELD:
+			case ACPI_TYPE_LOCAL_REGION_FIELD:
+			case ACPI_TYPE_LOCAL_BANK_FIELD:
+			case ACPI_TYPE_LOCAL_INDEX_FIELD:
+
+				status =
+				    acpi_ex_read_data_from_field
+				    (walk_state, return_desc,
+				     &temp_desc);
+				if (ACPI_FAILURE(status)) {
+					goto cleanup;
+				}
+
+				return_desc = temp_desc;
+				break;
+
+			default:
+
+				/* Add another reference to the object */
+
+				acpi_ut_add_reference
+				    (return_desc);
+				break;
+			}
 		}
 
-		/* Add another reference to the object! */
-
-		acpi_ut_add_reference(return_desc);
 		break;
 
 	default:
@@ -57,6 +57,11 @@ acpi_ex_store_object_to_index(union acpi_operand_object *val_desc,
 			      union acpi_operand_object *dest_desc,
 			      struct acpi_walk_state *walk_state);
 
+static acpi_status
+acpi_ex_store_direct_to_node(union acpi_operand_object *source_desc,
+			     struct acpi_namespace_node *node,
+			     struct acpi_walk_state *walk_state);
+
 /*******************************************************************************
 *
 * FUNCTION:    acpi_ex_store
@@ -376,7 +381,11 @@ acpi_ex_store_object_to_index(union acpi_operand_object *source_desc,
 *              When storing into an object the data is converted to the
 *              target object type then stored in the object. This means
 *              that the target object type (for an initialized target) will
-*              not be changed by a store operation.
+*              not be changed by a store operation. A copy_object can change
+*              the target type, however.
+*
+*              The implicit_conversion flag is set to NO/FALSE only when
+*              storing to an arg_x -- as per the rules of the ACPI spec.
 *
 *              Assumes parameters are already validated.
 *
@@ -400,7 +409,7 @@ acpi_ex_store_object_to_node(union acpi_operand_object *source_desc,
 	target_type = acpi_ns_get_type(node);
 	target_desc = acpi_ns_get_attached_object(node);
 
-	ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Storing %p(%s) into node %p(%s)\n",
+	ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Storing %p (%s) to node %p (%s)\n",
 			  source_desc,
 			  acpi_ut_get_object_type_name(source_desc), node,
 			  acpi_ut_get_type_name(target_type)));
@@ -414,46 +423,31 @@ acpi_ex_store_object_to_node(union acpi_operand_object *source_desc,
 		return_ACPI_STATUS(status);
 	}
 
-	/* If no implicit conversion, drop into the default case below */
-
-	if ((!implicit_conversion) ||
-	    ((walk_state->opcode == AML_COPY_OP) &&
-	     (target_type != ACPI_TYPE_LOCAL_REGION_FIELD) &&
-	     (target_type != ACPI_TYPE_LOCAL_BANK_FIELD) &&
-	     (target_type != ACPI_TYPE_LOCAL_INDEX_FIELD))) {
-		/*
-		 * Force execution of default (no implicit conversion). Note:
-		 * copy_object does not perform an implicit conversion, as per the ACPI
-		 * spec -- except in case of region/bank/index fields -- because these
-		 * objects must retain their original type permanently.
-		 */
-		target_type = ACPI_TYPE_ANY;
-	}
-
 	/* Do the actual store operation */
 
 	switch (target_type) {
-	case ACPI_TYPE_BUFFER_FIELD:
-	case ACPI_TYPE_LOCAL_REGION_FIELD:
-	case ACPI_TYPE_LOCAL_BANK_FIELD:
-	case ACPI_TYPE_LOCAL_INDEX_FIELD:
-
-		/* For fields, copy the source data to the target field. */
-
-		status = acpi_ex_write_data_to_field(source_desc, target_desc,
-						     &walk_state->result_obj);
-		break;
-
 	case ACPI_TYPE_INTEGER:
 	case ACPI_TYPE_STRING:
 	case ACPI_TYPE_BUFFER:
 
 		/*
-		 * These target types are all of type Integer/String/Buffer, and
-		 * therefore support implicit conversion before the store.
-		 *
-		 * Copy and/or convert the source object to a new target object
+		 * The simple data types all support implicit source operand
+		 * conversion before the store.
 		 */
+
+		if ((walk_state->opcode == AML_COPY_OP) || !implicit_conversion) {
+			/*
+			 * However, copy_object and Stores to arg_x do not perform
+			 * an implicit conversion, as per the ACPI specification.
+			 * A direct store is performed instead.
+			 */
+			status = acpi_ex_store_direct_to_node(source_desc, node,
+							      walk_state);
+			break;
+		}
+
+		/* Store with implicit source operand conversion support */
+
 		status =
 		    acpi_ex_store_object_to_object(source_desc, target_desc,
 						   &new_desc, walk_state);
@@ -467,13 +461,12 @@ acpi_ex_store_object_to_node(union acpi_operand_object *source_desc,
 			 * the Name's type to that of the value being stored in it.
 			 * source_desc reference count is incremented by attach_object.
 			 *
-			 * Note: This may change the type of the node if an explicit store
-			 * has been performed such that the node/object type has been
-			 * changed.
+			 * Note: This may change the type of the node if an explicit
+			 * store has been performed such that the node/object type
+			 * has been changed.
 			 */
-			status =
-			    acpi_ns_attach_object(node, new_desc,
-						  new_desc->common.type);
+			status = acpi_ns_attach_object(node, new_desc,
+						       new_desc->common.type);
 
 			ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
 					  "Store %s into %s via Convert/Attach\n",
@@ -484,38 +477,83 @@ acpi_ex_store_object_to_node(union acpi_operand_object *source_desc,
 		}
 		break;
 
+	case ACPI_TYPE_BUFFER_FIELD:
+	case ACPI_TYPE_LOCAL_REGION_FIELD:
+	case ACPI_TYPE_LOCAL_BANK_FIELD:
+	case ACPI_TYPE_LOCAL_INDEX_FIELD:
+		/*
+		 * For all fields, always write the source data to the target
+		 * field. Any required implicit source operand conversion is
+		 * performed in the function below as necessary. Note, field
+		 * objects must retain their original type permanently.
+		 */
+		status = acpi_ex_write_data_to_field(source_desc, target_desc,
+						     &walk_state->result_obj);
+		break;
+
 	default:
 
-		ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
-				  "Storing [%s] (%p) directly into node [%s] (%p)"
-				  " with no implicit conversion\n",
-				  acpi_ut_get_object_type_name(source_desc),
-				  source_desc,
-				  acpi_ut_get_object_type_name(target_desc),
-				  node));
-
 		/*
 		 * No conversions for all other types. Directly store a copy of
-		 * the source object. NOTE: This is a departure from the ACPI
-		 * spec, which states "If conversion is impossible, abort the
-		 * running control method".
+		 * the source object. This is the ACPI spec-defined behavior for
+		 * the copy_object operator.
 		 *
-		 * This code implements "If conversion is impossible, treat the
-		 * Store operation as a CopyObject".
+		 * NOTE: For the Store operator, this is a departure from the
+		 * ACPI spec, which states "If conversion is impossible, abort
+		 * the running control method". Instead, this code implements
+		 * "If conversion is impossible, treat the Store operation as
+		 * a CopyObject".
 		 */
-		status =
-		    acpi_ut_copy_iobject_to_iobject(source_desc, &new_desc,
-						    walk_state);
-		if (ACPI_FAILURE(status)) {
-			return_ACPI_STATUS(status);
-		}
-
-		status =
-		    acpi_ns_attach_object(node, new_desc,
-					  new_desc->common.type);
-		acpi_ut_remove_reference(new_desc);
+		status = acpi_ex_store_direct_to_node(source_desc, node,
+						      walk_state);
		break;
 	}
 
 	return_ACPI_STATUS(status);
 }
 
+/*******************************************************************************
+ *
+ * FUNCTION:    acpi_ex_store_direct_to_node
+ *
+ * PARAMETERS:  source_desc             - Value to be stored
+ *              node                    - Named object to receive the value
+ *              walk_state              - Current walk state
+ *
+ * RETURN:      Status
+ *
+ * DESCRIPTION: "Store" an object directly to a node. This involves a copy
+ *              and an attach.
+ *
+ ******************************************************************************/
+
+static acpi_status
+acpi_ex_store_direct_to_node(union acpi_operand_object *source_desc,
+			     struct acpi_namespace_node *node,
+			     struct acpi_walk_state *walk_state)
+{
+	acpi_status status;
+	union acpi_operand_object *new_desc;
+
+	ACPI_FUNCTION_TRACE(ex_store_direct_to_node);
+
+	ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
+			  "Storing [%s] (%p) directly into node [%s] (%p)"
+			  " with no implicit conversion\n",
+			  acpi_ut_get_object_type_name(source_desc),
+			  source_desc, acpi_ut_get_type_name(node->type),
+			  node));
+
+	/* Copy the source object to a new object */
+
+	status =
+	    acpi_ut_copy_iobject_to_iobject(source_desc, &new_desc, walk_state);
+	if (ACPI_FAILURE(status)) {
+		return_ACPI_STATUS(status);
+	}
+
+	/* Attach the new object to the node */
+
+	status = acpi_ns_attach_object(node, new_desc, new_desc->common.type);
+	acpi_ut_remove_reference(new_desc);
+	return_ACPI_STATUS(status);
+}
@@ -175,9 +175,10 @@ static void start_transaction(struct acpi_ec *ec)
 static void advance_transaction(struct acpi_ec *ec, u8 status)
 {
 	unsigned long flags;
-	struct transaction *t = ec->curr;
+	struct transaction *t;
 
 	spin_lock_irqsave(&ec->lock, flags);
+	t = ec->curr;
 	if (!t)
 		goto unlock;
 	if (t->wlen > t->wi) {
@@ -614,9 +614,12 @@ static void handle_root_bridge_removal(struct acpi_device *device)
 	ej_event->device = device;
 	ej_event->event = ACPI_NOTIFY_EJECT_REQUEST;
 
+	get_device(&device->dev);
 	status = acpi_os_hotplug_execute(acpi_bus_hot_remove_device, ej_event);
-	if (ACPI_FAILURE(status))
+	if (ACPI_FAILURE(status)) {
+		put_device(&device->dev);
 		kfree(ej_event);
+	}
 }
 
 static void _handle_hotplug_event_root(struct work_struct *work)
@@ -121,17 +121,10 @@ static struct dmi_system_id __cpuinitdata processor_power_dmi_table[] = {
  */
 static void acpi_safe_halt(void)
 {
-	current_thread_info()->status &= ~TS_POLLING;
-	/*
-	 * TS_POLLING-cleared state must be visible before we
-	 * test NEED_RESCHED:
-	 */
-	smp_mb();
-	if (!need_resched()) {
+	if (!tif_need_resched()) {
 		safe_halt();
 		local_irq_disable();
 	}
-	current_thread_info()->status |= TS_POLLING;
 }
 
 #ifdef ARCH_APICTIMER_STOPS_ON_C3
@@ -739,6 +732,11 @@ static int acpi_idle_enter_c1(struct cpuidle_device *dev,
 	if (unlikely(!pr))
 		return -EINVAL;
 
+	if (cx->entry_method == ACPI_CSTATE_FFH) {
+		if (current_set_polling_and_test())
+			return -EINVAL;
+	}
+
 	lapic_timer_state_broadcast(pr, cx, 1);
 	acpi_idle_do_entry(cx);
 
@@ -792,18 +790,9 @@ static int acpi_idle_enter_simple(struct cpuidle_device *dev,
 	if (unlikely(!pr))
 		return -EINVAL;
 
-	if (cx->entry_method != ACPI_CSTATE_FFH) {
-		current_thread_info()->status &= ~TS_POLLING;
-		/*
-		 * TS_POLLING-cleared state must be visible before we test
-		 * NEED_RESCHED:
-		 */
-		smp_mb();
-
-		if (unlikely(need_resched())) {
-			current_thread_info()->status |= TS_POLLING;
+	if (cx->entry_method == ACPI_CSTATE_FFH) {
+		if (current_set_polling_and_test())
 			return -EINVAL;
-		}
 	}
 
 	/*
@@ -821,9 +810,6 @@ static int acpi_idle_enter_simple(struct cpuidle_device *dev,
 
 	sched_clock_idle_wakeup_event(0);
 
-	if (cx->entry_method != ACPI_CSTATE_FFH)
-		current_thread_info()->status |= TS_POLLING;
-
 	lapic_timer_state_broadcast(pr, cx, 0);
 	return index;
 }
@@ -860,18 +846,9 @@ static int acpi_idle_enter_bm(struct cpuidle_device *dev,
 		}
 	}
 
-	if (cx->entry_method != ACPI_CSTATE_FFH) {
-		current_thread_info()->status &= ~TS_POLLING;
-		/*
-		 * TS_POLLING-cleared state must be visible before we test
-		 * NEED_RESCHED:
-		 */
-		smp_mb();
-
-		if (unlikely(need_resched())) {
-			current_thread_info()->status |= TS_POLLING;
+	if (cx->entry_method == ACPI_CSTATE_FFH) {
+		if (current_set_polling_and_test())
 			return -EINVAL;
-		}
 	}
 
 	acpi_unlazy_tlb(smp_processor_id());
@@ -917,9 +894,6 @@ static int acpi_idle_enter_bm(struct cpuidle_device *dev,
 
 	sched_clock_idle_wakeup_event(0);
 
-	if (cx->entry_method != ACPI_CSTATE_FFH)
-		current_thread_info()->status |= TS_POLLING;
-
 	lapic_timer_state_broadcast(pr, cx, 0);
 	return index;
 }
@@ -244,8 +244,6 @@ static void acpi_scan_bus_device_check(acpi_handle handle, u32 ost_source)
 			goto out;
 		}
 	}
-	acpi_evaluate_hotplug_ost(handle, ost_source,
-				  ACPI_OST_SC_INSERT_IN_PROGRESS, NULL);
 	error = acpi_bus_scan(handle);
 	if (error) {
 		acpi_handle_warn(handle, "Namespace scan failure\n");
@@ -846,7 +846,7 @@ acpi_video_init_brightness(struct acpi_video_device *device)
 		for (i = 2; i < br->count; i++)
 			if (level_old == br->levels[i])
 				break;
-		if (i == br->count)
+		if (i == br->count || !level)
 			level = max_level;
 	}
 
@@ -545,7 +545,7 @@ static struct kobject *brd_probe(dev_t dev, int *part, void *data)
 
 	mutex_lock(&brd_devices_mutex);
 	brd = brd_init_one(MINOR(dev) >> part_shift);
-	kobj = brd ? get_disk(brd->brd_disk) : ERR_PTR(-ENOMEM);
+	kobj = brd ? get_disk(brd->brd_disk) : NULL;
 	mutex_unlock(&brd_devices_mutex);
 
 	*part = 0;
@@ -1741,7 +1741,7 @@ static struct kobject *loop_probe(dev_t dev, int *part, void *data)
 	if (err < 0)
 		err = loop_add(&lo, MINOR(dev) >> part_shift);
 	if (err < 0)
-		kobj = ERR_PTR(err);
+		kobj = NULL;
 	else
 		kobj = get_disk(lo->lo_disk);
 	mutex_unlock(&loop_index_mutex);
@@ -21,7 +21,9 @@
 #include <linux/spinlock.h>
 #include <linux/notifier.h>
 #include <asm/cputime.h>
+#ifdef CONFIG_BL_SWITCHER
 #include <asm/bL_switcher.h>
+#endif
 
 static spinlock_t cpufreq_stats_lock;
 
@@ -448,6 +450,7 @@ static void cpufreq_stats_cleanup(void)
 	}
 }
 
+#ifdef CONFIG_BL_SWITCHER
 static int cpufreq_stats_switcher_notifier(struct notifier_block *nfb,
 					   unsigned long action, void *_arg)
 {
@@ -472,6 +475,7 @@ static int cpufreq_stats_switcher_notifier(struct notifier_block *nfb,
 static struct notifier_block switcher_notifier = {
 	.notifier_call = cpufreq_stats_switcher_notifier,
 };
+#endif
 
 static int __init cpufreq_stats_init(void)
 {
@@ -479,15 +483,18 @@ static int __init cpufreq_stats_init(void)
 	spin_lock_init(&cpufreq_stats_lock);
 
 	ret = cpufreq_stats_setup();
+#ifdef CONFIG_BL_SWITCHER
 	if (!ret)
 		bL_switcher_register_notifier(&switcher_notifier);
-
+#endif
 	return ret;
 }
 
 static void __exit cpufreq_stats_exit(void)
 {
+#ifdef CONFIG_BL_SWITCHER
 	bL_switcher_unregister_notifier(&switcher_notifier);
+#endif
 	cpufreq_stats_cleanup();
 }
@@ -551,9 +551,15 @@ static bool dmi_matches(const struct dmi_system_id *dmi)
 		int s = dmi->matches[i].slot;
 		if (s == DMI_NONE)
 			break;
-		if (dmi_ident[s]
-		    && strstr(dmi_ident[s], dmi->matches[i].substr))
-			continue;
+		if (dmi_ident[s]) {
+			if (!dmi->matches[i].exact_match &&
+			    strstr(dmi_ident[s], dmi->matches[i].substr))
+				continue;
+			else if (dmi->matches[i].exact_match &&
+				 !strcmp(dmi_ident[s], dmi->matches[i].substr))
+				continue;
+		}
+
 		/* No match */
 		return false;
 	}
@@ -869,6 +869,30 @@ static const struct dmi_system_id intel_no_lvds[] = {
 			DMI_MATCH(DMI_PRODUCT_NAME, "ESPRIMO Q900"),
 		},
 	},
+	{
+		.callback = intel_no_lvds_dmi_callback,
+		.ident = "Intel D410PT",
+		.matches = {
+			DMI_MATCH(DMI_BOARD_VENDOR, "Intel"),
+			DMI_MATCH(DMI_BOARD_NAME, "D410PT"),
+		},
+	},
+	{
+		.callback = intel_no_lvds_dmi_callback,
+		.ident = "Intel D425KT",
+		.matches = {
+			DMI_MATCH(DMI_BOARD_VENDOR, "Intel"),
+			DMI_EXACT_MATCH(DMI_BOARD_NAME, "D425KT"),
+		},
+	},
+	{
+		.callback = intel_no_lvds_dmi_callback,
+		.ident = "Intel D510MO",
+		.matches = {
+			DMI_MATCH(DMI_BOARD_VENDOR, "Intel"),
+			DMI_EXACT_MATCH(DMI_BOARD_NAME, "D510MO"),
+		},
+	},
 
 	{ }	/* terminating entry */
 };
@@ -36,6 +36,8 @@ nva3_hda_eld(struct nv50_disp_priv *priv, int or, u8 *data, u32 size)
 	if (data && data[0]) {
 		for (i = 0; i < size; i++)
 			nv_wr32(priv, 0x61c440 + soff, (i << 8) | data[i]);
+		for (; i < 0x60; i++)
+			nv_wr32(priv, 0x61c440 + soff, (i << 8));
 		nv_mask(priv, 0x61c448 + soff, 0x80000003, 0x80000003);
 	} else
 	if (data) {
@@ -41,6 +41,8 @@ nvd0_hda_eld(struct nv50_disp_priv *priv, int or, u8 *data, u32 size)
 	if (data && data[0]) {
 		for (i = 0; i < size; i++)
 			nv_wr32(priv, 0x10ec00 + soff, (i << 8) | data[i]);
+		for (; i < 0x60; i++)
+			nv_wr32(priv, 0x10ec00 + soff, (i << 8));
 		nv_mask(priv, 0x10ec10 + soff, 0x80000003, 0x80000003);
 	} else
 	if (data) {
@@ -47,14 +47,8 @@ int
 nv50_sor_mthd(struct nouveau_object *object, u32 mthd, void *args, u32 size)
 {
 	struct nv50_disp_priv *priv = (void *)object->engine;
-	struct nouveau_bios *bios = nouveau_bios(priv);
 	const u16 type = (mthd & NV50_DISP_SOR_MTHD_TYPE) >> 12;
 	const u8 head = (mthd & NV50_DISP_SOR_MTHD_HEAD) >> 3;
 	const u8 link = (mthd & NV50_DISP_SOR_MTHD_LINK) >> 2;
 	const u8 or = (mthd & NV50_DISP_SOR_MTHD_OR);
 	const u16 mask = (0x0100 << head) | (0x0040 << link) | (0x0001 << or);
-	struct dcb_output outp;
-	u8 ver, hdr;
 	u32 data;
 	int ret = -EINVAL;
 
@@ -62,8 +56,6 @@ nv50_sor_mthd(struct nouveau_object *object, u32 mthd, void *args, u32 size)
 		return -EINVAL;
 	data = *(u32 *)args;
 
-	if (type && !dcb_outp_match(bios, type, mask, &ver, &hdr, &outp))
-		return -ENODEV;
-
 	switch (mthd & ~0x3f) {
 	case NV50_DISP_SOR_PWR:
@@ -278,7 +278,7 @@ static const struct lm90_params lm90_params[] = {
 	[max6696] = {
 		.flags = LM90_HAVE_EMERGENCY
 		  | LM90_HAVE_EMERGENCY_ALARM | LM90_HAVE_TEMP3,
-		.alert_alarms = 0x187c,
+		.alert_alarms = 0x1c7c,
 		.max_convrate = 6,
 		.reg_local_ext = MAX6657_REG_R_LOCAL_TEMPL,
 	},
@@ -1500,19 +1500,22 @@ static void lm90_alert(struct i2c_client *client, unsigned int flag)
 	if ((alarms & 0x7f) == 0 && (alarms2 & 0xfe) == 0) {
 		dev_info(&client->dev, "Everything OK\n");
 	} else {
-		if (alarms & 0x61)
+		if ((alarms & 0x61) || (alarms2 & 0x80))
 			dev_warn(&client->dev,
 				 "temp%d out of range, please check!\n", 1);
-		if (alarms & 0x1a)
+		if ((alarms & 0x1a) || (alarms2 & 0x20))
 			dev_warn(&client->dev,
 				 "temp%d out of range, please check!\n", 2);
 		if (alarms & 0x04)
 			dev_warn(&client->dev,
 				 "temp%d diode open, please check!\n", 2);
 
-		if (alarms2 & 0x18)
+		if (alarms2 & 0x5a)
 			dev_warn(&client->dev,
 				 "temp%d out of range, please check!\n", 3);
+		if (alarms2 & 0x04)
+			dev_warn(&client->dev,
+				 "temp%d diode open, please check!\n", 3);
 
 		/*
 		 * Disable ALERT# output, because these chips don't implement
@@ -359,7 +359,7 @@ static int intel_idle(struct cpuidle_device *dev,
 	if (!(lapic_timer_reliable_states & (1 << (cstate))))
 		clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_ENTER, &cpu);
 
-	if (!need_resched()) {
+	if (!current_set_polling_and_test()) {
 
 		__monitor((void *)&current_thread_info()->flags, 0, 0);
 		smp_mb();
@@ -776,7 +776,7 @@ static int sh_vou_try_fmt_vid_out(struct file *file, void *priv,
 	v4l_bound_align_image(&pix->width, 0, VOU_MAX_IMAGE_WIDTH, 1,
 			      &pix->height, 0, VOU_MAX_IMAGE_HEIGHT, 1, 0);
 
-	for (i = 0; ARRAY_SIZE(vou_fmt); i++)
+	for (i = 0; i < ARRAY_SIZE(vou_fmt); i++)
 		if (vou_fmt[i].pfmt == pix->pixelformat)
 			return 0;
 
@@ -90,8 +90,10 @@ int pwm_channel_alloc(int index, struct pwm_channel *ch)
 	unsigned long	flags;
 	int		status = 0;
 
 	/* insist on PWM init, with this signal pinned out */
-	if (!pwm || !(pwm->mask & 1 << index))
+	if (!pwm)
 		return -EPROBE_DEFER;
 
+	if (!(pwm->mask & 1 << index))
+		return -ENODEV;
+
 	if (index < 0 || index >= PWM_NCHAN || !ch)
@@ -485,8 +485,11 @@ int mei_nfc_host_init(struct mei_device *dev)
 	if (ndev->cl_info)
 		return 0;
 
-	cl_info = mei_cl_allocate(dev);
-	cl = mei_cl_allocate(dev);
+	ndev->cl_info = mei_cl_allocate(dev);
+	ndev->cl = mei_cl_allocate(dev);
+
+	cl = ndev->cl;
+	cl_info = ndev->cl_info;
 
 	if (!cl || !cl_info) {
 		ret = -ENOMEM;
@@ -527,10 +530,9 @@ int mei_nfc_host_init(struct mei_device *dev)
 
 	cl->device_uuid = mei_nfc_guid;
 
-
 	list_add_tail(&cl->device_link, &dev->device_list);
 
-	ndev->cl_info = cl_info;
-	ndev->cl = cl;
 	ndev->req_id = 1;
 
 	INIT_WORK(&ndev->init_work, mei_nfc_init);
@@ -814,9 +814,6 @@ static int c_can_do_rx_poll(struct net_device *dev, int quota)
 			msg_ctrl_save = priv->read_reg(priv,
 					C_CAN_IFACE(MSGCTRL_REG, 0));
 
-			if (msg_ctrl_save & IF_MCONT_EOB)
-				return num_rx_pkts;
-
 			if (msg_ctrl_save & IF_MCONT_MSGLST) {
 				c_can_handle_lost_msg_obj(dev, 0, msg_obj);
 				num_rx_pkts++;
@@ -824,6 +821,9 @@ static int c_can_do_rx_poll(struct net_device *dev, int quota)
 				continue;
 			}
 
+			if (msg_ctrl_save & IF_MCONT_EOB)
+				return num_rx_pkts;
+
 			if (!(msg_ctrl_save & IF_MCONT_NEWDAT))
 				continue;
 
@@ -1544,9 +1544,9 @@ static int kvaser_usb_init_one(struct usb_interface *intf,
 	return 0;
 }
 
-static void kvaser_usb_get_endpoints(const struct usb_interface *intf,
-				     struct usb_endpoint_descriptor **in,
-				     struct usb_endpoint_descriptor **out)
+static int kvaser_usb_get_endpoints(const struct usb_interface *intf,
+				    struct usb_endpoint_descriptor **in,
+				    struct usb_endpoint_descriptor **out)
 {
 	const struct usb_host_interface *iface_desc;
 	struct usb_endpoint_descriptor *endpoint;
@@ -1557,12 +1557,18 @@ static void kvaser_usb_get_endpoints(const struct usb_interface *intf,
 	for (i = 0; i < iface_desc->desc.bNumEndpoints; ++i) {
 		endpoint = &iface_desc->endpoint[i].desc;
 
-		if (usb_endpoint_is_bulk_in(endpoint))
+		if (!*in && usb_endpoint_is_bulk_in(endpoint))
 			*in = endpoint;
 
-		if (usb_endpoint_is_bulk_out(endpoint))
+		if (!*out && usb_endpoint_is_bulk_out(endpoint))
 			*out = endpoint;
+
+		/* use first bulk endpoint for in and out */
+		if (*in && *out)
+			return 0;
 	}
+
+	return -ENODEV;
 }
 
 static int kvaser_usb_probe(struct usb_interface *intf,
@@ -1576,8 +1582,8 @@ static int kvaser_usb_probe(struct usb_interface *intf,
 	if (!dev)
 		return -ENOMEM;
 
-	kvaser_usb_get_endpoints(intf, &dev->bulk_in, &dev->bulk_out);
-	if (!dev->bulk_in || !dev->bulk_out) {
+	err = kvaser_usb_get_endpoints(intf, &dev->bulk_in, &dev->bulk_out);
+	if (err) {
 		dev_err(&intf->dev, "Cannot get usb endpoint(s)");
 		return err;
 	}
@@ -1600,7 +1600,8 @@ static void write_ofld_wr(struct adapter *adap, struct sk_buff *skb,
 	flits = skb_transport_offset(skb) / 8;
 	sgp = ndesc == 1 ? (struct sg_ent *)&d->flit[flits] : sgl;
 	sgl_flits = make_sgl(skb, sgp, skb_transport_header(skb),
-			     skb->tail - skb->transport_header,
+			     skb_tail_pointer(skb) -
+			     skb_transport_header(skb),
 			     adap->pdev);
 	if (need_skb_unmap()) {
 		setup_deferred_unmapping(skb, adap->pdev, sgp, sgl_flits);
@@ -1544,7 +1544,7 @@ static void mlx4_master_deactivate_admin_state(struct mlx4_priv *priv, int slave
 		vp_oper->vlan_idx = NO_INDX;
 	}
 	if (NO_INDX != vp_oper->mac_idx) {
-		__mlx4_unregister_mac(&priv->dev, port, vp_oper->mac_idx);
+		__mlx4_unregister_mac(&priv->dev, port, vp_oper->state.mac);
 		vp_oper->mac_idx = NO_INDX;
 	}
 }
@@ -1096,11 +1096,6 @@ static int virtnet_cpu_callback(struct notifier_block *nfb,
 {
 	struct virtnet_info *vi = container_of(nfb, struct virtnet_info, nb);
 
-	mutex_lock(&vi->config_lock);
-
-	if (!vi->config_enable)
-		goto done;
-
 	switch(action & ~CPU_TASKS_FROZEN) {
 	case CPU_ONLINE:
 	case CPU_DOWN_FAILED:
@@ -1114,8 +1109,6 @@ static int virtnet_cpu_callback(struct notifier_block *nfb,
 		break;
 	}
 
-done:
-	mutex_unlock(&vi->config_lock);
 	return NOTIFY_OK;
 }
 
@@ -1672,6 +1665,8 @@ static int virtnet_freeze(struct virtio_device *vdev)
 	struct virtnet_info *vi = vdev->priv;
 	int i;
 
+	unregister_hotcpu_notifier(&vi->nb);
+
 	/* Prevent config work handler from accessing the device */
 	mutex_lock(&vi->config_lock);
 	vi->config_enable = false;
@@ -1720,6 +1715,10 @@ static int virtnet_restore(struct virtio_device *vdev)
 	virtnet_set_queues(vi, vi->curr_queue_pairs);
 	rtnl_unlock();
 
+	err = register_hotcpu_notifier(&vi->nb);
+	if (err)
+		return err;
+
 	return 0;
 }
 #endif
@@ -125,7 +125,7 @@ static const struct iwl_ht_params iwl7000_ht_params = {
 
 
 const struct iwl_cfg iwl7260_2ac_cfg = {
-	.name = "Intel(R) Dual Band Wireless AC7260",
+	.name = "Intel(R) Dual Band Wireless AC 7260",
 	.fw_name_pre = IWL7260_FW_PRE,
 	IWL_DEVICE_7000,
 	.ht_params = &iwl7000_ht_params,
@@ -133,8 +133,44 @@ const struct iwl_cfg iwl7260_2ac_cfg = {
 	.nvm_calib_ver = IWL7260_TX_POWER_VERSION,
 };
 
-const struct iwl_cfg iwl3160_ac_cfg = {
-	.name = "Intel(R) Dual Band Wireless AC3160",
+const struct iwl_cfg iwl7260_2n_cfg = {
+	.name = "Intel(R) Dual Band Wireless N 7260",
+	.fw_name_pre = IWL7260_FW_PRE,
+	IWL_DEVICE_7000,
+	.ht_params = &iwl7000_ht_params,
+	.nvm_ver = IWL7260_NVM_VERSION,
+	.nvm_calib_ver = IWL7260_TX_POWER_VERSION,
+};
+
+const struct iwl_cfg iwl7260_n_cfg = {
+	.name = "Intel(R) Wireless N 7260",
+	.fw_name_pre = IWL7260_FW_PRE,
+	IWL_DEVICE_7000,
+	.ht_params = &iwl7000_ht_params,
+	.nvm_ver = IWL7260_NVM_VERSION,
+	.nvm_calib_ver = IWL7260_TX_POWER_VERSION,
+};
+
+const struct iwl_cfg iwl3160_2ac_cfg = {
+	.name = "Intel(R) Dual Band Wireless AC 3160",
+	.fw_name_pre = IWL3160_FW_PRE,
+	IWL_DEVICE_7000,
+	.ht_params = &iwl7000_ht_params,
+	.nvm_ver = IWL3160_NVM_VERSION,
+	.nvm_calib_ver = IWL3160_TX_POWER_VERSION,
+};
+
+const struct iwl_cfg iwl3160_2n_cfg = {
+	.name = "Intel(R) Dual Band Wireless N 3160",
+	.fw_name_pre = IWL3160_FW_PRE,
+	IWL_DEVICE_7000,
+	.ht_params = &iwl7000_ht_params,
+	.nvm_ver = IWL3160_NVM_VERSION,
+	.nvm_calib_ver = IWL3160_TX_POWER_VERSION,
+};
+
+const struct iwl_cfg iwl3160_n_cfg = {
+	.name = "Intel(R) Wireless N 3160",
 	.fw_name_pre = IWL3160_FW_PRE,
 	IWL_DEVICE_7000,
 	.ht_params = &iwl7000_ht_params,
@@ -321,6 +321,10 @@ extern const struct iwl_cfg iwl105_bgn_cfg;
 extern const struct iwl_cfg iwl105_bgn_d_cfg;
 extern const struct iwl_cfg iwl135_bgn_cfg;
 extern const struct iwl_cfg iwl7260_2ac_cfg;
-extern const struct iwl_cfg iwl3160_ac_cfg;
+extern const struct iwl_cfg iwl7260_2n_cfg;
+extern const struct iwl_cfg iwl7260_n_cfg;
+extern const struct iwl_cfg iwl3160_2ac_cfg;
+extern const struct iwl_cfg iwl3160_2n_cfg;
+extern const struct iwl_cfg iwl3160_n_cfg;
 
 #endif /* __IWL_CONFIG_H__ */
@@ -267,10 +267,83 @@ static DEFINE_PCI_DEVICE_TABLE(iwl_hw_card_ids) = {
 
 /* 7000 Series */
 	{IWL_PCI_DEVICE(0x08B1, 0x4070, iwl7260_2ac_cfg)},
-	{IWL_PCI_DEVICE(0x08B1, 0x4062, iwl7260_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0x4072, iwl7260_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0x4170, iwl7260_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0x4060, iwl7260_2n_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0x406A, iwl7260_2n_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0x4160, iwl7260_2n_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0x4062, iwl7260_n_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0x4162, iwl7260_n_cfg)},
+	{IWL_PCI_DEVICE(0x08B2, 0x4270, iwl7260_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B2, 0x4272, iwl7260_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B2, 0x4260, iwl7260_2n_cfg)},
+	{IWL_PCI_DEVICE(0x08B2, 0x426A, iwl7260_2n_cfg)},
+	{IWL_PCI_DEVICE(0x08B2, 0x4262, iwl7260_n_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0x4470, iwl7260_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0x4472, iwl7260_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0x4460, iwl7260_2n_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0x446A, iwl7260_2n_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0x4462, iwl7260_n_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0x4870, iwl7260_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0x486E, iwl7260_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0x4570, iwl7260_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0x4560, iwl7260_2n_cfg)},
+	{IWL_PCI_DEVICE(0x08B2, 0x4370, iwl7260_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B2, 0x4360, iwl7260_2n_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0x5070, iwl7260_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0x4020, iwl7260_2n_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0x402A, iwl7260_2n_cfg)},
+	{IWL_PCI_DEVICE(0x08B2, 0x4220, iwl7260_2n_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0x4420, iwl7260_2n_cfg)},
 	{IWL_PCI_DEVICE(0x08B1, 0xC070, iwl7260_2ac_cfg)},
-	{IWL_PCI_DEVICE(0x08B3, 0x0070, iwl3160_ac_cfg)},
-	{IWL_PCI_DEVICE(0x08B3, 0x8070, iwl3160_ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0xC072, iwl7260_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0xC170, iwl7260_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0xC060, iwl7260_2n_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0xC06A, iwl7260_2n_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0xC160, iwl7260_2n_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0xC062, iwl7260_n_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0xC162, iwl7260_n_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0xC770, iwl7260_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0xC760, iwl7260_2n_cfg)},
+	{IWL_PCI_DEVICE(0x08B2, 0xC270, iwl7260_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B2, 0xC272, iwl7260_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B2, 0xC260, iwl7260_2n_cfg)},
+	{IWL_PCI_DEVICE(0x08B2, 0xC26A, iwl7260_n_cfg)},
+	{IWL_PCI_DEVICE(0x08B2, 0xC262, iwl7260_n_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0xC470, iwl7260_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0xC472, iwl7260_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0xC460, iwl7260_2n_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0xC462, iwl7260_n_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0xC570, iwl7260_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0xC560, iwl7260_2n_cfg)},
+	{IWL_PCI_DEVICE(0x08B2, 0xC370, iwl7260_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0xC360, iwl7260_2n_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0xC020, iwl7260_2n_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0xC02A, iwl7260_2n_cfg)},
+	{IWL_PCI_DEVICE(0x08B2, 0xC220, iwl7260_2n_cfg)},
+	{IWL_PCI_DEVICE(0x08B1, 0xC420, iwl7260_2n_cfg)},
+
+/* 3160 Series */
+	{IWL_PCI_DEVICE(0x08B3, 0x0070, iwl3160_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B3, 0x0072, iwl3160_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B3, 0x0170, iwl3160_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B3, 0x0172, iwl3160_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B3, 0x0060, iwl3160_2n_cfg)},
+	{IWL_PCI_DEVICE(0x08B3, 0x0062, iwl3160_n_cfg)},
+	{IWL_PCI_DEVICE(0x08B4, 0x0270, iwl3160_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B4, 0x0272, iwl3160_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B3, 0x0470, iwl3160_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B3, 0x0472, iwl3160_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B4, 0x0370, iwl3160_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B3, 0x8070, iwl3160_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B3, 0x8072, iwl3160_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B3, 0x8170, iwl3160_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B3, 0x8172, iwl3160_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B3, 0x8060, iwl3160_2n_cfg)},
+	{IWL_PCI_DEVICE(0x08B3, 0x8062, iwl3160_n_cfg)},
+	{IWL_PCI_DEVICE(0x08B4, 0x8270, iwl3160_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B3, 0x8470, iwl3160_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x08B3, 0x8570, iwl3160_2ac_cfg)},
 
 	{0}
 };
@@ -913,7 +913,10 @@ static ssize_t lbs_debugfs_write(struct file *f, const char __user *buf,
char *p2;
struct debug_data *d = f->private_data;

pdata = kmalloc(cnt, GFP_KERNEL);
if (cnt == 0)
return 0;

pdata = kmalloc(cnt + 1, GFP_KERNEL);
if (pdata == NULL)
return 0;

@@ -922,6 +925,7 @@ static ssize_t lbs_debugfs_write(struct file *f, const char __user *buf,
kfree(pdata);
return 0;
}
pdata[cnt] = '\0';

p0 = pdata;
for (i = 0; i < num_of_items; i++) {
@@ -3400,10 +3400,13 @@ void rt2800_link_tuner(struct rt2x00_dev *rt2x00dev, struct link_qual *qual,

vgc = rt2800_get_default_vgc(rt2x00dev);

if (rt2x00_rt(rt2x00dev, RT5592) && qual->rssi > -65)
vgc += 0x20;
else if (qual->rssi > -80)
vgc += 0x10;
if (rt2x00_rt(rt2x00dev, RT5592)) {
if (qual->rssi > -65)
vgc += 0x20;
} else {
if (qual->rssi > -80)
vgc += 0x10;
}

rt2800_set_vgc(rt2x00dev, qual, vgc);
}
@@ -148,6 +148,8 @@ static bool rt2800usb_txstatus_timeout(struct rt2x00_dev *rt2x00dev)
return false;
}

#define TXSTATUS_READ_INTERVAL 1000000

static bool rt2800usb_tx_sta_fifo_read_completed(struct rt2x00_dev *rt2x00dev,
int urb_status, u32 tx_status)
{
@@ -176,8 +178,9 @@ static bool rt2800usb_tx_sta_fifo_read_completed(struct rt2x00_dev *rt2x00dev,
queue_work(rt2x00dev->workqueue, &rt2x00dev->txdone_work);

if (rt2800usb_txstatus_pending(rt2x00dev)) {
/* Read register after 250 us */
hrtimer_start(&rt2x00dev->txstatus_timer, ktime_set(0, 250000),
/* Read register after 1 ms */
hrtimer_start(&rt2x00dev->txstatus_timer,
ktime_set(0, TXSTATUS_READ_INTERVAL),
HRTIMER_MODE_REL);
return false;
}
@@ -202,8 +205,9 @@ static void rt2800usb_async_read_tx_status(struct rt2x00_dev *rt2x00dev)
if (test_and_set_bit(TX_STATUS_READING, &rt2x00dev->flags))
return;

/* Read TX_STA_FIFO register after 500 us */
hrtimer_start(&rt2x00dev->txstatus_timer, ktime_set(0, 500000),
/* Read TX_STA_FIFO register after 2 ms */
hrtimer_start(&rt2x00dev->txstatus_timer,
ktime_set(0, 2*TXSTATUS_READ_INTERVAL),
HRTIMER_MODE_REL);
}
@@ -181,6 +181,7 @@ static void rt2x00lib_autowakeup(struct work_struct *work)
static void rt2x00lib_bc_buffer_iter(void *data, u8 *mac,
struct ieee80211_vif *vif)
{
struct ieee80211_tx_control control = {};
struct rt2x00_dev *rt2x00dev = data;
struct sk_buff *skb;

@@ -195,7 +196,7 @@ static void rt2x00lib_bc_buffer_iter(void *data, u8 *mac,
*/
skb = ieee80211_get_buffered_bc(rt2x00dev->hw, vif);
while (skb) {
rt2x00mac_tx(rt2x00dev->hw, NULL, skb);
rt2x00mac_tx(rt2x00dev->hw, &control, skb);
skb = ieee80211_get_buffered_bc(rt2x00dev->hw, vif);
}
}
@@ -146,7 +146,7 @@ void rt2x00queue_remove_l2pad(struct sk_buff *skb, unsigned int header_length);
* @local: frame is not from mac80211
*/
int rt2x00queue_write_tx_frame(struct data_queue *queue, struct sk_buff *skb,
bool local);
struct ieee80211_sta *sta, bool local);

/**
* rt2x00queue_update_beacon - Send new beacon from mac80211
@@ -90,7 +90,7 @@ static int rt2x00mac_tx_rts_cts(struct rt2x00_dev *rt2x00dev,
frag_skb->data, data_length, tx_info,
(struct ieee80211_rts *)(skb->data));

retval = rt2x00queue_write_tx_frame(queue, skb, true);
retval = rt2x00queue_write_tx_frame(queue, skb, NULL, true);
if (retval) {
dev_kfree_skb_any(skb);
rt2x00_warn(rt2x00dev, "Failed to send RTS/CTS frame\n");
@@ -151,7 +151,7 @@ void rt2x00mac_tx(struct ieee80211_hw *hw,
goto exit_fail;
}

if (unlikely(rt2x00queue_write_tx_frame(queue, skb, false)))
if (unlikely(rt2x00queue_write_tx_frame(queue, skb, control->sta, false)))
goto exit_fail;

/*
@@ -754,6 +754,9 @@ void rt2x00mac_flush(struct ieee80211_hw *hw, u32 queues, bool drop)
struct rt2x00_dev *rt2x00dev = hw->priv;
struct data_queue *queue;

if (!test_bit(DEVICE_STATE_PRESENT, &rt2x00dev->flags))
return;

tx_queue_for_each(rt2x00dev, queue)
rt2x00queue_flush_queue(queue, drop);
}
@@ -635,7 +635,7 @@ static void rt2x00queue_bar_check(struct queue_entry *entry)
}

int rt2x00queue_write_tx_frame(struct data_queue *queue, struct sk_buff *skb,
bool local)
struct ieee80211_sta *sta, bool local)
{
struct ieee80211_tx_info *tx_info;
struct queue_entry *entry;
@@ -649,7 +649,7 @@ int rt2x00queue_write_tx_frame(struct data_queue *queue, struct sk_buff *skb,
* after that we are free to use the skb->cb array
* for our information.
*/
rt2x00queue_create_tx_descriptor(queue->rt2x00dev, skb, &txdesc, NULL);
rt2x00queue_create_tx_descriptor(queue->rt2x00dev, skb, &txdesc, sta);

/*
* All information is retrieved from the skb->cb array,
@@ -88,6 +88,7 @@ struct xenvif {
unsigned long credit_usec;
unsigned long remaining_credit;
struct timer_list credit_timeout;
u64 credit_window_start;

/* Statistics */
unsigned long rx_gso_checksum_fixup;
@@ -275,8 +275,7 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
vif->credit_bytes = vif->remaining_credit = ~0UL;
vif->credit_usec = 0UL;
init_timer(&vif->credit_timeout);
/* Initialize 'expires' now: it's used to track the credit window. */
vif->credit_timeout.expires = jiffies;
vif->credit_window_start = get_jiffies_64();

dev->netdev_ops = &xenvif_netdev_ops;
dev->hw_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_TSO;
@@ -1423,9 +1423,8 @@ out:

static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
{
unsigned long now = jiffies;
unsigned long next_credit =
vif->credit_timeout.expires +
u64 now = get_jiffies_64();
u64 next_credit = vif->credit_window_start +
msecs_to_jiffies(vif->credit_usec / 1000);

/* Timer could already be pending in rare cases. */
@@ -1433,8 +1432,8 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
return true;

/* Passed the point where we can replenish credit? */
if (time_after_eq(now, next_credit)) {
vif->credit_timeout.expires = now;
if (time_after_eq64(now, next_credit)) {
vif->credit_window_start = now;
tx_add_credit(vif);
}

@@ -1446,6 +1445,7 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
tx_credit_callback;
mod_timer(&vif->credit_timeout,
next_credit);
vif->credit_window_start = next_credit;

return true;
}
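Context for the hunk above: moving the credit window from the timer's 'expires' field to a dedicated u64 matters on 32-bit machines, where unsigned long jiffies wraps roughly every 49 days at HZ=1000. A minimal sketch of the wrap-safe check, simplified from the patch itself (fields as declared in the xenvif hunk earlier):

    u64 now = get_jiffies_64();
    u64 next_credit = vif->credit_window_start +
                      msecs_to_jiffies(vif->credit_usec / 1000);

    /* time_after_eq64() compares full 64-bit jiffies, so the window
     * survives the 32-bit jiffies wrap that could confuse arithmetic
     * on the old unsigned long credit_timeout.expires. */
    if (time_after_eq64(now, next_credit)) {
            vif->credit_window_start = now;   /* open a fresh window */
            tx_add_credit(vif);
    }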
@@ -24,6 +24,12 @@
struct backend_info {
struct xenbus_device *dev;
struct xenvif *vif;

/* This is the state that will be reflected in xenstore when any
* active hotplug script completes.
*/
enum xenbus_state state;

enum xenbus_state frontend_state;
struct xenbus_watch hotplug_status_watch;
u8 have_hotplug_status_watch:1;
@@ -33,11 +39,15 @@ static int connect_rings(struct backend_info *);
static void connect(struct backend_info *);
static void backend_create_xenvif(struct backend_info *be);
static void unregister_hotplug_status_watch(struct backend_info *be);
static void set_backend_state(struct backend_info *be,
enum xenbus_state state);

static int netback_remove(struct xenbus_device *dev)
{
struct backend_info *be = dev_get_drvdata(&dev->dev);

set_backend_state(be, XenbusStateClosed);

unregister_hotplug_status_watch(be);
if (be->vif) {
kobject_uevent(&dev->dev.kobj, KOBJ_OFFLINE);
@@ -126,6 +136,8 @@ static int netback_probe(struct xenbus_device *dev,
if (err)
goto fail;

be->state = XenbusStateInitWait;

/* This kicks hotplug scripts, so do it immediately. */
backend_create_xenvif(be);
@@ -198,24 +210,113 @@ static void backend_create_xenvif(struct backend_info *be)
kobject_uevent(&dev->dev.kobj, KOBJ_ONLINE);
}

static void disconnect_backend(struct xenbus_device *dev)
static void backend_disconnect(struct backend_info *be)
{
struct backend_info *be = dev_get_drvdata(&dev->dev);

if (be->vif)
xenvif_disconnect(be->vif);
}

static void destroy_backend(struct xenbus_device *dev)
static void backend_connect(struct backend_info *be)
{
struct backend_info *be = dev_get_drvdata(&dev->dev);
if (be->vif)
connect(be);
}

if (be->vif) {
kobject_uevent(&dev->dev.kobj, KOBJ_OFFLINE);
xenbus_rm(XBT_NIL, dev->nodename, "hotplug-status");
xenvif_free(be->vif);
be->vif = NULL;
static inline void backend_switch_state(struct backend_info *be,
enum xenbus_state state)
{
struct xenbus_device *dev = be->dev;

pr_debug("%s -> %s\n", dev->nodename, xenbus_strstate(state));
be->state = state;

/* If we are waiting for a hotplug script then defer the
* actual xenbus state change.
*/
if (!be->have_hotplug_status_watch)
xenbus_switch_state(dev, state);
}

/* Handle backend state transitions:
*
* The backend state starts in InitWait and the following transitions are
* allowed.
*
* InitWait -> Connected
*
*    ^    \         |
*    |     \        |
*    |      \       |
*    |       \      |
*    |        \     |
*    |         \    |
*    |          V   V
*
*  Closed  <-> Closing
*
* The state argument specifies the eventual state of the backend and the
* function transitions to that state via the shortest path.
*/
static void set_backend_state(struct backend_info *be,
enum xenbus_state state)
{
while (be->state != state) {
switch (be->state) {
case XenbusStateClosed:
switch (state) {
case XenbusStateInitWait:
case XenbusStateConnected:
pr_info("%s: prepare for reconnect\n",
be->dev->nodename);
backend_switch_state(be, XenbusStateInitWait);
break;
case XenbusStateClosing:
backend_switch_state(be, XenbusStateClosing);
break;
default:
BUG();
}
break;
case XenbusStateInitWait:
switch (state) {
case XenbusStateConnected:
backend_connect(be);
backend_switch_state(be, XenbusStateConnected);
break;
case XenbusStateClosing:
case XenbusStateClosed:
backend_switch_state(be, XenbusStateClosing);
break;
default:
BUG();
}
break;
case XenbusStateConnected:
switch (state) {
case XenbusStateInitWait:
case XenbusStateClosing:
case XenbusStateClosed:
backend_disconnect(be);
backend_switch_state(be, XenbusStateClosing);
break;
default:
BUG();
}
break;
case XenbusStateClosing:
switch (state) {
case XenbusStateInitWait:
case XenbusStateConnected:
case XenbusStateClosed:
backend_switch_state(be, XenbusStateClosed);
break;
default:
BUG();
}
break;
default:
BUG();
}
}
}
@@ -227,41 +328,33 @@ static void frontend_changed(struct xenbus_device *dev,
{
struct backend_info *be = dev_get_drvdata(&dev->dev);

pr_debug("frontend state %s", xenbus_strstate(frontend_state));
pr_debug("%s -> %s\n", dev->otherend, xenbus_strstate(frontend_state));

be->frontend_state = frontend_state;

switch (frontend_state) {
case XenbusStateInitialising:
if (dev->state == XenbusStateClosed) {
printk(KERN_INFO "%s: %s: prepare for reconnect\n",
__func__, dev->nodename);
xenbus_switch_state(dev, XenbusStateInitWait);
}
set_backend_state(be, XenbusStateInitWait);
break;

case XenbusStateInitialised:
break;

case XenbusStateConnected:
if (dev->state == XenbusStateConnected)
break;
if (be->vif)
connect(be);
set_backend_state(be, XenbusStateConnected);
break;

case XenbusStateClosing:
disconnect_backend(dev);
xenbus_switch_state(dev, XenbusStateClosing);
set_backend_state(be, XenbusStateClosing);
break;

case XenbusStateClosed:
xenbus_switch_state(dev, XenbusStateClosed);
set_backend_state(be, XenbusStateClosed);
if (xenbus_dev_is_online(dev))
break;
destroy_backend(dev);
/* fall through if not online */
case XenbusStateUnknown:
set_backend_state(be, XenbusStateClosed);
device_unregister(&dev->dev);
break;
@@ -354,7 +447,9 @@ static void hotplug_status_changed(struct xenbus_watch *watch,
if (IS_ERR(str))
return;
if (len == sizeof("connected")-1 && !memcmp(str, "connected", len)) {
xenbus_switch_state(be->dev, XenbusStateConnected);
/* Complete any pending state change */
xenbus_switch_state(be->dev, be->state);

/* Not interested in this watch anymore. */
unregister_hotplug_status_watch(be);
}
@@ -384,12 +479,8 @@ static void connect(struct backend_info *be)
err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
hotplug_status_changed,
"%s/%s", dev->nodename, "hotplug-status");
if (err) {
/* Switch now, since we can't do a watch. */
xenbus_switch_state(dev, XenbusStateConnected);
} else {
if (!err)
be->have_hotplug_status_watch = 1;
}

netif_wake_queue(be->vif->dev);
}
@@ -484,28 +484,29 @@ static inline bool pcie_cap_has_lnkctl(const struct pci_dev *dev)
{
int type = pci_pcie_type(dev);

return pcie_cap_version(dev) > 1 ||
return type == PCI_EXP_TYPE_ENDPOINT ||
type == PCI_EXP_TYPE_LEG_END ||
type == PCI_EXP_TYPE_ROOT_PORT ||
type == PCI_EXP_TYPE_ENDPOINT ||
type == PCI_EXP_TYPE_LEG_END;
type == PCI_EXP_TYPE_UPSTREAM ||
type == PCI_EXP_TYPE_DOWNSTREAM ||
type == PCI_EXP_TYPE_PCI_BRIDGE ||
type == PCI_EXP_TYPE_PCIE_BRIDGE;
}

static inline bool pcie_cap_has_sltctl(const struct pci_dev *dev)
{
int type = pci_pcie_type(dev);

return pcie_cap_version(dev) > 1 ||
type == PCI_EXP_TYPE_ROOT_PORT ||
(type == PCI_EXP_TYPE_DOWNSTREAM &&
pcie_caps_reg(dev) & PCI_EXP_FLAGS_SLOT);
return (type == PCI_EXP_TYPE_ROOT_PORT ||
type == PCI_EXP_TYPE_DOWNSTREAM) &&
pcie_caps_reg(dev) & PCI_EXP_FLAGS_SLOT;
}

static inline bool pcie_cap_has_rtctl(const struct pci_dev *dev)
{
int type = pci_pcie_type(dev);

return pcie_cap_version(dev) > 1 ||
type == PCI_EXP_TYPE_ROOT_PORT ||
return type == PCI_EXP_TYPE_ROOT_PORT ||
type == PCI_EXP_TYPE_RC_EC;
}
@@ -510,7 +510,8 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
goto cleanup;
}

if (fibsize > (dev->max_fib_size - sizeof(struct aac_fibhdr))) {
if ((fibsize < (sizeof(struct user_aac_srb) - sizeof(struct user_sgentry))) ||
(fibsize > (dev->max_fib_size - sizeof(struct aac_fibhdr)))) {
rcode = -EINVAL;
goto cleanup;
}
@@ -1010,6 +1010,7 @@ static int register_root_hub(struct usb_hcd *hcd)
dev_name(&usb_dev->dev), retval);
return retval;
}
usb_dev->lpm_capable = usb_device_supports_lpm(usb_dev);
}

retval = usb_new_device (usb_dev);
@@ -135,7 +135,7 @@ struct usb_hub *usb_hub_to_struct_hub(struct usb_device *hdev)
return usb_get_intfdata(hdev->actconfig->interface[0]);
}

static int usb_device_supports_lpm(struct usb_device *udev)
int usb_device_supports_lpm(struct usb_device *udev)
{
/* USB 2.1 (and greater) devices indicate LPM support through
* their USB 2.0 Extended Capabilities BOS descriptor.
@@ -156,6 +156,11 @@ static int usb_device_supports_lpm(struct usb_device *udev)
"Power management will be impacted.\n");
return 0;
}

/* udev is root hub */
if (!udev->parent)
return 1;

if (udev->parent->lpm_capable)
return 1;

@@ -1124,6 +1129,11 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
usb_clear_port_feature(hub->hdev, port1,
USB_PORT_FEAT_C_ENABLE);
}
if (portchange & USB_PORT_STAT_C_RESET) {
need_debounce_delay = true;
usb_clear_port_feature(hub->hdev, port1,
USB_PORT_FEAT_C_RESET);
}
if ((portchange & USB_PORT_STAT_C_BH_RESET) &&
hub_is_superspeed(hub->hdev)) {
need_debounce_delay = true;
@@ -1557,10 +1567,15 @@ static int hub_configure(struct usb_hub *hub,
if (hub->has_indicators && blinkenlights)
hub->indicator [0] = INDICATOR_CYCLE;

for (i = 0; i < hdev->maxchild; i++)
if (usb_hub_create_port_device(hub, i + 1) < 0)
for (i = 0; i < hdev->maxchild; i++) {
ret = usb_hub_create_port_device(hub, i + 1);
if (ret < 0) {
dev_err(hub->intfdev,
"couldn't create port%d device.\n", i + 1);
hdev->maxchild = i;
goto fail_keep_maxchild;
}
}

usb_hub_adjust_deviceremovable(hdev, hub->descriptor);

@@ -1568,6 +1583,8 @@ static int hub_configure(struct usb_hub *hub,
return 0;

fail:
hdev->maxchild = 0;
fail_keep_maxchild:
dev_err (hub_dev, "config failed, %s (err %d)\n",
message, ret);
/* hub_disconnect() frees urb and descriptor */
@@ -35,6 +35,7 @@ extern int usb_get_device_descriptor(struct usb_device *dev,
unsigned int size);
extern int usb_get_bos_descriptor(struct usb_device *dev);
extern void usb_release_bos_descriptor(struct usb_device *dev);
extern int usb_device_supports_lpm(struct usb_device *udev);
extern char *usb_cache_string(struct usb_device *udev, int index);
extern int usb_set_configuration(struct usb_device *dev, int configuration);
extern int usb_choose_configuration(struct usb_device *udev);
@@ -1593,7 +1593,11 @@ static int mos7840_tiocmget(struct tty_struct *tty)
return -ENODEV;

status = mos7840_get_uart_reg(port, MODEM_STATUS_REGISTER, &msr);
if (status != 1)
return -EIO;
status = mos7840_get_uart_reg(port, MODEM_CONTROL_REGISTER, &mcr);
if (status != 1)
return -EIO;
result = ((mcr & MCR_DTR) ? TIOCM_DTR : 0)
| ((mcr & MCR_RTS) ? TIOCM_RTS : 0)
| ((mcr & MCR_LOOPBACK) ? TIOCM_LOOP : 0)
@@ -1376,6 +1376,23 @@ static const struct usb_device_id option_ids[] = {
.driver_info = (kernel_ulong_t)&net_intf2_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1426, 0xff, 0xff, 0xff), /* ZTE MF91 */
.driver_info = (kernel_ulong_t)&net_intf2_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1533, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1534, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1535, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1545, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1546, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1547, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1565, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1566, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1567, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1589, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1590, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1591, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1592, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1594, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1596, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1598, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1600, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x2002, 0xff,
0xff, 0xff), .driver_info = (kernel_ulong_t)&zte_k3765_z_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x2003, 0xff, 0xff, 0xff) },
@@ -49,6 +49,9 @@ int adf_interface_blank(struct adf_interface *intf, u8 state)
if (!intf->ops || !intf->ops->blank)
return -EOPNOTSUPP;

if (state > DRM_MODE_DPMS_OFF)
return -EINVAL;

mutex_lock(&dev->client_lock);
if (state != DRM_MODE_DPMS_ON)
flush_kthread_worker(&dev->post_worker);
@@ -519,10 +519,10 @@ int adf_fbdev_blank(int blank, struct fb_info *info)
dpms_state = DRM_MODE_DPMS_STANDBY;
break;
case FB_BLANK_VSYNC_SUSPEND:
dpms_state = DRM_MODE_DPMS_STANDBY;
dpms_state = DRM_MODE_DPMS_SUSPEND;
break;
case FB_BLANK_HSYNC_SUSPEND:
dpms_state = DRM_MODE_DPMS_SUSPEND;
dpms_state = DRM_MODE_DPMS_STANDBY;
break;
case FB_BLANK_POWERDOWN:
dpms_state = DRM_MODE_DPMS_OFF;
@@ -118,7 +118,7 @@ static const struct backlight_ops atmel_pwm_bl_ops = {
.update_status = atmel_pwm_bl_set_intensity,
};

static int __init atmel_pwm_bl_probe(struct platform_device *pdev)
static int atmel_pwm_bl_probe(struct platform_device *pdev)
{
struct backlight_properties props;
const struct atmel_pwm_bl_platform_data *pdata;
@@ -203,7 +203,7 @@ err_free_mem:
return retval;
}

static int __exit atmel_pwm_bl_remove(struct platform_device *pdev)
static int atmel_pwm_bl_remove(struct platform_device *pdev)
{
struct atmel_pwm_bl *pwmbl = platform_get_drvdata(pdev);

@@ -222,10 +222,11 @@ static struct platform_driver atmel_pwm_bl_driver = {
.name = "atmel-pwm-bl",
},
/* REVISIT add suspend() and resume() */
.remove = __exit_p(atmel_pwm_bl_remove),
.probe = atmel_pwm_bl_probe,
.remove = atmel_pwm_bl_remove,
};

module_platform_driver_probe(atmel_pwm_bl_driver, atmel_pwm_bl_probe);
module_platform_driver(atmel_pwm_bl_driver);

MODULE_AUTHOR("Hans-Christian egtvedt <hans-christian.egtvedt@atmel.com>");
MODULE_DESCRIPTION("Atmel PWM backlight driver");
@@ -795,12 +795,21 @@ static int hvfb_remove(struct hv_device *hdev)
}

static DEFINE_PCI_DEVICE_TABLE(pci_stub_id_table) = {
{
.vendor = PCI_VENDOR_ID_MICROSOFT,
.device = PCI_DEVICE_ID_HYPERV_VIDEO,
},
{ /* end of list */ }
};

static const struct hv_vmbus_device_id id_table[] = {
/* Synthetic Video Device GUID */
{HV_SYNTHVID_GUID},
{}
};

MODULE_DEVICE_TABLE(pci, pci_stub_id_table);
MODULE_DEVICE_TABLE(vmbus, id_table);

static struct hv_driver hvfb_drv = {
@@ -810,14 +819,43 @@ static struct hv_driver hvfb_drv = {
.remove = hvfb_remove,
};

static int hvfb_pci_stub_probe(struct pci_dev *pdev,
const struct pci_device_id *ent)
{
return 0;
}

static void hvfb_pci_stub_remove(struct pci_dev *pdev)
{
}

static struct pci_driver hvfb_pci_stub_driver = {
.name = KBUILD_MODNAME,
.id_table = pci_stub_id_table,
.probe = hvfb_pci_stub_probe,
.remove = hvfb_pci_stub_remove,
};

static int __init hvfb_drv_init(void)
{
return vmbus_driver_register(&hvfb_drv);
int ret;

ret = vmbus_driver_register(&hvfb_drv);
if (ret != 0)
return ret;

ret = pci_register_driver(&hvfb_pci_stub_driver);
if (ret != 0) {
vmbus_driver_unregister(&hvfb_drv);
return ret;
}

return 0;
}

static void __exit hvfb_drv_exit(void)
{
pci_unregister_driver(&hvfb_pci_stub_driver);
vmbus_driver_unregister(&hvfb_drv);
}
@@ -56,10 +56,19 @@ static void configfs_d_iput(struct dentry * dentry,
struct configfs_dirent *sd = dentry->d_fsdata;

if (sd) {
BUG_ON(sd->s_dentry != dentry);
/* Coordinate with configfs_readdir */
spin_lock(&configfs_dirent_lock);
sd->s_dentry = NULL;
/* Coordinate with configfs_attach_attr where will increase
* sd->s_count and update sd->s_dentry to new allocated one.
* Only set sd->dentry to null when this dentry is the only
* sd owner.
* If not do so, configfs_d_iput may run just after
* configfs_attach_attr and set sd->s_dentry to null
* even it's still in use.
*/
if (atomic_read(&sd->s_count) <= 2)
sd->s_dentry = NULL;

spin_unlock(&configfs_dirent_lock);
configfs_put(sd);
}
@@ -426,8 +435,11 @@ static int configfs_attach_attr(struct configfs_dirent * sd, struct dentry * den
struct configfs_attribute * attr = sd->s_element;
int error;

spin_lock(&configfs_dirent_lock);
dentry->d_fsdata = configfs_get(sd);
sd->s_dentry = dentry;
spin_unlock(&configfs_dirent_lock);

error = configfs_create(dentry, (attr->ca_mode & S_IALLUGO) | S_IFREG,
configfs_init_file);
if (error) {
@@ -1669,6 +1669,12 @@ int __get_dumpable(unsigned long mm_flags)
return (ret > SUID_DUMP_USER) ? SUID_DUMP_ROOT : ret;
}

/*
* This returns the actual value of the suid_dumpable flag. For things
* that are using this for checking for privilege transitions, it must
* test against SUID_DUMP_USER rather than treating it as a boolean
* value.
*/
int get_dumpable(struct mm_struct *mm)
{
return __get_dumpable(mm->flags);
@@ -1160,29 +1160,24 @@ _nfs4_opendata_reclaim_to_nfs4_state(struct nfs4_opendata *data)
int ret;

if (!data->rpc_done) {
ret = data->rpc_status;
goto err;
if (data->rpc_status) {
ret = data->rpc_status;
goto err;
}
/* cached opens have already been processed */
goto update;
}

ret = -ESTALE;
if (!(data->f_attr.valid & NFS_ATTR_FATTR_TYPE) ||
!(data->f_attr.valid & NFS_ATTR_FATTR_FILEID) ||
!(data->f_attr.valid & NFS_ATTR_FATTR_CHANGE))
goto err;

ret = -ENOMEM;
state = nfs4_get_open_state(inode, data->owner);
if (state == NULL)
goto err;

ret = nfs_refresh_inode(inode, &data->f_attr);
if (ret)
goto err;

if (data->o_res.delegation_type != 0)
nfs4_opendata_check_deleg(data, state);
update:
update_open_stateid(state, &data->o_res.stateid, NULL,
data->o_arg.fmode);
atomic_inc(&state->count);

return state;
err:
@@ -4572,6 +4567,7 @@ static int _nfs4_proc_getlk(struct nfs4_state *state, int cmd, struct file_lock
status = 0;
}
request->fl_ops->fl_release_private(request);
request->fl_ops = NULL;
out:
return status;
}
@@ -536,16 +536,12 @@ static int svc_export_parse(struct cache_detail *cd, char *mesg, int mlen)
if (err)
goto out3;
exp.ex_anon_uid= make_kuid(&init_user_ns, an_int);
if (!uid_valid(exp.ex_anon_uid))
goto out3;

/* anon gid */
err = get_int(&mesg, &an_int);
if (err)
goto out3;
exp.ex_anon_gid= make_kgid(&init_user_ns, an_int);
if (!gid_valid(exp.ex_anon_gid))
goto out3;

/* fsid */
err = get_int(&mesg, &an_int);
@@ -583,6 +579,17 @@ static int svc_export_parse(struct cache_detail *cd, char *mesg, int mlen)
exp.ex_uuid);
if (err)
goto out4;
/*
* For some reason exportfs has been passing down an
* invalid (-1) uid & gid on the "dummy" export which it
* uses to test export support. To make sure exportfs
* sees errors from check_export we therefore need to
* delay these checks till after check_export:
*/
if (!uid_valid(exp.ex_anon_uid))
goto out4;
if (!gid_valid(exp.ex_anon_gid))
goto out4;
}

expp = svc_export_lookup(&exp);
215
fs/nfsd/vfs.c
@@ -297,8 +297,104 @@ commit_metadata(struct svc_fh *fhp)
}

/*
* Set various file attributes.
* N.B. After this call fhp needs an fh_put
* Go over the attributes and take care of the small differences between
* NFS semantics and what Linux expects.
*/
static void
nfsd_sanitize_attrs(struct inode *inode, struct iattr *iap)
{
/*
* NFSv2 does not differentiate between "set-[ac]time-to-now"
* which only requires access, and "set-[ac]time-to-X" which
* requires ownership.
* So if it looks like it might be "set both to the same time which
* is close to now", and if inode_change_ok fails, then we
* convert to "set to now" instead of "set to explicit time"
*
* We only call inode_change_ok as the last test as technically
* it is not an interface that we should be using.
*/
#define BOTH_TIME_SET (ATTR_ATIME_SET | ATTR_MTIME_SET)
#define MAX_TOUCH_TIME_ERROR (30*60)
if ((iap->ia_valid & BOTH_TIME_SET) == BOTH_TIME_SET &&
iap->ia_mtime.tv_sec == iap->ia_atime.tv_sec) {
/*
* Looks probable.
*
* Now just make sure time is in the right ballpark.
* Solaris, at least, doesn't seem to care what the time
* request is. We require it be within 30 minutes of now.
*/
time_t delta = iap->ia_atime.tv_sec - get_seconds();
if (delta < 0)
delta = -delta;
if (delta < MAX_TOUCH_TIME_ERROR &&
inode_change_ok(inode, iap) != 0) {
/*
* Turn off ATTR_[AM]TIME_SET but leave ATTR_[AM]TIME.
* This will cause notify_change to set these times
* to "now"
*/
iap->ia_valid &= ~BOTH_TIME_SET;
}
}

/* sanitize the mode change */
if (iap->ia_valid & ATTR_MODE) {
iap->ia_mode &= S_IALLUGO;
iap->ia_mode |= (inode->i_mode & ~S_IALLUGO);
}

/* Revoke setuid/setgid on chown */
if (!S_ISDIR(inode->i_mode) &&
(((iap->ia_valid & ATTR_UID) && !uid_eq(iap->ia_uid, inode->i_uid)) ||
((iap->ia_valid & ATTR_GID) && !gid_eq(iap->ia_gid, inode->i_gid)))) {
iap->ia_valid |= ATTR_KILL_PRIV;
if (iap->ia_valid & ATTR_MODE) {
/* we're setting mode too, just clear the s*id bits */
iap->ia_mode &= ~S_ISUID;
if (iap->ia_mode & S_IXGRP)
iap->ia_mode &= ~S_ISGID;
} else {
/* set ATTR_KILL_* bits and let VFS handle it */
iap->ia_valid |= (ATTR_KILL_SUID | ATTR_KILL_SGID);
}
}
}

static __be32
nfsd_get_write_access(struct svc_rqst *rqstp, struct svc_fh *fhp,
struct iattr *iap)
{
struct inode *inode = fhp->fh_dentry->d_inode;
int host_err;

if (iap->ia_size < inode->i_size) {
__be32 err;

err = nfsd_permission(rqstp, fhp->fh_export, fhp->fh_dentry,
NFSD_MAY_TRUNC | NFSD_MAY_OWNER_OVERRIDE);
if (err)
return err;
}

host_err = get_write_access(inode);
if (host_err)
goto out_nfserrno;

host_err = locks_verify_truncate(inode, NULL, iap->ia_size);
if (host_err)
goto out_put_write_access;
return 0;

out_put_write_access:
put_write_access(inode);
out_nfserrno:
return nfserrno(host_err);
}

/*
* Set various file attributes. After this call fhp needs an fh_put.
*/
__be32
nfsd_setattr(struct svc_rqst *rqstp, struct svc_fh *fhp, struct iattr *iap,
@@ -332,114 +428,43 @@ nfsd_setattr(struct svc_rqst *rqstp, struct svc_fh *fhp, struct iattr *iap,
if (!iap->ia_valid)
goto out;

nfsd_sanitize_attrs(inode, iap);

/*
* NFSv2 does not differentiate between "set-[ac]time-to-now"
* which only requires access, and "set-[ac]time-to-X" which
* requires ownership.
* So if it looks like it might be "set both to the same time which
* is close to now", and if inode_change_ok fails, then we
* convert to "set to now" instead of "set to explicit time"
*
* We only call inode_change_ok as the last test as technically
* it is not an interface that we should be using. It is only
* valid if the filesystem does not define it's own i_op->setattr.
*/
#define BOTH_TIME_SET (ATTR_ATIME_SET | ATTR_MTIME_SET)
#define MAX_TOUCH_TIME_ERROR (30*60)
if ((iap->ia_valid & BOTH_TIME_SET) == BOTH_TIME_SET &&
iap->ia_mtime.tv_sec == iap->ia_atime.tv_sec) {
/*
* Looks probable.
*
* Now just make sure time is in the right ballpark.
* Solaris, at least, doesn't seem to care what the time
* request is. We require it be within 30 minutes of now.
*/
time_t delta = iap->ia_atime.tv_sec - get_seconds();
if (delta < 0)
delta = -delta;
if (delta < MAX_TOUCH_TIME_ERROR &&
inode_change_ok(inode, iap) != 0) {
/*
* Turn off ATTR_[AM]TIME_SET but leave ATTR_[AM]TIME.
* This will cause notify_change to set these times
* to "now"
*/
iap->ia_valid &= ~BOTH_TIME_SET;
}
}

/*
* The size case is special.
* It changes the file as well as the attributes.
* The size case is special, it changes the file in addition to the
* attributes.
*/
if (iap->ia_valid & ATTR_SIZE) {
if (iap->ia_size < inode->i_size) {
err = nfsd_permission(rqstp, fhp->fh_export, dentry,
NFSD_MAY_TRUNC|NFSD_MAY_OWNER_OVERRIDE);
if (err)
goto out;
}

host_err = get_write_access(inode);
if (host_err)
goto out_nfserr;

err = nfsd_get_write_access(rqstp, fhp, iap);
if (err)
goto out;
size_change = 1;
host_err = locks_verify_truncate(inode, NULL, iap->ia_size);
if (host_err) {
put_write_access(inode);
goto out_nfserr;
}
}

/* sanitize the mode change */
if (iap->ia_valid & ATTR_MODE) {
iap->ia_mode &= S_IALLUGO;
iap->ia_mode |= (inode->i_mode & ~S_IALLUGO);
}

/* Revoke setuid/setgid on chown */
if (!S_ISDIR(inode->i_mode) &&
(((iap->ia_valid & ATTR_UID) && !uid_eq(iap->ia_uid, inode->i_uid)) ||
((iap->ia_valid & ATTR_GID) && !gid_eq(iap->ia_gid, inode->i_gid)))) {
iap->ia_valid |= ATTR_KILL_PRIV;
if (iap->ia_valid & ATTR_MODE) {
/* we're setting mode too, just clear the s*id bits */
iap->ia_mode &= ~S_ISUID;
if (iap->ia_mode & S_IXGRP)
iap->ia_mode &= ~S_ISGID;
} else {
/* set ATTR_KILL_* bits and let VFS handle it */
iap->ia_valid |= (ATTR_KILL_SUID | ATTR_KILL_SGID);
}
}

/* Change the attributes. */

iap->ia_valid |= ATTR_CTIME;

err = nfserr_notsync;
if (!check_guard || guardtime == inode->i_ctime.tv_sec) {
host_err = nfsd_break_lease(inode);
if (host_err)
goto out_nfserr;
fh_lock(fhp);

host_err = notify_change(dentry, iap);
err = nfserrno(host_err);
fh_unlock(fhp);
if (check_guard && guardtime != inode->i_ctime.tv_sec) {
err = nfserr_notsync;
goto out_put_write_access;
}

host_err = nfsd_break_lease(inode);
if (host_err)
goto out_put_write_access_nfserror;

fh_lock(fhp);
host_err = notify_change(dentry, iap);
fh_unlock(fhp);

out_put_write_access_nfserror:
err = nfserrno(host_err);
out_put_write_access:
if (size_change)
put_write_access(inode);
if (!err)
commit_metadata(fhp);
out:
return err;

out_nfserr:
err = nfserrno(host_err);
goto out;
}

#if defined(CONFIG_NFSD_V2_ACL) || \
@@ -99,9 +99,6 @@ extern void setup_new_exec(struct linux_binprm * bprm);
extern void would_dump(struct linux_binprm *, struct file *);

extern int suid_dumpable;
#define SUID_DUMP_DISABLE 0 /* No setuid dumping */
#define SUID_DUMP_USER 1 /* Dump as user of process */
#define SUID_DUMP_ROOT 2 /* Dump as root */

/* Stack area protections */
#define EXSTACK_DEFAULT 0 /* Whatever the arch defaults to */
@@ -456,7 +456,8 @@ enum dmi_field {
};

struct dmi_strmatch {
unsigned char slot;
unsigned char slot:7;
unsigned char exact_match:1;
char substr[79];
};

@@ -474,7 +475,8 @@ struct dmi_system_id {
*/
#define dmi_device_id dmi_system_id

#define DMI_MATCH(a, b) { a, b }
#define DMI_MATCH(a, b) { .slot = a, .substr = b }
#define DMI_EXACT_MATCH(a, b) { .slot = a, .substr = b, .exact_match = 1 }

#define PLATFORM_NAME_SIZE 20
#define PLATFORM_MODULE_PREFIX "platform:"
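A sketch of how a driver would consume the new DMI_EXACT_MATCH macro; the table, ident, and strings below are hypothetical, but the dmi_system_id/dmi_check_system pattern is the standard one:

    static int example_dmi_callback(const struct dmi_system_id *id)
    {
            pr_info("DMI match: %s\n", id->ident);
            return 1;
    }

    static const struct dmi_system_id example_dmi_table[] = {
            {
                    .callback = example_dmi_callback,
                    .ident = "Hypothetical Board",
                    .matches = {
                            DMI_MATCH(DMI_SYS_VENDOR, "Some Vendor"),
                            /* new in this patch: whole-string, not substring */
                            DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "SV-1000"),
                    },
            },
            { }     /* terminator */
    };

    /* dmi_check_system(example_dmi_table) runs the callback on a match. */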
@@ -332,6 +332,10 @@ static inline void arch_pick_mmap_layout(struct mm_struct *mm) {}
extern void set_dumpable(struct mm_struct *mm, int value);
extern int get_dumpable(struct mm_struct *mm);

#define SUID_DUMP_DISABLE 0 /* No setuid dumping */
#define SUID_DUMP_USER 1 /* Dump as user of process */
#define SUID_DUMP_ROOT 2 /* Dump as root */

/* mm flags */
/* dumpable bits */
#define MMF_DUMPABLE 0 /* core dump is permitted */
@@ -2485,34 +2489,98 @@ static inline int tsk_is_polling(struct task_struct *p)
{
return task_thread_info(p)->status & TS_POLLING;
}
static inline void current_set_polling(void)
static inline void __current_set_polling(void)
{
current_thread_info()->status |= TS_POLLING;
}

static inline void current_clr_polling(void)
static inline bool __must_check current_set_polling_and_test(void)
{
__current_set_polling();

/*
* Polling state must be visible before we test NEED_RESCHED,
* paired by resched_task()
*/
smp_mb();

return unlikely(tif_need_resched());
}

static inline void __current_clr_polling(void)
{
current_thread_info()->status &= ~TS_POLLING;
smp_mb__after_clear_bit();
}

static inline bool __must_check current_clr_polling_and_test(void)
{
__current_clr_polling();

/*
* Polling state must be visible before we test NEED_RESCHED,
* paired by resched_task()
*/
smp_mb();

return unlikely(tif_need_resched());
}
#elif defined(TIF_POLLING_NRFLAG)
static inline int tsk_is_polling(struct task_struct *p)
{
return test_tsk_thread_flag(p, TIF_POLLING_NRFLAG);
}
static inline void current_set_polling(void)

static inline void __current_set_polling(void)
{
set_thread_flag(TIF_POLLING_NRFLAG);
}

static inline void current_clr_polling(void)
static inline bool __must_check current_set_polling_and_test(void)
{
__current_set_polling();

/*
* Polling state must be visible before we test NEED_RESCHED,
* paired by resched_task()
*
* XXX: assumes set/clear bit are identical barrier wise.
*/
smp_mb__after_clear_bit();

return unlikely(tif_need_resched());
}

static inline void __current_clr_polling(void)
{
clear_thread_flag(TIF_POLLING_NRFLAG);
}

static inline bool __must_check current_clr_polling_and_test(void)
{
__current_clr_polling();

/*
* Polling state must be visible before we test NEED_RESCHED,
* paired by resched_task()
*/
smp_mb__after_clear_bit();

return unlikely(tif_need_resched());
}

#else
static inline int tsk_is_polling(struct task_struct *p) { return 0; }
static inline void current_set_polling(void) { }
static inline void current_clr_polling(void) { }
static inline void __current_set_polling(void) { }
static inline void __current_clr_polling(void) { }

static inline bool __must_check current_set_polling_and_test(void)
{
return unlikely(tif_need_resched());
}
static inline bool __must_check current_clr_polling_and_test(void)
{
return unlikely(tif_need_resched());
}
#endif

/*
@@ -107,6 +107,8 @@ static inline int test_ti_thread_flag(struct thread_info *ti, int flag)
#define set_need_resched() set_thread_flag(TIF_NEED_RESCHED)
#define clear_need_resched() clear_thread_flag(TIF_NEED_RESCHED)

#define tif_need_resched() test_thread_flag(TIF_NEED_RESCHED)

#if defined TIF_RESTORE_SIGMASK && !defined HAVE_SET_RESTORE_SIGMASK
/*
* An arch can define its own version of set_restore_sigmask() to get the
@@ -165,6 +165,7 @@ static inline struct inet6_dev *ip6_dst_idev(struct dst_entry *dst)
static inline void rt6_clean_expires(struct rt6_info *rt)
{
rt->rt6i_flags &= ~RTF_EXPIRES;
rt->dst.expires = 0;
}

static inline void rt6_set_expires(struct rt6_info *rt, unsigned long expires)
@@ -113,7 +113,7 @@ struct ip_tunnel *ip_tunnel_lookup(struct ip_tunnel_net *itn,
__be32 key);

int ip_tunnel_rcv(struct ip_tunnel *tunnel, struct sk_buff *skb,
const struct tnl_ptk_info *tpi, bool log_ecn_error);
const struct tnl_ptk_info *tpi, int hdr_len, bool log_ecn_error);
int ip_tunnel_changelink(struct net_device *dev, struct nlattr *tb[],
struct ip_tunnel_parm *p);
int ip_tunnel_newlink(struct net_device *dev, struct nlattr *tb[],
@@ -171,4 +171,13 @@ static inline void snd_compr_fragment_elapsed(struct snd_compr_stream *stream)
wake_up(&stream->runtime->sleep);
}

static inline void snd_compr_drain_notify(struct snd_compr_stream *stream)
{
if (snd_BUG_ON(!stream))
return;

stream->runtime->state = SNDRV_PCM_STATE_SETUP;
wake_up(&stream->runtime->sleep);
}

#endif
@@ -579,6 +579,55 @@ TRACE_EVENT(sched_task_usage_ratio,
__entry->ratio)
);

/*
* Tracepoint for HMP (CONFIG_SCHED_HMP) task migrations,
* marking the forced transition of runnable or running tasks.
*/
TRACE_EVENT(sched_hmp_migrate_force_running,

TP_PROTO(struct task_struct *tsk, int running),

TP_ARGS(tsk, running),

TP_STRUCT__entry(
__array(char, comm, TASK_COMM_LEN)
__field(int, running)
),

TP_fast_assign(
memcpy(__entry->comm, tsk->comm, TASK_COMM_LEN);
__entry->running = running;
),

TP_printk("running=%d comm=%s",
__entry->running, __entry->comm)
);

/*
* Tracepoint for HMP (CONFIG_SCHED_HMP) task migrations,
* marking the forced transition of runnable or running
* tasks when a task is about to go idle.
*/
TRACE_EVENT(sched_hmp_migrate_idle_running,

TP_PROTO(struct task_struct *tsk, int running),

TP_ARGS(tsk, running),

TP_STRUCT__entry(
__array(char, comm, TASK_COMM_LEN)
__field(int, running)
),

TP_fast_assign(
memcpy(__entry->comm, tsk->comm, TASK_COMM_LEN);
__entry->running = running;
),

TP_printk("running=%d comm=%s",
__entry->running, __entry->comm)
);

/*
* Tracepoint for HMP (CONFIG_SCHED_HMP) task migrations.
*/
@@ -425,13 +425,15 @@ struct perf_event_mmap_page {
/*
* Control data for the mmap() data buffer.
*
* User-space reading the @data_head value should issue an rmb(), on
* SMP capable platforms, after reading this value -- see
* perf_event_wakeup().
* User-space reading the @data_head value should issue an smp_rmb(),
* after reading this value.
*
* When the mapping is PROT_WRITE the @data_tail value should be
* written by userspace to reflect the last read data. In this case
* the kernel will not over-write unread data.
* written by userspace to reflect the last read data, after issueing
* an smp_mb() to separate the data read from the ->data_tail store.
* In this case the kernel will not over-write unread data.
*
* See perf_output_put_handle() for the data ordering.
*/
__u64 data_head; /* head in the data section */
__u64 data_tail; /* user-space written tail */
@@ -22,7 +22,7 @@
#include <drm/drm_mode.h>

#define ADF_NAME_LEN 32
#define ADF_MAX_CUSTOM_DATA_SIZE PAGE_SIZE
#define ADF_MAX_CUSTOM_DATA_SIZE 4096

enum adf_interface_type {
ADF_INTF_DSI = 0,
@@ -126,7 +126,7 @@ struct adf_buffer_config {

__s64 acquire_fence;
};
#define ADF_MAX_BUFFERS (PAGE_SIZE / sizeof(struct adf_buffer_config))
#define ADF_MAX_BUFFERS (4096 / sizeof(struct adf_buffer_config))

/**
* struct adf_post_config - request to flip to a new set of buffers
@@ -152,7 +152,7 @@ struct adf_post_config {

__s64 complete_fence;
};
#define ADF_MAX_INTERFACES (PAGE_SIZE / sizeof(__u32))
#define ADF_MAX_INTERFACES (4096 / sizeof(__u32))

/**
* struct adf_simple_buffer_allocate - request to allocate a "simple" buffer
@@ -233,7 +233,7 @@ struct adf_device_data {
size_t custom_data_size;
void __user *custom_data;
};
#define ADF_MAX_ATTACHMENTS (PAGE_SIZE / sizeof(struct adf_attachment))
#define ADF_MAX_ATTACHMENTS (4096 / sizeof(struct adf_attachment_config))

/**
* struct adf_device_data - describes a display interface
@@ -273,7 +273,7 @@ struct adf_interface_data {
size_t custom_data_size;
void __user *custom_data;
};
#define ADF_MAX_MODES (PAGE_SIZE / sizeof(struct drm_mode_modeinfo))
#define ADF_MAX_MODES (4096 / sizeof(struct drm_mode_modeinfo))

/**
* struct adf_overlay_engine_data - describes an overlay engine
@@ -293,7 +293,7 @@ struct adf_overlay_engine_data {
size_t custom_data_size;
void __user *custom_data;
};
#define ADF_MAX_SUPPORTED_FORMATS (PAGE_SIZE / sizeof(__u32))
#define ADF_MAX_SUPPORTED_FORMATS (4096 / sizeof(__u32))

#define ADF_SET_EVENT _IOW('D', 0, struct adf_set_event)
#define ADF_BLANK _IOW('D', 1, __u8)
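The PAGE_SIZE-to-4096 substitutions above are about userspace visibility: this is a userspace-facing header, and PAGE_SIZE is a kernel-internal, architecture-dependent macro that user programs do not see. A hypothetical user-space consumer shows why a literal bound is needed (the struct here is a stub standing in for adf_buffer_config, only to make the arithmetic concrete):

    #include <stdio.h>
    #include <stdint.h>

    /* stand-in for struct adf_buffer_config from the header */
    struct adf_buffer_config_stub { uint64_t fields[8]; };

    #define ADF_MAX_BUFFERS (4096 / sizeof(struct adf_buffer_config_stub))

    int main(void)
    {
            /* compiles in plain user space; (PAGE_SIZE / ...) would not */
            printf("max buffers per post: %zu\n", (size_t)ADF_MAX_BUFFERS);
            return 0;
    }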
37
ipc/shm.c
@@ -208,15 +208,18 @@ static void shm_open(struct vm_area_struct *vma)
*/
static void shm_destroy(struct ipc_namespace *ns, struct shmid_kernel *shp)
{
struct file *shm_file;

shm_file = shp->shm_file;
shp->shm_file = NULL;
ns->shm_tot -= (shp->shm_segsz + PAGE_SIZE - 1) >> PAGE_SHIFT;
shm_rmid(ns, shp);
shm_unlock(shp);
if (!is_file_hugepages(shp->shm_file))
shmem_lock(shp->shm_file, 0, shp->mlock_user);
if (!is_file_hugepages(shm_file))
shmem_lock(shm_file, 0, shp->mlock_user);
else if (shp->mlock_user)
user_shm_unlock(file_inode(shp->shm_file)->i_size,
shp->mlock_user);
fput (shp->shm_file);
user_shm_unlock(file_inode(shm_file)->i_size, shp->mlock_user);
fput(shm_file);
ipc_rcu_putref(shp, shm_rcu_free);
}

@@ -974,15 +977,25 @@ SYSCALL_DEFINE3(shmctl, int, shmid, int, cmd, struct shmid_ds __user *, buf)
ipc_lock_object(&shp->shm_perm);
if (!ns_capable(ns->user_ns, CAP_IPC_LOCK)) {
kuid_t euid = current_euid();
err = -EPERM;
if (!uid_eq(euid, shp->shm_perm.uid) &&
!uid_eq(euid, shp->shm_perm.cuid))
!uid_eq(euid, shp->shm_perm.cuid)) {
err = -EPERM;
goto out_unlock0;
if (cmd == SHM_LOCK && !rlimit(RLIMIT_MEMLOCK))
}
if (cmd == SHM_LOCK && !rlimit(RLIMIT_MEMLOCK)) {
err = -EPERM;
goto out_unlock0;
}
}

shm_file = shp->shm_file;

/* check if shm_destroy() is tearing down shp */
if (shm_file == NULL) {
err = -EIDRM;
goto out_unlock0;
}

if (is_file_hugepages(shm_file))
goto out_unlock0;

@@ -1101,6 +1114,14 @@ long do_shmat(int shmid, char __user *shmaddr, int shmflg, ulong *raddr,
goto out_unlock;

ipc_lock_object(&shp->shm_perm);

/* check if shm_destroy() is tearing down shp */
if (shp->shm_file == NULL) {
ipc_unlock_object(&shp->shm_perm);
err = -EIDRM;
goto out_unlock;
}

path = shp->shm_file->f_path;
path_get(&path);
shp->shm_nattch++;
@@ -44,7 +44,7 @@ static inline int cpu_idle_poll(void)
rcu_idle_enter();
trace_cpu_idle_rcuidle(0, smp_processor_id());
local_irq_enable();
while (!need_resched())
while (!tif_need_resched())
cpu_relax();
trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, smp_processor_id());
rcu_idle_exit();
@@ -92,8 +92,7 @@ static void cpu_idle_loop(void)
if (cpu_idle_force_poll || tick_check_broadcast_expired()) {
cpu_idle_poll();
} else {
current_clr_polling();
if (!need_resched()) {
if (!current_clr_polling_and_test()) {
stop_critical_timings();
rcu_idle_enter();
arch_cpu_idle();
@@ -103,7 +102,7 @@ static void cpu_idle_loop(void)
} else {
local_irq_enable();
}
current_set_polling();
__current_set_polling();
}
arch_cpu_idle_exit();
}
@@ -129,7 +128,7 @@ void cpu_startup_entry(enum cpuhp_state state)
*/
boot_init_stack_canary();
#endif
current_set_polling();
__current_set_polling();
arch_cpu_idle_prepare();
cpu_idle_loop();
}
@@ -87,10 +87,31 @@ again:
goto out;

/*
* Publish the known good head. Rely on the full barrier implied
* by atomic_dec_and_test() order the rb->head read and this
* write.
* Since the mmap() consumer (userspace) can run on a different CPU:
*
*   kernel                            user
*
*   READ ->data_tail                  READ ->data_head
*   smp_mb()   (A)                    smp_rmb()  (C)
*   WRITE $data                       READ $data
*   smp_wmb()  (B)                    smp_mb()   (D)
*   STORE ->data_head                 WRITE ->data_tail
*
* Where A pairs with D, and B pairs with C.
*
* I don't think A needs to be a full barrier because we won't in fact
* write data until we see the store from userspace. So we simply don't
* issue the data WRITE until we observe it. Be conservative for now.
*
* OTOH, D needs to be a full barrier since it separates the data READ
* from the tail WRITE.
*
* For B a WMB is sufficient since it separates two WRITEs, and for C
* an RMB is sufficient since it separates two READs.
*
* See perf_output_begin().
*/
smp_wmb();
rb->user_page->data_head = head;

/*
@@ -154,9 +175,11 @@ int perf_output_begin(struct perf_output_handle *handle,
* Userspace could choose to issue a mb() before updating the
* tail pointer. So that all reads will be completed before the
* write is issued.
*
* See perf_output_put_handle().
*/
tail = ACCESS_ONCE(rb->user_page->data_tail);
smp_rmb();
smp_mb();
offset = head = local_read(&rb->head);
head += size;
if (unlikely(!perf_output_space(rb, tail, offset, head)))
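The comment added above specifies both sides of the protocol, but only the kernel side appears in this file. A sketch of the matching user-space reader, with a GCC builtin standing in for the kernel barrier macros (the struct is a stub for the head/tail words of struct perf_event_mmap_page):

    #include <stdint.h>

    struct perf_page_stub { volatile uint64_t data_head, data_tail; };

    static uint64_t reader_load_head(struct perf_page_stub *pg)
    {
            uint64_t head = pg->data_head;
            __sync_synchronize(); /* (C): order head read before data reads */
            return head;
    }

    static void reader_store_tail(struct perf_page_stub *pg, uint64_t tail)
    {
            __sync_synchronize(); /* (D): data reads complete before tail store */
            pg->data_tail = tail;
    }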
@@ -257,7 +257,8 @@ ok:
if (task->mm)
dumpable = get_dumpable(task->mm);
rcu_read_lock();
if (!dumpable && !ptrace_has_cap(__task_cred(task)->user_ns, mode)) {
if (dumpable != SUID_DUMP_USER &&
!ptrace_has_cap(__task_cred(task)->user_ns, mode)) {
rcu_read_unlock();
return -EPERM;
}
@@ -31,7 +31,6 @@
|
||||
#include <linux/task_work.h>
|
||||
|
||||
#include <trace/events/sched.h>
|
||||
#ifdef CONFIG_HMP_VARIABLE_SCALE
|
||||
#include <linux/sysfs.h>
|
||||
#include <linux/vmalloc.h>
|
||||
#ifdef CONFIG_HMP_FREQUENCY_INVARIANT_SCALE

@@ -40,7 +39,6 @@
  */
 #include <linux/cpufreq.h>
 #endif /* CONFIG_HMP_FREQUENCY_INVARIANT_SCALE */
-#endif /* CONFIG_HMP_VARIABLE_SCALE */
 
 #include "sched.h"

@@ -1212,8 +1210,7 @@ static u32 __compute_runnable_contrib(u64 n)
 	return contrib + runnable_avg_yN_sum[n];
 }
 
-#ifdef CONFIG_HMP_VARIABLE_SCALE
-
+#ifdef CONFIG_SCHED_HMP
 #define HMP_VARIABLE_SCALE_SHIFT 16ULL
 struct hmp_global_attr {
 	struct attribute attr;

@@ -1224,6 +1221,7 @@ struct hmp_global_attr {
 	int *value;
 	int (*to_sysfs)(int);
 	int (*from_sysfs)(int);
+	ssize_t (*to_sysfs_text)(char *buf, int buf_size);
 };
 
 #define HMP_DATA_SYSFS_MAX 8

@@ -1294,7 +1292,7 @@ struct cpufreq_extents {
 
 static struct cpufreq_extents freq_scale[CONFIG_NR_CPUS];
 #endif /* CONFIG_HMP_FREQUENCY_INVARIANT_SCALE */
-#endif /* CONFIG_HMP_VARIABLE_SCALE */
+#endif /* CONFIG_SCHED_HMP */
 
 /* We can represent the historical contribution to runnable average as the
  * coefficients of a geometric series. To do this we sub-divide our runnable

@@ -1340,7 +1338,7 @@ static __always_inline int __update_entity_runnable_avg(u64 now,
 #endif /* CONFIG_HMP_FREQUENCY_INVARIANT_SCALE */
 
 	delta = now - sa->last_runnable_update;
-#ifdef CONFIG_HMP_VARIABLE_SCALE
+#ifdef CONFIG_SCHED_HMP
 	delta = hmp_variable_scale_convert(delta);
 #endif
 	/*

@@ -3843,7 +3841,6 @@ static inline void hmp_next_down_delay(struct sched_entity *se, int cpu)
 	cpu_rq(cpu)->avg.hmp_last_up_migration = 0;
 }
 
-#ifdef CONFIG_HMP_VARIABLE_SCALE
 /*
  * Heterogenous multiprocessor (HMP) optimizations
  *

@@ -3876,27 +3873,35 @@ static inline void hmp_next_down_delay(struct sched_entity *se, int cpu)
  * The scale factor hmp_data.multiplier is a fixed point
  * number: (32-HMP_VARIABLE_SCALE_SHIFT).HMP_VARIABLE_SCALE_SHIFT
  */
-static u64 hmp_variable_scale_convert(u64 delta)
+static inline u64 hmp_variable_scale_convert(u64 delta)
 {
+#ifdef CONFIG_HMP_VARIABLE_SCALE
 	u64 high = delta >> 32ULL;
 	u64 low = delta & 0xffffffffULL;
 	low *= hmp_data.multiplier;
 	high *= hmp_data.multiplier;
 	return (low >> HMP_VARIABLE_SCALE_SHIFT)
 		+ (high << (32ULL - HMP_VARIABLE_SCALE_SHIFT));
+#else
+	return delta;
+#endif
 }
 
 static ssize_t hmp_show(struct kobject *kobj,
 				struct attribute *attr, char *buf)
 {
-	ssize_t ret = 0;
 	struct hmp_global_attr *hmp_attr =
 		container_of(attr, struct hmp_global_attr, attr);
-	int temp = *(hmp_attr->value);
+	int temp;
+
+	if (hmp_attr->to_sysfs_text != NULL)
+		return hmp_attr->to_sysfs_text(buf, PAGE_SIZE);
+
+	temp = *(hmp_attr->value);
 	if (hmp_attr->to_sysfs != NULL)
 		temp = hmp_attr->to_sysfs(temp);
-	ret = sprintf(buf, "%d\n", temp);
-	return ret;
+
+	return (ssize_t)sprintf(buf, "%d\n", temp);
 }
 
 static ssize_t hmp_store(struct kobject *a, struct attribute *attr,
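
A note on hmp_variable_scale_convert() above (editorial, not part of the diff): it computes (delta * hmp_data.multiplier) >> 16 without a 128-bit intermediate by scaling the 32-bit halves of delta separately; after the split, the high half's correction shift becomes << (32 - 16). A small self-contained check of that identity; names here are illustrative:

	#include <assert.h>
	#include <stdint.h>

	#define SHIFT 16ULL	/* mirrors HMP_VARIABLE_SCALE_SHIFT */

	/* ((high * 2^32 + low) * mult) >> SHIFT, split to avoid overflow */
	static uint64_t scale(uint64_t delta, uint64_t mult)
	{
		uint64_t high = delta >> 32;
		uint64_t low = delta & 0xffffffffULL;

		return ((low * mult) >> SHIFT) + ((high * mult) << (32 - SHIFT));
	}

	int main(void)
	{
		/* small enough that the naive form also fits in 64 bits */
		uint64_t delta = 123456789ULL, mult = 3ULL << SHIFT; /* x3.0 */

		assert(scale(delta, mult) == (delta * mult) >> SHIFT);
		return 0;
	}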

@@ -3925,11 +3930,31 @@ static ssize_t hmp_store(struct kobject *a, struct attribute *attr,
 	return ret;
 }
 
+static ssize_t hmp_print_domains(char *outbuf, int outbufsize)
+{
+	char buf[64];
+	const char nospace[] = "%s", space[] = " %s";
+	const char *fmt = nospace;
+	struct hmp_domain *domain;
+	struct list_head *pos;
+	int outpos = 0;
+	list_for_each(pos, &hmp_domains) {
+		domain = list_entry(pos, struct hmp_domain, hmp_domains);
+		if (cpumask_scnprintf(buf, 64, &domain->possible_cpus)) {
+			outpos += sprintf(outbuf+outpos, fmt, buf);
+			fmt = space;
+		}
+	}
+	strcat(outbuf, "\n");
+	return outpos+1;
+}
+
+#ifdef CONFIG_HMP_VARIABLE_SCALE
 static int hmp_period_tofrom_sysfs(int value)
 {
 	return (LOAD_AVG_PERIOD << HMP_VARIABLE_SCALE_SHIFT) / value;
 }
 
+#endif
 /* max value for threshold is 1024 */
 static int hmp_theshold_from_sysfs(int value)
 {

@@ -3937,9 +3962,10 @@ static int hmp_theshold_from_sysfs(int value)
 		return -1;
 	return value;
 }
-#ifdef CONFIG_HMP_FREQUENCY_INVARIANT_SCALE
-/* freqinvar control is only 0,1 off/on */
-static int hmp_freqinvar_from_sysfs(int value)
+#if defined(CONFIG_SCHED_HMP_LITTLE_PACKING) || \
+	defined(CONFIG_HMP_FREQUENCY_INVARIANT_SCALE)
+/* toggle control is only 0,1 off/on */
+static int hmp_toggle_from_sysfs(int value)
 {
 	if (value < 0 || value > 1)
 		return -1;

@@ -3959,7 +3985,9 @@ static void hmp_attr_add(
 	const char *name,
 	int *value,
 	int (*to_sysfs)(int),
-	int (*from_sysfs)(int))
+	int (*from_sysfs)(int),
+	ssize_t (*to_sysfs_text)(char *, int),
+	umode_t mode)
 {
 	int i = 0;
 	while (hmp_data.attributes[i] != NULL) {

@@ -3967,13 +3995,17 @@ static void hmp_attr_add(
 		if (i >= HMP_DATA_SYSFS_MAX)
 			return;
 	}
-	hmp_data.attr[i].attr.mode = 0644;
+	if (mode)
+		hmp_data.attr[i].attr.mode = mode;
+	else
+		hmp_data.attr[i].attr.mode = 0644;
 	hmp_data.attr[i].show = hmp_show;
 	hmp_data.attr[i].store = hmp_store;
 	hmp_data.attr[i].attr.name = name;
 	hmp_data.attr[i].value = value;
 	hmp_data.attr[i].to_sysfs = to_sysfs;
 	hmp_data.attr[i].from_sysfs = from_sysfs;
+	hmp_data.attr[i].to_sysfs_text = to_sysfs_text;
 	hmp_data.attributes[i] = &hmp_data.attr[i].attr;
 	hmp_data.attributes[i + 1] = NULL;
 }

@@ -3982,40 +4014,59 @@ static int hmp_attr_init(void)
 {
 	int ret;
 	memset(&hmp_data, sizeof(hmp_data), 0);
+	hmp_attr_add("hmp_domains",
+		NULL,
+		NULL,
+		NULL,
+		hmp_print_domains,
+		0444);
+	hmp_attr_add("up_threshold",
+		&hmp_up_threshold,
+		NULL,
+		hmp_theshold_from_sysfs,
+		NULL,
+		0);
+	hmp_attr_add("down_threshold",
+		&hmp_down_threshold,
+		NULL,
+		hmp_theshold_from_sysfs,
+		NULL,
+		0);
+#ifdef CONFIG_HMP_VARIABLE_SCALE
 	/* by default load_avg_period_ms == LOAD_AVG_PERIOD
 	 * meaning no change
 	 */
 	hmp_data.multiplier = hmp_period_tofrom_sysfs(LOAD_AVG_PERIOD);
 
 	hmp_attr_add("load_avg_period_ms",
 		&hmp_data.multiplier,
 		hmp_period_tofrom_sysfs,
-		hmp_period_tofrom_sysfs);
-	hmp_attr_add("up_threshold",
-		&hmp_up_threshold,
+		hmp_period_tofrom_sysfs,
 		NULL,
-		hmp_theshold_from_sysfs);
-	hmp_attr_add("down_threshold",
-		&hmp_down_threshold,
-		NULL,
-		hmp_theshold_from_sysfs);
+		0);
 #endif
 #ifdef CONFIG_HMP_FREQUENCY_INVARIANT_SCALE
 	/* default frequency-invariant scaling ON */
 	hmp_data.freqinvar_load_scale_enabled = 1;
 	hmp_attr_add("frequency_invariant_load_scale",
 		&hmp_data.freqinvar_load_scale_enabled,
 		NULL,
-		hmp_freqinvar_from_sysfs);
+		hmp_toggle_from_sysfs,
+		NULL,
+		0);
 #endif
 #ifdef CONFIG_SCHED_HMP_LITTLE_PACKING
 	hmp_attr_add("packing_enable",
 		&hmp_packing_enabled,
 		NULL,
-		hmp_freqinvar_from_sysfs);
+		hmp_toggle_from_sysfs,
+		NULL,
+		0);
 	hmp_attr_add("packing_limit",
 		&hmp_full_threshold,
 		NULL,
-		hmp_packing_from_sysfs);
+		hmp_packing_from_sysfs,
+		NULL,
+		0);
 #endif
 	hmp_data.attr_group.name = "hmp";
 	hmp_data.attr_group.attrs = hmp_data.attributes;

@@ -4024,7 +4075,6 @@ static int hmp_attr_init(void)
 	return 0;
 }
 late_initcall(hmp_attr_init);
-#endif /* CONFIG_HMP_VARIABLE_SCALE */
 /*
  * return the load of the lowest-loaded CPU in a given HMP domain
  * min_cpu optionally points to an int to receive the CPU.

@@ -6915,6 +6965,69 @@ out_unlock:
 	return 0;
 }
 
+/*
+ * Move task in a runnable state to another CPU.
+ *
+ * Tailored on 'active_load_balance_stop_cpu' with slight
+ * modification to locking and pre-transfer checks. Note
+ * rq->lock must be held before calling.
+ */
+static void hmp_migrate_runnable_task(struct rq *rq)
+{
+	struct sched_domain *sd;
+	int src_cpu = cpu_of(rq);
+	struct rq *src_rq = rq;
+	int dst_cpu = rq->push_cpu;
+	struct rq *dst_rq = cpu_rq(dst_cpu);
+	struct task_struct *p = rq->migrate_task;
+	/*
+	 * One last check to make sure nobody else is playing
+	 * with the source rq.
+	 */
+	if (src_rq->active_balance)
+		return;
+
+	if (src_rq->nr_running <= 1)
+		return;
+
+	if (task_rq(p) != src_rq)
+		return;
+	/*
+	 * Not sure if this applies here but one can never
+	 * be too cautious
+	 */
+	BUG_ON(src_rq == dst_rq);
+
+	double_lock_balance(src_rq, dst_rq);
+
+	rcu_read_lock();
+	for_each_domain(dst_cpu, sd) {
+		if (cpumask_test_cpu(src_cpu, sched_domain_span(sd)))
+			break;
+	}
+
+	if (likely(sd)) {
+		struct lb_env env = {
+			.sd		= sd,
+			.dst_cpu	= dst_cpu,
+			.dst_rq		= dst_rq,
+			.src_cpu	= src_cpu,
+			.src_rq		= src_rq,
+			.idle		= CPU_IDLE,
+		};
+
+		schedstat_inc(sd, alb_count);
+
+		if (move_specific_task(&env, p))
+			schedstat_inc(sd, alb_pushed);
+		else
+			schedstat_inc(sd, alb_failed);
+	}
+
+	rcu_read_unlock();
+	double_unlock_balance(src_rq, dst_rq);
+}
+
 static DEFINE_SPINLOCK(hmp_force_migration);
 
 /*

@@ -6927,13 +7040,14 @@ static void hmp_force_up_migration(int this_cpu)
 	struct sched_entity *curr, *orig;
 	struct rq *target;
 	unsigned long flags;
-	unsigned int force;
+	unsigned int force, got_target;
 	struct task_struct *p;
 
 	if (!spin_trylock(&hmp_force_migration))
 		return;
 	for_each_online_cpu(cpu) {
 		force = 0;
+		got_target = 0;
 		target = cpu_rq(cpu);
 		raw_spin_lock_irqsave(&target->lock, flags);
 		curr = target->cfs.curr;

@@ -6956,15 +7070,14 @@ static void hmp_force_up_migration(int this_cpu)
 		if (hmp_up_migration(cpu, &target_cpu, curr)) {
 			if (!target->active_balance) {
+				get_task_struct(p);
 				target->active_balance = 1;
 				target->push_cpu = target_cpu;
 				target->migrate_task = p;
-				force = 1;
+				got_target = 1;
 				trace_sched_hmp_migrate(p, target->push_cpu, HMP_MIGRATE_FORCE);
 				hmp_next_up_delay(&p->se, target->push_cpu);
 			}
 		}
-		if (!force && !target->active_balance) {
+		if (!got_target && !target->active_balance) {
 			/*
 			 * For now we just check the currently running task.
 			 * Selecting the lightest task for offloading will

@@ -6975,14 +7088,29 @@ static void hmp_force_up_migration(int this_cpu)
 			target->push_cpu = hmp_offload_down(cpu, curr);
 			if (target->push_cpu < NR_CPUS) {
+				get_task_struct(p);
 				target->active_balance = 1;
 				target->migrate_task = p;
-				force = 1;
+				got_target = 1;
 				trace_sched_hmp_migrate(p, target->push_cpu, HMP_MIGRATE_OFFLOAD);
 				hmp_next_down_delay(&p->se, target->push_cpu);
 			}
 		}
+		/*
+		 * We have a target with no active_balance. If the task
+		 * is not currently running move it, otherwise let the
+		 * CPU stopper take care of it.
+		 */
+		if (got_target && !target->active_balance) {
+			if (!task_running(target, p)) {
+				trace_sched_hmp_migrate_force_running(p, 0);
+				hmp_migrate_runnable_task(target);
+			} else {
+				target->active_balance = 1;
+				force = 1;
+			}
+		}
+
 		raw_spin_unlock_irqrestore(&target->lock, flags);
 
 		if (force)
 			stop_one_cpu_nowait(cpu_of(target),
 				hmp_active_task_migration_cpu_stop,

@@ -7002,7 +7130,7 @@ static unsigned int hmp_idle_pull(int this_cpu)
 	int cpu;
 	struct sched_entity *curr, *orig;
 	struct hmp_domain *hmp_domain = NULL;
-	struct rq *target, *rq;
+	struct rq *target = NULL, *rq;
 	unsigned long flags, ratio = 0;
 	unsigned int force = 0;
 	struct task_struct *p = NULL;

@@ -7054,14 +7182,25 @@ static unsigned int hmp_idle_pull(int this_cpu)
 	raw_spin_lock_irqsave(&target->lock, flags);
 	if (!target->active_balance && task_rq(p) == target) {
+		get_task_struct(p);
-		target->active_balance = 1;
 		target->push_cpu = this_cpu;
 		target->migrate_task = p;
-		force = 1;
 		trace_sched_hmp_migrate(p, target->push_cpu, HMP_MIGRATE_IDLE_PULL);
 		hmp_next_up_delay(&p->se, target->push_cpu);
+		/*
+		 * if the task isn't running move it right away.
+		 * Otherwise setup the active_balance mechanic and let
+		 * the CPU stopper do its job.
+		 */
+		if (!task_running(target, p)) {
+			trace_sched_hmp_migrate_idle_running(p, 0);
+			hmp_migrate_runnable_task(target);
+		} else {
+			target->active_balance = 1;
+			force = 1;
+		}
 	}
 	raw_spin_unlock_irqrestore(&target->lock, flags);
 
 	if (force) {
 		stop_one_cpu_nowait(cpu_of(target),
 			hmp_idle_pull_cpu_stop,

@@ -827,9 +827,12 @@ int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
 	if (isspace(ch)) {
 		parser->buffer[parser->idx] = 0;
 		parser->cont = false;
-	} else {
+	} else if (parser->idx < parser->size - 1) {
 		parser->cont = true;
 		parser->buffer[parser->idx++] = ch;
+	} else {
+		ret = -EINVAL;
+		goto out;
 	}
 
 	*ppos += read;

@@ -26,7 +26,7 @@ static int perf_trace_event_perm(struct ftrace_event_call *tp_event,
 {
 	/* The ftrace function trace is allowed only for root. */
 	if (ftrace_event_is_function(tp_event) &&
-	    perf_paranoid_kernel() && !capable(CAP_SYS_ADMIN))
+	    perf_paranoid_tracepoint_raw() && !capable(CAP_SYS_ADMIN))
 		return -EPERM;
 
 	/* No tracing, just counting, so no obvious leak */

@@ -1201,8 +1201,8 @@ static unsigned long kmem_cache_flags(unsigned long object_size,
 	/*
 	 * Enable debugging if selected on the kernel commandline.
 	 */
-	if (slub_debug && (!slub_debug_slabs ||
-		!strncmp(slub_debug_slabs, name, strlen(slub_debug_slabs))))
+	if (slub_debug && (!slub_debug_slabs || (name &&
+		!strncmp(slub_debug_slabs, name, strlen(slub_debug_slabs)))))
 		flags |= slub_debug;
 
 	return flags;

@@ -61,6 +61,7 @@ static int __init batadv_init(void)
 	batadv_recv_handler_init();
 
 	batadv_iv_init();
+	batadv_nc_init();
 
 	batadv_event_workqueue = create_singlethread_workqueue("bat_events");

@@ -138,7 +139,7 @@ int batadv_mesh_init(struct net_device *soft_iface)
 	if (ret < 0)
 		goto err;
 
-	ret = batadv_nc_init(bat_priv);
+	ret = batadv_nc_mesh_init(bat_priv);
 	if (ret < 0)
 		goto err;

@@ -163,7 +164,7 @@ void batadv_mesh_free(struct net_device *soft_iface)
 	batadv_vis_quit(bat_priv);
 
 	batadv_gw_node_purge(bat_priv);
-	batadv_nc_free(bat_priv);
+	batadv_nc_mesh_free(bat_priv);
 	batadv_dat_free(bat_priv);
 	batadv_bla_free(bat_priv);

@@ -34,6 +34,20 @@ static void batadv_nc_worker(struct work_struct *work);
 static int batadv_nc_recv_coded_packet(struct sk_buff *skb,
 				       struct batadv_hard_iface *recv_if);
 
+/**
+ * batadv_nc_init - one-time initialization for network coding
+ */
+int __init batadv_nc_init(void)
+{
+	int ret;
+
+	/* Register our packet type */
+	ret = batadv_recv_handler_register(BATADV_CODED,
+					   batadv_nc_recv_coded_packet);
+
+	return ret;
+}
+
 /**
  * batadv_nc_start_timer - initialise the nc periodic worker
  * @bat_priv: the bat priv with all the soft interface information

@@ -45,10 +59,10 @@ static void batadv_nc_start_timer(struct batadv_priv *bat_priv)
 }
 
 /**
- * batadv_nc_init - initialise coding hash table and start house keeping
+ * batadv_nc_mesh_init - initialise coding hash table and start house keeping
  * @bat_priv: the bat priv with all the soft interface information
 */
-int batadv_nc_init(struct batadv_priv *bat_priv)
+int batadv_nc_mesh_init(struct batadv_priv *bat_priv)
 {
 	bat_priv->nc.timestamp_fwd_flush = jiffies;
 	bat_priv->nc.timestamp_sniffed_purge = jiffies;

@@ -70,11 +84,6 @@ int batadv_nc_mesh_init(struct batadv_priv *bat_priv)
 	batadv_hash_set_lock_class(bat_priv->nc.decoding_hash,
 				   &batadv_nc_decoding_hash_lock_class_key);
 
-	/* Register our packet type */
-	if (batadv_recv_handler_register(BATADV_CODED,
-					 batadv_nc_recv_coded_packet) < 0)
-		goto err;
-
 	INIT_DELAYED_WORK(&bat_priv->nc.work, batadv_nc_worker);
 	batadv_nc_start_timer(bat_priv);

@@ -1722,12 +1731,11 @@ free_nc_packet:
 }
 
 /**
- * batadv_nc_free - clean up network coding memory
+ * batadv_nc_mesh_free - clean up network coding memory
 * @bat_priv: the bat priv with all the soft interface information
 */
-void batadv_nc_free(struct batadv_priv *bat_priv)
+void batadv_nc_mesh_free(struct batadv_priv *bat_priv)
 {
-	batadv_recv_handler_unregister(BATADV_CODED);
 	cancel_delayed_work_sync(&bat_priv->nc.work);
 
 	batadv_nc_purge_paths(bat_priv, bat_priv->nc.coding_hash, NULL);

@@ -22,8 +22,9 @@
 
 #ifdef CONFIG_BATMAN_ADV_NC
 
-int batadv_nc_init(struct batadv_priv *bat_priv);
-void batadv_nc_free(struct batadv_priv *bat_priv);
+int batadv_nc_init(void);
+int batadv_nc_mesh_init(struct batadv_priv *bat_priv);
+void batadv_nc_mesh_free(struct batadv_priv *bat_priv);
 void batadv_nc_update_nc_node(struct batadv_priv *bat_priv,
 			      struct batadv_orig_node *orig_node,
 			      struct batadv_orig_node *orig_neigh_node,

@@ -47,12 +48,17 @@ int batadv_nc_init_debugfs(struct batadv_priv *bat_priv);
 
 #else /* ifdef CONFIG_BATMAN_ADV_NC */
 
-static inline int batadv_nc_init(struct batadv_priv *bat_priv)
+static inline int batadv_nc_init(void)
+{
+	return 0;
+}
+
+static inline int batadv_nc_mesh_init(struct batadv_priv *bat_priv)
 {
 	return 0;
 }
 
-static inline void batadv_nc_free(struct batadv_priv *bat_priv)
+static inline void batadv_nc_mesh_free(struct batadv_priv *bat_priv)
 {
 	return;
 }

@@ -40,7 +40,7 @@ again:
 		struct iphdr _iph;
 ip:
 		iph = skb_header_pointer(skb, nhoff, sizeof(_iph), &_iph);
-		if (!iph)
+		if (!iph || iph->ihl < 5)
 			return false;
 
 		if (ip_is_fragment(iph))
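
A note on the flow-dissector hunk above (editorial, not part of the diff): iph->ihl counts 32-bit words, and the fixed IPv4 header is 20 bytes, so any value below 5 describes a header that overlaps itself and must be rejected before ihl is used to locate the payload. The same validation on a raw buffer, as an illustrative helper rather than kernel code:

	#include <stdbool.h>
	#include <stddef.h>
	#include <stdint.h>

	static bool ipv4_header_plausible(const uint8_t *pkt, size_t len)
	{
		if (len < 20)
			return false;	/* truncated: no room for a minimal header */
		if ((pkt[0] >> 4) != 4)
			return false;	/* version nibble must be 4 */
		if ((pkt[0] & 0x0f) < 5)
			return false;	/* ihl < 5 words (20 bytes): malformed */
		return true;
	}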

Some files were not shown because too many files have changed in this diff