The current implementation of the energy-aware wake-up path relies on
find_best_target() to select an ordered list of CPU candidates for task
placement. The first candidate in the list that saves energy is then
chosen, and all the others are disregarded to avoid the overhead of an
expensive energy_diff.
With the recent refactoring of select_energy_cpu_idx(), the cost of
exploring multiple CPUs has been reduced, hence offering the opportunity
to select the most energy-efficient candidate at a lower cost. This
commit seizes that opportunity by allowing select_energy_cpu_idx()'s
behaviour to be changed so that it ignores the order of CPUs returned by
find_best_target() and picks the best candidate energy-wise.
As this functionality is still considered experimental, it is hidden
behind a sched_feature named FBT_STRICT_ORDER (like the equivalent
feature in Android 4.14) which defaults to true, hence keeping the
current behaviour by default.
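
Roughly, the selection loop changes shape as follows. This is a
standalone sketch, not the kernel code: the arrays and the boolean flag
stand in for find_best_target()'s candidate list, energy_diff() and the
sched_feat(FBT_STRICT_ORDER) check.

#include <stdbool.h>

/*
 * candidates[]: ordered CPU list from find_best_target()
 * nrg_diff[]:   energy delta of placing the task on each candidate;
 *               a negative value means energy is saved
 */
int select_energy_cpu(const int *candidates, const int *nrg_diff,
                      int n, int prev_cpu, bool fbt_strict_order)
{
    int best_cpu = prev_cpu;
    int best_nrg = 0;
    int i;

    for (i = 0; i < n; i++) {
        if (nrg_diff[i] >= best_nrg)
            continue;
        best_cpu = candidates[i];
        best_nrg = nrg_diff[i];
        /* FBT_STRICT_ORDER: honour find_best_target()'s ordering
         * and stop at the first energy-saving candidate. */
        if (fbt_strict_order)
            break;
    }
    return best_cpu;
}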
Change-Id: I0cb833bfec1a4a053eddaff1652c0b6cad554f97
Suggested-by: Patrick Bellasi <patrick.bellasi@arm.com>
Suggested-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Quentin Perret <quentin.perret@arm.com>
We have the ability to track minimum capacity forced onto a CPU by
userspace or external actors. This is provided through a minimum
frequency scale factor exposed by arch_scale_min_freq_capacity.
The use of this information is enabled through the MIN_CAPACITY_CAPPING
feature. If not enabled, the minimum frequency scale factor will
remain 0 and it will not impact energy estimation or scheduling
decisions.
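
A minimal sketch of how the scale factor could feed into capacity
estimation; apart from the arch_scale_min_freq_capacity() name, the
helpers and the per-CPU array here are illustrative.

/*
 * Toy model: a per-CPU minimum frequency scale factor, 0 by default,
 * set by userspace or external actors.
 */
static unsigned long min_freq_scale[8]; /* illustrative: 8 CPUs */

static unsigned long arch_scale_min_freq_capacity(int cpu)
{
    return min_freq_scale[cpu];
}

/* Floor the estimated utilization at the enforced minimum capacity,
 * so energy estimation accounts for the capped frequency range. */
unsigned long min_capped_util(int cpu, unsigned long util)
{
    unsigned long min_cap = arch_scale_min_freq_capacity(cpu);

    return util > min_cap ? util : min_cap;
}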
Change-Id: Ibc61f2bf4fddf186695b72b262e602a6e8bfde37
Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com>
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Changes in 4.9.69
usb: gadget: udc: renesas_usb3: fix number of the pipes
can: ti_hecc: Fix napi poll return value for repoll
can: kvaser_usb: free buf in error paths
can: kvaser_usb: Fix comparison bug in kvaser_usb_read_bulk_callback()
can: kvaser_usb: ratelimit errors if incomplete messages are received
can: kvaser_usb: cancel urb on -EPIPE and -EPROTO
can: ems_usb: cancel urb on -EPIPE and -EPROTO
can: esd_usb2: cancel urb on -EPIPE and -EPROTO
can: usb_8dev: cancel urb on -EPIPE and -EPROTO
virtio: release virtio index when fail to device_register
hv: kvp: Avoid reading past allocated blocks from KVP file
isa: Prevent NULL dereference in isa_bus driver callbacks
scsi: dma-mapping: always provide dma_get_cache_alignment
scsi: use dma_get_cache_alignment() as minimum DMA alignment
scsi: libsas: align sata_device's rps_resp on a cacheline
efi: Move some sysfs files to be read-only by root
efi/esrt: Use memunmap() instead of kfree() to free the remapping
ASN.1: fix out-of-bounds read when parsing indefinite length item
ASN.1: check for error from ASN1_OP_END__ACT actions
KEYS: add missing permission check for request_key() destination
X.509: reject invalid BIT STRING for subjectPublicKey
X.509: fix comparisons of ->pkey_algo
x86/PCI: Make broadcom_postcore_init() check acpi_disabled
KVM: x86: fix APIC page invalidation
btrfs: fix missing error return in btrfs_drop_snapshot
ALSA: pcm: prevent UAF in snd_pcm_info
ALSA: seq: Remove spurious WARN_ON() at timer check
ALSA: usb-audio: Fix out-of-bound error
ALSA: usb-audio: Add check return value for usb_string()
iommu/vt-d: Fix scatterlist offset handling
smp/hotplug: Move step CPUHP_AP_SMPCFD_DYING to the correct place
s390: fix compat system call table
KVM: s390: Fix skey emulation permission check
powerpc/64s: Initialize ISAv3 MMU registers before setting partition table
brcmfmac: change driver unbind order of the sdio function devices
kdb: Fix handling of kallsyms_symbol_next() return value
drm/exynos: gem: Drop NONCONTIG flag for buffers allocated without IOMMU
media: dvb: i2c transfers over usb cannot be done from stack
arm64: KVM: fix VTTBR_BADDR_MASK BUG_ON off-by-one
arm: KVM: Fix VTTBR_BADDR_MASK BUG_ON off-by-one
KVM: VMX: remove I/O port 0x80 bypass on Intel hosts
KVM: arm/arm64: Fix broken GICH_ELRSR big endian conversion
KVM: arm/arm64: vgic-irqfd: Fix MSI entry allocation
KVM: arm/arm64: vgic-its: Check result of allocation before use
arm64: fpsimd: Prevent registers leaking from dead tasks
bus: arm-cci: Fix use of smp_processor_id() in preemptible context
bus: arm-ccn: Check memory allocation failure
bus: arm-ccn: Fix use of smp_processor_id() in preemptible context
bus: arm-ccn: fix module unloading Error: Removing state 147 which has instances left.
crypto: talitos - fix AEAD test failures
crypto: talitos - fix memory corruption on SEC2
crypto: talitos - fix setkey to check key weakness
crypto: talitos - fix AEAD for sha224 on non sha224 capable chips
crypto: talitos - fix use of sg_link_tbl_len
crypto: talitos - fix ctr-aes-talitos
usb: f_fs: Force Reserved1=1 in OS_DESC_EXT_COMPAT
ARM: BUG if jumping to usermode address in kernel mode
ARM: avoid faulting on qemu
thp: reduce indentation level in change_huge_pmd()
thp: fix MADV_DONTNEED vs. numa balancing race
mm: drop unused pmdp_huge_get_and_clear_notify()
Revert "drm/armada: Fix compile fail"
Revert "spi: SPI_FSL_DSPI should depend on HAS_DMA"
ARM: 8657/1: uaccess: consistently check object sizes
vti6: Don't report path MTU below IPV6_MIN_MTU.
ARM: OMAP2+: gpmc-onenand: propagate error on initialization failure
x86/selftests: Add clobbers for int80 on x86_64
x86/platform/uv/BAU: Fix HUB errors by remove initial write to sw-ack register
sched/fair: Make select_idle_cpu() more aggressive
x86/hpet: Prevent might sleep splat on resume
powerpc/64: Invalidate process table caching after setting process table
selftest/powerpc: Fix false failures for skipped tests
powerpc: Fix compiling a BE kernel with a powerpc64le toolchain
lirc: fix dead lock between open and wakeup_filter
module: set __jump_table alignment to 8
powerpc/64: Fix checksum folding in csum_add()
ARM: OMAP2+: Fix device node reference counts
ARM: OMAP2+: Release device node after it is no longer needed.
ASoC: rcar: avoid SSI_MODEx settings for SSI8
gpio: altera: Use handle_level_irq when configured as a level_high
HID: chicony: Add support for another ASUS Zen AiO keyboard
usb: gadget: configs: plug memory leak
USB: gadgetfs: Fix a potential memory leak in 'dev_config()'
usb: dwc3: gadget: Fix system suspend/resume on TI platforms
usb: gadget: pxa27x: Test for a valid argument pointer
usb: gadget: udc: net2280: Fix tmp reusage in net2280 driver
kvm: nVMX: VMCLEAR should not cause the vCPU to shut down
libata: drop WARN from protocol error in ata_sff_qc_issue()
workqueue: trigger WARN if queue_delayed_work() is called with NULL @wq
scsi: qla2xxx: Fix ql_dump_buffer
scsi: lpfc: Fix crash during Hardware error recovery on SLI3 adapters
irqchip/crossbar: Fix incorrect type of register size
KVM: nVMX: reset nested_run_pending if the vCPU is going to be reset
arm: KVM: Survive unknown traps from guests
arm64: KVM: Survive unknown traps from guests
KVM: arm/arm64: VGIC: Fix command handling while ITS being disabled
spi_ks8995: fix "BUG: key accdaa28 not in .data!"
spi_ks8995: regs_size incorrect for some devices
bnx2x: prevent crash when accessing PTP with interface down
bnx2x: fix possible overrun of VFPF multicast addresses array
bnx2x: fix detection of VLAN filtering feature for VF
bnx2x: do not rollback VF MAC/VLAN filters we did not configure
rds: tcp: Sequence teardown of listen and acceptor sockets to avoid races
ibmvnic: Fix overflowing firmware/hardware TX queue
ibmvnic: Allocate number of rx/tx buffers agreed on by firmware
ipv6: reorder icmpv6_init() and ip6_mr_init()
crypto: s5p-sss - Fix completing crypto request in IRQ handler
i2c: riic: fix restart condition
blk-mq: initialize mq kobjects in blk_mq_init_allocated_queue()
zram: set physical queue limits to avoid array out of bounds accesses
netfilter: don't track fragmented packets
axonram: Fix gendisk handling
drm/amd/amdgpu: fix console deadlock if late init failed
powerpc/powernv/ioda2: Gracefully fail if too many TCE levels requested
EDAC, i5000, i5400: Fix use of MTR_DRAM_WIDTH macro
EDAC, i5000, i5400: Fix definition of NRECMEMB register
kbuild: pkg: use --transform option to prefix paths in tar
coccinelle: fix parallel build with CHECK=scripts/coccicheck
x86/mpx/selftests: Fix up weird arrays
mac80211_hwsim: Fix memory leak in hwsim_new_radio_nl()
gre6: use log_ecn_error module parameter in ip6_tnl_rcv()
route: also update fnhe_genid when updating a route cache
route: update fnhe_expires for redirect when the fnhe exists
drivers/rapidio/devices/rio_mport_cdev.c: fix resource leak in error handling path in 'rio_dma_transfer()'
lib/genalloc.c: make the avail variable an atomic_long_t
dynamic-debug-howto: fix optional/omitted ending line number to be LARGE instead of 0
NFS: Fix a typo in nfs_rename()
sunrpc: Fix rpc_task_begin trace point
xfs: fix forgotten rcu read unlock when skipping inode reclaim
dt-bindings: usb: fix reg-property port-number range
block: wake up all tasks blocked in get_request()
sparc64/mm: set fields in deferred pages
zsmalloc: calling zs_map_object() from irq is a bug
sctp: do not free asoc when it is already dead in sctp_sendmsg
sctp: use the right sk after waking up from wait_buf sleep
bpf: fix lockdep splat
clk: uniphier: fix DAPLL2 clock rate of Pro5
atm: horizon: Fix irq release error
jump_label: Invoke jump_label_test() via early_initcall()
xfrm: Copy policy family in clone_policy
IB/mlx4: Increase maximal message size under UD QP
IB/mlx5: Assign send CQ and recv CQ of UMR QP
afs: Connect up the CB.ProbeUuid
Linux 4.9.69
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
The ENERGY_AWARE sched feature flag cannot be set unless
CONFIG_SCHED_DEBUG is enabled.
So this patch allows the flag to default to true at build time
if the config is set.
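
Concretely this boils down to deriving the feature's default value from
a Kconfig symbol, plausibly along these lines (the symbol name is an
assumption, not taken from this patch):

/* Feature default now follows the build-time config
 * (CONFIG_DEFAULT_USE_ENERGY_AWARE is an assumed symbol name). */
SCHED_FEAT(ENERGY_AWARE, IS_ENABLED(CONFIG_DEFAULT_USE_ENERGY_AWARE))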
Change-Id: I8835a571fdb7a8f8ee6a54af1e11a69f3b5ce8e6
Signed-off-by: John Stultz <john.stultz@linaro.org>
This patch introduces the ENERGY_AWARE sched feature, which is
implemented using jump labels when SCHED_DEBUG is defined. It is
statically set false when SCHED_DEBUG is not defined. Hence this doesn't
allow energy awareness to be enabled without SCHED_DEBUG. This
sched_feature knob will be replaced later with a more appropriate
control knob when things have matured a bit.
ENERGY_AWARE is based on per-entity load-tracking, hence FAIR_GROUP_SCHED
must be enabled. This dependency isn't checked at compile time yet.
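
For reference, the sched_feat() mechanism this plugs into looks roughly
like the following, simplified from kernel/sched/ internals (the
HAVE_JUMP_LABEL fallback is elided):

#ifdef CONFIG_SCHED_DEBUG
/* Runtime-toggleable via /sys/kernel/debug/sched_features; the check
 * compiles down to a jump label, so it is nearly free. */
#define sched_feat(x) (static_branch_##x(&sched_feat_keys[__SCHED_FEAT_##x]))
#else
/* Folded to a compile-time constant from the default feature mask,
 * which is where ENERGY_AWARE is statically set false. */
#define sched_feat(x) (sysctl_sched_features & (1UL << __SCHED_FEAT_##x))
#endif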
cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Andres Oportus <andresoportus@google.com>
The current load balancer may not try to prevent a task from moving
out of a preferred node to a less preferred node. The reasons for this
are:
- Since sched features NUMA and NUMA_RESIST_LOWER are disabled by
default, migrate_degrades_locality() always returns false.
- Even if NUMA_RESIST_LOWER were to be enabled, if the task is cache-hot,
migrate_degrades_locality() never gets called.
The above behaviour means that tasks can move out of their preferred
node but may eventually be brought back to it by the NUMA balancer (due
to higher NUMA faults).
To avoid the above, this commit merges migrate_degrades_locality() and
migrate_improves_locality(). It also replaces the three sched features
NUMA, NUMA_FAVOUR_HIGHER and NUMA_RESIST_LOWER with a single sched
feature, NUMA.
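
A simplified model of the merged helper's contract; the real version
consults per-task NUMA fault statistics and the NUMA sched feature,
which are reduced to plain parameters here.

#include <stdbool.h>

/*
 * Returns  1 if moving the task from src to dst degrades locality,
 *         -1 if it improves locality, and
 *          0 if it makes no difference or NUMA balancing is off.
 */
static int migrate_degrades_locality(unsigned long src_faults,
                                     unsigned long dst_faults,
                                     bool numa_enabled)
{
    if (!numa_enabled)
        return 0;
    if (dst_faults > src_faults)
        return -1;  /* improves locality: may override cache hotness */
    if (dst_faults < src_faults)
        return 1;   /* degrades locality: resist the migration */
    return 0;
}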
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mike Galbraith <efault@gmx.de>
Link: http://lkml.kernel.org/r/1434455762-30857-2-git-send-email-srikar@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
When debugging the latencies on a 40 core box, where we hit 300 to
500 microsecond latencies, I found there was a huge contention on the
runqueue locks.
Investigating it further, running ftrace, I found that it was due to
the pulling of RT tasks.
The test that was run was the following:
cyclictest --numa -p95 -m -d0 -i100
This created a thread on each CPU, that would set its wakeup in iterations
of 100 microseconds. The -d0 means that all the threads had the same
interval (100us). Each thread sleeps for 100us and wakes up and measures
its latencies.
cyclictest is maintained at:
git://git.kernel.org/pub/scm/linux/kernel/git/clrkwllms/rt-tests.git
What happened was another RT task would be scheduled on one of the CPUs
that was running our test, when the other CPU tests went to sleep and
scheduled idle. This caused the "pull" operation to execute on all
these CPUs. Each one of these saw the RT task that was overloaded on
the CPU of the test that was still running, and each one tried
to grab that task in a thundering herd way.
To grab the task, each thread would do a double rq lock grab, grabbing
its own lock as well as the rq of the overloaded CPU. As the sched
domains on this box were rather flat for its size, I saw up to 12 CPUs
block on this lock at once. This caused a ripple effect with the
rq locks especially since the taking was done via a double rq lock, which
means that several of the CPUs had their own rq locks held while trying
to take this rq lock. As these locks were blocked, any wakeups or load
balancing on these CPUs would also block on these locks, and the wait
time escalated.
I've tried various methods to lessen the load, but things like an
atomic counter to only let one CPU grab the task won't work, because
the task may have a limited affinity, and we may pick the wrong
CPU to take that lock and do the pull, to only find out that the
CPU we picked isn't in the task's affinity.
Instead of doing the PULL, I now have the CPUs that want the pull to
send over an IPI to the overloaded CPU, and let that CPU pick what
CPU to push the task to. No more need to grab the rq lock, and the
push/pull algorithm still works fine.
With this patch, the latency dropped to just 150us over a 20 hour run.
Without the patch, the huge latencies would trigger in seconds.
I've created a new sched feature called RT_PUSH_IPI, which is enabled
by default.
When RT_PUSH_IPI is not enabled, the old method of grabbing the rq locks
and having the pulling CPU do the work is implemented. When RT_PUSH_IPI
is enabled, the IPI is sent to the overloaded CPU to do a push.
To enable or disable this at run time:
# mount -t debugfs nodev /sys/kernel/debug
# echo RT_PUSH_IPI > /sys/kernel/debug/sched_features
or
# echo NO_RT_PUSH_IPI > /sys/kernel/debug/sched_features
Update: The original patch would send an IPI to all CPUs in the RT overload
list. But that could theoretically cause the reverse issue. That is, there
could be lots of overloaded RT queues and one CPU lowers its priority. It would
then send an IPI to all the overloaded RT queues and they could then all try
to grab the rq lock of the CPU lowering its priority, and then we have the
same problem.
The latest design sends out only one IPI to the first overloaded CPU. It tries to
push any tasks that it can, and then looks for the next overloaded CPU that can
push to the source CPU. The IPIs stop when all overloaded CPUs that have pushable
tasks that have priorities greater than the source CPU are covered. In case the
source CPU lowers its priority again, a flag is set to tell the IPI traversal to
restart with the first RT overloaded CPU after the source CPU.
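
In outline, one step of the chained traversal can be modelled like this.
This is a sketch only: the kernel threads this state through the root
domain and irq_work, and the two helper prototypes are illustrative.

#include <stdbool.h>

struct rto_chain {
    int  source_cpu;  /* CPU that lowered its priority */
    int  next_cpu;    /* next overloaded CPU to visit, -1 when done */
    bool restart;     /* source lowered its priority again */
};

int  next_overloaded_cpu(int after_cpu);  /* -1 when none remain */
void push_rt_tasks_on(int cpu);           /* runs in IPI context */

void rto_push_step(struct rto_chain *c)
{
    push_rt_tasks_on(c->next_cpu);  /* current stop pushes what it can */

    if (c->restart) {
        /* Priority dropped again: rewind to the first overloaded
         * CPU after the source CPU. */
        c->restart = false;
        c->next_cpu = next_overloaded_cpu(c->source_cpu);
    } else {
        c->next_cpu = next_overloaded_cpu(c->next_cpu);
    }
    /* When next_cpu is -1, every overloaded CPU with pushable tasks
     * of higher priority than the source has been covered. */
}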
Parts-suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Joern Engel <joern@purestorage.com>
Cc: Clark Williams <williams@redhat.com>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150318144946.2f3cc982@gandalf.local.home
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This patch favours moving tasks towards the NUMA node that recorded a
higher number of NUMA faults during active load balancing. Ideally this
is self-reinforcing as the longer the task runs on that node, the more
faults it should incur, causing task_numa_placement to keep the task
running on that node. In reality a big weakness is that the node's CPUs
can be overloaded
and it would be more efficient to queue tasks on an idle node and migrate
to the new node. This would require additional smarts in the balancer so
for now the balancer will simply prefer to place the task on the preferred
node for a number of PTE scans, which is controlled by the
numa_balancing_settle_count sysctl. Once the settle_count number of
scans has completed, the scheduler is free to place the task on an
alternative node if the load is imbalanced.
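
In essence the preference reduces to an arg-max over per-node fault
counters, something like the following simplified model of what
task_numa_placement computes:

/* Prefer the node on which the task recorded the most NUMA faults. */
static int preferred_node(const unsigned long *faults, int nr_nodes)
{
    int nid, best = 0;

    for (nid = 1; nid < nr_nodes; nid++)
        if (faults[nid] > faults[best])
            best = nid;
    return best;
}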
[srikar@linux.vnet.ibm.com: Fixed statistics]
Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
[ Tunable and use higher faults instead of preferred. ]
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-23-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
PTE scanning and NUMA hinting fault handling is expensive so commit
5bca2303 ("mm: sched: numa: Delay PTE scanning until a task is scheduled
on a new node") deferred the PTE scan until a task had been scheduled on
another node. The problem is that in the purely shared memory case that
this may never happen and no NUMA hinting fault information will be
captured. We are not ruling out the possibility that something better
can be done here but for now, this patch needs to be reverted and depend
entirely on the scan_delay to avoid punishing short-lived processes.
Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-16-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Pull Automatic NUMA Balancing bare-bones from Mel Gorman:
"There are three implementations for NUMA balancing, this tree
(balancenuma), numacore which has been developed in tip/master and
autonuma which is in aa.git.
In almost all respects balancenuma is the dumbest of the three because
its main impact is on the VM side with no attempt to be smart about
scheduling. In the interest of getting the ball rolling, it would be
desirable to see this much merged for 3.8 with the view to building
scheduler smarts on top and adapting the VM where required for 3.9.
The most recent set of comparisons available from different people are
mel: https://lkml.org/lkml/2012/12/9/108
mingo: https://lkml.org/lkml/2012/12/7/331
tglx: https://lkml.org/lkml/2012/12/10/437
srikar: https://lkml.org/lkml/2012/12/10/397
The results are a mixed bag. In my own tests, balancenuma does
reasonably well. It's dumb as rocks and does not regress against
mainline. On the other hand, Ingo's tests show that balancenuma is
incapable of converging for the workloads driven by perf, which is bad
but is potentially explained by the lack of scheduler smarts. Thomas'
results show balancenuma improves on mainline but falls far short of
numacore or autonuma. Srikar's results indicate we all suffer on a
large machine with imbalanced node sizes.
My own testing showed that recent numacore results have improved
dramatically, particularly in the last week but not universally.
We've butted heads heavily on system CPU usage and high levels of
migration even when it shows that overall performance is better.
There are also cases where it regresses. Of interest is that for
specjbb in some configurations it will regress for lower numbers of
warehouses and show gains for higher numbers which is not reported by
the tool by default and sometimes missed in reports. Recently I
reported for numacore that the JVM was crashing with
NullPointerExceptions but currently it's unclear what the source of
this problem is. Initially I thought it was in how numacore batch
handles PTEs but I no longer think this is the case. It's possible
numacore is just able to trigger it due to higher rates of migration.
These reports were quite late in the cycle so I/we would like to start
with this tree as it contains much of the code we can agree on and has
not changed significantly over the last 2-3 weeks."
* tag 'balancenuma-v11' of git://git.kernel.org/pub/scm/linux/kernel/git/mel/linux-balancenuma: (50 commits)
mm/rmap, migration: Make rmap_walk_anon() and try_to_unmap_anon() more scalable
mm/rmap: Convert the struct anon_vma::mutex to an rwsem
mm: migrate: Account a transhuge page properly when rate limiting
mm: numa: Account for failed allocations and isolations as migration failures
mm: numa: Add THP migration for the NUMA working set scanning fault case build fix
mm: numa: Add THP migration for the NUMA working set scanning fault case.
mm: sched: numa: Delay PTE scanning until a task is scheduled on a new node
mm: sched: numa: Control enabling and disabling of NUMA balancing if !SCHED_DEBUG
mm: sched: numa: Control enabling and disabling of NUMA balancing
mm: sched: Adapt the scanning rate if a NUMA hinting fault does not migrate
mm: numa: Use a two-stage filter to restrict pages being migrated for unlikely task<->node relationships
mm: numa: migrate: Set last_nid on newly allocated page
mm: numa: split_huge_page: Transfer last_nid on tail page
mm: numa: Introduce last_nid to the page frame
sched: numa: Slowly increase the scanning period as NUMA faults are handled
mm: numa: Rate limit setting of pte_numa if node is saturated
mm: numa: Rate limit the amount of memory that is migrated between nodes
mm: numa: Structures for Migrate On Fault per NUMA migration rate limiting
mm: numa: Migrate pages handled during a pmd_numa hinting fault
mm: numa: Migrate on reference policy
...
Due to the fact that migrations are driven by the CPU a task is running
on, there is no point tracking NUMA faults until one task runs on a new
node. This patch tracks the first node used by an address space. Until
it changes, PTE scanning is disabled and no NUMA hinting faults are
trapped. This should help workloads that are short-lived, do not care
about NUMA placement or have bound themselves to a single node.
This takes advantage of the logic in "mm: sched: numa: Implement slow
start for working set sampling" to delay when the checks are made. This
will take advantage of processes that set their CPU and node bindings
early in their lifetime. It will also potentially allow any initial load
balancing to take place.
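
A small model of the first-node tracking; the two sentinel values
mirror the mechanism described above, and all names are illustrative.

#include <stdbool.h>

#define NUMA_PTE_SCAN_INIT   (-1)
#define NUMA_PTE_SCAN_ACTIVE (-2)

struct mm_model {
    int first_nid;  /* starts at NUMA_PTE_SCAN_INIT */
};

/* Returns true when PTE scanning should run for this mm on @nid. */
bool numa_scan_allowed(struct mm_model *mm, int nid)
{
    if (mm->first_nid == NUMA_PTE_SCAN_INIT)
        mm->first_nid = nid;                  /* record first node */
    else if (mm->first_nid != nid)
        mm->first_nid = NUMA_PTE_SCAN_ACTIVE; /* ran on a new node */

    /* Disabled until the address space has run on a second node. */
    return mm->first_nid == NUMA_PTE_SCAN_ACTIVE;
}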
Signed-off-by: Mel Gorman <mgorman@suse.de>
This patch adds Kconfig options and kernel parameters to allow the
enabling and disabling of automatic NUMA balancing. The existence
of such a switch was and is very important when debugging problems
related to transparent hugepages and we should have the same for
automatic NUMA placement.
Signed-off-by: Mel Gorman <mgorman@suse.de>
NOTE: This patch is based on "sched, numa, mm: Add fault driven
placement and migration policy" but as it throws away all the policy
to just leave a basic foundation I had to drop the signed-offs-by.
This patch creates a bare-bones method for setting PTEs pte_numa in the
context of the scheduler; when such a PTE faults later, the page is
faulted onto the node the CPU is running on. In itself this does nothing
useful but any
placement policy will fundamentally depend on receiving hints on placement
from fault context and doing something intelligent about it.
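
A toy model of the two halves: scheduler context marks the PTE, fault
context consumes the hint. All names here are illustrative.

#include <stdbool.h>

struct pte_model {
    int  nid;   /* node currently backing the page */
    bool numa;  /* pte_numa: next access traps a hinting fault */
};

/* Scheduler context: mark the PTE so the next touch faults. */
void set_pte_numa(struct pte_model *pte)
{
    pte->numa = true;
}

/* Fault context: fault the page onto the node the CPU runs on and
 * hand the hint to whatever placement policy sits on top. */
void numa_hinting_fault(struct pte_model *pte, int this_nid)
{
    pte->numa = false;
    pte->nid = this_nid;
}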
Signed-off-by: Mel Gorman <mgorman@suse.de>
Acked-by: Rik van Riel <riel@redhat.com>
Commits 367456c756 ("sched: Ditch per cgroup task lists for
load-balancing") and 5d6523ebd ("sched: Fix load-balance wreckage")
left some more wreckage.
By setting loop_max unconditionally to ->nr_running, load-balancing
could take a lot of time on very long runqueues (hackbench!). So keep
the sysctl as the max limit on the number of tasks we'll iterate.
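
The loop_max part of the fix is essentially a clamp inside
load_balance(), roughly:

/* Iterate at most sysctl_sched_nr_migrate tasks per balance pass
 * instead of unconditionally walking the whole runqueue. */
env.loop_max = min(sysctl_sched_nr_migrate, busiest->nr_running);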
Furthermore, the min load filter for migration completely fails with
cgroups since inequality in per-cpu state can easily lead to such
small loads :/
Furthermore, the change to add new tasks to the tail of the queue
instead of the head seems to have some effect... not quite sure I
understand why.
Combined, these fixes solve the huge hackbench regression reported by
Tim when hackbench is run in a cgroup.
Reported-by: Tim Chen <tim.c.chen@linux.intel.com>
Acked-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/1335365763.28150.267.camel@twins
[ got rid of the CONFIG_PREEMPT tuning and made small readability edits ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
There's too many sched*.[ch] files in kernel/, give them their own
directory.
(No code changed, other than Makefile glue added.)
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>