Mirror of https://github.com/hardkernel/linux.git (synced 2026-03-31 18:23:00 +09:00)
Revert "ANDROID: sched/tune: Initialize raw_spin_lock in boosted_groups"
This reverts commit c5616f2f87.
If we re-init the per-cpu boostgroup spinlock every time that
we add a new boosted cgroup, we can easily wipe out (reinit)
a spinlock struct while in a critical section. We should only
be setting up the per-cpu boostgroup data, and the spin_lock
initialization need only happen once - which we're already
doing in a postcore_initcall.
For example:
-------- CPU 0 --------    | -------- CPU 1 --------
cgroupX boost group added  |
schedtune_enqueue_task     |
acquires(bg->lock)         | cgroupY boost group added
                           | for_each_cpu()
                           |   raw_spin_lock_init(bg->lock)
releases(bg->lock)         |
BUG (already unlocked)     |
                           |
This results in the following BUG from the debug spinlock code:
BUG: spinlock already unlocked on CPU#5, rcuop/6/68
Bug: 32668852
Change-Id: I3016702780b461a0cd95e26c538cd18df27d6316
Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>
Committed by: Todd Kjos
Parent: 71184089b8
Commit: f6bec4e8c7
@@ -647,7 +647,6 @@ schedtune_boostgroup_init(struct schedtune *st)
 		bg = &per_cpu(cpu_boost_groups, cpu);
 		bg->group[st->idx].boost = 0;
 		bg->group[st->idx].tasks = 0;
-		raw_spin_lock_init(&bg->lock);
 	}

 	return 0;