Uladzislau Rezki (Sony) authored
Get rid of preempt_disable() and preempt_enable() when the preload is done for the split purpose. The reason is that calling spin_lock() with preemption disabled is forbidden in a CONFIG_PREEMPT_RT kernel. Therefore we no longer guarantee that a CPU is preloaded; instead, this change minimizes the cases where it is not.

For example, running a special test case that follows the preload pattern and path, with 20 "unbind" threads each doing 1000000 allocations, a CPU was not preloaded only 3.5 times per 1000000 allocations. So it can happen, but the number is negligible.

Link: http://lkml.kernel.org/r/20191009164934.10166-1-urezki@gmail.com
Fixes: 82dd23e8 ("mm/vmalloc.c: preload a CPU with one object for split purpose")
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Daniel Wagner <dwagner@suse.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Hillf Danton <hdanton@sina.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Oleksiy Avramchenko <oleksiy.avramchenko@sonymobile.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
975448e7