Commit 0c76eae0 authored by Stephen Rothwell

Merge remote-tracking branch 'livepatching/for-next'

parents 9d6d7105 e1fd0398
@@ -33,18 +33,6 @@ Description:
An attribute which indicates whether the patch is currently in
transition.
What: /sys/kernel/livepatch/<patch>/signal
Date: Nov 2017
KernelVersion: 4.15.0
Contact: live-patching@vger.kernel.org
Description:
A writable attribute that allows the administrator to affect the
course of an existing transition. Writing 1 sends a fake
signal to all remaining blocking tasks. The fake signal
means that no proper signal is delivered (there is no data in
signal pending structures). Tasks are interrupted or woken up,
and forced to change their patched state.
What: /sys/kernel/livepatch/<patch>/force
Date: Nov 2017
KernelVersion: 4.15.0
......
===================================
Atomic Replace & Cumulative Patches
===================================
There might be dependencies between livepatches. If multiple patches need
to make different changes to the same function(s), then we need to define
an order in which the patches will be installed, and the function
implementations in any newer livepatch must be built on top of the older ones.
This can become a maintenance nightmare, especially when several patches
modify the same function in different ways.

An elegant solution comes with the feature called "Atomic Replace". It allows
the creation of so-called "Cumulative Patches". These include all wanted changes
from all older livepatches and completely replace them in one transition.
Usage
-----
The atomic replace can be enabled by setting the "replace" flag in
struct klp_patch, for example:
static struct klp_patch patch = {
.mod = THIS_MODULE,
.objs = objs,
.replace = true,
};
All processes are then migrated to use only the code from the new patch.
Once the transition is finished, all older patches are automatically
disabled.
Ftrace handlers are transparently removed from functions that are no
longer modified by the new cumulative patch.
As a result, livepatch authors might maintain sources only for one
cumulative patch. This helps to keep the patch consistent while adding
or removing various fixes or features.

Users could keep only the last patch installed on the system after
the transition has finished. This helps to clearly see what code is
actually in use. Also, the livepatch might then be seen as a "normal"
module that modifies the kernel behavior. The only difference is that
it can be updated at runtime without breaking its functionality.
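
For completeness, a minimal sketch of the module side; the init/exit
function names here are illustrative only, not part of the API (compare
the test_klp_atomic_replace selftest added later in this series):

static int cumulative_patch_init(void)
{
	/* Starts the transition that atomically replaces all older patches. */
	return klp_enable_patch(&patch);
}

static void cumulative_patch_exit(void)
{
	/* The module can be unloaded once the patch was disabled via sysfs. */
}

module_init(cumulative_patch_init);
module_exit(cumulative_patch_exit);
MODULE_LICENSE("GPL");
MODULE_INFO(livepatch, "Y");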
Features
--------
The atomic replace allows:
+ Atomically revert some functions in a previous patch while
upgrading other functions.
+ Remove any performance impact caused by code redirection
for functions that are no longer patched.
+ Decrease user confusion about dependencies between livepatches.
Limitations:
------------
+ Once the operation finishes, there is no straightforward way
to reverse it and restore the replaced patches atomically.
A good practice is to set the .replace flag in any released livepatch.
Re-adding an older livepatch is then equivalent to downgrading
to that patch. This is safe as long as the livepatches do _not_ make
extra modifications in the (un)patching callbacks or in the module_init()
or module_exit() functions, see below.

Also note that the replaced patch can be removed and loaded again
only if the transition was not forced.
+ Only the (un)patching callbacks from the _new_ cumulative livepatch are
executed. Any callbacks from the replaced patches are ignored.
In other words, the cumulative patch is responsible for doing any actions
that are necessary to properly replace any older patch.
As a result, it might be dangerous to replace newer cumulative patches with
older ones. The old livepatches might not provide the necessary callbacks.
This might be seen as a limitation in some scenarios. But it makes life
easier in many others. Only the new cumulative livepatch knows what
fixes/features are added/removed and what special actions are necessary
for a smooth transition.
In any case, it would be a nightmare to think about the order of
the various callbacks and their interactions if the callbacks from all
enabled patches were called.
+ There is no special handling of shadow variables. Livepatch authors
must define their own rules for how to pass them from one cumulative
patch to the next. In particular, they should not blindly remove
them in module_exit() functions.
A good practice might be to remove shadow variables in the post-unpatch
callback. It is called only when the livepatch is properly disabled;
see the sketch below.
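
For illustration, a hedged sketch of that practice built on the existing
klp_shadow_free_all() API; the shadow variable id, the destructor, and the
callback name are made-up placeholders, and funcs is assumed to be the
patch's klp_func array:

#define SV_DEMO_ID 1	/* hypothetical shadow variable id */

static void sv_demo_dtor(void *obj, void *shadow_data)
{
	/* Release anything the shadow data owns, if needed. */
}

/*
 * Called only when the livepatch is properly disabled, so the
 * shadow variables are guaranteed to be no longer in use.
 */
static void demo_post_unpatch(struct klp_object *obj)
{
	klp_shadow_free_all(SV_DEMO_ID, sv_demo_dtor);
}

static struct klp_object objs[] = {
	{
		.funcs = funcs,
		.callbacks = { .post_unpatch = demo_post_unpatch },
	}, { }
};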
@@ -8892,6 +8892,7 @@ F: arch/x86/kernel/livepatch.c
F: Documentation/livepatch/
F: Documentation/ABI/testing/sysfs-kernel-livepatch
F: samples/livepatch/
F: tools/testing/selftests/livepatch/
L: live-patching@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jikos/livepatching.git
......
@@ -24,6 +24,7 @@
#include <linux/module.h>
#include <linux/ftrace.h>
#include <linux/completion.h>
#include <linux/list.h>
#if IS_ENABLED(CONFIG_LIVEPATCH)
@@ -40,11 +41,14 @@
* @new_func: pointer to the patched function code
* @old_sympos: a hint indicating which symbol position the old function
* can be found (optional)
* @old_addr: the address of the function being patched
* @old_func: pointer to the function being patched
* @kobj: kobject for sysfs resources
* @node: list node for klp_object func_list
* @stack_node: list node for klp_ops func_stack list
* @old_size: size of the old function
* @new_size: size of the new function
* @kobj_added: @kobj has been added and needs freeing
* @nop: temporary patch to use the original code again; dyn. allocated
* @patched: the func has been added to the klp_ops list
* @transition: the func is currently being applied or reverted
*
@@ -77,10 +81,13 @@ struct klp_func {
unsigned long old_sympos;
/* internal */
unsigned long old_addr;
void *old_func;
struct kobject kobj;
struct list_head node;
struct list_head stack_node;
unsigned long old_size, new_size;
bool kobj_added;
bool nop;
bool patched;
bool transition;
};
@@ -115,8 +122,12 @@ struct klp_callbacks {
* @funcs: function entries for functions to be patched in the object
* @callbacks: functions to be executed pre/post (un)patching
* @kobj: kobject for sysfs resources
* @func_list: dynamic list of the function entries
* @node: list node for klp_patch obj_list
* @mod: kernel module associated with the patched object
* (NULL for vmlinux)
* @kobj_added: @kobj has been added and needs freeing
* @dynamic: temporary object for nop functions; dynamically allocated
* @patched: the object's funcs have been added to the klp_ops list
*/
struct klp_object {
@@ -127,7 +138,11 @@ struct klp_object {
/* internal */
struct kobject kobj;
struct list_head func_list;
struct list_head node;
struct module *mod;
bool kobj_added;
bool dynamic;
bool patched;
};
@@ -135,35 +150,54 @@ struct klp_object {
* struct klp_patch - patch structure for live patching
* @mod: reference to the live patch module
* @objs: object entries for kernel objects to be patched
* @list: list node for global list of registered patches
* @replace: replace all actively used patches
* @list: list node for global list of actively used patches
* @kobj: kobject for sysfs resources
* @obj_list: dynamic list of the object entries
* @kobj_added: @kobj has been added and needs freeing
* @enabled: the patch is enabled (but operation may be incomplete)
* @forced: was involved in a forced transition
* @free_work: patch cleanup from workqueue-context
* @finish: for waiting till it is safe to remove the patch module
*/
struct klp_patch {
/* external */
struct module *mod;
struct klp_object *objs;
bool replace;
/* internal */
struct list_head list;
struct kobject kobj;
struct list_head obj_list;
bool kobj_added;
bool enabled;
bool forced;
struct work_struct free_work;
struct completion finish;
};
#define klp_for_each_object(patch, obj) \
#define klp_for_each_object_static(patch, obj) \
for (obj = patch->objs; obj->funcs || obj->name; obj++)
#define klp_for_each_func(obj, func) \
#define klp_for_each_object_safe(patch, obj, tmp_obj) \
list_for_each_entry_safe(obj, tmp_obj, &patch->obj_list, node)
#define klp_for_each_object(patch, obj) \
list_for_each_entry(obj, &patch->obj_list, node)
#define klp_for_each_func_static(obj, func) \
for (func = obj->funcs; \
func->old_name || func->new_func || func->old_sympos; \
func++)
int klp_register_patch(struct klp_patch *);
int klp_unregister_patch(struct klp_patch *);
#define klp_for_each_func_safe(obj, func, tmp_func) \
list_for_each_entry_safe(func, tmp_func, &obj->func_list, node)
#define klp_for_each_func(obj, func) \
list_for_each_entry(func, &obj->func_list, node)
int klp_enable_patch(struct klp_patch *);
int klp_disable_patch(struct klp_patch *);
void arch_klp_init_object_loaded(struct klp_patch *patch,
struct klp_object *obj);
......
@@ -5,6 +5,11 @@
#include <linux/livepatch.h>
extern struct mutex klp_mutex;
extern struct list_head klp_patches;
void klp_free_patch_start(struct klp_patch *patch);
void klp_discard_replaced_patches(struct klp_patch *new_patch);
void klp_discard_nops(struct klp_patch *new_patch);
static inline bool klp_is_object_loaded(struct klp_object *obj)
{
......
@@ -34,7 +34,7 @@
static LIST_HEAD(klp_ops);
struct klp_ops *klp_find_ops(unsigned long old_addr)
struct klp_ops *klp_find_ops(void *old_func)
{
struct klp_ops *ops;
struct klp_func *func;
@@ -42,7 +42,7 @@ struct klp_ops *klp_find_ops(unsigned long old_addr)
list_for_each_entry(ops, &klp_ops, node) {
func = list_first_entry(&ops->func_stack, struct klp_func,
stack_node);
if (func->old_addr == old_addr)
if (func->old_func == old_func)
return ops;
}
@@ -118,7 +118,15 @@ static void notrace klp_ftrace_handler(unsigned long ip,
}
}
/*
* NOPs are used to replace existing patches with original code.
* Do nothing! Setting pc would cause an infinite loop.
*/
if (func->nop)
goto unlock;
klp_arch_set_pc(regs, (unsigned long)func->new_func);
unlock:
preempt_enable_notrace();
}
@@ -142,17 +150,18 @@ static void klp_unpatch_func(struct klp_func *func)
if (WARN_ON(!func->patched))
return;
if (WARN_ON(!func->old_addr))
if (WARN_ON(!func->old_func))
return;
ops = klp_find_ops(func->old_addr);
ops = klp_find_ops(func->old_func);
if (WARN_ON(!ops))
return;
if (list_is_singular(&ops->func_stack)) {
unsigned long ftrace_loc;
ftrace_loc = klp_get_ftrace_location(func->old_addr);
ftrace_loc =
klp_get_ftrace_location((unsigned long)func->old_func);
if (WARN_ON(!ftrace_loc))
return;
@@ -174,17 +183,18 @@ static int klp_patch_func(struct klp_func *func)
struct klp_ops *ops;
int ret;
if (WARN_ON(!func->old_addr))
if (WARN_ON(!func->old_func))
return -EINVAL;
if (WARN_ON(func->patched))
return -EINVAL;
ops = klp_find_ops(func->old_addr);
ops = klp_find_ops(func->old_func);
if (!ops) {
unsigned long ftrace_loc;
ftrace_loc = klp_get_ftrace_location(func->old_addr);
ftrace_loc =
klp_get_ftrace_location((unsigned long)func->old_func);
if (!ftrace_loc) {
pr_err("failed to find location for function '%s'\n",
func->old_name);
@@ -236,15 +246,26 @@ static int klp_patch_func(struct klp_func *func)
return ret;
}
void klp_unpatch_object(struct klp_object *obj)
static void __klp_unpatch_object(struct klp_object *obj, bool nops_only)
{
struct klp_func *func;
klp_for_each_func(obj, func)
klp_for_each_func(obj, func) {
if (nops_only && !func->nop)
continue;
if (func->patched)
klp_unpatch_func(func);
}
obj->patched = false;
if (obj->dynamic || !nops_only)
obj->patched = false;
}
void klp_unpatch_object(struct klp_object *obj)
{
__klp_unpatch_object(obj, false);
}
int klp_patch_object(struct klp_object *obj)
@@ -267,11 +288,21 @@ int klp_patch_object(struct klp_object *obj)
return 0;
}
void klp_unpatch_objects(struct klp_patch *patch)
static void __klp_unpatch_objects(struct klp_patch *patch, bool nops_only)
{
struct klp_object *obj;
klp_for_each_object(patch, obj)
if (obj->patched)
klp_unpatch_object(obj);
__klp_unpatch_object(obj, nops_only);
}
void klp_unpatch_objects(struct klp_patch *patch)
{
__klp_unpatch_objects(patch, false);
}
void klp_unpatch_objects_dynamic(struct klp_patch *patch)
{
__klp_unpatch_objects(patch, true);
}
@@ -10,7 +10,7 @@
* struct klp_ops - structure for tracking registered ftrace ops structs
*
* A single ftrace_ops is shared between all enabled replacement functions
* (klp_func structs) which have the same old_addr. This allows the switch
* (klp_func structs) which have the same old_func. This allows the switch
* between function versions to happen instantaneously by updating the klp_ops
* struct's func_stack list. The winner is the klp_func at the top of the
* func_stack (front of the list).
@@ -25,10 +25,11 @@ struct klp_ops {
struct ftrace_ops fops;
};
struct klp_ops *klp_find_ops(unsigned long old_addr);
struct klp_ops *klp_find_ops(void *old_func);
int klp_patch_object(struct klp_object *obj);
void klp_unpatch_object(struct klp_object *obj);
void klp_unpatch_objects(struct klp_patch *patch);
void klp_unpatch_objects_dynamic(struct klp_patch *patch);
#endif /* _LIVEPATCH_PATCH_H */
@@ -29,11 +29,13 @@
#define MAX_STACK_ENTRIES 100
#define STACK_ERR_BUF_SIZE 128
#define SIGNALS_TIMEOUT 15
struct klp_patch *klp_transition_patch;
static int klp_target_state = KLP_UNDEFINED;
static bool klp_forced = false;
static unsigned int klp_signals_cnt;
/*
* This work can be performed periodically to finish patching or unpatching any
@@ -87,6 +89,11 @@ static void klp_complete_transition(void)
klp_transition_patch->mod->name,
klp_target_state == KLP_PATCHED ? "patching" : "unpatching");
if (klp_transition_patch->replace && klp_target_state == KLP_PATCHED) {
klp_discard_replaced_patches(klp_transition_patch);
klp_discard_nops(klp_transition_patch);
}
if (klp_target_state == KLP_UNPATCHED) {
/*
* All tasks have transitioned to KLP_UNPATCHED so we can now
@@ -136,13 +143,6 @@ static void klp_complete_transition(void)
pr_notice("'%s': %s complete\n", klp_transition_patch->mod->name,
klp_target_state == KLP_PATCHED ? "patching" : "unpatching");
/*
* klp_forced set implies unbounded increase of module's ref count if
* the module is disabled/enabled in a loop.
*/
if (!klp_forced && klp_target_state == KLP_UNPATCHED)
module_put(klp_transition_patch->mod);
klp_target_state = KLP_UNDEFINED;
klp_transition_patch = NULL;
}
@@ -224,11 +224,11 @@ static int klp_check_stack_func(struct klp_func *func,
* Check for the to-be-patched function
* (the previous func).
*/
ops = klp_find_ops(func->old_addr);
ops = klp_find_ops(func->old_func);
if (list_is_singular(&ops->func_stack)) {
/* original function */
func_addr = func->old_addr;
func_addr = (unsigned long)func->old_func;
func_size = func->old_size;
} else {
/* previously patched function */
@@ -347,6 +347,47 @@ static bool klp_try_switch_task(struct task_struct *task)
}
/*
* Sends a fake signal to all non-kthread tasks with TIF_PATCH_PENDING set.
* Kthreads with TIF_PATCH_PENDING set are woken up.
*/
static void klp_send_signals(void)
{
struct task_struct *g, *task;
if (klp_signals_cnt == SIGNALS_TIMEOUT)
pr_notice("signaling remaining tasks\n");
read_lock(&tasklist_lock);
for_each_process_thread(g, task) {
if (!klp_patch_pending(task))
continue;
/*
* There is a small race here. We could see TIF_PATCH_PENDING
* set and decide to wake up a kthread or send a fake signal.
* Meanwhile the task could migrate itself and the action
* would be meaningless. It is not serious though.
*/
if (task->flags & PF_KTHREAD) {
/*
* Wake up a kthread which sleeps interruptibly and
* still has not been migrated.
*/
wake_up_state(task, TASK_INTERRUPTIBLE);
} else {
/*
* Send fake signal to all non-kthread tasks which are
* still not migrated.
*/
spin_lock_irq(&task->sighand->siglock);
signal_wake_up(task, 0);
spin_unlock_irq(&task->sighand->siglock);
}
}
read_unlock(&tasklist_lock);
}
/*
* Try to switch all remaining tasks to the target patch state by walking the
* stacks of sleeping tasks and looking for any to-be-patched or
@@ -359,6 +400,7 @@ void klp_try_complete_transition(void)
{
unsigned int cpu;
struct task_struct *g, *task;
struct klp_patch *patch;
bool complete = true;
WARN_ON_ONCE(klp_target_state == KLP_UNDEFINED);
@@ -396,6 +438,10 @@ void klp_try_complete_transition(void)
put_online_cpus();
if (!complete) {
if (klp_signals_cnt && !(klp_signals_cnt % SIGNALS_TIMEOUT))
klp_send_signals();
klp_signals_cnt++;
/*
* Some tasks weren't able to be switched over. Try again
* later and/or wait for other methods like kernel exit
@@ -407,7 +453,18 @@ void klp_try_complete_transition(void)
}
/* we're done, now cleanup the data structures */
patch = klp_transition_patch;
klp_complete_transition();
/*
* It would make more sense to free the patch in
* klp_complete_transition() but it is called also
* from klp_cancel_transition().
*/
if (!patch->enabled) {
klp_free_patch_start(patch);
schedule_work(&patch->free_work);
}
}
/*
@@ -446,6 +503,8 @@ void klp_start_transition(void)
if (task->patch_state != klp_target_state)
set_tsk_thread_flag(task, TIF_PATCH_PENDING);
}
klp_signals_cnt = 0;
}
/*
@@ -568,47 +627,6 @@ void klp_copy_process(struct task_struct *child)
/* TIF_PATCH_PENDING gets copied in setup_thread_stack() */
}
/*
* Sends a fake signal to all non-kthread tasks with TIF_PATCH_PENDING set.
* Kthreads with TIF_PATCH_PENDING set are woken up. Only admin can request this
* action currently.
*/
void klp_send_signals(void)
{
struct task_struct *g, *task;
pr_notice("signaling remaining tasks\n");
read_lock(&tasklist_lock);
for_each_process_thread(g, task) {
if (!klp_patch_pending(task))
continue;
/*
* There is a small race here. We could see TIF_PATCH_PENDING
* set and decide to wake up a kthread or send a fake signal.
* Meanwhile the task could migrate itself and the action
* would be meaningless. It is not serious though.
*/
if (task->flags & PF_KTHREAD) {
/*
* Wake up a kthread which sleeps interruptibly and
* still has not been migrated.
*/
wake_up_state(task, TASK_INTERRUPTIBLE);
} else {
/*
* Send fake signal to all non-kthread tasks which are
* still not migrated.
*/
spin_lock_irq(&task->sighand->siglock);
signal_wake_up(task, 0);
spin_unlock_irq(&task->sighand->siglock);
}
}
read_unlock(&tasklist_lock);
}
/*
* Drop TIF_PATCH_PENDING of all tasks on admin's request. This forces an
* existing transition to finish.
@@ -620,6 +638,7 @@ void klp_send_signals(void)
*/
void klp_force_transition(void)
{
struct klp_patch *patch;
struct task_struct *g, *task;
unsigned int cpu;
@@ -633,5 +652,6 @@ void klp_force_transition(void)
for_each_possible_cpu(cpu)
klp_update_patch_state(idle_task(cpu));
klp_forced = true;
list_for_each_entry(patch, &klp_patches, list)
patch->forced = true;
}
@@ -11,7 +11,6 @@ void klp_cancel_transition(void);
void klp_start_transition(void);
void klp_try_complete_transition(void);
void klp_reverse_transition(void);
void klp_send_signals(void);
void klp_force_transition(void);
#endif /* _LIVEPATCH_TRANSITION_H */
@@ -2007,6 +2007,27 @@ config TEST_MEMCAT_P
If unsure, say N.
config TEST_LIVEPATCH
tristate "Test livepatching"
default n
depends on LIVEPATCH
depends on m
help
Test kernel livepatching features for correctness. The tests will
load test modules that will be livepatched in various scenarios.
To run all the livepatching tests:
make -C tools/testing/selftests TARGETS=livepatch run_tests
Alternatively, individual tests may be invoked:
tools/testing/selftests/livepatch/test-callbacks.sh
tools/testing/selftests/livepatch/test-livepatch.sh
tools/testing/selftests/livepatch/test-shadow-vars.sh
If unsure, say N.
config TEST_OBJAGG
tristate "Perform selftest on object aggreration manager"
default n
@@ -2015,7 +2036,6 @@ config TEST_OBJAGG
Enable this option to test object aggregation manager on boot
(or module load).
If unsure, say N.
endif # RUNTIME_TESTING_MENU
......
@@ -77,6 +77,8 @@ obj-$(CONFIG_TEST_DEBUG_VIRTUAL) += test_debug_virtual.o
obj-$(CONFIG_TEST_MEMCAT_P) += test_memcat_p.o
obj-$(CONFIG_TEST_OBJAGG) += test_objagg.o
obj-$(CONFIG_TEST_LIVEPATCH) += livepatch/
ifeq ($(CONFIG_DEBUG_KOBJECT),y)
CFLAGS_kobject.o += -DDEBUG
CFLAGS_kobject_uevent.o += -DDEBUG
......
# SPDX-License-Identifier: GPL-2.0
#
# Makefile for livepatch test code.
obj-$(CONFIG_TEST_LIVEPATCH) += test_klp_atomic_replace.o \
test_klp_callbacks_demo.o \
test_klp_callbacks_demo2.o \
test_klp_callbacks_busy.o \
test_klp_callbacks_mod.o \
test_klp_livepatch.o \
test_klp_shadow_vars.o
# Target modules to be livepatched require CC_FLAGS_FTRACE
CFLAGS_test_klp_callbacks_busy.o += $(CC_FLAGS_FTRACE)
CFLAGS_test_klp_callbacks_mod.o += $(CC_FLAGS_FTRACE)
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2018 Joe Lawrence <joe.lawrence@redhat.com>
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/livepatch.h>
static int replace;
module_param(replace, int, 0644);
MODULE_PARM_DESC(replace, "replace (default=0)");
#include <linux/seq_file.h>
static int livepatch_meminfo_proc_show(struct seq_file *m, void *v)
{
seq_printf(m, "%s: %s\n", THIS_MODULE->name,
"this has been live patched");
return 0;
}
static struct klp_func funcs[] = {
{
.old_name = "meminfo_proc_show",
.new_func = livepatch_meminfo_proc_show,
}, {}
};
static struct klp_object objs[] = {
{
/* name being NULL means vmlinux */
.funcs = funcs,
}, {}
};
static struct klp_patch patch = {
.mod = THIS_MODULE,
.objs = objs,
/* set .replace in the init function below for demo purposes */
};
static int test_klp_atomic_replace_init(void)
{
patch.replace = replace;