534c97b095
Pull 'full dynticks' support from Ingo Molnar:
 "This tree from Frederic Weisbecker adds a new, (exciting! :-) core
  kernel feature to the timer and scheduler subsystems: 'full dynticks',
  or CONFIG_NO_HZ_FULL=y.

  This feature extends the nohz variable-size timer tick feature from
  idle to busy CPUs (running at most one task) as well, potentially
  reducing the number of timer interrupts significantly.

  This feature got motivated by real-time folks and the -rt tree, but
  the general utility and motivation of full-dynticks runs wider than
  that:

   - HPC workloads get faster: CPUs running a single task should be
     able to utilize a maximum amount of CPU power.  A periodic timer
     tick at HZ=1000 can cause a constant overhead of up to 1.0%.  This
     feature removes that overhead - and speeds up the system by
     0.5%-1.0% on typical distro configs even on modern systems.

   - Real-time workload latency reduction: CPUs running critical tasks
     should experience as little jitter as possible.  The last remaining
     source of kernel-related jitter was the periodic timer tick.

   - A single task executing on a CPU is a pretty common situation,
     especially with an increasing number of cores/CPUs, so this feature
     helps desktop and mobile workloads as well.

  The cost of the feature is mainly related to increased timer
  reprogramming overhead when a CPU switches its tick period, and thus
  slightly longer to-idle and from-idle latency.

  Configuration-wise a third mode of operation is added to the existing
  two NOHZ kconfig modes:

   - CONFIG_HZ_PERIODIC: [formerly !CONFIG_NO_HZ], now explicitly named
     as a config option.  This is the traditional Linux periodic tick
     design: there's a HZ tick going on all the time, regardless of
     whether a CPU is idle or not.

   - CONFIG_NO_HZ_IDLE: [formerly CONFIG_NO_HZ=y], this turns off the
     periodic tick when a CPU enters idle mode.

   - CONFIG_NO_HZ_FULL: this new mode, in addition to turning off the
     tick when a CPU is idle, also slows the tick down to 1 Hz (one
     timer interrupt per second) when only a single task is running on
     a CPU.

  The .config behavior is compatible: existing !CONFIG_NO_HZ and
  CONFIG_NO_HZ=y settings get translated to the new values, without the
  user having to configure anything.  CONFIG_NO_HZ_FULL is turned off
  by default.

  This feature is based on a lot of infrastructure work that has been
  steadily going upstream in the last 2-3 cycles: related RCU support
  and non-periodic cputime support in particular is upstream already.

  This tree adds the final pieces and activates the feature.  The pull
  request is marked RFC because:

   - it's marked 64-bit only at the moment - the 32-bit support patch
     is small but did not get ready in time.

   - it has a number of fresh commits that came in after the merge
     window.  The overwhelming majority of commits are from before the
     merge window, but still some aspects of the tree are fresh and so
     I marked it RFC.

   - it's a pretty wide-reaching feature with lots of effects - and
     while the components have been in testing for some time, the full
     combination is still not very widely used.  That it's default-off
     should reduce its regression abilities and obviously there are no
     known regressions with CONFIG_NO_HZ_FULL=y enabled either.

   - the feature is not completely idempotent: there is no 100%
     equivalent replacement for a periodic scheduler/timer tick.  In
     particular there's ongoing work to map out and reduce its effects
     on scheduler load-balancing and statistics.  This should not
     impact correctness though, there are no known regressions related
     to this feature at this point.

   - it's a pretty ambitious feature that with time will likely be
     enabled by most Linux distros, and we'd like you to make input on
     its design/implementation, if you dislike some aspect we missed.
     Without flaming us to a crisp! :-)

  Future plans:

   - there's ongoing work to reduce 1Hz to 0Hz, to essentially shut off
     the periodic tick altogether when there's a single busy task on a
     CPU.  We'd first like 1 Hz to be exposed more widely before we go
     for the 0 Hz target though.

   - once we reach 0 Hz we can remove the periodic tick assumption from
     nr_running>=2 as well, by essentially interrupting busy tasks only
     as frequently as the sched_latency constraints require us to do -
     once every 4-40 msecs, depending on nr_running.

  I am personally leaning towards biting the bullet and doing this in
  v3.10, like the -rt tree this effort has been going on for too long -
  but the final word is up to you as usual.

  More technical details can be found in Documentation/timers/NO_HZ.txt"

* 'timers-nohz-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (39 commits)
  sched: Keep at least 1 tick per second for active dynticks tasks
  rcu: Fix full dynticks' dependency on wide RCU nocb mode
  nohz: Protect smp_processor_id() in tick_nohz_task_switch()
  nohz_full: Add documentation.
  cputime_nsecs: use math64.h for nsec resolution conversion helpers
  nohz: Select VIRT_CPU_ACCOUNTING_GEN from full dynticks config
  nohz: Reduce overhead under high-freq idling patterns
  nohz: Remove full dynticks' superfluous dependency on RCU tree
  nohz: Fix unavailable tick_stop tracepoint in dynticks idle
  nohz: Add basic tracing
  nohz: Select wide RCU nocb for full dynticks
  nohz: Disable the tick when irq resume in full dynticks CPU
  nohz: Re-evaluate the tick for the new task after a context switch
  nohz: Prepare to stop the tick on irq exit
  nohz: Implement full dynticks kick
  nohz: Re-evaluate the tick from the scheduler IPI
  sched: New helper to prevent from stopping the tick in full dynticks
  sched: Kick full dynticks CPU that have more than one task enqueued.
  perf: New helper to prevent full dynticks CPUs from stopping tick
  perf: Kick full dynticks CPU if events rotation is needed
  ...
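As a configuration sketch, the three mutually exclusive tick modes described above surface as kconfig symbols, and the set of full-dynticks CPUs is chosen with the nohz_full= boot parameter documented in Documentation/timers/NO_HZ.txt (the CPU list below is an example choice; one CPU, typically the boot CPU, must stay periodic for timekeeping):

```
# .config fragment: pick exactly one tick mode
# CONFIG_HZ_PERIODIC is not set
# CONFIG_NO_HZ_IDLE is not set
CONFIG_NO_HZ_FULL=y

# kernel command line: run CPUs 1-7 in full-dynticks mode,
# leaving CPU 0 as the timekeeping CPU
nohz_full=1-7
```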
135 lines
4.1 KiB
C
#ifndef _linux_POSIX_TIMERS_H
#define _linux_POSIX_TIMERS_H

#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/sched.h>
#include <linux/timex.h>
#include <linux/alarmtimer.h>

union cpu_time_count {
	cputime_t cpu;
	unsigned long long sched;
};

struct cpu_timer_list {
	struct list_head entry;
	union cpu_time_count expires, incr;
	struct task_struct *task;
	int firing;
};

/*
 * Bit fields within a clockid:
 *
 * The most significant 29 bits hold either a pid or a file descriptor.
 *
 * Bit 2 indicates whether a cpu clock refers to a thread or a process.
 *
 * Bits 1 and 0 give the type: PROF=0, VIRT=1, SCHED=2, or FD=3.
 *
 * A clockid is invalid if bits 2, 1, and 0 are all set.
 */
#define CPUCLOCK_PID(clock)		((pid_t) ~((clock) >> 3))
#define CPUCLOCK_PERTHREAD(clock) \
	(((clock) & (clockid_t) CPUCLOCK_PERTHREAD_MASK) != 0)

#define CPUCLOCK_PERTHREAD_MASK	4
#define CPUCLOCK_WHICH(clock)	((clock) & (clockid_t) CPUCLOCK_CLOCK_MASK)
#define CPUCLOCK_CLOCK_MASK	3
#define CPUCLOCK_PROF		0
#define CPUCLOCK_VIRT		1
#define CPUCLOCK_SCHED		2
#define CPUCLOCK_MAX		3
#define CLOCKFD			CPUCLOCK_MAX
#define CLOCKFD_MASK		(CPUCLOCK_PERTHREAD_MASK|CPUCLOCK_CLOCK_MASK)

#define MAKE_PROCESS_CPUCLOCK(pid, clock) \
	((~(clockid_t) (pid) << 3) | (clockid_t) (clock))
#define MAKE_THREAD_CPUCLOCK(tid, clock) \
	MAKE_PROCESS_CPUCLOCK((tid), (clock) | CPUCLOCK_PERTHREAD_MASK)

#define FD_TO_CLOCKID(fd)	((~(clockid_t) (fd) << 3) | CLOCKFD)
#define CLOCKID_TO_FD(clk)	((unsigned int) ~((clk) >> 3))

/* POSIX.1b interval timer structure. */
struct k_itimer {
	struct list_head list;		/* free/allocate list */
	struct hlist_node t_hash;
	spinlock_t it_lock;
	clockid_t it_clock;		/* which timer type */
	timer_t it_id;			/* timer id */
	int it_overrun;			/* overrun on pending signal */
	int it_overrun_last;		/* overrun on last delivered signal */
	int it_requeue_pending;		/* waiting to requeue this timer */
#define REQUEUE_PENDING 1
	int it_sigev_notify;		/* notify word of sigevent struct */
	struct signal_struct *it_signal;
	union {
		struct pid *it_pid;	/* pid of process to send signal to */
		struct task_struct *it_process;	/* for clock_nanosleep */
	};
	struct sigqueue *sigq;		/* signal queue entry. */
	union {
		struct {
			struct hrtimer timer;
			ktime_t interval;
		} real;
		struct cpu_timer_list cpu;
		struct {
			unsigned int clock;
			unsigned int node;
			unsigned long incr;
			unsigned long expires;
		} mmtimer;
		struct {
			struct alarm alarmtimer;
			ktime_t interval;
		} alarm;
		struct rcu_head rcu;
	} it;
};

struct k_clock {
	int (*clock_getres) (const clockid_t which_clock, struct timespec *tp);
	int (*clock_set) (const clockid_t which_clock,
			  const struct timespec *tp);
	int (*clock_get) (const clockid_t which_clock, struct timespec *tp);
	int (*clock_adj) (const clockid_t which_clock, struct timex *tx);
	int (*timer_create) (struct k_itimer *timer);
	int (*nsleep) (const clockid_t which_clock, int flags,
		       struct timespec *, struct timespec __user *);
	long (*nsleep_restart) (struct restart_block *restart_block);
	int (*timer_set) (struct k_itimer *timr, int flags,
			  struct itimerspec *new_setting,
			  struct itimerspec *old_setting);
	int (*timer_del) (struct k_itimer *timr);
#define TIMER_RETRY 1
	void (*timer_get) (struct k_itimer *timr,
			   struct itimerspec *cur_setting);
};

extern struct k_clock clock_posix_cpu;
extern struct k_clock clock_posix_dynamic;

void posix_timers_register_clock(const clockid_t clock_id,
				 struct k_clock *new_clock);

/* function to call to trigger timer event */
int posix_timer_event(struct k_itimer *timr, int si_private);

void posix_cpu_timer_schedule(struct k_itimer *timer);

void run_posix_cpu_timers(struct task_struct *task);
void posix_cpu_timers_exit(struct task_struct *task);
void posix_cpu_timers_exit_group(struct task_struct *task);

bool posix_cpu_timers_can_stop_tick(struct task_struct *tsk);

void set_process_cpu_timer(struct task_struct *task, unsigned int clock_idx,
			   cputime_t *newval, cputime_t *oldval);

long clock_nanosleep_restart(struct restart_block *restart_block);

void update_rlimit_cpu(struct task_struct *task, unsigned long rlim_new);

#endif