The clock/timer APIs are not application-facing APIs. However, like
the arch_* and a few other API families, they are available for
implementing drivers and adding support for new hardware, and they
are documented and available for use outside of the clock/kernel
subsystems.
Remove the leading z_ and provide them as clock_* APIs for someone
writing a new timer driver to use.
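A driver author would then implement hooks along these lines (a
sketch, not the verbatim patch; clock_set_timeout() and
clock_elapsed() are the renamed forms of the old z_clock_* entry
points):

	/* Program the next timer interrupt 'ticks' ticks ahead; with
	 * idle set, the driver may program as far out as the hardware
	 * allows.
	 */
	void clock_set_timeout(int32_t ticks, bool idle)
	{
		/* hardware comparator programming goes here */
	}

	/* Ticks elapsed since the last call to clock_announce() */
	uint32_t clock_elapsed(void)
	{
		return 0; /* e.g. for a strictly tick-based driver */
	}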
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
This function would correctly suppress attempts to set timeouts that
were too soon for the driver or farther out than what was already
set, but when it actually set the timeout it would use the requested
value rather than clamping it to the minimum of that value and the
current timeout expiration, leading to "too-long" timeouts being set
at the driver.
In uniprocessor configurations, that turned out to be benign because
something else would always come along when the timeout state changed
and fix the broken value before it expired.
But in SMP, this opens up races. For example, the idle thread on one
CPU can see that there are no active threads and schedule a
maximum-value timeout at the same time as a thread on another CPU
adds a new timeout that expects a near-term expiration. The broken
code would see that the new timeout exists, decide that yes, it needs
to override, but then set the K_TICKS_FOREVER value it got from the
idle thread!
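In sketch form, the corrected logic clamps before programming the
driver (names per timeout.c of this era; illustrative, not the
verbatim patch):

	int32_t next_to = next_timeout();
	bool sooner = (next_to == K_TICKS_FOREVER) || (ticks <= next_to);
	bool imminent = next_to <= 1;

	if (!imminent && (sooner || IS_ENABLED(CONFIG_SMP))) {
		/* Never program past the already-scheduled
		 * expiration, whatever was requested.
		 */
		z_clock_set_timeout(MIN(ticks, next_to), is_idle);
	}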
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Remove duplication in the code by moving the LOCKED() macro to the
correct header, kernel_internal.h.
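For reference, the macro being relocated is the spinlock-scoped loop
wrapper (roughly as defined in the tree at the time):

	#define LOCKED(lck) \
		for (k_spinlock_key_t __i = {}, __key = k_spin_lock(lck); \
		     !__i.key; \
		     k_spin_unlock(lck, __key), __i.key = 1)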
Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
The computation was using the already-adjusted input value, which
assumed relative timeouts, and not the actual argument the user
passed. Absolute timeouts were consistently waking up one tick early.
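In sketch form (Z_TICK_ABS() decodes the absolute-timeout encoding;
illustrative, not the verbatim patch):

	if (Z_TICK_ABS(timeout.ticks) >= 0) {
		/* Use the raw tick count the user passed, not the
		 * relative-adjusted input, to compute the delta.
		 */
		to->dticks = Z_TICK_ABS(timeout.ticks) - curr_tick;
	}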
Fixes #32499
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
There was an edge case in the timeout handling (exposed by, but not
strictly related to, the recent timeslice fix): the next_timeout()
computation would include the time slice expiration as a clamp on the
result, but it is also invoked on the z_set_timeout_expiry() path,
which is hooked on entry to a new thread and is what sets the timeout
in the first place. So if no other timer interrupt was scheduled, it
was possible to miss the first timeslice interrupt after thread
scheduling.
The explanation is much longer than the fix (use <= as the comparator
instead of <).
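In sketch form, the "sooner" test in z_set_timeout_expiry() becomes:

	/* Equal must count as "sooner": next_timeout() may be
	 * reporting the very slice expiration this call exists to
	 * program.
	 */
	bool sooner = (next_to == K_TICKS_FOREVER) || (ticks <= next_to);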
In practice this was only being hit in the existing test suite on
riscv miv running under Renode with non-default clock rates.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Fix an edge case that snuck in with the recent fix: if timeslicing is
enabled, the CPU's slice_ticks will be zero and can therefore match a
timeout object's dticks value of zero, causing the new timeout to be
incorrectly suppressed (because "we already have a timeout scheduled
for that").
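A sketch of the corrected check in z_add_timeout() (illustrative):

	int32_t next_time = next_timeout();

	/* A zero next_time must always be programmed: an inactive
	 * slice also reads as zero in slice_ticks and would otherwise
	 * suppress it as a duplicate.
	 */
	if (next_time == 0 ||
	    _current_cpu->slice_ticks != next_time) {
		z_clock_set_timeout(next_time, false);
	}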
Fixes #31789
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Time slices don't have a timeout struct associated with them and
stored in timeout_list. The time slice timeout is programmed directly
into the system clock and tracked in _current_cpu->slice_ticks.
There is an issue where the time slice timeout can be missed because
the system clock is re-programmed to a longer timeout. For this to
happen, it is only necessary that the timeout_list be empty (no
timeouts set) and that a new timeout longer than the remaining time
slice be added. This happens because z_add_timeout() does not check
the slice ticks.
The following example demonstrates the issue:
#include <zephyr.h>
#include <sys/printk.h>

/* Values assumed for illustration; the original snippet left
 * these out.
 */
#define STACK_SIZE 1024
#define NUM_THREAD 3
#define SLICE_SIZE 200 /* ms */

K_THREAD_STACK_DEFINE(tstack, STACK_SIZE);
K_THREAD_STACK_ARRAY_DEFINE(tstacks, NUM_THREAD, STACK_SIZE);
K_SEM_DEFINE(sema, 0, NUM_THREAD);

static inline void spin_for_ms(int ms)
{
	uint32_t t32 = k_uptime_get_32();

	while (k_uptime_get_32() - t32 < ms) {
	}
}

static void thread_time_slice(void *p1, void *p2, void *p3)
{
	ARG_UNUSED(p2);
	ARG_UNUSED(p3);

	printk("thread[%d] - Before spin\n", (int)(uintptr_t)p1);

	/* Spin for longer than one slice */
	spin_for_ms(SLICE_SIZE + 20);

	/* The following print should not happen before another
	 * same-priority thread starts.
	 */
	printk("thread[%d] - After spinning\n", (int)(uintptr_t)p1);
	k_sem_give(&sema);
}

void main(void)
{
	k_tid_t tid[NUM_THREAD];
	struct k_thread t[NUM_THREAD];
	uint32_t slice_ticks = k_ms_to_ticks_ceil32(SLICE_SIZE);
	int old_prio = k_thread_priority_get(k_current_get());

	/* disable timeslice */
	k_sched_time_slice_set(0, K_PRIO_PREEMPT(0));

	for (int j = 0; j < 2; j++) {
		k_sem_reset(&sema);

		/* update priority for current thread */
		k_thread_priority_set(k_current_get(), K_PRIO_PREEMPT(j));

		/* synchronize to tick boundary */
		k_usleep(1);

		/* create delayed threads with equal preemptive priority */
		for (int i = 0; i < NUM_THREAD; i++) {
			tid[i] = k_thread_create(&t[i], tstacks[i],
						 STACK_SIZE,
						 thread_time_slice,
						 (void *)(uintptr_t)i, NULL,
						 NULL, K_PRIO_PREEMPT(j), 0,
						 K_NO_WAIT);
		}

		/* enable time slice (and reset the counter!) */
		k_sched_time_slice_set(SLICE_SIZE, K_PRIO_PREEMPT(0));

		/* Spin for a while to consume this thread's slice,
		 * but for less than a full slice. This is important.
		 */
		spin_for_ms(100);

		printk("before sleep\n");

		/* relinquish CPU and wait for each thread to complete */
		k_sleep(K_TICKS(slice_ticks * (NUM_THREAD + 1)));

		for (int i = 0; i < NUM_THREAD; i++) {
			k_sem_take(&sema, K_FOREVER);
		}

		/* test case teardown */
		for (int i = 0; i < NUM_THREAD; i++) {
			k_thread_abort(tid[i]);
		}

		/* disable time slice */
		k_sched_time_slice_set(0, K_PRIO_PREEMPT(0));
	}

	k_thread_priority_set(k_current_get(), old_prio);
}
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Replace all existing variants of value clamping built from the MIN
and MAX macros with the CLAMP macro.
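In the timeout code, for example, the next-expiration computation
takes this shape (illustrative):

	/* before: clamping composed from MIN and MAX */
	ret = MAX(0, MIN(to->dticks - ticks_elapsed, MAX_WAIT));

	/* after: the same bound expressed directly */
	ret = CLAMP(to->dticks - ticks_elapsed, 0, MAX_WAIT);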
Signed-off-by: Trond Einar Snekvik <Trond.Einar.Snekvik@nordicsemi.no>
Zephyr SMP kernels need to be able to run on architectures with
incoherent caches. A naive implementation of synchronization on such
architectures requires extensive cache flushing (e.g. flush+invalidate
everything on every spin lock operation, flush on every unlock!) and
is a performance problem.
Instead, many of these systems will have access to separate "coherent"
(usually uncached) and "incoherent" regions of memory. Where this is
available, place all writable data sections by default into the
coherent region. An "__incoherent" attribute flag is defined for data
regions that are known to be CPU-local and which should use the cache.
By default, this is used for stack memory.
Stack memory will be incoherent by default, as by definition it is
local to its current thread. This requires special cache management
on context switch, so an arch API has been added for that.
Also, when enabled, add assertions to strategic places to ensure that
shared kernel data is indeed coherent. We check thread objects, the
_kernel struct, waitq's, timeouts and spinlocks. In practice almost
all kernel synchronization is built on top of these structures, and
any shared data structs will contain at least one of them.
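The assertions take roughly this shape, with arch_mem_coherent() as
the per-architecture coherence query:

	/* shared kernel objects must live in coherent memory */
	__ASSERT_NO_MSG(arch_mem_coherent(timeout));
	__ASSERT_NO_MSG(arch_mem_coherent(&_kernel));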
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
APIs that take _timeout structures but don't change data in them are
updated to const-qualify the underlying object, allowing information
to be retrieved from contexts where the containing object is
immutable.
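A representative signature (illustrative):

	/* read-only query, callable with an immutable containing object */
	k_ticks_t z_timeout_remaining(const struct _timeout *timeout);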
Signed-off-by: Peter A. Bigot <pab@pabigot.com>
MISRA-C requires that both operands of an operator on which the usual
arithmetic conversions are performed have the same essential type
category. The changes convert integer constants to unsigned integer
constants wherever they combine with unsigned operands.
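Typical shape of such a change (illustrative):

	uint32_t flags = 1 << 8;   /* before: signed constant */
	uint32_t flags = 1U << 8;  /* after: unsigned constant */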
Signed-off-by: Aastha Grover <aastha.grover@intel.com>
When to->dticks is an int64_t, it may happen that the calculated
remaining time is a value that cannot be exactly represented in the
destination int32_t, producing an implementation-defined result which
can include a signal (interrupt). Cap the maximum delay to the
largest value supported by the int32_t result.
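Sketch of the capped conversion (illustrative):

	/* cap so the int64_t delta always fits the int32_t result */
	return (int32_t)MIN(to->dticks, (int64_t)INT32_MAX);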
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
The dticks field was changed from a signed to an unsigned value, so
the assert test ensuring it isn't negative is no longer needed.
Fixes #26355
Signed-off-by: David Leach <david.leach@nxp.com>
MISRA-C Rule 5.3 states that identifiers in an inner scope should
not hide identifiers in an outer scope.
In the function z_set_timeout_expiry(), the parameter "idle" and an
inner variable "next" collide with the functions named idle() and
next(), so rename those variables.
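The renames in sketch form:

	/* parameter "idle" becomes "is_idle"; the local "next" becomes
	 * "next_to", avoiding collisions with idle() and next()
	 */
	void z_set_timeout_expiry(int32_t ticks, bool is_idle);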
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The "forever" token has always been interpreted above z_add_timeout()
(because it's always taken ticks, but K_FOREVER used to be in ms).
But it was discovered that k_delayed_work_submit_to_queue() was never
testing for this and passing a raw K_FOREVER down, where it got
interpreted as a negative timeout and caused it to fire at the next
tick.
Now that we actually see the original k_timeout_t here, we might as
well check it locally and do the correct thing (that is, nothing) if
asked to schedule a timeout that will never fire.
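Sketch of the local check, using the standard K_TIMEOUT_EQ() helper:

	/* at the top of z_add_timeout() */
	if (K_TIMEOUT_EQ(timeout, K_FOREVER)) {
		return; /* a timeout that never fires is never scheduled */
	}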
Fixes #24409
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Add a call to get the system tick count as an official API (and
redefine the existing millisecond API in terms of it). Sophisticated
applications need to be able to count ticks directly, and the newer
timeout API supports that. Uptime should too, for symmetry.
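The millisecond API then reduces to roughly:

	int64_t k_uptime_get(void)
	{
		return k_ticks_to_ms_floor64(k_uptime_ticks());
	}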
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>