Spinlock validation augmentation #13800


Closed: andyross wants to merge 2 commits.

Conversation

andyross (Collaborator) commented:

No need to merge this for 1.14. It's a check I added at one point when we noticed a spot where the kernel was trying to swap away with a spinlock held (recursive irqlocks worked like that, but with spinlocks you must be 100% sure there is only one lock being released in _Swap() -- and that's a good thing!).

It's cheap and easy, but as it happens everything else passes with it, so there's no rush to get it into the tree.
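
For context, a minimal sketch of the bug class this check catches. The code below is a hypothetical example (names invented), not taken from the patch:

#include <kernel.h>

static struct k_spinlock outer_lock;
K_SEM_DEFINE(data_ready, 0, 1);

void consumer(void)
{
	k_spinlock_key_t key = k_spin_lock(&outer_lock);

	/* BUG: k_sem_take() can block, so _Swap() runs while outer_lock
	 * is still held.  Without this check the failure surfaces later,
	 * as a correct but unhelpful "not my spinlock" assertion in
	 * whatever thread contends on outer_lock next.
	 */
	k_sem_take(&data_ready, K_FOREVER);

	k_spin_unlock(&outer_lock, key);
}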

andyross requested a review from andrewboie as a code owner on February 26, 2019 at 17:03.
codecov-io commented on Feb 26, 2019:

Codecov Report

Merging #13800 into master will increase coverage by <.01%.
The diff coverage is 66.66%.


@@            Coverage Diff             @@
##           master   #13800      +/-   ##
==========================================
+ Coverage   51.97%   51.98%   +<.01%     
==========================================
  Files         308      308              
  Lines       45508    45517       +9     
  Branches    10546    10547       +1     
==========================================
+ Hits        23653    23661       +8     
  Misses      17055    17055              
- Partials     4800     4801       +1
Impacted Files                      Coverage   Δ
kernel/include/kernel_structs.h     90.9%      <ø> (ø) ⬆️
kernel/include/kswap.h              90%        <0%> (-10%) ⬇️
kernel/thread.c                     84.81%     <100%> (+0.19%) ⬆️
include/spinlock.h                  75%        <66.66%> (+25%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update c0183fd...61511e2.

@@ -715,6 +715,7 @@ int z_spin_lock_valid(struct k_spinlock *l)
 		}
 	}
 	l->thread_cpu = _current_cpu->id | (u32_t)_current;
+	_current_cpu->spin_depth++;
A contributor commented:

needs an #ifdef since spin_depth isn't always there
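
A sketch of the guard being asked for, reusing the SPIN_VALIDATE define that gates the struct field below; the same guard applies to the decrement in z_spin_unlock_valid():

	l->thread_cpu = _current_cpu->id | (u32_t)_current;
#ifdef SPIN_VALIDATE
	/* spin_depth only exists in struct _cpu when validation is
	 * compiled in, so the bookkeeping must be guarded too.
	 */
	_current_cpu->spin_depth++;
#endif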

@@ -107,6 +108,10 @@ struct _cpu {
 	/* True when _current is allowed to context switch */
 	u8_t swap_ok;
 #endif
+
+#ifdef SPIN_VALIDATE
A contributor commented:

Can we make SPIN_VALIDATE a Kconfig?
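
If it did move to Kconfig, the option might look roughly like this; a sketch only, with the option name, dependency, and help text assumed rather than taken from the tree:

config SPIN_VALIDATE
	bool "Enable spinlock validation"
	depends on ASSERT
	help
	  Add runtime checks that catch recursive spinlock acquisition,
	  releases of locks the caller does not own, and attempts to
	  block (swap) with a spinlock still held.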

@@ -724,6 +725,7 @@ int z_spin_unlock_valid(struct k_spinlock *l)
 		return 0;
 	}
 	l->thread_cpu = 0;
+	_current_cpu->spin_depth--;
A contributor commented:

needs an #ifdef since spin_depth isn't always there

Andy Ross added 2 commits on March 11, 2019 at 21:42:
Catching the error in a function is nice, but one really wants to know
where it happened, and where the recursive lock was taken (or where
the unowned release was actually grabbed).  Add a layer of macro
indirection to catch this info and log it with the assertion.

Signed-off-by: Andy Ross <[email protected]>
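
A sketch of the macro-indirection idea; names here are hypothetical and the SMP acquisition path is elided, so treat this as a shape, not the patch's actual spinlock.h:

/* The public API becomes a macro that forwards the call site, so the
 * assertion reports where the lock was taken rather than pointing
 * into the validator.
 */
#ifdef SPIN_VALIDATE
extern int z_spin_lock_valid(struct k_spinlock *l);
#endif

static inline k_spinlock_key_t z_spin_lock_at(struct k_spinlock *l,
					      const char *file, int line)
{
	k_spinlock_key_t k;

	/* arch IRQ-lock call; exact name varies by tree version */
	k.key = z_arch_irq_lock();
#ifdef SPIN_VALIDATE
	__ASSERT(z_spin_lock_valid(l),
		 "recursive spinlock %p taken at %s:%d", l, file, line);
#endif
	/* ... SMP atomic acquisition elided ... */
	return k;
}

#define k_spin_lock(l) z_spin_lock_at((l), __FILE__, __LINE__)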
Right now the validation layer catches mismatched locking, but another
common gotcha is failing to release an outer nested lock before
entering a blocking primitive.  In these cases, the OS would swap
away, and the next thread (if any) to try to take the outer lock would
hit a "not my spinlock" error, which while correct isn't very
informative.

By keeping a single per-cpu nesting count we can check this at the
point of the swap, which is more helpful.

Signed-off-by: Andy Ross <[email protected]>
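
And a sketch of the swap-side check the per-CPU counter enables; the helper name and exact placement are assumptions, not copied from the patch:

/* Called at the swap point: exactly one lock -- the one _Swap()
 * itself releases -- may be held.  A deeper count means a caller
 * blocked without dropping an outer lock first.
 */
static inline void z_check_spin_depth(void)
{
#ifdef SPIN_VALIDATE
	__ASSERT(_current_cpu->spin_depth <= 1,
		 "swapping away with %d spinlocks held",
		 _current_cpu->spin_depth);
#endif
}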