Spinlock validation augmentation #13800
Conversation
Codecov Report
@@ Coverage Diff @@
## master #13800 +/- ##
==========================================
+ Coverage 51.97% 51.98% +<.01%
==========================================
Files 308 308
Lines 45508 45517 +9
Branches 10546 10547 +1
==========================================
+ Hits 23653 23661 +8
Misses 17055 17055
- Partials 4800 4801 +1
@@ -715,6 +715,7 @@ int z_spin_lock_valid(struct k_spinlock *l)
	}
}
l->thread_cpu = _current_cpu->id | (u32_t)_current;
_current_cpu->spin_depth++;
needs an #ifdef since spin_depth isn't always there
@@ -107,6 +108,10 @@ struct _cpu {
	/* True when _current is allowed to context switch */
	u8_t swap_ok;
#endif

#ifdef SPIN_VALIDATE
Can we make SPIN_VALIDATE a Kconfig?
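If the symbol were promoted to Kconfig as suggested, a minimal option might look like the fragment below. This is a hypothetical sketch: the actual symbol name, prompt, defaults, and dependencies would be settled in the PR.

```kconfig
config SPIN_VALIDATE
	bool "Validate spinlock usage at runtime"
	help
	  Track spinlock ownership and a per-CPU nesting depth so that
	  recursive locking, releasing an unowned lock, and swapping
	  away with a lock still held are caught by assertions.
```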
@@ -724,6 +725,7 @@ int z_spin_unlock_valid(struct k_spinlock *l)
	return 0;
}
l->thread_cpu = 0;
_current_cpu->spin_depth--;
needs an #ifdef since spin_depth isn't always there
Catching the error in a function is nice, but one really wants to know where it happened, and where the recursive lock was taken (or where the unowned release was actually grabbed). Add a layer of macro indirection to catch this info and log it with the assertion.

Signed-off-by: Andy Ross <[email protected]>
Right now the validation layer catches mismatched locking, but another common gotcha is failing to release an outer nested lock before entering a blocking primitive. In these cases the OS would swap away, and the next thread (if any) to try to take the outer lock would hit a "not my spinlock" error, which, while correct, isn't very informative. By keeping a single per-CPU nesting count we can check this at the point of the swap, which is more helpful.

Signed-off-by: Andy Ross <[email protected]>
Force-pushed from 34d266b to 61511e2.
No need to merge this for 1.14. It's a check I added at one point when we noticed a spot where the kernel was trying to swap away with a spinlock held (recursive irqlocks worked like that, but with spinlocks you must be 100% sure there is only one lock being released in _Swap() -- and that's a good thing!).
It's cheap and easy, but as it happens everything else passes with it, so there's no rush to get it into the tree.