Changes between Initial Version and Version 1 of Ticket #231
- Timestamp: 2010-04-24T20:13:49Z
Legend:
- Unmodified
- Added
- Removed
- Modified
Ticket #231 – Description
Line 3 (unmodified in both versions):

In order to figure out more, I improved the spinlock code to be more sensitive to random lock corruption (which I can thus rule out) and also to be more observable by providing a global ring buffer for recording the locking history. See the attachment for the diff. I am also going to attach screenshots which illustrate the panics.

Line 5 (modified in version 1; the final sentence was added):

Frankly speaking, my suspect number one is actually Qemu (since the HelenOS code looks good to me at the moment), but I am logging this ticket anyway just in case I am wrong. One more thing which makes me think that this is rather a Qemu issue is that, with the given ring buffer and the spinlock_lock_debug() code, I would expect the panic to occur in spinlock_lock_debug() on either of the two checks for multiple CPUs in the critical section, and not so late in spinlock_unlock(). With this behavior, the simulated CPUs appear to use some very strange memory model (i.e. we observe the effect of lock_event_record() on both CPUs that manage to "lock" the spinlock, but in most cases they do not hit the "not alone in critical section" panic).
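For readers without the attachment, the following is a minimal sketch of the kind of instrumentation the description refers to: a test-and-set spinlock with a global ring buffer recording the locking history, an ownership check in the debug lock path ("not alone in critical section"), and a corresponding check on unlock. This is not the attached HelenOS patch; apart from the name lock_event_record(), which appears in the ticket, all names, fields, buffer sizes, and the use of C11 atomics are assumptions made here for illustration.

```c
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

#define LOCK_EVENTS 1024            /* ring buffer capacity (assumed) */

typedef enum { EV_LOCK, EV_UNLOCK } lock_ev_t;

typedef struct {
    lock_ev_t what;                 /* lock or unlock */
    unsigned cpu;                   /* CPU that performed the operation */
} lock_event_t;

/* Global ring buffer recording the locking history. */
static lock_event_t lock_events[LOCK_EVENTS];
static atomic_uint lock_events_head;

static void lock_event_record(lock_ev_t what, unsigned cpu)
{
    unsigned idx = atomic_fetch_add(&lock_events_head, 1) % LOCK_EVENTS;
    lock_events[idx].what = what;
    lock_events[idx].cpu = cpu;
}

typedef struct {
    atomic_flag locked;             /* the lock word itself */
    atomic_int owner;               /* CPU id of the holder, -1 if free */
} spinlock_t;

static void spinlock_lock_debug(spinlock_t *lock, unsigned cpu)
{
    while (atomic_flag_test_and_set_explicit(&lock->locked,
        memory_order_acquire)) {
        /* busy-wait */
    }

    /* Consistency check: nobody else may appear to hold the lock. */
    int prev = atomic_exchange(&lock->owner, (int) cpu);
    if (prev != -1) {
        fprintf(stderr, "panic: not alone in critical section "
            "(cpu%u vs cpu%d)\n", cpu, prev);
        abort();
    }

    lock_event_record(EV_LOCK, cpu);
}

static void spinlock_unlock_debug(spinlock_t *lock, unsigned cpu)
{
    /* The unlocking CPU must still be the recorded owner. */
    int prev = atomic_exchange(&lock->owner, -1);
    if (prev != (int) cpu) {
        fprintf(stderr, "panic: unlock by cpu%u, owner was cpu%d\n",
            cpu, prev);
        abort();
    }

    lock_event_record(EV_UNLOCK, cpu);
    atomic_flag_clear_explicit(&lock->locked, memory_order_release);
}
```

With checks of this shape, the puzzling behavior described above is that both CPUs get their lock events recorded in the ring buffer, yet the ownership check in the lock path usually does not fire; the inconsistency only surfaces later, in the unlock path.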