Changeset 1b20da0 in mainline for uspace/lib/c/generic/rcu.c
- Timestamp: 2018-02-28T17:52:03Z (7 years ago)
- Branches: lfn, master, serial, ticket/834-toolchain-update, topic/msim-upgrade, topic/simplify-dev-export
- Children: 3061bc1
- Parents: df6ded8
- git-author: Jiří Zárevúcky <zarevucky.jiri@…> (2018-02-28 17:26:03)
- git-committer: Jiří Zárevúcky <zarevucky.jiri@…> (2018-02-28 17:52:03)
- Files: 1 edited
uspace/lib/c/generic/rcu.c
df6ded8 → 1b20da0. Every edited line differs only in trailing whitespace (the visible text is identical before and after). New-revision text of the affected regions:

@@ lines 32-51 @@
/**
 * @file
 *
 * User space RCU is based on URCU utilizing signals [1]. This
 * implementation does not however signal each thread of the process
 * to issue a memory barrier. Instead, we introduced a syscall that
 * issues memory barriers (via IPIs) on cpus that are running threads
 * of the current process. First, it does not require us to schedule
 * and run every thread of the process. Second, IPIs are less intrusive
 * than switching contexts and entering user space.
 *
 * This algorithm is further modified to require a single instead of
 * two reader group changes per grace period. Signal-URCU flips
 * the reader group and waits for readers of the previous group
 * twice in succession in order to wait for new readers that were
 * delayed and mistakenly associated with the previous reader group.
 * The modified algorithm ensures that the new reader group is
 * always empty (by explicitly waiting for it to become empty).
 * Only then does it flip the reader group and wait for preexisting
 * readers of the old reader group (invariant of SRCU [2, 3]).
 *
 *
 * [1] User-level implementations of read-copy update,
 * 2012, appendix
 * http://www.rdrop.com/users/paulmck/RCU/urcu-supp-accepted.2011.08.30a.pdf
 *
 * [2] linux/kernel/srcu.c in Linux 3.5-rc2,
 * 2012
 * http://tomoyo.sourceforge.jp/cgi-bin/lxr/source/kernel/srcu.c?v=linux-3.5-rc2-ccs-1.8.3
 *
 * [3] [RFC PATCH 5/5 single-thread-version] implement
 * per-domain single-thread state machine,
 * 2012, Lai

@@ lines 163-166 @@
/** Registers a fibril so it may start using RCU read sections.
 *
 * A fibril must be registered with rcu before it can enter RCU critical
 * sections delineated by rcu_read_lock() and rcu_read_unlock().

@@ lines 179-182 @@
/** Deregisters a fibril that had been using RCU read sections.
 *
 * A fibril must be deregistered before it exits if it had
 * been registered with rcu via rcu_register_fibril().

@@ lines 186-193 @@
	assert(fibril_rcu.registered);

	/*
	 * Forcefully unlock any reader sections. The fibril is exiting
	 * so it is not holding any references to data protected by the
	 * rcu section. Therefore, it is safe to unlock. Otherwise,
	 * rcu_synchronize() would wait indefinitely.
	 */

@@ lines 202-208 @@
}

/** Delimits the start of an RCU reader critical section.
 *
 * RCU reader sections may be nested.
 */
void rcu_read_lock(void)

@@ lines 252-256 @@
	lock_sync(blocking_mode);

	/*
	 * Exit early if we were stuck waiting for the mutex for a full grace
	 * period.
	 * Started waiting during gp_in_progress (or gp_in_progress + 1
	 * if the value propagated to this cpu too late) so wait for the next

@@ lines 267-289 @@
	++ACCESS_ONCE(rcu.cur_gp);

	/*
	 * Pairs up with MB_FORCE_L (ie CC_BAR_L). Makes changes prior
	 * to rcu_synchronize() visible to new readers.
	 */
	memory_barrier(); /* MB_A */

	/*
	 * Pairs up with MB_A.
	 *
	 * If the memory barrier is issued before CC_BAR_L in the target
	 * thread, it pairs up with MB_A and the thread sees all changes
	 * prior to rcu_synchronize(). Ie any reader sections are new
	 * rcu readers.
	 *
	 * If the memory barrier is issued after CC_BAR_L, it pairs up
	 * with MB_B and it will make the most recent nesting_cnt visible
	 * in this thread. Since the reader may have already accessed
	 * memory protected by RCU (it ran instructions passed CC_BAR_L),
	 * it is a preexisting reader. Seeing the most recent nesting_cnt
	 * ensures the thread will be identified as a preexisting reader
	 * and we will wait for it in wait_for_readers(old_reader_group).

@@ lines 291-295 @@
	force_mb_in_all_threads(); /* MB_FORCE_L */

	/*
	 * Pairs with MB_FORCE_L (ie CC_BAR_L, CC_BAR_U) and makes the most
	 * current fibril.nesting_cnt visible to this cpu.

@@ lines 321-326 @@
static void force_mb_in_all_threads(void)
{
	/*
	 * Only issue barriers in running threads. The scheduler will
	 * execute additional memory barriers when switching to threads
	 * of the process that are currently not running.
@@ lines 339-342 @@
	while (!list_empty(&rcu.fibrils_list)) {
		list_foreach_safe(rcu.fibrils_list, fibril_it, next_fibril) {
			fibril_rcu_data_t *fib = member_to_inst(fibril_it,
			    fibril_rcu_data_t, link);

@@ lines 393-397 @@
	assert(rcu.sync_lock.locked);

	/*
	 * Blocked threads have a priority over fibrils when accessing sync().
	 * Pass the lock onto a waiting thread.

@@ lines 421-427 @@
{
	assert(rcu.sync_lock.locked);
	/*
	 * Release the futex to avoid deadlocks in singlethreaded apps
	 * but keep sync locked.
	 */
	futex_up(&rcu.sync_lock.futex);

@@ lines 446-450 @@
static size_t get_other_group(size_t group)
{
	if (group == RCU_GROUP_A)
		return RCU_GROUP_B;
	else