Changeset 1b20da0 in mainline for kernel/generic/src/smp/smp_call.c
- Timestamp: 2018-02-28T17:52:03Z (7 years ago)
- Branches: lfn, master, serial, ticket/834-toolchain-update, topic/msim-upgrade, topic/simplify-dev-export
- Children: 3061bc1
- Parents: df6ded8
- git-author: Jiří Zárevúcky <zarevucky.jiri@…> (2018-02-28 17:26:03)
- git-committer: Jiří Zárevúcky <zarevucky.jiri@…> (2018-02-28 17:52:03)
- File: 1 edited
kernel/generic/src/smp/smp_call.c
rdf6ded8 → r1b20da0

All changes in this file are whitespace-only: every removed line and the added
line that replaces it are identical apart from whitespace. Changed lines are
marked "±" below; "…" stands for runs of unchanged lines skipped by the viewer.

     61
     62   /** Invokes a function on a specific cpu and waits for it to complete.
 ±   63    *
 ±   64    * Calls @a func on the CPU denoted by its logical id @cpu_id .
 ±   65    * The function will execute with interrupts disabled. It should
 ±   66    * be a quick and simple function and must never block.
 ±   67    *
     68    * If @a cpu_id is the local CPU, the function will be invoked
     69    * directly.
 ±   70    *
     71    * All memory accesses of prior to smp_call() will be visible
     72    * to @a func on cpu @a cpu_id. Similarly, any changes @a func
     73    * makes on cpu @a cpu_id will be visible on this cpu once
     74    * smp_call() returns.
 ±   75    *
     76    * Invoking @a func on the destination cpu acts as a memory barrier
     77    * on that cpu.
 ±   78    *
     79    * @param cpu_id Destination CPU's logical id (eg CPU->id)
     80    * @param func Function to call.
     …
     89
     90   /** Invokes a function on a specific cpu asynchronously.
 ±   91    *
 ±   92    * Calls @a func on the CPU denoted by its logical id @cpu_id .
 ±   93    * The function will execute with interrupts disabled. It should
 ±   94    * be a quick and simple function and must never block.
 ±   95    *
 ±   96    * Pass @a call_info to smp_call_wait() in order to wait for
     97    * @a func to complete.
 ±   98    *
     99    * @a call_info must be valid until/after @a func returns. Use
    100    * smp_call_wait() to wait until it is safe to free @a call_info.
 ±  101    *
    102    * If @a cpu_id is the local CPU, the function will be invoked
    103    * directly. If the destination cpu id @a cpu_id is invalid
    104    * or denotes an inactive cpu, the call is discarded immediately.
 ±  105    *
    106    * All memory accesses of the caller prior to smp_call_async()
 ±  107    * will be made visible to @a func on the other cpu. Similarly,
    108    * any changes @a func makes on cpu @a cpu_id will be visible
    109    * to this cpu when smp_call_wait() returns.
 ±  110    *
    111    * Invoking @a func on the destination cpu acts as a memory barrier
    112    * on that cpu.
 ±  113    *
    114    * Interrupts must be enabled. Otherwise you run the risk
    115    * of a deadlock.
 ±  116    *
    117    * @param cpu_id Destination CPU's logical id (eg CPU->id).
    118    * @param func Function to call.
     …
    121    *                 be valid until the function completes.
    122    */
 ±  123   void smp_call_async(unsigned int cpu_id, smp_call_func_t func, void *arg,
    124       smp_call_t *call_info)
    125   {
 ±  126       /*
 ±  127        * Interrupts must not be disabled or you run the risk of a deadlock
    128        * if both the destination and source cpus try to send an IPI to each
 ±  129        * other with interrupts disabled. Because the interrupts are disabled
 ±  130        * the IPIs cannot be delivered and both cpus will forever busy wait
    131        * for an acknowledgment of the IPI from the other cpu.
    132        */
     …
    155    * If a platform supports SMP it must implement arch_smp_call_ipi().
    156    * It should issue an IPI on cpu_id and invoke smp_call_ipi_recv()
 ±  157    * on cpu_id in turn.
 ±  158    *
    159    * Do not implement as just an empty dummy function. Instead
 ±  160    * consider providing a full implementation or at least a version
    161    * that panics if invoked. Note that smp_call_async() never
    162    * calls arch_smp_call_ipi() on uniprocessors even if CONFIG_SMP.
     …
    177
    178   /** Waits for a function invoked on another CPU asynchronously to complete.
 ±  179    *
    180    * Does not sleep but rather spins.
 ±  181    *
    182    * Example usage:
    183    * @code
     …
    185    *     puts((char*)p);
    186    * }
 ±  187    *
    188    * smp_call_t call_info;
    189    * smp_call_async(cpus[2].id, hello, "hi!\n", &call_info);
     …
    191    * smp_call_wait(&call_info);
    192    * @endcode
 ±  193    *
    194    * @param call_info Initialized by smp_call_async().
    195    */
     …
    202
    203   /** Architecture independent smp call IPI handler.
 ±  204    *
    205    * Interrupts must be disabled. Tolerates spurious calls.
    206    */
     …
    213       list_initialize(&calls_list);
    214
 ±  215       /*
    216        * Acts as a load memory barrier. Any changes made by the cpu that
    217        * added the smp_call to calls_list will be made visible to this cpu.
     …
    222
    223       /* Walk the list manually, so that we can safely remove list items. */
 ±  224       for (link_t *cur = calls_list.head.next, *next = cur->next;
    225           !list_empty(&calls_list); cur = next, next = cur->next) {
    226
     …
    254   static void call_done(smp_call_t *call_info)
    255   {
 ±  256       /*
 ±  257        * Separate memory accesses of the called function from the
    258        * announcement of its completion.
    259        */
     …
    265   {
    266       do {
 ±  267           /*
    268            * Ensure memory accesses following call_wait() are ordered
 ±  269            * after completion of the called function on another cpu.
    270            * Also, speed up loading of call_info->pending.
    271            */
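For orientation (not part of the changeset), here is a minimal usage sketch of the interface whose documentation is touched above. It follows the doc comments and the @code example in the file; the hello() helper, the smp_call_example() wrapper and the header paths are illustrative assumptions.

    /* Sketch only: assumes a HelenOS kernel compilation unit; the header
     * names below are a guess at where smp_call_t and cpus[] live. */
    #include <smp/smp_call.h>
    #include <cpu.h>

    /* Runs on the destination cpu with interrupts disabled, so it must be
     * short and must never block (see the doc comments above). */
    static void hello(void *arg)
    {
        puts((char *) arg);
    }

    static void smp_call_example(void)
    {
        /* Synchronous: returns only after hello() has completed on cpu 2,
         * acting as a memory barrier on that cpu. */
        smp_call(cpus[2].id, hello, "hi!\n");

        /* Asynchronous: call_info must stay valid until smp_call_wait()
         * returns, and interrupts must be enabled here to avoid the IPI
         * deadlock described in smp_call_async(). */
        smp_call_t call_info;
        smp_call_async(cpus[2].id, hello, "hi again!\n", &call_info);
        /* ... do other work; hello() runs on cpu 2 in the meantime ... */
        smp_call_wait(&call_info);
    }

The comment around lines 155-162 asks ports that cannot yet send IPIs to fail loudly rather than supply an empty arch_smp_call_ipi(). A hedged sketch of such a stub follows; the exact prototype is an assumption based on that comment.

    /* Sketch only: panicking stub for a port without cross-cpu IPIs. */
    void arch_smp_call_ipi(unsigned int cpu_id)
    {
        (void) cpu_id;
        panic("arch_smp_call_ipi() is not implemented on this platform.");
    }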