Changeset f47fd19 in mainline for kernel/arch/sparc64/src/trap/trap_table.S
- Timestamp: 2006-08-21T13:36:34Z (18 years ago)
- Branches: lfn, master, serial, ticket/834-toolchain-update, topic/msim-upgrade, topic/simplify-dev-export
- Children: a796127
- Parents: ee289cf0
- File: 1 edited
Legend:
- Unmodified: lines with no prefix
- Added: lines prefixed with "+"
- Removed: lines prefixed with "-"
kernel/arch/sparc64/src/trap/trap_table.S
--- kernel/arch/sparc64/src/trap/trap_table.S (ree289cf0)
+++ kernel/arch/sparc64/src/trap/trap_table.S (rf47fd19)
@@ -44,4 +44,5 @@
 #include <arch/trap/mmu.h>
 #include <arch/stack.h>
+#include <arch/regdef.h>
 
 #define TABLE_SIZE TRAP_TABLE_SIZE
@@ -276,12 +277,22 @@
 
 
-/* Preemptible trap handler.
- *
- * This trap handler makes arrangements to
- * make calling scheduler() possible.
- *
- * The caller is responsible for doing save
- * and allocating PREEMPTIBLE_HANDLER_STACK_FRAME_SIZE
- * bytes on stack.
+/* Preemptible trap handler for TL=1.
+ *
+ * This trap handler makes arrangements to make calling of scheduler() from
+ * within a trap context possible. It is guaranteed to function only when traps
+ * are not nested (i.e. for TL=1).
+ *
+ * Every trap handler on TL=1 that makes a call to the scheduler needs to
+ * be based on this function. The reason behind it is that the nested
+ * trap levels and the automatic saving of the interrupted context by hardware
+ * does not work well together with scheduling (i.e. a thread cannot be rescheduled
+ * with TL>0). Therefore it is necessary to eliminate the effect of trap levels
+ * by software and save the necessary state on the kernel stack.
+ *
+ * Note that for traps with TL>1, more state needs to be saved. This function
+ * is therefore not going to work when TL>1.
+ *
+ * The caller is responsible for doing SAVE and allocating
+ * PREEMPTIBLE_HANDLER_STACK_FRAME_SIZE bytes on the stack.
  *
  * Input registers:
@@ -300,4 +311,9 @@
 	rdpr %pstate, %g4
 
+	/*
+	 * The following memory accesses will not fault
+	 * because special provisions are made to have
+	 * the kernel stack of THREAD locked in DTLB.
+	 */
 	stx %g1, [%fp + STACK_BIAS + SAVED_TSTATE]
 	stx %g2, [%fp + STACK_BIAS + SAVED_TPC]
@@ -314,5 +330,5 @@
 	 * - switch to normal globals.
 	 */
-	and %g4, ~1, %g4			! mask alternate globals
+	and %g4, ~(PSTATE_AG_BIT|PSTATE_IG_BIT|PSTATE_MG_BIT), %g4
 	wrpr %g4, 0, %pstate
 
@@ -325,9 +341,10 @@
 	 * Call the higher-level handler.
 	 */
+	mov %fp, %o1				! calculate istate address
 	call %l0
-	nop
-
-	/*
-	 * Restore the normal global register set.
+	add %o1, STACK_BIAS + SAVED_PSTATE, %o1	! calculate istate address
+
+	/*
+	 * Restore the normal global register set.
 	 */
 	RESTORE_GLOBALS
@@ -335,5 +352,5 @@
 	/*
 	 * Restore PSTATE from saved copy.
-	 * Alternate globals become active.
+	 * Alternate/Interrupt/MM globals become active.
 	 */
 	ldx [%fp + STACK_BIAS + SAVED_PSTATE], %l4
@@ -358,8 +375,8 @@
 
 	/*
-	 * On execution of retry instruction, CWP will be restored from TSTATE register.
-	 * However, because of scheduling, it is possible that CWP in saved TSTATE
-	 * is different from current CWP. The following chunk of code fixes CWP
-	 * in the saved copy of TSTATE.
+	 * On execution of the RETRY instruction, CWP will be restored from the TSTATE
+	 * register. However, because of scheduling, it is possible that CWP in the saved
+	 * TSTATE is different from the current CWP. The following chunk of code fixes
+	 * CWP in the saved copy of TSTATE.
 	 */
 	rdpr %cwp, %g4				! read current CWP
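Two details of this change are worth spelling out.

First, the old handler cleared only bit 0 of PSTATE (the AG bit), while the new code masks all three global-register selection bits at once, which is why <arch/regdef.h> is now included. Below is a minimal sketch of the masks such a header would provide, assuming the usual UltraSPARC PSTATE layout (AG in bit 0, MG in bit 10, IG in bit 11); the exact definitions in the real header may differ:

    /* Hypothetical sketch; not copied from arch/regdef.h. */
    #define PSTATE_AG_BIT  (1 << 0)    /* select alternate globals */
    #define PSTATE_MG_BIT  (1 << 10)   /* select MMU globals */
    #define PSTATE_IG_BIT  (1 << 11)   /* select interrupt globals */

Second, the mov/add pair computes the address of the saved interrupted state (istate) on the kernel stack and hands it to the higher-level handler. The add sits in the delay slot of the call, so it executes before control reaches the callee, and under the SPARC V9 calling convention %o1 carries the second outgoing argument. The following is a hedged C sketch of a handler consuming that pointer; the handler name, the member names, and their order are illustrative assumptions, not the actual HelenOS declarations:

    #include <stdint.h>

    /*
     * Illustrative istate layout; the real one is defined by the sparc64
     * kernel headers, and the member names and order here are assumptions.
     */
    typedef struct {
    	uint64_t pstate;   /* saved PSTATE */
    	uint64_t tnpc;     /* saved TNPC */
    	uint64_t tpc;      /* saved TPC */
    	uint64_t tstate;   /* saved TSTATE */
    } istate_t;

    /* %o0 carries the first argument, %o1 the istate pointer. */
    void sample_trap_handler(unsigned int n, istate_t *istate)
    {
    	(void) n;
    	/* A fault handler can now report, e.g., the trapping PC. */
    	(void) istate->tpc;
    }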