/* SPDX-License-Identifier: GPL-2.0 */
/*
 * etrap.S: Sparc trap window preparation for entry into the
 *          Linux kernel.
 *
 * Copyright (C) 1995 David S. Miller (davem@caip.rutgers.edu)
 */

#include <asm/head.h>
#include <asm/asi.h>
#include <asm/contregs.h>
#include <asm/page.h>
#include <asm/psr.h>
#include <asm/ptrace.h>
#include <asm/winmacro.h>
#include <asm/asmmacro.h>
#include <asm/thread_info.h>

	/* Registers to not touch at all. */
#define t_psr      l0 /* Set by caller */
#define t_pc       l1 /* Set by caller */
#define t_npc      l2 /* Set by caller */
#define t_wim      l3 /* Set by caller */
#define t_twinmask l4 /* Set at beginning of this entry routine. */
#define t_kstack   l5 /* Set right before pt_regs frame is built */
#define t_retpc    l6 /* If you change this, change winmacro.h header file */
#define t_systable l7 /* Never touch this, could be the syscall table ptr. */
#define curptr     g6 /* Set after pt_regs frame is built */

	.text
	.align 4

	/* SEVEN WINDOW PATCH INSTRUCTIONS */
	.globl	tsetup_7win_patch1, tsetup_7win_patch2
	.globl	tsetup_7win_patch3, tsetup_7win_patch4
	.globl	tsetup_7win_patch5, tsetup_7win_patch6
tsetup_7win_patch1:	sll	%t_wim, 0x6, %t_wim
tsetup_7win_patch2:	and	%g2, 0x7f, %g2
tsetup_7win_patch3:	and	%g2, 0x7f, %g2
tsetup_7win_patch4:	and	%g1, 0x7f, %g1
tsetup_7win_patch5:	sll	%t_wim, 0x6, %t_wim
tsetup_7win_patch6:	and	%g2, 0x7f, %g2
	/* END OF PATCH INSTRUCTIONS */

	/* At trap time, interrupts and all generic traps do the
	 * following:
	 *
	 *	rd	%psr, %l0
	 *	b	some_handler
	 *	rd	%wim, %l3
	 *	nop
	 *
	 * Then 'some_handler' if it needs a trap frame (ie. it has
	 * to call c-code and the trap cannot be handled in-window)
	 * then it does the SAVE_ALL macro in entry.S which does
	 *
	 *	sethi	%hi(trap_setup), %l4
	 *	jmpl	%l4 + %lo(trap_setup), %l6
	 *	 nop
	 */

	/* 2 3 4  window number
	 * -----
	 * O T S  mnemonic
	 *
	 * O == Current window before trap
	 * T == Window entered when trap occurred
	 * S == Window we will need to save if (1<<T) == %wim
	 *
	 * Before execution gets here, it must be guaranteed that
	 * %l0 contains trap time %psr, %l1 and %l2 contain the
	 * trap pc and npc, and %l3 contains the trap time %wim.
	 */

	.globl	trap_setup, tsetup_patch1, tsetup_patch2
	.globl	tsetup_patch3, tsetup_patch4
	.globl	tsetup_patch5, tsetup_patch6
trap_setup:
	/* Calculate mask of trap window.  See if from user
	 * or kernel and branch conditionally.
	 */
	mov	1, %t_twinmask
	andcc	%t_psr, PSR_PS, %g0		! fromsupv_p = (psr & PSR_PS)
	be	trap_setup_from_user		! nope, from user mode
	 sll	%t_twinmask, %t_psr, %t_twinmask ! t_twinmask = (1 << psr)
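
	/* The sll above sits in the be's delay slot and is not annulled,
	 * so it executes on both paths.  SPARC shift instructions use
	 * only the low five bits of the count register, and the low five
	 * bits of %psr are the CWP field, so this effectively computes
	 * t_twinmask = (1 << CWP), a one-hot mask of the trap window.
	 */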

	/* From kernel, allocate more kernel stack and
	 * build a pt_regs trap frame.
	 */
	sub	%fp, (STACKFRAME_SZ + TRACEREG_SZ), %t_kstack
	STORE_PT_ALL(t_kstack, t_psr, t_pc, t_npc, g2)

	/* See if we are in the trap window. */
	andcc	%t_twinmask, %t_wim, %g0
	bne	trap_setup_kernel_spill		! in trap window, clean up
	 nop

	/* Trap from kernel with a window available.
	 * Just do it...
	 */
	jmpl	%t_retpc + 0x8, %g0		! return to caller
	 mov	%t_kstack, %sp			! jump onto new stack

trap_setup_kernel_spill:
	ld	[%curptr + TI_UWINMASK], %g1
	orcc	%g0, %g1, %g0
	bne	trap_setup_user_spill		! there are some user windows, yuck
	/* Spill from kernel, but only kernel windows, adjust
	 * %wim and go.
	 */
	 srl	%t_wim, 0x1, %g2		! begin computation of new %wim
tsetup_patch1:
	sll	%t_wim, 0x7, %t_wim		! patched on 7 window Sparcs
	or	%t_wim, %g2, %g2
tsetup_patch2:
	and	%g2, 0xff, %g2			! patched on 7 window Sparcs

	save	%g0, %g0, %g0

	/* Set new %wim value */
	wr	%g2, 0x0, %wim

	/* Save the kernel window onto the corresponding stack. */
	STORE_WINDOW(sp)

	restore	%g0, %g0, %g0

	jmpl	%t_retpc + 0x8, %g0		! return to caller
	 mov	%t_kstack, %sp			! and onto new kernel stack

#define STACK_OFFSET	(THREAD_SIZE - TRACEREG_SZ - STACKFRAME_SZ)

trap_setup_from_user:
	/* We can't use %curptr yet. */
	LOAD_CURRENT(t_kstack, t_twinmask)

	sethi	%hi(STACK_OFFSET), %t_twinmask
	or	%t_twinmask, %lo(STACK_OFFSET), %t_twinmask
	add	%t_kstack, %t_twinmask, %t_kstack

	mov	1, %t_twinmask
	sll	%t_twinmask, %t_psr, %t_twinmask ! t_twinmask = (1 << psr)

	/* Build pt_regs frame. */
	STORE_PT_ALL(t_kstack, t_psr, t_pc, t_npc, g2)

#if 0
	/* If we're sure every task_struct is THREAD_SIZE
	 * aligned, we can speed this up.
	 */
	sethi	%hi(STACK_OFFSET), %curptr
	or	%curptr, %lo(STACK_OFFSET), %curptr
	sub	%t_kstack, %curptr, %curptr
#else
	sethi	%hi(~(THREAD_SIZE - 1)), %curptr
	and	%t_kstack, %curptr, %curptr
#endif

	/* Clear current_thread_info->w_saved */
	st	%g0, [%curptr + TI_W_SAVED]

	/* See if we are in the trap window. */
	andcc	%t_twinmask, %t_wim, %g0
	bne	trap_setup_user_spill		! yep we are
	 orn	%g0, %t_twinmask, %g1		! negate trap win mask into %g1

	/* Trap from user, but not into the invalid window.
	 * Calculate new umask.  The way this works is,
	 * any window from the %wim at trap time until
	 * the window right before the one we are in now,
	 * is a user window.  A diagram:
	 *
	 *      7 6 5 4 3 2 1 0    window number
	 *      ---------------
	 *        I     L T        mnemonic
	 *
	 * Window 'I' is the invalid window in our example,
	 * window 'L' is the window the user was in when
	 * the trap occurred, window T is the trap window
	 * we are in now.  So therefore, windows 5, 4 and
	 * 3 are user windows.  The following sequence
	 * computes the user winmask to represent this.
	 */
	subcc	%t_wim, %t_twinmask, %g2
	bneg,a	1f
	 sub	%g2, 0x1, %g2
1:
	andn	%g2, %t_twinmask, %g2
tsetup_patch3:
	and	%g2, 0xff, %g2			! patched on 7win Sparcs
	st	%g2, [%curptr + TI_UWINMASK]	! store new umask

	jmpl	%t_retpc + 0x8, %g0		! return to caller
	 mov	%t_kstack, %sp			! and onto kernel stack

trap_setup_user_spill:
	/* A spill occurred from either kernel or user mode
	 * and there exist some user windows to deal with.
	 * A mask of the currently valid user windows
	 * is in %g1 upon entry to here.
	 */
tsetup_patch4:
	and	%g1, 0xff, %g1			! patched on 7win Sparcs, mask
	srl	%t_wim, 0x1, %g2		! compute new %wim
tsetup_patch5:
	sll	%t_wim, 0x7, %t_wim		! patched on 7win Sparcs
	or	%t_wim, %g2, %g2		! %g2 is new %wim
tsetup_patch6:
	and	%g2, 0xff, %g2			! patched on 7win Sparcs
	andn	%g1, %g2, %g1			! clear this bit in %g1
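
	/* Worked example, assuming an 8-window chip: if the old %wim was
	 * 0x10, the srl/sll/or/and above yields 0x08, i.e. the invalid
	 * window bit rotated right by one (the shift count and mask are
	 * patched to 0x6 and 0x7f on 7-window Sparcs).  The andn then
	 * drops that window, which gets spilled to the user stack below,
	 * from the user window mask.
	 */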
	st	%g1, [%curptr + TI_UWINMASK]

	save	%g0, %g0, %g0

	wr	%g2, 0x0, %wim

	/* Call MMU-architecture dependent stack checking
	 * routine.
	 */
	b	tsetup_srmmu_stackchk
	 andcc	%sp, 0x7, %g0

	/* Architecture specific stack checking routines.  When either
	 * of these routines are called, the globals are free to use
	 * as they have been safely stashed on the new kernel stack
	 * pointer.  Thus the definition below for simplicity.
	 */
#define glob_tmp	g1

	.globl	tsetup_srmmu_stackchk
tsetup_srmmu_stackchk:
	/* Check results of callers andcc %sp, 0x7, %g0 */
	bne	trap_setup_user_stack_is_bolixed
	 sethi	%hi(PAGE_OFFSET), %glob_tmp

	cmp	%glob_tmp, %sp
	bleu,a	1f
LEON_PI( lda	[%g0] ASI_LEON_MMUREGS, %glob_tmp)	! read MMU control
SUN_PI_( lda	[%g0] ASI_M_MMUREGS, %glob_tmp)		! read MMU control

trap_setup_user_stack_is_bolixed:
	/* From user/kernel into invalid window w/bad user
	 * stack. Save bad user stack, and return to caller.
	 */
	SAVE_BOLIXED_USER_STACK(curptr, g3)
	restore	%g0, %g0, %g0

	jmpl	%t_retpc + 0x8, %g0
	 mov	%t_kstack, %sp

1:
	/* Clear the fault status and turn on the no_fault bit. */
	or	%glob_tmp, 0x2, %glob_tmp		! or in no_fault bit
LEON_PI(sta	%glob_tmp, [%g0] ASI_LEON_MMUREGS)	! set it
SUN_PI_(sta	%glob_tmp, [%g0] ASI_M_MMUREGS)		! set it

	/* Dump the registers and cross fingers. */
	STORE_WINDOW(sp)

	/* Clear the no_fault bit and check the status. */
	andn	%glob_tmp, 0x2, %glob_tmp
LEON_PI(sta	%glob_tmp, [%g0] ASI_LEON_MMUREGS)
SUN_PI_(sta	%glob_tmp, [%g0] ASI_M_MMUREGS)

	mov	AC_M_SFAR, %glob_tmp
LEON_PI(lda	[%glob_tmp] ASI_LEON_MMUREGS, %g0)
SUN_PI_(lda	[%glob_tmp] ASI_M_MMUREGS, %g0)

	mov	AC_M_SFSR, %glob_tmp
LEON_PI(lda	[%glob_tmp] ASI_LEON_MMUREGS, %glob_tmp) ! save away status of winstore
SUN_PI_(lda	[%glob_tmp] ASI_M_MMUREGS, %glob_tmp)	! save away status of winstore

	andcc	%glob_tmp, 0x2, %g0			! did we fault?
	bne	trap_setup_user_stack_is_bolixed	! failure
	 nop

	restore	%g0, %g0, %g0

	jmpl	%t_retpc + 0x8, %g0
	 mov	%t_kstack, %sp
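
	/* Note on the spill above: setting bit 0x2 (the no-fault bit) in
	 * the SRMMU control register lets STORE_WINDOW write to a possibly
	 * unmapped user stack without trapping; a fault is instead latched
	 * in the fault status register.  Reading AC_M_SFAR then AC_M_SFSR
	 * clears that latched state, and SFSR bit 0x2 (fault address valid)
	 * tells us whether the store faulted, in which case we bail out
	 * through trap_setup_user_stack_is_bolixed.
	 */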