/*
 * TLB Management (flush/create/diagnostics) for ARC700
 *
 * Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * vineetg: Aug 2011
 *  -Reintroduce duplicate PD fixup - some customer chips still have the issue
 *
 * vineetg: May 2011
 *  -No need to flush_cache_page( ) for each call to update_mmu_cache()
 *   some of the LMBench tests improved amazingly
 *      = page-fault thrice as fast (75 usec to 28 usec)
 *      = mmap twice as fast (9.6 msec to 4.6 msec),
 *      = fork (5.3 msec to 3.7 msec)
 *
 * vineetg: April 2011 :
 *  -MMU v3: PD{0,1} bits layout changed: They don't overlap anymore,
 *      helps avoid a shift when preparing PD0 from PTE
 *
 * vineetg: April 2011 : Preparing for MMU V3
 *  -MMU v2/v3 BCRs decoded differently
 *  -Remove TLB_SIZE hardcoding as it's variable now: 256 or 512
 *  -tlb_entry_erase( ) can be void
 *  -local_flush_tlb_range( ):
 *      = need not "ceil" @end
 *      = walks MMU only if range spans < 32 entries, as opposed to 256
 *
 * Vineetg: Sept 10th 2008
 *  -Changes related to MMU v2 (Rel 4.8)
 *
 * Vineetg: Aug 29th 2008
 *  -In TLB Flush operations (Metal Fix MMU) there is an explicit command to
 *   flush Micro-TLBs. If the TLB Index Reg is invalid prior to the TLBIVUTLB
 *   cmd, it fails. Thus it needs to be loaded with ANY valid value before
 *   invoking the TLBIVUTLB cmd.
 *
 * Vineetg: Aug 21st 2008:
 *  -Reduced the duration of IRQ lockouts in TLB Flush routines
 *  -Multiple copies of TLB erase code separated into a "single" function
 *  -In TLB Flush routines, interrupt disabling moved UP to retrieve ASID
 *   in an interrupt-safe region.
 *
 * Vineetg: April 23rd Bug #93131
 *    Problem: tlb_flush_kernel_range() doesn't do anything if the range to
 *             flush is more than the size of the TLB itself.
 *
 * Rahul Trivedi : Codito Technologies 2004
 */

#include <linux/module.h>
#include <linux/bug.h>
#include <asm/arcregs.h>
#include <asm/setup.h>
#include <asm/mmu_context.h>
#include <asm/mmu.h>

/* Need for ARC MMU v2
 *
 * ARC700 MMU-v1 had a Joint-TLB for Code and Data and is 2 way set-assoc.
 * For a memcpy operation with 3 players (src/dst/code) such that all 3 pages
 * map into the same set, there would be contention for the 2 ways, causing
 * severe Thrashing.
 *
 * Although the J-TLB is 2 way set assoc, ARC700 caches J-TLB entries into
 * uTLBs which have much higher associativity: the u-D-TLB is 8 ways and the
 * u-I-TLB is 4 ways.
 * Given this, the thrashing problem should never happen because once the 3
 * J-TLB entries are created (even though the 3rd will knock out one of the
 * prev two), the u-D-TLB and u-I-TLB will have what is required to accomplish
 * the memcpy.
 *
 * Yet we still see the Thrashing because a J-TLB Write causes a flush of the
 * u-TLBs. This is a simple design for keeping them in sync. So what do we do?
 * The solution which James came up with was pretty neat. It utilised the
 * associativity of the uTLBs by not invalidating them always, but only when
 * absolutely necessary.
 *
 * - Existing TLB commands work as before
 * - New command (TLBWriteNI) for TLB write without clearing uTLBs
 * - New command (TLBIVUTLB) to invalidate uTLBs.
 *
 * The uTLBs need only be invalidated when pages are being removed from the
 * OS page table. If a 'victim' TLB entry is being overwritten in the main TLB
 * as a result of a miss, the removed entry is still allowed to exist in the
 * uTLBs as it is still valid and present in the OS page table. This allows the
 * full associativity of the uTLBs to hide the limited associativity of the
 * main TLB.
 *
 * During a miss handler, the new "TLBWriteNI" command is used to load
 * entries without clearing the uTLBs.
 *
 * When the OS page table is updated, TLB entries that may be associated with a
 * removed page are removed (flushed) from the TLB using TLBWrite. In this
 * circumstance, the uTLBs must also be cleared. This is done by using the
 * existing TLBWrite command. An explicit IVUTLB is also required for those
 * corner cases when TLBWrite was not executed at all because the corresponding
 * J-TLB entry got evicted/replaced.
 */
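
/*
 * Mapping of the above protocol onto this file (summary only, no new
 * behaviour):
 *  - utlb_invalidate() below issues the explicit TLBIVUTLB command.
 *  - tlb_entry_insert() commits entries with plain TLBWrite rather than
 *    TLBWriteNI, deliberately erring on the safe side (see the comment there).
 *  - The fast-path TLB refill handler, which is where TLBWriteNI applies per
 *    the note above, is not in this file.
 */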

/* A copy of the ASID from the PID reg is kept in asid_cache */
int asid_cache = FIRST_ASID;

/* ASID to mm struct mapping. We have one extra entry corresponding to
 * NO_ASID to save us a compare when clearing the mm entry for old asid
 * see get_new_mmu_context (asm-arc/mmu_context.h)
 */
struct mm_struct *asid_mm_map[NUM_ASID + 1];

/*
 * Utility Routine to erase a J-TLB entry
 * Caller needs to setup Index Reg (manually or via getIndex)
 */
static inline void __tlb_entry_erase(void)
{
        write_aux_reg(ARC_REG_TLBPD1, 0);
        write_aux_reg(ARC_REG_TLBPD0, 0);
        write_aux_reg(ARC_REG_TLBCOMMAND, TLBWrite);
}

static inline unsigned int tlb_entry_lkup(unsigned long vaddr_n_asid)
{
        unsigned int idx;

        write_aux_reg(ARC_REG_TLBPD0, vaddr_n_asid);

        write_aux_reg(ARC_REG_TLBCOMMAND, TLBProbe);
        idx = read_aux_reg(ARC_REG_TLBINDEX);

        return idx;
}

static void tlb_entry_erase(unsigned int vaddr_n_asid)
{
        unsigned int idx;

        /* Locate the TLB entry for this vaddr + ASID */
        idx = tlb_entry_lkup(vaddr_n_asid);

        /* No error means entry found, zero it out */
        if (likely(!(idx & TLB_LKUP_ERR))) {
                __tlb_entry_erase();
        } else {
                /* Duplicate entry error */
                WARN(idx == TLB_DUP_ERR, "Probe returned Dup PD for %x\n",
                     vaddr_n_asid);
        }
}

/****************************************************************************
 * ARC700 MMU caches recently used J-TLB entries (RAM) as uTLBs (FLOPs)
 *
 * New IVUTLB cmd in MMU v2 explicitly invalidates the uTLB
 *
 * utlb_invalidate ( )
 *  -For v2 MMU calls Flush uTLB Cmd
 *  -For v1 MMU does nothing (except for Metal Fix v1 MMU)
 *      This is because in v1 TLBWrite itself invalidates the uTLBs
 ***************************************************************************/

static void utlb_invalidate(void)
{
#if (CONFIG_ARC_MMU_VER >= 2)

#if (CONFIG_ARC_MMU_VER == 2)
        /* MMU v2 introduced the uTLB Flush command.
         * There was however an obscure hardware bug, where uTLB flush would
         * fail when a prior probe for J-TLB (both totally unrelated) would
         * return lkup err - because the entry didn't exist in MMU.
         * The workaround was to set the Index reg to some valid value, prior
         * to the flush. This was fixed in MMU v3, hence it is not needed any
         * more.
         */
        unsigned int idx;

        /* make sure INDEX Reg is valid */
        idx = read_aux_reg(ARC_REG_TLBINDEX);

        /* If not, write some dummy val */
        if (unlikely(idx & TLB_LKUP_ERR))
                write_aux_reg(ARC_REG_TLBINDEX, 0xa);
#endif

        write_aux_reg(ARC_REG_TLBCOMMAND, TLBIVUTLB);
#endif

}
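
/*
 * Purely illustrative sketch (nothing in this file calls it, and the name is
 * made up for this example): the shootdown-side sequence implied by the uTLB
 * protocol described at the top of this file, expressed with the helpers
 * above. The real flush paths (local_flush_tlb_*() below) do the same thing
 * with IRQs disabled and the vaddr|ASID value built per page.
 */
static inline void __example_tlb_shootdown_one(unsigned int vaddr_n_asid)
{
        tlb_entry_erase(vaddr_n_asid);  /* TLBProbe + TLBWrite of a null PD */
        utlb_invalidate();              /* explicit IVUTLB for the corner cases */
}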

static void tlb_entry_insert(unsigned int pd0, unsigned int pd1)
{
        unsigned int idx;

        /*
         * First verify if entry for this vaddr+ASID already exists
         * This also sets up PD0 (vaddr, ASID..) for final commit
         */
        idx = tlb_entry_lkup(pd0);

        /*
         * If Not already present get a free slot from MMU.
         * Otherwise, Probe would have located the entry and set INDEX Reg
         * with existing location. This will cause Write CMD to over-write
         * existing entry with new PD0 and PD1
         */
        if (likely(idx & TLB_LKUP_ERR))
                write_aux_reg(ARC_REG_TLBCOMMAND, TLBGetIndex);

        /* setup the other half of TLB entry (pfn, rwx..) */
        write_aux_reg(ARC_REG_TLBPD1, pd1);

        /*
         * Commit the Entry to MMU
         * It doesn't sound safe to use the TLBWriteNI cmd here
         * which doesn't flush uTLBs. I'd rather be safe than sorry.
         */
        write_aux_reg(ARC_REG_TLBCOMMAND, TLBWrite);
}

/*
 * Un-conditionally (without lookup) erase the entire MMU contents
 */

noinline void local_flush_tlb_all(void)
{
        unsigned long flags;
        unsigned int entry;
        struct cpuinfo_arc_mmu *mmu = &cpuinfo_arc700[smp_processor_id()].mmu;

        local_irq_save(flags);

        /* Load PD0 and PD1 with template for a Blank Entry */
        write_aux_reg(ARC_REG_TLBPD1, 0);
        write_aux_reg(ARC_REG_TLBPD0, 0);

        for (entry = 0; entry < mmu->num_tlb; entry++) {
                /* write this entry to the TLB */
                write_aux_reg(ARC_REG_TLBINDEX, entry);
                write_aux_reg(ARC_REG_TLBCOMMAND, TLBWrite);
        }

        utlb_invalidate();

        local_irq_restore(flags);
}

/*
 * Flush the entire MM for userland. The fastest way is to move to the Next ASID
 */
noinline void local_flush_tlb_mm(struct mm_struct *mm)
{
        /*
         * Small optimisation courtesy IA64
         * flush_mm called during fork,exit,munmap etc, multiple times as well.
         * Only for fork( ) do we need to move parent to a new MMU ctxt,
         * all other cases are NOPs, hence this check.
         */
        if (atomic_read(&mm->mm_users) == 0)
                return;

        /*
         * Workaround for Android weirdism:
         * A binder VMA could end up in a task such that vma->mm != tsk->mm
         * old code would cause h/w - s/w ASID to get out of sync
         */
        if (current->mm != mm)
                destroy_context(mm);
        else
                get_new_mmu_context(mm);
}

/*
 * Flush a Range of TLB entries for userland.
 * @start is inclusive, while @end is exclusive
 * Difference between this and Kernel Range Flush is
 *  -Here the fastest way (if range is too large) is to move to next ASID
 *   without doing any explicit Shootdown
 *  -In case of kernel Flush, entry has to be shot down explicitly
 */
void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
                           unsigned long end)
{
        unsigned long flags;
        unsigned int asid;

        /* If range @start to @end is more than 32 TLB entries deep,
         * it's better to move to a new ASID rather than searching for
         * individual entries and then shooting them down
         *
         * The calc above is rough, doesn't account for unaligned parts,
         * since this is heuristics based anyways
         */
        if (unlikely((end - start) >= PAGE_SIZE * 32)) {
                local_flush_tlb_mm(vma->vm_mm);
                return;
        }

        /*
         * @start moved to page start: this alone suffices for checking
         * loop end condition below, w/o need for aligning @end to end
         * e.g. 2000 to 4001 will anyhow loop twice
         */
        start &= PAGE_MASK;

        local_irq_save(flags);
        asid = vma->vm_mm->context.asid;

        if (asid != NO_ASID) {
                while (start < end) {
                        tlb_entry_erase(start | (asid & 0xff));
                        start += PAGE_SIZE;
                }
        }

        utlb_invalidate();

        local_irq_restore(flags);
}
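
/*
 * Worked example for the per-page loop in local_flush_tlb_range() above
 * (numbers are illustrative only, assuming the default 8 KB page size):
 * flushing 0x2000_2100..0x2000_5001 for an mm whose ASID is 0x42 masks
 * @start down to 0x2000_2000 and then erases the entries probed with
 * PD0 = 0x2000_2042 and 0x2000_4042, i.e. the page-aligned vaddr OR'ed with
 * the 8-bit ASID, which is exactly what tlb_entry_lkup() loads into PD0.
 */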

/* Flush the kernel TLB entries - vmalloc/modules (Global from MMU perspective)
 *  @start, @end interpreted as kvaddr
 * Interestingly, shared TLB entries can also be flushed using just
 * @start,@end alone (interpreted as user vaddr), although technically SASID
 * is also needed. However our smart TLBProbe lookup takes care of that.
 */
void local_flush_tlb_kernel_range(unsigned long start, unsigned long end)
{
        unsigned long flags;

        /* exactly same as above, except for TLB entry not taking ASID */

        if (unlikely((end - start) >= PAGE_SIZE * 32)) {
                local_flush_tlb_all();
                return;
        }

        start &= PAGE_MASK;

        local_irq_save(flags);
        while (start < end) {
                tlb_entry_erase(start);
                start += PAGE_SIZE;
        }

        utlb_invalidate();

        local_irq_restore(flags);
}

/*
 * Delete the TLB entry in MMU for a given page (??? address)
 * NOTE One TLB entry contains translation for a single PAGE
 */

void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
{
        unsigned long flags;

        /* Note that it is critical that interrupts are DISABLED between
         * checking the ASID and using it to flush the TLB entry
         */
        local_irq_save(flags);

        if (vma->vm_mm->context.asid != NO_ASID) {
                tlb_entry_erase((page & PAGE_MASK) |
                                (vma->vm_mm->context.asid & 0xff));
                utlb_invalidate();
        }

        local_irq_restore(flags);
}

/*
 * Routine to create a TLB entry
 */
void create_tlb(struct vm_area_struct *vma, unsigned long address, pte_t *ptep)
{
        unsigned long flags;
        unsigned int asid_or_sasid, rwx;
        unsigned long pd0, pd1;

        /*
         * create_tlb() assumes that current->mm == vma->vm_mm, since
         * -the ASID for the TLB entry is fetched from the MMU ASID reg
         *  (valid for curr)
         * -it completes the lazy write to the SASID reg (again valid for
         *  curr tsk)
         *
         * Removing the assumption involves
         * -Using vma->vm_mm->context{ASID,SASID}, as opposed to MMU reg.
         * -Fix the TLB paranoid debug code to not trigger false negatives.
         * -More importantly it makes this handler inconsistent with the
         *  fast-path TLB Refill handler which always deals with "current"
         *
         * Let's see the use cases when current->mm != vma->vm_mm and we land
         * here
         *  1. execve->copy_strings()->__get_user_pages->handle_mm_fault
         *     Here VM wants to pre-install a TLB entry for the user stack
         *     while current->mm still points to the pre-execve mm (hence the
         *     condition). However the stack vaddr is soon relocated
         *     (randomization) and move_page_tables() tries to undo that TLB
         *     entry. Thus not creating the TLB entry is no worse.
         *
         *  2. ptrace(POKETEXT) causes a CoW - debugger(current) inserting a
         *     breakpoint in the debugged task. Not creating a TLB now is not
         *     performance critical.
         *
         * Both the cases above are not good enough for code churn.
         */
        if (current->active_mm != vma->vm_mm)
                return;

        local_irq_save(flags);

        tlb_paranoid_check(vma->vm_mm->context.asid, address);

        address &= PAGE_MASK;

        /* update this PTE credentials */
        pte_val(*ptep) |= (_PAGE_PRESENT | _PAGE_ACCESSED);

        /* Create HW TLB(PD0,PD1) from PTE */

        /* ASID for this task */
        asid_or_sasid = read_aux_reg(ARC_REG_PID) & 0xff;

        pd0 = address | asid_or_sasid | (pte_val(*ptep) & PTE_BITS_IN_PD0);

        /*
         * ARC MMU provides fully orthogonal access bits for K/U mode,
         * however Linux only saves 1 set to save PTE real-estate
         * Here we convert 3 PTE bits into 6 MMU bits:
         * -Kernel only entries have Kr Kw Kx 0 0 0
         * -User entries have mirrored K and U bits
         */
        rwx = pte_val(*ptep) & PTE_BITS_RWX;

        if (pte_val(*ptep) & _PAGE_GLOBAL)
                rwx <<= 3;              /* r w x => Kr Kw Kx 0 0 0 */
        else
                rwx |= (rwx << 3);      /* r w x => Kr Kw Kx Ur Uw Ux */

        pd1 = rwx | (pte_val(*ptep) & PTE_BITS_NON_RWX_IN_PD1);

        tlb_entry_insert(pd0, pd1);

        local_irq_restore(flags);
}
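
/*
 * Worked example of what create_tlb() above builds (bit values shown as the
 * 3-bit r/w/x group only; actual positions come from PTE_BITS_RWX):
 *  - a private user page with read+write (r w x = 1 1 0) gets the kernel and
 *    user halves mirrored: Kr Kw Kx Ur Uw Ux = 1 1 0 1 1 0
 *  - a _PAGE_GLOBAL (kernel) page keeps only the kernel half:
 *    Kr Kw Kx Ur Uw Ux = 1 1 0 0 0 0
 * PD0 is then the page-aligned vaddr | ASID | the PD0-resident PTE flags,
 * and PD1 is the expanded rwx | the pfn/cache bits kept in the PTE.
 */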

/*
 * Called at the end of pagefault, for a userspace mapped page
 *  -pre-install the corresponding TLB entry into MMU
 *  -Finalize the delayed D-cache flush of kernel mapping of page due to
 *      flush_dcache_page(), copy_user_page()
 *
 * Note that flush (when done) involves both WBACK - so physical page is
 * in sync as well as INV - so any non-congruent aliases don't remain
 */
void update_mmu_cache(struct vm_area_struct *vma, unsigned long vaddr_unaligned,
                      pte_t *ptep)
{
        unsigned long vaddr = vaddr_unaligned & PAGE_MASK;
        unsigned long paddr = pte_val(*ptep) & PAGE_MASK;
        struct page *page = pfn_to_page(pte_pfn(*ptep));

        create_tlb(vma, vaddr, ptep);

        if (page == ZERO_PAGE(0))
                return;

        /*
         * Exec page : Independent of aliasing/page-color considerations,
         *             since icache doesn't snoop dcache on ARC, any dirty
         *             K-mapping of a code page needs to be wback+inv so that
         *             icache fetch by userspace sees code correctly.
         * !EXEC page: If K-mapping is NOT congruent to U-mapping, flush it
         *             so userspace sees the right data.
         *  (Avoids the flush for Non-exec + congruent mapping case)
         */
        if ((vma->vm_flags & VM_EXEC) ||
            addr_not_cache_congruent(paddr, vaddr)) {

                int dirty = !test_and_set_bit(PG_dc_clean, &page->flags);

                if (dirty) {
                        /* wback + inv dcache lines */
                        __flush_dcache_page(paddr, paddr);

                        /* invalidate any existing icache lines */
                        if (vma->vm_flags & VM_EXEC)
                                __inv_icache_page(paddr, vaddr);
                }
        }
}
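
/*
 * Worked example for read_decode_mmu_bcr() below (illustrative field values,
 * not a statement about any particular silicon): an MMU v3 BCR whose pg_sz
 * field reads 4, sets field reads 7 and ways field reads 2 decodes to
 * 512 << 4 = 8 KB pages and a (1 << 7) x (1 << 2) = 128 x 4 = 512 entry J-TLB.
 */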

/*
 * Read the Cache Build Configuration Registers, Decode them and save into
 * the cpuinfo structure for later use.
 * No Validation is done here, simply read/convert the BCRs
 */
void read_decode_mmu_bcr(void)
{
        struct cpuinfo_arc_mmu *mmu = &cpuinfo_arc700[smp_processor_id()].mmu;
        unsigned int tmp;
        struct bcr_mmu_1_2 {
#ifdef CONFIG_CPU_BIG_ENDIAN
                unsigned int ver:8, ways:4, sets:4, u_itlb:8, u_dtlb:8;
#else
                unsigned int u_dtlb:8, u_itlb:8, sets:4, ways:4, ver:8;
#endif
        } *mmu2;

        struct bcr_mmu_3 {
#ifdef CONFIG_CPU_BIG_ENDIAN
                unsigned int ver:8, ways:4, sets:4, osm:1, reserv:3, pg_sz:4,
                             u_itlb:4, u_dtlb:4;
#else
                unsigned int u_dtlb:4, u_itlb:4, pg_sz:4, reserv:3, osm:1,
                             sets:4, ways:4, ver:8;
#endif
        } *mmu3;

        tmp = read_aux_reg(ARC_REG_MMU_BCR);
        mmu->ver = (tmp >> 24);

        if (mmu->ver <= 2) {
                mmu2 = (struct bcr_mmu_1_2 *)&tmp;
                mmu->pg_sz = PAGE_SIZE;
                mmu->sets = 1 << mmu2->sets;
                mmu->ways = 1 << mmu2->ways;
                mmu->u_dtlb = mmu2->u_dtlb;
                mmu->u_itlb = mmu2->u_itlb;
        } else {
                mmu3 = (struct bcr_mmu_3 *)&tmp;
                mmu->pg_sz = 512 << mmu3->pg_sz;
                mmu->sets = 1 << mmu3->sets;
                mmu->ways = 1 << mmu3->ways;
                mmu->u_dtlb = mmu3->u_dtlb;
                mmu->u_itlb = mmu3->u_itlb;
        }

        mmu->num_tlb = mmu->sets * mmu->ways;
}

char *arc_mmu_mumbojumbo(int cpu_id, char *buf, int len)
{
        int n = 0;
        struct cpuinfo_arc_mmu *p_mmu = &cpuinfo_arc700[cpu_id].mmu;

        n += scnprintf(buf + n, len - n, "ARC700 MMU [v%x]\t: %dk PAGE, ",
                       p_mmu->ver, TO_KB(p_mmu->pg_sz));

        n += scnprintf(buf + n, len - n,
                       "J-TLB %d (%dx%d), uDTLB %d, uITLB %d, %s\n",
                       p_mmu->num_tlb, p_mmu->sets, p_mmu->ways,
                       p_mmu->u_dtlb, p_mmu->u_itlb,
                       IS_ENABLED(CONFIG_ARC_MMU_SASID) ? "SASID" : "");

        return buf;
}

void arc_mmu_init(void)
{
        char str[256];
        struct cpuinfo_arc_mmu *mmu = &cpuinfo_arc700[smp_processor_id()].mmu;

        printk(arc_mmu_mumbojumbo(0, str, sizeof(str)));

        /*
         * For efficiency's sake, the kernel is compile time built for a MMU ver
         * This must match the hardware it is running on.
         * Linux built for MMU V2, if run on MMU V1 will break down because V1
         *  hardware doesn't understand cmds such as WriteNI, or IVUTLB
         * On the other hand, Linux built for V1 if run on MMU V2 will do
         *  un-needed workarounds to prevent memcpy thrashing.
         * Similarly MMU V3 has new features which won't work on older MMUs
         */
        if (mmu->ver != CONFIG_ARC_MMU_VER) {
                panic("MMU ver %d doesn't match kernel built for %d...\n",
                      mmu->ver, CONFIG_ARC_MMU_VER);
        }

        if (mmu->pg_sz != PAGE_SIZE)
                panic("MMU pg size != PAGE_SIZE (%luk)\n", TO_KB(PAGE_SIZE));

        /* Enable the MMU */
        write_aux_reg(ARC_REG_PID, MMU_ENABLE);

        /* In smp we use this reg for interrupt 1 scratch */
#ifndef CONFIG_SMP
        /* swapper_pg_dir is the pgd for the kernel, used by vmalloc */
        write_aux_reg(ARC_REG_SCRATCH_DATA0, swapper_pg_dir);
#endif
}

/*
 * TLB Programmer's Model uses Linear Indexes: 0 to {255, 511} for 128 x {2,4}
 * The mapping is Column-first.
 *              ---------------------   -----------
 *              |way0|way1|way2|way3|   |way0|way1|
 *              ---------------------   -----------
 * [set0]       |  0 |  1 |  2 |  3 |   |  0 |  1 |
 * [set1]       |  4 |  5 |  6 |  7 |   |  2 |  3 |
 *              ~                   ~   ~         ~
 * [set127]     | 508| 509| 510| 511|   | 254| 255|
 *              ---------------------   -----------
 * For normal operations we don't (must not) care how the above works since
 * the MMU cmd getIndex(vaddr) abstracts that out.
 * However for walking the WAYS of a SET, we need to know this
 */
#define SET_WAY_TO_IDX(mmu, set, way)  ((set) * mmu->ways + (way))

/*
 * Handling of Duplicate PD (TLB entry) in MMU.
 * -Could be due to buggy customer tapeouts or obscure kernel bugs
 * -MMU complains not at the time of duplicate PD installation, but at the
 *  time of lookup matching multiple ways.
 * -Ideally these should never happen - but if they do - workaround by deleting
 *  the duplicate one.
 * -Knob to be verbose about it (TODO: hook it up to debugfs)
 */
volatile int dup_pd_verbose = 1;        /* Be silent about it or complain (default) */

void do_tlb_overlap_fault(unsigned long cause, unsigned long address,
                          struct pt_regs *regs)
{
        int set, way, n;
        unsigned int pd0[4], pd1[4];    /* assume max 4 ways */
        unsigned long flags, is_valid;
        struct cpuinfo_arc_mmu *mmu = &cpuinfo_arc700[smp_processor_id()].mmu;

        local_irq_save(flags);

        /* re-enable the MMU */
        write_aux_reg(ARC_REG_PID, MMU_ENABLE | read_aux_reg(ARC_REG_PID));

        /* loop thru all sets of TLB */
        for (set = 0; set < mmu->sets; set++) {

                /* read out all the ways of current set */
                for (way = 0, is_valid = 0; way < mmu->ways; way++) {
                        write_aux_reg(ARC_REG_TLBINDEX,
                                      SET_WAY_TO_IDX(mmu, set, way));
                        write_aux_reg(ARC_REG_TLBCOMMAND, TLBRead);
                        pd0[way] = read_aux_reg(ARC_REG_TLBPD0);
                        pd1[way] = read_aux_reg(ARC_REG_TLBPD1);
                        is_valid |= pd0[way] & _PAGE_PRESENT;
                }

                /* If all the WAYS in SET are empty, skip to next SET */
                if (!is_valid)
                        continue;

                /* Scan the set for duplicate ways: needs a nested loop */
                for (way = 0; way < mmu->ways; way++) {
                        if (!pd0[way])
                                continue;

                        for (n = way + 1; n < mmu->ways; n++) {
                                if ((pd0[way] & PAGE_MASK) ==
                                    (pd0[n] & PAGE_MASK)) {

                                        if (dup_pd_verbose) {
                                                pr_info("Duplicate PD's @[%d:%d]/[%d:%d]\n",
                                                        set, way, set, n);
                                                pr_info("TLBPD0[%u]: %08x\n",
                                                        way, pd0[way]);
                                        }

                                        /*
                                         * clear entry @way and not @n.
                                         * This is critical to our optimised
                                         * loop
                                         */
                                        pd0[way] = pd1[way] = 0;
                                        write_aux_reg(ARC_REG_TLBINDEX,
                                                SET_WAY_TO_IDX(mmu, set, way));
                                        __tlb_entry_erase();
                                }
                        }
                }
        }

        local_irq_restore(flags);
}

/***********************************************************************
 * Diagnostic Routines
 *  -Called from Low Level TLB Handlers if things don't look good
 **********************************************************************/

#ifdef CONFIG_ARC_DBG_TLB_PARANOIA

/*
 * Low Level ASM TLB handler calls this if it finds that HW and SW ASIDs
 * don't match
 */
void print_asid_mismatch(int mm_asid, int mmu_asid, int is_fast_path)
{
        pr_emerg("ASID Mismatch in %s Path Handler: sw-pid=0x%x hw-pid=0x%x\n",
                 is_fast_path ? "Fast" : "Slow", mm_asid, mmu_asid);

        __asm__ __volatile__("flag 1");
}

void tlb_paranoid_check(unsigned int mm_asid, unsigned long addr)
{
        unsigned int mmu_asid;

        mmu_asid = read_aux_reg(ARC_REG_PID) & 0xff;

        /*
         * At the time of a TLB miss/installation
         *   - HW version needs to match SW version
         *   - SW needs to have a valid ASID
         */
        if (addr < 0x70000000 &&
            ((mmu_asid != mm_asid) || (mm_asid == NO_ASID)))
                print_asid_mismatch(mm_asid, mmu_asid, 0);
}
#endif